# 🐍 Python Code-Only Qwen

A fine-tuned model that generates ONLY executable Python code, with no explanations or conversational filler.

## 🚀 Quick Inference Example

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load model (loading a LoRA adapter repo directly requires `peft` to be installed)
model = AutoModelForCausalLM.from_pretrained("Mercy-62/python-code-only-Qwen-lora")
tokenizer = AutoTokenizer.from_pretrained("Mercy-62/python-code-only-Qwen-lora")

# Use the exact same Alpaca prompt format that was used during training
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

# Enable faster inference (if using Unsloth)
# FastLanguageModel.for_inference(model)

# Test 1: Simple code generation
inputs = tokenizer(
    [
        alpaca_prompt.format(
            "Write a Python function to reverse a string",  # instruction
            "",  # input (leave empty if no context)
            "",  # output - leave blank for generation
        )
    ],
    return_tensors="pt"
).to("cuda")

outputs = model.generate(**inputs, max_new_tokens=128, use_cache=True)
print("Test 1 - Reverse string function:")
print(tokenizer.batch_decode(outputs)[0])
```
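The decoded output above contains the full prompt as well as the generated completion. A minimal helper (a sketch, not part of the model card) can strip the prompt and trailing special tokens, assuming the Alpaca `### Response:` marker shown above and an EOS token such as Qwen's `<|endoftext|>`:

```python
def extract_response(decoded: str, eos_token: str = "<|endoftext|>") -> str:
    """Keep only the generated code from a decoded Alpaca-style output."""
    # Everything after the Response marker is the model's completion
    code = decoded.split("### Response:", 1)[-1]
    # Drop the EOS token and surrounding whitespace
    return code.replace(eos_token, "").strip()

# Usage with the generation above:
# print(extract_response(tokenizer.batch_decode(outputs)[0]))
```

If your tokenizer uses a different EOS token, pass `tokenizer.eos_token` instead of the default.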