
    A Step-by-Step Coding Guide to Efficiently Fine-Tune Qwen3-14B Using Unsloth AI on Google Colab with Mixed Datasets and LoRA Optimization

    May 20, 2025

    Fine-tuning LLMs often requires extensive resources, time, and memory, challenges that can hinder rapid experimentation and deployment. Unsloth AI revolutionizes this process by enabling fast, efficient fine-tuning of state-of-the-art models like Qwen3-14B with minimal GPU memory, leveraging advanced techniques such as 4-bit quantization and LoRA (Low-Rank Adaptation). In this tutorial, we walk through a practical implementation on Google Colab to fine-tune Qwen3-14B using a combination of reasoning and instruction-following datasets. By combining Unsloth’s FastLanguageModel utilities with trl’s SFTTrainer, users can achieve powerful fine-tuning performance on consumer-grade hardware.
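    As a rough back-of-the-envelope illustration of why 4-bit loading matters (these figures are approximations that ignore activations, the KV cache, optimizer state, and framework overhead), the weight-only memory of a 14B-parameter model can be estimated as follows:

    # Approximate weight-only memory footprint of a 14B-parameter model.
    # Rough estimate only; real usage also includes activations, KV cache, and overhead.
    params = 14e9
    fp16_gb = params * 2 / 1e9    # 2 bytes per parameter in fp16  -> ~28 GB
    int4_gb = params * 0.5 / 1e9  # 0.5 bytes per parameter in 4-bit -> ~7 GB
    print(f"fp16 weights: ~{fp16_gb:.0f} GB | 4-bit weights: ~{int4_gb:.0f} GB")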

    %%capture
    import os
    if "COLAB_" not in "".join(os.environ.keys()):
        !pip install unsloth
    else:
        !pip install --no-deps bitsandbytes accelerate xformers==0.0.29.post3 peft trl==0.15.2 triton cut_cross_entropy unsloth_zoo
        !pip install sentencepiece protobuf "datasets>=3.4.1" huggingface_hub hf_transfer
        !pip install --no-deps unsloth

    We install all the essential libraries required for fine-tuning the Qwen3 model using Unsloth AI. The cell conditionally installs dependencies based on the environment, using a lightweight, pinned set of packages on Colab to ensure compatibility and reduce overhead. Key components like bitsandbytes, trl, xformers, and unsloth_zoo are included to enable 4-bit quantized training and LoRA-based optimization.
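    As an optional sanity check (not part of the original tutorial), we can confirm that a CUDA GPU is visible and report its memory before loading the model:

    # Optional sanity check: confirm a CUDA GPU is visible and report its VRAM.
    import torch
    print("CUDA available:", torch.cuda.is_available())
    if torch.cuda.is_available():
        props = torch.cuda.get_device_properties(0)
        print(f"GPU: {props.name}, {props.total_memory / 1e9:.1f} GB VRAM")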

    from unsloth import FastLanguageModel
    import torch
    
    
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name = "unsloth/Qwen3-14B",
        max_seq_length = 2048,
        load_in_4bit = True,
        load_in_8bit = False,
        full_finetuning = False,
    )
    

    We load the Qwen3-14B model using FastLanguageModel from the Unsloth library, which is optimized for efficient fine-tuning. It initializes the model with a context length of 2048 tokens and loads it in 4-bit precision, significantly reducing memory usage. Full fine-tuning is disabled, making it suitable for lightweight parameter-efficient techniques like LoRA.
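    As an optional check (not in the original notebook), we can confirm that the model’s linear layers were indeed loaded in 4-bit precision; bitsandbytes represents quantized layers as Linear4bit modules:

    # Optional: verify that the linear layers are 4-bit quantized (bitsandbytes Linear4bit).
    from bitsandbytes.nn import Linear4bit

    num_4bit = sum(1 for m in model.modules() if isinstance(m, Linear4bit))
    print(f"4-bit quantized linear layers: {num_4bit}")
    print(f"Tokenizer vocabulary size: {len(tokenizer)}")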

    model = FastLanguageModel.get_peft_model(
        model,
        r = 32,
        target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                          "gate_proj", "up_proj", "down_proj"],
        lora_alpha = 32,
        lora_dropout = 0,
        bias = "none",
        use_gradient_checkpointing = "unsloth",
        random_state = 3407,
        use_rslora = False,
        loftq_config = None,
    )
    

    We apply LoRA (Low-Rank Adaptation) to the Qwen3 model using FastLanguageModel.get_peft_model. It injects trainable adapters into specific transformer layers (like q_proj, v_proj, etc.) with a rank of 32, enabling efficient fine-tuning while keeping most model weights frozen. Using “unsloth” gradient checkpointing further optimizes memory usage, making it suitable for training large models on limited hardware.
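    To see how lightweight this setup is, an optional check (not part of the original notebook) can count the parameters that actually receive gradients; with rank-32 adapters on the listed projection modules, this is only a small fraction of the 14B base weights:

    # Optional: count the trainable LoRA parameters (everything else stays frozen).
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    print(f"Trainable LoRA parameters: {trainable / 1e6:.1f}M")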

    from datasets import load_dataset
    
    
    reasoning_dataset = load_dataset("unsloth/OpenMathReasoning-mini", split="cot")
    non_reasoning_dataset = load_dataset("mlabonne/FineTome-100k", split="train")

    We load two pre-curated datasets from the Hugging Face Hub using the datasets library. The reasoning_dataset contains chain-of-thought (CoT) problems from Unsloth’s OpenMathReasoning-mini, designed to enhance logical reasoning in the model. The non_reasoning_dataset pulls general instruction-following data from mlabonne’s FineTome-100k, which helps the model learn broader conversational and task-oriented skills. Together, these datasets support a well-rounded fine-tuning objective.
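    Before reformatting, it can help to peek at the raw structure of each dataset (the column names here are the ones assumed by the generate_conversation helper defined next):

    # Optional: inspect the raw datasets before converting them to chat format.
    print(reasoning_dataset)        # expected to expose "problem" and "generated_solution" columns
    print(non_reasoning_dataset)    # FineTome-100k ships ShareGPT-style "conversations"
    print(reasoning_dataset[0]["problem"][:200])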

    def generate_conversation(examples):
        problems  = examples["problem"]
        solutions = examples["generated_solution"]
        conversations = []
        for problem, solution in zip(problems, solutions):
            conversations.append([
                {"role": "user", "content": problem},
                {"role": "assistant", "content": solution},
            ])
        return {"conversations": conversations}

    This function, generate_conversation, transforms raw question–answer pairs from the reasoning dataset into a chat-style format suitable for fine-tuning. For each problem and its corresponding generated solution, a conversation is constructed in which the user asks a question and the assistant provides the answer. The output is a list of dictionaries following the structure expected by chat-based language models, preparing the data for tokenization with a chat template.
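    As a quick illustration (using a made-up problem/solution pair, not actual dataset rows), calling the helper on a toy batch shows the structure it produces:

    # Toy example: the helper expects batched columns and returns a "conversations"
    # column containing [user, assistant] message lists.
    toy_batch = {
        "problem": ["What is 2 + 2?"],
        "generated_solution": ["2 + 2 = 4."],
    }
    print(generate_conversation(toy_batch)["conversations"][0])
    # [{'role': 'user', 'content': 'What is 2 + 2?'},
    #  {'role': 'assistant', 'content': '2 + 2 = 4.'}]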

    reasoning_conversations = tokenizer.apply_chat_template(
        reasoning_dataset.map(generate_conversation, batched=True)["conversations"],
        tokenize=False,
    )
    
    
    from unsloth.chat_templates import standardize_sharegpt
    dataset = standardize_sharegpt(non_reasoning_dataset)
    
    
    non_reasoning_conversations = tokenizer.apply_chat_template(
        dataset["conversations"],
        tokenize=False,
    )
    
    
    import pandas as pd
    
    
    chat_percentage = 0.75
    non_reasoning_subset = pd.Series(non_reasoning_conversations).sample(
        int(len(reasoning_conversations) * (1.0 - chat_percentage)),
        random_state=2407,
    )
    
    
    data = pd.concat([
        pd.Series(reasoning_conversations),
        pd.Series(non_reasoning_subset)
    ])
    data.name = "text"

    We prepare the fine-tuning dataset by converting the reasoning and instruction datasets into a consistent chat format and then combining them. The reasoning dataset is first mapped through generate_conversation and passed to the tokenizer’s apply_chat_template, which converts the structured conversations into tokenizable strings. The standardize_sharegpt function normalizes the instruction dataset into a compatible structure before the same chat template is applied. A subset of the non-reasoning (instruction) conversations, sized at 25% of the reasoning set (as controlled by chat_percentage), is then sampled and combined with the reasoning data. This blend ensures the model is exposed to both logical reasoning and general instruction-following tasks, improving its versatility during training. The final combined data is stored as a Pandas Series named “text”.
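    An optional check (not in the original code) makes the resulting proportions explicit; because the sampled instruction subset is sized at 25% of the reasoning set, the combined data works out to roughly a 4:1 reasoning-to-instruction ratio:

    # Optional: report the actual composition of the blended training data.
    n_reasoning = len(reasoning_conversations)
    n_chat = len(non_reasoning_subset)
    total = n_reasoning + n_chat
    print(f"Reasoning examples:   {n_reasoning} ({n_reasoning / total:.1%})")
    print(f"Instruction examples: {n_chat} ({n_chat / total:.1%})")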

    from datasets import Dataset
    
    
    combined_dataset = Dataset.from_pandas(pd.DataFrame(data))
    combined_dataset = combined_dataset.shuffle(seed=3407)
    
    
    from trl import SFTTrainer, SFTConfig
    
    
    trainer = SFTTrainer(
        model=model,
        tokenizer=tokenizer,
        train_dataset=combined_dataset,
        eval_dataset=None,  
        args=SFTConfig(
            dataset_text_field="text",
            per_device_train_batch_size=2,
            gradient_accumulation_steps=4,
            warmup_steps=5,
            max_steps=30,
            learning_rate=2e-4,
            logging_steps=1,
            optim="adamw_8bit",
            weight_decay=0.01,
            lr_scheduler_type="linear",
            seed=3407,
            report_to="none",
        )
    )
    

    We take the preprocessed conversations, wrap them into a Hugging Face Dataset (ensuring the data is in a consistent format), and shuffle the dataset with a fixed seed for reproducibility. Then, the fine-tuning trainer is initialized using trl’s SFTTrainer and SFTConfig. The trainer is set up to use the combined dataset (with dataset_text_field pointing at the “text” column) and defines training hyperparameters like batch size, gradient accumulation, number of warmup and training steps, learning rate, optimizer settings, and a linear learning rate scheduler. This configuration is geared towards efficient fine-tuning while maintaining reproducibility and keeping logging minimal (with report_to="none").
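    One detail worth spelling out: with per_device_train_batch_size=2 and gradient_accumulation_steps=4, each optimizer step effectively sees 8 examples, so this short 30-step demo run touches roughly 240 training examples in total:

    # Effective batch size and approximate examples seen in this short demo run.
    per_device_bs = 2
    grad_accum = 4
    max_steps = 30
    effective_bs = per_device_bs * grad_accum   # 8 examples per optimizer step
    examples_seen = effective_bs * max_steps    # ~240 examples in total
    print(f"Effective batch size: {effective_bs}, examples seen: ~{examples_seen}")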

    trainer.train()

    trainer.train() starts the fine-tuning process for the Qwen3-14B model using the SFTTrainer. It trains the model on the prepared mixed dataset of reasoning and instruction-following conversations, optimizing only the LoRA-adapted parameters thanks to the underlying Unsloth setup. Training proceeds according to the configuration specified earlier (e.g., max_steps=30, per-device batch size of 2, learning rate 2e-4), and the loss is printed at every logging step. This command launches the actual model adaptation on the blended dataset.
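    Once training finishes, the recorded losses can be reviewed from the trainer’s state (trainer.state.log_history is standard Hugging Face Trainer behavior; the exact keys logged may vary):

    # Optional: review the logged training loss after the run completes.
    losses = [log["loss"] for log in trainer.state.log_history if "loss" in log]
    if losses:
        print(f"First logged loss: {losses[0]:.4f} | Last logged loss: {losses[-1]:.4f}")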

    model.save_pretrained("qwen3-finetuned-colab")
    tokenizer.save_pretrained("qwen3-finetuned-colab")

    We save the fine-tuned model and tokenizer locally to the “qwen3-finetuned-colab” directory. By calling save_pretrained(), the adapted weights and tokenizer configuration can be reloaded later for inference or further training, either locally or after uploading to the Hugging Face Hub.
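    As a minimal follow-up sketch (not part of the original post, and assuming Unsloth picks up the saved adapter directory), the fine-tuned model can be reloaded with the same API and tested with a placeholder prompt:

    # Reload the saved LoRA adapter with Unsloth and run a quick generation test.
    from unsloth import FastLanguageModel

    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name = "qwen3-finetuned-colab",   # local directory saved above
        max_seq_length = 2048,
        load_in_4bit = True,
    )
    FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference mode

    messages = [{"role": "user", "content": "Solve: what is 12 * 7?"}]  # placeholder prompt
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(input_ids=inputs, max_new_tokens=128)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))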

    In conclusion, with the help of Unsloth AI, fine-tuning massive LLMs like Qwen3-14B becomes feasible on limited resources while remaining highly efficient and accessible. This tutorial demonstrated how to load a 4-bit quantized version of the model, apply structured chat templates, mix multiple datasets for better generalization, and train using TRL’s SFTTrainer. Whether you’re building custom assistants or specialized domain models, Unsloth’s tools dramatically reduce the barrier to fine-tuning at scale. As open-source fine-tuning ecosystems evolve, Unsloth continues to lead the way in making LLM training faster, cheaper, and more practical for everyone.


    Check out the COLAB NOTEBOOK. All credit for this research goes to the researchers of this project.

    The post A Step-by-Step Coding Guide to Efficiently Fine-Tune Qwen3-14B Using Unsloth AI on Google Colab with Mixed Datasets and LoRA Optimization appeared first on MarkTechPost.
