    A Coding Implementation on Introduction to Weight Quantization: Key Aspect in Enhancing Efficiency in Deep Learning and LLMs

    April 13, 2025

    In today’s deep learning landscape, optimizing models for deployment in resource-constrained environments is more important than ever. Weight quantization addresses this need by reducing the precision of model parameters, typically from 32-bit floating-point values to lower bit-width representations, yielding smaller models that run faster on hardware with limited resources. This tutorial introduces weight quantization using PyTorch’s dynamic quantization technique on a pretrained ResNet18 model. It explores how to inspect weight distributions, apply dynamic quantization to key layers (such as fully connected layers), compare model sizes, and visualize the resulting changes, equipping you with both the theoretical background and the practical skills needed to deploy quantized deep learning models.
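
    Before diving into the PyTorch API, it helps to see the arithmetic behind quantization itself. The short sketch below (an illustration added here, not part of the original tutorial; PyTorch performs this internally) maps FP32 values onto signed 8-bit integers with a scale and zero point, then dequantizes them to show the approximation error:

    import torch

    # Affine (asymmetric) quantization of a weight tensor to int8, by hand.
    w = torch.randn(4, 4)                      # stand-in for FP32 weights

    qmin, qmax = -128, 127                     # signed 8-bit range
    scale = (w.max() - w.min()) / (qmax - qmin)
    zero_point = qmin - torch.round(w.min() / scale)

    q = torch.clamp(torch.round(w / scale) + zero_point, qmin, qmax)  # int8 codes
    w_hat = (q - zero_point) * scale           # dequantized approximation

    print("max abs error:", (w - w_hat).abs().max().item())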

    import torch
    import torch.nn as nn
    import torch.quantization
    import torchvision.models as models
    import matplotlib.pyplot as plt
    import numpy as np
    import os
    
    
    print("Torch version:", torch.__version__)

    We import the required libraries (PyTorch, torchvision, matplotlib, and NumPy) and print the PyTorch version, ensuring that all modules needed for model manipulation and visualization are ready.

    model_fp32 = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # current API; pretrained=True is deprecated
    model_fp32.eval()
    
    
    print("Pretrained ResNet18 (FP32) model loaded.")

    A pretrained ResNet18 model is loaded in FP32 (floating-point) precision and set to evaluation mode, preparing it for further processing and quantization.

    fc_weights_fp32 = model_fp32.fc.weight.data.cpu().numpy().flatten()
    
    
    plt.figure(figsize=(8, 4))
    plt.hist(fc_weights_fp32, bins=50, color='skyblue', edgecolor='black')
    plt.title("FP32 - FC Layer Weight Distribution")
    plt.xlabel("Weight values")
    plt.ylabel("Frequency")
    plt.grid(True)
    plt.show()
    

    In this block, the weights from the final fully connected layer of the FP32 model are extracted and flattened, then a histogram is plotted to visualize their distribution before any quantization is applied.

    [Output: histogram of the FP32 FC layer weight distribution]
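
    As a quick complement to the histogram, a few summary statistics make the weight range concrete (an optional check, not part of the original tutorial):

    # Optional: summary statistics of the FP32 FC weights shown above.
    print(f"mean: {fc_weights_fp32.mean():.6f}, std: {fc_weights_fp32.std():.6f}")
    print(f"min:  {fc_weights_fp32.min():.6f}, max: {fc_weights_fp32.max():.6f}")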
    quantized_model = torch.quantization.quantize_dynamic(model_fp32, {nn.Linear}, dtype=torch.qint8)
    quantized_model.eval()  
    
    
    print("Dynamic quantization applied to the model.")

    We apply dynamic quantization to the model, specifically targeting the Linear layers and converting their weights to 8-bit integers (qint8). This demonstrates a key technique for reducing model size and inference latency.
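
    To confirm which modules were actually converted, printing the submodules shows the swap (a quick inspection added here, not part of the original post): the fully connected layer now reports itself as a dynamically quantized Linear, while convolution layers remain ordinary FP32 modules.

    # Only nn.Linear modules are swapped by dynamic quantization;
    # convolutions stay in FP32.
    print(quantized_model.fc)     # dynamically quantized Linear
    print(quantized_model.conv1)  # plain FP32 Conv2d, untouched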

    def get_model_size(model, filename="temp.p"):
        torch.save(model.state_dict(), filename)
        size = os.path.getsize(filename) / 1e6
        os.remove(filename)
        return size
    
    
    fp32_size = get_model_size(model_fp32, "fp32_model.p")
    quant_size = get_model_size(quantized_model, "quant_model.p")
    
    
    print(f"FP32 Model Size: {fp32_size:.2f} MB")
    print(f"Quantized Model Size: {quant_size:.2f} MB")

    A helper function saves a model’s state_dict to disk and reports the file size in megabytes; it is then used to measure and compare the sizes of the original FP32 model and the quantized model, showcasing the compression achieved by quantization.
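
    As a back-of-the-envelope check (my addition, not from the original tutorial): ResNet18 has roughly 11.7M parameters, so at 4 bytes per FP32 weight the on-disk size should land in the mid-40s of MB, which the measured number should roughly confirm.

    # Sanity check: FP32 size ≈ parameter count x 4 bytes.
    n_params = sum(p.numel() for p in model_fp32.parameters())
    print(f"Parameters: {n_params/1e6:.1f}M, expected FP32 size: ~{n_params * 4 / 1e6:.1f} MB")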

    dummy_input = torch.randn(1, 3, 224, 224)
    
    
    with torch.no_grad():
        output_fp32 = model_fp32(dummy_input)
        output_quant = quantized_model(dummy_input)
    
    
    print("Output from FP32 model (first 5 elements):", output_fp32[0][:5])
    print("Output from Quantized model (first 5 elements):", output_quant[0][:5])

    A dummy input tensor is created to simulate an image, and both FP32 and quantized models are run on this input so that you can compare their outputs and validate that quantization does not drastically alter predictions.
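
    To make the comparison quantitative, you can measure the logit difference and check whether the top-1 prediction survives quantization (an optional step, not in the original code):

    # Optional: quantify agreement between the FP32 and quantized outputs.
    max_diff = (output_fp32 - output_quant).abs().max().item()
    same_top1 = output_fp32.argmax(dim=1).equal(output_quant.argmax(dim=1))
    print(f"Max absolute logit difference: {max_diff:.4f}")
    print(f"Top-1 prediction unchanged: {same_top1}")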

    # The dynamically quantized Linear exposes its weight via a method;
    # fall back to the internal packed params on older PyTorch versions.
    if hasattr(quantized_model.fc, 'weight'):
        fc_weights_quant = quantized_model.fc.weight().dequantize().cpu().numpy().flatten()
    else:
        fc_weights_quant = quantized_model.fc._packed_params._packed_weight.dequantize().cpu().numpy().flatten()
    
    
    plt.figure(figsize=(14, 5))
    
    
    plt.subplot(1, 2, 1)
    plt.hist(fc_weights_fp32, bins=50, color='skyblue', edgecolor='black')
    plt.title("FP32 - FC Layer Weight Distribution")
    plt.xlabel("Weight values")
    plt.ylabel("Frequency")
    plt.grid(True)
    
    
    plt.subplot(1, 2, 2)
    plt.hist(fc_weights_quant, bins=50, color='salmon', edgecolor='black')
    plt.title("Quantized - FC Layer Weight Distribution")
    plt.xlabel("Weight values")
    plt.ylabel("Frequency")
    plt.grid(True)
    
    
    plt.tight_layout()
    plt.show()
    

    In this block, the quantized weights (after dequantization) are extracted from the fully connected layer and compared via histograms against the original FP32 weights to illustrate the changes in weight distribution due to quantization.

    [Output: side-by-side histograms of the FP32 and quantized FC layer weight distributions]
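
    Since both weight arrays are now available, the pointwise quantization error is easy to measure (an optional addition, not in the original tutorial); for int8 quantization it should be small relative to the weight range shown in the histograms.

    # Optional: elementwise error introduced by quantizing the FC weights.
    err = np.abs(fc_weights_fp32 - fc_weights_quant)
    print(f"Mean abs quantization error: {err.mean():.6f}")
    print(f"Max  abs quantization error: {err.max():.6f}")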

    In conclusion, the tutorial has provided a step-by-step guide to understanding and implementing weight quantization, highlighting its impact on model size and performance. By quantizing a pre-trained ResNet18 model, we observed the shifts in weight distributions, the tangible benefits in model compression, and potential inference speed improvements. This exploration sets the stage for further experimentation, such as Quantization Aware Training (QAT), which can further improve the accuracy of quantized models.
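
    For readers who want a starting point for that experimentation, here is a minimal eager-mode QAT sketch on a hypothetical toy model (an illustration of the standard PyTorch eager-mode workflow, assuming the default fbgemm x86 backend; it is not code from the tutorial above). QuantStub and DeQuantStub mark where tensors enter and leave the quantized region, prepare_qat inserts fake-quantization for training, and convert produces real int8 modules afterward.

    import torch
    import torch.nn as nn
    import torch.quantization

    class ToyNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.quant = torch.quantization.QuantStub()
            self.fc1 = nn.Linear(16, 16)
            self.relu = nn.ReLU()
            self.fc2 = nn.Linear(16, 4)
            self.dequant = torch.quantization.DeQuantStub()

        def forward(self, x):
            x = self.quant(x)
            x = self.fc2(self.relu(self.fc1(x)))
            return self.dequant(x)

    net = ToyNet()
    net.train()
    net.qconfig = torch.quantization.get_default_qat_qconfig("fbgemm")
    torch.quantization.prepare_qat(net, inplace=True)  # insert fake-quant ops

    opt = torch.optim.SGD(net.parameters(), lr=0.01)
    for _ in range(3):  # stand-in for a real training loop
        loss = net(torch.randn(8, 16)).pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

    net.eval()
    qnet = torch.quantization.convert(net)  # swap in real int8 modules
    print(qnet(torch.randn(1, 16)))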


    Here is the Colab Notebook.

    The post A Coding Implementation on Introduction to Weight Quantization: Key Aspect in Enhancing Efficiency in Deep Learning and LLMs appeared first on MarkTechPost.
