
    A Coding Guide to Compare Three Stability AI Diffusion Models (v1.5, v2-Base & SD3-Medium) Diffusion Capabilities Side-by-Side in Google Colab Using Gradio

    May 5, 2025

    In this hands-on tutorial, we’ll use three of Stability AI’s diffusion models, Stable Diffusion v1.5, Stable Diffusion 2 base, and the newer Stable Diffusion 3 Medium, to generate eye-catching imagery. Running entirely in Google Colab with a Gradio interface, we’ll compare the three pipelines side by side, iterate on prompts quickly, and take advantage of GPU-accelerated inference. Whether you’re a marketer looking to elevate your brand’s visual narrative or a developer eager to prototype AI-driven content workflows, this tutorial shows how Stability AI’s open-source models can be deployed instantly and at no infrastructure cost, letting you focus on storytelling, engagement, and real-world results.

    !pip install huggingface_hub
    from huggingface_hub import notebook_login
    
    
    notebook_login()

    We install the huggingface_hub library and then import and invoke the notebook_login() function, which prompts you to authenticate your notebook session with your Hugging Face account, allowing you to seamlessly access and manage models, datasets, and other hub resources.
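
    Note that the Stable Diffusion 3 Medium weights are gated behind a license acceptance on Hugging Face, so an authenticated session is required before they can be downloaded. If you prefer a non-interactive login (for example, when re-running the notebook top to bottom), here is a minimal sketch, assuming your access token is stored in an environment variable named HF_TOKEN (a name chosen for this example):

    import os
    from huggingface_hub import login


    # Non-interactive alternative to notebook_login(): read a Hugging Face
    # access token from the (hypothetical) HF_TOKEN environment variable.
    login(token=os.environ["HF_TOKEN"])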

    !pip uninstall -y torchvision
    
    
    !pip install --upgrade torch torchvision --index-url https://download.pytorch.org/whl/cu118
    
    
    !pip install --upgrade diffusers transformers accelerate safetensors gradio pillow

    We first force-uninstall any existing torchvision to clear potential conflicts, then reinstall torch and torchvision from the CUDA 11.8–compatible PyTorch wheels, and finally upgrade the key libraries, diffusers, transformers, accelerate, safetensors, gradio, and pillow, so we have recent versions for building and running GPU-accelerated generative pipelines and web demos.
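
    After these installs complete (restart the Colab runtime if pip asks you to), a quick sanity check, just a sketch, confirms that a CUDA-enabled PyTorch build is active and a GPU is visible:

    import torch


    # Show the installed PyTorch version, the CUDA version it was built against,
    # and whether this runtime can actually see a GPU.
    print(torch.__version__, torch.version.cuda, torch.cuda.is_available())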

    import torch
    from diffusers import StableDiffusionPipeline, StableDiffusion3Pipeline
    import gradio as gr
    
    
    device = "cuda" if torch.cuda.is_available() else "cpu"

    We import PyTorch alongside both the Stable Diffusion v1 and v3 pipelines from the Diffusers library, as well as Gradio for building interactive demos. We then check for CUDA availability and set the device variable to "cuda" if a GPU is present; otherwise, we fall back to "cpu", ensuring the models run on the best available hardware.

    pipe1 = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        torch_dtype=torch.float16,
        safety_checker=None
    ).to(device)
    pipe1.enable_attention_slicing()

    We load the Stable Diffusion v1.5 model in half precision (float16) without the built-in safety checker, transfer it to the selected device (GPU, if available), and then enable attention slicing to reduce peak VRAM usage during image generation.

    pipe2 = StableDiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-2-base",
        torch_dtype=torch.float16,
        safety_checker=None
    ).to(device)
    pipe2.enable_attention_slicing()

    We load the Stable Diffusion v2 "base" model in 16-bit precision without the default safety filter, transfer it to the chosen device, and activate attention slicing to optimize memory usage during inference.

    pipe3 = StableDiffusion3Pipeline.from_pretrained(
        "stabilityai/stable-diffusion-3-medium-diffusers",
        torch_dtype=torch.float16,
        safety_checker=None
    ).to(device)
    pipe3.enable_attention_slicing()

    We pull in Stability AI’s Stable Diffusion 3 "medium" checkpoint in 16-bit precision (skipping the built-in safety checker), transfer it to the selected device, and enable attention slicing to reduce GPU memory usage during generation.
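
    Keeping all three fp16 pipelines resident on a single GPU can be tight on a free Colab T4. If you run into out-of-memory errors, one optional workaround (a sketch, not part of the original workflow) is to let Diffusers offload idle model components to the CPU; if you go this route, skip the earlier .to(device) calls and let the offload hooks manage placement:

    # Optional: trade some speed for lower VRAM usage by offloading idle
    # components to the CPU (requires the accelerate package installed above).
    for pipe in (pipe1, pipe2, pipe3):
        pipe.enable_model_cpu_offload()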

    def generate(prompt, steps, scale):
        img1 = pipe1(prompt, num_inference_steps=steps, guidance_scale=scale).images[0]
        img2 = pipe2(prompt, num_inference_steps=steps, guidance_scale=scale).images[0]
        img3 = pipe3(prompt, num_inference_steps=steps, guidance_scale=scale).images[0]
        return img1, img2, img3
    

    Now, this function runs the same text prompt through all three loaded pipelines (pipe1, pipe2, pipe3) using the specified inference steps and guidance scale, then returns the first image from each, making it perfect for comparing outputs across Stable Diffusion v1.5, v2-base, and v3-medium.
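
    Before wiring up the UI, you can sanity-check the function directly. A small usage sketch (the prompt and filenames are placeholders) might look like this:

    # Run one prompt through all three pipelines and save the results to disk.
    img1, img2, img3 = generate("a watercolor fox in a misty forest", steps=30, scale=7.5)
    img1.save("sd_v15.png")
    img2.save("sd_v2_base.png")
    img3.save("sd3_medium.png")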

    def choose(selection):
        return f"✅ You selected: **{selection}**"
    
    
    with gr.Blocks() as demo:
        gr.Markdown("## AI Social-Post Generator with 3 Models")
        with gr.Row():
            prompt = gr.Textbox(label="Prompt", placeholder="A vibrant beach sunset…")
            steps = gr.Slider(1, 100, value=50, step=1, label="Inference Steps")
            scale = gr.Slider(1.0, 20.0, value=7.5, step=0.1, label="Guidance Scale")
        btn = gr.Button("Generate Images")
        with gr.Row():
            out1 = gr.Image(label="Model 1: SD v1.5")
            out2 = gr.Image(label="Model 2: SD v2-base")
            out3 = gr.Image(label="Model 3: SD v3-medium")
        sel = gr.Radio(
            ["Model 1: SD v1.5","Model 2: SD v2-base","Model 3: SD v3-medium"],
            label="Select your favorite"
        )
        txt = gr.Markdown()
    
    
        btn.click(fn=generate, inputs=[prompt, steps, scale], outputs=[out1, out2, out3])
        sel.change(fn=choose, inputs=sel, outputs=txt)
    
    
    demo.launch(share=True)

    Finally, this Gradio app builds a three-column UI where you can enter a text prompt, adjust inference steps and guidance scale, then generate and display images from SD v1.5, v2-base, and v3-medium side by side. It also features a radio selector, allowing you to select your preferred model output, and displays a simple confirmation message when a choice is made.
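
    Because each click runs three full diffusion pipelines, a request can take a while to finish. A small optional tweak (not part of the original snippet) is to enable Gradio's request queue so long generations don't time out over the public share link:

    # Queue incoming requests so slow generations are processed in order
    # instead of timing out; then launch with a shareable public URL.
    demo.queue().launch(share=True)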

    A web interface to compare the three Stability AI models’ output 

    In conclusion, by integrating Stability AI’s state-of-the-art diffusion architectures into an easy-to-use Gradio app, you’ve seen how effortlessly you can prototype, compare, and deploy stunning visuals that resonate on today’s platforms. From A/B-testing creative directions to automating campaign assets at scale, Stability AI provides the performance, flexibility, and vibrant community support to transform your content pipeline.

