
    Google Researchers Introduce LightLab: A Diffusion-Based AI Method for Physically Plausible, Fine-Grained Light Control in Single Images

    May 17, 2025

    Manipulating lighting conditions in images post-capture is challenging. Traditional approaches rely on 3D graphics methods that reconstruct scene geometry and properties from multiple captures before simulating new lighting with physical illumination models. Although these techniques provide explicit control over light sources, recovering accurate 3D models from a single image remains an ill-posed problem that frequently yields unsatisfactory results. Modern diffusion-based image editing methods have emerged as alternatives that use strong statistical priors to bypass physical modeling requirements. However, these approaches struggle with precise parametric control due to their inherent stochasticity and dependence on textual conditioning.

    Generative image editing methods have been adapted for various relighting tasks with mixed results. Portrait relighting approaches often use light stage data to supervise generative models, while object relighting methods might fine-tune diffusion models using synthetic datasets conditioned on environment maps. Some methods assume a single dominant light source for outdoor scenes, like the sun, while indoor scenes present more complex multi-illumination challenges. Various approaches address these issues, including inverse rendering networks and methods that manipulate StyleGAN’s latent space. Flash photography research shows progress in multi-illumination editing through techniques that use flash/no-flash pairs to disentangle and manipulate scene illuminants.

    Researchers from Google, Tel Aviv University, Reichman University, and Hebrew University of Jerusalem have proposed LightLab, a diffusion-based method enabling explicit parametric control over light sources in images. It targets two fundamental properties of light sources: intensity and color. LightLab also provides control over ambient illumination and tone mapping effects, creating a comprehensive set of editing tools that allow users to manipulate an image’s overall look and feel through illumination adjustments. The method is demonstrated on indoor images containing visible light sources, and additional results show promise for outdoor scenes and out-of-domain examples. Comparative analysis indicates that LightLab is among the first methods to deliver high-quality, precise control over visible local light sources.
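    To make the editing controls described above concrete, here is a minimal sketch of the kind of parameters such a tool would expose. The dataclass and field names are assumptions for illustration, not LightLab's actual interface.

```python
# Hypothetical set of editing "knobs" matching the controls described in the
# summary (target light intensity and color, ambient level, tone mapping).
# Names and default values are assumptions, not the authors' API.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class LightEdit:
    source_intensity: float                    # relative strength of the target light source
    source_color: Tuple[float, float, float]   # RGB tint of the target light source
    ambient_intensity: float = 1.0             # overall ambient illumination level
    tone_mapping_strength: float = 1.0         # strength of the tone mapping adjustment

# Example: dim the lamp to 30% and make it warmer, leaving ambient light unchanged.
edit = LightEdit(source_intensity=0.3, source_color=(1.0, 0.75, 0.55))
```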

    LightLab uses pairs of images to implicitly model controlled light changes in image space; this paired data then trains a specialized diffusion model. The dataset combines real photographs with synthetic renderings. The photography set consists of 600 raw image pairs captured with mobile devices on tripods, where each pair shows an identical scene with a single visible light source switched on or off. Auto-exposure settings and post-capture calibration ensure proper exposure. To augment this collection, a larger set of synthetic images is rendered from 20 artist-created indoor 3D scenes using physically based rendering in Blender. This synthetic pipeline randomly samples camera views around target objects and procedurally assigns light source parameters, including intensity, color temperature, area size, and cone angle.
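    As a rough illustration of the procedural sampling step in the synthetic pipeline, the sketch below randomizes light parameters through Blender's Python API. The parameter ranges, the crude color-temperature ramp, and the function name are assumptions; the authors' actual rendering code is not described here at that level of detail.

```python
# Hedged sketch of procedurally sampling light source parameters in Blender's
# Python API (bpy), in the spirit of the synthetic pipeline described above.
# All ranges and the warm-to-white color ramp are assumptions.
import random
import bpy  # available inside Blender's bundled Python

def randomize_lights(intensity_w=(50.0, 1000.0),    # emission power in watts (assumed range)
                     temp_k=(2700.0, 6500.0),       # color temperature in kelvin (assumed range)
                     area_m=(0.1, 1.0),             # area light size in meters (assumed range)
                     cone_rad=(0.4, 2.6)):          # spot cone angle in radians (assumed range)
    for light in bpy.data.lights:
        light.energy = random.uniform(*intensity_w)
        # Crude warm-to-white ramp standing in for a proper blackbody conversion.
        t = (random.uniform(*temp_k) - temp_k[0]) / (temp_k[1] - temp_k[0])
        light.color = (1.0, 0.85 + 0.15 * t, 0.70 + 0.30 * t)
        if light.type == 'AREA':
            light.size = random.uniform(*area_m)
        elif light.type == 'SPOT':
            light.spot_size = random.uniform(*cone_rad)

randomize_lights()
bpy.ops.render.render(write_still=True)  # render one training sample with the sampled lights
```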

    Comparative analysis shows that using a weighted mixture of real captures and synthetic renders achieves optimal results across all settings. The quantitative improvement from adding synthetic data to real captures is relatively modest at only 2.2% in PSNR, likely because significant local illumination changes are overshadowed by low-frequency image-wide details in these metrics. Qualitative comparisons on evaluation datasets show LightLab’s superiority over competing methods like OmniGen, RGB ↔ X, ScribbleLight, and IC-Light. These alternatives often introduce unwanted illumination changes, color distortion, or geometric inconsistencies. In contrast, LightLab provides faithful control over target light sources while generating physically plausible lighting effects throughout the scene.
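    The modest PSNR gap is easier to interpret with the metric written out: PSNR averages squared error over the entire image, so a large error confined to a small relit region barely moves the score. The snippet below is illustrative only; the image shapes, region, and error magnitude are made up for the example.

```python
# Illustration of why a global, pixel-averaged metric like PSNR can understate
# strong but spatially localized relighting errors. All data here is synthetic.
import numpy as np

def psnr(pred, target, max_val=1.0):
    mse = np.mean((pred - target) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

h, w = 512, 512
target = np.random.rand(h, w, 3).astype(np.float32)
pred = target.copy()
pred[:64, :64] += 0.5                      # a large error limited to a small "lamp" region
pred = np.clip(pred, 0.0, 1.0)

region = (slice(None, 64), slice(None, 64))
print("full-image PSNR:", psnr(pred, target))                   # stays high
print("lamp-region PSNR:", psnr(pred[region], target[region]))  # much lower
```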

    In conclusion, the researchers introduced LightLab, an advancement in diffusion-based light source manipulation for images. Using light linearity principles and synthetic 3D data, they created high-quality paired images that implicitly model complex illumination changes. Despite its strengths, LightLab faces limitations from dataset bias, particularly regarding light source types, which could be addressed by integrating unpaired fine-tuning methods. Moreover, while capturing data with consumer mobile devices and post-capture exposure calibration kept dataset collection simple, it prevents precise relighting in absolute physical units, leaving room for refinement in future iterations.
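    The light-linearity principle mentioned above can be sketched in a few lines: in linear RGB, the contribution of a single light source is the difference between the "on" and "off" captures of an aligned pair, and new intensities or colors can be composed by scaling and tinting that difference. This is a minimal sketch of the principle as summarized here, not the authors' training code; the function and variable names are assumptions.

```python
# Minimal sketch of the light-linearity idea: with an aligned pair of linear-RGB
# images where a single light source is on/off, that light's contribution is the
# difference image, which can be rescaled and tinted to synthesize new targets.
import numpy as np

def relight(img_on, img_off, intensity=1.0, tint=(1.0, 1.0, 1.0)):
    """Compose a relit image in linear RGB from an on/off capture pair."""
    light_only = np.clip(img_on - img_off, 0.0, None)   # isolated light contribution
    return img_off + intensity * np.asarray(tint) * light_only

# Example usage (arrays are HxWx3 linear-RGB floats):
#   relit = relight(img_on, img_off, intensity=0.4, tint=(1.0, 0.8, 0.6))
```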


    Check out the Paper and Project Page. All credit for this research goes to the researchers of this project. Also, feel free to follow us on Twitter and don’t forget to join our 90k+ ML SubReddit.

    The post Google Researchers Introduce LightLab: A Diffusion-Based AI Method for Physically Plausible, Fine-Grained Light Control in Single Images appeared first on MarkTechPost.
