    Merging AI and underwater photography to reveal hidden ocean worlds

    June 25, 2025

    In the Northeastern United States, the Gulf of Maine represents one of the most biologically diverse marine ecosystems on the planet — home to whales, sharks, jellyfish, herring, plankton, and hundreds of other species. But even as this ecosystem supports rich biodiversity, it is undergoing rapid environmental change. The Gulf of Maine is warming faster than 99 percent of the world’s oceans, with consequences that are still unfolding.

    A new research initiative under development at MIT Sea Grant, called LOBSTgER — short for Learning Oceanic Bioecological Systems Through Generative Representations — brings together artificial intelligence and underwater photography to document the ocean life left vulnerable to these changes and to share it with the public in new visual ways. Co-led by Keith Ellenbogen, an underwater photographer and visiting artist at MIT Sea Grant, and Andreas Mentzelopoulos, an MIT mechanical engineering PhD student, the project explores how generative AI can expand scientific storytelling by building on field-based photographic data.

    Just as the 19th-century camera transformed our ability to document and reveal the natural world — capturing life with unprecedented detail and bringing distant or hidden environments into view — generative AI marks a new frontier in visual storytelling. Like early photography, AI opens a creative and conceptual space, challenging how we define authenticity and how we communicate scientific and artistic perspectives. 

    In the LOBSTgER project, generative models are trained exclusively on a curated library of Ellenbogen’s original underwater photographs — each image crafted with artistic intent, technical precision, accurate species identification, and clear geographic context. By building a high-quality dataset grounded in real-world observations, the project ensures that the resulting imagery maintains both visual integrity and ecological relevance. In addition, LOBSTgER’s models are built using custom code developed by Mentzelopoulos to protect the process and outputs from any potential biases from external data or models. LOBSTgER’s generative AI builds upon real photography, expanding the researchers’ visual vocabulary to deepen the public’s connection to the natural world.

    At its heart, LOBSTgER operates at the intersection of art, science, and technology. The project draws from the visual language of photography, the observational rigor of marine science, and the computational power of generative AI. By uniting these disciplines, the team is not only developing new ways to visualize ocean life — they are also reimagining how environmental stories can be told. This integrative approach makes LOBSTgER both a research tool and a creative experiment — one that reflects MIT’s long-standing tradition of interdisciplinary innovation.

    Underwater photography in New England’s coastal waters is notoriously difficult. Limited visibility, swirling sediment, bubbles, and the unpredictable movement of marine life all pose constant challenges. For the past several years, Ellenbogen has navigated these challenges while building a comprehensive record of the region’s biodiversity through the project Space to Sea: Visualizing New England’s Ocean Wilderness. This large dataset of underwater images provides the foundation for training LOBSTgER’s generative AI models. The images span diverse angles, lighting conditions, and animal behaviors, resulting in a visual archive that is both artistically striking and biologically accurate.

    LOBSTgER’s custom diffusion models are trained to replicate not only the biodiversity Ellenbogen documents, but also the artistic style he uses to capture it. By learning from thousands of real underwater images, the models internalize fine-grained details such as natural lighting gradients, species-specific coloration, and even the atmospheric texture created by suspended particles and refracted sunlight. The result is imagery that not only appears visually accurate, but also feels immersive and moving.

    The models can both generate new, synthetic, yet scientifically accurate images unconditionally (i.e., requiring no user input or guidance) and enhance real photographs conditionally (i.e., image-to-image generation). By integrating AI into the photographic workflow, Ellenbogen will be able to use these tools to recover detail in turbid water, adjust lighting to emphasize key subjects, or even simulate scenes that would be nearly impossible to capture in the field. The team also believes this approach may benefit other underwater photographers and image editors facing similar challenges. This hybrid method is designed to accelerate the curation process and enable storytellers to construct a more complete and coherent visual narrative of life beneath the surface.
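
    To make the two modes concrete, the sketch below shows how unconditional sampling and image-to-image enhancement typically work in a DDPM-style diffusion model. This is a minimal illustration in PyTorch, not LOBSTgER's custom codebase: the `model` noise-prediction network, the linear noise schedule, and the `strength` parameter are all assumptions for the sake of the example.

```python
# Minimal sketch (not the LOBSTgER codebase) of the two generation modes
# described above, following the standard DDPM formulation (Ho et al., 2020).
# `model` is a hypothetical noise-prediction network eps_theta(x_t, t).
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)      # linear noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)  # cumulative products, \bar{alpha}_t

@torch.no_grad()
def denoise_from(model, x, t_start):
    """Run the reverse diffusion chain from timestep t_start down to 0."""
    for t in reversed(range(t_start)):
        t_batch = torch.full((x.shape[0],), t, dtype=torch.long)
        eps = model(x, t_batch)                           # predicted noise
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x

@torch.no_grad()
def sample_unconditional(model, shape):
    """Mode 1: generate a new image from pure noise, with no user guidance."""
    return denoise_from(model, torch.randn(shape), T)

@torch.no_grad()
def enhance_photograph(model, photo, strength=0.3):
    """Mode 2 (image-to-image): partially noise a real photo, then denoise.

    Lower `strength` stays closer to the original photograph; higher values
    give the model more freedom to recover detail or adjust lighting.
    """
    t = max(1, int(strength * T))            # how far into the chain to noise
    noise = torch.randn_like(photo)
    x_t = (torch.sqrt(alpha_bars[t - 1]) * photo
           + torch.sqrt(1.0 - alpha_bars[t - 1]) * noise)
    return denoise_from(model, x_t, t)
```

    The image-to-image mode here follows the common SDEdit-style approach of noising a real image partway along the forward process and then denoising it with the trained model; the project's actual conditioning strategy may differ.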

    In one key series, Ellenbogen captured high-resolution images of lion’s mane jellyfish, blue sharks, American lobsters, and ocean sunfish (Mola mola) while free diving in coastal waters. “Getting a high-quality dataset is not easy,” Ellenbogen says. “It requires multiple dives, missed opportunities, and unpredictable conditions. But these challenges are part of what makes underwater documentation both difficult and rewarding.”

    Mentzelopoulos has developed original code to train a family of latent diffusion models for LOBSTgER, grounded in Ellenbogen’s images. Developing such models requires a high level of technical expertise, and training models from scratch is a complex process demanding hundreds of hours of computation and meticulous hyperparameter tuning.
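
    While the article does not detail the training code, the objective at the heart of such models is compact: add a known amount of noise to a (latent) image and train the network to predict that noise. The sketch below illustrates one training step under those standard assumptions, reusing the schedule constants from the previous sketch; the `encoder` that maps photographs to latents and the optimizer setup are placeholders, not the project's actual implementation.

```python
# Minimal sketch of the standard denoising training objective for a latent
# diffusion model (reuses T and alpha_bars from the sampling sketch above).
# `encoder` and `model` are hypothetical stand-ins, not LOBSTgER's code.
import torch
import torch.nn.functional as F

def training_step(model, encoder, photos, optimizer):
    z0 = encoder(photos)                        # latent codes of real photos
    t = torch.randint(0, T, (z0.shape[0],))     # random timestep per sample
    eps = torch.randn_like(z0)                  # noise to be predicted
    ab = alpha_bars[t].view(-1, 1, 1, 1)        # \bar{alpha}_t, broadcastable
    z_t = ab.sqrt() * z0 + (1.0 - ab).sqrt() * eps   # forward noising of z0
    loss = F.mse_loss(model(z_t, t), eps)       # predict the injected noise
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```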

    The project reflects a parallel process: field documentation through photography and model development through iterative training. Ellenbogen works in the field, capturing rare and fleeting encounters with marine animals; Mentzelopoulos works in the lab, translating those moments into machine-learning contexts that can extend and reinterpret the visual language of the ocean.

    “The goal isn’t to replace photography,” Mentzelopoulos says. “It’s to build on and complement it — making the invisible visible, and helping people see environmental complexity in a way that resonates both emotionally and intellectually. Our models aim to capture not just biological realism, but the emotional charge that can drive real-world engagement and action.”

    LOBSTgER points to a hybrid future that merges direct observation with technological interpretation. The team’s long-term goal is to develop a comprehensive model that can visualize a wide range of species found in the Gulf of Maine and, eventually, apply similar methods to marine ecosystems around the world.

    The researchers suggest that photography and generative AI form a continuum, rather than a conflict. Photography captures what is — the texture, light, and animal behavior during actual encounters — while AI extends that vision beyond what is seen, toward what could be understood, inferred, or imagined based on scientific data and artistic vision. Together, they offer a powerful framework for communicating science through image-making.

    In a region where ecosystems are changing rapidly, the act of visualizing becomes more than just documentation. It becomes a tool for awareness, engagement, and, ultimately, conservation. LOBSTgER is still in its infancy, and the team looks forward to sharing more discoveries, images, and insights as the project evolves.

    Answer from the lead image: The left image was generated using LOBSTgER’s unconditional models and the right image is real.

    For more information, contact Keith Ellenbogen and Andreas Mentzelopoulos.
