
    3 Questions: Visualizing research in the age of AI

    June 8, 2025

For over 30 years, science photographer Felice Frankel has helped MIT professors, researchers, and students communicate their work visually. Throughout that time, she has seen the development of various tools to support the creation of compelling images: some helpful, and some antithetical to the effort of producing a trustworthy and complete representation of the research. In a recent opinion piece published in the journal Nature, Frankel discusses the burgeoning use of generative artificial intelligence (GenAI) in images and the challenges and implications it has for communicating research. On a more personal note, she questions whether there will still be a place for a science photographer in the research community.

    Q: You’ve mentioned that as soon as a photo is taken, the image can be considered “manipulated.” There are ways you’ve manipulated your own images to create a visual that more successfully communicates the desired message. Where is the line between acceptable and unacceptable manipulation?

A: In the broadest sense, the decisions made on how to frame and structure the content of an image, along with which tools are used to create it, are already a manipulation of reality. We need to remember the image is merely a representation of the thing, and not the thing itself. Decisions have to be made when creating the image. The critical issue is not to manipulate the data, and in the case of most images, the data is the structure. For example, for an image I made some time ago, I digitally deleted the petri dish in which a yeast colony was growing, to bring attention to the stunning morphology of the colony. The data in the image is the morphology of the colony. I did not manipulate that data. However, I always indicate in the text if I have done something to an image. I discuss the idea of image enhancement in my handbook, “The Visual Elements, Photography.”

    Q: What can researchers do to make sure their research is communicated correctly and ethically?

    A: With the advent of AI, I see three main issues concerning visual representation: the difference between illustration and documentation, the ethics around digital manipulation, and a continuing need for researchers to be trained in visual communication. For years, I have been trying to develop a visual literacy program for the present and upcoming classes of science and engineering researchers. MIT has a communication requirement which mostly addresses writing, but what about the visual, which is no longer tangential to a journal submission? I will bet that most readers of scientific articles go right to the figures, after they read the abstract. 

    We need to require students to learn how to critically look at a published graph or image and decide if there is something weird going on with it. We need to discuss the ethics of “nudging” an image to look a certain predetermined way. I describe in the article an incident when a student altered one of my images (without asking me) to match what the student wanted to visually communicate. I didn’t permit it, of course, and was disappointed that the ethics of such an alteration were not considered. We need to develop, at the very least, conversations on campus and, even better, create a visual literacy requirement along with the writing requirement.

    Q: Generative AI is not going away. What do you see as the future for communicating science visually?

    A: For the Nature article, I decided that a powerful way to question the use of AI in generating images was by example. I used one of the diffusion models to create an image using the following prompt:

    “Create a photo of Moungi Bawendi’s nano crystals in vials against a black background, fluorescing at different wavelengths, depending on their size, when excited with UV light.”

The results of my AI experimentation were often cartoon-like images that could hardly pass as reality — let alone documentation — but there will come a time when they will. In conversations with colleagues in the research and computer-science communities, all agree that we should have clear standards on what is and is not allowed. And most importantly, a GenAI visual should never be allowed as documentation.

    But AI-generated visuals will, in fact, be useful for illustration purposes. If an AI-generated visual is to be submitted to a journal (or, for that matter, be shown in a presentation), I believe the researcher MUST

    • clearly label if an image was created by an AI model;
    • indicate what model was used;
    • include what prompt was used; and
    • include the image, if there is one, that was used to help the prompt.
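The four disclosure items above could be captured as a structured record attached to a figure submission, so the labeling travels with the image rather than living only in a caption. Here is a minimal sketch in Python; the field names and schema are illustrative assumptions, not any journal's actual standard:

```python
# Hypothetical disclosure record for an AI-generated figure, covering
# the four items a researcher would need to report: the AI-generated
# label, the model, the prompt, and any seed image used with the prompt.
from dataclasses import dataclass, asdict
from typing import Optional


@dataclass
class GenAIFigureDisclosure:
    ai_generated: bool          # clearly label whether an AI model created the image
    model: str                  # indicate which model was used
    prompt: str                 # include the prompt that was used
    seed_image: Optional[str]   # reference to the image used to help the prompt, if any


disclosure = GenAIFigureDisclosure(
    ai_generated=True,
    model="example-diffusion-model",   # placeholder name, not a real product
    prompt="Nanocrystals in vials against a black background, fluorescing under UV light",
    seed_image=None,                   # no source image was supplied in this case
)

# Serialize the record, e.g. for inclusion in submission metadata.
print(asdict(disclosure))
```

A journal could then reject any figure submission whose record is missing, or whose `ai_generated` flag is set while the figure is presented as documentation.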
