
    Study shows vision-language models can’t handle queries with negation words

    May 14, 2025

    Imagine a radiologist examining a chest X-ray from a new patient. She notices the patient has swelling in the tissue but does not have an enlarged heart. Looking to speed up diagnosis, she might use a vision-language machine-learning model to search for reports from similar patients.

    But if the model mistakenly identifies reports with both conditions, the most likely diagnosis could be quite different: If a patient has tissue swelling and an enlarged heart, the condition is very likely to be cardiac related, but with no enlarged heart there could be several underlying causes.

    In a new study, MIT researchers have found that vision-language models are extremely likely to make such a mistake in real-world situations because they don’t understand negation — words like “no” and “doesn’t” that specify what is false or absent. 

    “Those negation words can have a very significant impact, and if we are just using these models blindly, we may run into catastrophic consequences,” says Kumail Alhamoud, an MIT graduate student and lead author of this study.

    The researchers tested the ability of vision-language models to identify negation in image captions. The models often performed as well as a random guess. Building on those findings, the team created a dataset of images with corresponding captions that include negation words describing missing objects.

    They show that retraining a vision-language model with this dataset leads to performance improvements when a model is asked to retrieve images that do not contain certain objects. It also boosts accuracy on multiple-choice question answering with negated captions.

    But the researchers caution that more work is needed to address the root causes of this problem. They hope their research alerts potential users to a previously unnoticed shortcoming that could have serious implications in high-stakes settings where these models are currently being used, from determining which patients receive certain treatments to identifying product defects in manufacturing plants.

    “This is a technical paper, but there are bigger issues to consider. If something as fundamental as negation is broken, we shouldn’t be using large vision/language models in many of the ways we are using them now — without intensive evaluation,” says senior author Marzyeh Ghassemi, an associate professor in the Department of Electrical Engineering and Computer Science (EECS) and a member of the Institute for Medical Engineering and Science and the Laboratory for Information and Decision Systems.

    Ghassemi and Alhamoud are joined on the paper by Shaden Alshammari, an MIT graduate student; Yonglong Tian of OpenAI; Guohao Li, a former postdoc at Oxford University; Philip H.S. Torr, a professor at Oxford; and Yoon Kim, an assistant professor of EECS and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT. The research will be presented at the Conference on Computer Vision and Pattern Recognition.

    Neglecting negation

    Vision-language models (VLMs) are trained using huge collections of images and corresponding captions, which they learn to encode as sets of numbers, called vector representations. The models use these vectors to distinguish between different images.

    A VLM utilizes two separate encoders, one for text and one for images, and the encoders learn to output similar vectors for an image and its corresponding text caption.
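    To make this mechanism concrete, here is a minimal sketch of CLIP-style dual-encoder scoring using the Hugging Face transformers library. The model name, image file, and captions are illustrative assumptions, not the exact models from the paper.

        # Minimal sketch of dual-encoder image-text scoring (CLIP-style).
        import torch
        from PIL import Image
        from transformers import CLIPModel, CLIPProcessor

        model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
        processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

        image = Image.open("scene.jpg")  # hypothetical input image
        captions = [
            "a dog jumping over a fence",
            "a fence with no dog jumping over it",
        ]

        # Each encoder maps its input to a vector; a matching image and
        # caption should produce similar vectors, hence a higher score.
        inputs = processor(text=captions, images=image,
                           return_tensors="pt", padding=True)
        with torch.no_grad():
            scores = model(**inputs).logits_per_image.softmax(dim=-1)
        print(dict(zip(captions, scores[0].tolist())))

    The paper’s central finding is that a caption and its negated twin often receive nearly identical scores, because the text encoder effectively ignores the negation.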

    “The captions express what is in the images — they are a positive label. And that is actually the whole problem. No one looks at an image of a dog jumping over a fence and captions it by saying ‘a dog jumping over a fence, with no helicopters,’” Ghassemi says.

    Because the image-caption datasets don’t contain examples of negation, VLMs never learn to identify it.

    To dig deeper into this problem, the researchers designed two benchmark tasks that test the ability of VLMs to understand negation.

    For the first, they used a large language model (LLM) to re-caption images in an existing dataset by asking the LLM to think about related objects not in an image and write them into the caption. Then they tested models by prompting them with negation words to retrieve images that contain certain objects, but not others.
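    One plausible way to implement that re-captioning step is sketched below with the OpenAI Python client; the prompt wording and model name are assumptions for illustration, not the authors’ exact pipeline.

        # Hedged sketch: ask an LLM to name a plausibly related but absent
        # object and fold it into the caption as an explicit negation.
        from openai import OpenAI

        client = OpenAI()

        def negate_caption(caption: str) -> str:
            prompt = (
                f"Original caption: '{caption}'. Name one object that plausibly "
                "relates to this scene but does not appear in it, then rewrite "
                "the caption to state that the object is absent. Return only "
                "the rewritten caption."
            )
            response = client.chat.completions.create(
                model="gpt-4o-mini",  # illustrative model choice
                messages=[{"role": "user", "content": prompt}],
            )
            return response.choices[0].message.content.strip()

        # e.g. negate_caption("a dog jumping over a fence") might return
        # "a dog jumping over a fence, with no people nearby"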

    For the second task, they designed multiple choice questions that ask a VLM to select the most appropriate caption from a list of closely related options. These captions differ only by adding a reference to an object that doesn’t appear in the image or negating an object that does appear in the image.
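    Scoring that multiple-choice task reduces to ranking the candidate captions against the image. A sketch, reusing the CLIP-style model and processor from the earlier snippet:

        import torch

        # Pick the caption the model scores highest for the image; random
        # chance here is 1/len(options).
        def choose_caption(image, options, model, processor):
            inputs = processor(text=options, images=image,
                               return_tensors="pt", padding=True)
            with torch.no_grad():
                logits = model(**inputs).logits_per_image  # shape (1, num_options)
            return options[logits.argmax().item()]

        options = [
            "a dog jumping over a fence",            # correct
            "a dog and a helicopter over a fence",   # absent object added
            "a fence, but no dog jumping over it",   # present object negated
        ]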

    The models often failed at both tasks, with image retrieval performance dropping by nearly 25 percent with negated captions. When it came to answering multiple-choice questions, the best models only achieved about 39 percent accuracy, with several models performing at or even below random chance.

    One reason for this failure is a shortcut the researchers call affirmation bias — VLMs ignore negation words and focus on objects in the images instead.

    “This does not just happen for words like ‘no’ and ‘not.’ Regardless of how you express negation or exclusion, the models will simply ignore it,” Alhamoud says.

    This was consistent across every VLM they tested.

    “A solvable problem”

    Since VLMs aren’t typically trained on image captions with negation, the researchers developed datasets with negation words as a first step toward solving the problem.

    Using a dataset with 10 million image-text caption pairs, they prompted an LLM to propose related captions that specify what is excluded from the images, yielding new captions with negation words.

    They had to be especially careful that these synthetic captions still read naturally; otherwise, a VLM fine-tuned on them could fail in the real world when faced with more complex captions written by humans.

    They found that finetuning VLMs with their dataset led to performance gains across the board. It improved models’ image retrieval abilities by about 10 percent, while also boosting performance in the multiple-choice question answering task by about 30 percent.
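    A minimal sketch of what such finetuning could look like for a CLIP-style model, using its standard contrastive loss; the hyperparameters and data handling are placeholder assumptions, not the paper’s training recipe.

        import torch
        from transformers import CLIPModel, CLIPProcessor

        model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
        processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
        optimizer = torch.optim.AdamW(model.parameters(), lr=1e-6)

        def train_step(images, negated_captions):
            # Each image is paired with a caption that names an absent object,
            # e.g. "a dog jumping over a fence, with no helicopters".
            inputs = processor(text=negated_captions, images=images,
                               return_tensors="pt", padding=True)
            outputs = model(**inputs, return_loss=True)  # CLIP contrastive loss
            outputs.loss.backward()
            optimizer.step()
            optimizer.zero_grad()
            return outputs.loss.item()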

    “But our solution is not perfect. We are just recaptioning datasets, a form of data augmentation. We haven’t even touched how these models work, but we hope this is a signal that this is a solvable problem and others can take our solution and improve it,” Alhamoud says.

    At the same time, he hopes their work encourages more users to think about the problem they want to use a VLM to solve and design some examples to test it before deployment.
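    One way to run such a pre-deployment check is a simple negation probe: for images where you know an object is absent, the negated caption should outscore its affirmative twin. A sketch, again assuming the CLIP-style model and processor from above:

        import torch

        def negation_probe(image, present_caption, negated_caption,
                           model, processor):
            inputs = processor(text=[present_caption, negated_caption],
                               images=image, return_tensors="pt", padding=True)
            with torch.no_grad():
                scores = model(**inputs).logits_per_image[0]
            return bool(scores[1] > scores[0])  # True if negation is respected

        # A model that fails this probe on your own examples is ignoring
        # negation and should not be trusted with negated queries.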

    In the future, the researchers could expand upon this work by teaching VLMs to process text and images separately, which may improve their ability to understand negation. In addition, they could develop additional datasets that include image-caption pairs for specific applications, such as health care.
