    AI lie detector beats humans and could be socially disruptive

    July 8, 2024

    Researchers from the University of Würzburg and the Max Planck Institute for Human Development trained an AI model to detect lies, and it could disrupt the way we engage with each other.

    Humans aren’t great at telling whether a person is lying or telling the truth. Experiments show that our hit rate is around 50% at best, no better than chance, and this poor performance dictates how we engage with each other.

    The truth-default theory (TDT) says that people typically assume that what a person tells them is true. With only a 50/50 chance of spotting a lie, the social cost of calling someone a liar is too big a risk, and fact-checking isn’t always practical in the moment.

    Polygraphs and other lie-detecting tech can pick up on signals like stress indicators and eye movements, but you’re not likely to use one in your next conversation. Could AI help?

    The paper explains how the research team trained Google’s BERT language model to detect when people were lying.

    The researchers recruited 986 participants and asked them to describe their weekend plans with a follow-up explanation supporting the truthfulness of their statement.

    They were then presented with the weekend plans of another participant and asked to write a false supporting statement arguing that these were in fact their plans for the weekend.

    BERT was trained on 80% of the 1,536 statements and then tasked with evaluating the truthfulness of the remaining 20%.

    The model labeled statements as true or false with 66.86% accuracy, significantly better than the human judges, who achieved a 46.47% accuracy rate in further experiments.
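    The paper doesn’t include code here, but the setup it describes, training BERT on roughly 80% of 1,536 short statements and classifying the held-out remainder as true or false, maps onto a standard binary sequence-classification fine-tune. A minimal sketch using the Hugging Face transformers and datasets libraries follows; the example statements, hyperparameters, and split mechanics are illustrative assumptions, not the researchers’ published configuration.

    ```python
    # Minimal sketch: fine-tune BERT as a binary true/false statement classifier.
    # The example records, hyperparameters, and split seed are illustrative
    # assumptions; the paper's exact training configuration is not shown here.
    from transformers import (
        AutoModelForSequenceClassification,
        AutoTokenizer,
        Trainer,
        TrainingArguments,
    )
    from datasets import Dataset

    # Hypothetical data: each supporting statement gets a label
    # (1 = truthful account of the writer's own weekend plans, 0 = fabricated).
    records = [
        {"text": "I'm visiting my parents; the train is already booked for Friday.", "label": 1},
        {"text": "I'm running a marathon on Sunday after months of training.", "label": 0},
        # ... the actual study used 1,536 such statements
    ]

    splits = Dataset.from_list(records).train_test_split(test_size=0.2, seed=42)

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

    tokenized = splits.map(tokenize, batched=True)

    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2
    )

    trainer = Trainer(
        model=model,
        args=TrainingArguments(
            output_dir="lie-detector",
            num_train_epochs=3,
            per_device_train_batch_size=16,
            learning_rate=2e-5,
        ),
        train_dataset=tokenized["train"],
        eval_dataset=tokenized["test"],
    )

    trainer.train()
    # evaluate() reports loss by default; pass compute_metrics to Trainer
    # to get held-out accuracy comparable to the 66.86% figure above.
    print(trainer.evaluate())
    ```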

    Would you use an AI lie detector?

    The researchers found that when participants were presented with the option to use the AI lie detection model, only a third decided to accept the offer.

    Those who opted to use the algorithm almost always followed its prediction, either accepting the statement as true or making an accusation of lying.

    Participants who sought algorithmic predictions made accusations almost 85% of the time when the model flagged a statement as false. Among those who did not request machine predictions, the baseline accusation rate was 19.71%.

    People who are open to the idea of an AI lie detector are more likely to call BS when they see the red light flashing.

    As the researchers put it, “One plausible explanation is that an available lie-detection algorithm offers the opportunity to transfer the accountability for accusations from oneself to the machine-learning system.”

    ‘I’m not calling you a liar, the machine is.’

    This changes everything

    What would happen in our societies if people were roughly four times more likely to start calling each other liars, as the jump from a 19.71% to an almost 85% accusation rate implies?

    The researchers concluded that if people relied on AI to be the arbiter of truth, it could have strong disruptive potential.

    The paper noted that “high accusation rates may strain our social fabric by fostering generalized distrust and further increasing polarization between groups that already find it difficult to trust one another.”

    An accurate AI lie detector would have positive impacts too. It could identify AI-generated disinformation and fake news, help in business negotiations, or combat insurance fraud.

    What about the ethics of using a tool like this? Could border agents use it to detect whether a migrant’s asylum claim was true or an opportunistic fabrication?

    More advanced models than BERT will likely push AI’s lie detection accuracy toward a point where human attempts at deception become all too easy to spot.

    The researchers concluded that their “research underscores the urgent need for a comprehensive policy framework to address the impact of AI-powered lie detection algorithms.”

    The post AI lie detector beats humans and could be socially disruptive appeared first on DailyAI.
