    DeepMind study exposes deep fakes as leading form of AI misuse

    June 26, 2024

    AI has a myriad of uses, but one of its most concerning applications is the creation of deep fake media and misinformation.

    A new study from Google DeepMind and Jigsaw, a Google technology incubator that monitors societal threats, analyzed misuse of AI between January 2023 and March 2024.

    It assessed some 200 real-world incidents of AI misuse, revealing that creating and disseminating deceptive deep fake media, particularly those targeting politicians and public figures, is the most common form of malicious AI use.

Deep fakes, highly realistic but fake images, videos, and audio generated by AI algorithms, have become more lifelike and pervasive.

Incidents such as the explicit fake images of Taylor Swift that appeared on X showed that such content can reach millions of people before it is deleted.

But most insidious are deep fakes aimed at political issues, such as the Israel-Palestine conflict. In some cases, not even the fact checkers charged with labeling them as “AI-generated” can reliably determine whether they are authentic.

    The DeepMind study collected data from a diverse array of sources, including social media platforms like X and Reddit, online blogs, and media reports. 

    Each incident was analyzed to determine the specific type of AI technology misused, the intended purpose behind the abuse, and the level of technical expertise required to carry out the malicious activity.

    Deep fakes are the dominant form of AI misuse

    The findings paint an alarming picture of the current landscape of malicious AI use:

    • Deep fakes emerged as the dominant form of AI misuse, accounting for nearly twice as many incidents as the next most prevalent category.
    • The second most frequently observed type of AI abuse was using language models and chatbots to generate and disseminate disinformation online. By automating the creation of misleading content, bad actors can flood social media and other platforms with fake news and propaganda at an unprecedented scale.
    • Influencing public opinion and political narratives was the primary motivation behind over a quarter (27%) of the AI misuse cases analyzed. This finding underscores the grave threat that deep fakes and AI-generated disinformation pose to democratic processes and the integrity of elections worldwide.
    • Financial gain was identified as the second most common driver of malicious AI activity, with unscrupulous actors offering paid services for creating deep fakes, including non-consensual explicit imagery, and leveraging generative AI to mass-produce fake content for profit.
    • The majority of AI misuse incidents involved readily accessible tools and services that required minimal technical expertise to operate. This low barrier to entry greatly expands the pool of potential malicious actors, making it easier than ever for individuals and groups to engage in AI-powered deception and manipulation.

    Mapping AI misuse to intent. Source: DeepMind.

    Nahema Marchal, the study’s lead author and a DeepMind researcher, explained the evolving landscape of AI misuse to the Financial Times: “There had been a lot of understandable concern around quite sophisticated cyber attacks facilitated by these tools,” continuing, “whereas what we saw were fairly common misuses of GenAI [such as deep fakes that] might go under the radar a little bit more.”

    Policymakers, technology companies, and researchers must work together to develop comprehensive strategies for detecting and countering deep fakes, AI-generated disinformation, and other forms of AI misuse.

    But the truth is, they’ve already tried – and largely failed. Just recently, we’ve observed more cases of children getting caught up in deep fake incidents, showing that the societal harm these tools inflict can be grave.

    Currently, tech companies can’t reliably detect deep fakes at scale, and deep fakes will only grow more realistic and tougher to detect over time.

    And once text-to-video systems like OpenAI’s Sora land, there’ll be a whole new dimension of deep fakes to handle. 

    The post DeepMind study exposes deep fakes as leading form of AI misuse appeared first on DailyAI.
