
    OpenAI Exposes AI-Powered State Actors in Global Influence Operations

    May 31, 2024

Malicious actors from Russia, China, Israel, and Iran have been leveraging artificial intelligence in covert influence operations, according to OpenAI’s latest report.

The report details adversary tactics ranging from the error-riddled posts of the “Bad Grammar” network to the more advanced strategies employed by the “Doppelganger” threat actor.

Through an in-depth analysis of recent developments and disruptions, the report, titled “AI and Covert Influence Operations: Latest Trends,” offers insight into the tactics threat actors now use to manipulate narratives and sway public opinion across online platforms.

Threat Actors Employ AI in Covert Influence Operations

The threat actors, based in Russia, China, and Iran, along with a commercial entity in Israel, exploited artificial intelligence, especially generative AI, to run a series of covert influence operations.

These operations, documented and analyzed in detail within the report, exemplify the sophisticated strategies malicious actors use to bend AI technologies to their agendas, OpenAI says.

    One of the prominent operations highlighted in the report is “Bad Grammar,” a previously undisclosed campaign originating from Russia. Operating primarily on the messaging platform Telegram, Bad Grammar sought to disseminate politically charged content targeting audiences in Ukraine, Moldova, the Baltic States, and the United States.

Despite its geographic reach, the operation was marked by blatant grammatical errors that undermined its credibility, even as it relied on AI models for content generation.

    Similarly, the report sheds light on the activities of “Doppelganger,” a persistent threat actor linked to Russia, engaged in disseminating anti-Ukraine propaganda across various online channels.

    Employing a hybrid approach that combines AI-generated content with traditional formats such as memes sourced from the internet, Doppelganger exemplifies the fusion of old and new tactics in these campaigns.

Influencing Geopolitics

    The report also highlights covert influence campaigns linked to China, Iran, and a commercial group in Israel, in addition to those connected with Russia.

These operations, known by names like “Spamouflage” and “STOIC,” use various strategies to push their agendas. Their activities include promoting pro-China narratives and attacking China’s critics, as well as generating content about the Gaza conflict and the elections in India.

    Despite the diverse origins and tactics employed by these threat actors, the report highlights common trends that shed light on the current state of covert influence. One such trend is the pervasive use of AI models to augment productivity and streamline content generation processes.

    From generating multilingual articles to automating the creation of website tags, AI serves as a force multiplier for malicious entities seeking to manipulate digital discourse.

Furthermore, the report examines the interplay between AI-driven strategies and human error, emphasizing the fallibility of the operators behind these campaigns: in several instances, AI-generated content was published bearing telltale signs of automation that exposed the state-backed operations producing it.
