    3 Questions: Modeling adversarial intelligence to exploit AI’s security vulnerabilities

    January 29, 2025

    If you’ve watched cartoons like Tom and Jerry, you’ll recognize a common theme: An elusive target avoids his formidable adversary. This game of “cat-and-mouse” — whether literal or otherwise — involves pursuing something that ever-so-narrowly escapes you at each try.

    In a similar way, evading persistent hackers is a continuous challenge for cybersecurity teams. To keep attackers chasing what’s just out of reach, MIT researchers are working on an AI approach called “artificial adversarial intelligence” that mimics device or network attackers to test network defenses before real attacks happen. Other AI-based defensive measures help engineers further fortify their systems to avoid ransomware, data theft, or other hacks.

    Here, Una-May O’Reilly, an MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) principal investigator who leads the Anyscale Learning For All Group (ALFA), discusses how artificial adversarial intelligence protects us from cyber threats.

    Q: In what ways can artificial adversarial intelligence play the role of a cyber attacker, and how does artificial adversarial intelligence portray a cyber defender?

    A: Cyber attackers exist along a competence spectrum. At the lowest end, there are so-called script-kiddies, or threat actors who spray well-known exploits and malware in the hopes of finding some network or device that hasn’t practiced good cyber hygiene. In the middle are cyber mercenaries who are better-resourced and organized to prey upon enterprises with ransomware or extortion. And, at the high end, there are groups that are sometimes state-supported, which can launch the most difficult-to-detect “advanced persistent threats” (or APTs).

    Think of the specialized, nefarious intelligence that these attackers marshal — that’s adversarial intelligence. The attackers make very technical tools that let them hack into code, they choose the right tool for their target, and their attacks have multiple steps. At each step, they learn something, integrate it into their situational awareness, and then make a decision on what to do next. For the sophisticated APTs, they may strategically pick their target, and devise a slow and low-visibility plan that is so subtle that its implementation escapes our defensive shields. They can even plant deceptive evidence pointing to another hacker!

    My research goal is to replicate this specific kind of offensive or attacking intelligence, intelligence that is adversarially oriented (the intelligence that human threat actors rely upon). I use AI and machine learning to design cyber agents and model the adversarial behavior of human attackers. I also model the learning and adaptation that characterizes cyber arms races.
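
    To make that concrete, here is a minimal, purely illustrative sketch (in Python) of one way an attacker can be modeled as a learning agent: it observes an abstract state, chooses an attack step, and updates its policy from the outcome. The states, actions, rewards, and toy environment below are invented for this example and are not the group’s actual models or tooling.

        # Purely illustrative: a toy attacker agent that learns a multi-step attack
        # policy by trial and error. States, actions, rewards, and the environment
        # are all invented for this sketch.
        import random
        from collections import defaultdict

        ACTIONS = ["scan", "exploit", "escalate", "exfiltrate"]

        def simulate_step(state, action):
            """Hypothetical environment: returns (next_state, reward, done)."""
            success = random.random() < 0.5              # stand-in for a real network model
            if state == "start" and action == "scan":
                return "recon", 0.0, False               # reconnaissance always "works" here
            if state == "recon" and action == "exploit" and success:
                return "user", 0.1, False                # gained a foothold
            if state == "user" and action == "escalate" and success:
                return "admin", 0.1, False               # privilege escalation
            if state == "admin" and action == "exfiltrate" and success:
                return "goal", 1.0, True                 # objective reached
            return state, -0.01, False                   # failed or pointless step costs a little

        q = defaultdict(float)                           # Q-value per (state, action) pair
        alpha, gamma, eps = 0.1, 0.9, 0.2

        for _ in range(2000):
            state, done = "start", False
            while not done:
                # epsilon-greedy: mostly follow what has worked so far, sometimes explore
                if random.random() < eps:
                    action = random.choice(ACTIONS)
                else:
                    action = max(ACTIONS, key=lambda a: q[(state, a)])
                nxt, reward, done = simulate_step(state, action)
                best_next = max(q[(nxt, a)] for a in ACTIONS)
                q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
                state = nxt

        # the learned policy should roughly settle on scan -> exploit -> escalate -> exfiltrate
        print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in ["start", "recon", "user", "admin"]})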

    I should also note that cyber defenses are pretty complicated. They’ve evolved their complexity in response to escalating attack capabilities. These defense systems involve designing detectors, processing system logs, triggering appropriate alerts, and then triaging them into incident response systems. They have to be constantly alert to defend a very big attack surface that is hard to track and very dynamic. On the other side of the attacker-versus-defender competition, my team and I also invent AI in the service of these different defensive fronts.

    Another thing stands out about adversarial intelligence: Both Tom and Jerry are able to learn from competing with one another! Their skills sharpen and they lock into an arms race. One gets better, then the other, to save his skin, gets better too. This tit-for-tat improvement goes onwards and upwards! We work to replicate cyber versions of these arms races.
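
    The tit-for-tat dynamic can be illustrated with a deliberately simple co-adaptation loop in the spirit of competitive coevolution: an attacker population is selected for slipping under the current best detection threshold, and a defender population is selected for catching the current best attacker without flagging normal activity. The constants and fitness functions below are hypothetical and exist only to show the back-and-forth.

        # Hypothetical sketch of an arms race as competitive coevolution. "activity"
        # is how noisy/aggressive an attack is; "threshold" is the activity level at
        # which the defender raises an alarm. All numbers are made up.
        import random

        NORMAL_ACTIVITY = 0.30     # baseline noise the defender must tolerate without alarms

        def attacker_fitness(activity, threshold):
            evades = activity < threshold                        # stays under the radar
            return (1.0 if evades else 0.0) + 0.3 * activity     # noisier attacks achieve more

        def defender_fitness(threshold, activity):
            catches = threshold <= activity                      # alarm fires on the attack
            false_alarm = threshold <= NORMAL_ACTIVITY           # but also on normal traffic
            return (1.0 if catches else 0.0) - (0.5 if false_alarm else 0.0)

        def mutate(x):
            return min(1.0, max(0.0, x + random.gauss(0, 0.05)))

        attackers = [random.uniform(0.3, 1.0) for _ in range(20)]
        defenders = [random.uniform(0.3, 1.0) for _ in range(20)]
        best_a = max(attackers)

        for gen in range(30):
            # each side is scored against the other side's current champion
            best_d = max(defenders, key=lambda d: defender_fitness(d, best_a))
            best_a = max(attackers, key=lambda a: attacker_fitness(a, best_d))
            attackers = [mutate(best_a) for _ in range(20)]
            defenders = [mutate(best_d) for _ in range(20)]
            print(f"gen {gen:2d}: attacker activity={best_a:.2f}, defender threshold={best_d:.2f}")

    Running the loop shows the two values chasing each other downward, generation after generation, a crude stand-in for the escalation the interview describes.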

    Q: What are some examples in our everyday lives where artificial adversarial intelligence has kept us safe? How can we use adversarial intelligence agents to stay ahead of threat actors?

    A: Machine learning has been used in many ways to ensure cybersecurity. There are all kinds of detectors that filter out threats. They are tuned to anomalous behavior and to recognizable kinds of malware, for example. There are AI-enabled triage systems. Some of the spam protection tools right there on your cell phone are AI-enabled!
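
    As a hedged illustration of one common detector pattern (not a description of any particular product), the sketch below fits an off-the-shelf anomaly detector on features extracted from mostly benign activity and then flags events that deviate from it. The feature names and numbers are made up for the example.

        # Illustrative only: train an anomaly detector on "normal" connection features,
        # then score new events. Feature choices and values are invented.
        import numpy as np
        from sklearn.ensemble import IsolationForest

        rng = np.random.default_rng(0)

        # pretend features per connection: [bytes sent, duration (s), failed logins]
        benign = np.column_stack([
            rng.normal(2_000, 500, 1_000),    # typical payload sizes
            rng.normal(1.0, 0.3, 1_000),      # short sessions
            rng.poisson(0.05, 1_000),         # failed logins are rare
        ])

        detector = IsolationForest(contamination=0.01, random_state=0).fit(benign)

        new_events = np.array([
            [2_100, 1.1, 0],        # looks like ordinary traffic
            [90_000, 45.0, 12],     # huge transfer, long session, many failed logins
        ])
        print(detector.predict(new_events))   # 1 = looks normal, -1 = flagged as anomalous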

    With my team, I design AI-enabled cyber attackers that can do what threat actors do. We invent AI to give our cyber agents expert computer skills and programming knowledge, making them capable of processing all sorts of cyber knowledge, planning attack steps, and making informed decisions within a campaign.

    Adversarially intelligent agents (like our AI cyber attackers) can be used as practice when testing network defenses. A lot of effort goes into checking a network’s robustness to attack, and AI is able to help with that. Additionally, when we add machine learning to our agents, and to our defenses, they play out an arms race we can inspect, analyze, and use to anticipate what countermeasures may be used when we take measures to defend ourselves.

    Q: What new risks are they adapting to, and how do they do so?

    A: There never seems to be an end to new software being released and new configurations of systems being engineered. With every release, there are vulnerabilities an attacker can target. These may be examples of weaknesses in code that are already documented, or they may be novel. 

    New configurations pose the risk of errors or new ways to be attacked. We didn’t imagine ransomware when we were dealing with denial-of-service attacks. Now we’re juggling cyber espionage and ransomware with IP [intellectual property] theft. All our critical infrastructure, including telecom networks and financial, health care, municipal, energy, and water systems, is a target.

    Fortunately, a lot of effort is being devoted to defending critical infrastructure. We will need to translate that to AI-based products and services that automate some of those efforts. And, of course, to keep designing smarter and smarter adversarial agents to keep us on our toes, or help us practice defending our cyber assets.
