Close Menu
    DevStackTipsDevStackTips
    • Home
    • News & Updates
      1. Tech & Work
      2. View All

      Upwork Freelancers vs Dedicated React.js Teams: What’s Better for Your Project in 2025?

      August 1, 2025

      Is Agile dead in the age of AI?

      August 1, 2025

      Top 15 Enterprise Use Cases That Justify Hiring Node.js Developers in 2025

      July 31, 2025

      The Core Model: Start FROM The Answer, Not WITH The Solution

      July 31, 2025

      Finally, a sleek gaming laptop I can take to the office (without sacrificing power)

      August 1, 2025

      These jobs face the highest risk of AI takeover, according to Microsoft

      August 1, 2025

      Apple’s tariff costs and iPhone sales are soaring – how long until device prices are too?

      August 1, 2025

      5 ways to successfully integrate AI agents into your workplace

      August 1, 2025
    • Development
      1. Algorithms & Data Structures
      2. Artificial Intelligence
      3. Back-End Development
      4. Databases
      5. Front-End Development
      6. Libraries & Frameworks
      7. Machine Learning
      8. Security
      9. Software Engineering
      10. Tools & IDEs
      11. Web Design
      12. Web Development
      13. Web Security
      14. Programming Languages
        • PHP
        • JavaScript
      Featured

      Enhancing Laravel Queries with Reusable Scope Patterns

      August 1, 2025
      Recent

      Enhancing Laravel Queries with Reusable Scope Patterns

      August 1, 2025

      Everything We Know About Livewire 4

      August 1, 2025

      Everything We Know About Livewire 4

      August 1, 2025
    • Operating Systems
      1. Windows
      2. Linux
      3. macOS
      Featured

      YouTube wants to use AI to treat “teens as teens and adults as adults” — with the most age-appropriate experiences and protections

      August 1, 2025
      Recent

      YouTube wants to use AI to treat “teens as teens and adults as adults” — with the most age-appropriate experiences and protections

      August 1, 2025

      Sam Altman is afraid of OpenAI’s GPT-5 creation — “The Manhattan Project feels very fast, like there are no adults in the room”

      August 1, 2025

      9 new features that arrived on the Windows 11 Insider Program during the second half of July 2025

      August 1, 2025
    • Learning Resources
      • Books
      • Cheatsheets
      • Tutorials & Guides
    Home»Tech & Work»Managing the growing risk profile of agentic AI and MCP in the enterprise

    Managing the growing risk profile of agentic AI and MCP in the enterprise

    June 17, 2025

    Advancements in artificial intelligence continue to give developers an edge in efficiently producing code, but developers and companies can’t forget that it’s an edge that can always cut both ways.

    The latest innovation is the advent of agentic AI, which brings automation and decision-making to complex development tasks. Agentic AI can be coupled with the recently open-sourced Model Context Protocol (MCP), a protocol released by Anthropic, providing an open standard for orchestrating connections between AI assistants and data sources, streamlining the work of development and security teams, which can turbocharge productivity that AI has already accelerated. 

    Anthropic’s competitors have different “MCP-like” protocols making their way into the space, and as it stands, the internet at large has yet to determine a “winner” of this software race. MCP is Anthropic for AI-to-tool connections. A2A is Google, and also facilitates AI-to-AI comms. Cisco and Microsoft will both come out with their own protocol, as well. 

    But, as we’ve seen with generative AI, this new approach to speeding up software production comes with caveats. If not carefully controlled, it can introduce new vulnerabilities and amplify existing ones, such as vulnerability to prompt injection attacks, the generation of insecure code, exposure to unauthorized access and data leakage. The interconnected nature of these tools inevitably expands the attack surface.

    Security leaders need to take a hard look at how these risks affect their business, being sure they understand the potential vulnerabilities that result from using agentic AI and MCP, and take the necessary steps to minimize those risks.

    How Agentic AI Works With MCP

    After generative AI took the world by storm starting in November 2022 with the release of ChatGPT, agentic AI can seem like the next step in AI’s evolution, but they are two different forms of AI.

    GenAI creates content, using advanced machine learning to draw on existing data to create text, images, videos, music and code. 

    Agentic AI is about solving problems and getting things done, using tools such as machine learning, natural language processing and automation technologies to make decisions and take action. Agentic AI can be used, for example, in self-driving cars (responding to circumstances on the road), cybersecurity (initiating a response to a cyberattack) or customer service (proactively offering help to customers). In software development, agentic AI can be used to write large sections of code, optimize code and troubleshoot problems.

    Meanwhile, MCP, developed by Anthropic and introduced in November 2024, accelerates the work of agentic AI and other coding assistants by providing an open, universal standard for connecting large language models (LLMs) with data sources and tools, enabling teams to apply AI capabilities throughout their environment without having to write separate code for each tool. By essentially providing a common language for LLMs such as ChatGPT, Gemini, DALL•E, DeepSeek and many others to communicate, it greatly increases interoperability among LLMs.

    MCP is even touted as a way to improve security, by providing a standard way to integrate AI capabilities and automate security operations across an organization’s toolchain. Although it was treated as a general-purpose tool, MCP can be used by security teams to increase efficiency by centralizing access, adding interoperability with security tools and applications, and giving teams flexible control over which LLMs are used for specific tasks.

    But as with any powerful new tool, organizations should not just blindly jump into this new model of development without taking a careful look at what could go wrong. There is a significant profile of increased security risks associated with agentic AI coding tools within enterprise environments, specifically focusing on MCP. 

    Productivity Is Great, but MCP Also Creates Risks

    Invariant Labs recently discovered a critical vulnerability in MCP that could allow for data exfiltration via indirect prompt injections, a high-risk issue that Invariant has dubbed “tool poisoning” attacks. Such an attack embeds malicious code instructing an AI model to perform unauthorized actions, such as accessing sensitive files and transmitting data without the user being aware. Invariant said many providers and systems like OpenAI, Anthropic, Cursor and Zapier are vulnerable to this type of attack. 

    In addition to tool poisoning, such as indirect prompt injection, MCP can introduce other potential vulnerabilities related to authentication and authorization, including excessive permissions. MCP can also lack robust logging and monitoring, which are essential to maintaining the security and performance of systems and applications. 

    The vulnerability concerns are valid, though they are unlikely to stem the tide moving toward the use of agentic AI and MCP. The benefits in productivity are too great to ignore. After all, concerns about secure code have always revolved around GenAI coding tools, which can introduce flaws into the software ecosystem if the GenAI models were initially trained on buggy software. However, developers have been happy to make use of GenAI assistants anyway. In a recent survey by Stack Overflow, 76% of developers said they were using or planned to use AI tools. That’s an increase from 70% in 2023, despite the fact that during the same time period, those developers’ view of AI tools as favorable or very favorable dropped from 77% to 72%.

    The good news for organizations is that, as with GenAI coding assistants, agentic AI tools and MCP functions can be safely leveraged, as long as security-skilled developers handle them. The key emergent risk factor here is that skilled human oversight is not scaling at anywhere near the rate of agentic AI tool adoption, and this trend must course-correct, pronto.

    Developer Education and Risk Management Is the Key

    Regardless of the technologies and tools in play, the key to security in a highly connected digital environment (which is pretty much every environment these days) is the Software Development Lifecycle (SDLC). Flaws at the code level are a top target of cyberattackers, and eliminating those flaws depends on ensuring that secure coding practices are de rigueur in the SDLC, which are applied from the beginning of the development cycle. 

    With AI assistance, it’s a real possibility that we will finally see the eradication of long-standing vulnerabilities like SQL injection and cross-site scripting (XSS) after decades of them haunting every pentest report. However, most other categories of vulnerabilities will remain, especially those relating to design flaws, and we will inevitably see new groups of AI-borne vulnerabilities as the technology progresses. Navigating these issues depends on developers being security-aware with the skills to ensure, as much as possible, that both the code they create and code generated by AI is secure from the get-go. 

    Organizations need to implement ongoing education and upskilling programs that give developers the skills and tools they need to work with security teams to mitigate flaws in software before they can be released into the ecosystem. A program should make use of benchmarks to establish the baseline skills developers need and measure their progress. It should be framework and language-specific, allowing developers to work in real-world scenarios with the programming language they use on the job. Interactive sessions work best, within a curriculum that is flexible enough to adjust to changes in circumstances.

    And organizations need to confirm that the lessons from upskilling programs have hit home, with developers putting secure best practices to use on a routine basis. A tool that makes use of benchmarking metrics to track the progress of individuals, teams and the organization overall, assessing the effectiveness of a learning program against both internal and industry standards, would provide the granular insights needed to truly move the needle is the most beneficial. Enterprise security leaders ultimately need a fine-grained view of developers’ specific skills for every code commit while showing how well developers apply their new skills to the job.

    Developer upskilling has proved to be effective in improving software security, with our research showing that companies that implemented developer education saw 22% to 84% fewer software vulnerabilities, depending on factors such as the size of the companies and whether the training focused on specific problems. Security-skilled developers are in the best position to ensure that AI-generated code is secure, whether it comes from GenAI coding assistants or the more proactive agentic AI tools.

    The drawcard of agentic models is their ability to work autonomously and make decisions independently, and these being embedded into enterprise environments at scale without appropriate human governance will inevitably introduce security issues that are not particularly visible or easy to stop. Skilled developers using AI securely will see immense productivity gains, whereas unskilled developers will simply generate security chaos at breakneck speed.

    CISOs must reduce developer risk, and provide continuous learning and skills verification within their security programs to safely implement the help of agentic AI agents.

    The post Managing the growing risk profile of agentic AI and MCP in the enterprise appeared first on SD Times.

    Source: Read More 

    news
    Facebook Twitter Reddit Email Copy Link
    Previous ArticleAnubis Ransomware Encrypts and Wipes Files, Making Recovery Impossible Even After Payment
    Next Article SD Times 100

    Related Posts

    Tech & Work

    Upwork Freelancers vs Dedicated React.js Teams: What’s Better for Your Project in 2025?

    August 1, 2025
    Tech & Work

    Is Agile dead in the age of AI?

    August 1, 2025
    Leave A Reply Cancel Reply

    For security, use of Google's reCAPTCHA service is required which is subject to the Google Privacy Policy and Terms of Use.

    Continue Reading

    Improve Vision Language Model Chain-of-thought Reasoning

    Machine Learning

    A Coding Guide to Different Function Calling Methods to Create Real-Time, Tool-Enabled Conversational AI Agents

    Machine Learning

    Need a new laptop for the office? Save $400 on the Dell 16 Plus and improve your workflow

    News & Updates

    CVE-2025-45986 – Blink Command Injection Vulnerability in Router SetMacBlack Function

    Common Vulnerabilities and Exposures (CVEs)

    Highlights

    Development

    How to Build Secure SSR Authentication with Supabase, Astro, and Cloudflare Turnstile

    June 20, 2025

    In this guide, you’ll build a full server-side rendered (SSR) authentication system using Astro, Supabase,…

    CVE-2025-39398 – Bellevue Hotel + Bed and Breakfast Booking Calendar Theme Missing Authorization Vulnerability

    May 19, 2025

    Triada Malware Preloaded on Counterfeit Android Phones Infects 2,600+ Devices

    April 3, 2025

    Oblivion Remastered and Metal Gear Solid Delta co-developer Virtuos faces layoffs — with 270 workers cut

    July 17, 2025
    © DevStackTips 2025. All rights reserved.
    • Contact
    • Privacy Policy

    Type above and press Enter to search. Press Esc to cancel.