Close Menu
    DevStackTipsDevStackTips
    • Home
    • News & Updates
      1. Tech & Work
      2. View All

      Top 15 Enterprise Use Cases That Justify Hiring Node.js Developers in 2025

      July 31, 2025

      The Core Model: Start FROM The Answer, Not WITH The Solution

      July 31, 2025

      AI-Generated Code Poses Major Security Risks in Nearly Half of All Development Tasks, Veracode Research Reveals   

      July 31, 2025

      Understanding the code modernization conundrum

      July 31, 2025

      Not just YouTube: Google is using AI to guess your age based on your activity – everywhere

      July 31, 2025

      Malicious extensions can use ChatGPT to steal your personal data – here’s how

      July 31, 2025

      What Zuckerberg’s ‘personal superintelligence’ sales pitch leaves out

      July 31, 2025

      This handy NordVPN tool flags scam calls on Android – even before you answer

      July 31, 2025
    • Development
      1. Algorithms & Data Structures
      2. Artificial Intelligence
      3. Back-End Development
      4. Databases
      5. Front-End Development
      6. Libraries & Frameworks
      7. Machine Learning
      8. Security
      9. Software Engineering
      10. Tools & IDEs
      11. Web Design
      12. Web Development
      13. Web Security
      14. Programming Languages
        • PHP
        • JavaScript
      Featured

      Route Optimization through Laravel’s Shallow Resource Architecture

      July 31, 2025
      Recent

      Route Optimization through Laravel’s Shallow Resource Architecture

      July 31, 2025

      This Week in Laravel: Laracon News, Free Laravel Idea, and Claude Code Course

      July 31, 2025

      Everything We Know About Pest 4

      July 31, 2025
    • Operating Systems
      1. Windows
      2. Linux
      3. macOS
      Featured

      FOSS Weekly #25.31: Kernel 6.16, OpenMandriva Review, Conky Customization, System Monitoring and More

      July 31, 2025
      Recent

      FOSS Weekly #25.31: Kernel 6.16, OpenMandriva Review, Conky Customization, System Monitoring and More

      July 31, 2025

      Windows 11’s MSN Widgets board now opens in default browser, such as Chrome (EU only)

      July 31, 2025

      Microsoft’s new “move to Windows 11” campaign implies buying OneDrive paid plan

      July 31, 2025
    • Learning Resources
      • Books
      • Cheatsheets
      • Tutorials & Guides
    Home»Tech & Work»Managing the growing risk profile of agentic AI and MCP in the enterprise

    Managing the growing risk profile of agentic AI and MCP in the enterprise

    June 17, 2025

    Advancements in artificial intelligence continue to give developers an edge in efficiently producing code, but developers and companies can’t forget that it’s an edge that can always cut both ways.

    The latest innovation is the advent of agentic AI, which brings automation and decision-making to complex development tasks. Agentic AI can be coupled with the recently open-sourced Model Context Protocol (MCP), a protocol released by Anthropic, providing an open standard for orchestrating connections between AI assistants and data sources, streamlining the work of development and security teams, which can turbocharge productivity that AI has already accelerated. 

    Anthropic’s competitors have different “MCP-like” protocols making their way into the space, and as it stands, the internet at large has yet to determine a “winner” of this software race. MCP is Anthropic for AI-to-tool connections. A2A is Google, and also facilitates AI-to-AI comms. Cisco and Microsoft will both come out with their own protocol, as well. 

    But, as we’ve seen with generative AI, this new approach to speeding up software production comes with caveats. If not carefully controlled, it can introduce new vulnerabilities and amplify existing ones, such as vulnerability to prompt injection attacks, the generation of insecure code, exposure to unauthorized access and data leakage. The interconnected nature of these tools inevitably expands the attack surface.

    Security leaders need to take a hard look at how these risks affect their business, being sure they understand the potential vulnerabilities that result from using agentic AI and MCP, and take the necessary steps to minimize those risks.

    How Agentic AI Works With MCP

    After generative AI took the world by storm starting in November 2022 with the release of ChatGPT, agentic AI can seem like the next step in AI’s evolution, but they are two different forms of AI.

    GenAI creates content, using advanced machine learning to draw on existing data to create text, images, videos, music and code. 

    Agentic AI is about solving problems and getting things done, using tools such as machine learning, natural language processing and automation technologies to make decisions and take action. Agentic AI can be used, for example, in self-driving cars (responding to circumstances on the road), cybersecurity (initiating a response to a cyberattack) or customer service (proactively offering help to customers). In software development, agentic AI can be used to write large sections of code, optimize code and troubleshoot problems.

    Meanwhile, MCP, developed by Anthropic and introduced in November 2024, accelerates the work of agentic AI and other coding assistants by providing an open, universal standard for connecting large language models (LLMs) with data sources and tools, enabling teams to apply AI capabilities throughout their environment without having to write separate code for each tool. By essentially providing a common language for LLMs such as ChatGPT, Gemini, DALL•E, DeepSeek and many others to communicate, it greatly increases interoperability among LLMs.

    MCP is even touted as a way to improve security, by providing a standard way to integrate AI capabilities and automate security operations across an organization’s toolchain. Although it was treated as a general-purpose tool, MCP can be used by security teams to increase efficiency by centralizing access, adding interoperability with security tools and applications, and giving teams flexible control over which LLMs are used for specific tasks.

    But as with any powerful new tool, organizations should not just blindly jump into this new model of development without taking a careful look at what could go wrong. There is a significant profile of increased security risks associated with agentic AI coding tools within enterprise environments, specifically focusing on MCP. 

    Productivity Is Great, but MCP Also Creates Risks

    Invariant Labs recently discovered a critical vulnerability in MCP that could allow for data exfiltration via indirect prompt injections, a high-risk issue that Invariant has dubbed “tool poisoning” attacks. Such an attack embeds malicious code instructing an AI model to perform unauthorized actions, such as accessing sensitive files and transmitting data without the user being aware. Invariant said many providers and systems like OpenAI, Anthropic, Cursor and Zapier are vulnerable to this type of attack. 

    In addition to tool poisoning, such as indirect prompt injection, MCP can introduce other potential vulnerabilities related to authentication and authorization, including excessive permissions. MCP can also lack robust logging and monitoring, which are essential to maintaining the security and performance of systems and applications. 

    The vulnerability concerns are valid, though they are unlikely to stem the tide moving toward the use of agentic AI and MCP. The benefits in productivity are too great to ignore. After all, concerns about secure code have always revolved around GenAI coding tools, which can introduce flaws into the software ecosystem if the GenAI models were initially trained on buggy software. However, developers have been happy to make use of GenAI assistants anyway. In a recent survey by Stack Overflow, 76% of developers said they were using or planned to use AI tools. That’s an increase from 70% in 2023, despite the fact that during the same time period, those developers’ view of AI tools as favorable or very favorable dropped from 77% to 72%.

    The good news for organizations is that, as with GenAI coding assistants, agentic AI tools and MCP functions can be safely leveraged, as long as security-skilled developers handle them. The key emergent risk factor here is that skilled human oversight is not scaling at anywhere near the rate of agentic AI tool adoption, and this trend must course-correct, pronto.

    Developer Education and Risk Management Is the Key

    Regardless of the technologies and tools in play, the key to security in a highly connected digital environment (which is pretty much every environment these days) is the Software Development Lifecycle (SDLC). Flaws at the code level are a top target of cyberattackers, and eliminating those flaws depends on ensuring that secure coding practices are de rigueur in the SDLC, which are applied from the beginning of the development cycle. 

    With AI assistance, it’s a real possibility that we will finally see the eradication of long-standing vulnerabilities like SQL injection and cross-site scripting (XSS) after decades of them haunting every pentest report. However, most other categories of vulnerabilities will remain, especially those relating to design flaws, and we will inevitably see new groups of AI-borne vulnerabilities as the technology progresses. Navigating these issues depends on developers being security-aware with the skills to ensure, as much as possible, that both the code they create and code generated by AI is secure from the get-go. 

    Organizations need to implement ongoing education and upskilling programs that give developers the skills and tools they need to work with security teams to mitigate flaws in software before they can be released into the ecosystem. A program should make use of benchmarks to establish the baseline skills developers need and measure their progress. It should be framework and language-specific, allowing developers to work in real-world scenarios with the programming language they use on the job. Interactive sessions work best, within a curriculum that is flexible enough to adjust to changes in circumstances.

    And organizations need to confirm that the lessons from upskilling programs have hit home, with developers putting secure best practices to use on a routine basis. A tool that makes use of benchmarking metrics to track the progress of individuals, teams and the organization overall, assessing the effectiveness of a learning program against both internal and industry standards, would provide the granular insights needed to truly move the needle is the most beneficial. Enterprise security leaders ultimately need a fine-grained view of developers’ specific skills for every code commit while showing how well developers apply their new skills to the job.

    Developer upskilling has proved to be effective in improving software security, with our research showing that companies that implemented developer education saw 22% to 84% fewer software vulnerabilities, depending on factors such as the size of the companies and whether the training focused on specific problems. Security-skilled developers are in the best position to ensure that AI-generated code is secure, whether it comes from GenAI coding assistants or the more proactive agentic AI tools.

    The drawcard of agentic models is their ability to work autonomously and make decisions independently, and these being embedded into enterprise environments at scale without appropriate human governance will inevitably introduce security issues that are not particularly visible or easy to stop. Skilled developers using AI securely will see immense productivity gains, whereas unskilled developers will simply generate security chaos at breakneck speed.

    CISOs must reduce developer risk, and provide continuous learning and skills verification within their security programs to safely implement the help of agentic AI agents.

    The post Managing the growing risk profile of agentic AI and MCP in the enterprise appeared first on SD Times.

    Source: Read More 

    news
    Facebook Twitter Reddit Email Copy Link
    Previous ArticleAnubis Ransomware Encrypts and Wipes Files, Making Recovery Impossible Even After Payment
    Next Article SD Times 100

    Related Posts

    Tech & Work

    Top 15 Enterprise Use Cases That Justify Hiring Node.js Developers in 2025

    July 31, 2025
    Tech & Work

    The Core Model: Start FROM The Answer, Not WITH The Solution

    July 31, 2025
    Leave A Reply Cancel Reply

    For security, use of Google's reCAPTCHA service is required which is subject to the Google Privacy Policy and Terms of Use.

    Continue Reading

    RondoDox: Sophisticated Botnet Exploits TBK DVRs & Four-Faith Routers for DDoS Attacks

    Security

    Here’s how to get a free iPhone 16 Pro from T-Mobile with no trade-in required

    News & Updates

    CVE-2025-53165 – Apache HTTP Server Cross-Site Request Forgery

    Common Vulnerabilities and Exposures (CVEs)

    Manage multi-tenant Amazon Bedrock costs using application inference profiles

    Machine Learning

    Highlights

    Salesforce OmniStudio Flaws Expose Encrypted Data

    June 10, 2025

    Salesforce OmniStudio Flaws Expose Encrypted Data

    Security research partner AppOmni disclosed five critical vulnerabilities in Salesforce’s powerful OmniStudio platform that could expose encrypted and sensitive customer data. These flaws, discovered …
    Read more

    Published Date:
    Jun 11, 2025 (28 minutes ago)

    Vulnerabilities has been mentioned in this article.

    Star Citizen Alpha 4.2 ‘Storm Breaker’ brings radiation, weather, and a deadly new event

    June 20, 2025

    “Product Kondo”: A Guide To Evaluating Your Organizational Product Portfolio

    April 25, 2025

    Rilasciata SteamOS 3.7.8: Miglioramenti e Nuove Funzionalità

    May 26, 2025
    © DevStackTips 2025. All rights reserved.
    • Contact
    • Privacy Policy

    Type above and press Enter to search. Press Esc to cancel.