Close Menu
    DevStackTipsDevStackTips
    • Home
    • News & Updates
      1. Tech & Work
      2. View All

      8 Top AI Agent Development Companies Transforming Node.js Automation (2025–2026 Edition)

      September 17, 2025

      Representative Line: Reduced to a Union

      September 17, 2025

      Functional Personas With AI: A Lean, Practical Workflow

      September 17, 2025

      Vibe Coding vs React.js AI-Assisted Coding: A C-Suite Comparison (2025)

      September 17, 2025

      DistroWatch Weekly, Issue 1139

      September 14, 2025

      Building personal apps with open source and AI

      September 12, 2025

      What Can We Actually Do With corner-shape?

      September 12, 2025

      Craft, Clarity, and Care: The Story and Work of Mengchu Yao

      September 12, 2025
    • Development
      1. Algorithms & Data Structures
      2. Artificial Intelligence
      3. Back-End Development
      4. Databases
      5. Front-End Development
      6. Libraries & Frameworks
      7. Machine Learning
      8. Security
      9. Software Engineering
      10. Tools & IDEs
      11. Web Design
      12. Web Development
      13. Web Security
      14. Programming Languages
        • PHP
        • JavaScript
      Featured

      Shopping Portal using Python Django & MySQL

      September 17, 2025
      Recent

      Shopping Portal using Python Django & MySQL

      September 17, 2025

      Perficient Earns Adobe’s Real-time CDP Specialization

      September 17, 2025

      What is Microsoft Copilot?

      September 17, 2025
    • Operating Systems
      1. Windows
      2. Linux
      3. macOS
      Featured

      I Ran Local LLMs on My Android Phone

      September 16, 2025
      Recent

      I Ran Local LLMs on My Android Phone

      September 16, 2025

      DistroWatch Weekly, Issue 1139

      September 14, 2025

      sudo vs sudo-rs: What You Need to Know About the Rust Takeover of Classic Sudo Command

      September 14, 2025
    • Learning Resources
      • Books
      • Cheatsheets
      • Tutorials & Guides
    Home»Tech & Work»Managing the growing risk profile of agentic AI and MCP in the enterprise

    Managing the growing risk profile of agentic AI and MCP in the enterprise

    June 17, 2025

    Advancements in artificial intelligence continue to give developers an edge in efficiently producing code, but developers and companies can’t forget that it’s an edge that can always cut both ways.

    The latest innovation is the advent of agentic AI, which brings automation and decision-making to complex development tasks. Agentic AI can be coupled with the recently open-sourced Model Context Protocol (MCP), a protocol released by Anthropic, providing an open standard for orchestrating connections between AI assistants and data sources, streamlining the work of development and security teams, which can turbocharge productivity that AI has already accelerated. 

    Anthropic’s competitors have different “MCP-like” protocols making their way into the space, and as it stands, the internet at large has yet to determine a “winner” of this software race. MCP is Anthropic for AI-to-tool connections. A2A is Google, and also facilitates AI-to-AI comms. Cisco and Microsoft will both come out with their own protocol, as well. 

    But, as we’ve seen with generative AI, this new approach to speeding up software production comes with caveats. If not carefully controlled, it can introduce new vulnerabilities and amplify existing ones, such as vulnerability to prompt injection attacks, the generation of insecure code, exposure to unauthorized access and data leakage. The interconnected nature of these tools inevitably expands the attack surface.

    Security leaders need to take a hard look at how these risks affect their business, being sure they understand the potential vulnerabilities that result from using agentic AI and MCP, and take the necessary steps to minimize those risks.

    How Agentic AI Works With MCP

    After generative AI took the world by storm starting in November 2022 with the release of ChatGPT, agentic AI can seem like the next step in AI’s evolution, but they are two different forms of AI.

    GenAI creates content, using advanced machine learning to draw on existing data to create text, images, videos, music and code. 

    Agentic AI is about solving problems and getting things done, using tools such as machine learning, natural language processing and automation technologies to make decisions and take action. Agentic AI can be used, for example, in self-driving cars (responding to circumstances on the road), cybersecurity (initiating a response to a cyberattack) or customer service (proactively offering help to customers). In software development, agentic AI can be used to write large sections of code, optimize code and troubleshoot problems.

    Meanwhile, MCP, developed by Anthropic and introduced in November 2024, accelerates the work of agentic AI and other coding assistants by providing an open, universal standard for connecting large language models (LLMs) with data sources and tools, enabling teams to apply AI capabilities throughout their environment without having to write separate code for each tool. By essentially providing a common language for LLMs such as ChatGPT, Gemini, DALL•E, DeepSeek and many others to communicate, it greatly increases interoperability among LLMs.

    MCP is even touted as a way to improve security, by providing a standard way to integrate AI capabilities and automate security operations across an organization’s toolchain. Although it was treated as a general-purpose tool, MCP can be used by security teams to increase efficiency by centralizing access, adding interoperability with security tools and applications, and giving teams flexible control over which LLMs are used for specific tasks.

    But as with any powerful new tool, organizations should not just blindly jump into this new model of development without taking a careful look at what could go wrong. There is a significant profile of increased security risks associated with agentic AI coding tools within enterprise environments, specifically focusing on MCP. 

    Productivity Is Great, but MCP Also Creates Risks

    Invariant Labs recently discovered a critical vulnerability in MCP that could allow for data exfiltration via indirect prompt injections, a high-risk issue that Invariant has dubbed “tool poisoning” attacks. Such an attack embeds malicious code instructing an AI model to perform unauthorized actions, such as accessing sensitive files and transmitting data without the user being aware. Invariant said many providers and systems like OpenAI, Anthropic, Cursor and Zapier are vulnerable to this type of attack. 

    In addition to tool poisoning, such as indirect prompt injection, MCP can introduce other potential vulnerabilities related to authentication and authorization, including excessive permissions. MCP can also lack robust logging and monitoring, which are essential to maintaining the security and performance of systems and applications. 

    The vulnerability concerns are valid, though they are unlikely to stem the tide moving toward the use of agentic AI and MCP. The benefits in productivity are too great to ignore. After all, concerns about secure code have always revolved around GenAI coding tools, which can introduce flaws into the software ecosystem if the GenAI models were initially trained on buggy software. However, developers have been happy to make use of GenAI assistants anyway. In a recent survey by Stack Overflow, 76% of developers said they were using or planned to use AI tools. That’s an increase from 70% in 2023, despite the fact that during the same time period, those developers’ view of AI tools as favorable or very favorable dropped from 77% to 72%.

    The good news for organizations is that, as with GenAI coding assistants, agentic AI tools and MCP functions can be safely leveraged, as long as security-skilled developers handle them. The key emergent risk factor here is that skilled human oversight is not scaling at anywhere near the rate of agentic AI tool adoption, and this trend must course-correct, pronto.

    Developer Education and Risk Management Is the Key

    Regardless of the technologies and tools in play, the key to security in a highly connected digital environment (which is pretty much every environment these days) is the Software Development Lifecycle (SDLC). Flaws at the code level are a top target of cyberattackers, and eliminating those flaws depends on ensuring that secure coding practices are de rigueur in the SDLC, which are applied from the beginning of the development cycle. 

    With AI assistance, it’s a real possibility that we will finally see the eradication of long-standing vulnerabilities like SQL injection and cross-site scripting (XSS) after decades of them haunting every pentest report. However, most other categories of vulnerabilities will remain, especially those relating to design flaws, and we will inevitably see new groups of AI-borne vulnerabilities as the technology progresses. Navigating these issues depends on developers being security-aware with the skills to ensure, as much as possible, that both the code they create and code generated by AI is secure from the get-go. 

    Organizations need to implement ongoing education and upskilling programs that give developers the skills and tools they need to work with security teams to mitigate flaws in software before they can be released into the ecosystem. A program should make use of benchmarks to establish the baseline skills developers need and measure their progress. It should be framework and language-specific, allowing developers to work in real-world scenarios with the programming language they use on the job. Interactive sessions work best, within a curriculum that is flexible enough to adjust to changes in circumstances.

    And organizations need to confirm that the lessons from upskilling programs have hit home, with developers putting secure best practices to use on a routine basis. A tool that makes use of benchmarking metrics to track the progress of individuals, teams and the organization overall, assessing the effectiveness of a learning program against both internal and industry standards, would provide the granular insights needed to truly move the needle is the most beneficial. Enterprise security leaders ultimately need a fine-grained view of developers’ specific skills for every code commit while showing how well developers apply their new skills to the job.

    Developer upskilling has proved to be effective in improving software security, with our research showing that companies that implemented developer education saw 22% to 84% fewer software vulnerabilities, depending on factors such as the size of the companies and whether the training focused on specific problems. Security-skilled developers are in the best position to ensure that AI-generated code is secure, whether it comes from GenAI coding assistants or the more proactive agentic AI tools.

    The drawcard of agentic models is their ability to work autonomously and make decisions independently, and these being embedded into enterprise environments at scale without appropriate human governance will inevitably introduce security issues that are not particularly visible or easy to stop. Skilled developers using AI securely will see immense productivity gains, whereas unskilled developers will simply generate security chaos at breakneck speed.

    CISOs must reduce developer risk, and provide continuous learning and skills verification within their security programs to safely implement the help of agentic AI agents.

    The post Managing the growing risk profile of agentic AI and MCP in the enterprise appeared first on SD Times.

    Source: Read More 

    news
    Facebook Twitter Reddit Email Copy Link
    Previous ArticleAnubis Ransomware Encrypts and Wipes Files, Making Recovery Impossible Even After Payment
    Next Article SD Times 100

    Related Posts

    Tech & Work

    8 Top AI Agent Development Companies Transforming Node.js Automation (2025–2026 Edition)

    September 17, 2025
    Tech & Work

    Representative Line: Reduced to a Union

    September 17, 2025
    Leave A Reply Cancel Reply

    For security, use of Google's reCAPTCHA service is required which is subject to the Google Privacy Policy and Terms of Use.

    Continue Reading

    How AI in Staff Augmentation Transforms Hiring and Delivery

    Web Development

    CVE-2025-6252 – “Qi Addons For Elementor Stored Cross-Site Scripting Vulnerability”

    Common Vulnerabilities and Exposures (CVEs)

    Beginner’s Guide to AI Tools: How to Get Started Without A Code

    Web Development

    CVE-2025-5722 – SourceCodester Student Result Management System Cross-Site Scripting Vulnerability

    Common Vulnerabilities and Exposures (CVEs)

    Highlights

    High-Severity Flaw in HashiCorp Nomad (CVE-2025-4922) Allows Privilege Escalation

    June 12, 2025

    High-Severity Flaw in HashiCorp Nomad (CVE-2025-4922) Allows Privilege Escalation

    HashiCorp has disclosed a high-severity vulnerability in its workload orchestration tool, Nomad, which could allow attackers to escalate privileges by exploiting a flaw in the system’s Access Control …
    Read more

    Published Date:
    Jun 13, 2025 (1 hour, 56 minutes ago)

    Vulnerabilities has been mentioned in this article.

    CVE-2025-49146

    CVE-2025-4922

    CVE-2025-53488 – Wikimedia Foundation Mediawiki WikiHiero Extension Stored XSS

    July 7, 2025

    CVE-2025-32819 – SonicWall SMA SSLVPN File Deletion Vulnerability

    May 7, 2025

    Microsoft’s nightmare — Apple’s cheap MacBook rumored to launch as Windows 10 users scramble

    August 18, 2025
    © DevStackTips 2025. All rights reserved.
    • Contact
    • Privacy Policy

    Type above and press Enter to search. Press Esc to cancel.