
    AI-Powered Productivity or Security Nightmare? The Risks of Enterprise AI

    March 26, 2025

    AI

Generative AI (GenAI) has quickly become a core component of enterprise environments, but its growing adoption brings significant security concerns. A recent report highlights a 30-fold increase in the volume of data, including sensitive corporate information, being fed into GenAI applications over the past year. The findings underscore the urgent need for businesses to reevaluate their security strategies as AI-driven tools become embedded in daily workflows.

    The report reveals that enterprise users are increasingly sharing sensitive data such as source code, regulated information, passwords, and intellectual property with GenAI applications.

    Adding to the challenge, 72% of enterprise users access GenAI apps using personal accounts rather than company-managed platforms. This growing trend of “shadow AI”—akin to the earlier shadow IT phenomenon—poses a major governance issue for security teams. Without proper oversight, businesses lack visibility into what data is being shared and where it is going, creating potential entry points for cyber threats.

    The Scope of AI Integration in Enterprises

    The report provides a comprehensive analysis of AI usage in the workplace, showing that 90% of organizations have adopted dedicated GenAI applications, while an even higher 98% are using software that integrates AI-powered features. Though only 4.9% of employees use standalone AI apps, a staggering 75% interact with AI-powered features in other enterprise tools.

    Security teams now face a new and evolving challenge: the unintentional insider threat. Employees may not realize the risks of sharing proprietary information with AI-driven platforms, making it essential for organizations to enforce strict data security measures.

    Shadow AI and Its Implications

One of the report’s key findings is that shadow AI has become the primary shadow IT concern for organizations. When employees use personal accounts to interact with AI models, businesses have little to no control over how their data is processed, stored, or leveraged by third-party providers. The unregulated use of AI tools leaves companies vulnerable to data exfiltration and regulatory non-compliance.

    Organizations are increasingly adopting strict policies to mitigate these risks, with many choosing to block unapproved AI applications altogether. Security teams are also implementing Data Loss Prevention (DLP) solutions, real-time user coaching, and access controls to limit the risk of exposure.
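
To make the DLP idea concrete, the sketch below shows the shape such a control can take: a pre-send check that scans an outbound prompt for common secret formats and decides whether to block it, coach the user, or let it through. This is a minimal illustration in Python; the patterns and rule names are hypothetical, and a production DLP engine would combine far richer detectors (classifiers, document fingerprinting, exact-match dictionaries).

```python
import re

# Illustrative patterns only: a real DLP engine would use far richer detectors.
SENSITIVE_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
    "password_assignment": re.compile(r"(?i)\bpassword\s*[:=]\s*\S+"),
}

HARD_BLOCK = {"aws_access_key", "private_key"}  # credentials must never leave

def check_prompt(prompt: str) -> tuple[str, list[str]]:
    """Return an action ('block', 'coach', or 'allow') plus the matched rules."""
    hits = [name for name, rx in SENSITIVE_PATTERNS.items() if rx.search(prompt)]
    if HARD_BLOCK.intersection(hits):
        return "block", hits   # hard stop and alert the security team
    if hits:
        return "coach", hits   # warn the user in real time, log the event
    return "allow", hits

print(check_prompt("summarize this config: password = hunter2"))
# -> ('coach', ['password_assignment'])
```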

    How Data is Being Exposed to AI

    The report identifies two main ways sensitive enterprise data is making its way into GenAI applications:

    1. Summarization Requests: Employees rely on AI tools to condense large documents, datasets, and source code. This increases the likelihood of exposing proprietary information to external AI systems.
    2. Content Generation: AI-powered applications are commonly used to generate text, images, videos, and code. When users input confidential data into these tools, they risk exposing sensitive details that could be used to train external models, leading to unintended data leaks.
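
A lighter-weight complement to outright blocking, applicable to both paths above, is to redact sensitive spans before the text leaves the enterprise boundary. The Python sketch below is illustrative only; the patterns are simplistic placeholders, and trustworthy redaction needs much broader coverage (names, customer records, proprietary identifiers).

```python
import re

# Placeholder rules: mask a few recognizable secret shapes before a document
# is sent out for summarization or content generation.
REDACTIONS = [
    (re.compile(r"\bAKIA[0-9A-Z]{16}\b"), "[REDACTED_AWS_KEY]"),
    (re.compile(r"(?i)(password\s*[:=]\s*)\S+"), r"\1[REDACTED]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"), "[REDACTED_EMAIL]"),
]

def redact(text: str) -> str:
    """Mask known sensitive spans so the raw values never reach the model."""
    for pattern, replacement in REDACTIONS:
        text = pattern.sub(replacement, text)
    return text

print(redact("Contact ops@example.com, password: hunter2"))
# -> Contact [REDACTED_EMAIL], password: [REDACTED]
```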

    The Challenge of Early AI Adoption

The rapid proliferation of AI apps has created an unpredictable security landscape. The report finds that early adopters of new AI tools are present in nearly every enterprise: 91% of organizations have users who experiment with newly released GenAI applications. This poses a security risk, as employees may unknowingly share proprietary data with unvetted platforms.

To contend with this issue, many businesses are taking a “block first, ask questions later” approach. Instead of trying to keep pace with the constant influx of new AI tools, they preemptively block all unapproved applications while allowing only a vetted selection of AI services. This proactive approach minimizes the risk of sensitive data exposure and gives security teams time to conduct proper evaluations before approving new tools.
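
In practice, “block first, ask questions later” amounts to a default-deny egress rule. The sketch below shows the core check as it might run inside a forward proxy or browser extension; the domain names are hypothetical placeholders for whatever services a security team has actually vetted.

```python
from urllib.parse import urlparse

# Hypothetical allowlist: only AI services that have passed security review.
APPROVED_AI_HOSTS = {
    "chat.internal.example.com",    # company-managed assistant
    "api.approved-vendor.example",  # vetted third-party API
}

def is_request_allowed(url: str) -> bool:
    """Default-deny: permit a request only if its host (or a parent domain
    on the list) has been explicitly approved."""
    host = urlparse(url).hostname or ""
    return host in APPROVED_AI_HOSTS or any(
        host.endswith("." + approved) for approved in APPROVED_AI_HOSTS
    )

print(is_request_allowed("https://chat.internal.example.com/v1/chat"))  # True
print(is_request_allowed("https://brand-new-ai-tool.example/api"))      # False
```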

    The Shift to Local AI Infrastructure

    A notable trend highlighted in the report is the increasing deployment of GenAI infrastructure within enterprises. Over the past year, the number of organizations running AI models locally has jumped from less than 1% to 54%. While this shift helps reduce reliance on third-party cloud providers and mitigates some external data leakage risks, it introduces new challenges.
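
One reason local deployment mitigates external leakage is that clients can be pointed at an inference endpoint inside the network, so prompts never cross the enterprise boundary. The sketch below assumes a locally hosted, OpenAI-compatible chat server (a format many local runtimes expose); the URL and model name are placeholders, not a specific product's API.

```python
import json
import urllib.request

# Placeholder endpoint: an OpenAI-compatible chat API served inside the
# corporate network, so prompt data never leaves the boundary.
LOCAL_ENDPOINT = "http://localhost:8000/v1/chat/completions"

def local_chat(prompt: str) -> str:
    payload = {
        "model": "local-model",  # whatever model the internal server hosts
        "messages": [{"role": "user", "content": prompt}],
    }
    req = urllib.request.Request(
        LOCAL_ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```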

    Local AI deployments come with their own security concerns, including supply chain vulnerabilities, data leakage, improper data output handling, and risks related to prompt injection attacks. To address these issues, organizations must strengthen their security posture by implementing best practices outlined in frameworks such as:

    • The OWASP Top 10 for Large Language Model Applications
    • The National Institute of Standards and Technology (NIST) AI Risk Management Framework
    • The MITRE ATLAS framework for AI threat assessment
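
Of the risks above, prompt injection deserves a concrete illustration: untrusted content (a document, a web page) smuggles instructions into the model's context. A common partial mitigation is to keep untrusted text delimited, never merged into the system role, and screened before use. The heuristics below are illustrative only and catch only crude attacks; the frameworks listed above describe the layered defenses a real deployment needs.

```python
import re

# Crude screening heuristics: real defenses layer many controls
# (privilege separation, output filtering, human review of sensitive actions).
INJECTION_HINTS = [
    re.compile(r"(?i)ignore (all|any|previous|prior) instructions"),
    re.compile(r"(?i)disregard (the )?(above|system prompt)"),
    re.compile(r"(?i)reveal (the )?(system prompt|your instructions)"),
]

def build_messages(system_prompt: str, untrusted_doc: str) -> list[dict]:
    """Screen untrusted content and keep it delimited in the user role,
    never concatenated into the system instructions."""
    for hint in INJECTION_HINTS:
        if hint.search(untrusted_doc):
            raise ValueError("possible prompt injection; route to human review")
    return [
        {"role": "system", "content": system_prompt},
        {
            "role": "user",
            "content": "Summarize only the document between the markers.\n"
                       f"<document>\n{untrusted_doc}\n</document>",
        },
    ]
```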

    A CISO’s Perspective on AI Security

    As AI-driven cyber threats evolve, Chief Information Security Officers (CISOs) are increasingly looking to existing security tools to help mitigate risks. Nearly all enterprises are now implementing policies to control AI tool access, limiting what data can be shared and which users can interact with specific AI applications.

    The report suggests that organizations should take the following tactical steps to strengthen their AI security strategies:

    • Assess AI Usage: Identify which GenAI apps and infrastructure are in use, who is using them, and how they are being utilized.
    • Implement Strong AI Controls: Regularly review security policies, block unauthorized apps, enforce DLP measures, and provide real-time user guidance to minimize risk.
    • Strengthen Local AI Security: Ensure that any on-premise AI deployments align with industry security frameworks to prevent data leaks and cyber threats.
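
As a starting point for the first step, assessing usage, even a simple pass over web-proxy logs can surface who is talking to which GenAI services. The sketch below assumes a CSV log with user and host columns and a team-maintained list of known GenAI hostnames; both the log format and the domains are hypothetical.

```python
import csv
from collections import Counter

# Hypothetical list of known GenAI hostnames, maintained by the security team.
KNOWN_GENAI_HOSTS = {"chat.example-ai.com", "api.example-llm.dev"}

def inventory_genai_usage(log_path: str) -> Counter:
    """Count requests to known GenAI hosts per (user, host) pair."""
    usage = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):  # expects 'user' and 'host' columns
            if row["host"] in KNOWN_GENAI_HOSTS:
                usage[(row["user"], row["host"])] += 1
    return usage

# Top talkers become the first candidates for coaching or policy review.
for (user, host), count in inventory_genai_usage("proxy.csv").most_common(10):
    print(f"{user} -> {host}: {count} requests")
```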

    While AI offers immense benefits in productivity and efficiency, it also presents new challenges that organizations must address. The findings of this report reinforce the importance of security policies, continuous monitoring, and proactive risk mitigation strategies to safeguard sensitive enterprise data in an AI-powered world.
