
    Japanese Man Arrested for GenAI Ransomware as AI Jailbreak Concerns Grow

    May 30, 2024

A 25-year-old man from Kawasaki, Japan, was arrested this week for allegedly using generative AI tools to create ransomware, in what may be the first AI jailbreaking case of its kind in Japan.

The arrest of Ryuki Hayashi, widely reported in Japan, is the latest example of an attacker defeating AI guardrails, a pursuit that has become something of an obsession for hackers and cybersecurity researchers alike.

    Just this week, researchers from Germany’s CISPA Helmholtz Center for Information Security reported on their efforts to jailbreak GPT-4o, the latest multimodal large language model (MLLM) released by OpenAI a little more than two weeks ago. Concerns raised by those researchers and others led OpenAI to establish a safety and security committee this week to try to address AI risks.

    AI Jailbreak Tools and Methods Unclear

News reports on Hayashi’s arrest offer few details about the tools and methods he used to create the ransomware.

    The Japan Times reported that Hayashi, a former factory worker, “is not an expert on malware. He allegedly learned online how to ask AI tools questions that would elicit information on how to create malware.”

    Hayashi came under suspicion after police arrested him in March “for allegedly using fake identification to obtain a SIM card registered under someone else’s name,” the paper reported.

    The Japan News, which reported that Hayashi is unemployed, said police found “a homemade virus on a computer” following the March arrest.

    The News said police suspect he “used his home computer and smartphone to combine information about creating malware programs obtained after giving instructions to several generative AI systems in March last year.”

    Hayashi “allegedly gave instructions to the AI systems while concealing his purpose of creating the virus to obtain design information necessary for encrypting files and demanding ransom,” the News reported. “He is said to have searched online for ways to illegally obtain information.”

Hayashi reportedly admitted to the charges during questioning and told police, “I wanted to make money through ransomware. I thought I could do anything if I asked AI.”

    There have been no reports of damage from the ransomware he created, the News said.

    LLM Jailbreak Research Heats Up

    The news comes as research on AI jailbreaking and attack techniques has grown, with a number of recent reports on risks and possible solutions.

    In a paper posted to arXiv this week, the CISPA researchers said they were able to more than double their attack success rate (ASR) on GPT-4o’s voice mode with an attack they dubbed VOICEJAILBREAK, “a novel voice jailbreak attack that humanizes GPT-4o and attempts to persuade it through fictional storytelling (setting, character, and plot).”

Another arXiv paper, posted in February by researchers at the University of California at Berkeley, examined a range of risks associated with GenAI tools such as Microsoft Copilot and ChatGPT, along with possible mitigations, including an “AI firewall” that would monitor and, if necessary, modify LLM inputs and outputs.

And earlier this month, OT and IoT security company SCADAfence outlined a wide range of AI tools, threat actors and attack techniques. In addition to general-purpose chatbots such as ChatGPT and Google Gemini, the report looked at “dark LLMs” created for malicious purposes, such as WormGPT, FraudGPT, DarkBERT and DarkBART.

SCADAfence recommended that OT and SCADA organizations follow best practices such as limiting network exposure for control systems, patching, access control and up-to-date offline backups.

GenAI use and misuse is also expected to be the topic of a number of presentations at Gartner’s Security and Risk Management Summit next week in National Harbor, Maryland, just outside the U.S. capital.
