
    Study challenges the narrative of AI posing an ‘existential threat’

    August 15, 2024

    Is AI risky or not? It’s the debate that just keeps raging on.

    Researchers from the University of Bath and the Technical University of Darmstadt launched a study to evaluate AI risks in the context of current language models. 

    The findings, published as part of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024), challenge views that AI, particularly large language models (LLMs) like ChatGPT, could evolve beyond human control and pose an existential threat to humanity.

    This runs counter to fears expressed by some of the world’s leading AI researchers, including Geoffrey Hinton and Yoshua Bengio, two of the “godfathers of AI,” both of whom have voiced concerns about the potential dangers of advanced AI. 

    Yann LeCun, the third “godfather of AI” and Meta’s chief AI scientist, alongside Dr. Gary Marcus and others, argues the opposite: that AI risks are simply overblown. 

    This divergence in opinion among the field’s most influential figures has fueled a fierce debate about the nature and severity of the risks posed by advanced AI systems.

    This new study probes LLMs’ “emergent abilities,” which refer to a model’s ability to perform tasks for which it was not explicitly trained. 

    AI risks are multifaceted, but at least some relate to models developing their own goals that could harm humans, like shutting down computer systems or leaking data.

    The worry under inspection is whether an LLM might spontaneously develop these skills without instruction or control. 

    To investigate this, the research team conducted a series of experiments:

    • They examined the underlying mechanisms of “in-context learning” (ICL) in LLMs, which allows models to generate responses based on examples provided during interactions (a minimal sketch follows this list). As the study states, “The ability to follow instructions does not imply having reasoning abilities, and more importantly, it does not imply the possibility of latent, potentially-dangerous abilities.”
    • They assessed LLMs’ true capabilities and limitations by evaluating their performance on a range of tasks, including those that require complex reasoning and problem-solving skills. The researchers argue that LLMs cannot independently develop new skills.
    • They analyzed the relationship between model size, training data, and emergent abilities to determine whether increasing model complexity leads to AI developing hazardous skills. The study said, “These observations imply that our findings hold true for any model which exhibits a propensity for hallucination or requires prompt engineering, including those with greater complexity, regardless of scale or number of modalities, such as GPT-4.”
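
    To make in-context learning concrete, here is a minimal sketch of a few-shot prompt in Python. It uses the openai client purely as an illustration; the toy sentiment task and the model name are assumptions for the example, not details from the study:

        # In-context learning (ICL) sketch: the model is shown a few input -> output
        # examples inside the prompt itself, then completes a new input by imitating
        # the pattern. No weights are updated; the "learning" lives in the prompt.
        from openai import OpenAI

        client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

        # Few-shot examples supplied "in context" -- the model was never
        # fine-tuned on this toy sentiment task.
        examples = [
            ("The battery lasts all day and the screen is gorgeous.", "positive"),
            ("It crashed twice in the first hour and support never replied.", "negative"),
        ]
        query = "Setup took five minutes and everything just worked."

        # Build the prompt: instruction, worked examples, then the unanswered query.
        prompt = "Classify the sentiment of each review as positive or negative.\n\n"
        for review, label in examples:
            prompt += f"Review: {review}\nSentiment: {label}\n\n"
        prompt += f"Review: {query}\nSentiment:"

        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model name; any chat-completion model works
            messages=[{"role": "user", "content": prompt}],
            max_tokens=2,
        )
        print(response.choices[0].message.content.strip())  # expected: "positive"

    The “skill” here is carried entirely by the prompt: remove the two worked examples and the pattern-completion behavior falls away, which is why the researchers treat ICL as sophisticated instruction-following rather than evidence of latent, self-developed abilities.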

    The researchers conclude from their investigation that “the prevailing narrative that this type of AI is a threat to humanity prevents the widespread adoption and development of these technologies and also diverts attention from the genuine issues that require our focus.”

    This strongly aligns with LeCun and others who believe AI risks are over-publicized.

    However, while evaluating the risks posed by current AI models is obviously essential, accounting for the future is a tougher task. 

    Each generation of models comes with new abilities and, thus, new risks, as shown by some strange behaviors documented in GPT-4o’s system card. 

    One red-teaming exercise (designed to surface unpredictable AI behaviors) saw GPT-4o’s voice feature unexpectedly clone a user’s voice and start talking back to them in it. 

    Tracking AI risks as they emerge is critical, because the goalposts are constantly shifting. 

    The study makes the salient point that some non-existential AI risks are already knocking at the door: “Future research should therefore focus on other risks posed by the models, such as their potential to be used to generate fake news.”

    As the authors admit, just because AI doesn’t pose large-scale threats right now doesn’t mean safety is a non-issue. 

    The post Study challenges the narrative of AI posing an ‘existential threat’ appeared first on DailyAI.
