
    Should Children Use AI Chatbots? Google Thinks So, Critics Strongly Disagree

    May 22, 2025


Google’s quiet rollout of its AI-powered Gemini chatbot to children under the age of 13 has sparked intense debate, or rather outright backlash, from privacy and child advocacy groups. Critics argue that the move not only raises ethical concerns but may also violate U.S. law, particularly the Children’s Online Privacy Protection Act (COPPA).

    At the core of the controversy is Google’s decision to allow children with supervised accounts, managed through its Family Link program, to access Gemini, a generative AI chatbot that can create stories, songs, poetry, and help with homework.

    While Google frames this as an educational and creative tool for kids, a growing alliance of parents’ groups sees it as a potential privacy problem and a threat to children’s mental well-being.

Parents Get Emails, Advocates Raise Concerns

    The issue came into the spotlight after Google sent emails to parents using Family Link, notifying them that their children could now access Gemini. The chatbot is available through web and mobile apps, and while parents have the option to disable access, the default setting allows use. This opt-out model, critics argue, bypasses an essential requirement of COPPA: verifiable parental consent.

    The backlash was instant and loud. A broad coalition led by the Electronic Privacy Information Center (EPIC) and Fairplay fired off letters to both the Federal Trade Commission (FTC) and Google CEO Sundar Pichai, demanding an immediate halt to the rollout. They called on the FTC to investigate whether Google has violated federal privacy law.

    “Shame on Google for attempting to unleash this dangerous and addictive technology on our kids,” said Josh Golin, Executive Director of Fairplay. “Gemini and other AI bots are a serious threat to children’s mental health and social development.”

    Gemini AI for Kids: What’s the Risk?

    Gemini might appear harmless or even beneficial. It talks like a human, answers questions, and entertains kids with stories or songs. But the concerns run deeper.

    The parents’ groups warn that children are particularly vulnerable to manipulation and misinformation from AI systems. Generative AI doesn’t always provide factual answers, and its human-like communication style can mislead young users into forming parasocial relationships, where children treat the chatbot as a friend or confidant. This could foster emotional dependency and blur the line between reality and simulation.

    Additionally, Gemini’s warnings about inaccuracies and sensitive content are deeply troubling. Google itself admits in its documentation that Gemini “can make mistakes” and “may encounter content you don’t want [your child] to see.” Yet instead of fixing those issues or pausing the rollout, the company shifts the responsibility onto parents, suggesting they teach their children to “think critically” about Gemini’s responses.

    This is a tough ask, especially when the users in question are under 13. How realistic is it to expect young children to recognize bias, misinformation, or emotional manipulation from an AI system that mimics human conversation?

    What Does the Law Say?

    Under the Children’s Online Privacy Protection Act (COPPA), any online service that collects personal data from children under 13 must obtain verifiable parental consent before doing so. According to EPIC and Fairplay, Google appears to have sidestepped this requirement by simply notifying parents after enabling access by default.

    In its email, Google tells parents that they will be notified if their child uses Gemini and can disable access if they choose. But the opt-out model isn’t enough under COPPA. The law requires proactive consent, not passive acknowledgment.
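
The gap between the two consent models is easy to see in code. Below is a minimal, purely illustrative sketch in Python; the account fields and function names are hypothetical and do not reflect Google’s actual Family Link implementation. It only shows why notification-plus-opt-out falls short of the verifiable opt-in consent COPPA describes.

```python
from dataclasses import dataclass

# Hypothetical model of a supervised child account. None of these names
# come from Google's real Family Link API; they exist only to contrast
# the two consent flows.
@dataclass
class ChildAccount:
    age: int
    parent_notified: bool = False             # parent received an email
    verified_parental_consent: bool = False   # parent affirmatively opted in

def can_access_opt_out(account: ChildAccount) -> bool:
    """Opt-out model (what critics say Google shipped): access is enabled
    by default, and notifying the parent is treated as sufficient."""
    return account.parent_notified  # parent may disable later, but use starts on

def can_access_coppa_opt_in(account: ChildAccount) -> bool:
    """Opt-in model (what COPPA requires before collecting personal data
    from children under 13): no access until verifiable consent exists."""
    if account.age < 13:
        return account.verified_parental_consent
    return True

child = ChildAccount(age=10, parent_notified=True)
print(can_access_opt_out(child))       # True  -- usable the moment the email goes out
print(can_access_coppa_opt_in(child))  # False -- blocked until a parent opts in
```

In the opt-out sketch, a ten-year-old’s account is usable as soon as the notification email is sent; under the COPPA-style gate, nothing happens until a parent affirmatively consents. That difference is precisely what EPIC and Fairplay are pressing.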

    Newly appointed FTC Chair Andrew Ferguson emphasized this in recent Congressional testimony. “Protecting children and teens online is of paramount importance,” he wrote, adding that COPPA mandates companies obtain clear consent before collecting data from children.

    Ferguson’s comments suggest that the FTC may be more willing to investigate companies like Google moving forward, especially in light of this public pressure.

    Google’s Defense: Not Enough?

    So far, Google has tried to defend its move by stressing that children’s data will not be used to train AI models. The company also points to parental controls and educational resources about AI.

    But critics say these measures fall short. The company hasn’t disclosed what other safeguards are in place to protect kids’ emotional well-being, guard against bias, or ensure compliance with privacy law.

    In a particularly damning part of the letter sent to the FTC, EPIC and Fairplay argue that “Google has not identified additional safeguards to ensure that it would not misuse data collected through these interactions.”

    “If Google wants to market its products to children, it is Google’s responsibility to ensure that the product is safe and developmentally appropriate,” said Suzanne Bernstein, Counsel at EPIC. “Which it has not done.”

    Shifting Responsibility onto Parents?

One of the most controversial aspects of Google’s rollout is how it frames the burden of safety. Rather than taking full responsibility for making its AI child-safe, Google instead offers a how-to guide for parents on managing access and helping kids think critically about AI responses.

    While parental involvement is undeniably crucial, critics argue that it should not be used as a shield by tech companies. The developers of AI systems, who best understand the risks and reap the profits, must be held accountable for ensuring the technology is safe before putting it into the hands of children.

    Who’s Leading the Fight?

    A broad alliance of organizations has joined forces to push back against Google’s decision. This includes the U.S. Public Interest Research Group (PIRG), The Anxious Generation Campaign, Design It For Us, Eating Disorders Coalition, and Tech Transparency Project, among others.

    The campaign also has heavyweight academic backing. Signatories to the letter include Jonathan Haidt, a well-known social psychologist, MIT professor Sherry Turkle, and Fordham Law Professor Zephyr Teachout.

    Their message is clear: AI chatbots are not developmentally appropriate for young children, and until the science says otherwise, big tech should keep them away.

    What Happens Next?

    The FTC has not yet announced whether it will open a formal investigation into Google’s rollout of Gemini for kids. But the issue has gained significant traction among both policymakers and the public.

    Given Chair Ferguson’s stated priorities around children’s privacy and the weight of expert opinion against Google’s decision, the tech giant may face regulatory scrutiny in the coming weeks.

    In the meantime, many parents may be left wondering: Should they trust an AI chatbot with their child’s development?

    Google’s decision to move ahead with Gemini for children, despite so many unanswered questions and warnings, suggests that in the race to dominate the AI market, caution is being thrown to the wind, even when the stakes involve the well-being of the most vulnerable users of all.


