Google’s Gemini AI Chatbot Targets Kids Under 13: A Risky Venture

Google is set to launch its Gemini AI chatbot for children under 13, a move sparking both excitement and concern. Initially rolling out in the U.S. and Canada this week, the chatbot will expand to Australia later this year, accessible exclusively through Google’s Family Link accounts. While the platform promises parental controls, the risks of exposing young users to AI technology are significant. This development underscores the challenges parents face in safeguarding children from emerging digital tools, even as social media bans for minors come into force.

NEWS

5/12/2025 · 2 min read


How Gemini Works

Family Link accounts allow parents to manage their child’s access to apps and content, requiring personal details like names and birthdates. Although Google assures that children’s data won’t be used to train its AI, privacy concerns linger.

The chatbot feature is enabled by default, so parents must manually disable it if they don’t want their child using it. Children can use Gemini to generate text or images, but Google acknowledges the system may produce inaccurate or misleading content, known as “hallucinations.” This raises concerns about its reliability for tasks like homework, where fact-checking is essential.

Content Generation and Challenges

Unlike traditional search engines, generative AI tools like Gemini create new content by identifying patterns in data. For instance, a child asking the chatbot to “draw a cat” would receive an AI-generated image based on common feline features.

However, distinguishing AI-generated content from reliable sources can be difficult, especially for young users. Studies reveal that even adults and professionals have been misled by AI-generated misinformation, highlighting the heightened risk for children.

Age-Appropriate Safeguards

Google claims Gemini includes safeguards to block inappropriate content, but these measures may inadvertently restrict access to essential, age-appropriate information. For example, filtering certain terms could prevent children from learning about puberty-related topics.

Moreover, tech-savvy children often find ways to bypass restrictions, making it crucial for parents to actively monitor content and educate their children on evaluating AI-generated material.

Risks of AI Chatbots for Children

The eSafety Commission has warned about the dangers of AI chatbots, including their potential to distort reality, share harmful content, and provide unsafe advice. Young children, still developing critical thinking skills, are particularly vulnerable to manipulation by these systems.

Research shows AI chatbots mimic human social behaviors to build trust, which could mislead children into believing they are interacting with a person rather than a machine. This trust could make them more susceptible to accepting false or harmful information.

The Need for Digital Duty of Care

As Australia prepares to ban social media accounts for children under 16 in December, the Gemini rollout highlights the broader risks of digital engagement beyond social media. Parents must stay informed about new technologies and their potential dangers.

This situation underscores the urgency of implementing digital duty of care legislation in Australia. While similar laws were enacted in the EU and UK in 2023, Australia’s proposal has been stalled since late 2024. Such legislation would hold tech companies accountable for addressing harmful content at its source, offering stronger protections for all users.

Google’s move to introduce Gemini to young users raises critical questions about privacy, safety, and the ethical responsibilities of tech giants. As the digital landscape evolves, so too must the measures to protect its youngest participants.