
AI and Minors: What European Law Says

By Joan Pons · 16 min read

The European Union has a comprehensive regulatory framework protecting minors from AI: the EU AI Act, GDPR, and DSA. These laws require parental consent, algorithmic transparency, and specific data protection for minors — yet most popular chatbots fail to comply.

When your teenager opens an AI chatbot on their phone, they are not just starting a conversation — they are entering a complex legal landscape that most adults (and certainly most teenagers) don't understand. The regulatory framework governing AI and minors in Europe has evolved rapidly over the past two years, and 2026 marks a turning point: the major provisions of the EU AI Act are now in force, GDPR enforcement around children's data has intensified, and new national laws are adding additional layers of protection.

This article breaks down what parents need to know — not in legal jargon, but in practical terms that help you make informed decisions about which AI platforms your family uses. For a broader perspective on choosing safe AI tools for teenagers, see our guide to protecting your teenager online.

The EU AI Act: A Risk-Based Framework

The EU Artificial Intelligence Act, whose provisions have entered into application in stages between 2024 and 2026, is the world's first comprehensive AI regulation. Its approach is built around risk categories: the higher the risk an AI system poses, the stricter the requirements.

The Four Risk Tiers

Unacceptable Risk (Banned)

AI systems that pose a clear threat to fundamental rights are prohibited entirely. This includes:

  • Social scoring systems by governments
  • Real-time biometric surveillance in public spaces (with limited exceptions)
  • AI that exploits vulnerabilities of specific groups, including children, to distort behavior in harmful ways
  • Subliminal manipulation techniques

High Risk (Heavily Regulated)

AI systems used in critical areas like healthcare, education, employment, and law enforcement face extensive obligations: risk assessments, data governance requirements, human oversight, transparency, and conformity assessments before market deployment.

Critically for our discussion: AI systems that assess, diagnose, or make decisions affecting minors can fall into this category. An AI chatbot that diagnoses depression in a teenager, recommends a treatment plan, or assesses their psychological state is operating in high-risk territory.

Limited Risk (Transparency Obligations)

AI systems that interact with humans must disclose that the user is talking to an AI. Systems that generate content must label it as AI-generated. These are lighter requirements, but they are mandatory.

Minimal Risk (No Specific Obligations)

Simple AI applications like spam filters or video game AI face no additional regulation.

What Does EU AI Act Classification Mean for Teen Chatbots?

The classification of an AI chatbot depends on what it does, not what it is called. Whether a chatbot is classified as "limited risk" or "high risk" has enormous implications for the obligations it must meet — and for the parents choosing platforms for their teenagers. A chatbot that:

  • Engages in conversation and provides companionship → Limited risk
  • Detects concerning patterns and alerts parents → Limited risk
  • Diagnoses mental health conditions → High risk
  • Recommends treatment plans → High risk
  • Assesses a minor's psychological state → High risk
  • Assigns scores or ratings to a teen's behavior → Potentially high risk
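The mapping above can be expressed as a simple rule of thumb. The sketch below is illustrative only, not legal advice: the capability labels and function name are our own shorthand for the behaviors in the list, and a real EU AI Act classification requires case-by-case legal analysis.

```python
# Illustrative rule of thumb only -- not legal advice. Capability names are
# hypothetical labels for the behaviors listed above.
HIGH_RISK_CAPABILITIES = {
    "diagnose_condition",          # diagnoses mental health conditions
    "recommend_treatment",         # recommends treatment plans
    "assess_psychological_state",  # assesses a minor's psychological state
    "score_behavior",              # assigns scores or ratings to behavior
}

def indicative_risk_tier(capabilities: set) -> str:
    """Return an indicative EU AI Act tier for a teen-facing chatbot."""
    if capabilities & HIGH_RISK_CAPABILITIES:
        return "high"
    # Any system that converses with humans carries at least the
    # transparency obligations of the "limited risk" tier.
    return "limited"

print(indicative_risk_tier({"companionship", "alert_parents"}))       # limited
print(indicative_risk_tier({"companionship", "diagnose_condition"}))  # high
```

The point of the sketch is that one added capability, not the product's marketing label, is what moves a chatbot into the high-risk tier.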

This distinction is why HolaNolis is designed to detect, alert, and redirect rather than diagnose, prescribe, or assess. It is not just an ethical choice — it is a deliberate regulatory positioning. By functioning as a conversational companion that identifies concerns and connects teens with professional help (rather than trying to be the professional help), HolaNolis operates as a limited-risk system with proportionate compliance requirements. For a detailed explanation of what this means in practice, see our article on what a supervised chatbot actually is.

Platforms that blur these lines — chatbots that effectively function as unlicensed therapists for teenagers — face a regulatory reckoning as EU AI Act enforcement ramps up.

GDPR and Children's Data

The General Data Protection Regulation has been in force since 2018, but its provisions regarding children's data have become increasingly central to enforcement actions, particularly as AI platforms collect massive amounts of conversational data from minors.

Key GDPR Provisions for Minors

Parental Consent (Article 8): For information society services offered directly to a child, processing of personal data requires parental consent for children below a threshold age. The GDPR sets this at 16 years as a default, but allows member states to lower it to as young as 13. In practice:

  • Spain: 14 years
  • France: 15 years
  • Germany: 16 years
  • Ireland: 16 years (significant because many tech companies are headquartered there)
  • UK (post-Brexit, under UK GDPR): 13 years
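These thresholds amount to a small lookup table. The sketch below uses only the ages cited above; the country codes, default, and helper name are our own shorthand, and a real deployment would track regulator guidance rather than hard-code a table.

```python
# Article 8 GDPR digital-consent ages as cited in this article
# (illustrative shorthand, not a compliance source).
CONSENT_AGE = {"ES": 14, "FR": 15, "DE": 16, "IE": 16, "GB": 13}
GDPR_DEFAULT_AGE = 16  # GDPR fallback where no national threshold is listed

def needs_parental_consent(age: int, country: str) -> bool:
    """True if verifiable parental consent is required for data processing."""
    return age < CONSENT_AGE.get(country, GDPR_DEFAULT_AGE)

print(needs_parental_consent(14, "ES"))  # False: at Spain's 14-year threshold
print(needs_parental_consent(14, "FR"))  # True: below France's 15-year threshold
```

Note how the same 14-year-old triggers a consent requirement in France but not in Spain — a concrete example of why multi-country platforms cannot apply one age rule everywhere.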

Data Minimization (Article 5): Organizations must collect only the data that is strictly necessary for the stated purpose. For an AI chatbot serving teenagers, this means collecting the minimum data needed to provide the service — not hoarding conversation logs for future model training.

Right to Erasure (Article 17): The "right to be forgotten" has particular relevance for minors. A teenager's conversations with an AI chatbot at age 13 should not follow them into adulthood. Data subjects (or their parents on their behalf) can request complete deletion of personal data.

Purpose Limitation (Article 5): Data collected for one purpose cannot be repurposed without additional consent. If a chatbot collects conversation data to provide the service, it cannot unilaterally decide to use that data for model training, advertising, or profiling.

Data Protection Impact Assessment (Article 35): Processing that is likely to result in a high risk to rights and freedoms — which includes large-scale processing of children's data — requires a formal DPIA before processing begins.

How Does GDPR Apply in Practice to AI Chatbots?

Most mainstream AI chatbots operate in a gray area regarding children's data under GDPR. They nominally require users to be 13 or older, rely on self-reported age, and have broad terms of service that grant extensive data usage rights. Enforcement has been slow, but the trend is clearly toward stricter accountability. Under GDPR, parents have the right to access all data a platform holds about their child, request erasure, restrict processing, and object to profiling — rights that are enforceable with significant penalties (up to 4% of global annual turnover or 20 million euros).

The Italian data protection authority's temporary ban of ChatGPT in 2023 was an early signal. Since then, several European DPAs have opened investigations into AI platforms' handling of children's data. The trajectory is clear: platforms that process children's data without robust age verification, clear parental consent mechanisms, and strict data minimization will face increasing legal pressure.

The Digital Services Act (DSA)

The Digital Services Act, fully applicable since February 2024, adds another regulatory layer specifically relevant to online platforms used by minors.

Key DSA Provisions

Ban on Targeted Advertising to Minors: Online platforms are prohibited from using personal data of minors for targeted advertising. This affects AI platforms that monetize through ads or that share data with advertising networks.

Algorithmic Transparency: Very large online platforms must provide transparency about their recommendation algorithms and offer users the option to use the service without profiling-based recommendations.

Risk Assessment for Minors: Platforms must assess and mitigate systemic risks, specifically including risks to children's wellbeing. This includes risks arising from the design, functioning, and use of the service.

Reporting Obligations: Platforms must have clear mechanisms for reporting harmful content and must act promptly on reports. For content affecting minors, response expectations are heightened.

Practical Impact

The DSA's most significant impact on AI chatbots is the risk assessment obligation. Platforms that serve minors must proactively identify and mitigate risks to children's wellbeing — not just respond when harm is reported. This shifts the burden from reactive to proactive, which aligns with the supervised AI companion model of continuous monitoring rather than after-the-fact content moderation.

The UK Age Appropriate Design Code (AADC)

Although the UK is no longer in the EU, the Age Appropriate Design Code (also known as the Children's Code) represents one of the most detailed regulatory frameworks for digital services used by minors anywhere in the world.

The 15 Standards

The AADC establishes 15 standards that online services likely to be accessed by children must meet, including:

  • Best interests of the child must be a primary consideration in design decisions
  • Age-appropriate application: Different protections for different age groups
  • Transparency: Privacy information must be provided in language children can understand
  • Data minimization: Collect and retain the minimum data necessary
  • Default settings: Privacy settings must default to "high" for child users
  • Nudge techniques: Services must not use design techniques that encourage children to provide more data or weaken their privacy protections
  • Connected toys and devices: Special provisions for AI-powered physical products

Relevance to AI Chatbots

The AADC's requirement for high-privacy defaults is particularly relevant. Under this code, an AI chatbot accessed by a child should default to the most restrictive privacy settings, not the most permissive. Data sharing should be opt-in, not opt-out. And design choices should prioritize the child's interest, not engagement metrics.

The ICO (UK's data regulator) has demonstrated willingness to enforce these standards, and several platforms have already modified their designs in response.

US Regulatory Landscape

While this article focuses on European regulation, many families in Europe have connections to US-based platforms, and understanding the American regulatory context provides useful contrast.

COPPA (Children's Online Privacy Protection Act)

The original US framework for children's online privacy, COPPA applies to children under 13 and requires:

  • Verifiable parental consent before collecting personal information
  • Clear privacy policies
  • Data minimization and security requirements

COPPA's main limitation is its age threshold of 13, which means teenagers aged 13-17 fall into a regulatory gap.

California SB 243 (2025)

California's landmark legislation specifically targets AI interactions with minors and goes further than federal law:

  • Requires AI platforms to identify and mitigate risks to minors
  • Mandates transparency about AI limitations
  • Establishes reporting requirements for AI-related harm to minors
  • Creates a framework for age-appropriate AI design

KOSA (Kids Online Safety Act)

After years of legislative debate, KOSA imposes a duty of care on platforms toward minor users, requiring them to:

  • Provide minors with options to protect their information
  • Disable addictive design features by default for minors
  • Enable parents to control privacy and safety settings

The Transatlantic Gap

The key difference between European and American regulation is enforcement and comprehensiveness. European regulation (GDPR + AI Act + DSA) creates a more unified, enforceable framework. US regulation remains more fragmented, with state laws filling gaps that federal legislation leaves open. However, the direction is the same on both sides of the Atlantic: stronger protections for minors using AI.

What All This Means for Parents

The regulatory landscape is complex, but the practical implications are clear:

1. The Law Is on Your Side

European regulation firmly establishes that children deserve special protection when using AI and digital services. You have the legal right to know what data is being collected, how it is used, and to request its deletion. You have the right to provide or withhold consent for your child's data processing. And platforms have the legal obligation to design their services with your child's best interests in mind.

2. Not All Platforms Are Compliant

Having a law and enforcing a law are different things. Many AI platforms are still operating in ways that don't fully comply with GDPR, the AI Act, or the DSA when it comes to minor users. Enforcement is increasing, but it is not yet universal. This means parents need to evaluate platforms themselves, not assume that "if it is available, it must be legal."

3. Classification Matters

Whether an AI chatbot is classified as "limited risk" or "high risk" under the EU AI Act has enormous implications for the obligations it must meet. A chatbot that diagnoses, assesses, or prescribes is operating in high-risk territory and must meet extensive compliance requirements. A chatbot that detects, alerts, and redirects operates at limited risk with lighter (but still real) obligations.

When choosing an AI platform for your teenager, ask: What does this system claim to do? If it promises to assess your teen's mental health, that is a high-risk claim that should come with high-risk compliance. If it positions itself as a companion that connects concerns to human professionals, that is a more appropriate — and legally sustainable — approach.

4. Data Rights Are Actionable

Under GDPR, you have the right to:

  • Access all data a platform holds about your child
  • Rectify inaccurate data
  • Erase data that is no longer necessary
  • Restrict processing in certain circumstances
  • Object to data processing based on legitimate interests
  • Port data to another service

These are not theoretical rights — they are enforceable, with significant penalties for non-compliance (up to 4% of global annual turnover or 20 million euros, whichever is higher).
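The penalty ceiling quoted above is "whichever is higher" of the two figures, which is a one-line calculation (the function name is ours, for illustration):

```python
def gdpr_max_fine(global_annual_turnover_eur: int) -> float:
    """Maximum GDPR fine: the greater of 4% of global annual turnover
    or EUR 20 million."""
    return max(global_annual_turnover_eur * 4 / 100, 20_000_000)

print(gdpr_max_fine(100_000_000))    # the 20 million floor applies (4% is only 4M)
print(gdpr_max_fine(2_000_000_000))  # 4% applies: 80 million
```

For any company with global turnover above 500 million euros, the 4% figure — not the flat 20 million — sets the ceiling, which is why the regime bites hardest for the largest platforms.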

5. The Trend Is Toward Stricter Protection

Every regulatory development in 2025 and 2026 has moved toward stronger protections for minors. Platforms that are barely compliant today will likely need to do more tomorrow. Choosing a platform that is built for compliance — rather than one that is scrambling to adapt — is a safer long-term bet.

How HolaNolis Is Built for This Regulatory Reality

HolaNolis was designed with the full European regulatory framework in mind from day one. This is not a retrofit — it is foundational architecture.

EU AI Act Positioning: HolaNolis operates as a limited-risk system. Nolis detects, alerts, and redirects — it never diagnoses, assesses, or prescribes. This is a deliberate design choice that keeps the platform in a regulatory category with proportionate requirements while maximizing safety for teens.

GDPR Compliance by Design: Conversations are encrypted at rest using AES-256-GCM with per-user encryption keys. Data minimization is built into the data model. Parental consent is required and verified during onboarding. Data deletion requests are fully supported. No conversation data is used for model training.

DSA Alignment: Risk assessments for minors are conducted proactively. The platform is designed to minimize harm, not maximize engagement. Reporting mechanisms are clear and accessible.

UK AADC Principles: Privacy defaults are set to maximum protection. Supervision levels are transparent to the teenager. Design decisions prioritize the child's interests. No nudge techniques are used to weaken privacy protections.

Multi-jurisdiction Readiness: With support for 15 languages and awareness of varying consent ages across EU member states, HolaNolis is built for the reality that families cross borders and regulations vary.

Explore HolaNolis to see how regulatory compliance translates into practical features, or register your family to experience the platform directly.

Frequently Asked Questions

Is it legal for minors to use ChatGPT in Europe?
ChatGPT sets a minimum age of 13 and relies on self-reported age with no verification mechanism. Under GDPR, processing personal data of children under 16 (or as low as 13 in some EU member states) requires verifiable parental consent. Most mainstream AI platforms operate in a compliance gray area, with enforcement increasing as regulators focus on this issue.
At what age can a minor use AI without parental consent in Europe?
The GDPR default is 16, but EU member states can lower this threshold. In Spain it is 14, in France 15, in Germany and Ireland 16, and in the UK (under UK GDPR) 13. These age limits apply specifically to data processing consent — meaning the platform must obtain verifiable parental consent for users below the threshold in that country.
What rights do parents have under GDPR regarding their child's AI chatbot data?
Parents have the right to access all data a platform holds about their child, request rectification of inaccurate data, request complete erasure ("right to be forgotten"), restrict certain processing, object to processing based on legitimate interests, and port data to another service. Violations carry penalties of up to 4% of a company's global annual turnover or 20 million euros.
What are the EU AI Act risk levels and which applies to AI chatbots for teens?
The EU AI Act defines four tiers: Unacceptable Risk (banned), High Risk (heavily regulated), Limited Risk (transparency obligations), and Minimal Risk. An AI chatbot that detects concerns and redirects to professionals is Limited Risk. One that diagnoses mental health conditions, assesses psychological states, or prescribes treatments falls into High Risk territory — with corresponding compliance burdens that most chatbots cannot currently meet.
How can I report an AI platform that I think is not compliant with EU child protection law?
File a complaint with your national data protection authority (in Spain: AEPD, in France: CNIL, in Germany: your state's DPA, in Ireland: DPC, in the UK: ICO). For DSA violations, report to the platform's designated Digital Services Coordinator in its EU country of establishment. Keep documentation of the concerning behavior or data practices as evidence.
What about US law — does COPPA protect my teenager?
COPPA (Children's Online Privacy Protection Act) only covers children under 13, leaving teens aged 13-17 in a regulatory gap at the federal level. California's SB 243 (2025) goes further, specifically targeting AI interactions with minors and establishing duty-of-care obligations. The EU framework (GDPR + AI Act + DSA) provides significantly broader and more enforceable protections than current US federal law.

The Regulatory Bottom Line

The era of unregulated AI interactions with minors is ending. The EU AI Act, GDPR, DSA, UK AADC, and emerging US legislation are converging on a clear principle: AI systems used by children must be designed, deployed, and operated with their safety and rights as primary considerations.

For parents, this regulatory evolution is good news. It means you have increasing legal backing for demanding transparency, safety, and accountability from the platforms your children use. For platforms, it means compliance is not optional — and those that build safety in from the start will be better positioned than those that treat it as an afterthought.

The smartest choice you can make today is to choose AI platforms for your family that are already built for the regulatory reality of tomorrow. Because the law is not going to get more lenient — and your teenager's safety should not depend on whether a platform decides to comply.

Want to protect your child with safe AI?


Joan Pons

Founder of HolaNolis · Father

A father, telecommunications engineer, and entrepreneur. HolaNolis was born at home: when I saw my kids start using AI, I got worried as a parent and decided to build the tool I wish I'd had. I develop it as a family project because teen safety around AI can't just be a business — it's something personal. I'm also the founder and CEO of WorkMeter, a leading productivity measurement company.
