
Why Nolis Doesn't Give Medical or Psychological Advice

By Joan Pons · 11 min read

Nolis doesn't give medical or psychological advice because no AI should do so with minors. Instead of diagnosing or prescribing, Nolis detects risk patterns, alerts guardians in real time, and redirects teenagers toward qualified professionals. This limitation is a safety advantage, not a shortcoming.

Over the past two years, headlines about artificial intelligence and teenagers have become increasingly alarming. Chatbots recommending diets up to 700 calories below the healthy minimum for adolescents (Stanford HAI, 2025). Virtual assistants suggesting stopping psychiatric medication. Apps that "diagnose" anxiety or depression based on a five-question survey. And most seriously: young people who follow this advice to the letter because they trust AI more than any adult.

When we designed HolaNolis, we made a decision that might seem like a limitation to some: Nolis never gives medical or psychological advice. Ever. Not now, not in future versions. It's not a bug, not a gap in functionality. It's an ethical, regulatory, and safety decision that we consider one of our greatest strengths.

Why is it dangerous for an AI to give health advice to teenagers?

To understand why this decision matters so much, let's look at what's happening out there.

Blind trust in AI

The most recent studies show an alarming trend: teenagers trust AI responses more than web search results, and in many cases more than their own parents or teachers. There's a psychological explanation:

  • AI responds immediately, without judging, without asking "why do you want to know that?"
  • AI sounds confident. It doesn't say "I don't know" or "it depends." It generates responses with an authoritative tone that adults rarely use.
  • AI is private. A teenager can ask about symptoms of depression without fear of anyone finding out.
  • AI is always available. At 3am, when anxiety peaks, there's no other person to turn to.

The real cases

These aren't hypotheticals. They're documented facts:

  • Dangerous diets: general-purpose chatbots generating "meal plans" of 900–1,200 calories for 13-year-olds, without considering that a growing teenager needs significantly more. In some cases, the generated diets were 700 calories below the healthy minimum (Stanford HAI, 2025).
  • Medication tapering: AIs suggesting "natural alternatives" to prescribed psychiatric medication, with potentially fatal consequences.
  • Erroneous self-diagnosis: minors convincing themselves they have disorders based on chatbot interactions, either delaying consultation with a professional or developing additional anxiety over a diagnosis that doesn't exist.
  • Serious incidents involving companion chatbots: 14 adolescent deaths have been linked to unsupervised chatbots (Associated Press, 2025), including cases where the chatbot failed to detect clear signals of suicidal ideation or, worse, reinforced destructive narratives.

Why it's dangerous

The root of the problem is threefold:

  1. AI is not qualified. It has no medical training, cannot examine the patient, doesn't know their history, and cannot run tests. Its "knowledge" is a statistical pattern, not clinical understanding.
  2. AI has no legal accountability. If a doctor gives bad advice, there are accountability mechanisms. If an AI recommends a dangerous diet, no one is responsible.
  3. Teenagers don't filter. An adult can (though not always) cross-check information. A 13-year-old at 2am searching "how to stop feeling sad" doesn't have that filtering capacity.

HolaNolis's position: companion, NOT advisor

HolaNolis is built on a principle we repeat internally like a mantra: detect, alert, redirect. Never diagnose, prescribe, or advise.

This translates into very specific behavior from Nolis:

What Nolis DOES do

  • Reflects feelings: "It sounds like you're feeling frustrated about that situation" or "It seems like today was a tough day." Nolis acts as an emotional mirror that helps the minor identify and name what they're feeling, without interpreting or evaluating.
  • Validates without judging: "It makes sense to feel that way when things like this happen" or "A lot of people would feel the same in your situation." Emotional validation is deeply therapeutic and doesn't require being a health professional.
  • Encourages talking to trusted adults: "Have you been able to talk to anyone about this? Sometimes telling someone you trust helps a lot" or "Is there an adult you feel comfortable talking to about this?"
  • Provides professional helplines: when the situation calls for it, Nolis offers crisis phone numbers and localized resources (Samaritans, Crisis Text Line, and equivalent services in each country).
  • Activates crisis alerts: if it detects signals of serious risk (self-harm, suicidal ideation, abuse), the system notifies the guardian within seconds. This happens automatically and reliably through the four-layer safety pipeline.
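
To make "detect, alert, redirect" concrete, here is a minimal, purely illustrative sketch in Python. It is not HolaNolis's actual pipeline: the risk levels, keyword lists, and helper functions (detect_risk, notify_guardian, reflect_feelings) are hypothetical stand-ins for a far more sophisticated multi-layer system. The point is the shape of the logic: classify risk, notify the guardian, redirect toward people, and never produce a diagnosis or advice.

```python
from enum import Enum

class RiskLevel(Enum):
    NONE = 0
    CONCERN = 1
    CRISIS = 2

# Hypothetical keyword-based detector for illustration only; a real system
# would rely on much richer, multi-layer classification.
CRISIS_SIGNALS = ("hurt myself", "don't want to live", "end it all")
CONCERN_SIGNALS = ("can't sleep", "stopped eating", "nobody cares")

def detect_risk(message: str) -> RiskLevel:
    text = message.lower()
    if any(signal in text for signal in CRISIS_SIGNALS):
        return RiskLevel.CRISIS
    if any(signal in text for signal in CONCERN_SIGNALS):
        return RiskLevel.CONCERN
    return RiskLevel.NONE

def notify_guardian(contact: str, risk: RiskLevel) -> None:
    # Placeholder: a real system would send email/push with context and urgency.
    print(f"ALERT to {contact}: detected risk level {risk.name}")

def reflect_feelings(message: str) -> str:
    # Placeholder for the empathetic, non-advisory conversational response.
    return "It sounds like today was a tough day. I'm here to listen."

def handle_message(message: str, guardian_contact: str) -> str:
    """Detect, alert, redirect -- never diagnose, prescribe, or advise."""
    risk = detect_risk(message)
    if risk is RiskLevel.CRISIS:
        notify_guardian(guardian_contact, risk)  # alert within seconds
        return (
            "I'm really glad you told me. This sounds important, and you "
            "deserve support from a person: please talk to a trusted adult, "
            "or reach out to a crisis line such as Samaritans or Crisis Text Line."
        )
    if risk is RiskLevel.CONCERN:
        return (
            "That sounds hard, and it makes sense to feel that way. "
            "Have you been able to talk to an adult you trust about this?"
        )
    return reflect_feelings(message)  # validate and mirror, no advice
```

Notice what is absent from the sketch: there is no branch that produces a diagnosis, a diet, or a treatment suggestion. Every path either listens or hands the situation over to a human.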

What Nolis NEVER does

  • Does not diagnose: it will never say "it sounds like you have anxiety" or "that sounds like depression." It doesn't have the qualifications or information to do so.
  • Does not prescribe: it will never suggest medications, supplements, doses, or "natural remedies."
  • Does not recommend diets: it will never generate meal plans, count calories, or comment on weight or body image.
  • Does not give therapeutic advice: it will never say "you should try this relaxation technique for your anxiety" or "what you need is to talk more about your trauma."
  • Does not substitute a professional: if the minor asks for mental health help, Nolis always redirects to qualified people.

Why this makes HolaNolis MORE valuable, not less

It might seem counterintuitive: is a chatbot that doesn't give advice better than one that does? Absolutely, and for several reasons.

1. Safety as a value proposition

For a parent, knowing that the AI their 13-year-old talks to will never suggest a diet, a diagnosis, or stopping medication is invaluable. It's the difference between leaving your child at a pool with a lifeguard or without one.

2. The bridge effect

Nolis doesn't try to solve health problems; it tries to build a bridge toward those who can. A teenager who talks to Nolis about their sadness and receives the suggestion to speak with an adult is more likely to do so than one who never verbalized that sadness at all. Nolis is the first step, not the last.

3. Listening without consequences

Sometimes a teenager simply needs to say out loud how they're feeling without anyone going into "solution mode." Nolis offers exactly that: a space to express emotions without receiving a diagnosis, a judgment, or an unsolicited action plan.

4. Early detection

Paradoxically, a system that doesn't give advice but detects and alerts may be more effective than one that tries to treat. Because early detection by an AI system, combined with intervention from a human professional, is an extraordinarily powerful combination. To learn how to identify these signals as a parent, read 5 signs your child needs emotional support.

The regulatory advantage: "limited risk" under the EU AI Act

The decision not to give advice isn't just ethical; it's also strategically sound from a regulatory standpoint.

The European AI Regulation (EU AI Act), which entered into force in 2024 and whose obligations apply in stages from 2025 onward, classifies AI systems into four risk levels:

  • Unacceptable risk: prohibited (subliminal manipulation, social scoring…)
  • High risk: strict regulation, audits, certifications
  • Limited risk: transparency obligations
  • Minimal risk: free use

An AI that gave medical or psychological advice to minors would clearly fall into the high risk category, with all the regulatory obligations that entails (and rightly so). To understand the full legal framework surrounding AI and minors, see our article on AI and minors: what European law says.

By positioning Nolis as a conversational companion that detects and redirects but never diagnoses or advises, HolaNolis falls into the limited risk category. This isn't a regulatory trick; it's the natural consequence of doing things right. When your ethical design aligns with regulation, it's a sign the approach is correct.

The ethical framework: a minor's safety always comes first

At HolaNolis we follow a priority order that never changes:

  1. Minor's safety: above everything else, including user experience, engagement metrics, and profitability.
  2. Regulatory compliance: respecting the law is not optional.
  3. Parental trust: parents need to know the tool acts in their child's best interest.
  4. Teenager experience: making Nolis engaging, interesting, and useful — within the limits above.

This order demands difficult decisions. It would be more "fun" if Nolis could talk about diets or give advice on managing anxiety. It would probably increase usage time. But increasing engagement at the cost of safety is exactly what social media has been doing for a decade, and the results speak for themselves.

We don't want to be another tool that prioritizes metrics over people. Especially when those people are between 10 and 20 years old.

What parents should know about AI and health advice

If your child uses any conversational AI (not just HolaNolis), these are the key points you should understand:

  • No general-purpose AI is qualified to give health advice. Not ChatGPT, not Gemini, not Claude, not Nolis. None of them.
  • Disclaimers are not enough. "This does not replace medical advice" at the end of a 1,000-calorie diet plan doesn't protect your child from following it.
  • Speed and availability are a risk. At 3am, AI is the only available listener. And at that hour, judgment is at its lowest.
  • Talk to your child about the difference between venting to an AI (healthy) and asking it for health solutions (dangerous). For practical conversation guides, read how to talk to your child about digital safety.

When to seek professional help

Not everything a teenager tells an AI requires intervention. But there are clear signs that it's time to seek professional help:

  • Persistent changes in sleep, appetite, or energy levels (more than two weeks)
  • Social withdrawal with no specific cause
  • Expressions of hopelessness ("nothing makes sense," "it doesn't matter what I do")
  • Self-harm or mention of self-harm, in any context
  • Abrupt behavioral changes (aggression, frequent crying, withdrawing from activities)
  • Obsession with weight, food, or body image

If HolaNolis detects any of these signals, it will activate the alert protocol. But don't wait for an AI to tell you: if you observe these changes, act. Your pediatrician, a child and adolescent psychologist, or specialized helplines are the next step.


Frequently Asked Questions

What does Nolis do if my child says they're sad or feeling bad?
Nolis listens, validates the feeling ("it makes sense to feel that way"), and encourages the minor to talk to a trusted adult. If it detects signs of more serious risk, it activates the alert protocol: it responds to the minor with empathy, offers professional resources, and notifies the guardian within seconds. It never diagnoses or gives therapeutic advice.

Can Nolis replace a therapist for my child?
No, and it should never try to. Nolis is a conversational companion, not a therapist. Its function is to detect risk signals, alert guardians, and redirect the minor toward qualified professionals. The 1 in 7 teenagers living with a mental health disorder (WHO, 2024) need real professional care, not digital substitutes.

What happens if my child mentions eating disorders or problems with food?
Nolis will never generate diet plans, count calories, or comment on weight or body image. If it detects conversations about food restriction or concerning behaviors related to eating, it activates the alert protocol and redirects the minor to resources specialized in eating disorders.

Why can't Nolis give basic health advice if it's well documented as safe?
Because "basic" and "safe" depend on each individual, and Nolis doesn't have access to the minor's medical history, allergies, medication, or pre-existing conditions. What is harmless for one person can be dangerous for another. The only responsible position for an AI that talks to minors is to give no health advice and always redirect to professionals.

How quickly does the crisis alert reach the guardian?
Crisis alerts reach the guardian within seconds of Nolis detecting risk indicators. The system operates in real time, 24 hours a day, including weekends and holidays. Alerts arrive by email and push notification (if enabled) and include context about the type and urgency level of the detected risk.

In summary

Nolis doesn't give medical or psychological advice because no AI should. Not because it technically can't, but because doing so would be irresponsible, dangerous, and contrary to everything HolaNolis stands for.

What Nolis does offer is something no other chatbot for teenagers combines: a safe space to express yourself, a detection system that works 24 hours a day, alerts that reach parents within seconds, and a constant redirection toward the professionals who can actually help.

Because the best AI for teenagers isn't the one that has all the answers. It's the one that knows when it shouldn't answer. If you'd like to see how the system works in practice, check out the quick start guide or visit the product section.

Want to protect your child with safe AI?

Start free

Joan Pons

Founder of HolaNolis · Father

A father, telecommunications engineer, and entrepreneur. HolaNolis was born at home: when I saw my kids start using AI, I got worried as a parent and decided to build the tool I wish I'd had. I develop it as a family project because teen safety around AI can't just be a business — it's something personal. I'm also the founder and CEO of WorkMeter, a leading productivity measurement company.
