April 2025

Designing Bue: A Mental Health Companion for Teens and Young Adults

Background

The idea for this case study came to me while sitting in the waiting room of a pediatric office. I saw a poster advertising Scout, a mental health app, and wondered: Is it actually helpful? What tools does it offer? Can teens share feelings with parents through it? With my psychology background, I was curious enough to download the app and research similar ones.

I outlined a 5-day design sprint to explore how conversational AI could thoughtfully address teenagers' unique mental health needs, balancing technical and ethical considerations to create a digital support system that feels authentic while maintaining appropriate safety boundaries.

Overview

In 2021, an estimated 5.0 million adolescents aged 12 to 17 in the United States had at least one major depressive episode, representing 20.1% of this age group nationwide (NIMH, 2022). With the rapid development and integration of AI technology across digital platforms, we're seeing a strategic shift toward AI chatbots designed specifically to address youth mental health challenges. Solutions like Woebot, Wysa, and Youper have demonstrated impact, with studies showing measurable improvements in users' mental well-being. This case study explores the deliberate design considerations needed to create effective conversational AI that can provide stigma-free first-line support for teenagers and young adults facing mental health challenges.

Problem

How might we design an AI chatbot specifically for teenagers and young adults that supports their mental health in a safe and engaging way?

Solution

An AI chatbot named Bue that interacts with users in a relatable manner, providing empathic expressions, validation techniques, and evidence-based content to help youth care for their mental health.

Role

UX researcher & product designer

Tools

Figma, Miro, Claude, Perplexity, v0

What Mental Health Chatbot Apps Are Available for Youth?

I asked Claude for the most popular mental health apps for teens and young adults that offer an AI chatbot. I selected Wysa, Replika, and Youper for the analysis and looked at the following three areas for the AI chatbot interactions:

  1. Interaction design approaches

  • Conversation initiation

  • Response styles

  • Turn-taking mechanics

  2. User interface strategies

  • Input mechanisms

  • Information architecture

  • Visual cues for emotional states

  3. Emotional support mechanisms

  • Empathy expressions

  • Validation techniques

  • Coping strategies offered

Looking at interaction design approaches, emojis were used across all three products, but only Replika used message reactions (the ability to like a message or respond with an emoji). Considering the target audience is young and message reactions are a standard feature in messaging apps, carrying this functionality into the AI chatbot would create familiarity and might even make it more appealing to younger users.

Regarding emotional support mechanisms, all three apps used empathy responses and validation techniques; however, Replika did not offer coping strategies. One reason could be that this was my first interaction with Replika, and with more use it might offer coping strategies sooner. (On a different date, I asked directly for "resources" because "I was having a tough day," and Replika offered relaxation techniques.)

The comparison below summarizes my observations for each app; the discussion above calls out the more notable differences.

Wysa

  • Conversation initiation: addresses the user by name; introduces itself; asks "How are you doing?"; also makes a personal comment
  • Response styles: casual and friendly; uses emojis
  • Turn-taking mechanics: after the user gives a short or simple answer, the chatbot prompts the user by asking a question
  • Input mechanisms: message only
  • Information architecture: additional "Add a coach" button
  • Visual cues for emotional states: emojis
  • Empathy expressions: yes ("You're not alone.", "It sounds really tough to deal…")
  • Validation techniques: yes
  • Coping strategies offered: yes; prompts the user to think of activities that bring joy

Replika

  • Conversation initiation: addresses the user by name; unclear whether it introduces itself or asks "How are you doing?"
  • Response styles: casual and friendly; uses emojis only in the beginning and as reactions
  • Turn-taking mechanics: after the user gives a short or simple answer, the chatbot does not prompt the user, initiate a conversation, or ask a question
  • Input mechanisms: message, voice message, and video call
  • Information architecture: button to initiate a phone call; the message field has options to add media and start an activity
  • Visual cues for emotional states: none; the avatar appears with neutral facial gestures
  • Empathy expressions: yes ("I'm so sorry to hear…")
  • Validation techniques: yes
  • Coping strategies offered: maybe, but more indirect

Youper

  • Conversation initiation: does not address the user by name; introduces itself; asks "How are you doing?"; also informs the user what to do
  • Response styles: clinical; uses only weather emojis in the beginning
  • Turn-taking mechanics: after the user gives a short or simple answer, the chatbot prompts the user by asking a question
  • Input mechanisms: message only
  • Information architecture: no back or exit button once the conversation starts; only the message field and "I'm done..Thanks."
  • Visual cues for emotional states: no visual cues
  • Empathy expressions: yes ("I'm so sorry to hear…")
  • Validation techniques: yes
  • Coping strategies offered: examples include gratitude journaling and exposure therapy

Meet Jasmine, a 15-year-old high schooler

I created a persona to identify realistic struggles and needs so the solution could address them directly. The persona also guided my decision-making throughout the project.

Jasmine's profile illuminated some challenges and emotional difficulties adolescents may face. With this understanding of our target user in mind, the next question became: How should Bue effectively respond to someone like Jasmine? This required developing structured yet flexible conversation pathways that could address various emotional states while maintaining both engagement and safety.

How should Bue respond?

Creating conversation flows and decision trees challenged my thinking process in valuable ways. While the conversation flow categorizes interactions by topic, I recognized real interactions involve multiple feelings, stressors, and contextual factors not shown in these simplified visuals. The conversation flow below illustrates the different pathways, types of responses and content Bue shares, as well as the key questions that guide the direction of the conversation.

For the decision tree, the user’s emotional severity guides Bue's responses, beginning with questions and validation while assessing emotional state and harm risk. Even after the level of concern is established, new information continuously updates the level of concern. This would require a comprehensive list of phrases, feelings, and risk factors to generate appropriate responses—like how Bue inquires about the environment after identifying triggers in moderate concern situations (see the second diamond in the decision tree diagram). 

Although this approach risks rigidity because it establishes a concern level early on, it prioritizes safety. A user reaching out to Bue is already displaying help-seeking behavior, so safety assessment remains critical. Additionally, the flexibility to update the level of concern, combined with the balance of empathic responses and evidence-based techniques delivered in a casual, friendly tone, can compensate for this rigidity. Testing different interaction models will ultimately determine which best balances natural engagement with user safety.
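To make the decision-tree logic concrete, here is a minimal sketch, assuming a keyword-based first pass over each message; the phrase lists, concern levels, and follow-up actions are illustrative assumptions rather than Bue's actual model.

```typescript
type ConcernLevel = "low" | "moderate" | "high";

// Hypothetical phrase lists; a fuller implementation would replace these with a
// comprehensive, clinically reviewed vocabulary of phrases, feelings, and risk factors.
const HIGH_RISK_PHRASES = ["want to disappear", "hurt myself", "no point anymore"];
const MODERATE_RISK_PHRASES = ["can't cope", "want to cry", "so alone"];

function assessMessage(message: string, current: ConcernLevel): ConcernLevel {
  const text = message.toLowerCase();
  let detected: ConcernLevel = "low";
  if (HIGH_RISK_PHRASES.some((phrase) => text.includes(phrase))) detected = "high";
  else if (MODERATE_RISK_PHRASES.some((phrase) => text.includes(phrase))) detected = "moderate";

  // New information keeps updating the level of concern; here it can only move
  // upward, so an early "low" assessment never suppresses a later warning sign.
  const order: ConcernLevel[] = ["low", "moderate", "high"];
  return order.indexOf(detected) > order.indexOf(current) ? detected : current;
}

function nextStep(level: ConcernLevel): string {
  switch (level) {
    case "high":
      return "begin risk assessment: confirm safety and location, surface crisis resources";
    case "moderate":
      return "ask about triggers and the user's environment, then offer a coping exercise";
    default:
      return "continue with open questions and validation";
  }
}
```

A production system would replace the keyword lists with a much richer classifier, but the one-way update of the concern level captures the safety-first behavior described above.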

Making Bue a relatable, healthful chatbot

After conducting the comparative analysis, mapping Bue's decision logic, and drawing insights from Wysa’s report, Conversational AI for Mental Health: Potential & Risks, as well as the WHO’s Report of a Virtual Roundtable Meeting on Online Mental Health Content for Young People and Guidance on Communication, I strategically designed Bue around three critical elements:

  1. Conversational rhythm

  2. User control

  3. Seamless escalation path

I prioritized conversational rhythm and user control specifically to resonate with Bue's target audience of 12- to 17-year-olds. Maintaining familiar conversation patterns while offering choice at key moments creates an experience that feels both natural and empowering—crucial factors for adolescents seeking emotional support. The seamless escalation path addressed the essential need for user safety during moments of crisis while creating bridges to appropriate human support resources when needed, ensuring Bue serves as both an immediate support tool and a connection to additional help.

Conversational rhythm

Pain point: Consecutive messages with no pause

Click play to view the interaction.

During my interaction with Youper, I observed that receiving multiple consecutive messages without pauses felt overwhelming rather than conversational; it read like a text-heavy list I had to get through. This observation wasn't part of my comparative analysis because I wasn't looking for it, but it clearly affected my experience as a user.

To address this in Bue's design, I implemented two specific, widely used solutions that create a more natural conversational rhythm (sketched in code below):

  1. Dynamic typing indicators: I incorporated the familiar ellipsis (...) pattern from standard messaging apps to visually indicate when Bue is "typing", with pauses in between. This deliberate design decision creates natural pauses in conversation, giving users time to process information before the next message appears—avoiding the overwhelming cascade of consecutive messages I experienced with Youper.

  2. Message reaction functionality: I integrated emoji reactions as quick response options, mimicking patterns from popular messaging platforms. This feature serves dual purposes:

  • Enhanced user experience: Creates intuitive interaction patterns that reduce friction, allowing users like Jasmine to acknowledge messages with minimal effort.

  • Behavioral insight collection: Enables systematic analysis of user sentiment patterns over time, revealing which message types and content formats most effectively engage users.

While full validation of these design decisions would require user testing beyond this project's scope, these features address fundamental conversational principles that create a more natural and engaging experience.
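To illustrate how these two patterns could fit together, here is a minimal sketch, assuming a chat surface with callbacks for rendering and an in-memory analytics log; the function names, delay values, and event shape are illustrative assumptions rather than Bue's implementation.

```typescript
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

// 1. Dynamic typing indicators: pace consecutive messages with a visible "..." pause.
async function deliverSequence(
  messages: string[],
  showTyping: (visible: boolean) => void,
  pushMessage: (text: string) => void,
): Promise<void> {
  for (const text of messages) {
    showTyping(true);   // render the familiar ellipsis indicator
    await sleep(1500);  // simulated composing time (assumed value)
    showTyping(false);
    pushMessage(text);
    await sleep(2000);  // reading pause before the next bubble appears (assumed value)
  }
}

// 2. Message reactions: capture each reaction so sentiment patterns can be analyzed over time.
interface ReactionEvent {
  messageId: string;
  messageType: "empathy" | "validation" | "coping-strategy" | "check-in";
  emoji: string;      // e.g. "❤️", "👍", "😢"
  timestamp: number;
}

const reactionLog: ReactionEvent[] = [];

function recordReaction(event: ReactionEvent): void {
  reactionLog.push(event); // a real system would send this to an analytics service
}

// Rough engagement signal: which message types earn the most reactions?
function reactionsByMessageType(log: ReactionEvent[]): Record<string, number> {
  return log.reduce<Record<string, number>>((counts, event) => {
    counts[event.messageType] = (counts[event.messageType] ?? 0) + 1;
    return counts;
  }, {});
}
```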

User control

Quick replies at strategic moments empower users while reducing friction: they eliminate typing effort and turn open-ended prompts into actionable choices, which can increase completion rates.

Although Bue is knowledgeable, it shouldn't just react to input—people value having a say in matters that affect them, especially when seeking guidance. A good doctor gives you medical advice and the medicine you need to feel better; a great doctor does the same but also walks you through the options and validates your concerns. Bue applies the same approach by proactively offering choices at key conversation points, creating a sense of partnership with users.

To minimize cognitive load, the alternative quick reply collapses once the user makes a choice; a deliberate "Show other option" control lets the user view the option they did not choose.
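As a rough illustration of this behavior, here is a small state sketch; the type and function names are assumptions made for this example rather than the prototype's actual code.

```typescript
interface QuickReplyState {
  options: string[];       // the alternative replies offered at a decision point
  chosen?: string;         // the option the user tapped
  otherRevealed: boolean;  // whether "Show other option" has been pressed
}

function choose(state: QuickReplyState, option: string): QuickReplyState {
  // Tapping an option collapses the alternative to keep cognitive load low.
  return { ...state, chosen: option, otherRevealed: false };
}

function showOtherOption(state: QuickReplyState): QuickReplyState {
  // The unchosen option can be viewed on request, but it is shown for reference
  // rather than re-selection once a path has been taken.
  return { ...state, otherRevealed: true };
}

function visibleOptions(state: QuickReplyState): string[] {
  if (state.chosen === undefined) return state.options;   // nothing chosen yet: show both
  return state.otherRevealed
    ? state.options                                        // chosen option plus the revealed other
    : state.options.filter((o) => o === state.chosen);     // default: only the chosen option
}
```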

Quick replies give the user control

Click the image below to open the prototype (it opens in a new tab).

Why not just leave the other option in an enabled state so the user can click on the option? 

Seamless escalation path

Bue: Hey Jasmine! How are you feeling today?

Jasmine: Embarrassed. I just left my English class because I felt like throwing up.

Bue: I'm sorry that you are feeling this way. It sounds like you are not feeling well. What happened to make you feel like throwing up?

Jasmine: Some girls from class said mean things about me and started laughing

Bue: That sounds really difficult. Being laughed at can feel awful.

Jasmine: Yeah it does. I don't know what to do. I want to disappear. I hate this

Bue: I'm really concerned about what you're sharing, Jasmine. When you say you "want to disappear," that's something I take seriously.

Bue: I want to make sure you get the right support. Are you currently at school or somewhere else?

Jasmine: I'm at school. In the bathroom.

Bue: Thank you for letting me know. Since you're at school, I strongly encourage you to speak with your school counselor, nurse, or a trusted teacher right away. They're trained to help with situations like this.

Bue: If that's not possible right now, please consider calling the Crisis Text Line (text HOME to 741741) or the National Suicide Prevention Lifeline (988). Both are available 24/7 and can help you through this moment.

[Quick replies: "Text the Crisis Text Line" | "Call Suicide Prevention Lifeline, 988"]

Bue: Your feelings are valid, and you deserve support. Please reach out to someone nearby who can help you right now.

Transition from online to offline human support


The transition from online to offline human support represents a critical design challenge—one that must be handled with deliberation and care. After all, an AI chatbot complements but cannot replace qualified mental health professionals.

When Jasmine mentioned "I want to disappear," Bue immediately activated a risk assessment protocol. This protocol first established Jasmine's location, then pivoted decisively toward connecting her with in-person support. Specifically, Bue prioritized encouraging Jasmine to speak with someone physically present at her school, while providing crisis hotlines as interactive quick replies as a secondary safety net. These quick replies would launch directly into phone or messaging applications, minimizing friction at the moment when rapid connection matters most.
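The sketch below shows how those quick replies could launch directly into the phone or messaging app, assuming the chat runs in a web view with DOM access; the styling hook and the exact sms: body syntax are assumptions and vary by platform.

```typescript
interface CrisisAction {
  label: string;
  href: string; // deep link handed off to the OS dialer or SMS composer
}

const CRISIS_ACTIONS: CrisisAction[] = [
  // The body-parameter syntax for sms: links differs between platforms; this is one common form.
  { label: "Text the Crisis Text Line", href: "sms:741741?body=HOME" },
  { label: "Call Suicide Prevention Lifeline, 988", href: "tel:988" },
];

function renderCrisisQuickReplies(container: HTMLElement): void {
  for (const action of CRISIS_ACTIONS) {
    const link = document.createElement("a");
    link.textContent = action.label;
    link.href = action.href;        // one tap opens the dialer or messaging app
    link.className = "quick-reply"; // hypothetical styling hook
    container.appendChild(link);
  }
}
```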


Transparency drives trust in sensitive interactions. That's why I designed Bue to clearly communicate its concern to Jasmine before initiating the risk assessment process. This educational moment serves dual purposes: it helps Jasmine understand the AI's capabilities while normalizing the escalation as a standard safety feature rather than a judgment. 

The interaction ended with Bue validating her feelings and encouraging her once more to reach out to someone physically present. This balanced closure acknowledged her current state while firmly directing her toward more appropriate resources.

Conclusion

This exploration into designing an AI mental health chatbot for youth highlighted the challenge of balancing conversational authenticity with necessary safety protocols. Bue navigates this by prioritizing conversational flow, user control, and responsible escalation protocols, demonstrating how AI can provide valuable first-line mental health support while acknowledging its limitations.

The design decisions throughout this project balanced two principles: creating an engaging experience for teenage users while ensuring appropriate safeguards. This case study represents an exploratory project rather than a deployed solution, illustrating the potential for AI to lower barriers for teens and young adults who might lack access or feel reluctant to seek help for their mental health. 

Personal learning

Creating my first AI decision tree revealed a fascinating design challenge: how to balance structured decision points with the nuanced flow of emotional support conversations. When mapping Bue's interactions, I placed emotional validation elements as connection attributes rather than true nodes—a decision that may differ from industry standard approaches. This experimental process highlighted the larger question of how conversation design patterns might effectively balance deterministic safety protocols with natural emotional support.
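In graph terms, that choice looks roughly like the sketch below, where validation text rides on the edges between decision nodes instead of occupying nodes of its own; the type names are illustrative assumptions.

```typescript
// Decision nodes carry the branching logic; empathic validation rides on the edges.
interface Edge {
  condition: string;     // e.g. "trigger identified, moderate concern"
  validation?: string;   // validation line delivered while transitioning, not a node itself
  nextNodeId: string;
}

interface DecisionNode {
  id: string;
  question: string;      // e.g. "Are you currently at school or somewhere else?"
  edges: Edge[];
}

// Example: validation as a connection attribute between two nodes.
const identifyTrigger: DecisionNode = {
  id: "identify-trigger",
  question: "What happened to make you feel this way?",
  edges: [
    {
      condition: "trigger identified, moderate concern",
      validation: "That sounds really difficult. Being laughed at can feel awful.",
      nextNodeId: "ask-about-environment",
    },
  ],
};
```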

The process has transformed my understanding of AI's role in mental health support—revealing both its potential for increasing accessibility and the sophisticated design thinking required to implement it responsibly.


© Monica Zambrano 2025

monicazambrano.design@gmail.com
