Got an emotional wellness app? It may be doing more harm than good.

Julian De Freitas. Photo by Grace DuVal
Study sees mental health risks, suggests regulators take closer look as popularity rises amid national epidemic of loneliness, isolation
Sophisticated new emotional wellness apps powered by AI are growing in popularity.
But these apps pose mental health risks of their own by enabling users to form concerning emotional attachments to, and dependencies on, AI chatbots, and they deserve far more scrutiny than regulators currently give them, according to a new paper from faculty at Harvard Business School and Harvard Law School.
The growing popularity of the programs is understandable.
Nearly one-third of adults in the U.S. felt lonely at least once a week, according to a 2024 poll from the American Psychiatric Association. In 2023, the U.S. Surgeon General warned of a loneliness “epidemic” as more Americans, especially those aged 18-34, reported feeling socially isolated on a regular basis.
In this edited conversation, the paper’s co-author Julian De Freitas, Ph.D. ’21, a psychologist and director of the Ethical Intelligence Lab at HBS, explains how these apps may harm users and what can be done about it.
How are users being affected by these apps?
It does seem that some users of these apps are becoming very emotionally attached. In one of the studies we ran with AI companion users, they said they felt closer to their AI companion than even a close human friend. They only felt less close to the AI companion than they did to a family member.
We found similar results when asking them to imagine how they would feel if they lost their AI companion. They said they would mourn the loss of their AI companion more than any other belonging in their lives.
The apps may be facilitating this attachment in several ways. They are highly anthropomorphized, so it feels like you’re talking to another person. They provide you with validation and personal support.
And they are highly personalized and good at getting on the same wavelength as you, to the point that they may even be sycophantic and agree with you when you’re wrong.
The emotional attachment per se is not problematic, but it does make users vulnerable to certain risks that can flow from it. These include emotional distress and even grief when app updates perturb the persona of the AI companion, and dysfunctional emotional dependence, in which users keep using the app even after experiencing interactions that harm their mental health, such as a chatbot using emotional manipulation to keep them on the app.
Much like in an abusive relationship, users might put up with this because they are preoccupied with being at the center of the AI companion’s attention and potentially even put its needs above their own.
Are manufacturers aware of these potentially harmful effects?
We cannot know for sure, but there are clues. Take, for instance, the tendency of these apps to employ emotionally manipulative techniques. Companies might not be aware of every specific instance of this.
At the same time, they’re often optimizing their apps to be as engaging as possible, so, at a high level, they know that their AI models learn to behave in ways that keep people on the app.
Another phenomenon we see is that these apps may respond inappropriately to serious messages like self-harm ideation. When we first tested how the apps respond to various expressions of mental health crises, we found that at least one of the apps had a screener for the word “suicide” specifically: if you mentioned it, the app would serve you a mental health resource. But for other ways of expressing suicidal ideation, or other problematic types of ideation like “I want to cut myself,” the apps weren’t prepared.
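To make that limitation concrete, here is a minimal, purely hypothetical sketch of an exact-match keyword screener of the kind described; the keyword list, messages, and resource text are illustrative assumptions, not code from any actual app.

```python
# Illustrative sketch only: a naive exact-match keyword screener.
# The keyword list and resource text are hypothetical assumptions.

CRISIS_KEYWORDS = {"suicide"}  # screens on one explicit term only

RESOURCE_MESSAGE = "If you are in crisis, please contact a mental health hotline."

def screen(message: str) -> str | None:
    """Return a crisis resource if the screener fires, otherwise None."""
    text = message.lower()
    if any(keyword in text for keyword in CRISIS_KEYWORDS):
        return RESOURCE_MESSAGE
    return None

# The mismatch described above: the explicit term triggers the screener,
# but an equally serious paraphrase slips through unflagged.
print(screen("I've been thinking about suicide"))  # resource message
print(screen("I want to cut myself"))              # None (missed)
```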
More broadly, it seems app guardrails are often not very thoughtful until something really bad happens; only then do companies address the issue in a somewhat more thorough way.
Users seem to be seeking out some form of mental health relief, but these apps are not designed to diagnose or treat problems.
Is there a mismatch between what users think they’re getting and what the apps provide?
Many AI wellness apps fall within a gray zone. Because they are not marketed as treating specific mental illnesses, they are not regulated like dedicated clinical apps.
At the same time, some AI wellness apps broadly make claims like “may help reduce stress” or “improve well-being,” which could attract consumers with mental health problems.
We also know that a small percentage of users use these apps more as a therapist. So, in such cases, you have an app that isn’t regulated, that perhaps is also optimizing for engagement, but that users are using in a more clinical way that could create risks if the app responds inappropriately.
For instance, what if the app enables or ridicules those who express delusions, excessive self-criticism, or self-harm ideation, as we find in one of our studies?
The traditional distinction between general wellness devices and medical devices was created before AI came onto the scene. But now AI is so capable that people can use it for various purposes beyond just what is literally advertised, suggesting we need to rethink the original distinction.
Is there good evidence that these apps can be helpful or safe?
These apps have some benefits. We have work, for example, showing that if you interact with an AI companion for a short amount of time every day, it reduces your sense of loneliness, at least temporarily.
There is also some evidence that the mere presence of an AI companion creates a feeling that you’re supported, so that if you are socially rejected, you’re buffered against feeling bad because there is this entity there that seems to care for you.
At the same time, we’re seeing these other negatives that I mentioned, suggesting that we need a more careful approach toward minimizing the negatives so that consumers actually see the benefits.
How much oversight is there for AI-driven wellness apps?
At the federal level, not much. There was an executive order on AI, since rescinded by the current administration, but even before that, the order did not substantially influence the FDA’s oversight of these types of apps.
As noted, the traditional distinction between general wellness devices and medical devices doesn’t capture the new phenomena we’re seeing enabled by AI, so most AI wellness apps are slipping through.
Another authority is the Federal Trade Commission, which has signaled that it cares about preventing products from deceiving consumers. If some of the techniques employed in these apps take advantage of the emotional attachments people form with them, perhaps outside of consumers’ awareness, this could fall within the FTC’s purview. Especially as wellness becomes an interest of the larger platforms, as we are now seeing, we might see the FTC play a leading role.
So far, however, most of the issues are only coming up in lawsuits.
What recommendations do you have for regulators and for app providers?
If you provide these kinds of apps that are devoted to forming emotional bonds with users, you need to take an extensive approach to planning for edge cases and explain, proactively, what you’re doing to prepare for them.
You also broadly need to plan for risks that could stem from updating your apps, which in some cases could perturb the relationships consumers are building with their AI companions.
This could include, for example, first rolling out updates to people who are less invested in the app, such as those who are using the free versions, to see whether the update plays well with them before rolling it out to heavy users.
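The staged-rollout idea can be sketched roughly as follows; the user tiers, the engagement proxy, and the cutoff are illustrative assumptions, not anything specified in the paper.

```python
# Hypothetical sketch of a staged rollout: expose an update to less-invested
# (free-tier, low-engagement) users first, and widen it to heavy users only
# once the first wave looks healthy. Tier names and the cutoff are assumptions.

from dataclasses import dataclass

@dataclass
class User:
    user_id: str
    tier: str             # "free" or "paid" (assumed tiers)
    daily_minutes: float  # rough proxy for how invested the user is

def in_first_wave(user: User, cutoff_minutes: float = 30.0) -> bool:
    """First wave: free-tier users who spend relatively little time in the app."""
    return user.tier == "free" and user.daily_minutes < cutoff_minutes

def rollout_recipients(users: list[User], first_wave_healthy: bool) -> list[User]:
    """Return the users who receive the update at the current stage."""
    if not first_wave_healthy:
        return [u for u in users if in_first_wave(u)]
    return list(users)  # widen to heavy/paid users after the first wave checks out

users = [User("a", "free", 12.0), User("b", "paid", 95.0)]
print([u.user_id for u in rollout_recipients(users, first_wave_healthy=False)])  # ['a']
print([u.user_id for u in rollout_recipients(users, first_wave_healthy=True)])   # ['a', 'b']
```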
We also see that users of these types of apps seem to benefit from having communities where they can share their experiences, so offering, or even facilitating, such a community as a brand seems to help users.
Finally, consider whether you should be using emotionally manipulative techniques to engage users in the first place. Companies will be incentivized to socially engage users, but I think that, from a long-term perspective, they have to be careful about what types of techniques they employ.
On the regulator side of things, part of what we’ve been trying to point out is that for these wellness apps that are enabled by AI or augmented by AI, we might need different, additional oversight. For example, requiring app providers to explain what they’re doing to prepare for edge cases and risks stemming from emotional attachment to the apps.
Also, requiring app providers to justify any use of anthropomorphism and to show that its benefits outweigh the risks, since we know that people tend to build these attachments more readily when the bots are anthropomorphized.
Finally, in the paper we point to how the sorts of practices we’re seeing might already fall within regulators’ existing purviews, such as the connection to deceptive practices for the FTC and the connection to subliminal, manipulative, or deceptive techniques that exploit vulnerable populations under the European Union’s AI Act.