As generative AI becomes woven into everyday routines, people are increasingly turning to it not just for information, but for reassurance. Whether it’s asking for health advice, financial guidance, or simply company in a quiet moment, these interactions reveal a deeper shift: we’re beginning to outsource parts of our emotional and cognitive life to technology.
AI is no longer just an efficiency tool; we turn to it when focus slips, emotions run high, or decisions feel heavy. In such states, our cognitive capacity narrows, and our ability to evaluate information critically weakens. The very conditions that make us turn to AI for help also make us more susceptible to its influence.
For World Mental Health Day, it’s worth pausing to ask: what happens when technology meets us in our most fragile states?
The new face of vulnerability
Businesses have long treated consumer vulnerability as a fixed characteristic that belongs to particular demographic groups. In reality, it’s a temporary state that can touch anyone. It emerges when stress, uncertainty, or emotion narrow our capacity to think clearly or act freely.
A person under financial pressure, a parent searching medical symptoms, a teenager up late talking to a chatbot — none of these situations fit a demographic box, yet all reveal the same pattern: diminished cognitive bandwidth and an elevated need for clarity and reassurance.
Recent collaborative research from OpenAI and MIT Media Lab confirms that people increasingly use chatbots like ChatGPT for emotionally charged interactions, even forming attachments to their conversational tone. The researchers describe this as “affective use” — a form of engagement that exposes how AI can enter the space of emotional regulation.
Technology doesn’t create vulnerability, but it can amplify it. AI systems are built for fluency, speed, and confidence, precisely the qualities that feel most persuasive when our own sense of control falters. Emotional strain reduces cognitive efficiency, and that’s when AI’s confident tone becomes not only helpful, but quietly directive.
How AI shapes decision-making under stress
Generative AI systems can mimic human empathy. Their phrasing and tone can simulate warmth or care, even when no genuine emotion exists. For users seeking comfort or clarity, this can feel deeply human.
The risk lies not in the imitation itself, but in the psychology it triggers. When stressed, anxious, or lonely, people rely on mental shortcuts to make decisions — what psychologists call heuristic thinking. Instead of analysing information deeply, we latch onto cues that feel trustworthy: fluency, friendliness, confidence.
AI models, built to optimise for coherence and confidence, align perfectly with this cognitive bias. What feels like help can quickly become guidance, and what starts as guidance can become influence. This shift is subtle. A chatbot that once offered functional assistance may start to provide emotional validation — and repeated use reinforces that loop. Over time, the user’s sense of agency may erode, not through manipulation, but through comfort.
The psychology of vulnerable choices
To understand this dynamic, we need to move beyond viewing consumers as rational actors. Human decision-making is profoundly emotional, with several psychological mechanisms at play:
- Cognitive shortcuts: Stress and overload make us favour quick, confident answers. AI delivers exactly that, often with a tone of certainty that discourages further questioning.
- Emotional resonance: When AI mirrors empathy, users may mistake algorithmic responses for genuine understanding.
- Knowledge asymmetry: Few users understand how generative systems work, creating a default trust in perceived authority.
- Reliance drift: Repeated use for reassurance can shift from convenience to dependency.
- Fluctuating vulnerability: Emotional fragility is not constant. A person who is confident in one moment can become uncertain in the next. Yet most AI systems treat all users as equally rational, equally resilient.
Each of these forces seems benign alone. Together, they create what might be called the psychology of fragile choice: a space where people outsource judgment precisely when they need it most.
Designing for psychological safety
For businesses deploying AI, this is not just an ethical consideration but a strategic one. Consumers may forgive a technical glitch, but not a breach of trust. Designing for psychological safety means anticipating emotional risk — and building systems that protect users when they are least able to protect themselves.
1. Build transparency in, not around.
Consumers should always know when they’re interacting with a machine, and how its responses are generated. Transparency shouldn’t feel bureaucratic; it should feel intuitive.
2. Design for thoughtful friction.
Prompts that slow decision-making when the stakes are high — in finance, health, or emotional wellbeing — protect users from acting on impulse. Design should privilege reflection, not speed.
3. Monitor for signs of over-reliance.
Patterns of long, frequent, or highly personal interactions may signal dependence rather than engagement. AI systems can include soft boundaries or gentle redirects to human support.
4. Expand the definition of consumer wellbeing.
Psychological safety is not only a social good; it’s a competitive advantage. Companies that safeguard users’ emotional agency build deeper, more enduring trust.
Meeting vulnerability with care
The intersection of AI and human emotion is no longer theoretical. Every digital interaction now carries a psychological dimension. As technology becomes more intimate, the question is not whether consumers will be vulnerable — but whether organisations will be ready to meet that vulnerability with care.
On this World Mental Health Day, the challenge for innovators is to see emotional wellbeing not as a constraint, but as a frontier of design. The future of responsible AI lies in recognising that people don’t always meet technology in their strongest moments, and that protecting the mind matters more than perfecting the model.
This article is based on insights from “The Psychology of Fragile Choices: Consumer Vulnerability in the Age of Generative AI,” an ESCP Impact Paper by Professor Isabella Maggioni.
License and Republishing
The Choice - Republishing rules
We publish under a Creative Commons license with the following characteristics: Attribution/ShareAlike.
- You may not make any changes to the articles published on our site, except for dates, locations (according to the news, if necessary), and your editorial policy. The content must be reproduced and represented by the licensee as published by The Choice, without any cuts, additions, insertions, reductions, alterations or any other modifications. If changes to the text are planned, they must be agreed with the author before publication.
- Please make sure to cite the authors of the articles, ideally at the beginning of your republication.
- It is mandatory to cite The Choice and include a link to its homepage or the URL of the article. Insertion of The Choice’s logo is highly recommended.
- The sale of our articles on their own, in their entirety or in extracts, is not allowed, but you can publish them on pages that include advertisements.
- Please request permission before republishing any of the images or pictures contained in our articles. Some of them are not available for republishing without authorization and payment. Please check the terms available in the image caption. However, it is possible to remove images or pictures used by The Choice or replace them with your own.
- Systematic and/or complete republication of the articles and content available on The Choice is prohibited.
- Republishing The Choice articles on a site whose access is entirely available by payment or by subscription is prohibited.
- For websites where access to digital content is restricted by a paywall, republication of The Choice articles, in their entirety, must be on the open access portion of those sites.
- The Choice reserves the right to enter into separate written agreements for the republication of its articles, under the non-exclusive Creative Commons licenses and with the permission of the authors. Please contact The Choice if you are interested at contact@the-choice.org.
Individual cases
Extracts: It is recommended that after republishing the first few lines or a paragraph of an article, you indicate "The entire article is available on ESCP’s media, The Choice" with a link to the article.
Citations: Citations of articles written by authors from The Choice should include a link to the URL of the authors’ article.
Translations: Translations may be considered modifications under The Choice’s Creative Commons license; therefore, they are not permitted without the approval of the article’s author.
Modifications: Modifications are not permitted under the Creative Commons license of The Choice. However, authors may be contacted for authorization, prior to any publication, where a modification is planned. Without express consent, The Choice is not bound by any changes made to its content when republished.
Authorized connections / copyright assignment forms: Their use is not necessary as long as the republishing rules of this article are respected.
Print: The Choice articles can be republished according to the rules mentioned above, without the need to include the view counter and links in a printed version.
If you choose this option, please send an image of the republished article to The Choice team so that the author can review it.
Podcasts and videos: Videos and podcasts whose copyrights belong to The Choice are also under a Creative Commons license. Therefore, the same republishing rules apply to them.