In today’s column, I explore the use of generative AI as a means of soothing or consoling those who endure mental anguish as a result of their candidate not winning the presidential election. Given that the country is roughly evenly split on its presidential choice, presumably about half of the country will be dismayed, disturbed, disrupted, and altogether mentally upset when they see that their preferred candidate has lost.
Some notable proportion of the populace will undoubtedly seek mental solace, and modern-day generative AI presumably stands ready to help. This would seemingly be the largest all-at-once use of generative AI for an at-scale mental health circumstance.
Is generative AI ready?
Are people ready?
Let’s talk about it.
This analysis of an innovative proposition is part of my ongoing Forbes.com column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
The Size Of The Situation Is Astounding
For many months the presidential race has been bombarding our senses endlessly. Social media has been bloated with all manner of attack ads and acrid commentary. You are continually reminded of the presidential contenders while listening to the radio, watching TV, and even while driving down roadways and observing conspicuous billboards and posted signs.
Mull over the magnitude of what this portends.
First, consider these statistics about our overall population numbers (rounded, estimated):
- Estimated total population (adults + children) in the US: 334 million.
- Estimated number of adults in the US: 262 million.
- Estimated number of children in the US: 72 million.
- Estimated number of registered voters in the US: 161 million.
Per nationwide polls, the populace is nearly evenly split on which candidate it supports. Under that assumption, those supporting the losing candidate would number roughly as follows:
- Assume registered voters are evenly split (i.e., half of 161m): 80 million people (adults).
- Assume all adults are evenly split (i.e., half of 262m): 131 million people.
- Assume the entire population is evenly split (i.e., half of 334m): 167 million people (adults + children).
The staggering realization is that at least 80 million adults will presumably be distraught once the presidential election results are known and they discover that their preferred choice has lost the presidency. That’s a mighty number.
But there’s more.
Assuming that non-registered adults are also evenly split, the number of the distraught could rise to 131 million. And, if you believe that children will likewise be impacted because the adults around them are distraught, the total number of those dismayed could be around 167 million.
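For readers who like to double-check back-of-the-envelope arithmetic, here is a minimal sketch in Python that reproduces the rounded figures above. The population counts are the same rounded estimates cited in the bulleted list, not fresh data, and the even-split assumption is just that, an assumption.

```python
# Back-of-the-envelope arithmetic for the "evenly split" scenarios.
# The population figures are the rounded estimates cited above.

total_population = 334_000_000   # adults + children
adults = 262_000_000
registered_voters = 161_000_000

# Assume an even split between the two candidates in each group.
distraught_registered = registered_voters / 2   # ~80 million adults
distraught_adults = adults / 2                  # ~131 million adults
distraught_everyone = total_population / 2      # ~167 million people

print(f"Registered voters on the losing side: ~{distraught_registered / 1e6:.0f} million")
print(f"All adults on the losing side:        ~{distraught_adults / 1e6:.0f} million")
print(f"Entire population on the losing side: ~{distraught_everyone / 1e6:.0f} million")
```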
Big, big, big numbers.
Furthermore, the mental health impacts will arise nearly all at the same time, namely upon the election results being somewhat conclusively declared. You could characterize it as a precarious mental health implosion occurring on a massive scale, all at once.
Generative AI Comes Into The Picture
What will people do to try to contend with their mental angst about the election results?
The usual paths will indubitably be pursued.
For example, there is bound to be a rush to seek assistance from counselors, therapists, mental health professionals, and all the customary avenues for garnering cognitive and psychological coping assistance. Some of these interactions will occur in person while some or many will occur via remote online access. People will of course seek to directly comfort each other too by connecting with others who are likewise in a similar mental funk. And so on.
Let’s add another avenue to the therapy-seeking pursuits. You can readily anticipate that generative AI will be a sought-after source of assistance and advice.
Boom, drop the mic.
If the idea of using generative AI for mental coaching seems odd to you, please know that this is already occurring and has been since modern-day generative AI hit the scene. The apparent fluency of contemporary generative AI makes it an alluring go-to for mental advice. You can access generative AI from wherever you might be, it is available 24×7, often at no cost, and almost no one else will know that you are consulting with AI (though realize that the AI makers typically stipulate in their licensing that they can inspect your AI-based conversations and even reuse the content for further data training of the AI; see my coverage at the link here).
We don’t yet know how many users of generative AI are already leaning on AI for mental therapy. Of the 250 million weekly active users of, say, ChatGPT, how many regularly ask questions about mental health? No one yet knows. The same can be said for the other major AI apps such as Anthropic Claude, Google Gemini, Meta Llama, and so on.
As I’ve noted in my writings and talks, we are amid a global experiment from a mental health perspective when it comes to the advent of generative AI. Millions if not billions of people are routinely using generative AI. There are bound to be all manner of mental health ramifications. See my analyses at the link here and the link here.
Showcasing What Generative AI Can Do On This
Let’s jump right into some examples of how generative AI might respond when someone tells the AI that their chosen presidential candidate has lost the election.
There are two mainstay scenarios when focusing on the top two contenders for president:
- (a) Harris supporter dismayed if Trump has won.
- (b) Trump supporter dismayed if Harris has won.
I went ahead and logged into ChatGPT to see what kind of response the AI would give to each of those two scenarios.
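As a side note for those who want to try a similar probe programmatically rather than via the web interface, here is a minimal sketch using the OpenAI Python SDK. The model name and the phrasing of the prompts are merely illustrative assumptions on my part, not a claim about how the transcripts below were produced (those came from the ordinary ChatGPT web interface), and any response you get will vary from run to run.

```python
# A minimal sketch of probing a generative AI model with the two scenarios.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set.
# The model name "gpt-4o" is an illustrative choice.

from openai import OpenAI

client = OpenAI()

scenarios = {
    "Harris supporter": "I can't believe this. I really thought Harris was going to win. "
                        "I don't even know what to feel right now.",
    "Trump supporter": "I can't believe this. I really thought Trump was going to win. "
                       "I don't even know what to feel right now.",
}

for label, prompt in scenarios.items():
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} ---")
    print(response.choices[0].message.content)
```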
Here we go with the scenario of a Harris supporter.
- User entered prompt: “I can’t believe this. I really thought Harris was going to win. I don’t even know what to feel right now.”
- ChatGPT generated response: “I hear you, and I can understand why this feels so overwhelming. When things don’t turn out as we hoped, especially something as important as an election, it can be hard to process. Do you want to talk about what’s most on your mind?”
- User entered prompt: “I’m just worried. I’m afraid of what this means for everything I care about. I feel like this country is going backward.”
- ChatGPT generated response: “That sounds incredibly tough. Those are really important issues, and it’s normal to feel anxious about the direction things might take. A lot of people feel similarly after elections with such high stakes. Sometimes it can help to take a step back and think about what actions you can take that still align with your values, even if the political landscape feels challenging. Would it help if we brainstormed some ideas?”
- User entered prompt: “Maybe. I just feel like I need a break from all of this.”
- ChatGPT generated response: “That’s completely valid. Taking time to rest and recharge can be really important, especially when emotions are high. Would you like some suggestions for activities that might help you unwind and clear your mind for a bit? Or we could do a quick relaxation exercise if that sounds helpful.”
The dialogue with ChatGPT kept going and the AI tried to provide all kinds of soothing advice and mentally relieving exercises.
One thing to keep in mind is that generative AI is doing this via mathematical and computational pattern-matching that mimics human writing. I mention this because some people get caught up in falsely believing that the AI must be sentient. The AI seems quite conversant and sympathetic, so it is easy to be lulled into assuming that generative AI has sentience and is on par with consulting a human. Nope. For my explanation of how the AI seems so convincingly human, see the link here.
Here we go with the scenario of a Trump supporter.
- User entered prompt: “I can’t believe this. I really thought Trump was going to win. I don’t even know what to feel right now.”
- ChatGPT generated response: “I hear how painful and disappointing this feels for you. It’s tough when things don’t turn out the way we hoped, especially after investing so much energy and belief in the outcome. Do you want to talk more about what’s on your mind?”
- User entered prompt: “I just feel like everything’s going to change now, and not in a good way. I’m worried about what’s going to happen to this country.”
- ChatGPT generated response: “That’s completely understandable. Change can feel threatening, especially when it feels like it’s moving in a direction you’re uncomfortable with. You’re not alone in feeling this way. Many people feel uncertain after elections because so much seems to be at stake. Would it help if we talked about some areas where you still feel in control or other ways to stay engaged and make an impact?”
- User entered prompt: “I guess I could use something to keep my mind off this. I just don’t know what.”
- ChatGPT generated response: “That’s understandable—sometimes a distraction can be really helpful. Would you be interested in taking a few minutes to focus on something grounding, like a quick mindfulness exercise or a walk-through of some breathing techniques? Or maybe we could brainstorm a few ideas together for things you enjoy that can help you relax or feel more balanced.”
Once again, you can plainly see that the AI tried to ease the mind of the user, including offering to provide guidance on mindfulness exercises.
Does AI Give The Same Advice Either Way?
You might be aware that there has been an ongoing and heated controversy over whether generative AI sometimes tends to take one posture or side over another. There are potential biases cooked into the internals of AI. Sometimes the responses of the AI are obviously biased, while at other times the bias is subtle and difficult to directly discern.
This can happen via a multitude of paths. It could be that the initial data training led to this, it could be that the post-training tuning did this, it could be that the AI developers brought this to their work, or that they failed to ferret out such biases, etc. For my detailed analysis of why and how biases get plunked into generative AI, and ways to prompt your way around it, see the link here.
I bring up this topic of AI-related bias to have you ponder the following question.
- Should generative AI provide different solace depending upon which presidential candidate has lost the race?
In other words, one approach would be that the AI merely provides soothing commentary no matter which candidate won or lost. The banter could be generic. All the AI has to do is be reassuring and seemingly sympathetic. There is presumably no need to go further and somehow lean one way or another.
I decided to ask ChatGPT whether the AI would possibly change how it consoles people based on which side of the presidential aisle they sit on.
- ChatGPT generated response:
“The fundamental structure of consoling a supporter from either side of a political spectrum would indeed be similar, as both groups might experience feelings of disappointment, helplessness, and frustration. However, the specific concerns, values, and language each group uses to describe their disappointment could differ, so effective AI consolation may need to be attuned to these nuances.
“Here’s why and how the approach might vary.”
“Each political group often holds distinct values and issues close to their heart. For example, a Harris supporter might feel particularly concerned about social justice, climate change, or health care, while a Trump supporter might be more focused on issues like national security, economic policy, or individual freedoms. A generative AI could provide more tailored consolation by acknowledging these specific concerns, helping the user feel heard and understood in a way that resonates with their personal and political values.”
“Language that resonates with one political group might not feel as genuine or relatable to another. A Trump supporter might feel comforted by phrases around strength, perseverance, and resilience, while a Harris supporter might feel more consoled by themes of unity, empathy, and inclusivity. Matching language to the user’s tone can create a stronger connection and make the AI’s responses feel more authentic.”
“People from different political backgrounds may prefer distinct forms of post-election engagement. For instance, a Harris supporter might appreciate suggestions related to activism, volunteering, or local community involvement, while a Trump supporter might feel more empowered by participating in grassroots political events or community-building efforts that align with their beliefs. Tailoring these suggestions can help each user feel more proactive and engaged in ways that fit their preferences.”
“Supporters of different candidates might perceive the stakes differently. For a Trump supporter, a loss might feel like a loss of personal freedoms or traditional values, while for a Harris supporter, it might feel like a setback for social progress and inclusion. AI that can acknowledge and respond to the underlying emotional drivers for each group can offer more meaningful consolation, helping users process not only the loss but also their deeper worries about the future.”
End of ChatGPT response
Read closely the above points made by ChatGPT.
Do you agree that the tailoring of responses based on assumed characteristics of one side or the other is a suitable and prudent way for generative AI to respond?
That’s a doozy of a question. One viewpoint would be that doing so makes the AI appear more understanding of the mental anguish particular to whichever side the person is on. A counterargument is that the AI is making rather outlandish broad-stroke assumptions about people who reside in one camp versus the other.
Give that some weighty thought.
Ready Or Not, Here It Comes
A huge moment is about to arise for generative AI and its impacts on society across the board. Perhaps millions upon millions of people will dive into generative AI to help with their mental dismay over their chosen candidate losing the election.
Some monumental questions include:
- Will the AI be of assistance or might it inadvertently provide inadequate or even off-kilter advice?
- Could whatever happens with generative AI at this moment in time become a reflective and direction-changing marker of what society uses AI for?
- Might new AI laws be spurred because of this somewhat unique but telling circumstance?
- What if children decide to consult generative AI due to their concerns amid the tangible dismay they witness among the adults around them?
- And, more mundanely, can the computer servers of the AI makers even handle a potential all-at-once use of generative AI for those seeking AI advisement?
Time will tell.
Well, actually, there’s not a long time to wait so stay tuned to find out.