In a world where digital assistance is becoming an integral part of daily life, new research from the University of Kansas reveals fascinating insights into how people interact with AI chatbots, particularly when it comes to sensitive health information. The study, led by assistant professor Vaibhav Diwanji, found that while individuals prefer the nonjudgmental and anonymous nature of AI chatbots when discussing embarrassing topics, they still lean toward human interaction when dealing with anger, particularly around politically charged topics like COVID-19.
The study examined how emotions like anger and embarrassment shape people's interactions with AI versus human support, specifically in the context of discussing COVID-19 vaccines. With the pandemic stirring up both anger and embarrassment among people worldwide, KU researchers conducted a lab-based study in which participants were shown videos intended to elicit these emotions, ranging from clips of domestic violence to erotic scenes. Participants were then assigned either a human responder or an AI chatbot to discuss their vaccine opinions and intentions.
As emotions ran high, participants showed distinct preferences: those feeling embarrassed preferred AI chatbots, while those who felt angry chose human interaction. This finding underscores the significant role emotions play in determining the medium through which individuals seek health-related information. The AI chatbots, designed to be nonjudgmental, offered a safe space for those dealing with discomfort, while human responders were seen as better suited to handling the more intense emotion of anger.
The study highlights a crucial aspect of AI integration into sensitive domains like healthcare: personalization and empathy. While AI technology is becoming increasingly sophisticated, it is not set to replace humans but rather to complement them. “AI is getting more and more pervasive,” Diwanji explains, “and it’s important for health professionals and marketers to ensure that it is used ethically. By blending technology with emotional insights, we can create more personalized and effective interactions with consumers.”
The research, which utilized advanced eye-tracking technology to measure participants’ emotional responses, offers a glimpse into the future of marketing and consumer interactions. As AI tools like chatbots and virtual assistants continue to evolve, understanding how to balance technological convenience with emotional sensitivity will be key to building long-term trust and engagement.
This study is a reminder that even as AI, including more niche applications like AI human generators, becomes ubiquitous, human emotions remain complex and often call for the understanding and empathy that only another human can provide. AI human generators, for instance, can create realistic, human-like avatars for everything from customer service to virtual influencers. Examples include platforms like Replika, an AI chatbot designed to mimic human conversation, and Synthesia, which generates realistic video avatars for personalized marketing and training.
As the study suggests, understanding when to engage AI and when to turn to humans could be pivotal in shaping how we navigate the digital age, especially in healthcare and beyond.