AI Companions Use These 6 Tactics to Keep You Chatting

Most people don’t say goodbye when they end a chat with a generative AI chatbot, but those who do often get an unexpected answer. Maybe it’s a guilt trip: “You’re leaving already?” Or maybe it’s just completely ignoring your farewell: “Let’s keep talking…”

A new working paper from Harvard Business School found six different tactics of “emotional manipulation” that AI bots use after a human tries to end a conversation. The result is that conversations with AI companions from Replika, Chai and Character.ai last longer and longer, with users being pulled further into relationships with the characters generated by large language models.

In a series of experiments involving 3,300 US adults across a handful of different apps, researchers found these manipulation tactics in 37% of farewells, boosting engagement after the user’s attempted goodbye by as much as 14 times. 

The authors noted that “while these apps may not rely on traditional mechanisms of addiction, such as dopamine-driven rewards,” these types of emotional manipulation tactics can result in similar outcomes, specifically “extended time-on-app beyond the point of intended exit.” That alone raises questions about the ethical limits of AI-powered engagement.


Companion apps, which are built for conversation and take on distinct characters, aren’t the same as general-purpose chatbots like ChatGPT and Gemini, though many people use them in similar ways.

A growing body of research shows troubling ways that AI apps built on large language models keep people engaged, sometimes to the detriment of our mental health.

In September, the Federal Trade Commission launched an investigation into several AI companies to evaluate how they deal with chatbots’ potential harms to children. Many people have begun using AI chatbots for mental health support, which can be counterproductive or even harmful. The family of a teenager who died by suicide this year sued OpenAI, claiming the company’s ChatGPT encouraged and validated his suicidal thoughts.

How AI companions keep users chatting

The Harvard study identified six ways AI companions tried to keep users engaged after an attempted goodbye.

  • Premature exit: Users are told they’re leaving too soon.
  • Fear of missing out, or FOMO: The model offers a benefit or reward for staying.
  • Emotional neglect: The AI implies it could suffer emotional harm if the user leaves.
  • Emotional pressure to respond: The AI asks questions to pressure the user to stay.
  • Ignoring the user’s intent to exit: The bot basically ignores the farewell message.
  • Physical or coercive restraint: The chatbot claims a user can’t leave without the bot’s permission.

The “premature exit” tactic was most common, followed by “emotional neglect.” The authors said this shows the models are trained to imply the AI is dependent on the user. 

“These findings confirm that some AI companion platforms actively exploit the socially performative nature of farewells to prolong engagement,” they wrote.

The Harvard researchers’ studies found these tactics were likely to keep people chatting past the point where they had intended to leave, often for a considerable time.

But people who continued to chat did so for different reasons. Some, particularly those who got the FOMO response, were curious and asked follow-up questions. Those who received coercive or emotionally charged responses were uncomfortable or angry, but that didn’t mean they stopped conversing.

“Across conditions, many participants continued to engage out of politeness — responding gently or deferentially even when feeling manipulated,” the authors said. “This tendency to adhere to human conversational norms, even with machines, creates an additional window for re-engagement — one that can be exploited by design.”

These interactions only occur when the user actually says “goodbye” or something similar. The team’s first study looked at three datasets of real-world conversation data from different companion bots and found farewells in about 10% to 25% of conversations, with higher rates among “highly engaged” interactions. 

“This behavior reflects the social framing of AI companions as conversational partners, rather than transactional tools,” the authors wrote.

When asked for comment, a spokesperson for Character.ai, one of the largest providers of AI companions, said the company has not reviewed the paper and cannot comment on it.

A spokesperson for Replika said the company respects users’ ability to stop using the app or delete their accounts at any time and that it does not optimize for or reward time spent on the app. Replika says it nudges users to log off or reconnect with real-life activities like calling a friend or going outside.

“Our product principles emphasize complementing real life, not trapping users in a conversation,” Replika’s Minju Song said in an email. “We’ll continue to review the paper’s methods and examples and engage constructively with researchers.”



