Using ChatGPT Frequently Can Make You Feel Lonely, Says MIT Study
The study's authors concluded that users who "bonded" with and trusted ChatGPT more were more likely to experience loneliness than those who did not.

MIT Media Lab and OpenAI jointly conducted a study suggesting that ChatGPT may be increasing feelings of loneliness among its most frequent users. More than 400 million individuals use ChatGPT every week, making it a phenomenon since its launch over two years ago. The study was motivated by the fact that a portion of users interact emotionally with ChatGPT, even though the platform is neither designed nor marketed as an AI companion.
The researchers took a two-pronged approach. First, they surveyed more than 4,000 users about their self-reported ChatGPT behaviour and analysed millions of chat conversations and audio exchanges.
Second, the MIT Media Lab enrolled 1,000 participants in a four-week trial that examined how they used ChatGPT for at least five minutes daily.
Although social isolation and loneliness can stem from a variety of circumstances, the study's authors concluded that individuals who trusted and "bonded" with ChatGPT more were more likely to experience loneliness and to rely on it than others.
"Overall, higher daily usage across all modalities and conversation types correlated with higher loneliness, dependence, problematic use, and lower socialization," the study emphasised.
The researchers also closely examined how users interacted with ChatGPT's Advanced Voice Mode, a speech-to-speech interface. The bot was configured to communicate in two ways: neutral and engaging. In the engaging mode, the LLM-powered bot openly expressed emotion, while in the neutral mode it stayed even-toned regardless of the user's emotional state.
"Results showed that while voice-based chatbots initially appeared beneficial in mitigating loneliness and dependence compared with text-based chatbots, these advantages diminished at high usage levels, especially with a neutral-voice chatbot," the study stated.
The researchers said that even though the technology is still in its infancy, the study could help spark a discussion about its full effects on users' mental health.
"A lot of what we are doing here is preliminary, but we are trying to start the conversation with the field about the kinds of things that we can start to measure and to start thinking about what the long-term impact on users is," Jason Phang, an OpenAI safety researcher who worked on the project, said in the MIT Media Lab report.
The study comes against the backdrop of OpenAI's release of GPT-4.5, which the company says is a more emotionally intelligent and intuitive model than its predecessor and rivals.