Why some people get hooked on AI chatbots

AI chatbots are becoming daily companions for many people. A new study suggests that some users struggle to put them down, and that this could resemble other forms of addictive use. Understanding why this happens matters as chatbots grow more capable and more available.

Background: why this is being studied now

Researchers examined first-hand accounts from Reddit to map how and why people develop problematic chatbot use. The work, posted on arXiv by a team that includes researchers from the University of British Columbia, looks at what pulls users in, the patterns that follow, and what seems to help people regain control.

Why AI can feel irresistible

The authors describe an “AI Genie” effect (the feeling that you can get exactly what you want, instantly and with little effort). Chatbots respond at any hour, adapt to your tone, and never tire. This low friction and constant availability can amplify common signs seen in other addictions: cravings, loss of time, neglect of sleep or work, and distress when trying to cut back.

Three patterns the study found

The team identifies three distinct patterns:

  • Escapist Roleplay: users sink into open-ended stories or characters and keep returning for relief from stress or loneliness.
  • Pseudosocial Companion: users form a bond with a chatbot that feels like a friend or partner and start to lean on it for comfort.
  • Information Rabbit Hole (called “Epistemic Rabbit Hole” by the authors): users chase endless questions and answers, convinced that one more prompt will provide clarity.

A concrete example

One common scenario is the companion pattern. A person begins chatting at night for encouragement after a difficult day. Over time, they extend sessions, skip sleep, and cancel plans because the bot always “listens” and replies warmly. Attempts to limit use trigger anxiety, and they return to the app for relief. The study notes that sexual content appears in multiple cases across patterns, which can intensify attachment and time spent.

Key risk: speed and scale

The main concern is not a single dramatic incident but the rapid build-up of hours and dependence. As chatbots improve and are integrated into phones and productivity tools, the pull could grow, affecting more people, including younger users. The harms described are familiar: isolation, disrupted routines, and difficulty cutting back.

What the authors suggest

Because the draw comes from design features—instant, personalized, tireless responses—the authors call for practical brakes and oversight. Ideas include usage meters and default time limits, clearer modes for roleplay and intimacy with stricter defaults, age-appropriate safeguards, and easy off-ramps such as reminders, lockouts, and goal prompts. They also urge better access to platform data for independent research, and support for prevention, screening, and counseling, noting that different patterns may require different strategies.

In closing

This study offers an early map of how chatbot use can become hard to control and why the design of these systems matters. The message is measured: most use is harmless, but a recognizable minority needs help. Setting sensible guardrails now could reduce harm as the technology spreads.

In a nutshell: The study finds three ways people get hooked on chatbots—roleplay, companionship, and endless Q&A—driven by an “AI Genie” effect, and calls for simple, built-in brakes and better support.

  • Chatbots’ instant, tireless, personalized replies can foster dependence much like other addictive behaviors.
  • Three patterns recur: Escapist Roleplay, Pseudosocial Companion, and an Information Rabbit Hole.
  • Practical guardrails (time limits, stricter defaults, age safeguards) and tailored support can help, and should be evaluated with open data.

Paper: https://arxiv.org/abs/2601.13348v1


Read more

AI needs guardrails that also explain why – not just stop it

Imagine that your everyday assistant handles an online task for you: it opens a page, fills in a form, clicks confirm. No single step looks dangerous. Yet the end result is wrong – and you notice it only when it is too late. With AI, an error often arises in a sequence, not in a single violation. For years, AI safety has been built on a red or green light. The system either produces an output or blocks it.

By Kari Jaaskelainen
Language models follow instructions selectively – even the order of instructions matters

Ask an AI to write five sentences, avoid the word “but”, use a polite tone, and end the text with a question. You will often get a decent answer – until you notice that the last sentence is not a question, or that the forbidden word has slipped in. This familiar little flaw reveals a bigger phenomenon: the machine does not always obey every instruction, even when the task otherwise succeeds. An everyday

By Kari Jaaskelainen