What your posts may reveal: UCLA tests if social media text can flag health risks among gay and bisexual men
Researchers in California say everyday writing on social media and dating apps can help spot patterns linked to sexual health and alcohol use among men who have sex with men. If used carefully and with consent, these tools could support earlier, more tailored public health outreach.
Why this is being studied now
The study comes from the University of California, Los Angeles. As more of life moves online, public health teams are asking whether language in posts and messages could signal when someone might benefit from information on prevention, testing or treatment. The team worked only with volunteers who agreed to share their text data.
What the authors tested
The researchers examined whether text could predict several outcomes: monthly binge drinking, heavy drinking, having more than five sexual partners, and use of HIV pre‑exposure prophylaxis (PrEP, medication that prevents HIV infection). They tried modern language tools, including large language models of the kind behind ChatGPT, which turn text into numeric representations (embeddings) that a classifier can learn from, alongside simpler word-count methods.
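To make the two approaches concrete, here is a minimal sketch, not the authors' code, of a word-count baseline and an embedding variant feeding the same kind of classifier. The posts, labels and model names are invented for illustration.

```python
# Sketch (not the study's pipeline) of the two feature styles described above:
# a simple word-count baseline vs. dense text embeddings, each feeding a
# binary classifier. All data here is toy and illustrative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "big night out again, bar crawl with the guys",
    "gym then meal prep, quiet weekend",
    "another round? why not, it's friday",
    "early hike tomorrow so staying in tonight",
]
labels = [1, 0, 1, 0]  # toy labels: 1 = reported binge drinking, 0 = not

# Word-count baseline: each post becomes a sparse vector of token counts.
word_count_model = make_pipeline(CountVectorizer(), LogisticRegression())
word_count_model.fit(posts, labels)
print(word_count_model.predict_proba(["pub crawl saturday!"])[0, 1])

# Embedding variant (sketch): a language model maps each post to a dense
# vector, e.g. via the sentence-transformers library, and the same kind of
# classifier is trained on those vectors instead.
# from sentence_transformers import SentenceTransformer
# encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice
# X = encoder.encode(posts)
# LogisticRegression().fit(X, labels)
```

In both cases the classifier only ever sees numbers, which is why the choice of text representation matters so much.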
How well it worked
The models performed well for some measures and moderately for others. On a 0-to-1 accuracy scale used in this field, where scores near 0.5 mean chance-level performance and 1.0 means perfect prediction, they reached about 0.78 for predicting monthly binge drinking and for having more than five partners. For PrEP use and heavy drinking, the scores were around 0.64 and 0.63. These results suggest that text carries useful signals, though not perfect ones.
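For readers who want to see how such a score is produced, here is a tiny sketch assuming the metric is the area under the ROC curve (AUC), the usual 0-to-1 score in this literature; the labels and probabilities below are toy values, not the study's data.

```python
# Sketch of computing a 0-to-1 prediction score, assuming it is the area
# under the ROC curve (AUC). All numbers are toy values.
from sklearn.metrics import roc_auc_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]                     # toy outcome labels
y_score = [0.9, 0.2, 0.7, 0.6, 0.4, 0.3, 0.8, 0.5]    # toy model probabilities

print(roc_auc_score(y_true, y_score))  # 1.0 = perfect, 0.5 = chance
```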
A structural issue to keep in mind
Language is an indirect window into behaviour. Models learn from patterns in words, which can vary by community, platform and time. This means a system that works in one group may be less reliable in another, and small errors can add up if tools are deployed widely.
A concrete example
Imagine a man who often posts about weekend bar crawls. A model might flag a risk of binge drinking, prompting a health service to send information about support and safer use. If the model misreads jokes or quotes as personal behaviour, the message could feel intrusive or off the mark.
Main risk: speed and scale
The authors see promise but warn about privacy, consent and stigma. Automated screening works at a speed and scale no human outreach team could match, so even well-intentioned scanning of sensitive topics could become a form of unwanted monitoring if it is not clearly explained, voluntary and well protected.
What the authors propose
Use these tools only with informed consent, strong data protection and clear limits on purpose. Test models for bias, review them regularly, and involve community organisations in design. Keep a human in the loop so outreach is respectful, optional and helpful.
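As a rough illustration of the bias testing the authors call for, the sketch below scores a hypothetical model separately within two invented subgroups; a large gap between per-group scores would flag a fairness problem before any deployment. Everything here, subgroups included, is illustrative.

```python
# Minimal subgroup audit sketch: compare the model's AUC separately within
# each subgroup (here a hypothetical "platform" attribute). Toy data only.
from sklearn.metrics import roc_auc_score

records = [
    # (subgroup, true_label, model_score)
    ("app_a", 1, 0.80), ("app_a", 0, 0.30), ("app_a", 1, 0.60), ("app_a", 0, 0.40),
    ("app_b", 1, 0.55), ("app_b", 0, 0.50), ("app_b", 1, 0.70), ("app_b", 0, 0.65),
]

for group in {g for g, _, _ in records}:
    y = [lab for g, lab, _ in records if g == group]
    s = [sc for g, _, sc in records if g == group]
    # A large gap between groups (here 1.0 vs 0.75) would warrant investigation.
    print(group, round(roc_auc_score(y, s), 2))
```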
In closing
The UCLA study shows that language in posts can offer clues about health-related behaviour among gay, bisexual and other men who have sex with men. The findings point to practical uses in public health, provided that safeguards, choice and careful oversight come first.
In a nutshell: With consent and safeguards, language tools can help tailor health outreach to men who have sex with men, but privacy, bias and error risks require strict oversight.
- Text can be a useful signal: the best models were fairly accurate for some behaviours.
- Limits matter: words are imperfect proxies and can vary across people and platforms.
- Governance is essential: use opt-in, strong privacy, bias testing and human review.
Paper: https://arxiv.org/abs/2601.13558v1
#publichealth #AI #privacy #LGBTQ #health #research #socialmedia