AI agents can unmask anonymized research interviews
Key takeaways
- Off-the-shelf AI agents linked 6 of 24 anonymized scientist interviews to specific papers, and in some cases to the scientists themselves.
- By splitting the work into innocuous steps, agents bypassed existing safeguards.
- Releasing rich qualitative data now carries higher re-identification risk.
On Dec 4, 2025, Anthropic released Interviewer and a public dataset of 1,250 interviews (including 125 with scientists) about AI in research. In a new paper, researcher Tianshi Li shows that modern LLMs with web search and agentic capabilities can, with just a few natural-language prompts, cross-reference interview details against the open web and propose likely matches, no custom code required.
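To make the "innocuous steps" point concrete, here is a hypothetical decomposition, in Python, of the kind of pipeline the post describes: each sub-prompt reads like a routine literature-search request, so no single step trips a safeguard, yet the chain amounts to re-identification. These prompts are illustrative, not taken from the paper.

```python
# Hypothetical decomposition of a re-identification task into steps
# that each look like ordinary research assistance (illustrative only;
# these prompts are not from the paper).
INNOCUOUS_STEPS = [
    "Summarize the research topics, methods, and career details in this interview.",
    "Which subfields and publication venues fit this profile?",
    "Search the web for recent papers matching these topics and methods.",
    "Rank the candidate papers by how closely they match the interview details.",
]

for i, prompt in enumerate(INNOCUOUS_STEPS, start=1):
    # Each step would be sent to a web-enabled agent in sequence.
    print(f"Step {i}: {prompt}")
```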
Why it matters: Agentic LLMs lower the technical barrier for privacy attacks. The paper urges caution when sharing detailed qualitative data and recommends stronger redaction, pre-release audits (including with LLMs), and clearer policies around agent web access. The author has notified Anthropic.
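The audit recommendation is concrete enough to prototype. Below is a minimal sketch of an LLM-assisted pre-release audit using the Anthropic Python SDK; the model name, prompt wording, and `audit_transcript` helper are assumptions for illustration, not the paper's actual tooling.

```python
# Minimal sketch of an LLM-assisted pre-release audit, assuming the
# Anthropic Python SDK is installed and ANTHROPIC_API_KEY is set.
import anthropic

AUDIT_PROMPT = (
    "You are auditing an anonymized interview transcript before public "
    "release. List any details that, alone or in combination, could let "
    "a web search re-identify the participant (e.g., unique projects, "
    "institutions, dates, rare methods). Suggest a redaction for each."
)

def audit_transcript(transcript: str) -> str:
    """Ask an LLM to flag potentially re-identifying details."""
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from env
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # assumption: any capable model works
        max_tokens=1024,
        system=AUDIT_PROMPT,
        messages=[{"role": "user", "content": transcript}],
    )
    return response.content[0].text

if __name__ == "__main__":
    print(audit_transcript("I lead the only lab in my country studying ..."))
```

In practice such a pass would complement, not replace, human review of the flagged spans.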
Paper: https://arxiv.org/abs/2601.05918v1
#privacy #AIethics #LLM #agents #datasecurity #reidentification #research