Can AI spot phishing? A new email dataset puts it to the test
Phishing and spam are getting smarter, and increasingly they are written by large language models. Rebeka Toth, Tamas Bisztray, and Richard Dubniczky have released a labeled email dataset that separates phishing, spam, and legitimate messages and flags whether each message was written by a human or an LLM.
- Each email is annotated for emotional hooks (e.g., urgency, fear, authority) and the attacker's goal (e.g., link clicks, credential theft, financial fraud); a rough record sketch follows this list.
- Multiple LLMs were benchmarked on detecting these cues; the best-performing model was then used to annotate the full set.
- To test robustness, the emails were rephrased by several LLMs while preserving their meaning and intent.
- A state-of-the-art LLM was evaluated on original and rephrased emails against expert ground truth.
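
To give a feel for the data, here is a minimal sketch of what one annotated record could look like. The field names and values (label, author, emotional_hooks, attacker_goal, rephrased_by) are illustrative assumptions, not the paper's actual schema.

```python
# Illustrative sketch only: field names and values are assumptions,
# not the released dataset's actual schema.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class EmailRecord:
    text: str                                   # raw email body
    label: str                                  # "phishing" | "spam" | "legitimate"
    author: str                                 # "human" | "llm"
    emotional_hooks: List[str] = field(default_factory=list)  # e.g. urgency, fear, authority
    attacker_goal: Optional[str] = None         # e.g. "credential_theft", "link_click"
    rephrased_by: Optional[str] = None          # model name if this is a rephrased variant

example = EmailRecord(
    text="Your account has been locked. Verify your password within 24 hours.",
    label="phishing",
    author="llm",
    emotional_hooks=["urgency", "fear"],
    attacker_goal="credential_theft",
)
```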
Findings: AI is strong at catching phishing, but still struggles to tell spam from legitimate emails.
The dataset, code, and templates are openly available alongside the paper (link below).
Useful for defenders, researchers, and anyone building safer inboxes.
Paper: https://arxiv.org/abs/2511.21448v1
Register: https://www.AiFeta.com
#cybersecurity #phishing #spam #emailsecurity #dataset #LLM #AI #openscience #infosec