Teaching chatbots to stop contradicting themselves (DECODE)
Ever had a bot say one thing, then the opposite a few turns later? This study introduces DECODE, a new task and dataset for spotting contradictions in everyday conversations, drawn from both human-human and human-bot chats.
- New data beats existing natural language inference (NLI) resources for training contradiction detectors in dialogue.
- A structured, utterance-by-utterance approach using pre-trained Transformers outperforms typical unstructured methods, especially on tough, out-of-distribution chats (see the sketch after this list).
- The best model’s scores align well with human judgments.
- It can automatically evaluate, and even help improve, the consistency of modern generative chatbots.
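To make the structured idea concrete, here is a minimal sketch that pairs a new reply with each earlier utterance and runs an off-the-shelf NLI classifier on every pair. It uses roberta-large-mnli as a stand-in; the model choice, the threshold, and the contradicts_history helper are illustrative assumptions, not the authors' code.

```python
# Sketch of the structured, utterance-by-utterance contradiction check.
# Assumption: an off-the-shelf NLI model (roberta-large-mnli) substitutes
# for the paper's DECODE-trained detector.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
CONTRADICTION = 0  # roberta-large-mnli label order: contradiction, neutral, entailment

def contradicts_history(history: list[str], reply: str, threshold: float = 0.5) -> bool:
    """Flag the reply if any (past utterance, reply) pair is classified
    as a contradiction -- the utterance-by-utterance structured approach."""
    for past in history:
        inputs = tokenizer(past, reply, return_tensors="pt", truncation=True)
        with torch.no_grad():
            probs = model(**inputs).logits.softmax(dim=-1)[0]
        if probs[CONTRADICTION] > threshold:
            return True
    return False

history = ["I love dogs. I have two of them.", "They keep me busy every day."]
print(contradicts_history(history, "I've never owned a pet."))  # likely True
```

The same check can also score or rerank a generator's candidate replies, dropping any that contradict the dialogue history, which is one way such a detector can help improve consistency as well as measure it.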
Why it matters: More consistent assistants feel smarter, safer, and more trustworthy.
Paper: http://arxiv.org/abs/2012.13391v2
Authors: Yixin Nie, Mary Williamson, Mohit Bansal, Douwe Kiela, Jason Weston
Register: https://www.AiFeta.com
#AI #Chatbots #NLP #MachineLearning #ConversationalAI #Consistency #Research