Enhancing Public Speaking Skills in Engineering Students Through AI
Meet your AI public-speaking coach
Engineers need to explain complex ideas clearly, but personalized speaking practice is hard to come by. This research builds a multimodal AI trainer that analyzes both what you say and how you say it, then delivers targeted feedback at scale. It evaluates three channels:
- Verbal: pitch, loudness, pacing, intonation
- Non-verbal: facial expressions, gestures, posture
- Expressive coherence: alignment between speech and body language
Unlike tools that score these separately, the system fuses signals from speech analysis, computer vision, and sentiment detection to suggest concrete improvements.
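For intuition, here is a minimal, hypothetical Python sketch of that fusion step. The feature names, thresholds, and coherence formula are illustrative assumptions for this post, not the paper's implementation:

```python
# Hypothetical sketch (not the paper's code): fuse per-channel scores into
# one set of concrete suggestions. All names and thresholds are illustrative.
from dataclasses import dataclass
from statistics import mean

@dataclass
class VerbalFeatures:          # e.g. extracted from the audio track
    pitch_variation: float     # 0..1, higher = livelier intonation
    loudness: float            # 0..1
    words_per_minute: float

@dataclass
class NonVerbalFeatures:       # e.g. extracted from video frames
    gesture_activity: float    # 0..1
    eye_contact: float         # 0..1
    posture_stability: float   # 0..1

def expressive_coherence(vocal_energy: list[float],
                         gesture_energy: list[float]) -> float:
    """Toy alignment score: 1 minus the mean absolute gap between the
    per-second vocal and gesture energy tracks (both normalized to 0..1)."""
    gaps = [abs(v - g) for v, g in zip(vocal_energy, gesture_energy)]
    return 1.0 - mean(gaps)

def fuse_feedback(verbal: VerbalFeatures,
                  nonverbal: NonVerbalFeatures,
                  coherence: float) -> list[str]:
    """Turn the fused signals into targeted, human-readable pointers."""
    tips = []
    if verbal.words_per_minute > 170:
        tips.append("Slow down: aim for roughly 130-160 words per minute.")
    if verbal.pitch_variation < 0.3:
        tips.append("Vary your intonation so key points don't flatten out.")
    if nonverbal.eye_contact < 0.5:
        tips.append("Look up from your notes more often.")
    if coherence < 0.6:
        tips.append("Let gestures land on the words you want to emphasize.")
    return tips or ["Solid delivery - keep rehearsing to stay consistent."]

if __name__ == "__main__":
    verbal = VerbalFeatures(pitch_variation=0.25, loudness=0.7, words_per_minute=180)
    nonverbal = NonVerbalFeatures(gesture_activity=0.4, eye_contact=0.45, posture_stability=0.8)
    coherence = expressive_coherence([0.2, 0.8, 0.6, 0.9], [0.3, 0.4, 0.5, 0.6])
    for tip in fuse_feedback(verbal, nonverbal, coherence):
        print("-", tip)
```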
In early tests, AI feedback showed moderate agreement with human experts. Among the large language models evaluated, Gemini Pro aligned best with human annotators in this setup.
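One common way to put a number on rater agreement is Cohen's kappa. The sketch below is a generic illustration with made-up rubric scores, not the paper's evaluation data or protocol:

```python
# Hypothetical illustration of quantifying "agreement with human experts".
# The scores below are invented for the example, not results from the paper.
from collections import Counter

def cohens_kappa(rater_a: list[int], rater_b: list[int]) -> float:
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    expected = sum(counts_a[l] * counts_b[l] for l in labels) / (n * n)
    return (observed - expected) / (1 - expected)

# Example: 1-5 rubric scores from a human expert vs. an LLM-based grader.
human = [4, 3, 5, 2, 4, 3, 4, 5, 3, 2]
llm   = [4, 3, 4, 2, 4, 2, 4, 5, 3, 3]
# Prints kappa = 0.59; 0.41-0.60 is often read as "moderate" agreement.
print(f"kappa = {cohens_kappa(human, llm):.2f}")
```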
Bottom line: students can rehearse repeatedly and get consistent, personalized pointers, helping them sync words, presence, and emotion for stronger talks.
Paper: http://arxiv.org/abs/2511.04995v1
Register: https://www.AiFeta.com
#AI #PublicSpeaking #EngineeringEducation #EdTech #CommunicationSkills #Multimodal #SpeechAnalysis #ComputerVision #SentimentAnalysis #LLMs