Teaching Humanoid Robots to Team Up—By Watching Humans
Humanoid robots need to coordinate physically with people: lifting, handing over, steadying. But large-scale data on humans interacting with robots is scarce. What if robots learned from humans interacting with humans instead? The catch: naively mapping human motion onto a robot often breaks the crucial touches and supports.
The team proposes PAIR (Physics-Aware Interaction Retargeting), which keeps contact semantics—like handshakes, pushes, and holds—consistent despite different body shapes. PAIR generates physically believable human–robot training data from abundant human–human videos.
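The core idea of contact-preserving retargeting can be illustrated with a toy sketch. This is not PAIR itself; the function and parameter names below are made up, and the "retargeting" is reduced to a limb-length rescaling plus a contact override, assuming contact points are known:

```python
import numpy as np

def retarget_hand(human_hand, human_root, robot_root, scale,
                  contact_point=None, contact_weight=1.0):
    """Toy contact-aware retargeting (hypothetical, not PAIR's method)."""
    # Geometric retargeting: express the hand relative to the root,
    # then rescale by the robot/human limb-length ratio.
    mapped = robot_root + scale * (np.asarray(human_hand, float)
                                   - np.asarray(human_root, float))
    if contact_point is None:
        return mapped
    # While a contact (handshake, hold, push) is active, pull the mapped
    # hand toward the partner's contact point, so the touch survives even
    # though the rescaled trajectory alone would miss it.
    return ((1.0 - contact_weight) * mapped
            + contact_weight * np.asarray(contact_point, float))
```

With `contact_weight=1.0` the contact point fully overrides the scaled trajectory during a touch; lower weights blend the two.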
Data alone isn’t enough. Conventional imitation just mimics trajectories. Their policy, D-STAR (Decoupled Spatio-Temporal Action Reasoner), separates when to act from where to act: Phase Attention learns timing, a Multi-Scale Spatial module picks targets, and a diffusion head blends them for synchronized whole‑body behavior.
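The "when vs. where" split can be sketched in a few lines of NumPy. This is a hypothetical simplification, not D-STAR: the diffusion head is replaced by a plain weighted blend, and all names and shapes are invented for illustration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def decoupled_action(obs, targets, w_when, w_where):
    """Toy decoupled policy (hypothetical, not D-STAR's architecture)."""
    # "When": a phase gate in (0, 1) deciding how far to commit to the motion.
    gate = sigmoid(obs @ w_when)
    # "Where": attention weights over candidate contact targets.
    attn = softmax(targets @ (obs @ w_where))
    # Blend: move toward the attention-weighted target, scaled by the gate.
    return gate * (attn @ targets)
```

Separating the timing signal from target selection lets each head specialize; in the paper's design a diffusion head then fuses the two into whole-body actions.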
In extensive simulations, this combo produces more responsive, coordinated interactions than baseline methods—pointing to a practical path for teaching robots collaborative skills from widely available human–human footage. Paper: https://arxiv.org/abs/2601.09518v1
Register: https://www.AiFeta.com
#Robotics #Humanoid #AI #ImitationLearning #HumanRobotInteraction #ComputerVision #Simulation #DiffusionModels