MotionV2V: Edit Motion, Not Pixels

What if you could change how a video moves—without changing how it looks? MotionV2V lets editors "edit motion, not pixels."

The team defines a motion edit as a small tweak to the sparse trajectories that objects follow, such as a hand wave, a camera pan, or a dancer's step. These edited trajectories guide a motion-conditioned diffusion model to regenerate the clip with the new motion while preserving content and style.

  • Precise, timestamped edits that naturally propagate through the video
  • Consistent identity, lighting, and background
  • Useful for smoothing shaky shots, re-timing gestures, changing choreography, or refining camera moves

To train this control, they create "motion counterfactuals": paired videos with the same content but different motion, and fine-tune the model on them. In a four-way user study, MotionV2V was preferred over prior methods in more than 65% of comparisons.
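To make the trajectory-editing idea concrete, here is a minimal sketch of what "edit motion, not pixels" could look like as data. This is not the paper's actual API; the names and the point-shifting edit are illustrative assumptions. The edited track is the kind of sparse signal that would condition a motion-aware diffusion model.

```python
# Hypothetical sketch: a sparse trajectory and a timestamped motion edit.
# Names and structure are illustrative, not the authors' implementation.
from dataclasses import dataclass

@dataclass(frozen=True)
class TrackPoint:
    t: int      # frame index
    x: float    # pixel x-coordinate
    y: float    # pixel y-coordinate

def edit_trajectory(track, start_t, dx, dy):
    """Shift every point from frame `start_t` onward by (dx, dy).

    A timestamped edit like this propagates naturally through the
    rest of the clip; the edited track would then serve as the
    conditioning signal for regenerating the video.
    """
    return [
        TrackPoint(p.t, p.x + dx, p.y + dy) if p.t >= start_t else p
        for p in track
    ]

# Example: redirect an object's path starting at frame 12.
track = [TrackPoint(t, 100.0 + t, 200.0) for t in range(24)]
edited = edit_trajectory(track, start_t=12, dx=30.0, dy=-10.0)
```

Frames before the edit point are untouched, so identity and layout in the early part of the clip stay anchored while the later motion changes.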

Project page: https://ryanndagreat.github.io/MotionV2V • Paper: https://arxiv.org/abs/2511.20640v1

Register: https://www.AiFeta.com

#AI #VideoEditing #ComputerVision #GenerativeAI #DiffusionModels #Research
