Runway's new Gen-3 Video-2-Video is a huge improvement
Runway just released a new Video-2-Video (V2V) model—and I’m hooked. Here’s why you should give it a try.
Traditionally, you would use the Image-2-Video model: craft a still in Midjourney so you have full control over the final look of the shot, then animate it in Runway. But the movement in a scene is really hard to direct with text prompts alone; it can be hit and miss.
Instead, the new Gen-3 V2V model flips that: your source video controls the movement of your characters, the camera, and more, while Runway adds the visual magic on top.
That makes it especially useful in shots where the camera and character motion need to be very specific.
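If you'd rather script this than click through the web app: Runway has a developer API with a Python SDK (`runwayml`), though as far as I know Gen-3 V2V itself is web-only for now. Here's a minimal sketch of what a scripted V2V job might look like; the `video_to_video` endpoint and its parameter names are my assumptions, not documented calls, so treat this as pseudocode.

```python
# Hypothetical sketch of driving a Gen-3 V2V-style job from Python.
# Assumes the `runwayml` SDK (pip install runwayml); the `video_to_video`
# endpoint and its parameters below are guesses -- I've only used V2V
# through the Runway web app.
import time

from runwayml import RunwayML

client = RunwayML()  # reads RUNWAYML_API_SECRET from the environment

# The source video supplies the motion; the text prompt supplies the look.
task = client.video_to_video.create(          # hypothetical endpoint
    model="gen3a_turbo",                      # model name is a guess
    prompt_video="https://example.com/my_take.mp4",
    prompt_text="a medieval knight in plate armor, torch-lit stone hall",
)

# The API is asynchronous: poll the task until it finishes.
while (status := client.tasks.retrieve(task.id)).status not in ("SUCCEEDED", "FAILED"):
    time.sleep(5)

print(status.output)  # URL(s) of the rendered video on success
```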
Here’s what I discovered:
1️⃣ It’s all about the foundation
Runway’s V2V mostly adds textures and VFX to the source video. This means your character shapes and locations need to be in place. I found my AI characters reflected my body shape and structure closely—this applies to your environments, too. So, I might want to wear a muscle suit next time I want to have a knight in my film!
2️⃣ Transforming characters? Crank up the structure strength
To replace myself with a woman, I had to push the structure transformation beyond 0.75; at lower settings, the output kept features like my beard. Even at 0.75, Runway refused to add long hair, so be prepared to wear some wigs.
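If and when this becomes scriptable, the natural way to find your own threshold is a parameter sweep. Continuing the hypothetical sketch above; the `structure_transformation` field name simply mirrors the web-app slider and is my guess, not a documented parameter.

```python
# Sweep the structure strength to find where the source actor's
# features (beard, hairline) stop bleeding through. Endpoint and
# `structure_transformation` field are assumptions, as above.
from runwayml import RunwayML

client = RunwayML()
SOURCE = "https://example.com/my_take.mp4"
PROMPT = "a woman with long hair, Victorian dress, candlelit study"

for strength in (0.55, 0.65, 0.75, 0.85, 0.95):
    task = client.video_to_video.create(       # hypothetical endpoint
        model="gen3a_turbo",
        prompt_video=SOURCE,
        prompt_text=PROMPT,
        structure_transformation=strength,     # hypothetical parameter
    )
    print(f"strength={strength:.2f} -> task {task.id}")
```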
These were quick tests, but if you want the best results, I recommend prepping your sets, costumes, and even wigs beforehand to lock in the right geometry and structure. Let Runway do the heavy lifting from there.