KINOMOTO.MAG

WAN 2.2 Is Here

Hello Creators,

This latest release from the WAN team brings serious firepower to the table: new text-to-video, image-to-video, and a hybrid model, with support for 24fps, better prompt control, and a new VAE. You’ve got two model sizes to choose from — 14B and 5B — and yeah, it’s still not lightning-fast. But it’s powerful, chaotic, and very WAN.

What’s changed?

If you were using WAN 2.1, here’s what you need to know before diving into 2.2:

  • Two-model setup: You’ll need a high-noise model to start the chaos and a low-noise one to clean it up
  • Split sampling: run two KSamplers in sequence, steps 1–4 on the high-noise model and steps 5–8 on the low-noise one (see the sketch after this list)
  • Your 2.1 LoRAs still load, but quality takes a hit. Retrain them for 2.2 or ditch them if you want sharp results
  • Prompting is stricter: WAN won’t guess your vision. Be specific, or deal with weird outputs
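
If you're wiring the split up yourself, it usually maps onto two KSampler (Advanced) nodes. Here's a minimal sketch of the two node configs as Python dicts; the 8-step total, uni_pc sampler, and simple scheduler are assumptions chosen to illustrate the handoff, not official settings. Note that ComfyUI counts steps from zero, so the 1–4 / 5–8 split above becomes 0→4 and 4→8 here.

    TOTAL_STEPS = 8

    # Pass 1: the high-noise model blocks out motion and composition.
    high_noise_sampler = {
        "add_noise": "enable",               # inject the initial noise here
        "steps": TOTAL_STEPS,
        "cfg": 3.5,
        "sampler_name": "uni_pc",            # assumed; pick your own sampler
        "scheduler": "simple",               # assumed
        "start_at_step": 0,
        "end_at_step": 4,                    # hand off halfway through
        "return_with_leftover_noise": "enable",  # pass the noisy latent onward
    }

    # Pass 2: the low-noise model refines the same latent to the finish.
    low_noise_sampler = {
        "add_noise": "disable",              # the latent is already noisy
        "steps": TOTAL_STEPS,                # must match pass 1 so the schedule lines up
        "cfg": 3.5,
        "sampler_name": "uni_pc",
        "scheduler": "simple",
        "start_at_step": 4,                  # resume exactly where pass 1 stopped
        "end_at_step": TOTAL_STEPS,
        "return_with_leftover_noise": "disable",
    }

The important part is the handoff: pass 1 returns a partially denoised latent with its leftover noise intact, and pass 2 resumes mid-schedule instead of adding fresh noise.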

Setup tips

  • Update ComfyUI to the latest nightly build
  • Drop the models into the right folders (see the layout after this list)
  • Use updated workflows built around the WAN 2.2 latent nodes
  • If nodes are missing, upgrade your frontend to v1.25.1+
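
For reference, here's roughly where the files land in a stock ComfyUI install. The directory names follow standard ComfyUI conventions; the filenames are placeholders, since exact names vary by release and quantization.

    # Assumed destinations in a standard ComfyUI install. Angle-bracket names
    # are placeholders, not exact release filenames.
    WAN22_FILE_LAYOUT = {
        "ComfyUI/models/diffusion_models/": [
            "<wan2.2 high-noise model>",
            "<wan2.2 low-noise model>",
        ],
        "ComfyUI/models/vae/": ["<wan2.2 vae>"],
        "ComfyUI/models/text_encoders/": ["<text encoder>"],
    }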

And yes, CLIP Vision is gone for I2V: the pipeline now conditions on image latents directly, which means faster results but more instability. Hybrid mode? Still jittery. Still wild. Still fun to break.

Quick advice

  • Test at 480p, render final at 720p+
  • Use the UniPC sampler with CFG 3.5 for better results (settings sketch below)
  • It’s finally real 24fps — no more faking it
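
Put together, a sensible pair of presets might look like the sketch below. The 832×480 and 1280×720 dimensions are the usual 16:9 video sizes, my assumption rather than an official spec; adjust for your aspect ratio.

    # Hypothetical test/final presets following the advice above.
    RENDER_PRESETS = {
        "test":  {"width": 832,  "height": 480, "sampler": "uni_pc", "cfg": 3.5, "fps": 24},
        "final": {"width": 1280, "height": 720, "sampler": "uni_pc", "cfg": 3.5, "fps": 24},
    }

Iterate on prompts and seeds at the test size, then swap in the final preset only once the motion looks right.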

WAN 2.2 isn’t here to guide you gently. It’s here to challenge your workflow, your patience, and your understanding of AI video generation. And that’s exactly why we love it.

Stay curious,
Kinomoto.Mag