How to make “mov2mov” work well in Stable Diffusion

How to get successful output with the Stable Diffusion web UI extension mov2mov.

Here are some settings that worked for me.

What is mov2mov?

It's an extension for the Stable Diffusion web UI.

Reading the code, it appears that the video is captured frame by frame, each frame is extracted as an image and run through img2img, and the results are saved back into a video file.
OpenCV seems to be used to handle the images and video.


  • model: anything_V4.5
  • VAE: none
  • prompt: a dancing girl
  • negative prompt: bad-picture-chill-75v
  • mov2mov source: a famous TikTok dance MP4
  • Width, Height: same as source
  • Generate Movie Mode: MP4V
  • Noise multiplier: 0.89
  • ControlNet 1: canny
  • ControlNet 2: openpose
  • Movie Frames: same as source


  • Size and frame count are kept the same as the original MP4.
  • The noise multiplier matters a lot; expect some trial and error.
  • At least two ControlNets were needed; one alone didn't work.
  • This file took about 6 hours to render on an RTX 3060 (12 GB).