How to get good output from “mov2mov” in Stable Diffusion


How to output well with the mov2mov extension for Stable Diffusion web UI.

I’ll walk through one case that worked well.

What is mov2mov?

An extension for Stable Diffusion web UI.

Reading the code, it appears to capture the video frame by frame, run each extracted frame through img2img, and save the results back as a video file.
OpenCV seems to be used to handle the images and video.


  • Model: anything_V4.5
  • VAE: none
  • Prompt: a dancing girl
  • Negative prompt: bad-picture-chill-75v
  • Source video: a famous TikTok dance (MP4)
  • Width, Height: same as source
  • Generate Movie Mode: MP4V
  • Noise multiplier: 0.89
  • Movie Frames: same as source
  • ControlNet1: Canny
  • ControlNet2: OpenPose


  • Use the same size and number of frames as the original MP4
  • The Noise multiplier is important; some experimentation is required
  • Two ControlNets are needed; one alone didn’t work well
  • This file took about six hours to output
  • Done
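To put the six-hour figure in perspective: assuming a typical TikTok clip of about 30 seconds at 30 fps (both are my assumptions; only the six-hour total comes from this run), the per-frame cost works out as:

```python
# Only the 6-hour total comes from the run above; the clip length (~30 s)
# and frame rate (30 fps) are assumed typical TikTok values.
total_seconds = 6 * 60 * 60   # 21600 s for the whole run
frames = 30 * 30              # ~900 frames in the source clip
per_frame = total_seconds / frames
print(per_frame)              # → 24.0 seconds per frame
```

So each frame spends roughly 24 seconds in img2img plus two ControlNets, which is why matching the Noise multiplier by trial and error is so time-consuming.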