How to make “mov2mov” work well in Stable Diffusion

How to get successful output with the Stable Diffusion web UI extension mov2mov.

Here are the settings that have worked for me.

What is mov2mov?

A Stable Diffusion web UI extension:
https://github.com/Scholar01/sd-webui-mov2mov

Reading the code, it appears that the video is captured frame by frame: each frame is extracted as an image, run through img2img, and the results are saved back into a video file.
OpenCV is used to handle the images and the video.
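The loop described above can be sketched roughly like this. This is my own illustration of the idea, not the extension's actual code; the function names are hypothetical, and `img2img` is a placeholder callable standing in for the web UI's img2img step.

```python
def frame_filename(index: int, ext: str = "png") -> str:
    """Zero-padded name for an extracted frame, e.g. 00042.png."""
    return f"{index:05d}.{ext}"


def process_video(src_path: str, dst_path: str, img2img) -> None:
    """Frame-by-frame loop: read a frame, run img2img, write it back out.

    `img2img` is a placeholder for the web UI's img2img step; here it is
    any callable that takes and returns a BGR numpy frame.
    """
    import cv2  # OpenCV, which the extension uses for video I/O

    cap = cv2.VideoCapture(src_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    # "mp4v" corresponds to the MP4V Generate Movie Mode in the UI
    writer = cv2.VideoWriter(
        dst_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height)
    )
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        writer.write(img2img(frame))
    cap.release()
    writer.release()
```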

Recipe

  • model: anything_V4.5
  • VAE: none
  • prompt: a dancing girl
  • negative prompt: bad-picture-chill-75v
  • mov2mov MP4: a popular TikTok dance video https://www.douyin.com/video/7111592075606461703
  • Width, Height: same as the source
  • Generate Movie Mode: MP4V
  • Noise multiplier: 0.89
  • Movie Frames: same as the source
  • ControlNet 1: canny
  • ControlNet 2: openpose
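For reference, the same recipe could be expressed as a per-frame img2img payload for the web UI's HTTP API (/sdapi/v1/img2img). This is a sketch based on my reading of the A1111 API and the ControlNet extension; the field names and the ControlNet model names are assumptions, so verify them against your installed versions.

```python
def recipe_payload(frame_b64: str, width: int, height: int) -> dict:
    """Build an img2img API payload mirroring the recipe above.

    frame_b64 is one video frame, base64-encoded. Width and height
    should be taken from the source video, per the recipe.
    """
    return {
        "init_images": [frame_b64],
        "prompt": "a dancing girl",
        "negative_prompt": "bad-picture-chill-75v",
        "width": width,
        "height": height,
        # the recipe's noise multiplier (field name is my assumption)
        "initial_noise_multiplier": 0.89,
        "alwayson_scripts": {
            "controlnet": {
                "args": [
                    # hypothetical model names -- use the ones you have
                    {"module": "canny", "model": "control_v11p_sd15_canny"},
                    {"module": "openpose", "model": "control_v11p_sd15_openpose"},
                ]
            }
        },
    }
```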

Memo

  • The output size and frame count are the same as the original MP4.
  • The noise multiplier is important; expect to experiment with it.
  • At least two ControlNet units were needed; one alone didn’t work.
  • This file took about 6 hours to output on an RTX 3060 (12 GB).
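To copy the source video's size and frame count into the mov2mov settings, you can probe the file with OpenCV. A minimal sketch (the function name is mine, not part of the extension):

```python
def probe_video(path: str) -> dict:
    """Return the width, height, fps, and frame count of a video file."""
    import cv2

    cap = cv2.VideoCapture(path)
    info = {
        "width": int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
        "height": int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)),
        "fps": cap.get(cv2.CAP_PROP_FPS),
        "frames": int(cap.get(cv2.CAP_PROP_FRAME_COUNT)),
    }
    cap.release()
    return info
```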

Done


Questions and inquiries

Feel free to reach out.