Key Points

- MagicVideo-V2 is a multi-stage, high-aesthetic video generation system that integrates text-to-image (T2I), image-to-video (I2V), video-to-video (V2V), and video frame interpolation (VFI) modules into a single end-to-end pipeline (see the sketch after this list).

- The text-to-image module generates a high-fidelity image from a given text prompt, and the image-to-video module uses the text prompt and generated image to produce low-resolution keyframes of the video.

- The video-to-video module enhances the resolution and detail of the keyframes, while the frame interpolation module increases the frame rate to smooth the motion, yielding a high-resolution, fluid, and aesthetically pleasing video.

- The system demonstrates superior performance over leading Text-to-Video (T2V) systems in large-scale user evaluations.

- The T2I module uses an internally developed diffusion-based text-to-image model, while the I2V module builds on a high-aesthetic SD1.5-based model to generate animation and motion; both are trained on internal datasets.

- The system employs a reference image embedding module and a latent noise prior strategy to effectively decouple the image prompt from the text prompt and to improve temporal coherence across frames.

- An image-video joint training strategy is used to train the image-to-video module, leveraging internal image datasets to improve the quality of generated videos.

- The video-to-video module shares a design similar to the image-to-video module and uses information from the reference image to guide the video diffusion steps, enhancing details at higher resolution.

- The video frame interpolation module uses an internally trained GAN-based VFI model together with a pretrained lightweight interpolation model to keep the generated videos stable and smooth, a quality reflected in human evaluators' preference for MagicVideo-V2 over other state-of-the-art T2V methods.

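As a rough illustration of how the four stages compose, here is a minimal Python sketch of the data flow. The class and method names (MagicVideoV2Pipeline, generate, refine, interpolate) and the default frame rate are hypothetical placeholders for illustration, not an API from the paper or any released code.

```python
# Minimal sketch of a MagicVideo-V2-style multi-stage pipeline.
# All names and signatures are hypothetical; the sketch only illustrates the
# data flow: text prompt -> reference image -> low-res keyframes
# -> high-res refined frames -> frame-interpolated video.

from dataclasses import dataclass
from typing import Any, List


@dataclass
class MagicVideoV2Pipeline:
    t2i: Any  # text-to-image module: prompt -> high-aesthetic reference image
    i2v: Any  # image-to-video module: (prompt, image) -> low-res keyframes
    v2v: Any  # video-to-video module: refines keyframes at higher resolution
    vfi: Any  # frame interpolation module: increases frame rate for smooth motion

    def generate(self, prompt: str, target_fps: int = 32) -> List[Any]:
        # Stage 1: generate a high-aesthetic reference image from the text prompt.
        reference_image = self.t2i.generate(prompt)

        # Stage 2: generate low-resolution keyframes conditioned on both the
        # text prompt and the reference image.
        keyframes = self.i2v.generate(prompt, reference_image)

        # Stage 3: enhance resolution and detail, guided by the reference image.
        hires_frames = self.v2v.refine(keyframes, prompt, reference_image)

        # Stage 4: interpolate extra frames to smooth the motion
        # (target_fps is an arbitrary illustrative default).
        return self.vfi.interpolate(hires_frames, target_fps=target_fps)
```

In this view, the reference image produced in stage 1 is threaded through every later stage, which is what allows the pipeline to decouple appearance (carried by the image) from motion (driven by the text prompt).
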
Summary

The research paper introduces the MagicVideo-V2 framework, a multi-stage Text-to-Video (T2V) model that integrates Text-to-Image (T2I), Image-to-Video (I2V), Video-to-Video (V2V), and Video Frame Interpolation (VFI) modules. The framework is designed to generate high-fidelity, aesthetically pleasing videos from textual descriptions. The T2I module generates an aesthetic image from the text prompt, which serves as the reference image for video generation.

The I2V module produces low-resolution keyframes of the video, while the V2V module enhances the resolution and details of the keyframes. The VFI module interpolates additional frames to smooth the video motion. Human evaluators rated MagicVideo-V2 superior to leading T2V systems, highlighting its remarkable fidelity and smoothness. The framework aims to advance T2V models by providing a new strategy for generating high-aesthetic videos. The paper provides detailed insights into each module's architecture and training strategy, and into techniques such as the latent noise prior, appearance encoder, and ControlNet module used to improve video quality and aesthetics.
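
Of these techniques, the latent noise prior is the most mechanical, so a small sketch may help. The following is a minimal, assumed implementation of the general idea, sharing an image-derived component across every frame's initial noise so that frames start from a correlated point; the function name, mixing weight, and exact formulation are illustrative and may differ from what the paper actually uses.

```python
import torch


def init_latents_with_image_prior(
    reference_latent: torch.Tensor,  # VAE latent of the reference image, shape (C, H, W)
    num_frames: int,
    prior_strength: float = 0.3,     # hypothetical mixing weight, not from the paper
) -> torch.Tensor:
    """Sketch of a latent noise prior: blend a shared image-derived latent into
    each frame's initial Gaussian noise to encourage temporal coherence.
    The exact formulation used by MagicVideo-V2 may differ."""
    c, h, w = reference_latent.shape
    noise = torch.randn(num_frames, c, h, w)

    # Broadcast the shared reference latent across frames and blend it with
    # per-frame noise; the weights are chosen so the mixture keeps roughly
    # unit variance when both components have unit variance.
    prior = reference_latent.unsqueeze(0).expand_as(noise)
    return prior_strength * prior + (1.0 - prior_strength ** 2) ** 0.5 * noise
```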

Human evaluations demonstrate a clear preference for MagicVideo-V2 over other T2V methods, indicating its superior performance from the standpoint of human visual perception. The paper also includes examples of videos generated using MagicVideo-V2, showcasing its smoothness and aesthetic quality.

Overall, the MagicVideo-V2 framework presents a comprehensive solution for text-to-video generation, as validated by large-scale human evaluation.

Reference: https://arxiv.org/abs/2401.04468