Wan 2.2 Accelerated Inference
Collection
Optimized demos for Wan 2.2 14B models, using FP8 quantization, AoT compilation, and community LoRAs for fast, high-quality inference on ZeroGPU 💨 • 3 items
Demos and checkpoints for Wan 2.2 models