# Anima in diffusers
Anima weights repackaged for a diffusers-style environment. Copy `anima-preview.safetensors` into the `transformer` directory.
I'm aware that other diffusers-compatible code has been uploaded elsewhere. However, if the llm_adapter cannot handle padded tokens, that implementation is incorrect. This repo relies on the assumption that the adapter model can handle padded input tokens, as it should.
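To illustrate why padding matters: prompts of different lengths can only be batched once they are padded to a common length, and the adapter then has to ignore the pad positions (typically via an attention mask). A minimal sketch, with made-up token ids and an assumed pad id of 0:

```python
import torch

# Two prompts tokenize to different lengths; batching requires padding.
# (Token ids and pad_id here are illustrative, not the real vocab.)
ids_a = torch.tensor([101, 2054, 2003, 102])  # 4 tokens
ids_b = torch.tensor([101, 2054, 102])        # 3 tokens
pad_id = 0

# Right-pad both sequences to the longest length in the batch.
max_len = max(ids_a.size(0), ids_b.size(0))
batch = torch.full((2, max_len), pad_id, dtype=torch.long)
batch[0, :ids_a.size(0)] = ids_a
batch[1, :ids_b.size(0)] = ids_b

# The adapter must mask out the pad positions, e.g. with an attention mask
# that is 1 for real tokens and 0 for padding.
attn_mask = (batch != pad_id).long()
```

An adapter that treats the trailing pad tokens as real input would produce embeddings that depend on the padding length, which is the failure mode described above.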
## Inference
```python
import torch

from cosmos_predict2 import CosmosPredict2Pipeline

pipeline = CosmosPredict2Pipeline('/path/to/this/bundle')

# Unlike in diffusers, we don't call a long forward function here.
text = "your prompt here"
qwen_embeds, t5_input_ids = pipeline.prepare_text_embeds(text)
crossattn_emb = pipeline.transformer.preprocess_text_embeds(qwen_embeds, t5_input_ids)

pipeline.transformer.to('cuda')

# Sampling with FlowMatchEulerDiscreteScheduler.
output = pipeline.sample_fm(
    torch.randn((1, 16, 1, 64, 64), device='cuda', dtype=torch.bfloat16),
    num_inference_steps=30,
    crossattn_emb=crossattn_emb.to('cuda'),
)

pipeline.vae.to('cuda')
image = pipeline.decode_vae(output)
```
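If you want to save the decoded result, a small conversion helper may be useful. This is a sketch under the assumption that `decode_vae` returns a batched image tensor shaped `(1, 3, H, W)` with values in `[-1, 1]`; adjust the indexing and value range if the actual output layout differs:

```python
import torch
from PIL import Image

def tensor_to_pil(img: torch.Tensor) -> Image.Image:
    # Assumes a single image tensor shaped (1, 3, H, W) with values in [-1, 1].
    img = img[0].float().clamp(-1, 1)
    # Map [-1, 1] -> [0, 255] and convert to uint8.
    img = ((img + 1) / 2 * 255).round().to(torch.uint8)
    # (C, H, W) -> (H, W, C) for PIL.
    return Image.fromarray(img.permute(1, 2, 0).cpu().numpy())
```

Usage: `tensor_to_pil(image).save('anima_sample.png')`.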