HunyuanVideoTransformer3DModel

A Diffusion Transformer model for 3D video-like data was introduced in HunyuanVideo: A Systematic Framework For Large Video Generative Models by Tencent.

The model can be loaded with the following code snippet.

import torch
from diffusers import HunyuanVideoTransformer3DModel

transformer = HunyuanVideoTransformer3DModel.from_pretrained("hunyuanvideo-community/HunyuanVideo", subfolder="transformer", torch_dtype=torch.bfloat16)
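
In practice, the transformer is usually consumed through HunyuanVideoPipeline. Below is a minimal end-to-end sketch; the prompt, frame count, and step count are illustrative, and loading the transformer separately allows it to use a different dtype than the rest of the pipeline.

import torch
from diffusers import HunyuanVideoPipeline, HunyuanVideoTransformer3DModel
from diffusers.utils import export_to_video

model_id = "hunyuanvideo-community/HunyuanVideo"
transformer = HunyuanVideoTransformer3DModel.from_pretrained(
    model_id, subfolder="transformer", torch_dtype=torch.bfloat16
)
pipe = HunyuanVideoPipeline.from_pretrained(
    model_id, transformer=transformer, torch_dtype=torch.float16
)
pipe.vae.enable_tiling()  # reduces VAE memory use when decoding long videos
pipe.to("cuda")

output = pipe(
    prompt="A cat walks on the grass, realistic style.",
    num_frames=61,
    num_inference_steps=30,
).frames[0]
export_to_video(output, "output.mp4", fps=15)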

HunyuanVideoTransformer3DModel

class diffusers.HunyuanVideoTransformer3DModel

( in_channels: int = 16 out_channels: int = 16 num_attention_heads: int = 24 attention_head_dim: int = 128 num_layers: int = 20 num_single_layers: int = 40 num_refiner_layers: int = 2 mlp_ratio: float = 4.0 patch_size: int = 2 patch_size_t: int = 1 qk_norm: str = 'rms_norm' guidance_embeds: bool = True text_embed_dim: int = 4096 pooled_projection_dim: int = 768 rope_theta: float = 256.0 rope_axes_dim: typing.Tuple[int, ...] = (16, 56, 56) image_condition_type: typing.Optional[str] = None )

Parameters

  • in_channels (int, defaults to 16) — The number of channels in the input.
  • out_channels (int, defaults to 16) — The number of channels in the output.
  • num_attention_heads (int, defaults to 24) — The number of heads to use for multi-head attention.
  • attention_head_dim (int, defaults to 128) — The number of channels in each head.
  • num_layers (int, defaults to 20) — The number of layers of dual-stream blocks to use.
  • num_single_layers (int, defaults to 40) — The number of layers of single-stream blocks to use.
  • num_refiner_layers (int, defaults to 2) — The number of layers of refiner blocks to use.
  • mlp_ratio (float, defaults to 4.0) — The ratio of the hidden layer size to the input size in the feedforward network.
  • patch_size (int, defaults to 2) — The size of the spatial patches to use in the patch embedding layer.
  • patch_size_t (int, defaults to 1) — The size of the temporal patches to use in the patch embedding layer.
  • qk_norm (str, defaults to rms_norm) — The normalization to use for the query and key projections in the attention layers.
  • guidance_embeds (bool, defaults to True) — Whether to use guidance embeddings in the model.
  • text_embed_dim (int, defaults to 4096) — Input dimension of text embeddings from the text encoder.
  • pooled_projection_dim (int, defaults to 768) — The dimension of the pooled projection of the text embeddings.
  • rope_theta (float, defaults to 256.0) — The value of theta to use in the RoPE layer.
  • rope_axes_dim (Tuple[int], defaults to (16, 56, 56)) — The dimensions of the axes to use in the RoPE layer.
  • image_condition_type (str, optional, defaults to None) — The type of image conditioning to use. If None, no image conditioning is used. If latent_concat, the image is concatenated to the latent stream. If token_replace, the image replaces the first-frame tokens in the latent stream to apply the conditioning.

A Transformer model for video-like data used in HunyuanVideo.
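
The constructor arguments above can be exercised directly. The following is a minimal sketch with a deliberately tiny, assumed configuration (not a released checkpoint); the forward-call keyword names follow the current diffusers implementation, and rope_axes_dim is chosen to sum to attention_head_dim.

import torch
from diffusers import HunyuanVideoTransformer3DModel

# Tiny illustrative configuration -- real checkpoints use the defaults above.
# Note: rope_axes_dim must sum to attention_head_dim (here 8 + 12 + 12 = 32).
model = HunyuanVideoTransformer3DModel(
    in_channels=4,
    out_channels=4,
    num_attention_heads=2,
    attention_head_dim=32,
    num_layers=1,
    num_single_layers=1,
    num_refiner_layers=1,
    text_embed_dim=32,
    pooled_projection_dim=16,
    rope_axes_dim=(8, 12, 12),
)

batch, frames, height, width = 1, 3, 16, 16
out = model(
    hidden_states=torch.randn(batch, 4, frames, height, width),     # video latents
    timestep=torch.tensor([500]),                                   # diffusion timestep
    encoder_hidden_states=torch.randn(batch, 10, 32),               # text-encoder tokens
    encoder_attention_mask=torch.ones(batch, 10, dtype=torch.bool),
    pooled_projections=torch.randn(batch, 16),                      # pooled text embedding
    guidance=torch.tensor([6000.0]),                                # required since guidance_embeds=True
)
# out.sample has the same shape as the input latents: (1, 4, 3, 16, 16)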

Transformer2DModelOutput

class diffusers.models.modeling_outputs.Transformer2DModelOutput

( sample: torch.Tensor )

Parameters

  • sample (torch.Tensor of shape (batch_size, num_channels, height, width) or (batch_size, num_vector_embeds - 1, num_latent_pixels) if Transformer2DModel is discrete) — The hidden states output conditioned on the encoder_hidden_states input. If discrete, returns probability distributions for the unnoised latent pixels.

The output of Transformer2DModel.
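
Pipelines read the prediction from the sample field; passing return_dict=False to the transformer's forward call returns a plain tuple instead. A brief sketch, reusing the tiny model and inputs from the example above:

# Default: a Transformer2DModelOutput dataclass with the prediction in .sample
print(out.sample.shape)  # torch.Size([1, 4, 3, 16, 16])

# With return_dict=False, forward returns a plain tuple instead
sample = model(
    hidden_states=torch.randn(1, 4, 3, 16, 16),
    timestep=torch.tensor([500]),
    encoder_hidden_states=torch.randn(1, 10, 32),
    encoder_attention_mask=torch.ones(1, 10, dtype=torch.bool),
    pooled_projections=torch.randn(1, 16),
    guidance=torch.tensor([6000.0]),
    return_dict=False,
)[0]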
