Text-driven motion generation has advanced significantly with the rise of denoising diffusion models. However, previous methods often oversimplify the representations of skeletal joints, temporal frames, and textual words, limiting their ability to fully capture the information within each modality and the interactions between them. Moreover, applying pre-trained models to downstream tasks such as editing typically requires additional effort, including manual intervention, optimization, or fine-tuning. In this paper, we introduce skeleton-aware latent diffusion (SALAD), a model that explicitly captures the intricate inter-relationships between joints, frames, and words. Furthermore, by leveraging the cross-attention maps produced during the generation process, we enable attention-based, zero-shot, text-driven motion editing with a pre-trained SALAD model, requiring no additional user input beyond text prompts. Our approach significantly outperforms previous methods in terms of text-motion alignment without compromising generation quality, and demonstrates practical versatility by providing diverse editing capabilities beyond generation.
[Teaser figure: zero-shot text-driven motion editing with a pre-trained SALAD model. Word swap: (a) inplace → forward, (b) chair → ground, (c) forward → overhead, (d) forward → upward. Prompt refinement: (a) + "then turns right", (b) + "while stumbling", (c) + "on a treadmill", (d) + "with both hands raised". Attention re-weighting: (a) "slowly" × (source, 2, 3, 4), (b) "wide" × (source, -1, 3), (c) "wide" × (source, -2, -1, 2, 3, 4). Mirroring: (a) upper body motion, (b) lower body motion.]
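To make the attention re-weighting idea above concrete, the sketch below scales the cross-attention weight assigned to a chosen word token during denoising, in the spirit of Prompt-to-Prompt-style editing. This is a minimal illustration, not the authors' implementation: the tensor shapes, the function name reweight_cross_attention, and the decision to leave the scaled map unnormalized are all assumptions made for clarity.

```python
# Minimal sketch of cross-attention re-weighting for motion editing.
# NOT the SALAD implementation; shapes and behavior are assumptions.
import torch


def reweight_cross_attention(attn_probs: torch.Tensor,
                             token_index: int,
                             scale: float) -> torch.Tensor:
    """Scale the attention paid to one text token.

    attn_probs: (batch, num_queries, num_text_tokens) softmaxed attention map
    token_index: index of the word whose influence is amplified or suppressed
    scale: e.g. 2.0 strengthens the word; negative values (as in the figure,
           "wide" x -1) attenuate or invert its contribution
    """
    reweighted = attn_probs.clone()
    reweighted[..., token_index] = reweighted[..., token_index] * scale
    # Whether to renormalize each query's weights afterwards is a design
    # choice left open here (assumption).
    return reweighted


if __name__ == "__main__":
    # Toy attention map over 8 text tokens; emphasize the token at index 3
    # (e.g., "slowly") by a factor of 2, as in panel (a) of the figure.
    attn = torch.softmax(torch.randn(1, 16, 8), dim=-1)
    edited = reweight_cross_attention(attn, token_index=3, scale=2.0)
    print(edited.shape)  # torch.Size([1, 16, 8])
```

In a full editing pipeline, such a hook would be applied inside the cross-attention layers at each denoising step, reusing the attention maps recorded from the source generation so that the rest of the motion stays unchanged.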