vcai.mpi-inf.mpg.de/projects/MoFusion

Preview meta tags from the vcai.mpi-inf.mpg.de website.

Linked Hostnames (12)

Thumbnail

Search Engine Appearance

Google

https://vcai.mpi-inf.mpg.de/projects/MoFusion

MoFusion

Conventional methods for human motion synthesis have either been deterministic or have struggled with the trade-off between motion diversity and motion quality. In response to these limitations, we introduce MoFusion, a new denoising-diffusion-based framework for high-quality conditional human motion synthesis that can synthesise long, temporally plausible, and semantically accurate motions based on a range of conditioning contexts (such as music and text). We also present ways to introduce well-known kinematic losses for motion plausibility within the motion diffusion framework through our scheduled weighting strategy. The learned latent space can be used for several interactive motion-editing applications like in-betweening, seed-conditioning, and text-based editing, thus providing crucial abilities for virtual-character animation and robotics. Through comprehensive quantitative evaluations and a perceptual user study, we demonstrate the effectiveness of MoFusion compared to the state of the art on established benchmarks in the literature. We urge the reader to watch our supplementary video. For more details, see https://vcai.mpi-inf.mpg.de/projects/MoFusion
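
The "scheduled weighting strategy" mentioned in the abstract can be read, roughly, as a diffusion-timestep-dependent trade-off between the denoising objective and auxiliary kinematic losses. The sketch below is illustrative only; the loss symbols and the concrete schedule λ(t) are assumptions, not the paper's exact formulation.

```latex
% Illustrative sketch, not the paper's exact formulation: a denoising loss
% plus kinematic terms whose weight \lambda(t) depends on the diffusion
% timestep t, so that kinematic constraints act mainly on nearly-clean motion.
\mathcal{L}_{\mathrm{total}}(t)
  = \mathcal{L}_{\mathrm{diff}}(t) + \lambda(t)\,\mathcal{L}_{\mathrm{kin}}(t),
\qquad
\lambda(t) = \lambda_0\,\sqrt{\bar{\alpha}_t}
```

Here \bar{\alpha}_t denotes the cumulative noise-schedule coefficient; since it approaches 1 as t approaches 0, this assumed schedule emphasises kinematic plausibility on samples that are already mostly denoised.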



Bing

MoFusion

https://vcai.mpi-inf.mpg.de/projects/MoFusion

Conventional methods for human motion synthesis have either been deterministic or have struggled with the trade-off between motion diversity and motion quality. In response to these limitations, we introduce MoFusion, a new denoising-diffusion-based framework for high-quality conditional human motion synthesis that can synthesise long, temporally plausible, and semantically accurate motions based on a range of conditioning contexts (such as music and text). We also present ways to introduce well-known kinematic losses for motion plausibility within the motion diffusion framework through our scheduled weighting strategy. The learned latent space can be used for several interactive motion-editing applications like in-betweening, seed-conditioning, and text-based editing, thus providing crucial abilities for virtual-character animation and robotics. Through comprehensive quantitative evaluations and a perceptual user study, we demonstrate the effectiveness of MoFusion compared to the state of the art on established benchmarks in the literature. We urge the reader to watch our supplementary video. For more details, see https://vcai.mpi-inf.mpg.de/projects/MoFusion



DuckDuckGo

https://vcai.mpi-inf.mpg.de/projects/MoFusion

MoFusion

Conventional methods for human motion synthesis have either been deterministic or have struggled with the trade-off between motion diversity and motion quality. In response to these limitations, we introduce MoFusion, a new denoising-diffusion-based framework for high-quality conditional human motion synthesis that can synthesise long, temporally plausible, and semantically accurate motions based on a range of conditioning contexts (such as music and text). We also present ways to introduce well-known kinematic losses for motion plausibility within the motion diffusion framework through our scheduled weighting strategy. The learned latent space can be used for several interactive motion-editing applications like in-betweening, seed-conditioning, and text-based editing, thus providing crucial abilities for virtual-character animation and robotics. Through comprehensive quantitative evaluations and a perceptual user study, we demonstrate the effectiveness of MoFusion compared to the state of the art on established benchmarks in the literature. We urge the reader to watch our supplementary video. For more details, see https://vcai.mpi-inf.mpg.de/projects/MoFusion

  • General Meta Tags (10)
    • title
      MoFusion: A Framework for Denoising-Diffusion-based Motion Synthesis
    • Content-Type
      text/html; charset=UTF-8
    • viewport
      width=device-width, initial-scale=1
    • citation_title
      MoFusion: A Framework for Denoising-Diffusion-based Motion Synthesis
    • citation_author
      Dabral, Rishabh
  • Open Graph Meta Tags (4)
    • og:title
      MoFusion
    • og:image
      images/title.jpg
    • og:description
      A Framework for Denoising-Diffusion-based Motion Synthesis
    • og:url
      https://vcai.mpi-inf.mpg.de/projects/MoFusion
  • Twitter Meta Tags (1)
    • twitter:card
      summary_large_image
  • Link Tags (4)
    • author
      https://www.cse.iitb.ac.in/~rdabral/
    • stylesheet
      https://fonts.googleapis.com/css?family=Open+Sans:400italic,700italic,800italic,400,700,800
    • stylesheet
      css/iconize.css?v=5ce372b
    • stylesheet
      css/project.css?v=5f2b5f6
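
For context, the tags listed above can be reproduced with a few lines of standard-library Python: fetch the page and collect its <title>, <meta>, and <link> tags. This is an illustrative sketch under the assumption that the page is plain HTML; it is not the tool that generated this report, and only the URL is taken from the page itself.

```python
# Minimal sketch: fetch the project page and list its <title>, <meta>,
# and <link> tags, similar to the preview shown above.
from html.parser import HTMLParser
from urllib.request import urlopen


class MetaTagCollector(HTMLParser):
    """Collects <title> text plus <meta> and <link> attributes from an HTML page."""

    def __init__(self):
        super().__init__()
        self.metas = []          # attribute dicts from <meta> tags
        self.links = []          # attribute dicts from <link> tags
        self.title = ""
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            self.metas.append(dict(attrs))
        elif tag == "link":
            self.links.append(dict(attrs))
        elif tag == "title":
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data


if __name__ == "__main__":
    url = "https://vcai.mpi-inf.mpg.de/projects/MoFusion"
    html = urlopen(url).read().decode("utf-8", errors="replace")
    parser = MetaTagCollector()
    parser.feed(html)

    print("title:", parser.title.strip())
    for meta in parser.metas:
        # Open Graph tags use "property"; most other meta tags use "name".
        key = meta.get("property") or meta.get("name") or meta.get("http-equiv")
        if key:
            print(f"{key}: {meta.get('content', '')}")
    for link in parser.links:
        print(f"link[{link.get('rel', '')}]: {link.get('href', '')}")
```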

Emails (2)
  • rdabral@mpi-inf.mpg.de
  • golyanik@mpi-inf.mpg.de
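
In the page source these addresses are percent-encoded (a common spam-obfuscation measure); the encoded strings taken from the page decode with Python's standard library, as sketched below.

```python
# Decode the percent-encoded mailto addresses exactly as they appear in the
# page source; urllib.parse.unquote resolves escapes such as %62 -> 'b'.
from urllib.parse import unquote

encoded = [
    "rda%62ral%40mpi%2Di%6E%66.m%70g.de",
    "%67o%6Cyan%69k@m%70i%2D%69%6Ef.%6Dp%67.d%65",
]
for address in encoded:
    print(unquote(address))
# Output:
#   rdabral@mpi-inf.mpg.de
#   golyanik@mpi-inf.mpg.de
```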

Links (14)