
vcai.mpi-inf.mpg.de/projects/MoFusion
Preview meta tags from the vcai.mpi-inf.mpg.de website.
Linked Hostnames
12
- 2 links to saarland-informatics-campus.de
- 2 links to www.mpi-inf.mpg.de
- 1 link to 4dqv.mpi-inf.mpg.de
- 1 link to arxiv.org
- 1 link to data-protection.mpi-klsb.mpg.de
- 1 link to graduateschool-computerscience.de
- 1 link to imprint.mpi-klsb.mpg.de
- 1 link to m-hamza-mughal.github.io
Thumbnail

Search Engine Appearance
MoFusion
Conventional methods for human motion synthesis have either been deterministic or have had to struggle with the trade-off between motion diversity and motion quality. In response to these limitations, we introduce MoFusion, a new denoising-diffusion-based framework for high-quality conditional human motion synthesis that can synthesise long, temporally plausible, and semantically accurate motions based on a range of conditioning contexts (such as music and text). We also present ways to introduce well-known kinematic losses for motion plausibility within the motion-diffusion framework through our scheduled weighting strategy. The learned latent space can be used for several interactive motion-editing applications like in-betweening, seed-conditioning, and text-based editing, thus providing crucial abilities for virtual-character animation and robotics. Through comprehensive quantitative evaluations and a perceptual user study, we demonstrate the effectiveness of MoFusion compared to the state of the art on established benchmarks in the literature. We urge the reader to watch our supplementary video. For more details, see https://vcai.mpi-inf.mpg.de/projects/MoFusion
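The description refers to a "scheduled weighting strategy" for kinematic losses inside the diffusion framework. As a rough illustration only (an assumed form, not the paper's exact formulation), such a time-scheduled objective can be written with diffusion step t and a weight schedule \lambda(t) as

\mathcal{L}(t) \;=\; \mathcal{L}_{\text{diffusion}}(t) \;+\; \lambda(t)\,\mathcal{L}_{\text{kinematic}}(t),

where \lambda(t) is chosen so that the kinematic terms contribute more when the sample is only lightly noised and less when it is close to pure noise.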
General Meta Tags
10
- title: MoFusion: A Framework for Denoising-Diffusion-based Motion Synthesis
- Content-Type: text/html; charset=UTF-8
- viewport: width=device-width, initial-scale=1
- citation_title: MoFusion: A Framework for Denoising-Diffusion-based Motion Synthesis
- citation_author: Dabral, Rishabh
Open Graph Meta Tags
4
- og:title: MoFusion
- og:image: images/title.jpg
- og:description: A Framework for Denoising-Diffusion-based Motion Synthesis
- og:url: https://vcai.mpi-inf.mpg.de/projects/MoFusion
Twitter Meta Tags
1
- twitter:card: summary_large_image
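A listing like the meta-tag sections above can be reproduced by reading the page's <head>. The following is a minimal sketch, assuming the third-party requests and beautifulsoup4 packages are available; the URL comes from this report, everything else is illustrative.

```python
# Minimal sketch: list the <meta> tags of the MoFusion project page.
import requests
from bs4 import BeautifulSoup

URL = "https://vcai.mpi-inf.mpg.de/projects/MoFusion"

html = requests.get(URL, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

for tag in soup.find_all("meta"):
    # General tags use name=..., Open Graph tags use property=...,
    # and Content-Type is declared via http-equiv=...
    key = tag.get("name") or tag.get("property") or tag.get("http-equiv")
    if key:
        print(f"{key}: {tag.get('content', '')}")

# The page <title> is reported separately from the <meta> tags.
print("title:", soup.title.string if soup.title else "")
```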
Link Tags
4
- author: https://www.cse.iitb.ac.in/~rdabral/
- stylesheet: https://fonts.googleapis.com/css?family=Open+Sans:400italic,700italic,800italic,400,700,800
- stylesheet: css/iconize.css?v=5ce372b
- stylesheet: css/project.css?v=5f2b5f6
Emails
2
- rdabral@mpi-inf.mpg.de
- golyanik@mpi-inf.mpg.de
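The two addresses are percent-encoded in the page source (a common anti-scraping measure); the decoded forms above follow from the raw values, for example with a short Python sketch using only the standard library:

```python
# Decode the percent-encoded mailto addresses exactly as reported on the page.
from urllib.parse import unquote

raw = [
    "rda%62ral%40mpi%2Di%6E%66.m%70g.de",
    "%67o%6Cyan%69k@m%70i%2D%69%6Ef.%6Dp%67.d%65",
]

for value in raw:
    print(unquote(value))  # -> rdabral@mpi-inf.mpg.de, golyanik@mpi-inf.mpg.de
```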
Links
14
- https://4dqv.mpi-inf.mpg.de
- https://arxiv.org/abs/2212.04495
- https://data-protection.mpi-klsb.mpg.de/inf/4dqv.mpi-inf.mpg.de
- https://graduateschool-computerscience.de
- https://imprint.mpi-klsb.mpg.de/inf/4dqv.mpi-inf.mpg.de