SenseTime Research Introduces Cutting-Edge AI Technique: Transforming Text into Lifelike Human Motion and Trajectories

TL;DR:

  • AI innovation is poised to revolutionize the animation, gaming, and film industries.
  • Story-to-Motion challenge addressed through a three-component AI approach.
  • Components include Text-Driven Motion Scheduling, Text-Driven Motion Retrieval System, and Progressive Mask Transformer.
  • Extensive testing demonstrates enhanced performance in motion blending, action composition, and trajectory following.
  • Primary contributions include the integration of trajectory and semantics, Text-based Motion Matching, and superiority over existing techniques.

Main AI News:

In the ever-evolving landscape of Artificial Intelligence, a groundbreaking innovation is set to reshape the animation, video game, and film industries. The challenge of translating textual narratives into lifelike human motion has long perplexed creators. Enter the realm of Story-to-Motion, a complex domain where characters traverse diverse settings and execute specific actions guided solely by the written word. This intricate task demands a seamless fusion of high-level motion semantics and precise trajectory control.

Despite extensive research in text-to-motion and character control, a definitive solution has remained elusive. Current character control methods falter when driven by textual descriptions, while existing text-to-motion approaches need additional positional constraints and tend to produce unstable animations.

To surmount these challenges, a dedicated team of researchers has unveiled an approach that crafts trajectories and produces controllable, arbitrarily long motions that align with the input text. The proposed methodology comprises three core components:

  1. Text-Driven Motion Scheduling: Harnessing modern large language models, this stage extracts a series of (text, position, duration) triples from a lengthy textual description. These serve as a text-driven motion schedule, ensuring that generated motions follow the narrative while incorporating location and action-duration details.
  2. Text-Driven Motion Retrieval System: Motion matching, combined with constraints on both trajectory and semantics, forms a motion retrieval system that ensures retrieved motions faithfully match the semantic and positional attributes specified by the textual description.
  3. Progressive Mask Transformer: The progressive mask transformer addresses common artifacts in transitional motions, such as foot sliding and unnatural poses, yielding animations with smoother transitions and a heightened sense of realism.
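To make the second component concrete, here is a minimal sketch of text-driven motion retrieval as a constrained motion-matching search. The data structures, cost function, and weights (`w_sem`, `w_traj`) are illustrative assumptions, not the paper's actual formulation: each schedule entry is scored against database clips by a weighted sum of text-embedding distance (semantics) and trajectory endpoint error (position).

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ScheduleEntry:
    """One step of a text-driven motion schedule (hypothetical layout):
    an action description plus a target position and a time budget."""
    text: str
    position: np.ndarray   # target location on the ground plane
    duration: float        # seconds allotted to the action

@dataclass
class MotionClip:
    """A database entry: a motion sequence with a precomputed text
    embedding and the position where the motion ends."""
    text_embedding: np.ndarray
    end_position: np.ndarray
    frames: np.ndarray     # (T, joints, 3) joint positions

def retrieve_clip(entry, database, embed, w_sem=1.0, w_traj=0.5):
    """Return the clip minimizing a weighted sum of semantic distance
    (between text embeddings) and trajectory endpoint error -- a toy
    stand-in for constrained motion matching."""
    query = embed(entry.text)
    best, best_cost = None, float("inf")
    for clip in database:
        sem_cost = float(np.linalg.norm(query - clip.text_embedding))
        traj_cost = float(np.linalg.norm(entry.position - clip.end_position))
        cost = w_sem * sem_cost + w_traj * traj_cost
        if cost < best_cost:
            best, best_cost = clip, cost
    return best
```

In a real system, the linear scan would be replaced by an approximate nearest-neighbor index, and the embeddings would come from a learned text encoder rather than a toy function.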

The research team subjected their approach to rigorous testing across three distinct sub-tasks: motion blending, temporal action composition, and trajectory following. In all three, the approach outperformed earlier motion synthesis techniques.

The primary contributions of this research can be summarized as follows:

  1. The introduction of trajectory and semantics as key factors in comprehensive motion generation from lengthy textual descriptions, effectively resolving the Story-to-Motion challenge.
  2. An innovative text-based motion matching method that uses extensive textual input for precise, customizable motion synthesis.
  3. A demonstrated superiority over state-of-the-art techniques in trajectory following, temporal action composition, and motion blending sub-tasks, as validated through experiments conducted on benchmark datasets.

Conclusion:

The introduction of AI-driven Story-to-Motion transformation has the potential to revolutionize the animation, video game, and film industries. This breakthrough approach addresses existing limitations, offering precise motion generation based on textual descriptions. With superior performance across various sub-tasks, including motion blending and trajectory following, this innovation promises to redefine creative possibilities and enhance the quality of entertainment content in these markets.

Source