
Text to Motion


Last updated 6 months ago

Overview

The Text to Motion tool allows you to generate animations by providing one or more text prompts as input.

We usually structure the prompts as follows:

A person <describe action> <describe style>

For example:

A person is walking forward hastily

These prompt examples are a good starting point, but feel free to experiment and let us know which prompts give you the best results! Also, don't forget to visit our Prompt Guidelines section for more information.
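As a sketch, the template above can be expressed as a small helper. This is a hypothetical function for illustration only, not part of the VPC Plugin API:

```python
def build_prompt(action: str, style: str = "") -> str:
    """Compose a prompt following the 'A person <action> <style>' template."""
    prompt = f"A person {action}"
    if style:
        prompt += f" {style}"
    return prompt

print(build_prompt("is walking forward", "hastily"))
# A person is walking forward hastily
```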

Tool Inputs

Single-action generation

Inputs:

  • MotionPrompts: The set of actions. Each "Action" corresponds to a distinct motion that is defined by:

    • Prompt: A text prompt describing the motion

    • Frames: The desired number of frames that the motion should last.

    • Seconds: The selected number of Frames is automatically converted to seconds and displayed in this field.

  • Seed: A random number used to produce varied outputs from identical inputs. For instance, two motions generated with different seeds might differ even with the same text prompts and durations.
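The inputs above can be sketched as a simple data structure. The class and field names here are hypothetical (the plugin exposes these as UI fields, not a Python API); the frames-to-seconds conversion assumes the fixed 30 FPS described below:

```python
from dataclasses import dataclass, field

FPS = 30  # generated animations currently always use 30 FPS

@dataclass
class Action:
    prompt: str   # text prompt describing the motion
    frames: int   # desired duration of the motion, in frames

    @property
    def seconds(self) -> float:
        # the Seconds field is derived automatically from Frames
        return self.frames / FPS

@dataclass
class TextToMotionInputs:
    motion_prompts: list  # the "Actions" array
    seed: int             # different seeds can yield different motions

walk = Action(prompt="A person is walking forward hastily", frames=120)
inputs = TextToMotionInputs(motion_prompts=[walk], seed=42)
print(walk.seconds)  # 4.0
```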

You can also describe multiple actions in a single prompt by using the "then" keyword and separating the actions with commas.

Example: "A person is running forward, then stops and sits down"

At the moment, the FPS value of a generated animation is always 30, and the upper limit for a single action is 196 frames (about 6 seconds). We are working to remove these limitations in the near future.

The suggested number of frames for a single action is between 70 and 180.
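A hypothetical client-side check mirroring the documented limits might look like this (the function name and behavior are illustrative assumptions, not part of the plugin):

```python
MAX_FRAMES = 196                  # current single-action upper limit
SUGGESTED_MIN, SUGGESTED_MAX = 70, 180

def validate_frames(frames: int) -> None:
    """Reject frame counts the tool cannot handle; warn outside the suggested range."""
    if not 1 <= frames <= MAX_FRAMES:
        raise ValueError(f"frames must be between 1 and {MAX_FRAMES}")
    if not SUGGESTED_MIN <= frames <= SUGGESTED_MAX:
        print(f"note: {frames} is outside the suggested "
              f"{SUGGESTED_MIN}-{SUGGESTED_MAX} frame range")

validate_frames(120)  # OK: within the suggested range
```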

Multi-action generation

You can define one or more actions (multi-action generation) by adding or removing elements in the "Actions" array. The defined "Actions" are stitched together to produce the final motion result.

You can use the multi-action feature to create animations that last longer than 6 seconds, which is the current limit of single-action generations.

However, we suggest using the multi-action feature for this use case rather than packing everything into one long prompt.
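As a sketch of why multi-action generation can exceed the single-action limit: the total duration is the sum of the per-action frame counts, each of which stays under 196 frames. The action list below is purely illustrative:

```python
FPS = 30  # generated animations currently always use 30 FPS

# Two hypothetical actions to be stitched into one result
actions = [
    ("A person is running forward", 150),   # 5.0 s
    ("A person stops and sits down", 120),  # 4.0 s
]

total_frames = sum(frames for _, frames in actions)
print(total_frames / FPS)  # 9.0 seconds, beyond the single-action limit
```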