Figure AI has unveiled a major update to its Figure 02 humanoid robot, integrating a vision-language-action model that allows the robot to learn new tasks simply by watching a human demonstrate them once.
How It Works
The new system pairs a large vision-language model with an action policy network that outputs motor commands. When a human demonstrates a task such as folding laundry or sorting packages, the robot builds an internal model of the task and can replicate it with 87% accuracy on the first attempt.
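To make the architecture concrete, here is a minimal sketch of a vision-language-action pipeline of the kind described: a demonstration is encoded into a task embedding, which then conditions an action policy at each control step. Figure has not published its model details, so every function name, dimension, and the linear policy here are illustrative assumptions, not the company's actual system.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_demonstration(frames: np.ndarray) -> np.ndarray:
    """Stand-in for a vision-language encoder: pools per-frame
    visual features from one demonstration into a task embedding."""
    # frames: (T, D) array of pre-extracted features, one row per video frame
    return frames.mean(axis=0)

def action_policy(task_embedding: np.ndarray,
                  observation: np.ndarray,
                  W: np.ndarray) -> np.ndarray:
    """Stand-in for an action policy network: maps (task, current
    observation) to bounded joint commands via one linear layer."""
    x = np.concatenate([task_embedding, observation])
    return np.tanh(W @ x)  # tanh keeps commands in [-1, 1]

# One demonstration: 30 frames of 16-dim features (placeholder data).
demo = rng.normal(size=(30, 16))
task = encode_demonstration(demo)  # shape (16,)

# Control loop: re-query the policy with the fixed task embedding
# and the latest observation. 7 joints, 8-dim observation (assumed).
W = rng.normal(size=(7, 16 + 8)) * 0.1
obs = rng.normal(size=8)
for _ in range(3):
    cmd = action_policy(task, obs, W)
    obs = rng.normal(size=8)  # placeholder for a fresh sensor reading

print(cmd.shape)  # (7,)
```

The key design point the sketch illustrates is that the demonstration is consumed only once, at encoding time; the policy then generalizes from the task embedding plus live observations rather than replaying the recorded trajectory.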
Real-World Deployment
- BMW factory: 50 Figure 02 robots assembling components
- Amazon warehouse: 25 robots in pilot logistics program
- Apptronik partnership: Hospital delivery and assistance
The Competition
Tesla Optimus Gen 3, Boston Dynamics Atlas (electric), and 1X Neo are all racing to achieve similar learn-from-demonstration capabilities. The humanoid robot market is projected to reach $35 billion by 2030.