RT-2: New model translates vision and language into action (DeepMind Blog)
Introducing Robotic Transformer 2 (RT-2), a novel vision-language-action (VLA) model that learns from both web and robotics data and translates this knowledge into generalised instructions for robotic control, while retaining web-scale capabilities. This work builds upon Robotic Transformer 1 (RT-1), a model trained on multi-task…