Robo-MUTUAL

Robotic Multimodal Task Specification via Unimodal Learning

Jianxiong Li* 1, Zhihao Wang* 2 1, Jinliang Zheng* 1 3,
Xiaoai Zhou 4 1, Guanming Wang 5 1, Guanglu Song 3, Yu Liu 3, Jingjing Liu 1, Ya-Qin Zhang 1,
Junzhi Yu 2, Xianyuan Zhan 1 6

1AIR, Tsinghua University 2CoE, Peking University 3SenseTime Research
4University of Toronto 5University College London 6Shanghai AI Lab

*Equal contribution
†Project Lead: li-jx21@mails.tsinghua.edu.cn
✉Corresponding author: yujunzhi@pku.edu.cn and zhanxianyuan@air.tsinghua.edu.cn

Abstract

Multimodal task specification is essential for enhanced robotic performance, where Cross-modality Alignment enables the robot to holistically understand complex task instructions. Directly annotating multimodal instructions for model training is impractical due to the sparsity of paired multimodal data. In this study, we demonstrate that by leveraging unimodal instructions abundant in real data, we can effectively teach robots to learn multimodal task specifications. First, we endow the robot with strong Cross-modality Alignment capabilities by pretraining a robotic multimodal encoder on extensive out-of-domain data. Then, we employ two operations, Collapse and Corrupt, to further bridge the remaining modality gap in the learned multimodal representation. This approach projects different modalities of the same task goal into interchangeable representations, enabling accurate robotic operation within a well-aligned multimodal latent space. More than 130 tasks and 4000 evaluation trials on both the simulated LIBERO benchmark and real robot platforms showcase the superior capabilities of the proposed framework, demonstrating a significant advantage in overcoming data constraints in robotic learning.
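To make the Collapse and Corrupt operations more concrete, here is a minimal sketch of how they could act on pretrained goal embeddings. It assumes Collapse subtracts a per-modality mean estimated from out-of-domain embeddings and Corrupt injects Gaussian noise; both the operation details and the function names are illustrative assumptions, not the exact Robo-MUTUAL implementation.

```python
# Hypothetical sketch of the Collapse and Corrupt operations described above.
import numpy as np

def collapse(goal_emb, modality_mean):
    """Remove the modality-specific offset so visual and textual embeddings
    of the same task fall closer together in the shared latent space."""
    return goal_emb - modality_mean

def corrupt(goal_emb, noise_std=0.05, rng=None):
    """Add isotropic Gaussian noise so the policy cannot latch onto the
    residual structure of the single training modality."""
    rng = rng or np.random.default_rng()
    return goal_emb + rng.normal(scale=noise_std, size=goal_emb.shape)

def prepare_goal(goal_emb, modality_mean, training):
    """Collapse always; corrupt only during training; renormalize the result."""
    z = collapse(np.asarray(goal_emb, dtype=np.float64), modality_mean)
    if training:
        z = corrupt(z)
    return z / np.linalg.norm(z)
```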

Transfer from Unimodal to Multimodal Goals

We train Robo-MUTUAL on datasets in which only visual or only textual goals are available, and evaluate its performance on both textual and visual goals, as sketched below.
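The evaluation protocol can be viewed as a goal-conditioned rollout in which the policy receives whichever goal modality is available at test time, even if that modality never appeared during training. The sketch below is illustrative only: the encoder and policy interfaces are placeholder names, and prepare_goal refers to the hypothetical helper sketched above.

```python
# Illustrative cross-modal evaluation loop (placeholder APIs, not the Robo-MUTUAL code).
def rollout(policy, encoder, env, goal, goal_modality, modality_means):
    """Run one episode conditioned on a goal from either modality."""
    # Encode the goal with the pretrained multimodal encoder.
    if goal_modality == "textual":
        z = encoder.encode_text(goal)        # e.g., "put the duck in the pot"
    else:
        z = encoder.encode_image(goal)       # a goal image of the completed task
    # Project into the aligned latent space; no noise at test time.
    z = prepare_goal(z, modality_means[goal_modality], training=False)

    obs, done = env.reset(), False
    while not done:
        action = policy(obs, z)              # the same policy head serves both modalities
        obs, reward, done, info = env.step(action)
    return info.get("success", False)
```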

Results on Real Robots

Figure 1: Real robot experimental results. Success rate is averaged over 10 episodes and 3 seeds.

Each real-robot task below is demonstrated in two settings: trained on visual goals and evaluated on textual goals (the given instruction is quoted), and trained on textual goals and evaluated on visual goals (the goal is given as an image of the completed task).

Put duck on green plate. Given task: "put the duck in the green plate"

Put duck in pot. Given task: "put the duck in the pot"

Move pot from right to left. Given task: "move the pot from right to left"

Put red cup on red plate. Given task: "put the red cup on the red plate"

Flip red cup upright. Given task: "flip the red cup upright"

Fold cloth from right to left. Given task: "fold the cloth from right to left"

Results on Simulation

We train Robo-MUTUAL on 130 tasks from the LIBERO benchmark. Robo-MUTUAL achieves the highest success rate when evaluated with the modality that does not appear in the training dataset, demonstrating its effectiveness in achieving multimodal task specification via unimodal training.

Figure 2: Simulation experimental results. Success rate is averaged over 10 episodes and 3 seeds.