Symmetric Models for Visual-Force Learning

Abstract: While it is generally acknowledged that force feedback is beneficial to robotic control, applications of policy learning to robotic manipulation typically leverage only visual feedback. Recently, symmetric neural models have been used to significantly improve the sample efficiency and performance of policy learning across a variety of robotic manipulation domains. This paper explores an application of symmetric policy learning to visual-force problems. We present Symmetric Visual Force Learning (SVFL), a novel method for robotic control which leverages visual and force feedback. We demonstrate that SVFL can significantly outperform state-of-the-art baselines for visual-force learning and report several interesting empirical findings related to the utility of learning force feedback control policies, both in general manipulation tasks and in scenarios with low visual acuity.
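To make the visual-force setup concrete, the sketch below shows one simple way a policy could fuse a visual observation with a history of force/torque (wrench) readings. This is an illustrative toy, not the SVFL architecture: the encoders here are stand-ins (global pooling in place of an equivariant convolutional network), and all function names and dimensions are assumptions for the example. The pooled visual encoder is also invariant to 90° image rotations, a crude analogue of the symmetry that equivariant models exploit.

```python
import numpy as np

def visual_features(image):
    """Toy stand-in for a visual encoder: per-channel global pooling.

    Global pooling is invariant to 90-degree image rotations, loosely
    mirroring the rotational symmetry that equivariant models exploit.
    """
    return image.mean(axis=(0, 1))                      # shape (C,)

def force_features(wrench_history):
    """Toy stand-in for a force encoder.

    Summarizes a (T, 6) history of force/torque readings by the
    per-axis mean and peak-to-peak range.
    """
    return np.concatenate([
        wrench_history.mean(axis=0),                    # shape (6,)
        np.ptp(wrench_history, axis=0),                 # shape (6,)
    ])                                                  # shape (12,)

def fused_policy(image, wrench_history, W):
    """Linear policy head over the concatenated visual-force embedding."""
    z = np.concatenate([visual_features(image), force_features(wrench_history)])
    return W @ z                                        # action, shape (A,)
```

A usage example: with a 64x64x3 image, a 10-step wrench history, and a weight matrix `W` of shape (4, 15), `fused_policy` returns a 4-dimensional action. In SVFL and similar methods, the linear encoders above would be replaced by learned (equivariant) networks trained end-to-end with the policy.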

Paper

Under review.
arXiv

Colin Kohler, Anuj Shrivatsav Srikanth, Eshan Arora, Robert Platt
Khoury College of Computer Science
Northeastern University

Method

Experiments

We test our method both in simulation and in the real world on robotic hardware. In simulation, we focus on evaluating the overall performance of SVFL against alternative approaches and on measuring the contributions of the different input modalities across tasks, under both ideal and degraded visual observations.

Benchmarking Performance

Sensor Modality Ablation

Role of Force Feedback When Visual Acuity is Degraded

On-Robot Learning

Video

Code

The code for this project is available on GitHub.

Cite

Contact

For comments or questions, please feel free to contact Colin Kohler at kohler.c@northeastern.edu.