Learning Continuous Grasping Function with a
Dexterous Hand from Human Demonstrations

arXiv 2022

UC San Diego

We aim to learn continuous human-like robot grasping motion for manipulation.

Abstract

We propose to learn to generate grasping motion for manipulation with a dexterous hand using implicit functions. With continuous time inputs, the model can generate a continuous and smooth grasping plan. We name the proposed model Continuous Grasping Function (CGF). CGF is learned via generative modeling with a Conditional Variational Autoencoder (CVAE) using 3D human demonstrations. We first convert large-scale human-object interaction trajectories into robot demonstrations via motion retargeting, and then use these demonstrations to train CGF. During inference, we sample from CGF to generate diverse grasping plans in the simulator and select the successful ones to transfer to the real robot. By training on diverse human data, CGF generalizes to manipulating multiple objects. Compared to previous planning algorithms, CGF is more efficient and achieves a significant improvement in success rate when transferred to grasping with the real Allegro Hand.
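The sample-then-filter inference described above can be sketched as a simple loop: draw latent codes from the CVAE prior, decode each into a grasping plan, roll the plan out in simulation, and keep only the successful ones. All names here (`decode`, `simulate`, the latent dimension) are hypothetical placeholders, not the paper's implementation.

```python
import random

def sample_grasp_plans(decode, simulate, num_samples=20, seed=0):
    """Hedged sketch of sample-then-filter inference (names hypothetical):
    sample latent codes, decode each into a grasping plan, roll it out
    in simulation, and keep the plans that succeed."""
    rng = random.Random(seed)
    kept = []
    for _ in range(num_samples):
        z = [rng.gauss(0.0, 1.0) for _ in range(8)]  # latent code ~ N(0, I)
        plan = decode(z)          # continuous grasping plan generated by CGF
        if simulate(plan):        # stand-in for a simulated grasp-success check
            kept.append(plan)
    return kept

# Toy stand-ins: the "plan" is just the latent, "success" is a norm test.
toy_decode = lambda z: z
toy_simulate = lambda plan: sum(v * v for v in plan) < 8.0
plans = sample_grasp_plans(toy_decode, toy_simulate)
```

Only the plans that pass the simulated check would be executed on the real robot.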

Video

Network Architecture

Our generative model takes an object point cloud and a sequence of joint positions as input and reconstructs the corresponding robot hand poses. The proposed CGF takes the latent code z, the object feature, and a query time t as inputs and predicts the corresponding joint positions.
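The decoder described above can be sketched as a small MLP that maps the concatenation of (z, object feature, t) to joint positions; because t is a continuous input, querying a dense grid of times yields a smooth trajectory. All shapes and the two-layer network are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def cgf_decoder(z, obj_feat, t, params):
    """Hedged sketch of a CGF-style decoder (hypothetical shapes):
    concatenate latent code z, object feature, and scalar time t,
    then map through a small MLP to joint positions."""
    x = np.concatenate([z, obj_feat, [t]])
    h = np.tanh(params["W1"] @ x + params["b1"])
    return params["W2"] @ h + params["b2"]  # e.g. 16 Allegro joint angles

rng = np.random.default_rng(0)
z_dim, f_dim, h_dim, j_dim = 8, 32, 64, 16  # assumed dimensions
params = {
    "W1": rng.normal(size=(h_dim, z_dim + f_dim + 1)) * 0.1,
    "b1": np.zeros(h_dim),
    "W2": rng.normal(size=(j_dim, h_dim)) * 0.1,
    "b2": np.zeros(j_dim),
}
z = rng.normal(size=z_dim)        # latent code sampled from the CVAE prior
obj_feat = rng.normal(size=f_dim)  # stand-in for an object point-cloud feature

# Querying a dense grid of continuous times yields a smooth grasping plan.
trajectory = np.stack([cgf_decoder(z, obj_feat, t, params)
                       for t in np.linspace(0.0, 1.0, 50)])
print(trajectory.shape)  # (50, 16)
```

The key property this illustrates is that the plan's temporal resolution is chosen at query time, not fixed by the training data.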

Results

Simulation Results

Thanks to human demonstrations, our CGF generates more natural and plausible trajectories, which aids sim-to-real transfer.



More simulation results



Real-world Results

Our CGF successfully transfers trajectories validated in simulation to the real robot.



More real-world results


Test on Unseen Objects

Our CGF can generalize to unseen objects.