We propose to learn to generate grasping motion for manipulation with a dexterous hand using implicit functions. Given continuous time inputs, the model generates a continuous and smooth grasping plan. We name the proposed model Continuous Grasping Function (CGF). CGF is learned via generative modeling with a Conditional Variational Autoencoder (CVAE) from 3D human demonstrations. We first convert large-scale human-object interaction trajectories into robot demonstrations via motion retargeting, and then use these demonstrations to train CGF. During inference, we sample from CGF to generate different grasping plans in the simulator and select the successful ones to transfer to the real robot. By training on diverse human data, CGF generalizes to manipulating multiple objects. Compared to previous planning algorithms, CGF is more efficient and achieves a significantly higher success rate when transferred to grasping with the real Allegro Hand.
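As a rough illustration of the inference procedure described above, the sketch below samples several latent codes, decodes each into a continuous grasp plan by querying time in [0, 1], and keeps the plans that succeed in simulation. The cgf, encode_object, and simulate_grasp interfaces and all dimensions are assumptions made for illustration, not the released code.

import torch

def generate_grasp_plans(cgf, encode_object, simulate_grasp, points,
                         num_samples=32, num_steps=100, z_dim=64):
    # Hypothetical interfaces: encode_object maps a point cloud to a global
    # feature, cgf decodes (z, object feature, time) into joint positions,
    # and simulate_grasp rolls a plan out in the simulator.
    obj_feat = encode_object(points)                          # (1, D)
    t = torch.linspace(0.0, 1.0, num_steps).view(1, -1, 1)    # continuous time queries
    successful = []
    for _ in range(num_samples):
        z = torch.randn(1, z_dim)                             # sample a latent grasp code
        plan = cgf(z.unsqueeze(1).expand(-1, num_steps, -1),
                   obj_feat.unsqueeze(1).expand(-1, num_steps, -1),
                   t)                                         # (1, T, n_joints)
        if simulate_grasp(plan):                              # keep only successful rollouts
            successful.append(plan)
    return successful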
Our generative model takes an object point cloud and a sequence of joint positions as input and reconstructs the corresponding robot hand poses. The proposed CGF takes the latent code z, the object feature, and a query time t as inputs and predicts the corresponding joint positions.
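For concreteness, a minimal PyTorch sketch of this CVAE structure is given below: a PointNet-style encoder produces the object feature, a trajectory encoder produces the latent distribution, and the CGF decoder maps (z, object feature, t) to joint positions. Layer widths, the joint count, and module names are illustrative assumptions rather than the exact architecture.

import torch
import torch.nn as nn

class ObjectEncoder(nn.Module):
    # PointNet-style encoder: (B, N, 3) point cloud -> (B, D) global object feature.
    def __init__(self, feat_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                 nn.Linear(64, 128), nn.ReLU(),
                                 nn.Linear(128, feat_dim))
    def forward(self, pts):
        return self.mlp(pts).max(dim=1).values    # per-point MLP + max pooling

class TrajectoryEncoder(nn.Module):
    # Encodes a demonstrated joint trajectory and the object feature into q(z | x).
    def __init__(self, n_joints=22, horizon=100, feat_dim=256, z_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_joints * horizon + feat_dim, 512), nn.ReLU(),
                                 nn.Linear(512, 2 * z_dim))
    def forward(self, traj, obj_feat):            # traj: (B, T, n_joints)
        h = self.net(torch.cat([traj.flatten(1), obj_feat], dim=-1))
        mu, logvar = h.chunk(2, dim=-1)           # Gaussian posterior parameters
        return mu, logvar

class CGF(nn.Module):
    # Decoder: latent code z, object feature, and continuous query time t -> joint positions.
    # The joint count is an assumption (e.g., arm plus Allegro finger joints).
    def __init__(self, z_dim=64, feat_dim=256, n_joints=22):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim + feat_dim + 1, 512), nn.ReLU(),
                                 nn.Linear(512, 512), nn.ReLU(),
                                 nn.Linear(512, n_joints))
    def forward(self, z, obj_feat, t):            # t can take any value in [0, 1]
        return self.net(torch.cat([z, obj_feat, t], dim=-1))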
Thanks to human demonstrations, our CGF generates more natural and reasonable trajectories, which facilitates sim-to-real transfer.
Our CGF successfully transfers trajectories from simulation to the real robot.
Our CGF can generalize to unseen objects.
Our method produces diverse grasps, and the interpolations between them are also plausible.
@article{ye2023learning,
  title={Learning Continuous Grasping Function with a Dexterous Hand from Human Demonstrations},
  author={Ye, Jianglong and Wang, Jiashun and Huang, Binghao and Qin, Yuzhe and Wang, Xiaolong},
  journal={IEEE Robotics and Automation Letters},
  year={2023}
}