News
[Jun. 2023] One paper is accepted by IROS 2023.
[Jun. 2023] Our paper RH20T is accepted by the RSS 2023 Workshop on Learning for Task and Motion Planning.
[Apr. 2023] Our paper AnyGrasp is accepted by T-RO.
[Feb. 2023] Our paper "Target-referenced Reactive Grasping for Dynamic Objects" is accepted by CVPR 2023.
[Feb. 2023] Our paper TransCG is accepted by ICRA 2023 as an RA-L submission. See you in London!
[Dec. 2022] The preprint version of our paper AnyGrasp is released. For more details, see the AnyGrasp official page.
[Jun. 2022] Our paper TransCG is accepted by RA-L. For more details, see the TransCG official page.
[Aug. 2021] Our paper Graspness is accepted by ICCV 2021.
|
|
Flexible Handover with Real-Time Robust Dynamic Grasp Trajectory Generation
Gu Zhang,
Hao-Shu Fang,
Hongjie Fang,
Cewu Lu
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2023  
paper / bibtex
Proposes an approach for effective and robust flexible handover, enabling a robot to grasp objects moving along flexible motion trajectories with a high success rate. The key innovation is the real-time generation of robust grasp trajectories. Also designs a future grasp prediction algorithm to enhance the system's adaptability to dynamic handover scenes.
|
|
RH20T: A Robotic Dataset for Learning Diverse Skills in One-Shot
Hao-Shu Fang,
Hongjie Fang,
Zhenyu Tang,
Jirong Liu,
Junbo Wang,
Haoyi Zhu,
Cewu Lu
Robotics: Science and Systems (RSS) [Workshop: Learning for Task and Motion Planning], 2023
paper / API / project page /
bibtex
Collects a dataset comprising over 110,000 contact-rich robot manipulation sequences across diverse skills, contexts, robots, and camera viewpoints, all collected in the real world. Each sequence includes visual, force, audio, and action information, along with a corresponding human demonstration video. Puts significant effort into calibrating all sensors to ensure a high-quality dataset.
|
|
Target-Referenced Reactive Grasping for Dynamic Objects
Jirong Liu,
Ruo Zhang,
Hao-Shu Fang,
Minghao Gou,
Hongjie Fang,
Chenxi Wang,
Sheng Xu,
Hengxu Yan,
Cewu Lu
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2023  
paper / code /
project page /
bibtex
Focuses on the semantic consistency, rather than the temporal smoothness, of predicted grasp poses during reactive grasping. Solves the reactive grasping problem in a target-referenced setting by tracking through generated grasp spaces.
|
|
AnyGrasp: Robust and Efficient Grasp Perception in Spatial and Temporal Domains
Hao-Shu Fang,
Chenxi Wang,
Hongjie Fang,
Minghao Gou,
Jirong Liu,
Hengxu Yan,
Wenhai Liu,
Yichen Xie,
Cewu Lu
IEEE Transactions on Robotics (T-RO), 2023
paper /
SDK /
project page /
bibtex
Proposes AnyGrasp, a powerful model for general grasping in both static and dynamic scenes. AnyGrasp efficiently generates accurate, full-DoF, dense, and temporally smooth grasp poses, and works robustly under large depth-sensing noise.
|
|
TransCG: A Large-Scale Real-World Dataset for Transparent Object Depth Completion and a Grasping Baseline
Hongjie Fang,
Hao-Shu Fang,
Sheng Xu,
Cewu Lu
IEEE Robotics and Automation Letters (RA-L), 2022  
IEEE International Conference on Robotics and Automation (ICRA), 2023  
paper /
code /
project page /
bibtex
Proposes TransCG, a large-scale real-world dataset for transparent object depth completion, along with DFNet, a depth completion method built on the TransCG dataset.
|
|
Graspness Discovery in Clutters for Fast and Accurate Grasp Detection
Chenxi Wang,
Hao-Shu Fang,
Minghao Gou,
Hongjie Fang,
Jin Gao,
Cewu Lu
IEEE International Conference on Computer Vision (ICCV), 2021  
paper /
bibtex
Proposes graspness, a quality based on geometry cues that distinguishes graspable areas in cluttered scenes and can be measured by a look-ahead searching method. Also proposes a graspness model that approximates the graspness value to quickly detect grasps in practice.
|
|
EasyRobot
Hongjie Fang
research project, under active development
code
Provides an easy and unified interface for robots, grippers, sensors and pedals.
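A rough sketch of what such a unified hardware interface could look like; all class and method names below (Robot, Gripper, move_to, etc.) are hypothetical illustrations, not the actual EasyRobot API:

# Hypothetical sketch of a unified hardware interface in the spirit of
# EasyRobot; the names here are illustrative assumptions, not the real API.

class Gripper:
    # Concrete drivers for different gripper models would subclass this
    # and implement the same two calls.
    def open(self):
        raise NotImplementedError

    def close(self, force=1.0):
        raise NotImplementedError

class Robot:
    # Concrete arm drivers implement one shared motion call.
    def move_to(self, pose, blocking=True):
        raise NotImplementedError

def pick(robot, gripper, grasp_pose):
    # High-level routines are written once against the shared interface
    # and work with any robot/gripper driver pair.
    gripper.open()
    robot.move_to(grasp_pose, blocking=True)
    gripper.close(force=0.5)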
|
|
Oh-My-Papers
Hongjie Fang, Zhanda Zhu, Haoran Zhao
course project of the SJTU undergraduate course "Mobile Internet"
code / demo / report
Proposes that "jargon" terms like "ResNet" and "YOLO" can be learned from academic paper citation information, and that such citation information can be regarded as the search results for the corresponding term. For example, a search for "ResNet" should return "Deep Residual Learning for Image Recognition", rather than papers that merely contain the word "ResNet" in their titles, as current scholar search engines commonly return. A minimal sketch of this lookup idea follows below.
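A minimal Python sketch of the citation-based lookup idea; the toy citation data and helper names are hypothetical, and a real system would mine citation contexts from paper reference lists:

# Map a jargon term to the paper most frequently cited under that name.
from collections import Counter

# (jargon term used in the citing text, title of the cited paper) -- toy data
citations = [
    ("ResNet", "Deep Residual Learning for Image Recognition"),
    ("ResNet", "Deep Residual Learning for Image Recognition"),
    ("ResNet", "Some Paper With ResNet in the Title"),
    ("YOLO", "You Only Look Once: Unified, Real-Time Object Detection"),
]

def lookup(term):
    # Count how often each paper is cited under this term and return the
    # most common one, i.e. the paper the community means by the term.
    counts = Counter(title for t, title in citations if t == term)
    return counts.most_common(1)[0][0]

print(lookup("ResNet"))  # -> "Deep Residual Learning for Image Recognition"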
|
Services
Reviewer: ICRA 2023, IROS 2023.
|
More about Me
Some of my notes: Notes
|
The website is built upon this template.
|
|