Programming by Visual Demonstration for Pick-and-Place Tasks using Robot Skills
Document Type | Conference Paper
Authors | Hao P (郝鹏) 2,3
Publication Year | 2019
Conference Date | 2019-12-06
Conference Venue | Dali, Yunnan, China
Abstract | In this paper, we present a vision-based robot programming system for pick-and-place tasks that can generate programs from human demonstrations. The system consists of a detection network and a program generation module. The detection network leverages convolutional pose machines to detect the keypoints of the objects. The network is trained in a simulation environment in which the training set is collected and auto-labeled. To bridge the gap between reality and simulation, we propose a method for designing a transform function that maps real images into the synthesized style. After mapping, the Mean Absolute Error (MAE) of the model trained entirely on synthesized images is reduced by 23%, and the False Negative Rate (FNR) of the model fine-tuned on real images is reduced by 42.5%, compared with the unmapped results. The program generation module produces a human-readable program from the detection results to reproduce a real-world demonstration; within it, a long-short memory (LSM) module is designed to integrate current and historical information. The system is tested in the real world with a UR5 robot on the task of stacking colored cubes in different orders.
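The record does not give the concrete form of the real-to-synthesized transform function, so the sketch below is only one plausible illustration, assuming OpenCV and a smoothing-plus-color-quantization mapping of my own choosing (not the authors' design): a real camera frame is pushed toward the flat, low-texture look of simulator renders, so that a detector trained on synthesized images transfers better.

```python
# Minimal sketch of a real-to-synthesized style mapping (an illustrative
# assumption, not the transform function proposed in the paper).
import cv2
import numpy as np

def to_synthesized_style(bgr: np.ndarray, levels: int = 8) -> np.ndarray:
    """Map a real BGR image toward a simulator-like appearance.

    Bilateral filtering removes sensor noise and fine texture while keeping
    object edges sharp; uniform color quantization then flattens shading into
    a few bands, mimicking the simple materials of a simulated scene.
    """
    smoothed = cv2.bilateralFilter(bgr, d=9, sigmaColor=75, sigmaSpace=75)
    step = 256 // levels                      # width of each quantization band
    quantized = (smoothed // step) * step + step // 2
    return quantized.astype(np.uint8)

if __name__ == "__main__":
    frame = cv2.imread("real_scene.png")      # input path is illustrative
    if frame is not None:
        cv2.imwrite("mapped_scene.png", to_synthesized_style(frame))
```

Whatever form the actual transform takes, it only needs to be cheap and deterministic, since it is applied to every real frame before keypoint detection.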
Source URL | http://ir.ia.ac.cn/handle/173211/50910
Collection | Intelligent Robot Systems Research
Corresponding Author | Wang S (王硕)
Affiliations | 1. CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences 2. School of Artificial Intelligence, University of Chinese Academy of Sciences 3. Institute of Automation, Chinese Academy of Sciences
Recommended Citation (GB/T 7714) | Hao P, Lu T, Cai YH, et al. Programming by Visual Demonstration for Pick-and-Place Tasks using Robot Skills[C]. Dali, Yunnan. 2019-12-06.
Deposit Method: OAI harvesting
Source: Institute of Automation