Weakly Paired Multimodal Fusion for Object Recognition
Document Type: Journal Article
Authors | Liu, Huaping1,2,3; Wu, Yupei1,2,3; Sun, Fuchun1,2,3; Fang, Bin1,2,3; Guo, Di1,2,3 |
Journal | IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING
Publication Date | 2018-04-01 |
Volume | 15; Issue: 2; Pages: 784-795 |
Keywords | Intelligent robot system; manipulation and grasping; multimodal data; projective dictionary learning; weakly paired data |
ISSN | 1545-5955 |
DOI | 10.1109/TASE.2017.2692271 |
Corresponding Author | Liu, Huaping (hpliu@tsinghua.edu.cn) |
Abstract | The ever-growing development of sensor technology has led to the use of multimodal sensors in robotics and automation systems. It is therefore highly desirable to develop methodologies capable of integrating information from multimodal sensors with the goal of improving the performance of surveillance, diagnosis, prediction, and so on. However, real multimodal data often exhibit significant weak-pairing characteristics, i.e., the full pairing between data samples may not be known, while the pairing of a group of samples from one modality to a group of samples in another modality is known. In this paper, we establish a novel projective dictionary learning framework for weakly paired multimodal data fusion. By introducing a latent pairing matrix, we realize simultaneous dictionary learning and pairing matrix estimation, and therefore improve the fusion effect. In addition, the kernelized version and the optimization algorithms are also addressed. Extensive experimental validations on existing data sets are performed to show the advantages of the proposed method. Note to Practitioners: In many industrial environments, we usually use multiple heterogeneous sensors, which provide multimodal information. Such multimodal data usually lead to two technical challenges. First, different sensors may provide different patterns of data. Second, the full-pairing information between modalities may not be known. In this paper, we develop a unified model to tackle such problems. This model is based on a projective dictionary learning method, which efficiently produces the representation vector for the original data in an explicit form. In addition, the latent pairing relation between samples can be learned automatically and used to improve classification performance. The method can be flexibly applied to multimodal fusion in full-pairing, partial-pairing, and weak-pairing cases. |
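The abstract's core idea, a projective (analysis) dictionary that produces representation vectors through an explicit linear map A = P X rather than per-sample sparse coding, can be illustrated with a toy single-modality sketch. The function name, the plain Frobenius-norm objective, and the ridge regularizer below are illustrative assumptions, not the paper's exact formulation (which additionally handles multiple modalities and a latent pairing matrix):

```python
import numpy as np

def projective_dictionary_learning(X, k, n_iters=50, lam=1e-3):
    """Toy alternating least-squares sketch (assumed objective, not the
    paper's): learn a synthesis dictionary D (d x k) and an analysis
    ("projective") dictionary P (k x d) so that X ~= D @ (P @ X).
    The code A = P @ X comes from an explicit linear projection, which is
    what makes projective dictionary learning efficient at test time."""
    d, n = X.shape
    rng = np.random.default_rng(0)
    P = 0.1 * rng.standard_normal((k, d))
    for _ in range(n_iters):
        A = P @ X                                   # explicit coding step
        # D-step: ridge solution of min_D ||X - D A||_F^2 + lam ||D||_F^2
        D = X @ A.T @ np.linalg.inv(A @ A.T + lam * np.eye(k))
        # P-step: pick target codes via least squares, A_target = D^+ X,
        # then fit P by ridge regression so that P X ~= A_target
        A_target = np.linalg.lstsq(D, X, rcond=None)[0]
        P = A_target @ X.T @ np.linalg.inv(X @ X.T + lam * np.eye(d))
    return D, P
```

For data that genuinely live in a k-dimensional subspace, the learned pair reconstructs X almost exactly; the explicit projection P is what distinguishes this family of methods from classical dictionary learning, whose coding step requires solving an optimization problem per sample.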
WOS Keywords | CLASSIFICATION |
Funding Projects | National Natural Science Foundation of China[U1613212] ; National Natural Science Foundation of China[61673238] ; National Natural Science Foundation of China[91420302] ; National Natural Science Foundation of China[61327809] ; National High-Tech Research and Development Plan[2015AA042306] ; National Science and Technology Pillar Program[2015BAK12B03] |
WOS Research Area | Automation & Control Systems |
Language | English |
WOS Record Number | WOS:000429217900030 |
Publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC |
Funding Organizations | National Natural Science Foundation of China ; National High-Tech Research and Development Plan ; National Science and Technology Pillar Program |
Source URL | [http://ir.ia.ac.cn/handle/173211/28250] |
Collection | Institute of Automation, State Key Laboratory of Management and Control for Complex Systems, Robot Application and Theory Group
Author Affiliations | 1. Chinese Acad Sci, Inst Automat, State Key Lab Management & Control Complex Syst, Beijing 100190, Peoples R China; 2. Tsinghua Univ, Tsinghua Natl Lab Informat Sci & Technol, State Key Lab Intelligent Technol & Syst, Beijing 100084, Peoples R China; 3. Tsinghua Univ, Dept Comp Sci & Technol, Beijing 100084, Peoples R China |
Recommended Citation (GB/T 7714) | Liu, Huaping, Wu, Yupei, Sun, Fuchun, et al. Weakly Paired Multimodal Fusion for Object Recognition[J]. IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING, 2018, 15(2): 784-795.
APA | Liu, Huaping, Wu, Yupei, Sun, Fuchun, Fang, Bin, & Guo, Di. (2018). Weakly Paired Multimodal Fusion for Object Recognition. IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING, 15(2), 784-795.
MLA | Liu, Huaping, et al. "Weakly Paired Multimodal Fusion for Object Recognition." IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING 15.2 (2018): 784-795.
Ingest Method: OAI Harvesting
Source: Institute of Automation
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.