Chinese Academy of Sciences Institutional Repositories Grid
Doubly Semi-Supervised Multimodal Adversarial Learning for Classification, Generation and Retrieval

Document Type: Conference Paper

Authors: Du CD (杜长德)1; Du CY (杜长营)2; He HG (何晖光)1
Publication Date: 2019
Conference Date: 2019/7/8
Conference Venue: Shanghai
English Abstract

Learning over incomplete multi-modal data is a challenging problem with strong practical applications. Most existing multi-modal data imputation approaches have two limitations: (1) they are unable to accurately control the semantics of imputed modalities; and (2) without a shared low-dimensional latent space, they do not scale well with multiple modalities. To overcome these limitations, we propose a novel doubly semi-supervised multi-modal learning framework (DSML) with a modality-shared latent space and modality-specific generators, encoders and classifiers. We design novel softmax-based discriminators to train all modules adversarially. As a unified framework, DSML can be applied to multi-modal semi-supervised classification, missing modality imputation and fast cross-modality retrieval simultaneously. Experiments on multiple datasets demonstrate its advantages.
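As an orientation to the architecture the abstract describes, below is a minimal PyTorch sketch of how the pieces could fit together: modality-specific encoders map each modality into a modality-shared latent space, modality-specific generators decode (impute) any modality from that space, modality-specific classifiers operate on the shared codes, and a softmax-based discriminator is trained adversarially to recognise which modality a code came from. The module names, layer sizes, example modalities and loss terms are illustrative assumptions based only on the abstract, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_DIM = 64                                 # assumed size of the shared latent space
MODALITY_DIMS = {"image": 784, "text": 300}     # assumed per-modality input sizes
NUM_CLASSES = 10                                # assumed number of classes

def mlp(in_dim, out_dim, hidden=256):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))

# Modality-specific encoders, generators and classifiers around one shared space.
encoders = nn.ModuleDict({m: mlp(d, LATENT_DIM) for m, d in MODALITY_DIMS.items()})
generators = nn.ModuleDict({m: mlp(LATENT_DIM, d) for m, d in MODALITY_DIMS.items()})
classifiers = nn.ModuleDict({m: mlp(LATENT_DIM, NUM_CLASSES) for m in MODALITY_DIMS})

# Softmax-based discriminator: guesses which modality a latent code came from.
modality_disc = mlp(LATENT_DIM, len(MODALITY_DIMS))
modality_index = {m: i for i, m in enumerate(MODALITY_DIMS)}

def unsupervised_losses(batch):
    """batch maps modality name -> tensor of shape (B, input_dim); applies to
    labelled and unlabelled samples alike."""
    latents = {m: encoders[m](x) for m, x in batch.items()}

    # Cross-modality generation/imputation: decode modality j from modality i's code.
    recon = sum(F.mse_loss(generators[j](z_i), batch[j])
                for i, z_i in latents.items() for j in batch)

    # Adversarial alignment: the discriminator learns to identify the source
    # modality; the encoders are updated elsewhere with the opposite objective,
    # so codes from different modalities become indistinguishable.
    disc = sum(F.cross_entropy(
                   modality_disc(z.detach()),
                   torch.full((z.size(0),), modality_index[m], dtype=torch.long))
               for m, z in latents.items())
    return recon, disc

def supervised_loss(batch, labels):
    # Semi-supervised classification: only the labelled subset contributes here.
    return sum(F.cross_entropy(classifiers[m](encoders[m](x)), labels)
               for m, x in batch.items())

def retrieve(query_mod, query_x, gallery_mod, gallery_x, k=5):
    # Fast cross-modality retrieval: nearest neighbours in the shared latent space.
    with torch.no_grad():
        q = F.normalize(encoders[query_mod](query_x), dim=1)
        g = F.normalize(encoders[gallery_mod](gallery_x), dim=1)
    return (q @ g.t()).topk(k, dim=1).indices

Under these assumptions, the same shared space serves all three tasks: imputation is a single encode-decode pass, and retrieval reduces to a nearest-neighbour search over precomputed codes, which is what would make cross-modality retrieval fast.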

Source Publication Author: IEEE
Language: English
Source URL: [http://ir.ia.ac.cn/handle/173211/51623]
Collection: Research Center for Brain-Inspired Intelligence, Neural Computation and Brain-Computer Interaction
Corresponding Author: He HG (何晖光)
Author Affiliations: 1. Institute of Automation, Chinese Academy of Sciences
2. Huawei Noah's Ark Lab, Beijing, China
Recommended Citation
GB/T 7714
Du CD, Du CY, He HG. Doubly Semi-Supervised Multimodal Adversarial Learning for Classification, Generation and Retrieval[C]. In: . Shanghai. 2019/7/8.

Deposit Method: OAI Harvesting

Source: Institute of Automation

