Chinese Academy of Sciences Institutional Repositories Grid
Self-Training Based Semi-Supervised and Semi-Paired Hashing Cross-Modal Retrieval

Document Type: Conference Paper

Authors: Rongrong Jing (1,2); Hu Tian (1,2); Xingwei Zhang (1,2); Gang Zhou (1,2); Xiaolong Zheng (1,2); Dajun Zeng (1,2)
Publication Date: 2022-07
Conference Date: 2022-07
Conference Venue: Padua, Italy
Abstract (English)

Cross-modal retrieval aims to flexibly search for results across different types of multimedia data. In practical applications, however, labeled data is usually limited and not well paired across modalities. Existing works do not address these issues well, as they cannot simultaneously exploit the semantic information of unlabeled and unpaired data. Self-training is a well-known strategy for handling semi-supervised problems. Motivated by self-training, this paper proposes a self-training-based cross-modal hashing framework (STCH) to tackle the semi-supervised and semi-paired challenges. In the framework, graph neural networks capture potential intra-modality and inter-modality similarities to produce pseudo labels. Inconsistent pseudo labels across modalities are then refined with a heuristic filter to enhance model robustness. To train STCH, we propose an alternating learning strategy that conducts self-training by predicting pseudo labels during the training procedure, which can be seamlessly incorporated into semi-supervised and supervised learning. In this way, the proposed method can leverage sufficient semantic information to enhance the semi-supervised effect and address the semi-paired problem. Experiments on real-world datasets demonstrate that our approach outperforms related methods on cross-modal hashing retrieval.
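The core idea in the abstract — produce pseudo labels per modality and discard those on which the modalities disagree — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names, the confidence threshold, and the use of raw classifier logits are all assumptions; STCH additionally derives its pseudo labels from graph neural networks and hashing losses not shown here.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax over class logits."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def pseudo_label_round(img_logits, txt_logits, threshold=0.8):
    """One self-training round over unlabeled image-text pairs.

    Each modality's classifier proposes a label; a heuristic filter
    (loosely mirroring STCH's refinement step) keeps a pair only when
    both modalities agree AND both are confident. Returns the accepted
    pseudo labels and a boolean mask marking which pairs were kept.
    """
    img_prob = softmax(img_logits)
    txt_prob = softmax(txt_logits)
    img_lbl = img_prob.argmax(axis=1)
    txt_lbl = txt_prob.argmax(axis=1)
    # Confidence of a pair = the weaker of the two modality confidences.
    conf = np.minimum(img_prob.max(axis=1), txt_prob.max(axis=1))
    keep = (img_lbl == txt_lbl) & (conf >= threshold)
    return img_lbl[keep], keep

# Three unlabeled pairs: confident agreement, disagreement, weak agreement.
img = np.array([[5.0, 0.0], [5.0, 0.0], [0.1, 0.0]])
txt = np.array([[4.0, 0.0], [0.0, 5.0], [0.2, 0.0]])
labels, keep = pseudo_label_round(img, txt)
# Only the first pair survives the filter and joins the labeled set.
```

In an alternating scheme like the one the abstract describes, the surviving pseudo-labeled pairs would be appended to the supervised set before the next training epoch, gradually expanding the usable semantic supervision.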

Source URL: [http://ir.ia.ac.cn/handle/173211/48818]
Collection: Institute of Automation, State Key Laboratory of Management and Control for Complex Systems, Research Center for Internet Big Data and Security Informatics
Corresponding Author: Xiaolong Zheng
Affiliations:
1. University of Chinese Academy of Sciences
2. Institute of Automation, Chinese Academy of Sciences
Recommended Citation
GB/T 7714
Rongrong Jing, Hu Tian, Xingwei Zhang, et al. Self-Training Based Semi-Supervised and Semi-Paired Hashing Cross-Modal Retrieval[C]. Padua, Italy, 2022-07.

Deposit Method: OAI Harvesting

Source: Institute of Automation


Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.