Chinese Academy of Sciences Institutional Repositories Grid
Text-to-Image Vehicle Re-Identification: Multi-Scale Multi-View Cross-Modal Alignment Network and a Unified Benchmark

Document Type: Journal Article

Authors: Ding, Leqi4,5; Liu, Lei4,5; Huang, Yan3; Li, Chenglong4,5; Zhang, Cheng2; Wang, Wei1; Wang, Liang3
Journal: IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS
Publication Date: 2024-01-16
Pages: 14
ISSN: 1524-9050
Keywords: Task analysis; Feature extraction; Visualization; Training; Electronic mail; Benchmark testing; Trajectory; Text-to-image vehicle re-identification; Cross-modal alignment; Multi-scale multi-view analysis; Benchmark dataset
DOI: 10.1109/TITS.2023.3348599
Corresponding Author: Li, Chenglong (lcl1314@foxmail.com)
Abstract: Vehicle Re-IDentification (Re-ID) aims to retrieve the images most similar to a given query vehicle image from a set of images captured by non-overlapping cameras. It plays a crucial role in intelligent transportation systems and has made impressive advances in recent years. In real-world scenarios, text descriptions of a target vehicle can often be acquired through witness accounts, after which image queries must be searched manually for vehicle Re-ID, which is time-consuming and labor-intensive. To solve this problem, this paper introduces a new fine-grained cross-modal retrieval task called text-to-image vehicle re-identification, which seeks to retrieve target vehicle images based on given text descriptions. To bridge the significant gap between the language and visual modalities, we propose a novel Multi-scale multi-view Cross-modal Alignment Network (MCANet). In particular, we incorporate view masks and multi-scale features to align image and text features in a progressive way. In addition, we design the Masked Bidirectional InfoNCE (MB-InfoNCE) loss to enhance training stability and make the best use of negative samples. To provide an evaluation platform for text-to-image vehicle re-identification, we create a Text-to-Image Vehicle Re-Identification dataset (T2I VeRi), which contains 2465 image-text pairs from 776 vehicles with an average sentence length of 26.8 words. Extensive experiments conducted on T2I VeRi demonstrate that MCANet outperforms the current state-of-the-art (SOTA) method by 2.2% in rank-1 accuracy.
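The abstract names a Masked Bidirectional InfoNCE (MB-InfoNCE) loss but does not specify its formulation. As a rough illustration only, a generic masked, symmetric (image-to-text plus text-to-image) InfoNCE objective might look like the sketch below; the function name, mask semantics, and temperature value are assumptions, not the paper's exact loss.

```python
import numpy as np

def masked_bidirectional_infonce(img_emb, txt_emb, mask=None, temperature=0.07):
    """Illustrative sketch of a masked, bidirectional InfoNCE loss.

    img_emb, txt_emb: (N, D) arrays of paired embeddings; row i of each
    is a matching image-text pair. mask: optional (N, N) boolean array;
    True entries are excluded from the negative set. This is a generic
    contrastive objective, NOT the paper's exact MB-InfoNCE.
    """
    # L2-normalize so dot products are cosine similarities.
    img = img_emb / np.linalg.norm(img_emb, axis=-1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=-1, keepdims=True)
    # Pairwise similarity logits, shape (N, N); the diagonal holds positives.
    logits = img @ txt.T / temperature
    if mask is not None:
        # Remove masked pairs from the negatives by sending them to -inf,
        # so they contribute zero weight in the softmax denominator.
        logits = np.where(mask, -np.inf, logits)

    def diag_cross_entropy(lg):
        # Numerically stable log-softmax over each row, scored at the diagonal.
        lg = lg - lg.max(axis=1, keepdims=True)
        log_prob = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        idx = np.arange(lg.shape[0])
        return -log_prob[idx, idx].mean()

    # Bidirectional: average the image-to-text and text-to-image directions.
    return 0.5 * (diag_cross_entropy(logits) + diag_cross_entropy(logits.T))
```

With well-separated matched embeddings the loss approaches zero, and masking out extra negatives can only lower the per-row denominator, so the masked loss never exceeds the unmasked one for the same inputs.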
Funding Project: National Natural Science Foundation of China
WOS Research Areas: Engineering; Transportation
Language: English
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
WOS Record Number: WOS:001167345700001
Funding Agency: National Natural Science Foundation of China
Source URL: [http://ir.ia.ac.cn/handle/173211/55625]
Collection: State Key Laboratory of Multimodal Artificial Intelligence Systems
Author Affiliations:
1.Video Invest Detachment Hefei Publ Secur Bur, Hefei, Peoples R China
2.Anhui Univ, Stony Brook Inst, Anhui Prov Key Lab Multimodal Cognit Computat, Hefei 230601, Peoples R China
3.Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
4.Anhui Univ, Sch Comp Sci & Technol, Hefei 230601, Peoples R China
5.Anhui Univ, Informat Mat & Intelligent Sensing Lab Anhui Prov, Anhui Prov Key Lab Multimodal Cognit Computat, Hefei 230601, Peoples R China
Recommended Citation
GB/T 7714
Ding, Leqi, Liu, Lei, Huang, Yan, et al. Text-to-Image Vehicle Re-Identification: Multi-Scale Multi-View Cross-Modal Alignment Network and a Unified Benchmark[J]. IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2024: 14.
APA Ding, Leqi, Liu, Lei, Huang, Yan, Li, Chenglong, Zhang, Cheng, ... & Wang, Liang. (2024). Text-to-Image Vehicle Re-Identification: Multi-Scale Multi-View Cross-Modal Alignment Network and a Unified Benchmark. IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 14.
MLA Ding, Leqi, et al. "Text-to-Image Vehicle Re-Identification: Multi-Scale Multi-View Cross-Modal Alignment Network and a Unified Benchmark". IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS (2024): 14.

Ingestion Method: OAI Harvesting

Source: Institute of Automation


Unless otherwise specified, all content in this system is protected by copyright, with all rights reserved.