Chinese Academy of Sciences Institutional Repositories Grid
Multimodal Pretraining from Monolingual to Multilingual

Document Type: Journal Article

Authors: Liang Zhang, Ludan Ruan, Anwen Hu, Qin Jin
Journal: Machine Intelligence Research
Publication Date: 2023
Volume: 20, Issue: 2, Pages: 220-232
ISSN: 2731-538X
Keywords: Multilingual pretraining, multimodal pretraining, cross-lingual transfer, multilingual generation, cross-modal retrieval
DOI: 10.1007/s11633-022-1414-4
Abstract: Multimodal pretraining has made convincing achievements in various downstream tasks in recent years. However, since the majority of the existing works construct models based on English, their applications are limited by language. In this work, we address this issue by developing models with multimodal and multilingual capabilities. We explore two types of methods to extend multimodal pretraining models from monolingual to multilingual. Specifically, we propose a pretraining-based model named multilingual multimodal pretraining (MLMM), and two generalization-based models named multilingual CLIP (M-CLIP) and multilingual acquisition (MLA). In addition, we further extend the generalization-based models to incorporate the audio modality and develop the multilingual CLIP for vision, language, and audio (CLIP4VLA). Our models achieve state-of-the-art performance on multilingual vision-text retrieval, visual question answering, and image captioning benchmarks. Based on the experimental results, we discuss the pros and cons of the two types of models and their potential practical applications.
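The cross-modal retrieval task named in the abstract is typically scored, in CLIP-style models, by cosine similarity between L2-normalized text and image embeddings. The following is a minimal illustrative sketch of that retrieval step only, not code from the paper; the toy embeddings and function names are hypothetical stand-ins for real encoder outputs.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    # Scale embeddings to unit length so dot products equal cosine similarity.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def retrieve(text_emb, image_embs):
    # Rank candidate images by cosine similarity to one text-query embedding.
    sims = l2_normalize(image_embs) @ l2_normalize(text_emb)
    return np.argsort(-sims), sims

# Toy embeddings (hypothetical; a real system would use encoder outputs).
rng = np.random.default_rng(0)
image_embs = rng.normal(size=(5, 8))
# Make the query a slight perturbation of image 2's embedding.
text_emb = image_embs[2] + 0.01 * rng.normal(size=8)
ranking, sims = retrieve(text_emb, image_embs)
print(ranking[0])  # index of the best-matching image
```

In a multilingual setting, the same scoring applies unchanged as long as text encoders for different languages map into the shared embedding space.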
Source URL: [http://ir.ia.ac.cn/handle/173211/51477]
Collection: Institute of Automation, Academic Journals, International Journal of Automation and Computing
Author Affiliation: School of Information, Renmin University of China, Beijing 100872, China
Recommended Citation:
GB/T 7714
Liang Zhang, Ludan Ruan, Anwen Hu, Qin Jin. Multimodal Pretraining from Monolingual to Multilingual[J]. Machine Intelligence Research, 2023, 20(2): 220-232.
APA: Liang Zhang, Ludan Ruan, Anwen Hu, Qin Jin. (2023). Multimodal Pretraining from Monolingual to Multilingual. Machine Intelligence Research, 20(2), 220-232.
MLA: Liang Zhang, Ludan Ruan, Anwen Hu, Qin Jin. "Multimodal Pretraining from Monolingual to Multilingual". Machine Intelligence Research 20.2 (2023): 220-232.

Deposit Method: OAI harvesting

Source: Institute of Automation


Unless otherwise noted, all content in this system is protected by copyright, with all rights reserved.