Chinese Academy of Sciences Institutional Repositories Grid
UniGen: Unified Generative Pre-training for Multilingual Multimodal Representation

Document Type: Conference Paper

Authors: Zheyuan, Tian1,2; Guan, Luo1,2; Bo, Wang1,2; Bing, Li1,2; Weiming, Hu1,2,3
Publication Date: 2024-03
Conference Date: 2024-03-15 to 2024-03-18
Conference Venue: Waseda University, Tokyo, Japan
Abstract (English)

Multilingual multimodal pre-training has garnered significant attention, but it faces challenges due to the substantial need for diverse multilingual text-image data, especially for minor languages. This article introduces UniGen, a unified strategy for efficient multilingual multimodal pre-training inspired by observations of internet data distribution. Leveraging the richer availability and higher quality of multilingual text-English text and English text-image data, UniGen aligns the latent space of multilingual text with visual information in a unified semantic space. This alignment, with English as a reference, proves effective in enhancing cross-modal understanding. UniGen reduces reliance on multilingual text-image data, surpassing comparable models on the multilingual multimodal benchmark IGLUE by a notable 7%. Notably, UniGen is the first multilingual multimodal model to unify all pre-training tasks within a generative pre-training framework.

CCS Concepts: • Computing methodologies: Machine learning; Machine learning approaches; Neural networks; Artificial intelligence; Com

Source URL: [http://ir.ia.ac.cn/handle/173211/57096]
Collection: Institute of Automation, State Key Laboratory of Pattern Recognition, Video Content Security Team
Corresponding Author: Guan, Luo
Affiliations: 1. State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences
2.School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
3.School of Information Science and Technology, ShanghaiTech University, Shanghai, China
Recommended Citation (GB/T 7714)
Zheyuan, Tian, Guan, Luo, Bo, Wang, et al. UniGen: Unified Generative Pre-training for Multilingual Multimodal Representation[C]. Waseda University, Tokyo, Japan, 2024-03-15 to 2024-03-18.

Deposit Method: OAI Harvesting

Source: Institute of Automation


Unless otherwise noted, all content in this system is protected by copyright, with all rights reserved.