UniGen: Unified Generative Pre-training for Multilingual Multimodal Representation
Document type: Conference paper
Authors | Zheyuan, Tian1,2 |
Publication date | 2024-03 |
Conference dates | 2024.03.15-2024.03.18 |
Conference venue | Waseda University, Tokyo, Japan |
Abstract (English) | Multilingual multimodal pre-training has garnered significant attention, but it faces challenges due to the substantial need for diverse multilingual text-image data, especially for minor languages. This article introduces UniGen, a unified strategy for efficient multilingual multimodal pre-training inspired by observations of internet data distribution. Leveraging the richer availability and higher quality of multilingual text-English text and English text-image data, UniGen aligns the latent space of multilingual text with visual information in a unified semantic space. This alignment, with English as a reference, proves effective in enhancing cross-modal understanding. UniGen reduces reliance on multilingual text-image data, surpassing comparable models on the multilingual multimodal benchmark IGLUE by a notable 7%. Notably, UniGen is the first multilingual multimodal model to unify all pre-training tasks within a generative pre-training framework. |
Source URL | http://ir.ia.ac.cn/handle/173211/57096 |
Collection | Institute of Automation, State Key Laboratory of Pattern Recognition, Video Content Security Team |
Corresponding author | Guan, Luo |
Author affiliations | 1. State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China; 2. School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China; 3. School of Information Science and Technology, ShanghaiTech University, Shanghai, China |
Recommended citation (GB/T 7714) | Zheyuan, Tian, Guan, Luo, Bo, Wang, et al. UniGen: Unified Generative Pre-training for Multilingual Multimodal Representation[C]. Waseda University, Tokyo, Japan, 2024.03.15-2024.03.18. |
Deposit method: OAI harvesting
Source: Institute of Automation
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.