A unified framework for multi-modal federated learning
Document type: Journal article
Authors | Xiong, Baochen1,5; Yang, Xiaoshan1,3,4 |
Journal | NEUROCOMPUTING |
Publication date | 2022-04-01 |
Volume | 480 |
Pages | 110-118 |
Keywords | Multi-modal; Federated learning; Co-attention |
ISSN | 0925-2312 |
DOI | 10.1016/j.neucom.2022.01.063 |
Corresponding author | Xiong, Baochen (bcxiong@yeah.net) |
Abstract | Federated Learning (FL) is a machine learning setting that separates data and protects user privacy. Clients learn global models together without data interaction. However, due to the lack of high-quality labeled data collected from the real world, most existing FL methods still rely on single-modal data. In this paper, we consider a new problem of multimodal federated learning. Although multimodal data always benefits from the complementarity of different modalities, it is difficult to solve the multimodal FL problem with traditional FL methods due to the modality discrepancy. Therefore, we propose a unified framework to solve it. In our framework, we use the co-attention mechanism to fuse the complementary information of different modalities. Our enhanced FL algorithm can learn useful global features of different modalities to jointly train common models for all clients. In addition, we use a personalization method based on Model-Agnostic Meta-Learning (MAML) to adapt the final model for each client. Extensive experimental results on multimodal activity recognition tasks demonstrate the effectiveness of the proposed method. (c) 2022 Elsevier B.V. All rights reserved. |
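The co-attention fusion described in the abstract can be illustrated with a minimal sketch. This is not the authors' actual architecture; it assumes a simple bilinear affinity between two modality feature sequences, softmax attention in both directions, and mean pooling, and all names (`co_attention_fuse`, `w`) are hypothetical.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax along the given axis.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def co_attention_fuse(x, y, w):
    """Fuse two modality feature sequences with a simple co-attention.

    x: (n, d) features of modality A (e.g., video frames)
    y: (m, d) features of modality B (e.g., sensor segments)
    w: (d, d) learnable bilinear affinity matrix
    Returns a single fused vector of shape (2 * d,).
    """
    affinity = x @ w @ y.T            # (n, m) cross-modal affinity scores
    a_over_b = softmax(affinity, axis=1)    # each A step attends over B
    b_over_a = softmax(affinity.T, axis=1)  # each B step attends over A
    x_ctx = (a_over_b @ y).mean(axis=0)     # B-context for A, mean-pooled
    y_ctx = (b_over_a @ x).mean(axis=0)     # A-context for B, mean-pooled
    return np.concatenate([x_ctx, y_ctx])

rng = np.random.default_rng(0)
fused = co_attention_fuse(rng.normal(size=(5, 8)),
                          rng.normal(size=(3, 8)),
                          rng.normal(size=(8, 8)))
print(fused.shape)  # (16,)
```

In a federated setting, the fused representation would feed a shared classifier head, while the MAML-based personalization step would fine-tune the aggregated global model on each client's local data.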
Funding projects | National Key Research and Development Program of China [2018AAA0100604]; National Natural Science Foundation of China [61720106006]; National Natural Science Foundation of China [62072455]; National Natural Science Foundation of China [61721004]; National Natural Science Foundation of China [U1836220]; National Natural Science Foundation of China [U1705262]; National Natural Science Foundation of China [61872424] |
WOS research area | Computer Science |
Language | English |
WOS accession number | WOS:000761796800009 |
Publisher | ELSEVIER |
Funding agencies | National Key Research and Development Program of China; National Natural Science Foundation of China |
Source URL | http://ir.ia.ac.cn/handle/173211/48084 |
Collection | Institute of Automation, National Laboratory of Pattern Recognition, Multimedia Computing and Graphics Team |
Author affiliations | 1. Peng Cheng Lab, Shenzhen, Peoples R China; 2. Hefei Univ Technol, Hefei, Peoples R China; 3. Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing, Peoples R China; 4. Chinese Acad Sci, Inst Automat, NLPR, Beijing, Peoples R China; 5. Zhengzhou Univ, Henan Inst Adv Technol, Zhengzhou, Peoples R China |
Recommended citation (GB/T 7714) | Xiong, Baochen, Yang, Xiaoshan, Qi, Fan, et al. A unified framework for multi-modal federated learning[J]. NEUROCOMPUTING, 2022, 480: 110-118. |
APA | Xiong, Baochen, Yang, Xiaoshan, Qi, Fan, & Xu, Changsheng. (2022). A unified framework for multi-modal federated learning. NEUROCOMPUTING, 480, 110-118. |
MLA | Xiong, Baochen, et al. "A unified framework for multi-modal federated learning". NEUROCOMPUTING 480 (2022): 110-118. |
Ingestion method: OAI harvesting
Source: Institute of Automation