Chinese Academy of Sciences Institutional Repositories Grid
FedCache: A Knowledge Cache-Driven Federated Learning Architecture for Personalized Edge Intelligence

Document Type: Journal Article

Authors: Wu, Zhiyuan1,2; Sun, Sheng1; Wang, Yuwei1; Liu, Min1,3; Xu, Ke3,4; Wang, Wen1,2; Jiang, Xuefeng1,2; Gao, Bo5; Lu, Jinda6
Journal: IEEE TRANSACTIONS ON MOBILE COMPUTING
Publication Date: 2024-10-01
Volume: 23; Issue: 10; Pages: 9368-9382
Keywords: Computer architecture; Training; Servers; Computational modeling; Data models; Adaptation models; Performance evaluation; Distributed architecture; edge computing; personalized federated learning; knowledge distillation; communication efficiency
ISSN: 1536-1233
DOI: 10.1109/TMC.2024.3361876
Abstract: Edge Intelligence (EI) allows Artificial Intelligence (AI) applications to run at the edge, where data analysis and decision-making can be performed in real time and close to data sources. To protect data privacy and unify data silos distributed among end devices in EI, Federated Learning (FL) is proposed for collaborative training of shared AI models across multiple devices without compromising data privacy. However, the prevailing FL approaches cannot guarantee model generalization and adaptation on heterogeneous clients. Recently, Personalized Federated Learning (PFL) has drawn growing attention in EI, as it enables a productive balance between the local-specific training requirements inherent in devices and the global-generalized optimization objectives needed for satisfactory performance. However, most existing PFL methods are based on the Parameters Interaction-based Architecture (PIA), represented by FedAvg, which suffers from unaffordable communication burdens due to large-scale parameter transmission between devices and the edge server. In contrast, the Logits Interaction-based Architecture (LIA) updates model parameters through logits transfer and, compared to PIA, offers lightweight communication and support for heterogeneous on-device models. Nevertheless, previous LIA methods attempt to achieve satisfactory performance either by relying on unrealistic public datasets or by increasing communication overhead to transmit additional information other than logits. To tackle this dilemma, we propose a knowledge cache-driven PFL architecture, named FedCache, which reserves a knowledge cache on the server for fetching personalized knowledge from the samples with hashes similar to each given on-device sample. During the training phase, ensemble distillation is applied to on-device models for constructive optimization with personalized knowledge transferred from the server-side knowledge cache. Empirical experiments on four datasets demonstrate that FedCache achieves performance comparable to state-of-the-art PFL approaches, with more than two orders of magnitude improvement in communication efficiency.
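The abstract describes the cache mechanism only at a high level. Below is a minimal, illustrative Python sketch of that idea, not the authors' implementation: it assumes "knowledge" means per-sample logits, that each sample is indexed by a binary hash code produced by some fixed encoder, and that similarity between hashes is measured by Hamming distance. All names (KnowledgeCache, update, fetch) are hypothetical.

import numpy as np


class KnowledgeCache:
    """Server-side cache mapping a sample's hash code to its latest uploaded logits."""

    def __init__(self, num_neighbors=16):
        self.num_neighbors = num_neighbors  # how many similar samples to ensemble
        self.hashes = []   # binary hash codes (np.ndarray of 0/1), one per indexed sample
        self.logits = []   # logits vectors aligned with self.hashes

    def update(self, sample_hash, sample_logits):
        # Store or refresh the knowledge uploaded by a device for one sample.
        for i, cached_hash in enumerate(self.hashes):
            if np.array_equal(cached_hash, sample_hash):
                self.logits[i] = sample_logits
                return
        self.hashes.append(sample_hash)
        self.logits.append(sample_logits)

    def fetch(self, query_hash):
        # Return averaged logits from the cached samples whose hashes are closest
        # (in Hamming distance) to the query; this plays the role of the
        # "personalized knowledge" sent back for ensemble distillation on-device.
        if not self.hashes:
            return None
        distances = np.array([np.sum(h != query_hash) for h in self.hashes])
        nearest = np.argsort(distances)[: self.num_neighbors]
        return np.mean([self.logits[i] for i in nearest], axis=0)


# Hypothetical usage: devices upload (hash, logits) pairs, then fetch distillation
# targets for their own samples; the fetched logits act as ensemble teachers.
cache = KnowledgeCache(num_neighbors=4)
rng = np.random.default_rng(0)
for _ in range(20):
    cache.update(rng.integers(0, 2, size=48), rng.normal(size=10))
teacher_logits = cache.fetch(rng.integers(0, 2, size=48))  # distillation target

The point of the sketch is the interaction pattern implied by the abstract: only hashes and logits cross the network rather than model parameters, which is where the claimed communication savings come from.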
Funding: National Key Research and Development Program of China [2021YFB2900102]; National Natural Science Foundation of China [62072436]; Innovation Capability Support Program of Shaanxi [2023-CX-TD-08]; Shaanxi Qinchuangyuan scientists+engineers team [2023KXJ-040]; Innovation Funding of ICT, CAS [E261080]
WOS Research Areas: Computer Science; Telecommunications
Language: English
WOS Record Number: WOS:001306818600022
Publisher: IEEE COMPUTER SOC
Source URL: [http://119.78.100.204/handle/2XEOYT63/39601]
Collection: Institute of Computing Technology, Chinese Academy of Sciences - Journal Articles (English)
Corresponding Author: Wang, Yuwei
Author Affiliations:
1.Chinese Acad Sci, Inst Comp Technol, Beijing 100045, Peoples R China
2.Univ Chinese Acad Sci, Beijing 101408, Peoples R China
3.Zhongguancun Lab, Beijing 100086, Peoples R China
4.Tsinghua Univ, Dept Comp Sci & Technol, Beijing 100190, Peoples R China
5.Beijing Jiaotong Univ, Engn Res Ctr Network Management Technol High Speed, Sch Comp & Informat Technol, Minist Educ, Beijing 100082, Peoples R China
6.Univ Sci & Technol China, Sch Informat Sci & Technol, Hefei 101127, Peoples R China
Recommended Citation:
GB/T 7714
Wu, Zhiyuan, Sun, Sheng, Wang, Yuwei, et al. FedCache: A Knowledge Cache-Driven Federated Learning Architecture for Personalized Edge Intelligence[J]. IEEE TRANSACTIONS ON MOBILE COMPUTING, 2024, 23(10): 9368-9382.
APA: Wu, Zhiyuan., Sun, Sheng., Wang, Yuwei., Liu, Min., Xu, Ke., ... & Lu, Jinda. (2024). FedCache: A Knowledge Cache-Driven Federated Learning Architecture for Personalized Edge Intelligence. IEEE TRANSACTIONS ON MOBILE COMPUTING, 23(10), 9368-9382.
MLA: Wu, Zhiyuan, et al. "FedCache: A Knowledge Cache-Driven Federated Learning Architecture for Personalized Edge Intelligence". IEEE TRANSACTIONS ON MOBILE COMPUTING 23.10 (2024): 9368-9382.

Ingestion Method: OAI Harvesting

Source: Institute of Computing Technology

