Chinese Academy of Sciences Institutional Repositories Grid
Variational Distillation for Multi-View Learning

Document Type: Journal Article

Authors: Tian, Xudong (1); Zhang, Zhizhong (1); Wang, Cong (2); Zhang, Wensheng (3); Qu, Yanyun (4); Ma, Lizhuang (5,6); Wu, Zongze (7); Xie, Yuan (1,8); Tao, Dacheng (9,10)
Journal: IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
Publication Date: 2024-07-01
Volume: 46, Issue: 7, Pages: 4551-4566
Keywords: Mutual information; Task analysis; Representation learning; Predictive models; Optimization; Visualization; Pattern analysis; Multi-view learning; information bottleneck; mutual information; variational inference; knowledge distillation
ISSN: 0162-8828
DOI: 10.1109/TPAMI.2023.3343717
Corresponding Author: Xie, Yuan (yxie@cs.ecnu.edu.cn)
Abstract: Information Bottleneck (IB) provides an information-theoretic principle for multi-view learning by revealing the various components contained in each viewpoint. This highlights the necessity of capturing their distinct roles to achieve view-invariant and predictive representations, which remains under-explored due to the technical intractability of modeling and organizing innumerable mutual information (MI) terms. Recent studies show that sufficiency and consistency play such key roles in multi-view representation learning, and can be preserved via a variational distillation framework. But when it generalizes to arbitrary viewpoints, such a strategy fails, as the mutual information terms for consistency become complicated. This paper presents Multi-View Variational Distillation (MV²D), tackling the above limitations for generalized multi-view learning. Uniquely, MV²D can recognize useful consistent information and prioritize diverse components by their generalization ability. This guides an analytical and scalable solution to achieving both sufficiency and consistency. Additionally, by rigorously reformulating the IB objective, MV²D tackles the difficulties in MI optimization and fully realizes the theoretical advantages of the information bottleneck principle. We extensively evaluate our model on diverse tasks to verify its effectiveness, where the considerable gains provide key insights into achieving generalized multi-view representations under a rigorous information-theoretic principle.
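For context, the "IB objective" referenced in the abstract is the standard Information Bottleneck trade-off from the literature; the sketch below restates it in conventional notation as background only. The symbols (X for an input view, Y for the prediction target, Z for the learned representation, beta for the trade-off weight) are assumptions of this sketch rather than notation taken from the paper, and the exact multi-view reformulation used by MV²D is given in the paper itself.

% Standard Information Bottleneck objective (background sketch, notation assumed):
% compress the input X into a representation Z while keeping Z informative
% about the target Y; beta trades compression against predictiveness.
\begin{equation}
  \min_{p(z \mid x)} \; I(Z; X) \;-\; \beta \, I(Z; Y)
\end{equation}
% In the sense used by the abstract, "sufficiency" asks that Z retain all
% task-relevant information, i.e. I(Z; Y) = I(X; Y), and "consistency" asks
% that this hold for the information shared across different views.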
WOS Keywords: INFORMATION-BOTTLENECK
Funding Project: National Key Research and Development Program of China
WOS Research Areas: Computer Science; Engineering
Language: English
WOS Accession Number: WOS:001240147800008
Publisher: IEEE COMPUTER SOC
Funding Agency: National Key Research and Development Program of China
Source URL: http://ir.ia.ac.cn/handle/173211/59029
Collection: Research Center of Precision Sensing and Control_Artificial Intelligence and Machine Learning
Author Affiliations:
1.East China Normal Univ, Sch Comp Sci & Technol, Shanghai 200062, Peoples R China
2.Huawei Technol, Distributed & Parallel Software Lab, Labs 2012, Hangzhou 518129, Peoples R China
3.Chinese Acad Sci, Inst Automat, Beijing 100190, Peoples R China
4.Xiamen Univ, Sch Informat Sci & Technol, Xiamen 361005, Fujian, Peoples R China
5.East China Normal Univ, Sch Comp Sci & Software Engn, Shanghai 200050, Peoples R China
6.Shanghai Jiao Tong Univ, Sch Elect Informat & Elect Engn, Shanghai 200240, Peoples R China
7.Shenzhen Univ, Coll Mech & Control Engn, Shenzhen 518060, Peoples R China
8.East China Normal Univ, Chongqing Inst, Shanghai 200062, Peoples R China
9.JD Explore Acad, Beijing, Peoples R China
10.Univ Sydney, Camperdown, NSW 2050, Australia
Recommended Citation:
GB/T 7714
Tian, Xudong, Zhang, Zhizhong, Wang, Cong, et al. Variational Distillation for Multi-View Learning[J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46(7): 4551-4566.
APA Tian, Xudong, Zhang, Zhizhong, Wang, Cong, Zhang, Wensheng, Qu, Yanyun, ... & Tao, Dacheng. (2024). Variational Distillation for Multi-View Learning. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 46(7), 4551-4566.
MLA Tian, Xudong, et al. "Variational Distillation for Multi-View Learning". IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 46.7 (2024): 4551-4566.

Deposit Method: OAI Harvesting

Source: Institute of Automation

