Self-Supervised Representation Learning from Arbitrary Scenarios
Document type: Conference paper
Author | Li, Zhaowen |
Publication date | 2024 |
Conference date | 2024 |
Conference location | Seattle, USA |
Abstract | Current self-supervised methods can primarily be categorized into contrastive learning and masked image modeling. Extensive studies have demonstrated that combining these two approaches can achieve state-of-the-art performance. However, these methods essentially reinforce the global consistency of contrastive learning without accounting for the conflicts between the two approaches, which hinders their generalizability to arbitrary scenarios. In this paper, we theoretically prove that MAE serves as patch-level contrastive learning, where each patch within an image is considered a distinct category. This presents a significant conflict with global-level contrastive learning, which treats all patches in an image as an identical category. To address this conflict, this work abandons the non-generalizable global-level constraints and proposes explicit patch-level contrastive learning as a solution. Specifically, this work employs the encoder of MAE to generate dual-branch features, which then perform patch-level learning through a decoder. In contrast to global-level data augmentation in contrastive learning, our approach leverages patch-level feature augmentation to mitigate interference from global-level learning. Consequently, our approach can learn heterogeneous representations from a single image while avoiding the conflicts encountered by previous methods. Extensive experiments affirm the potential of our method for learning from arbitrary scenarios. |
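The abstract's core idea (each patch treated as its own category, with dual-branch features contrasted at the patch level) can be sketched as a patch-wise InfoNCE loss. This is a minimal illustration, not the paper's actual implementation: the function names, the NumPy formulation, and the drop-and-noise feature augmentation are all assumptions standing in for whatever the authors use.

```python
import numpy as np

def patch_level_infonce(feat_a, feat_b, temperature=0.1):
    """Patch-level contrastive loss: patch i in branch A is the positive of
    patch i in branch B; every other patch is a negative, so each patch acts
    as its own category (the view of MAE the abstract argues for)."""
    a = feat_a / np.linalg.norm(feat_a, axis=1, keepdims=True)
    b = feat_b / np.linalg.norm(feat_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature                 # (P, P) patch-to-patch similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))             # cross-entropy with identity targets

def patch_feature_augment(feat, rng, drop_prob=0.25, noise_std=0.05):
    """Hypothetical patch-level *feature* augmentation (as opposed to
    global-level image augmentation): randomly suppress some patch features
    and perturb the rest with Gaussian noise."""
    keep = (rng.random(feat.shape[0]) > drop_prob)[:, None]
    return feat * keep + rng.normal(0.0, noise_std, feat.shape)

rng = np.random.default_rng(0)
patches = rng.normal(size=(196, 128))   # e.g. a 14x14 grid of 128-dim patch features
view_a = patch_feature_augment(patches, rng)   # dual-branch features from one image
view_b = patch_feature_augment(patches, rng)
loss = patch_level_infonce(view_a, view_b)
print(float(loss))
```

Because both branches start from the same image's patch features, minimizing this loss pulls corresponding patches together while pushing different patches apart, without imposing the global "all patches are one category" constraint the paper identifies as the conflict.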
Source URL | http://ir.ia.ac.cn/handle/173211/56720 |
Research unit | Zidong Taichu Large Model Research Center, Large Model Computing |
Affiliations | 1. Institute of Automation, Chinese Academy of Sciences 2. University of Chinese Academy of Sciences |
Recommended citation (GB/T 7714) | Li, Zhaowen, Zhu, Yousong, Chen, Zhiyang, et al. Self-Supervised Representation Learning from Arbitrary Scenarios[C]. In: . Seattle, USA. 2024. |
Ingest method: OAI harvesting
Source: Institute of Automation
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.