Chinese Academy of Sciences Institutional Repositories Grid
Learned Image Compression Using Cross-Component Attention Mechanism

Document Type: Journal Article

Authors: Duan, Wenhong2,3; Chang, Zheng4; Jia, Chuanmin5; Wang, Shanshe1; Ma, Siwei1; Song, Li6; Gao, Wen1
Journal: IEEE TRANSACTIONS ON IMAGE PROCESSING
Publication Date: 2023
Volume: 32, Pages: 5478-5493
Keywords: Image coding; Context modeling; Transforms; Decoding; Standards; Image reconstruction; Transform coding; Image compression; cross-component; information-guided unit; attention mechanism; information-preserving
ISSN: 1057-7149
DOI: 10.1109/TIP.2023.3319275
Abstract: Learned image compression methods have achieved satisfactory results in recent years. However, existing methods are typically designed for the RGB format, which makes them ill-suited to the YUV420 format because of the differences between the two formats. In this paper, we propose an information-guided compression framework using a cross-component attention mechanism, which achieves efficient image compression in the YUV420 format. Specifically, we design a dual-branch advanced information-preserving module (AIPM) based on the information-guided unit (IGU) and an attention mechanism. On the one hand, the dual-branch architecture prevents changes to the original data distribution and avoids information disturbance between different components, while the feature attention block (FAB) preserves the important information. On the other hand, the IGU efficiently exploits the correlations between the Y and UV components, further preserving the UV information under the guidance of Y. Furthermore, we design an adaptive cross-channel enhancement module (ACEM) that reconstructs details by exploiting the relations between different components, using the reconstructed Y as textural and structural guidance for the UV components. Extensive experiments show that the proposed framework achieves state-of-the-art performance in image compression for the YUV420 format. More importantly, the proposed framework outperforms Versatile Video Coding (VVC) with an average BD-rate reduction of 8.37% on common test conditions (CTC) sequences. In addition, we propose a quantization scheme for the context model that requires no retraining, overcomes the cross-platform decoding errors caused by floating-point operations in the context model, and provides a reference approach for deploying neural codecs on different platforms.
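The abstract's central idea, luma features guiding the preservation of chroma information through an attention map, can be illustrated with a small sketch. The module below is not the paper's AIPM/IGU implementation; the channel widths, the sigmoid gating, and the residual connection are assumptions chosen only to show how a Y-derived attention map can modulate half-resolution UV features in YUV420.

```python
# A minimal sketch (not the authors' implementation) of a cross-component,
# information-guided unit: luma (Y) features produce a spatial attention map
# that modulates chroma (UV) features, so structure in Y guides the
# preservation of UV information.
import torch
import torch.nn as nn
import torch.nn.functional as F

class InformationGuidedUnit(nn.Module):
    def __init__(self, y_channels=64, uv_channels=64):
        super().__init__()
        # Project Y features into a single-channel guidance map in [0, 1].
        self.guide = nn.Sequential(
            nn.Conv2d(y_channels, y_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(y_channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )
        # Refine the guided UV features.
        self.refine = nn.Conv2d(uv_channels, uv_channels, kernel_size=3, padding=1)

    def forward(self, y_feat, uv_feat):
        # In YUV420 the chroma planes have half the luma resolution, so the
        # guidance map is resampled to match the UV feature size.
        attn = self.guide(y_feat)
        attn = F.interpolate(attn, size=uv_feat.shape[-2:], mode="bilinear",
                             align_corners=False)
        # Residual gating leaves the original UV features intact when the
        # guidance is uninformative.
        return uv_feat + self.refine(uv_feat * attn)

# Usage with YUV420-shaped toy features: Y features at 64x64, UV at 32x32.
y_feat = torch.randn(1, 64, 64, 64)
uv_feat = torch.randn(1, 64, 32, 32)
out = InformationGuidedUnit()(y_feat, uv_feat)
print(out.shape)  # torch.Size([1, 64, 32, 32])
```

The cross-platform decoding issue mentioned at the end of the abstract arises because the context model's floating-point outputs can round differently on different hardware, desynchronizing encoder and decoder. The paper's quantization scheme is not reproduced here; the helper below only illustrates the general principle that entropy coding becomes platform-independent once the model's probabilities are expressed as fixed-point integer CDFs.

```python
# Generic illustration (not the paper's scheme): converting a float PMF to a
# strictly increasing integer CDF removes floating-point rounding from the
# interface between the context model and the arithmetic coder.
import numpy as np

def to_integer_cdf(pmf, precision=16):
    """Convert a float PMF to a strictly increasing integer CDF in [0, 2**precision]."""
    total = 1 << precision
    pmf = np.asarray(pmf, dtype=np.float64)
    pmf = pmf / pmf.sum()
    # Give every symbol at least one count so it remains decodable.
    counts = np.maximum(np.round(pmf * total).astype(np.int64), 1)
    # Absorb the rounding surplus/deficit into the most probable symbol.
    counts[np.argmax(counts)] += total - counts.sum()
    cdf = np.concatenate(([0], np.cumsum(counts)))
    assert cdf[-1] == total
    return cdf

print(to_integer_cdf([0.1, 0.7, 0.2], precision=8))  # e.g. [  0  26 205 256]
```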
Funding: National Natural Science Foundation of China [62025101]; National Natural Science Foundation of China [62101007]; Fundamental Research Funds for the Central Universities; Young Elite Scientist Sponsorship Program by the Beijing Association for Science and Technology (BAST) [BYSS2022019]; Wen-Tsun Wu Honorary Doctoral Scholarship; AI Institute, Shanghai Jiao Tong University
WOS Research Areas: Computer Science; Engineering
Language: English
WOS Accession Number: WOS:001082264400006
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Source URL: http://119.78.100.204/handle/2XEOYT63/21119
Collection: Institute of Computing Technology, Chinese Academy of Sciences - Journal Articles (English)
Corresponding Authors: Jia, Chuanmin; Ma, Siwei
Affiliations:
1.Peking Univ, Natl Engn Res Ctr Visual Technol, Sch Comp Sci, Beijing 100871, Peoples R China
2.Shanghai Jiao Tong Univ, Dept Comp Sci & Engn, Shanghai 200240, Peoples R China
3.Shanghai Jiao Tong Univ, AI Inst, Shanghai 200240, Peoples R China
4.Chinese Acad Sci, Inst Comp Technol, Beijing 100190, Peoples R China
5.Peking Univ, Wangxuan Inst Comp Technol WICT, Beijing 100871, Peoples R China
6.Shanghai Jiao Tong Univ, Inst Image Commun & Network Engn, AI Inst, Shanghai 200240, Peoples R China
Recommended Citation:
GB/T 7714: Duan, Wenhong, Chang, Zheng, Jia, Chuanmin, et al. Learned Image Compression Using Cross-Component Attention Mechanism[J]. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2023, 32: 5478-5493.
APA: Duan, Wenhong, Chang, Zheng, Jia, Chuanmin, Wang, Shanshe, Ma, Siwei, ... & Gao, Wen. (2023). Learned Image Compression Using Cross-Component Attention Mechanism. IEEE TRANSACTIONS ON IMAGE PROCESSING, 32, 5478-5493.
MLA: Duan, Wenhong, et al. "Learned Image Compression Using Cross-Component Attention Mechanism." IEEE TRANSACTIONS ON IMAGE PROCESSING 32 (2023): 5478-5493.

Deposit Method: OAI harvesting

Source: Institute of Computing Technology

