An efficient multi-scale transformer for satellite image dehazing
Document Type: Journal Article
Authors | Yang, Lei4,5; Cao, Jianzhong; Chen, Weining; Wang, Hao; He, Lang |
Journal | Expert Systems |
Publication Date | 2024 |
ISSN | 0266-4720; 1468-0394 |
DOI | 10.1111/exsy.13575 |
Institutional Ranking | 1 |
Abstract | Given the impressive achievement of convolutional neural networks (CNNs) in grasping image priors from extensive datasets, they have been widely utilized for tasks related to image restoration. Recently, there has been significant progress in another category of neural architectures, Transformers. These models have demonstrated remarkable performance in natural language tasks and higher-level vision applications. Despite their ability to address some of CNNs' limitations, such as restricted receptive fields and adaptability issues, Transformer models often face difficulties when processing images with a high level of detail, because the computational complexity grows quadratically with the image's spatial resolution. As a result, their application to most high-resolution image restoration tasks becomes impractical. In our research, we introduce a novel Transformer model, named DehFormer, by implementing specific design modifications in its fundamental components, such as the multi-head attention and the feed-forward network. Specifically, the proposed architecture consists of three modules: (a) the multi-scale feature aggregation network (MSFAN), (b) the gated-Dconv feed-forward network (GFFN), and (c) the multi-Dconv head transposed attention (MDHTA). For the MDHTA module, our objective is to revisit the mechanics of scaled dot-product attention through per-element product operations, thereby bypassing the need for matrix multiplications and operating directly in the frequency domain for enhanced efficiency. The GFFN module allows only relevant and valuable information to advance through the network hierarchy, thereby enhancing the efficiency of information flow within the model. Extensive experiments are conducted on the SateHaze1k, RS-Haze, and RSID datasets, resulting in performance that significantly exceeds that of existing methods. © 2024 John Wiley & Sons Ltd. |
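The abstract's two efficiency ideas can be illustrated numerically: replacing the O((HW)²) QKᵀ matrix multiply of scaled dot-product attention with a per-element product in the frequency domain, and gating the feed-forward path with a depthwise-convolved branch so that only selected activations pass through. The NumPy sketch below is purely illustrative and assumes simplified forms of these two operations; all function and variable names are my own, and it omits the multi-head splitting, normalization, and learned parameters of the actual DehFormer.

```python
import numpy as np

def freq_elementwise_attention(x, w_q, w_v):
    """Frequency-domain element-wise 'attention' sketch (MDHTA idea).

    Instead of forming the (HW x HW) attention matrix, project the input
    twice, take 2-D FFTs over the spatial axes, and mix the projections
    with a per-element product, which costs O(HW log HW) overall.
    """
    q = x @ w_q                              # query-like projection, (H, W, C)
    v = x @ w_v                              # value-like projection, (H, W, C)
    q_f = np.fft.fft2(q, axes=(0, 1))        # to frequency domain
    v_f = np.fft.fft2(v, axes=(0, 1))
    y_f = q_f * v_f                          # element-wise product, no QK^T matmul
    return np.fft.ifft2(y_f, axes=(0, 1)).real

def depthwise_conv3x3(x, k):
    """Naive 3x3 depthwise convolution: each channel gets its own kernel."""
    h, w, _ = x.shape
    xp = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            out += xp[i:i + h, j:j + w, :] * k[i, j]  # k[i, j] is (C,)
    return out

def gated_dconv_ffn(x, w1, w2, k_gate):
    """Gated-Dconv feed-forward sketch (GFFN idea).

    One branch carries the features; the other is depthwise-convolved and
    passed through GELU to form a gate, so only 'useful' activations flow on.
    """
    a = x @ w1                               # content branch
    g = depthwise_conv3x3(x @ w2, k_gate)    # gating branch with local context
    gelu = 0.5 * g * (1.0 + np.tanh(0.7978845608 * (g + 0.044715 * g**3)))
    return a * gelu                          # element-wise gating

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 4))           # toy (H, W, C) feature map
w_q, w_v = rng.standard_normal((2, 4, 4)) * 0.1
w1, w2 = rng.standard_normal((2, 4, 4)) * 0.1
k_gate = rng.standard_normal((3, 3, 4)) * 0.1

y = freq_elementwise_attention(x, w_q, w_v)  # (8, 8, 4)
z = gated_dconv_ffn(x, w1, w2, k_gate)       # (8, 8, 4)
```

Note that the frequency-domain element-wise product corresponds to a circular convolution of the two projected feature maps in the spatial domain; this is one plausible reading of the abstract's "per-element product ... in the frequency domain", not a reproduction of the authors' implementation.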
Language | English |
Publisher | John Wiley and Sons Inc |
Source URL | [http://ir.opt.ac.cn/handle/181661/97348] |
Collection | Xi'an Institute of Optics and Precision Mechanics_Dynamic Optical Imaging Laboratory |
Corresponding Authors | Yang, Lei; He, Lang |
Author Affiliations | 1. Xi'an Key Laboratory of Big Data and Intelligent Computing, Xi'an University of Posts and Telecommunications, Xi'an, China; 2. Shaanxi Key Laboratory of Network Data Analysis and Intelligent Processing, Xi'an University of Posts and Telecommunications, Xi'an, China; 3. School of Computer Science and Technology, Xi'an University of Posts and Telecommunications, Xi'an, China; 4. School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing, China; 5. Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an, China |
Recommended Citation (GB/T 7714) | Yang, Lei, Cao, Jianzhong, Chen, Weining, et al. An efficient multi-scale transformer for satellite image dehazing[J]. Expert Systems, 2024. |
APA | Yang, Lei, Cao, Jianzhong, Chen, Weining, Wang, Hao, & He, Lang. (2024). An efficient multi-scale transformer for satellite image dehazing. Expert Systems. |
MLA | Yang, Lei, et al. "An efficient multi-scale transformer for satellite image dehazing". Expert Systems (2024). |
Deposit Method: OAI harvesting
Source: Xi'an Institute of Optics and Precision Mechanics
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.