Chinese Academy of Sciences Institutional Repositories Grid
Dual-stream spatio-temporal decoupling network for video deblurring

Document type: Journal article

Authors: Ning, Taigong [1]; Li, Weihong [1]; Li, Zhenghao [2]; Zhang, Yanfang [1]
Journal: APPLIED SOFT COMPUTING
Publication date: 2022-02-01
Volume: 116; Pages: 16
ISSN: 1568-4946
Keywords: Video deblurring; Decoupling learning; Dual-stream network; Motion compensation; 3D CNNs
DOI: 10.1016/j.asoc.2021.108342
Corresponding author: Li, Weihong (weihongli@cqu.edu.cn)
Abstract: Obtaining spatio-temporal information is crucial for deep-learning-based video deblurring. Existing methods usually learn the spatio-temporal information of blurred videos jointly through single-stream networks, which inevitably limits both the spatio-temporal information learning and the deblurring performance of the networks. To address this problem, we propose a dual-stream spatio-temporal decoupling network (STDN), which learns the spatio-temporal information of blurred videos more flexibly and efficiently through a decoupled temporal stream and spatial stream. Firstly, in the temporal stream of STDN, we propose a video deblurring pipeline of motion compensation followed by 3D CNNs, which addresses the drawback that the receptive field of 3D CNNs cannot effectively cover the same, but misaligned, content across different frames. The temporal stream can thus aggregate the temporal information of frame sequences and handle inter-frame misalignments more effectively. Specifically, we design a novel deformable convolution compensation module (DCCM) to perform the motion compensation of this pipeline more accurately. We then develop a 3DConv module, optimized by a temporal, spatial, and channel decoupling attention block named CTS, to realize the 3D CNNs of this pipeline. Secondly, we design a spatial stream that stacks two types of wide-activation residual modules to learn additional spatial features of the central frame, supplementing the temporal stream. Finally, extensive experiments on the baseline datasets demonstrate that the proposed STDN outperforms the latest methods. Remarkably, the proposed temporal stream alone already achieves deblurring performance competitive with existing methods. (C) 2021 Elsevier B.V. All rights reserved.
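To make the decoupling idea in the abstract concrete, here is a minimal PyTorch sketch of a dual-stream layout: a temporal stream that aggregates a frame stack with 3D convolutions, and a spatial stream that applies wide-activation 2D residual blocks to the central frame, with the two fused at the end. All module names, channel sizes, and the residual-output design below are illustrative assumptions, not the authors' published implementation (which additionally includes the DCCM motion compensation and CTS attention).

```python
# Hypothetical sketch of a dual-stream spatio-temporal network for video
# deblurring. Illustrates the decoupling concept only; not the paper's STDN.
import torch
import torch.nn as nn


class WideActResBlock(nn.Module):
    """Wide-activation residual block: expand channels before the activation."""
    def __init__(self, ch: int, expand: int = 4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch * expand, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch * expand, ch, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)


class DualStreamDeblur(nn.Module):
    def __init__(self, ch: int = 32, n_frames: int = 5):
        super().__init__()
        # Temporal stream: 3D convolutions aggregate information across frames;
        # the second conv collapses the time axis to a single step.
        self.temporal = nn.Sequential(
            nn.Conv3d(3, ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(ch, ch, kernel_size=(n_frames, 3, 3), padding=(0, 1, 1)),
        )
        # Spatial stream: 2D residual blocks on the central frame only.
        self.spatial = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1),
            WideActResBlock(ch),
            WideActResBlock(ch),
        )
        self.fuse = nn.Conv2d(2 * ch, 3, 3, padding=1)

    def forward(self, frames):
        # frames: (B, T, 3, H, W); the goal is to deblur the central frame.
        b, t, c, h, w = frames.shape
        temp = self.temporal(frames.permute(0, 2, 1, 3, 4)).squeeze(2)
        spat = self.spatial(frames[:, t // 2])
        residual = self.fuse(torch.cat([temp, spat], dim=1))
        return frames[:, t // 2] + residual  # predict a sharpening residual


if __name__ == "__main__":
    net = DualStreamDeblur()
    clip = torch.randn(1, 5, 3, 64, 64)  # a batch of one 5-frame clip
    print(net(clip).shape)  # torch.Size([1, 3, 64, 64])
```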
WOS research area: Computer Science
Language: English
Publisher: ELSEVIER
WOS accession number: WOS:000768204300007
Source URL: http://119.78.100.138/handle/2HOD01W0/15481
Collection: Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences
Author affiliations:
1. Chongqing Univ, Coll Optoelect Engn, Key Lab Optoelect Technol & Syst, Educ Minist, Chongqing 400044, Peoples R China
2. Chinese Acad Sci, Chongqing Inst Green & Intelligent Technol, Chongqing 400714, Peoples R China
Recommended citation:
GB/T 7714: Ning, Taigong, Li, Weihong, Li, Zhenghao, et al. Dual-stream spatio-temporal decoupling network for video deblurring[J]. APPLIED SOFT COMPUTING, 2022, 116: 16.
APA: Ning, Taigong, Li, Weihong, Li, Zhenghao, & Zhang, Yanfang. (2022). Dual-stream spatio-temporal decoupling network for video deblurring. APPLIED SOFT COMPUTING, 116, 16.
MLA: Ning, Taigong, et al. "Dual-stream spatio-temporal decoupling network for video deblurring". APPLIED SOFT COMPUTING 116 (2022): 16.

Ingest method: OAI harvesting

Source: Chongqing Institute of Green and Intelligent Technology

