Pushing and Bounding Loss for Training Deep Super-Resolution Network
Document Type: Conference Paper
Authors | Shang Li 1,2
Publication Date | 2020-10 |
Conference Date | October 30-31, 2020 |
Conference Venue | Beijing, China |
Country | China |
Abstract (English) | As deep neural networks (DNNs) are hard to train due to vanishing gradients, intermediate supervision is typically used to help earlier layers be better optimized. Such deeply supervised methods have proved beneficial for tasks such as classification and pose estimation, but they are rarely used for image super-resolution (SR). This is because intermediate supervision needs a set of intermediate labels, which are hard to define in SR. Experiments show that reusing the same labels across the whole network, as is done for classification, causes inconsistency and harms the final performance. We argue that 'mediately accurate' labels, i.e., relatively soft labels, are more suitable for intermediate supervision of SR networks, but labels in SR are either entirely high resolution or entirely low resolution. To address this problem, we propose what we call the pushing and bounding loss, which forces the network to learn better features as it goes deeper. In this way, we do not need to explicitly provide any 'mediately accurate' labels, yet all internal layers can still be directly supervised. Extensive experiments show that deep SR networks trained in this scheme receive a stable gain without adding any extra modules. (An illustrative code sketch of this training idea follows the record below.) |
Source Institutions | Communication University of China; Institute of Automation, Chinese Academy of Sciences |
Property Rights Ranking | 1 |
Source URL | http://ir.ia.ac.cn/handle/173211/47523 |
Collection | Research Center for Digital Content Technology and Services_New Media Services and Management Technology |
Corresponding Author | Shang Li |
Author Affiliations | 1. University of Chinese Academy of Sciences; 2. Institute of Automation, Chinese Academy of Sciences |
Recommended Citation (GB/T 7714) | Shang Li, Guixuan Zhang, Jie Liu, et al. Pushing and Bounding Loss for Training Deep Super-Resolution Network[C]. Beijing, China, October 30-31, 2020. |
Deposit Method: OAI Harvesting
Source: Institute of Automation
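The abstract describes the pushing and bounding loss only at a high level and does not give its formulation. Below is a minimal PyTorch-style sketch of one way such deep supervision could look, under the assumption that "pushing" penalizes a deeper block whose intermediate reconstruction error is no smaller than the previous block's, while the final output is supervised directly against the high-resolution target. The network `ToySRNet`, the helper `pushing_bounding_loss`, the margin, and the 0.1 weight are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the paper's released code): deeply supervised SR
# training where each deeper intermediate output is "pushed" to beat the
# previous one via a hinge-style penalty on their L1 errors.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToySRNet(nn.Module):
    """Toy residual SR body that exposes intermediate reconstructions."""

    def __init__(self, n_blocks=4, channels=32, scale=2):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.blocks = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1),
            )
            for _ in range(n_blocks)
        ])
        # Shared tail decodes any block's features into an upscaled RGB image,
        # so every block yields an intermediate SR estimate.
        self.tail = nn.Sequential(
            nn.Conv2d(channels, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, x):
        feat = self.head(x)
        outs = []
        for block in self.blocks:
            feat = feat + block(feat)      # residual block
            outs.append(self.tail(feat))   # intermediate reconstruction
        return outs                        # outs[-1] is the final output


def pushing_bounding_loss(outs, hr, margin=0.0, push_weight=0.1):
    """Assumed form: supervise the final output with L1, and add a hinge
    penalty whenever a deeper block's error is not smaller than the
    previous block's (forcing features to improve with depth)."""
    errs = [F.l1_loss(o, hr) for o in outs]
    final_loss = errs[-1]
    push = sum(F.relu(errs[i + 1] - errs[i] + margin)
               for i in range(len(errs) - 1))
    return final_loss + push_weight * push


if __name__ == "__main__":
    net = ToySRNet()
    lr_batch = torch.rand(2, 3, 24, 24)    # low-resolution inputs
    hr_batch = torch.rand(2, 3, 48, 48)    # matching high-resolution targets
    loss = pushing_bounding_loss(net(lr_batch), hr_batch)
    loss.backward()
    print(float(loss))
```

In this reading, no explicit "mediately accurate" labels are needed: each internal layer is supervised only relative to its neighbor and to the final high-resolution target, which matches the abstract's claim that all internal layers can be directly supervised without extra modules.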