Chinese Academy of Sciences Institutional Repositories Grid
End-to-end view synthesis for light field imaging with pseudo 4DCNN

Document Type: Conference Paper

Authors: Wang YL (王云龙) 2; Liu F (刘菲) 2; Wang ZL (王子磊) 1; Hou GQ (侯广琦) 2; Sun ZN (孙哲南) 2; Tan TN (谭铁牛) 2
Publication Date: 2018-10-09
Conference Dates: 2018.09.08 - 2018.09.14
Conference Location: Munich, Germany
Pages: 333-348
English Abstract:

Limited angular resolution has become the main bottleneck preventing microlens-based plenoptic cameras from reaching practical vision applications. Existing view synthesis methods mainly break the task into two steps, i.e. depth estimation and view warping, which are usually inefficient and produce artifacts around depth ambiguities. In this paper, an end-to-end deep learning framework is proposed to solve these problems by exploring a Pseudo 4DCNN. Specifically, 2D strided convolutions operated on stacked EPIs and detail-restoration 3D CNNs connected by angular conversion are assembled to build the Pseudo 4DCNN. The key advantage is to efficiently synthesize dense 4D light fields from a sparse set of input views. The learning framework is formulated as an entirely trainable problem, and all the weights can be recursively updated with standard backpropagation. The proposed framework is compared with state-of-the-art approaches on both real and synthetic light field databases, achieving significant improvements in both image quality (+2 dB higher) and computational efficiency (over 10× faster). Furthermore, the proposed framework shows good performance in real-world applications such as biometrics and depth estimation.
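
The architecture summarized in the abstract can be sketched in code. The following PyTorch snippet is a minimal illustration written for this record, not the authors' released implementation: the class name Pseudo4DCNNSketch, the layer widths, kernel sizes, and the channel-expansion upsampling (standing in for the paper's strided EPI convolutions) are assumptions chosen only to convey the pseudo-4D idea of 2D convolutions on stacked EPIs, an angular-conversion transpose, and a residual 3D CNN for detail restoration.

# A minimal sketch (assumed, not the authors' code) of the pseudo-4D idea:
# the 4D light field L(u, v, s, t) is never convolved with true 4D kernels.
import torch
import torch.nn as nn


class Pseudo4DCNNSketch(nn.Module):
    """Hypothetical sketch: synthesize dense angular views from sparse ones."""

    def __init__(self, ang_in=3, ang_out=7, feat=32):
        super().__init__()
        # 2D network applied to EPIs stacked along one angular axis; it maps
        # ang_in input views to ang_out synthesized views (an illustrative
        # stand-in for the paper's 2D strided EPI convolutions).
        self.epi2d = nn.Sequential(
            nn.Conv2d(ang_in, feat, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat, ang_out, kernel_size=3, padding=1),
        )
        # 3D CNN over (angle, height, width) for detail restoration.
        self.restore3d = nn.Sequential(
            nn.Conv3d(1, feat, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(feat, 1, kernel_size=3, padding=1),
        )

    def _densify_second_axis(self, lf):
        # lf: (B, U, V, H, W) single-channel light field; densify the V axis.
        b, u, v, h, w = lf.shape
        epi = lf.reshape(b * u, v, h, w)        # stack EPIs over the U rows
        epi = self.epi2d(epi)                   # (B*U, V_out, H, W)
        vol = epi.reshape(b * u, 1, -1, h, w)   # add channel dim for 3D conv
        vol = vol + self.restore3d(vol)         # residual detail restoration
        return vol.reshape(b, u, -1, h, w)

    def forward(self, lf):
        lf = self._densify_second_axis(lf)      # densify one angular axis
        lf = lf.permute(0, 2, 1, 3, 4)          # angular conversion: swap U and V
        lf = self._densify_second_axis(lf)      # densify the other angular axis
        return lf.permute(0, 2, 1, 3, 4)        # back to (B, U_out, V_out, H, W)


if __name__ == "__main__":
    sparse = torch.rand(1, 3, 3, 64, 64)        # 3x3 grid of 64x64 input views
    dense = Pseudo4DCNNSketch()(sparse)
    print(dense.shape)                          # torch.Size([1, 7, 7, 64, 64])

Running the example maps a sparse 3x3 grid of 64x64 views to a dense 7x7 grid, mirroring the sparse-to-dense synthesis described above; the actual network's depth, kernel sizes, and angular upsampling factors may differ.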

Proceedings: Proceedings of the European Conference on Computer Vision (ECCV)
Proceedings Publisher: Springer
Language: English
Source URL: http://ir.ia.ac.cn/handle/173211/52381
Collection: Institute of Automation, Center for Research on Intelligent Perception and Computing; State Key Laboratory of Multimodal Artificial Intelligence Systems
Corresponding Author: Tan TN (谭铁牛)
Author Affiliations: 1. University of Science and Technology of China; 2. Institute of Automation, Chinese Academy of Sciences
Recommended Citation (GB/T 7714):
Wang YL, Liu F, Wang ZL, et al. End-to-end view synthesis for light field imaging with pseudo 4DCNN[C]. In: Proceedings of the European Conference on Computer Vision (ECCV). Munich, Germany, 2018.09.08-2018.09.14.

Deposit Method: OAI harvesting

Source: Institute of Automation

