Chinese Academy of Sciences Institutional Repositories Grid
Self-supervised depth-guided few-shot neural rendering

Document Type: Conference Paper

Authors: Yang J (杨健)1,2; Zhang A (张骜)1,2; Fang QH (方启航)1,2; Xiong G (熊刚)2; Shen Z (沈震)2; Wu HY (吴怀宇)2
Publication Date: 2024-03-05
Conference Date: 2024-03-15
Conference Venue: Kunming, Yunnan, China
Abstract (English)

Novel view synthesis is an important topic in metaverse applications. Existing methods require a tremendous number of training
views to guarantee synthesis quality, which is a stringent condition in practice. To address this problem, we propose a
depth-guided, self-supervised method that achieves novel view synthesis from challenging sparse training views. To
achieve this goal, we propose a depth information digging strategy and an uncertainty-based depth supervision method.
We conduct a series of experiments on the DTU dataset to demonstrate the rationality of our design, and the experimental
results show that our method achieves non-trivial improvements over the baselines.
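The abstract does not give implementation details. As an illustration only, the sketch below shows one common form of uncertainty-weighted (heteroscedastic) depth supervision that a depth-guided few-shot rendering method could combine with the usual photometric rendering loss; the function and argument names (uncertainty_depth_loss, guide_depth, log_var) are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of uncertainty-weighted depth supervision for few-shot
# neural rendering. Formulation and names are illustrative assumptions, not
# the authors' exact method.
import torch


def uncertainty_depth_loss(pred_depth: torch.Tensor,
                           guide_depth: torch.Tensor,
                           log_var: torch.Tensor) -> torch.Tensor:
    """Penalize deviation from guidance depth, down-weighted by uncertainty.

    pred_depth : (N,) depths rendered by the radiance field along N rays
    guide_depth: (N,) self-supervised guidance depths (e.g. multi-view cues)
    log_var    : (N,) predicted per-ray log-variance (uncertainty)
    """
    # Residual between rendered depth and guidance depth.
    residual = pred_depth - guide_depth
    # Heteroscedastic loss: high uncertainty reduces the penalty on the
    # residual, while the log-variance term keeps the model from declaring
    # every ray uncertain.
    loss = (residual ** 2) * torch.exp(-log_var) + log_var
    return loss.mean()


if __name__ == "__main__":
    # Toy usage: in practice this term would be added to the photometric loss.
    n_rays = 1024
    pred = torch.rand(n_rays, requires_grad=True)
    guide = torch.rand(n_rays)
    logv = torch.zeros(n_rays, requires_grad=True)
    print(uncertainty_depth_loss(pred, guide, logv).item())
```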

Source URL: [http://ir.ia.ac.cn/handle/173211/57590]
Collection: Institute of Automation, State Key Laboratory of Management and Control for Complex Systems, Advanced Control and Automation Team
Author Affiliations:
1. School of Artificial Intelligence, University of Chinese Academy of Sciences
2. State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences
Recommended Citation
GB/T 7714
Yang J, Zhang A, Fang QH, et al. Self-supervised depth-guided few-shot neural rendering[C]. In: . Kunming, Yunnan, China. 2024-03-15.

Deposit Method: OAI harvesting

Source: Institute of Automation

