Research on 3D Visualization of Drone Scenes Based on Neural Radiance Fields
Document Type: Journal Article
Authors | Jin, Pengfei 1,2; Yu, Zhuoyuan 1,2 |
Journal | ELECTRONICS |
Publication Date | 2024-05-01 |
Volume | 13 |
Issue | 9 |
Pages | 1682 |
Keywords | neural radiance fields; neural networks; implicit representation; drone-captured scene; feature grids |
DOI | 10.3390/electronics13091682 |
Affiliation Rank | 1 |
Document Subtype | Article |
Abstract | Neural Radiance Fields (NeRFs), an innovative method that uses neural networks to represent 3D scenes implicitly, can synthesize images from arbitrary viewpoints and have been applied successfully to the visualization of objects and room-level scenes (<50 m²). However, due to the limited capacity of neural networks, renderings of drone-captured scenes (>10,000 m²) often appear blurry and lack detail. Merely increasing the model's capacity or the number of sample points significantly raises training costs. Existing space contraction methods, designed for forward-facing trajectories or 360° object-centric trajectories, are not suited to the distinctive trajectories of drone footage. Furthermore, anomalies and cloud-fog artifacts, arising from complex lighting conditions and sparse data acquisition, can significantly degrade rendering quality. To address these challenges, we propose a framework designed specifically for drone-captured scenes. Within this framework, while using a feature grid and a multi-layer perceptron (MLP) to jointly represent 3D scenes, we introduce a Space Boundary Compression method and a Ground-Optimized Sampling strategy to streamline the spatial structure and improve sampling performance. Moreover, we propose an anti-aliasing neural rendering model based on Cluster Sampling and Integrated Hash Encoding to sharpen distant details, and we incorporate an L1-norm penalty for outliers as well as an entropy regularization loss to reduce fluffy artifacts. To verify the effectiveness of the algorithm, experiments were conducted on four drone-captured scenes. The results show that, with a single GPU and less than two hours of training time, photorealistic visualization can be achieved, significantly improving upon existing NeRF approaches. |
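The abstract names two loss terms, an L1-norm penalty on outliers and an entropy regularization against fluffy (floater) artifacts, without giving their formulas. Below is a minimal sketch, in PyTorch, of the forms these terms commonly take in NeRF training: an L1 penalty on predicted densities and a ray-wise entropy over the volume-rendering compositing weights. The function name and the loss weights `lambda_l1` and `lambda_ent` are assumptions for illustration, not taken from the paper.

```python
import torch

def drone_nerf_regularizers(weights: torch.Tensor,
                            sigmas: torch.Tensor,
                            lambda_l1: float = 1e-4,
                            lambda_ent: float = 1e-3,
                            eps: float = 1e-10) -> torch.Tensor:
    """Hypothetical sketch of the two regularizers named in the abstract.

    weights: (num_rays, num_samples) compositing weights from volume
             rendering, w_i = T_i * (1 - exp(-sigma_i * delta_i)).
    sigmas:  (num_rays, num_samples) densities predicted by the model.
    The paper's exact definitions and weighting are not given in this
    record; this follows common NeRF practice.
    """
    # (a) L1 sparsity penalty on densities: pushes stray density
    # ("outliers" / floaters) toward zero.
    l1_loss = sigmas.abs().mean()

    # (b) Ray-wise entropy of the normalized weight distribution:
    # low entropy means opacity concentrates at a single surface,
    # suppressing the cloud-fog artifacts the abstract describes.
    p = weights / (weights.sum(dim=-1, keepdim=True) + eps)
    entropy_loss = -(p * torch.log(p + eps)).sum(dim=-1).mean()

    return lambda_l1 * l1_loss + lambda_ent * entropy_loss
```

In training, the returned term would simply be added to the photometric reconstruction loss; the record does not specify the paper's weighting or scheduling.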
WOS Research Areas | Computer Science; Engineering; Physics |
WOS Accession Number | WOS:001220542500001 |
Source URL | http://ir.igsnrr.ac.cn/handle/311030/205176 |
Collection | State Key Laboratory of Resources and Environmental Information System_Foreign-language Papers |
Corresponding Author | Yu, Zhuoyuan |
Author Affiliations | 1. Univ Chinese Acad Sci, Coll Resource & Environm, Beijing 100049, Peoples R China; 2. Chinese Acad Sci, Inst Geog Sci & Nat Resources Res, State Key Lab Resources & Environm Informat Syst, Beijing 100101, Peoples R China |
Recommended Citation (GB/T 7714) | Jin, Pengfei, Yu, Zhuoyuan. Research on 3D Visualization of Drone Scenes Based on Neural Radiance Fields[J]. ELECTRONICS, 2024, 13(9): 1682. |
APA | Jin, Pengfei, & Yu, Zhuoyuan. (2024). Research on 3D Visualization of Drone Scenes Based on Neural Radiance Fields. ELECTRONICS, 13(9), 1682. |
MLA | Jin, Pengfei, et al. "Research on 3D Visualization of Drone Scenes Based on Neural Radiance Fields". ELECTRONICS 13.9 (2024): 1682. |
Ingest Method: OAI harvesting
Source: Institute of Geographic Sciences and Natural Resources Research
Unless otherwise stated, all content in this system is protected by copyright, and all rights are reserved.