Chinese Academy of Sciences Institutional Repositories Grid
PTINet: Converting 3D points to 2D images with deconvolution for point cloud classification

Document Type: Conference Paper

Authors: Fan Chen; Guoyuan Liang; Yimin Zhou; Xinyu Wu; Wei Feng; Shengzhan He
Publication Date: 2018
Conference Date: 2018
Conference Place: Guangzhou, China
English Abstract: 3D point cloud classification is an important task for many applications such as AR/VR, human-computer interaction, environment modeling, and remote sensing. Over the past decade, a great deal of research has been conducted and significant progress has been made in this field. However, point cloud classification remains a challenging problem because of the irregularity of point cloud data, which makes the most popular deep neural networks difficult to apply. To obtain a regular 2D representation of point cloud data, some researchers have projected point clouds to 2D images following predefined rules and then used convolution for further processing. In this paper, we introduce a new deep network that learns a suitable regular representation. The basic observation is that each point can be regarded as a 1x1 image, so deconvolution can be applied to map 3D points to 2D images. We name this network PTINet (Point to Image Network). Instead of relying on predefined mapping rules, PTINet learns a better mapping that preserves as much 3D shape information as possible. Experiments on the ModelNet dataset demonstrate that the proposed method achieves competitive results.
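
To make the idea concrete, below is a minimal, hypothetical PyTorch sketch of the point-to-image mapping described in the abstract: each point enters as a 1x1, 3-channel image, transposed convolutions (deconvolution) expand it into a small 2D patch, and a symmetric max over points aggregates the patches into a regular image for an ordinary 2D CNN classifier. The class name PTINetSketch, all layer sizes, and the max aggregation are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch (assumed details, not the paper's code): each 3D point is
# treated as a 1x1 "image" with 3 channels (x, y, z); stacked ConvTranspose2d
# layers deconvolve every point into a 2D feature patch, and the patches are
# aggregated by a max over points into a regular grid for a plain CNN classifier.
import torch
import torch.nn as nn


class PTINetSketch(nn.Module):
    def __init__(self, num_classes: int = 40):
        super().__init__()
        # Learned point-to-image mapping: 1x1 -> 4x4 -> 8x8 patches per point.
        self.point_to_patch = nn.Sequential(
            nn.ConvTranspose2d(3, 32, kernel_size=4, stride=1),   # 1x1 -> 4x4
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2),  # 4x4 -> 8x8
            nn.ReLU(inplace=True),
        )
        # Ordinary 2D CNN applied to the aggregated image.
        self.classifier = nn.Sequential(
            nn.Conv2d(16, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(64, num_classes),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (B, N, 3) raw xyz coordinates.
        b, n, _ = points.shape
        x = points.reshape(b * n, 3, 1, 1)       # each point as a 1x1 image
        patches = self.point_to_patch(x)         # (B*N, 16, 8, 8)
        patches = patches.reshape(b, n, 16, 8, 8)
        image = patches.max(dim=1).values        # symmetric aggregation over points
        return self.classifier(image)            # (B, num_classes)


if __name__ == "__main__":
    model = PTINetSketch()
    logits = model(torch.randn(2, 1024, 3))      # two clouds of 1024 points each
    print(logits.shape)                          # torch.Size([2, 40])
```

The max aggregation is one possible order-invariant choice (as in other point-based networks); the actual PTINet aggregation and patch sizes may differ.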
Source URL: http://ir.siat.ac.cn:8080/handle/172644/13858
Collection: Shenzhen Institutes of Advanced Technology, Institute of Advanced Integration Technology
Recommended Citation
GB/T 7714
Fan Chen, Guoyuan Liang, Yimin Zhou, et al. PTINet: Converting 3D points to 2D images with deconvolution for point cloud classification[C]. Guangzhou, China, 2018.

Deposit Method: OAI Harvesting

Source: Shenzhen Institutes of Advanced Technology

