Chinese Academy of Sciences Institutional Repositories Grid
Pedestrian Detection Aided by Deep Learning Semantic Tasks

Document Type: Conference Paper

Authors: Yonglong Tian; Ping Luo; Xiaogang Wang; Xiaoou Tang
Publication Date: 2015
Conference: IEEE Conference on Computer Vision and Pattern Recognition
Conference Location: Boston, USA
Abstract: Deep learning methods have achieved great success in pedestrian detection, owing to their ability to learn features from raw pixels. However, they mainly capture middle-level representations, such as the pose of a pedestrian, but confuse positive with hard negative samples (Fig. 1(a)), which have large ambiguity and can only be distinguished by high-level representations. To address this ambiguity, this work jointly optimizes pedestrian detection with semantic tasks, including pedestrian attributes (e.g. 'carrying backpack') and scene attributes (e.g. 'vehicle', 'tree', and 'horizontal'). Rather than expensively annotating scene attributes, we transfer attribute information from existing scene segmentation datasets to the pedestrian dataset by proposing a novel deep model that learns high-level features from multiple tasks and multiple data sources. Since distinct tasks have distinct convergence rates and data from different datasets have different distributions, a multi-task objective function is carefully designed to coordinate the tasks and reduce discrepancies among the datasets. The importance coefficients of the tasks and the network parameters in this objective function can be iteratively estimated. Extensive evaluations show that the proposed approach outperforms the state of the art on the challenging Caltech [10] and ETH [11] datasets, where it reduces the miss rates of previous deep models by 17 and 5.5 percent, respectively.
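The multi-task objective described in the abstract can be pictured as a weighted sum of per-task losses, one coefficient per task. The PyTorch sketch below is purely illustrative: the function name, its arguments, and the fixed `lambdas` vector are hypothetical, and the paper's actual formulation additionally handles differing data distributions across datasets and re-estimates the importance coefficients iteratively alongside the network parameters.

```python
# Minimal sketch of a weighted multi-task objective in the spirit of the
# abstract: a main detection loss plus auxiliary pedestrian/scene-attribute
# losses, each scaled by an importance coefficient. All names here are
# illustrative, not the paper's actual notation or API.
import torch
import torch.nn.functional as F

def multi_task_loss(det_logit, det_label, attr_logits, attr_labels, lambdas):
    """Weighted sum of the detection loss and per-attribute losses.

    lambdas[0] weights the main detection task; lambdas[1:] weight the
    auxiliary semantic tasks. In the paper these coefficients are estimated
    iteratively rather than fixed, as assumed here for simplicity.
    """
    loss = lambdas[0] * F.binary_cross_entropy_with_logits(det_logit, det_label)
    for lam, logit, label in zip(lambdas[1:], attr_logits, attr_labels):
        loss = loss + lam * F.binary_cross_entropy_with_logits(logit, label)
    return loss

# Toy usage with random logits/labels for one detection task and two
# hypothetical attribute tasks (e.g. 'carrying backpack', 'vehicle').
det_logit, det_label = torch.randn(8), torch.randint(0, 2, (8,)).float()
attr_logits = [torch.randn(8), torch.randn(8)]
attr_labels = [torch.randint(0, 2, (8,)).float() for _ in range(2)]
lambdas = torch.tensor([1.0, 0.5, 0.5])
print(multi_task_loss(det_logit, det_label, attr_logits, attr_labels, lambdas))
```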
Indexed By: EI
Language: English
Source URL: http://ir.siat.ac.cn:8080/handle/172644/6700
Collection: Shenzhen Institute of Advanced Technology, Institute of Integration Technology
Author Affiliation: 2015
Recommended Citation (GB/T 7714):
Yonglong Tian, Ping Luo, Xiaogang Wang, et al. Pedestrian Detection Aided by Deep Learning Semantic Tasks[C]. In: IEEE Conference on Computer Vision and Pattern Recognition, Boston, USA.

Ingest Method: OAI harvesting

Source: Shenzhen Institute of Advanced Technology


Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.