Deep unsupervised multi-modal fusion network for detecting driver distraction
Document Type: Journal article
Authors | Zhang, Yuxin 1,2,3; Chen, Yiqiang 1,2,3; Gao, Chenlong 1,2,3 |
Journal | NEUROCOMPUTING |
Publication Date | 2021-01-15 |
Volume | 421 | Pages | 26-38 |
ISSN | 0925-2312 |
Abstract | The risk of being involved in a road traffic crash has increased year over year. Studies show that lack of attention while driving is one of the major causes of traffic accidents. In this work, to detect driver distraction (e.g., phone conversation, eating, texting), we introduce a deep unsupervised multi-modal fusion network, termed UMMFN. It is an end-to-end model composed of three main modules: multi-modal representation learning, multi-scale feature fusion, and unsupervised driver distraction detection. The first module learns low-dimensional representations of multiple heterogeneous sensors using embedding subnetworks. The goal of multi-scale feature fusion is to learn both the temporal dependencies within each modality and the spatial dependencies across different modalities. The last module uses a ConvLSTM encoder-decoder model to perform an unsupervised classification task that is not affected by new types of driver behavior. During the detection phase, a fine-grained detection decision is made by calculating the reconstruction error of UMMFN as a score for each captured test sample. We empirically compare the proposed approach with several state-of-the-art methods on our own multi-modal dataset of distracted driving behavior. Experimental results show that UMMFN outperforms the existing approaches. (c) 2020 Elsevier B.V. All rights reserved. |
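The detection rule described in the abstract, scoring each captured test window by its reconstruction error and flagging high-error windows as distraction, can be illustrated with a minimal sketch. The paper's ConvLSTM encoder-decoder is stood in for here by a simple linear (PCA-style) encoder-decoder fitted on normal-driving data only; the synthetic data, dimensions, and threshold rule are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: rows are fused multi-sensor feature windows.
# "Normal driving" samples lie near a low-dimensional subspace; two
# injected outliers mimic distracted-driving windows.
normal = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 8))
outliers = rng.normal(size=(2, 8)) * 5.0
X = np.vstack([normal, outliers])

# Linear "encoder-decoder": project onto the top-2 principal directions
# of the normal data (a toy proxy for the ConvLSTM encoder-decoder).
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
components = vt[:2]                   # encoder weights
codes = (X - mean) @ components.T     # encode
recon = codes @ components + mean     # decode

# Reconstruction error as the detection score; high error suggests
# a behavior the model never learned to reconstruct, i.e. distraction.
scores = np.linalg.norm(X - recon, axis=1)
threshold = scores[:100].mean() + 3 * scores[:100].std()
flags = scores > threshold
print(flags[-2:])  # the two injected outliers should be flagged
```

The key property the sketch shares with the paper's setup is that the model is trained only on normal behavior, so new, unseen behavior types raise the reconstruction error without requiring labeled examples of each distraction class.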
Funding | National Key Research and Development Plan of China [2018YFC2000605]; Beijing Natural Science Foundation [4194091]; Beijing Municipal Science & Technology Commission [Z171100000117001] |
WOS Research Area | Computer Science |
Language | English |
WOS Accession Number | WOS:000593102100003 |
Publisher | ELSEVIER |
Source URL | [http://119.78.100.204/handle/2XEOYT63/15969] |
Collection | Institute of Computing Technology, CAS, Journal Articles (English) |
Corresponding Author | Chen, Yiqiang |
Affiliations | 1. Chinese Acad Sci, Inst Comp Technol, Beijing 100190, Peoples R China; 2. Univ Chinese Acad Sci, Beijing 100190, Peoples R China; 3. Beijing Key Lab Mobile Comp & Pervas Device, Beijing 100190, Peoples R China |
Recommended Citation (GB/T 7714) | Zhang, Yuxin, Chen, Yiqiang, Gao, Chenlong. Deep unsupervised multi-modal fusion network for detecting driver distraction[J]. NEUROCOMPUTING, 2021, 421: 26-38. |
APA | Zhang, Yuxin, Chen, Yiqiang, & Gao, Chenlong. (2021). Deep unsupervised multi-modal fusion network for detecting driver distraction. NEUROCOMPUTING, 421, 26-38. |
MLA | Zhang, Yuxin, et al. "Deep unsupervised multi-modal fusion network for detecting driver distraction". NEUROCOMPUTING 421 (2021): 26-38. |
Deposit Method: OAI harvesting
Source: Institute of Computing Technology
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.