Long video question answering: A Matching-guided Attention Model
Document Type: Journal Article
Authors | Wang, Weining (1,3); Huang, Yan; Wang, Liang |
Journal | PATTERN RECOGNITION
Publication Date | 2020-06-01 |
Volume | 102 |
Pages | 11 |
Keywords | Long video QA ; Matching-guided attention |
ISSN | 0031-3203 |
DOI | 10.1016/j.patcog.2020.107248 |
Corresponding Author | Wang, Liang (wangliang@nlpr.ia.ac.cn) |
Abstract | Existing video question answering methods answer given questions based on short video snippets. The underlying assumption is that the visual content indicating the ground truth answer ubiquitously exists in the snippet. It might be problematic for long video applications, since involving large numbers of answer-irrelevant snippets will dramatically degenerate the performance. To deal with this issue, we focus on a rarely investigated but practically important problem, namely long video QA, by predicting answers directly from long videos rather than manually pre-extracted short video snippets. We accordingly propose a Matching-guided Attention Model (MAM) which jointly extracts question-related video snippets and predicts answers in a unified framework. To localize questions accurately and efficiently, we calculate corresponding matching scores and boundary regression results for candidate video snippet proposals generated by sliding windows of limited granularity. Guided by the matching scores, the model pays different attention to the extracted video snippet proposals for each question. Finally, we use the attended visual features along with the question to predict the answer in a classification manner. A key obstacle to training our model is that publicly available video QA datasets only contain short videos especially designed for short video QA. Thus, we generate two new datasets for this task on top of the TACoS Multi-Level dataset and the MSR-VTT dataset by generating QA pairs from the video captions, called TACoS-QA and MSR-VTT-QA. Experimental results show the effectiveness of our proposed method on both datasets by comparing with two short video QA methods and a baseline method. (C) 2020 Elsevier Ltd. All rights reserved. |
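The abstract describes an attention mechanism driven by question-snippet matching scores: proposal features are weighted by their matching scores, pooled, and fused with the question feature for answer classification. Below is a minimal, hypothetical PyTorch sketch of that core idea. All names (MatchingGuidedAttention, proposal_feats, match_scores) and dimensions are illustrative assumptions, not the authors' released code; the proposal generation by sliding windows and the boundary regression branch mentioned in the abstract are omitted.

```python
# Hypothetical sketch of matching-guided attention for long video QA.
# Names and dimensions are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MatchingGuidedAttention(nn.Module):
    def __init__(self, vis_dim, q_dim, num_answers, hidden=512):
        super().__init__()
        # Fuse the attended visual feature with the question feature
        # and classify over a fixed answer vocabulary.
        self.classifier = nn.Sequential(
            nn.Linear(vis_dim + q_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_answers),
        )

    def forward(self, proposal_feats, match_scores, question_feat):
        # proposal_feats: (num_proposals, vis_dim) features of candidate snippets
        # match_scores:   (num_proposals,) question-snippet matching scores
        # question_feat:  (q_dim,) encoded question
        attn = F.softmax(match_scores, dim=0)                        # attention from matching scores
        attended = (attn.unsqueeze(1) * proposal_feats).sum(dim=0)   # weighted pooling over proposals
        fused = torch.cat([attended, question_feat], dim=0)          # visual-question fusion
        return self.classifier(fused)                                # logits over candidate answers

# Toy usage with random features: 12 proposals, 2048-d visual and 1024-d question features.
model = MatchingGuidedAttention(vis_dim=2048, q_dim=1024, num_answers=1000)
logits = model(torch.randn(12, 2048), torch.randn(12), torch.randn(1024))
answer = logits.argmax().item()
```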
WOS Keywords | NETWORK ; IMAGE |
Funding Projects | National Key Research and Development Program of China[2016YFB1001000] ; National Key Research and Development Program of China[2018AAA0100402] ; National Natural Science Foundation of China[61525306] ; National Natural Science Foundation of China[61633021] ; National Natural Science Foundation of China[61721004] ; National Natural Science Foundation of China[61420106015] ; National Natural Science Foundation of China[61806194] ; National Natural Science Foundation of China[U1803261] ; National Natural Science Foundation of China[61976132] ; Capital Science and Technology Leading Talent Training Project[Z181100006318030] ; CAS-AIR ; [HW2019SOW01] |
WOS Research Areas | Computer Science ; Engineering |
Language | English |
WOS Accession Number | WOS:000525825100029 |
Publisher | ELSEVIER SCI LTD |
Funding Agencies | National Key Research and Development Program of China ; National Natural Science Foundation of China ; Capital Science and Technology Leading Talent Training Project ; CAS-AIR |
Source URL | http://ir.ia.ac.cn/handle/173211/38877 |
Collection | Institute of Automation, Center for Research on Intelligent Perception and Computing |
Author Affiliations | 1. Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100049, Peoples R China; 2. Chinese Acad Sci, Ctr Excellence Brain Sci & Intelligence Technol, Inst Automat, Beijing 100190, Peoples R China; 3. Chinese Acad Sci, Ctr Res Intelligent Percept & Comp, Natl Lab Pattern Recognit, Inst Automat, Beijing 100190, Peoples R China |
Recommended Citation (GB/T 7714) | Wang, Weining, Huang, Yan, Wang, Liang. Long video question answering: A Matching-guided Attention Model[J]. PATTERN RECOGNITION, 2020, 102: 11. |
APA | Wang, Weining, Huang, Yan, & Wang, Liang. (2020). Long video question answering: A Matching-guided Attention Model. PATTERN RECOGNITION, 102, 11. |
MLA | Wang, Weining, et al. "Long video question answering: A Matching-guided Attention Model". PATTERN RECOGNITION 102 (2020): 11. |
Ingest Method: OAI harvesting
Source: Institute of Automation