Chinese Academy of Sciences Institutional Repositories Grid (中国科学院机构知识库网格)
3D-CNN for Facial Micro- and Macro-expression Spotting on Long Video Sequences using Temporal Oriented Reference Frame

Document type: Conference paper

Authors: Yap, Chuin Hong3; Yap, Moi Hoon3; Davison, Adrian2; Kendrick, Connah3; Li, Jingting1; Wang, Su-Jing1; Cunningham, Ryan3
Publication date: 2022
Conference: MM 2022 - Proceedings of the 30th ACM International Conference on Multimedia
Conference date: Unknown
Conference venue: Unknown
Pages: 7016-7020
Abstract (English)

Facial expression spotting is the preliminary step for micro- and macro-expression analysis. The task of reliably spotting such expressions in video sequences is currently unsolved. Current best systems depend upon optical flow methods to extract regional motion features before categorising that motion into a specific class of facial movement. Optical flow is susceptible to drift error, which introduces a serious problem for motions with long-term dependencies, such as high frame-rate macro-expressions. We propose a purely deep learning solution which, rather than tracking frame-differential motion, compares, via a convolutional model, each frame with two temporally local reference frames. Reference frames are sampled according to calculated micro- and macro-expression durations. As a baseline for MEGC2021, using the leave-one-subject-out evaluation method, we show that our solution performed better on the high frame-rate (200 fps) SAMM Long Videos dataset (SAMM-LV) than on the low frame-rate (30 fps) CAS(ME)2 dataset. We introduce a new unseen dataset for the MEGC2022 challenge (MEGC2022-testSet) and achieve an F1-score of 0.1531 as the baseline result.
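The reference-frame mechanism described in the abstract can be illustrated with a short sketch. The code below is not the authors' implementation: the helper names, layer sizes, and the rule mapping expected expression duration to a frame offset are assumptions made purely for illustration. The idea shown is that each frame is paired with a reference frame k frames before and k frames after it, where k is derived from the expected micro- or macro-expression duration at the dataset's frame rate, and the resulting three-frame clip is scored by a small 3D-CNN.

# Minimal sketch (not the authors' code) of the temporally oriented
# reference-frame idea: each frame is compared, via a small 3D convolutional
# model, with two temporally local reference frames whose offsets are derived
# from the expected expression duration. All names, layer sizes, and the
# duration-to-offset rule below are illustrative assumptions.
import torch
import torch.nn as nn


def reference_offset(fps: float, expected_duration_s: float) -> int:
    """Half the expected expression duration, expressed in frames (assumed rule)."""
    return max(1, int(round(fps * expected_duration_s / 2)))


def build_clip(frames: torch.Tensor, t: int, k: int) -> torch.Tensor:
    """Stack (reference before, current frame, reference after) on a new
    temporal axis; frames is (T, C, H, W), the returned clip is (C, 3, H, W)."""
    before = frames[max(t - k, 0)]
    after = frames[min(t + k, frames.shape[0] - 1)]
    return torch.stack([before, frames[t], after], dim=1)


class TinySpotting3DCNN(nn.Module):
    """Toy 3D-CNN producing one spotting score per three-frame clip."""

    def __init__(self, in_channels: int = 1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 8, kernel_size=(3, 3, 3), padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(8, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, 3, H, W) -> (N, 1) expression-onset score in [0, 1]
        f = self.features(x).flatten(1)
        return torch.sigmoid(self.head(f))


if __name__ == "__main__":
    fps = 200.0                                               # SAMM-LV frame rate
    k_macro = reference_offset(fps, expected_duration_s=1.0)  # assumed macro-expression duration
    video = torch.rand(600, 1, 64, 64)                        # synthetic grayscale long video
    clips = torch.stack([build_clip(video, t, k_macro) for t in range(0, 600, 50)])
    scores = TinySpotting3DCNN()(clips)
    print(scores.shape)  # torch.Size([12, 1])

In the same spirit, a smaller offset (from the much shorter micro-expression duration) would give a second stream of clips for micro-expression spotting; how the two streams are combined is not specified here and is left out of the sketch.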

Indexed in: EI
Proceedings: MM 2022 - Proceedings of the 30th ACM International Conference on Multimedia
Source URL: http://ir.psych.ac.cn/handle/311026/44785
Collection: Institute of Psychology - CAS Key Laboratory of Behavioral Science
Corresponding author: Davison, Adrian
Affiliations:
1. CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing, China
2. University of Manchester, Manchester, United Kingdom
3. Centre for Advanced Computational Science, Manchester Metropolitan University, Manchester, United Kingdom
Recommended citation (GB/T 7714):
Yap, Chuin Hong, Yap, Moi Hoon, Davison, Adrian, et al. 3D-CNN for Facial Micro- and Macro-expression Spotting on Long Video Sequences using Temporal Oriented Reference Frame[C]. In: MM 2022 - Proceedings of the 30th ACM International Conference on Multimedia. Unknown. Unknown.

Deposit method: OAI harvesting

Source: Institute of Psychology

