Bi-directional Modality Fusion Network for Audio-Visual Event Localization
Document Type: Conference Paper
Authors | Liu, Shuo1,3; Quan, Weize1,3; Liu, Yuan2; Yan, Dong-Ming1,3 |
Publication Date | 2022-05 |
Conference Date | 2022.5.23-2022.5.27 |
Conference Location | Singapore |
DOI | 10.1109/ICASSP43922.2022.9746280 |
Abstract (English) | Audio and visual signals stimulate many audio-visual sensory neurons in humans to generate audio-visual content, helping humans perceive the world. Most existing audio-visual event localization approaches focus on generating audio-visual features by fusing the audio and visual modalities for the final prediction. However, an audio-visual adjustment mechanism exists in a complicated multi-modal perception system. Inspired by this observation, we propose a novel bi-directional modality fusion network (BMFN), which not only fuses audio and visual features but also adjusts the fused features, with the help of the original audio and visual content, to increase their representativeness. The high-level audio-visual features obtained from the two directions, via two forward-backward fusion modules and a mean operation, are combined for the final event localization. Experimental results demonstrate that our method outperforms state-of-the-art methods in both fully- and weakly-supervised learning settings. The code is available at https://github.com/weizequan/BMFN.git. |
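The abstract describes the architecture concretely enough for a rough sketch: fuse audio and visual features in both directions, let each fused representation be re-adjusted by the original unimodal features (the forward-backward step), then average the two directions before classification. The following is a minimal, speculative PyTorch rendering of that description; the module names, feature dimensions, and the use of simple linear layers are assumptions for illustration, not the authors' implementation (see the linked repository for the actual code).

```python
import torch
import torch.nn as nn

class ForwardBackwardFusion(nn.Module):
    """One fusion direction: fuse the two modalities (forward step),
    then adjust the fused feature with the original unimodal inputs
    (backward step). Layer choices here are assumptions."""
    def __init__(self, dim):
        super().__init__()
        self.forward_fuse = nn.Linear(2 * dim, dim)     # fuse audio + visual
        self.backward_adjust = nn.Linear(3 * dim, dim)  # re-adjust with originals

    def forward(self, lead, follow):
        fused = torch.relu(self.forward_fuse(torch.cat([lead, follow], dim=-1)))
        # backward step: the original modalities refine the fused feature
        adjusted = torch.relu(
            self.backward_adjust(torch.cat([fused, lead, follow], dim=-1)))
        return adjusted

class BMFNSketch(nn.Module):
    """Bi-directional fusion: one forward-backward module per direction,
    averaged (the mean operation) before per-segment event prediction."""
    def __init__(self, dim, num_classes):
        super().__init__()
        self.av = ForwardBackwardFusion(dim)  # audio-led direction
        self.va = ForwardBackwardFusion(dim)  # visual-led direction
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, audio, visual):
        fused = (self.av(audio, visual) + self.va(visual, audio)) / 2
        return self.classifier(fused)

# Hypothetical usage on segment-level features, e.g. 10 one-second segments;
# 29 classes assumes the AVE benchmark's 28 event categories plus background.
model = BMFNSketch(dim=128, num_classes=29)
audio = torch.randn(2, 10, 128)   # (batch, segments, feature dim)
visual = torch.randn(2, 10, 128)
logits = model(audio, visual)     # (2, 10, 29)
```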
Language | English |
Source URL | [http://ir.ia.ac.cn/handle/173211/51504] |
Collection | State Key Laboratory of Multimodal Artificial Intelligence Systems |
Corresponding Author | Yan, Dong-Ming |
Affiliations | 1.School of Artificial Intelligence, University of Chinese Academy of Sciences 2.Speech Lab, Alibaba Group 3.NLPR, Institute of Automation, Chinese Academy of Sciences |
Recommended Citation (GB/T 7714) | Liu, Shuo, Quan, Weize, Liu, Yuan, et al. Bi-directional Modality Fusion Network for Audio-Visual Event Localization[C]. In: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Singapore, 2022.5.23-2022.5.27. |
Deposit Method: OAI Harvesting
Source: Institute of Automation, Chinese Academy of Sciences