Chinese Academy of Sciences Institutional Repositories Grid
Few-Shot Learning via Feature Hallucination with Variational Inference

Document Type: Conference Paper

Authors: Luo QX (罗沁轩)1,2; Wang LF (汪凌峰)1,3; Lv JG (吕京国)4; Xiang SM (向世明)1,2; Pan CH (潘春洪)1
Publication Date: 2021-01
Conference Date: 2021-01
Conference Venue: Online
Abstract

Deep learning has achieved great success in artificial intelligence, but its performance depends heavily on labeled data. Few-shot learning aims to make a model adapt rapidly to unseen classes from only a few labeled samples after training on a base dataset; this is useful for tasks that lack labeled data, such as medical image processing. Since the core problem of few-shot learning is the scarcity of samples, a straightforward remedy is data augmentation. This paper proposes a generative model (VI-Net) built on a cosine-classifier baseline. Specifically, we construct a framework that learns to define a generating space for each category in the latent space from a few support samples. New feature vectors can then be generated to sharpen the classifier's decision boundary during fine-tuning. To evaluate the effectiveness of the proposed approach, we conduct comparative experiments and ablation studies on mini-ImageNet and CUB. Experimental results show that VI-Net improves performance over the baseline and achieves state-of-the-art results among augmentation-based methods.
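The two ingredients the abstract names, a cosine classifier and per-class feature generation in a latent space, can be sketched roughly as follows. This is a minimal NumPy illustration, not the paper's VI-Net implementation: the function names (`cosine_logits`, `hallucinate_features`) and the fixed-variance Gaussian sampling via the reparameterization trick are assumptions for illustration, since the abstract does not specify the encoder or decoder details.

```python
import numpy as np

def cosine_logits(features, weights, scale=10.0):
    """Cosine classifier: scaled cosine similarity between
    L2-normalized feature vectors and L2-normalized class weights."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    return scale * f @ w.T  # (n_samples, n_classes)

def hallucinate_features(support, logvar, n_samples, rng):
    """Sample new feature vectors around the class prototype
    with the reparameterization trick: z = mu + sigma * eps."""
    mu = support.mean(axis=0)          # class prototype from few shots
    sigma = np.exp(0.5 * logvar)       # per-dimension std deviation
    eps = rng.standard_normal((n_samples, support.shape[1]))
    return mu + sigma * eps

# Usage: hallucinate 20 extra features from a 5-shot, 64-dim support set,
# then classify them against 3 (random, illustrative) class weight vectors.
rng = np.random.default_rng(0)
support = rng.standard_normal((5, 64))
fake = hallucinate_features(support, logvar=np.full(64, -2.0),
                            n_samples=20, rng=rng)
logits = cosine_logits(fake, rng.standard_normal((3, 64)))
```

Because the logits are scaled cosine similarities, they stay within `[-scale, scale]` regardless of feature magnitude, which is what keeps the decision boundary well-behaved when hallucinated features are mixed in during fine-tuning.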

Language: English
Source URL: http://ir.ia.ac.cn/handle/173211/44310
Collection: Institute of Automation / National Laboratory of Pattern Recognition / Remote Sensing Image Processing Team
Author Affiliations:
1. NLPR, Institute of Automation, Chinese Academy of Sciences
2. School of Artificial Intelligence, University of Chinese Academy of Sciences
3. Key Laboratory of Knowledge Automation for Industrial Processes, Ministry of Education
4. School of Geomatics and Urban Spatial Informatics, Beijing University of Civil Engineering and Architecture
Recommended Citation (GB/T 7714):
Luo QX, Wang LF, Lv JG, et al. Few-Shot Learning via Feature Hallucination with Variational Inference[C]. Online conference, 2021-01.

Ingestion Method: OAI harvesting

Source: Institute of Automation

Unless otherwise stated, all content in this system is protected by copyright, and all rights are reserved.