End-to-End Lifelong Learning: a Framework to Achieve Plasticities of both the Feature and Classifier Constructions
Document Type: Journal Article
Authors | Hao, Wangli1; Fan, Junsong; Zhang, Zhaoxiang; Zhu, Guibo |
Journal | COGNITIVE COMPUTATION |
Publication Date | 2018-04-01 |
Volume | 10 |
Issue | 2 |
Pages | 321-333 |
Keywords | Plasticity; Lifelong Learning; End-to-End; Incremental PCANet; Incremental KmeansNet; Incremental SVM |
DOI | 10.1007/s12559-017-9514-0 |
Document Subtype | Article |
Abstract | Plasticity in the brain offers us a promising ability to learn about and understand the world. Although great successes have been achieved in many fields, few bio-inspired machine learning methods have mimicked this ability. Consequently, when facing large-scale or time-varying data, these bio-inspired methods become infeasible, because they lack plasticity and require all training data to be loaded into memory. Furthermore, even popular deep convolutional neural network (CNN) models have relatively fixed structures and cannot handle time-varying data well. Through incremental methodologies, this paper explores an end-to-end lifelong learning framework that achieves plasticity of both the feature and classifier constructions. The proposed model mainly comprises three parts: Gabor filters followed by a max pooling layer, offering shift and scale tolerance to input samples; incremental unsupervised feature extraction; and an incremental SVM, which together realize plasticity of both feature learning and classifier construction. Unlike CNNs, plasticity in our model requires no back propagation (BP) process and no huge number of parameters. Our incremental models, IncPCANet and IncKmeansNet, achieve better results than PCANet and KmeansNet on the MNIST and Caltech101 datasets, respectively. Meanwhile, IncPCANet and IncKmeansNet show promising plasticity of feature extraction and classifier construction when the distribution of the data changes. Extensive experiments validate the performance of our model and verify the physiological hypothesis that plasticity is stronger in high-level layers than in low-level layers. |
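The pipeline described in the abstract, incrementally updated unsupervised feature extraction followed by an incrementally updated linear SVM, can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: it assumes scikit-learn's IncrementalPCA as a stand-in for the incremental PCA stage of IncPCANet, SGDClassifier with hinge loss as a stand-in for the incremental SVM, and synthetic batches in place of the MNIST/Caltech101 data used in the paper.

```python
# Minimal sketch (not the authors' code): chains an incrementally updated
# unsupervised feature extractor with an incrementally updated linear SVM,
# so both the feature and classifier constructions adapt to streaming data.
import numpy as np
from sklearn.decomposition import IncrementalPCA
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
n_classes, n_features, n_components = 10, 784, 64

feature_extractor = IncrementalPCA(n_components=n_components)
classifier = SGDClassifier(loss="hinge")   # hinge loss = linear SVM objective
all_classes = np.arange(n_classes)

def data_stream(n_batches=5, batch_size=256):
    """Stand-in for an arriving data stream (e.g. MNIST-like batches)."""
    for _ in range(n_batches):
        X = rng.normal(size=(batch_size, n_features))
        y = rng.integers(0, n_classes, size=batch_size)
        yield X, y

for X_batch, y_batch in data_stream():
    # Plasticity of the feature construction: update the subspace on new data only.
    feature_extractor.partial_fit(X_batch)
    Z_batch = feature_extractor.transform(X_batch)
    # Plasticity of the classifier construction: update the SVM on the same batch.
    classifier.partial_fit(Z_batch, y_batch, classes=all_classes)
```

Because both stages expose partial_fit-style updates, no previously seen batch has to be kept in memory, which is the property the paper contrasts with fixed-structure, fully retrained models.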
WOS Keywords | OBJECT RECOGNITION; COMPONENT ANALYSIS; VISUAL-SYSTEM; EIGENFACES; NETWORKS; MODEL |
WOS Research Areas | Computer Science; Neurosciences & Neurology |
Language | English |
WOS Accession Number | WOS:000430190600012 |
Funding | National Natural Science Foundation of China (61773375; 61375036; 61511130079); Microsoft Collaborative Research Project |
Source URL | http://ir.ia.ac.cn/handle/173211/20440 |
Collection | Institute of Automation, Center for Research on Intelligent Perception and Computing |
Affiliations | 1. UCAS, Inst Automat, Beijing, Peoples R China; 2. CAS Ctr Excellence Brain Sci & Intelligence Techn, Beijing, Peoples R China; 3. Chinese Acad Sci, CASIA, Inst Automat, Beijing, Peoples R China |
Recommended Citation (GB/T 7714) | Hao, Wangli, Fan, Junsong, Zhang, Zhaoxiang, et al. End-to-End Lifelong Learning: a Framework to Achieve Plasticities of both the Feature and Classifier Constructions[J]. COGNITIVE COMPUTATION, 2018, 10(2): 321-333. |
APA | Hao, Wangli, Fan, Junsong, Zhang, Zhaoxiang, & Zhu, Guibo. (2018). End-to-End Lifelong Learning: a Framework to Achieve Plasticities of both the Feature and Classifier Constructions. COGNITIVE COMPUTATION, 10(2), 321-333. |
MLA | Hao, Wangli, et al. "End-to-End Lifelong Learning: a Framework to Achieve Plasticities of both the Feature and Classifier Constructions." COGNITIVE COMPUTATION 10.2 (2018): 321-333. |
Ingestion Method: OAI Harvesting
Source: Institute of Automation