Chinese Academy of Sciences Institutional Repositories Grid
Large Margin Multi-Modal Multi-Task Feature Extraction for Image Classification

Document Type: Journal Article

Authors: Luo, Yong 1,2,3; Wen, Yonggang 2; Tao, Dacheng 4,5; Gui, Jie 6; Xu, Chao 1
Journal: IEEE TRANSACTIONS ON IMAGE PROCESSING
Publication Date: 2016
Volume: 25, Issue: 1, Pages: 414-427
Keywords: Feature Extraction; Image Classification; Multi-task; Multi-modal; Large Margin
DOI: 10.1109/TIP.2015.2495116
Document Subtype: Article
Abstract: The features used in many image analysis-based applications are frequently of very high dimension. Feature extraction offers several advantages in high-dimensional cases, and many recent studies have used multi-task feature extraction approaches, which often outperform single-task feature extraction approaches. However, most of these methods are limited in that they only consider data represented by a single type of feature, even though images are usually represented by features from multiple modalities. We therefore propose a novel large margin multi-modal multi-task feature extraction (LM3FE) framework for handling multi-modal features for image classification. In particular, LM3FE simultaneously learns the feature extraction matrix for each modality and the modality combination coefficients. In this way, LM3FE not only handles correlated and noisy features, but also utilizes the complementarity of different modalities to further help reduce feature redundancy in each modality. The large margin principle employed also helps to extract strongly predictive features, so that they are more suitable for prediction (e.g., classification). An alternating algorithm is developed for problem optimization, and each subproblem can be efficiently solved. Experiments on two challenging real-world image data sets demonstrate the effectiveness and superiority of the proposed method.
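The abstract describes an alternating optimization over per-modality feature extraction matrices, a large-margin (hinge-loss) predictor, and modality combination coefficients. The sketch below only illustrates that general pattern on toy data; it is not the paper's LM3FE algorithm, and every name and update rule here (toy_lm3fe_like, hinge_grad, the plain subgradient steps, the simplex renormalization of beta) is an illustrative assumption.

import numpy as np

rng = np.random.default_rng(0)

def hinge_grad(scores, y):
    """Subgradient of the hinge loss max(0, 1 - y * score) w.r.t. the scores."""
    margin = 1.0 - y * scores
    return np.where(margin > 0.0, -y, 0.0)

def toy_lm3fe_like(X_list, y, k=5, lam=0.1, iters=50, lr=0.01):
    """Alternately update per-modality projections W_m, a shared linear
    predictor w, and modality combination weights beta (toy sketch only)."""
    M = len(X_list)
    W = [0.01 * rng.standard_normal((X.shape[1], k)) for X in X_list]  # projections
    w = 0.01 * rng.standard_normal(k)                                  # predictor
    beta = np.full(M, 1.0 / M)                                         # combination weights

    for _ in range(iters):
        # Combined low-dimensional representation: sum_m beta_m * X_m W_m
        Z = sum(b * X @ Wm for b, X, Wm in zip(beta, X_list, W))
        g = hinge_grad(Z @ w, y)                                       # per-sample loss subgradient

        # (1) Update each projection matrix with w and beta fixed
        for m in range(M):
            grad_W = beta[m] * X_list[m].T @ np.outer(g, w) + lam * W[m]
            W[m] -= lr * grad_W

        # (2) Update the predictor with W and beta fixed
        w -= lr * (Z.T @ g + lam * w)

        # (3) Update combination weights, then renormalize onto the simplex
        grad_beta = np.array([g @ (X_list[m] @ W[m] @ w) for m in range(M)])
        beta = np.clip(beta - lr * grad_beta, 1e-6, None)
        beta /= beta.sum()

    return W, w, beta

# Tiny usage example with two synthetic "modalities"
X1 = rng.standard_normal((100, 20))
X2 = rng.standard_normal((100, 30))
y = np.sign(X1[:, 0] + 0.5 * X2[:, 0] + 0.1 * rng.standard_normal(100))
W, w, beta = toy_lm3fe_like([X1, X2], y, k=5)
print("learned modality weights:", beta)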
WOS Keywords: FEATURE-SELECTION; MULTIVIEW; MATRIX; REGRESSION; FRAMEWORK
WOS Research Areas: Computer Science; Engineering
Language: English
WOS Record Number: WOS:000366558900011
Funding Agencies: Microsoft Research Asia; Singapore Ministry of Education Tier 1 Grant (RG17/14); National Key Technology R&D Program (2015BAF15B00); National Natural Science Foundation of China (61375026, 61572463); Australian Research Council (DP-140102164, FT-130101457)
Source URL: http://ir.hfcas.ac.cn:8080/handle/334002/31603
Collection: Hefei Institutes of Physical Science / Institute of Intelligent Machines, Chinese Academy of Sciences
Author Affiliations:
1.Peking Univ, Minist Educ, Key Lab Machine Percept, Sch Elect Engn & Comp Sci, Beijing 100871, Peoples R China
2.Nanyang Technol Univ, Sch Comp Engn, Singapore 639798, Singapore
3.Univ Technol Sydney, Ctr Quantum Computat & Intelligent Syst, Fac Engn & Informat Technol, Sydney, NSW 2007, Australia
4.Univ Technol, Ctr Quantum Computat & Intelligent Syst, Ultimo, NSW 2007, Australia
5.Univ Technol, Fac Engn & Informat Technol, Ultimo, NSW 2007, Australia
6.Chinese Acad Sci, Inst Intelligent Machines, Hefei 230031, Peoples R China
Recommended Citation
GB/T 7714
Luo, Yong, Wen, Yonggang, Tao, Dacheng, et al. Large Margin Multi-Modal Multi-Task Feature Extraction for Image Classification[J]. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2016, 25(1): 414-427.
APA Luo, Yong, Wen, Yonggang, Tao, Dacheng, Gui, Jie, & Xu, Chao. (2016). Large Margin Multi-Modal Multi-Task Feature Extraction for Image Classification. IEEE TRANSACTIONS ON IMAGE PROCESSING, 25(1), 414-427.
MLA Luo, Yong, et al. "Large Margin Multi-Modal Multi-Task Feature Extraction for Image Classification." IEEE TRANSACTIONS ON IMAGE PROCESSING 25.1 (2016): 414-427.

Ingestion Method: OAI Harvesting

Source: Hefei Institutes of Physical Science


Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.