Unsupervised image saliency detection with Gestalt-laws guided optimization and visual attention based refinement
Document Type: Journal Article
Authors | Yan, Yijun 1; Ren, Jinchang 1; Sun, Genyun 2; Zhao, Huimin 3; Han, Junwei 4; Li, Xuelong 5 |
Journal | PATTERN RECOGNITION |
Publication Date | 2018-07-01 |
Volume | 79 |
Pages | 65-78 |
Keywords | Background Connectivity; Gestalt Laws Guided Optimization; Image Saliency Detection; Feature Fusion; Human Vision Perception |
ISSN | 0031-3203 |
DOI | 10.1016/j.patcog.2018.02.004 |
Affiliation Rank | 5 |
Document Subtype | Article |
Abstract | Visual attention is a fundamental cognitive capability that allows human beings to focus on regions of interest (ROIs) in complex natural environments. Which ROIs we attend to depends mainly on two distinct attentional mechanisms. The bottom-up mechanism guides the detection of salient objects and regions through externally driven factors, i.e. color and location, whilst the top-down mechanism biases attention according to prior knowledge and cognitive strategies provided by the visual cortex. However, how to practically use and fuse both attentional mechanisms for salient object detection has not been sufficiently explored. To this end, we propose in this paper an integrated framework consisting of bottom-up and top-down attention mechanisms that enables attention to be computed at the level of salient objects and/or regions. Within our framework, the bottom-up mechanism is guided by the Gestalt laws of perception. We interpret the Gestalt laws of homogeneity, similarity, proximity, and figure and ground in relation to color and spatial contrast at the level of regions and objects to produce a feature contrast map. The top-down mechanism uses a formal computational model to describe the background connectivity of attention and produce a priority map. Integrating both mechanisms and applying them to salient object detection, our results demonstrate that the proposed method consistently outperforms a number of existing unsupervised approaches on five challenging and complicated datasets in terms of higher precision and recall rates, AP (average precision) and AUC (area under curve) values. (C) 2018 Elsevier Ltd. All rights reserved. |
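The abstract describes a two-stream design: a bottom-up, Gestalt-laws guided feature contrast map fused with a top-down background-connectivity priority map. The Python sketch below is only a minimal illustration of that fusion idea, not the authors' implementation; the cues it computes (global color deviation, a border-distance background prior) and all names (bottom_up_contrast, background_prior, fuse) are illustrative assumptions.

# Minimal sketch (assumption, not the paper's method): fuse a bottom-up
# contrast map with a top-down background-connectivity-style prior.
import numpy as np

def normalize(m):
    """Scale a map to [0, 1]; return zeros if the map is constant."""
    lo, hi = m.min(), m.max()
    return (m - lo) / (hi - lo) if hi > lo else np.zeros_like(m)

def bottom_up_contrast(image):
    """Toy stand-in for the Gestalt-laws guided feature contrast map:
    per-pixel color deviation from the global mean color."""
    mean_color = image.reshape(-1, 3).mean(axis=0)
    return normalize(np.linalg.norm(image - mean_color, axis=2))

def background_prior(shape, border=0.15):
    """Toy stand-in for the background-connectivity priority map:
    pixels close to the image border are treated as likely background."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.minimum.reduce([yy, xx, h - 1 - yy, w - 1 - xx])
    return normalize(np.clip(dist / (min(h, w) * border), 0.0, 1.0))

def fuse(contrast, prior):
    """Multiplicative fusion of the two cues, rescaled to [0, 1]."""
    return normalize(contrast * prior)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((120, 160, 3))        # placeholder RGB image in [0, 1]
    img[40:80, 60:110] += 0.5              # brighter patch as a mock salient object
    img = np.clip(img, 0.0, 1.0)
    saliency = fuse(bottom_up_contrast(img), background_prior(img.shape[:2]))
    print(saliency.shape, float(saliency.max()))

In this toy fusion, the background prior suppresses contrast responses near the image border, which is the general intent of combining a priority map with a feature contrast map; the paper's actual optimization is considerably more involved.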
WOS Keywords | REGION DETECTION; TOP-DOWN; OBJECT SEGMENTATION; BOTTOM-UP; LEVEL; MODEL; MECHANISMS; RETRIEVAL; VISION; SEARCH |
WOS Research Areas | Computer Science; Engineering |
Language | English |
WOS Accession Number | WOS:000430903000006 |
Funding Agencies | Natural Science Foundation of China (61672008; 61772144); Fundamental Research Funds for the Central Universities (18CX05030A); Natural Science Foundation of Guangdong Province (2016A030311013); Guangdong Provincial Application-oriented Technical Research and Development Special Fund (2016B010127006); International Scientific and Technological Cooperation Projects of Guangdong Province (2017A050501039) |
Source URL | http://ir.opt.ac.cn/handle/181661/30026 |
Collection | Xi'an Institute of Optics and Precision Mechanics, Center for Optical Imagery Analysis and Learning |
Corresponding Author | Ren, Jinchang (jinchang.ren@strath.ac.uk) |
Author Affiliations | 1. Univ Strathclyde, Dept Elect & Elect Engn, Glasgow, Lanark, Scotland; 2. China Univ Petr East China, Sch Geosci, Qingdao, Peoples R China; 3. Guangdong Polytech Normal Univ, Sch Comp Sci, Guangzhou, Guangdong, Peoples R China; 4. Northwestern Polytech Univ, Sch Automat, Xian, Shaanxi, Peoples R China; 5. Chinese Acad Sci, Xian Inst Opt & Precis Mech, Xian, Shaanxi, Peoples R China |
Recommended Citation (GB/T 7714) | Yan, Yijun, Ren, Jinchang, Sun, Genyun, et al. Unsupervised image saliency detection with Gestalt-laws guided optimization and visual attention based refinement[J]. PATTERN RECOGNITION, 2018, 79: 65-78. |
APA | Yan, Yijun, Ren, Jinchang, Sun, Genyun, Zhao, Huimin, Han, Junwei, & Li, Xuelong. (2018). Unsupervised image saliency detection with Gestalt-laws guided optimization and visual attention based refinement. PATTERN RECOGNITION, 79, 65-78. |
MLA | Yan, Yijun, et al. "Unsupervised image saliency detection with Gestalt-laws guided optimization and visual attention based refinement." PATTERN RECOGNITION 79 (2018): 65-78. |
Ingestion Method: OAI Harvesting
Source: Xi'an Institute of Optics and Precision Mechanics