Alignment Free and Distortion Robust Iris Recognition
Document Type: Conference Paper
作者 (Authors) | Min Ren 1,2; Caiyong Wang; Yunlong Wang; et al.
出版日期 (Publication Date) | 2019-06
会议日期 (Conference Date) | 2019-06-04
会议地点 (Conference Venue) | Crete, Greece
英文摘要 (Abstract) | Iris recognition is a reliable personal identification method, but there is still much room to improve its accuracy, especially in less-constrained situations. For example, free head movement may cause large rotation differences between iris images, and illumination variations may cause irregular distortion of iris texture. To match intra-class iris images robustly under head rotation, existing solutions usually require a precise alignment operation, either by exhaustive search within a predetermined range during iris image preprocessing or by brute-force search for the minimum Hamming distance during iris feature matching. In wild environments, iris rotation is far more uncertain than in constrained situations, and exhaustive search within a predetermined range is impracticable. This paper presents a unified feature-level solution to both alignment-free and distortion-robust iris recognition in the wild. A new deep learning based method named Alignment Free Iris Network (AFINet) is proposed, which utilizes a trainable VLAD (Vector of Locally Aggregated Descriptors) encoder called NetVLAD [18] to decouple the correlations between local representations and their spatial positions. Deformable convolution [5] is further leveraged to overcome iris texture distortion through dense adaptive sampling. The results of extensive experiments on three public iris image databases and the simulated degradation databases show that AFINet significantly outperforms state-of-the-art iris recognition methods.
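The abstract describes two generic building blocks: a trainable VLAD (NetVLAD) encoder that aggregates local descriptors independently of their spatial positions, and deformable convolution for adaptive sampling of distorted texture. The sketch below is only an illustration of those generic components in PyTorch, not the authors' AFINet implementation; the class name NetVLADPooling, the cluster count, feature dimension, and input sizes are assumptions made for the example.

```python
# Illustrative sketch only (assumed names and hyper-parameters, not the AFINet code).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.ops import DeformConv2d


class NetVLADPooling(nn.Module):
    """Trainable VLAD pooling: aggregates local descriptors into a global
    descriptor regardless of where they appear in the feature map."""

    def __init__(self, num_clusters=32, dim=128):
        super().__init__()
        self.num_clusters = num_clusters
        # Soft-assignment of each local descriptor to learned cluster centers.
        self.assign = nn.Conv2d(dim, num_clusters, kernel_size=1, bias=True)
        self.centers = nn.Parameter(torch.randn(num_clusters, dim) * 0.1)

    def forward(self, x):                                 # x: (B, D, H, W)
        b, d, h, w = x.shape
        soft = F.softmax(self.assign(x), dim=1)           # (B, K, H, W)
        soft = soft.view(b, self.num_clusters, -1)        # (B, K, N), N = H*W
        feats = x.view(b, d, -1)                          # (B, D, N)
        # Assignment-weighted residuals of every descriptor to every center:
        # vlad[b, k] = sum_n soft[b, k, n] * (feats[b, :, n] - centers[k])
        vlad = torch.einsum('bkn,bdn->bkd', soft, feats) \
             - soft.sum(dim=2, keepdim=True) * self.centers.unsqueeze(0)
        vlad = F.normalize(vlad, dim=2)                   # intra-normalization
        return F.normalize(vlad.flatten(1), dim=1)        # (B, K*D) global descriptor


# Usage sketch on an assumed 128-channel feature map of a normalized iris image.
pool = NetVLADPooling(num_clusters=32, dim=128)
desc = pool(torch.randn(2, 128, 16, 64))                  # -> shape (2, 4096)

# Deformable convolution (torchvision's generic operator): the sampling offsets
# would normally be predicted by a small convolution; zeros are used here only
# to show the expected tensor shapes.
dconv = DeformConv2d(128, 128, kernel_size=3, padding=1)
offsets = torch.zeros(2, 2 * 3 * 3, 16, 64)               # (B, 2*kh*kw, H, W)
out = dconv(torch.randn(2, 128, 16, 64), offsets)
```

Because VLAD pooling sums assignment-weighted residuals over all spatial locations, a rotation of the input feature map leaves the aggregated descriptor largely unchanged, which is why this style of encoder suits alignment-free matching.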
语种 (Language) | English
源URL (Source URL) | http://ir.ia.ac.cn/handle/173211/50605
专题 (Collection) | Institute of Automation, Center for Research on Intelligent Perception and Computing
作者单位 (Author Affiliations) | 1. CRIPAC, NLPR, CASIA; 2. University of Chinese Academy of Sciences
推荐引用方式 GB/T 7714 (Recommended Citation) | Min Ren, Caiyong Wang, Yunlong Wang, et al. Alignment Free and Distortion Robust Iris Recognition[C]. In: . Crete, Greece, 2019-06-04.
入库方式 (Deposit Method): OAI harvesting
来源 (Source): Institute of Automation