Chinese Academy of Sciences Institutional Repositories Grid
Residual Dual Scale Scene Text Spotting by Fusing Bottom-Up and Top-Down Processing

Document Type: Journal Article

Authors: Wei Feng 2,3; Fei Yin 2,3; Xu-Yao Zhang 2,3; Wenhao He 1; Cheng-Lin Liu 2,3,4
Journal: International Journal of Computer Vision
Publication Date: 2020-10
Volume: 1  Issue: 38  Pages: 1872–1885
Keywords: Scene text spotting; Arbitrary shapes; Bottom-up; Top-down; Residual dual scale
Abstract

Existing methods for arbitrarily shaped text spotting can be divided into two categories: bottom-up methods detect and recognize local areas of text and then group them into text lines or words; top-down methods detect text regions of interest and then apply polygon fitting and text recognition to the detected regions. In this paper, we analyze the advantages and disadvantages of these two types of methods, and propose a novel text spotter that fuses bottom-up and top-down processing. To detect text of arbitrary shapes, we employ a bottom-up detector to describe text with a series of rotated squares, and design a top-down detector to represent the region of interest with a minimum enclosing rotated rectangle. The text boundary is then determined by fusing the outputs of the two detectors. To connect arbitrarily shaped text detection and recognition, we propose a differentiable operator named RoISlide, which can extract features for arbitrary text regions from whole-image feature maps. Based on the features extracted by RoISlide, a CNN- and CTC-based text recognizer is introduced to make the framework free from character-level annotations. To improve robustness to scale variation, we further propose a residual dual scale spotting mechanism, where two spotters work on different feature levels and the high-level spotter is based on residuals of the low-level spotter. Our method achieves state-of-the-art performance on four English datasets and one Chinese dataset, covering both arbitrarily shaped and oriented text. We also provide extensive ablation experiments to analyze how the key components affect performance.
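The recognition branch described in the abstract (a CNN followed by CTC decoding over per-region features produced by RoISlide) can be illustrated with a minimal sketch. The code below is an illustrative assumption, not the authors' implementation: the class name SliceRecognizer, the channel and class counts, and the use of torch.nn.CTCLoss are all hypothetical stand-ins for whatever the paper actually uses.

```python
# Minimal sketch (assumed, not the authors' code) of a CNN + CTC recognizer head
# that consumes fixed-size per-region features, such as those RoISlide would extract.
import torch
import torch.nn as nn

class SliceRecognizer(nn.Module):
    def __init__(self, in_channels=256, num_classes=37):  # 36 symbols + CTC blank (assumed)
        super().__init__()
        # Small convolutional stack over the per-region feature map.
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 256, 3, padding=1), nn.ReLU(),
            nn.Conv2d(256, 256, 3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Linear(256, num_classes)

    def forward(self, region_feats):
        # region_feats: (N, C, H, W) features for N text regions.
        x = self.conv(region_feats)
        x = x.mean(dim=2)          # collapse height -> (N, C, W)
        x = x.permute(2, 0, 1)     # (W, N, C): one time step per horizontal position
        return self.classifier(x)  # (W, N, num_classes) logits for CTC

# Training with CTC needs only the label sequence per region, not character boxes.
recognizer = SliceRecognizer()
ctc_loss = nn.CTCLoss(blank=0, zero_infinity=True)

feats = torch.randn(2, 256, 8, 32)                      # two dummy text regions
logits = recognizer(feats).log_softmax(-1)              # (T=32, N=2, C=37)
targets = torch.randint(1, 37, (2, 6))                  # dummy label sequences (no blanks)
input_lens = torch.full((2,), logits.size(0), dtype=torch.long)
target_lens = torch.full((2,), 6, dtype=torch.long)
loss = ctc_loss(logits, targets, input_lens, target_lens)
```

Because CTC aligns the per-step predictions to the label sequence on its own, supervision requires only the transcription of each region, which is the property the abstract highlights when it says the framework is free from character-level annotations.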

Source URL: http://ir.ia.ac.cn/handle/173211/41453
Collection: Institute of Automation / National Laboratory of Pattern Recognition / Pattern Analysis and Learning Group
Corresponding Author: Cheng-Lin Liu
Author Affiliations:
1. Tencent Map Big Data Lab, Beijing 100193, People’s Republic of China
2. National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, People’s Republic of China
3. School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, People’s Republic of China
4. CAS Center for Excellence in Brain Science and Intelligence Technology, Beijing 100190, People’s Republic of China
Recommended Citation:
GB/T 7714: Wei Feng, Fei Yin, Xu-Yao Zhang, et al. Residual Dual Scale Scene Text Spotting by Fusing Bottom-Up and Top-Down Processing[J]. International Journal of Computer Vision, 2020, 1(38): 1872–1885.
APA: Wei Feng, Fei Yin, Xu-Yao Zhang, Wenhao He, & Cheng-Lin Liu. (2020). Residual Dual Scale Scene Text Spotting by Fusing Bottom-Up and Top-Down Processing. International Journal of Computer Vision, 1(38), 1872–1885.
MLA: Wei Feng, et al. "Residual Dual Scale Scene Text Spotting by Fusing Bottom-Up and Top-Down Processing." International Journal of Computer Vision 1.38 (2020): 1872–1885.

Deposit Method: OAI Harvesting

Source: Institute of Automation


Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.