Chinese Academy of Sciences Institutional Repositories Grid (CAS IR Grid)
Institution
Changchun Institute of Optics, Fine Mechanics and Physics [2]
Institute of Software [1]
Harvest Method
OAI Harvest [3]
Content Type
Conference Paper [2]
Journal Article [1]
Publication Date
2011 [2]
2004 [1]
Browse/Search Results: 3 records in total, showing 1-3
Multi-focus image fusion algorithm based on adaptive PCNN and wavelet transform (EI CONFERENCE)
Conference Paper | OAI Harvest
International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, May 24, 2011 - May 26, 2011, Beijing, China
Wu Z.-G.; Wang M.-J.; Han G.-L.
Views/Downloads: 72/0 | Submitted: 2013/03/25
As an efficient method of information fusion, image fusion has been applied in many fields such as machine vision, medical diagnosis, military applications and remote sensing. In this paper, the Pulse Coupled Neural Network (PCNN) is introduced into this field for its useful properties in image processing, including segmentation and target recognition, and a novel multi-focus image fusion algorithm based on the PCNN and the wavelet transform is proposed. First, the two source images are decomposed by the wavelet transform. Then a PCNN-based fusion rule in the wavelet domain is given. The algorithm uses the wavelet coefficient in each frequency band as the linking strength, so that its value is chosen adaptively. The wavelet coefficients are mapped to the image gray-scale range, and the output threshold function attenuates toward the minimum gray level over time, so every pixel eventually fires. The PCNN output at each iteration is therefore the set of wavelet coefficients that fire at that iteration's threshold level, and the firing sequence of the coefficients represents the firing time of each neuron. Mapping each neuron's firing time back to the image gray-scale range yields a firing-time map, from which it can be judged whether the features covered by a neuron are salient or not. The fusion coefficients are decided by a compare-selection operator applied to the firing-time gradient maps, and the fused image is reconstructed by the inverse wavelet transform. Furthermore, so that the order of the firing times is fully reflected, the threshold adjusting constant is estimated from an appointed iteration number, which ensures that once the iterations are finished every wavelet coefficient has fired. To verify the effectiveness of the proposed rules, experiments on multi-focus images were carried out, and comparative fusion-quality evaluations are listed. The experimental results show that the method effectively enhances edge details and improves the spatial resolution of the image. © 2011 SPIE.
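The fusion rule summarized in this abstract can be sketched in code. The fragment below is a minimal illustration only, not the authors' implementation: it uses NumPy and PyWavelets (pywt), treats each wavelet coefficient as one PCNN neuron whose normalized magnitude serves as the adaptive linking strength, and simplifies the compare-selection step to picking, per coefficient, the source image whose neuron fires first. The helper names (pcnn_firing_time, fuse_multifocus) and all parameter values are assumptions.

```python
# Hedged sketch of a PCNN + wavelet multi-focus fusion rule.
# Assumptions: normalized coefficient magnitude as linking strength,
# exponentially decaying threshold, earlier firing = stronger detail.
import numpy as np
import pywt

def pcnn_firing_time(coeffs, iterations=50, alpha_theta=0.2, v_theta=20.0):
    """Return, per coefficient, the iteration at which its neuron first fires."""
    f = np.abs(coeffs)
    f = f / (f.max() + 1e-12)                # map magnitudes into [0, 1]
    beta = f                                 # adaptive linking strength (assumption)
    theta = np.ones_like(f)                  # threshold decays over the iterations
    fired = np.zeros_like(f, dtype=bool)
    fire_time = np.full(f.shape, iterations, dtype=float)
    y = np.zeros_like(f)
    for t in range(iterations):
        u = f * (1.0 + beta * y)             # internal activity with linking term
        y = (u > theta).astype(float)        # neurons above threshold fire
        newly = (y > 0) & (~fired)
        fire_time[newly] = t                 # record first firing time only
        fired |= newly
        theta = theta * np.exp(-alpha_theta) + v_theta * y  # decay + refractory boost
    return fire_time

def fuse_multifocus(img_a, img_b, wavelet="db2", level=3):
    """Fuse two registered multi-focus images by choosing, per sub-band
    coefficient, the source whose PCNN neuron fires earlier."""
    ca = pywt.wavedec2(img_a.astype(float), wavelet, level=level)
    cb = pywt.wavedec2(img_b.astype(float), wavelet, level=level)
    fused = [np.where(pcnn_firing_time(ca[0]) <= pcnn_firing_time(cb[0]), ca[0], cb[0])]
    for da, db in zip(ca[1:], cb[1:]):       # detail sub-bands per level
        fused.append(tuple(
            np.where(pcnn_firing_time(a) <= pcnn_firing_time(b), a, b)
            for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)
```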
TextureGrow: Object recognition and segmentation with limit prior knowledge (EI CONFERENCE)
Conference Paper | OAI Harvest
2011 International Conference on Network Computing and Information Security, NCIS 2011, May 14, 2011 - May 15, 2011, Guilin, Guangxi, China
Yao Z.; Han Q.
Views/Downloads: 24/0 | Submitted: 2013/03/25
In this paper we present a new method for automatic visual recognition and semantic segmentation of photographs. Our fast, automatic approach is based on cellular automata. Most existing approaches to recognition and segmentation rely on statistical or structural properties of image attributes, and most of them need plenty of samples and prior knowledge. Here, given only a few representative samples, we first extract the texture features and structure of each component, then randomly select approximate locations of the objects or of patches of them, and then use a cellular automaton to "grow" regions based on the texture of the different objects. The growing process stops when the grown texture reaches an object boundary. These steps yield a new method that uses very few samples and little prior experience while recognizing and segmenting objects quickly and accurately. We found that the proposed method gives competitive results with limited experience and samples. © 2011 IEEE.
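The growing step described in this abstract can be illustrated with a small sketch. The Python fragment below is a generic texture-driven cellular-automaton region grower written under stated assumptions, not the TextureGrow algorithm itself: the texture descriptor (local mean and standard deviation), the 4-neighbor update rule, the similarity threshold, and the function names are all illustrative.

```python
# Hedged sketch: seeded region growing driven by a cellular-automaton update,
# stopping where the local texture departs from the seed texture.
import numpy as np
from scipy.ndimage import uniform_filter

def texture_descriptor(gray, win=7):
    """Per-pixel texture as (local mean, local standard deviation)."""
    mean = uniform_filter(gray, win)
    sq_mean = uniform_filter(gray * gray, win)
    std = np.sqrt(np.maximum(sq_mean - mean * mean, 0.0))
    return np.stack([mean, std], axis=-1)

def ca_texture_grow(gray, seeds, threshold=12.0, max_iters=500):
    """Grow labeled regions from seed pixels.

    seeds: dict mapping label (positive int) -> list of (row, col) coordinates.
    Each sweep, an unlabeled cell adopts a 4-neighbor's label if its texture
    descriptor is close enough to that label's reference texture, so growth
    stops at texture boundaries. Wrap-around at the image borders caused by
    np.roll is ignored for brevity.
    """
    desc = texture_descriptor(gray.astype(float))
    labels = np.zeros(gray.shape, dtype=int)
    ref = {}
    for lab, pts in seeds.items():
        for r, c in pts:
            labels[r, c] = lab
        ref[lab] = desc[tuple(np.array(pts).T)].mean(axis=0)   # reference texture per label
    dists = {lab: np.linalg.norm(desc - ref_desc, axis=-1)     # distance to each reference
             for lab, ref_desc in ref.items()}
    for _ in range(max_iters):
        changed = False
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):      # 4-neighborhood
            shifted = np.roll(labels, (dr, dc), axis=(0, 1))
            for lab in ref:
                grow = (labels == 0) & (shifted == lab) & (dists[lab] < threshold)
                if grow.any():
                    labels[grow] = lab
                    changed = True
        if not changed:
            break
    return labels
```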
SEPARATING FIGURES, MATHEMATICAL FORMULAS AND JAPANESE TEXT FROM FREE HANDWRITING IN MIXED ONLINE DOCUMENTS
Journal Article | OAI Harvest
International Journal of Pattern Recognition and Artificial Intelligence, 2004, Volume: 18, Issue: 7, Pages: 1173-1187
KEISUKE MOCHIDA; MASAKI NAKAGAWA
Views/Downloads: 18/0 | Submitted: 2010/05/13
Keywords: pen interfaces; online handwritten patterns; pattern segmentation; stroke classification; probabilistic model; segmentation by recognition; neural network