Chinese Academy of Sciences Institutional Repositories Grid (CAS IR Grid)
Institutions:
Institute of Computing Technology [3]
Institute of Automation [3]
Changchun Institute of Optics, Fine Mechanics and Physics [1]
Harvest method:
OAI harvesting [7]
Content type:
Journal article [6]
Conference paper [1]
Publication year:
2023 [1]
2022 [2]
2021 [1]
2020 [1]
2018 [1]
2006 [1]
Browse/search results: 7 records in total, showing records 1-7
CoAxNN: Optimizing on-device deep learning with conditional approximate neural networks
Journal article | OAI harvesting
JOURNAL OF SYSTEMS ARCHITECTURE, 2023, vol. 143, pages: 14
Authors: Li, Guangli; Ma, Xiu; Yu, Qiuchu; Liu, Lei; Liu, Huaxiao
Views/Downloads: 18/0 | Submitted: 2023/12/04
Keywords: On-device deep learning; Efficient neural networks; Model approximation and optimization
Sampling Methods for Efficient Training of Graph Convolutional Networks: A Survey
Journal article | OAI harvesting
IEEE-CAA JOURNAL OF AUTOMATICA SINICA, 2022, vol. 9, no. 2, pages: 205-234
Authors: Liu, Xin; Yan, Mingyu; Deng, Lei; Li, Guoqi; Ye, Xiaochun
Views/Downloads: 41/0 | Submitted: 2022/06/21
Keywords: Efficient training; graph convolutional networks (GCNs); graph neural networks (GNNs); sampling method
Sampling Methods for Efficient Training of Graph Convolutional Networks: A Survey
Journal article | OAI harvesting
IEEE/CAA Journal of Automatica Sinica, 2022, vol. 9, no. 2, pages: 205-234
Authors: Xin Liu; Mingyu Yan; Lei Deng; Guoqi Li; Xiaochun Ye
Views/Downloads: 58/0 | Submitted: 2021/11/03
Keywords: Efficient training; graph convolutional networks (GCNs); graph neural networks (GNNs); sampling method
ECBC: Efficient Convolution via Blocked Columnizing
Journal article | OAI harvesting
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2021, pages: 13
Authors: Zhao, Tianli; Hu, Qinghao; He, Xiangyu; Xu, Weixiang; Wang, Jiaxing
Views/Downloads: 42/0 | Submitted: 2022/01/27
Keywords: Convolution; Tensors; Layout; Memory management; Indexes; Transforms; Performance evaluation; Convolutional neural networks (CNNs); direct convolution; high performance computing for mobile devices; im2col convolution; memory-efficient convolution (MEC)
A Quantitative Exploration of Collaborative Pruning and Approximation Computing Towards Energy Efficient Neural Networks
Journal article | OAI harvesting
IEEE DESIGN & TEST, 2020, vol. 37, no. 1, pages: 36-45
Authors: He, Xin; Yan, Guihai; Lu, Wenyan; Zhang, Xuan; Liu, Ke
Views/Downloads: 32/0 | Submitted: 2020/12/10
Keywords: Resilience; Energy consumption; Approximate computing; Collaboration; Computational modeling; Artificial neural networks; Optimization; Neural network; Energy efficient computing; Network pruning
Efficient coding matters in the organization of the early visual system
Journal article | OAI harvesting
NEURAL NETWORKS, 2018, vol. 105, pages: 218-226
Authors: Kong, Qingqun; Han, Jiuqi; Zeng, Yi; Xu, Bo
Views/Downloads: 38/0 | Submitted: 2018/10/10
Keywords: Early Visual Stages; Hierarchical Structure; Efficient Coding; Brain-inspired Neural Networks
Lossless wavelet compression on medical image (EI CONFERENCE)
Conference paper | OAI harvesting
4th International Conference on Photonics and Imaging in Biology and Medicine, September 3, 2005 - September 6, 2005, Tianjin, China
Authors: Liu H.
Views/Downloads: 37/0 | Submitted: 2013/03/25
Abstract: An increasing amount of medical imagery is created directly in digital form. Clinical image archiving and communication systems (PACS), as well as telemedicine networks, require the storage and transmission of this huge amount of medical image data, so efficient compression of these data is crucial. Several lossless and lossy techniques for compressing such data have been proposed. Lossless techniques allow exact reconstruction of the original imagery, while lossy techniques aim to achieve high compression ratios by allowing some acceptable degradation of the image. Lossless compression does not degrade the image, thus facilitating accurate diagnosis, but at the expense of higher bit rates, i.e. lower compression ratios. Various methods for both lossy (irreversible) and lossless (reversible) image compression have been proposed in the literature. Recent advances in lossy compression include methods such as vector quantization, wavelet coding, neural networks, and fractal coding. Although these methods can achieve high compression ratios (of the order of 50:1 or even more), they do not allow exact reconstruction of the original input data. Lossless compression techniques permit perfect reconstruction of the original image, but the achievable compression ratios are only of the order of 2:1 up to 4:1. In our paper, we use a kind of lifting scheme to generate truly lossless non-linear integer-to-integer wavelet transforms. At the same time, we exploit a coding algorithm that produces an embedded code, which has the property that the bits in the bit stream are generated in order of importance, so that all the low-rate codes are included at the beginning of the bit stream. Typically, the encoding process stops when the target bit rate is met. Similarly, the decoder can interrupt the decoding process at any point in the bit stream and still reconstruct the image. Therefore, a compression scheme generating an embedded code can start by sending the coarser version of the image over the network first, and continue with progressive transmission of the refinement details. Experimental results show that our method achieves excellent performance in compression ratio and reconstructed image quality.
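The key property the abstract relies on is that a lifting step adds an integer-valued function of one band to the other band, so it can be subtracted back exactly and the transform is reversible in integer arithmetic. As a minimal illustrative sketch only (the paper's actual filters and embedded coder are not given here, and the function names below are hypothetical), the following Python code shows a one-level integer Haar transform built from a predict step and an update step, with an exactly lossless round trip:

```python
# Minimal sketch of an integer-to-integer wavelet transform via lifting
# (assumption: one-level Haar/S-transform, not the paper's actual filters).

def forward_haar_lifting(x):
    """Split an even-length list of integers into approximation and detail bands."""
    assert len(x) % 2 == 0, "this sketch assumes an even-length signal"
    approx, detail = [], []
    for i in range(0, len(x), 2):
        d = x[i + 1] - x[i]      # predict step: detail = odd sample - even sample
        s = x[i] + (d >> 1)      # update step: floor(d/2) keeps the running mean
        detail.append(d)
        approx.append(s)
    return approx, detail

def inverse_haar_lifting(approx, detail):
    """Exactly reconstruct the original integers from the two bands."""
    x = []
    for s, d in zip(approx, detail):
        even = s - (d >> 1)      # undo the update step (same floor division)
        odd = even + d           # undo the predict step
        x.extend([even, odd])
    return x

if __name__ == "__main__":
    row = [12, 15, 14, 14, 200, 198, 50, 53]      # e.g. one row of pixel values
    a, d = forward_haar_lifting(row)
    assert inverse_haar_lifting(a, d) == row       # lossless round trip
    print(a, d)
```

Because every operation rounds with the same floor division in both directions, no information is lost; a progressive (embedded) coder can then transmit the approximation band first and refine with the detail band, in the spirit described in the abstract.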