Chinese Academy of Sciences Institutional Repositories Grid (CAS IR Grid)
Institutions
  Institute of Computing Technology [7]
  Institute of Automation [5]
  Chongqing Institute of Green and Intelligent Tech... [1]
Harvesting Method
  OAI Harvest [13]
Content Type
  Journal Article [11]
  Conference Paper [2]
Publication Date
  2024 [2]
  2023 [2]
  2022 [2]
  2021 [4]
  2020 [2]
  2017 [1]
Browse/Search Results: 13 items in total, showing 1-10
An adaptive joint optimization framework for pruning and quantization
Journal Article | OAI Harvest
INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2024, pages: 17
Authors: Li, Xiaohai; Yang, Xiaodong; Zhang, Yingwei; Yang, Jianrong; Chen, Yiqiang
Views/Downloads: 5/0 | Submitted: 2024/12/06
Keywords: Model compression; Network pruning; Quantization; Mutual learning; Multi-teacher knowledge distillation
Fpar: filter pruning via attention and rank enhancement for deep convolutional neural networks acceleration
Journal Article | OAI Harvest
INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2024, pages: 13
Authors: Chen, Yanming; Wu, Gang; Shuai, Mingrui; Lou, Shubin; Zhang, Yiwen
Views/Downloads: 18/0 | Submitted: 2024/05/20
Keywords: Neural network; Model compression; Filter pruning; Attention; Rank enhancement; CNNs
BitXpro: Regularity-Aware Hardware Runtime Pruning for Deep Neural Networks
Journal Article | OAI Harvest
IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS, 2023, vol. 31, no. 1, pages: 90-103
Authors: Li, Hongyan; Lu, Hang; Wang, Haoxuan; Deng, Shengji; Li, Xiaowei
Views/Downloads: 15/0 | Submitted: 2023/07/12
Keywords: Deep learning accelerator; deep neural network (DNN); hardware runtime pruning
Pruning-aware Sparse Regularization for Network Pruning
Journal Article | OAI Harvest
Machine Intelligence Research, 2023, vol. 20, no. 1, pages: 109-120
Authors: Nan-Fei Jiang
Views/Downloads: 3/0 | Submitted: 2024/04/23
Keywords: Deep learning; convolutional neural network (CNN); model compression and acceleration; network pruning; regularization
Accelerating deep neural network filter pruning with mask-aware convolutional computations on modern CPUs
Journal Article | OAI Harvest
NEUROCOMPUTING, 2022, vol. 505, pages: 375-387
Authors: Ma, Xiu; Li, Guangli; Liu, Lei; Liu, Huaxiao; Wang, Xueying
Views/Downloads: 20/0 | Submitted: 2023/07/12
Keywords: Deep learning systems; Neural network compression; Filter pruning
Multi-Granularity Pruning for Model Acceleration on Mobile Devices
Conference Paper | OAI Harvest
Online, 2022-07
Authors: Zhao TL (赵天理); Zhang X (张希); Zhu WT (朱文涛); Wang JX (王家兴); Yang S (杨森)
Views/Downloads: 16/0 | Submitted: 2023/06/21
Keywords: Deep Neural Networks; Network Pruning; Structured Pruning; Non-structured Pruning; Single Instruction Multiple Data
Comprehensive SNN Compression Using ADMM Optimization and Activity Regularization
Journal Article | OAI Harvest
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2021, pages: 15
Authors: Deng, Lei; Wu, Yujie; Hu, Yifan; Liang, Ling; Li, Guoqi
Views/Downloads: 35/0 | Submitted: 2022/06/21
Keywords: Neurons; Computational modeling; Quantization (signal); Optimization; Encoding; Task analysis; Synapses; Activity regularization; alternating direction method of multiplier (ADMM); connection pruning; spiking neural network (SNN) compression; weight quantization
Dynamical Conventional Neural Network Channel Pruning by Genetic Wavelet Channel Search for Image Classification
Journal Article | OAI Harvest
FRONTIERS IN COMPUTATIONAL NEUROSCIENCE, 2021, vol. 15, pages: 11
Authors: Chen, Lin; Gong, Saijun; Shi, Xiaoyu; Shang, Mingsheng
Views/Downloads: 38/0 | Submitted: 2021/12/28
Keywords: neural network pruning; neural architecture search; wavelet features; neural network compression; image classification
Dynamically Optimizing Network Structure Based on Synaptic Pruning in the Brain
Journal Article | OAI Harvest
FRONTIERS IN SYSTEMS NEUROSCIENCE, 2021, vol. 15, pages: 8
Authors: Zhao, Feifei; Zeng, Yi
Views/Downloads: 33/0 | Submitted: 2021/08/15
Keywords: synaptic pruning; developmental neural network; optimizing network structure; accelerating learning; compressing network
Roulette: A Pruning Framework to Train a Sparse Neural Network From Scratch
Journal Article | OAI Harvest
IEEE ACCESS, 2021, vol. 9, pages: 51134-51145
Authors: Zhong, Qiaoling; Zhang, Zhibin; Qiu, Qiang; Cheng, Xueqi
Views/Downloads: 28/0 | Submitted: 2021/12/01
Keywords: Network pruning; inference acceleration; model compression; multiple GPUs