Chinese Academy of Sciences Institutional Repositories Grid
Understanding the tradeoffs between software-managed vs. hardware-managed caches in GPUs

Document Type: Conference Paper

Authors: Li, Chao (1); Yang, Yi (2); Dai, Hongwen (1); Yan, Shengen (3); Mueller, Frank (4); Zhou, Huiyang (1)
Publication Date: 2014
Conference: 2014 IEEE International Symposium on Performance Analysis of Systems and Software, ISPASS 2014
Conference Dates: March 23, 2014 - March 25, 2014
Conference Venue: Monterey, CA, United States
Pages: 231-242
Abstract: On-chip caches are commonly used in computer systems to hide long off-chip memory access latencies. To manage on-chip caches, either software-managed or hardware-managed schemes can be employed. State-of-the-art accelerators, such as the NVIDIA Fermi or Kepler GPUs and Intel's forthcoming MIC 'Knights Landing' (KNL), support both software-managed caches, a.k.a. shared memory (GPUs) or near memory (KNL), and hardware-managed L1 data caches (D-caches). Furthermore, shared memory and the L1 D-cache on a GPU utilize the same physical storage, and their capacity can be configured at runtime (the same holds for KNL). In this paper, we present an in-depth study to reveal interesting and sometimes unexpected tradeoffs between shared memory and the hardware-managed L1 D-caches in GPU architecture. In our study, the kernels utilizing the L1 D-caches are generated from those leveraging shared memory to ensure that the same optimizations, such as tiling, are applied equally in both versions. Our detailed analyses reveal that, rather than cache hit rates, the following tradeoffs often have more profound performance impacts. On one hand, the kernels utilizing the L1 caches may support higher degrees of thread-level parallelism, offer more opportunities for data to be allocated in registers, and sometimes result in lower dynamic instruction counts. On the other hand, the applications utilizing shared memory enable more coalesced accesses and tend to achieve higher degrees of memory-level parallelism. Overall, our results show that most benchmarks perform significantly better with shared memory than with the L1 D-caches due to the high impact of memory-level parallelism and memory coalescing. © 2014 IEEE.
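To make the contrast concrete, below is a minimal CUDA sketch (not taken from the paper) of the two kernel styles the abstract compares: a tiled matrix multiply that stages tiles in software-managed shared memory, and the same computation rewritten to rely on the hardware-managed L1 D-cache. The kernel names and the assumption that n is a multiple of TILE are illustrative; cudaFuncSetCacheConfig is the standard runtime API for choosing the shared-memory/L1 split on Fermi/Kepler GPUs.

#include <cuda_runtime.h>

#define TILE 16  // tile width; assumes n is a multiple of TILE

// Software-managed: each thread block stages one tile of A and B in shared
// memory. The tile loads are coalesced (adjacent threads read adjacent
// addresses), and the staged data is reused TILE times before eviction.
__global__ void matmul_shared(const float *A, const float *B, float *C, int n)
{
    __shared__ float As[TILE][TILE];
    __shared__ float Bs[TILE][TILE];
    int row = blockIdx.y * TILE + threadIdx.y;
    int col = blockIdx.x * TILE + threadIdx.x;
    float acc = 0.0f;
    for (int t = 0; t < n / TILE; ++t) {
        As[threadIdx.y][threadIdx.x] = A[row * n + t * TILE + threadIdx.x];
        Bs[threadIdx.y][threadIdx.x] = B[(t * TILE + threadIdx.y) * n + col];
        __syncthreads();                     // tile fully loaded
        for (int k = 0; k < TILE; ++k)
            acc += As[threadIdx.y][k] * Bs[k][threadIdx.x];
        __syncthreads();                     // safe to overwrite tile
    }
    C[row * n + col] = acc;
}

// Hardware-managed: the same computation reads A and B straight from global
// memory and leaves data reuse to the L1 D-cache. No shared memory is
// consumed and no barriers are needed (which can raise thread-level
// parallelism), but whether accesses coalesce and hit in L1 is up to the
// hardware rather than the programmer.
__global__ void matmul_l1(const float *A, const float *B, float *C, int n)
{
    int row = blockIdx.y * TILE + threadIdx.y;
    int col = blockIdx.x * TILE + threadIdx.x;
    float acc = 0.0f;
    for (int k = 0; k < n; ++k)
        acc += A[row * n + k] * B[k * n + col];
    C[row * n + col] = acc;
}

void configure_cache_split(void)
{
    // Fermi/Kepler share one physical array between shared memory and L1;
    // bias it toward whichever structure each kernel actually uses.
    cudaFuncSetCacheConfig(matmul_shared, cudaFuncCachePreferShared);
    cudaFuncSetCacheConfig(matmul_l1, cudaFuncCachePreferL1);
}

Note how the L1 version drops both the shared-memory arrays and the __syncthreads() barriers, which mirrors the tradeoff the abstract describes: more registers and thread-level parallelism on one side, guaranteed coalescing and higher memory-level parallelism on the other.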
Indexed By: EI
Proceedings Publisher: IEEE Computer Society
Language: English
ISBN: 9781479936052
Source URL: http://ir.iscas.ac.cn/handle/311060/16614
Collection: Institute of Software_Institute of Software Library_Conference Papers
Recommended Citation
GB/T 7714
Li, Chao, Yang, Yi, Dai, Hongwen, et al. Understanding the tradeoffs between software-managed vs. hardware-managed caches in GPUs[C]. In: 2014 IEEE International Symposium on Performance Analysis of Systems and Software, ISPASS 2014. Monterey, CA, United States. March 23, 2014 - March 25, 2014: 231-242.

Deposit Method: OAI Harvesting

Source: Institute of Software

