Frontiers of Computer Science


Front. Comput. Sci. 2015, Vol. 9, Issue 3: 431-441. https://doi.org/10.1007/s11704-014-4127-1
RESEARCH ARTICLE
Memory bandwidth optimization of SpMV on GPGPUs
Chenggang Clarence YAN1,3, Hui YU1, Weizhi XU2,4,*, Yingping ZHANG5, Bochuan CHEN5, Zhu TIAN6, Yuxuan WANG6, Jian YIN6
1. Key Lab of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
2. Institute of Microelectronics, Tsinghua University, Beijing 100084, China
3. Automation Department, Tsinghua University, Beijing 100084, China
4. State Key Laboratory of Computer Architecture, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
5. State Grid Information & Communication Company of Hunan EPC, Changsha 410007, China
6. Department of Computer, Shandong University, Weihai 250101, China
Abstract

Improving the performance of sparse matrix-vector multiplication (SpMV) is important, and it is also difficult because of SpMV's irregular memory accesses. General-purpose GPUs (GPGPUs) provide high computing ability and substantial memory bandwidth, but SpMV cannot fully exploit either because of its irregularity. In this paper, we propose two novel methods to optimize the memory bandwidth of SpMV on GPGPUs. First, we propose a new storage format that exploits the memory bandwidth of the GPU architecture more efficiently by ensuring that as many non-zeros as possible are stored in a layout well suited to the GPU's memory system. Second, we propose a cache blocking method to improve SpMV performance on the GPU architecture: the sparse matrix is partitioned into sub-blocks, each stored in CSR format, so that the corresponding part of the vector x can be reused in the GPU cache, which greatly reduces global memory accesses for x. Experiments are carried out on three GPU platforms: GeForce 9800 GX2, GeForce GTX 480, and Tesla K40. The results show that both methods efficiently improve the utilization of GPU memory bandwidth and the overall performance of SpMV on the GPU.
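As background for the two methods, the CUDA sketch below shows a scalar CSR SpMV kernel, one thread per row, accumulating y += A*x. It is a minimal illustration of the baseline the paper optimizes, not the authors' implementation; the kernel name csr_spmv_accum and its parameter names are assumptions. Because x is gathered through col_idx, reads of x scatter across global memory; under the cache blocking idea described in the abstract, the matrix is partitioned column-wise into CSR sub-blocks, and launching such a kernel once per sub-block (with y zero-initialized beforehand) confines each launch's reads to a segment of x small enough to stay resident in the GPU cache.

#include <cuda_runtime.h>

// Scalar CSR SpMV: one thread per row, accumulating y += A * x.
// With y zeroed first, a single launch over the whole matrix is the
// standard baseline; launching once per column sub-block (each with its
// own CSR arrays holding global column indices) is one way to realize
// the cache blocking idea, since each launch then touches only a
// narrow segment of x.
__global__ void csr_spmv_accum(int num_rows,
                               const int   *row_ptr,  // num_rows + 1 entries
                               const int   *col_idx,  // column of each non-zero
                               const float *values,   // non-zero values
                               const float *x,        // dense input vector
                               float       *y)        // dense output vector
{
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < num_rows) {
        float dot = 0.0f;
        // The gather x[col_idx[j]] is the irregular access that limits
        // the achieved memory bandwidth of SpMV on GPUs.
        for (int j = row_ptr[row]; j < row_ptr[row + 1]; ++j)
            dot += values[j] * x[col_idx[j]];
        y[row] += dot;
    }
}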

Keywords: GPGPU, performance tuning, SpMV, cache blocking, memory bandwidth
Corresponding Author(s): Weizhi XU   
Issue Date: 18 May 2015
 Cite this article:   
Chenggang Clarence YAN, Hui YU, Weizhi XU, et al. Memory bandwidth optimization of SpMV on GPGPUs[J]. Front. Comput. Sci., 2015, 9(3): 431-441.
 URL:  
https://academic.hep.com.cn/fcs/EN/10.1007/s11704-014-4127-1
https://academic.hep.com.cn/fcs/EN/Y2015/V9/I3/431