AUB ScholarWorks

Optimizing Sparse Matrix Multiplication for Sparse Deep Neural Networks on GPUs

dc.contributor.advisor El Hajj, Izzat
dc.contributor.author El Masry, Bachir
dc.date.accessioned 2021-05-11T10:23:13Z
dc.date.available 2021-05-11T10:23:13Z
dc.date.issued 2021-05-11
dc.identifier.uri http://hdl.handle.net/10938/22846
dc.description Dr. George Turkiyyah, Dr. Shady Elbassuoni
dc.description.abstract Deep Neural Networks (DNNs) require substantial computational power and memory. Sparsifying the network has therefore been proposed as a technique to reduce the computational complexity of DNNs. Parallelizing the resulting sparse computations, however, introduces challenges such as load balancing and memory management. Many studies have tackled these problems, some on CPUs and, more recently, on GPUs. Since modern GPUs promise much higher peak floating-point performance and memory bandwidth than CPUs, we base our study on running DNNs on GPUs. Prior work has demonstrated the efficiency of GPUs in handling sparse matrices. Our aim is to further explore the effect of combining various sparse storage formats on the GPU while testing different tiling strategies. We also propose a technique for better memory utilization.
dc.language.iso en
dc.subject GPU
dc.subject 2D Tiling
dc.subject SpDNN
dc.subject Sparse Storage Formats
dc.subject Sparse Matrix Multiplication
dc.title Optimizing Sparse Matrix Multiplication for Sparse Deep Neural Networks on GPUs
dc.type Thesis
dc.contributor.department Department of Computer Science
dc.contributor.faculty Faculty of Arts and Sciences
dc.contributor.institution American University of Beirut
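
For context, the core kernel the abstract refers to is the sparse-times-dense matrix product (SpMM) used in sparse DNN inference. Below is a minimal, unoptimized CUDA sketch of such a kernel assuming the CSR storage format, with one thread per output element. All names and the data layout here are illustrative assumptions; the thesis's actual combinations of storage formats, 2D tiling strategies, and memory-utilization technique are not reproduced.

#include <cuda_runtime.h>

// Sparse (CSR, m x k) times dense (k x n, row-major) product, C = A * B.
// One thread computes one element of the m x n output C.
__global__ void spmm_csr(int m, int n,
                         const int*   __restrict__ rowPtr, // CSR row offsets, length m+1
                         const int*   __restrict__ colIdx, // column indices of nonzeros
                         const float* __restrict__ vals,   // nonzero values
                         const float* __restrict__ B,      // dense input, k x n, row-major
                         float*       __restrict__ C)      // dense output, m x n, row-major
{
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row >= m || col >= n) return;

    float acc = 0.0f;
    // Accumulate over the nonzeros of this sparse row only.
    for (int k = rowPtr[row]; k < rowPtr[row + 1]; ++k)
        acc += vals[k] * B[colIdx[k] * n + col];
    C[row * n + col] = acc;
}

An optimized version along the lines the abstract describes would additionally tile the dense operands into shared memory and select the storage format per layer; this sketch only fixes the baseline computation.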

