Dr. Ayaz H. Khan's repositories
RT-CUDA-GUI-Development
Recent developments in Graphics Processing Units (GPUs) have posed a new challenge in harnessing their computing power as a general-purpose computing paradigm through CUDA parallel programming. However, porting applications to CUDA remains a challenge for average programmers. We have developed a restructuring software compiler (RT-CUDA) with the best possible kernel optimizations to bridge the gap between high-level languages and the machine-dependent CUDA environment. RT-CUDA is based upon a set of compiler optimizations. It takes a C-like program and converts it into an optimized CUDA kernel, with user directives in a configuration file guiding the compiler. While the invocation of external libraries is not possible with the commercial OpenACC compiler, RT-CUDA allows transparent invocation of the most optimized external math libraries such as cuSPARSE and cuBLAS. For this, RT-CUDA uses interfacing APIs, error-handling interpretation, and user-transparent programming, which enables efficient design of linear algebra solvers (LAS). RT-CUDA has been evaluated on a Tesla K20c GPU with a variety of basic linear algebra operators (M+, MM, MV, VV, etc.) as well as solvers of systems of linear equations such as Jacobi and Conjugate Gradient. We obtained significant speedup over other compilers such as OpenACC and GPGPU compilers. RT-CUDA facilitates the design of efficient parallel software for developing parallel simulators (reservoir simulators, molecular dynamics, etc.), which are critical for the oil and gas industry. We expect RT-CUDA to be useful to many industries running science and engineering simulations on massively parallel computers such as NVIDIA GPUs.
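To illustrate the kind of plain-C input a restructuring compiler like RT-CUDA targets, here is a minimal Jacobi solver sketch. This is an assumption-laden example, not code from the RT-CUDA repository: the function name, fixed system size, and test matrix are all invented for illustration.

```c
#define N 3  /* small fixed system size, for illustration only */

/* One Jacobi sweep per iteration:
   x_new[i] = (b[i] - sum_{j != i} A[i][j] * x[j]) / A[i][i].
   Plain C loops of this shape are what a restructuring compiler
   would map onto a CUDA kernel (one thread per row i). */
void jacobi(const double A[N][N], const double b[N],
            double x[N], int iters) {
    double x_new[N];
    for (int k = 0; k < iters; k++) {
        for (int i = 0; i < N; i++) {
            double s = b[i];
            for (int j = 0; j < N; j++)
                if (j != i)
                    s -= A[i][j] * x[j];
            x_new[i] = s / A[i][i];
        }
        for (int i = 0; i < N; i++)
            x[i] = x_new[i];  /* commit the sweep */
    }
}
```

For a diagonally dominant system such as A = [[4,-1,0],[-1,4,-1],[0,-1,4]], b = [3,2,3], starting from x = 0, the iteration converges to the exact solution (1, 1, 1).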
code-samples
Source code examples from the Parallel Forall Blog
CPPE-Dataset
Code for our paper CPPE-5 (Medical Personal Protective Equipment), a new, challenging object detection dataset
cs344
Introduction to Parallel Programming class code
datasharing
The Leek group guide to data sharing
helloworld-RPC
A VERY simple RPC example to start with.
Image-Recognition-Tutorial-using-MXNet-with-Docker
This is an extension of the tutorial available at https://www.r-bloggers.com/image-recognition-tutorial-in-r-using-deep-convolutional-neural-networks-mxnet-package/ for an image recognition example using MXNet. Users can easily build a Docker image for the required environment and start running the example directly, with no need to install Linux or the other required packages.
jetbot_ros
ROS nodes and Gazebo model for NVIDIA JetBot with Jetson Nano
module3
GitHub Campus Advisor Module 3
morphologica
A library of supporting code for numerical modelling (JSON config, HDF5 data, Modern OpenGL visualization)
NeMo
NeMo: a toolkit for conversational AI
padding_free_matrix_transpose_gpu
Advances in Graphics Processing Unit (GPU) technology and the introduction of the CUDA programming model facilitate the development of new solutions for sparse and dense linear algebra solvers. Matrix transpose is an important linear algebra procedure with a deep impact on various computational science and engineering applications. Several factors hinder the expected performance of large matrix transpose on GPU devices; the degradation stems from the memory access pattern, namely coalesced access in global memory and bank conflicts in the shared memory of the streaming multiprocessors within the GPU. In this paper, two matrix transpose algorithms are proposed to alleviate these issues by ensuring coalesced access and conflict-free bank access. The proposed algorithms have execution times comparable with the NVIDIA SDK bank-conflict-free matrix transpose implementation. Their main advantage is that they eliminate bank conflicts while allocating shared memory exactly equal to the tile size (T x T) of the problem space, whereas, to the best of our knowledge, previously published implementations need to allocate extra space of T x (T+1). We have also applied the proposed transpose algorithm to the recursive Gaussian implementation in the NVIDIA SDK and achieved about 6% improvement in performance.
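For context, the standard tiled-transpose scheme the abstract refers to can be sketched on the CPU. This is a hedged illustration of the conventional T x (T+1)-padded approach that the paper improves upon, not the paper's padding-free algorithm; the function name and tile size are invented, and `n` is assumed to be a multiple of `T`.

```c
#define T 4  /* tile size, chosen arbitrarily for this sketch */

/* Blocked (tiled) transpose of an n x n row-major matrix, the CPU
   analogue of the GPU algorithm: each T x T tile is staged in a small
   buffer (shared memory on the GPU) and written back transposed, so
   both the reads and the writes walk contiguous rows (coalesced access
   on a GPU). The buffer is padded to T x (T+1): on a GPU the extra
   column shifts each row to a different shared-memory bank, avoiding
   bank conflicts -- this is exactly the extra space the paper's
   algorithms eliminate. */
void transpose_tiled(const float *in, float *out, int n) {
    float tile[T][T + 1];  /* +1 column: the bank-conflict workaround */
    for (int bi = 0; bi < n; bi += T)
        for (int bj = 0; bj < n; bj += T) {
            /* stage one tile, reading rows of the input */
            for (int i = 0; i < T; i++)
                for (int j = 0; j < T; j++)
                    tile[i][j] = in[(bi + i) * n + (bj + j)];
            /* write it back transposed, writing rows of the output */
            for (int i = 0; i < T; i++)
                for (int j = 0; j < T; j++)
                    out[(bj + i) * n + (bi + j)] = tile[j][i];
        }
}
```

On the CPU the padding column is harmless but wasted; the paper's contribution is achieving the same conflict-free behavior on the GPU with exactly T x T of shared memory.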
ParallelProgrammingwithOpenMP
Training Material
programming_examples
Programming Examples taught in class
RStudio
A repository that will be linked with RStudio
rticonnextdds-connector-py
RTI Connector for Connext DDS is a lightweight technology that enables DDS data to be accessed with Python.
Strassen-Matrix-Multiplication---Parallel-Implementations
Parallel implementations of Strassen matrix multiplication and its variant, Winograd, using different parallel programming platforms.
swirl_courses
:mortar_board: A collection of interactive courses for the swirl R package.
tflearn
Deep learning library featuring a higher-level API for TensorFlow.