t-arnold / gpu-python-tutorial

GPU Development in Python 101 tutorial

Welcome to the GPU Development in Python 101 tutorial.

Over the last two years I’ve gotten to grips with the fundamentals of writing accelerated code in Python. I was amazed to discover that I didn’t need to learn C++ and didn’t need new development tools. Writing GPU code in Python is easier today than ever, and in this tutorial I will share what I’ve learned and how you can get started accelerating your own code.

In this tutorial we will cover:

  • What is a GPU and why is it different to a CPU?
  • An overview of the CUDA development model.
  • Numba: A high-performance compiler for Python (see the first sketch after this list).
  • Writing your first GPU code in Python.
  • Managing memory.
  • Understanding what your GPU is doing with pyNVML (memory usage, utilization, etc.), as sketched below.
  • RAPIDS: A suite of GPU-accelerated data science libraries (see the final sketch below).
  • Working with NumPy-style arrays on the GPU.
  • Working with pandas-style dataframes on the GPU.
  • Performing scikit-learn-style machine learning on the GPU.
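
To give a flavour of the Numba and memory-management sections, here is a minimal sketch (illustrative only, not code from the tutorial notebooks) of a CUDA kernel written with Numba. It assumes a CUDA-capable GPU plus the numba and numpy packages; the kernel name and array size are made up for the example.

```python
import numpy as np
from numba import cuda


@cuda.jit
def add_one(arr):
    # Each thread handles a single element of the array
    i = cuda.grid(1)
    if i < arr.size:
        arr[i] += 1


data = np.zeros(1_000_000, dtype=np.float32)

# Copy the data to the GPU explicitly, so we control when transfers happen
d_data = cuda.to_device(data)

# Launch enough blocks of 128 threads to cover every element
threads_per_block = 128
blocks = (data.size + threads_per_block - 1) // threads_per_block
add_one[blocks, threads_per_block](d_data)

# Copy the result back to host memory
result = d_data.copy_to_host()
print(result[:5])  # [1. 1. 1. 1. 1.]
```

The explicit cuda.to_device and copy_to_host calls are the heart of the memory-management topic: transfers between host and device memory are usually the cost to keep an eye on.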
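
Similarly, a short sketch of the kind of query the pyNVML section covers, assuming the pynvml package is installed and at least one NVIDIA GPU is visible to the driver:

```python
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

# How much memory is in use, and how busy is the device right now?
mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
util = pynvml.nvmlDeviceGetUtilizationRates(handle)

print(f"Memory used: {mem.used / 1e9:.2f} GB of {mem.total / 1e9:.2f} GB")
print(f"GPU utilization: {util.gpu}%")

pynvml.nvmlShutdown()
```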

Attendees will be expected to have a general knowledge of Python and programming concepts, but no GPU experience will be necessary. The key takeaway for attendees will be the knowledge that they don’t have to do much differently to get their code running on a GPU.
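
As a taste of that takeaway, here is one more hedged sketch (again, not lifted from the notebooks) of the RAPIDS libraries the later sections use: cuDF provides pandas-style dataframes in GPU memory and cuML provides scikit-learn-style estimators, so the code looks almost identical to its CPU equivalent. The toy data and the choice of LinearRegression are illustrative assumptions.

```python
import cudf
from cuml.linear_model import LinearRegression

# A pandas-style dataframe that lives in GPU memory
df = cudf.DataFrame({
    "x": [1.0, 2.0, 3.0, 4.0, 5.0],
    "y": [2.1, 4.2, 5.9, 8.1, 10.0],
})
print(df.describe())  # familiar pandas-style API, executed on the GPU

# A scikit-learn-style estimator, trained on the GPU
model = LinearRegression()
model.fit(df[["x"]], df["y"])
print(model.coef_)
```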

