waptaff / pytorch-ebuild

Ebuild infrastructure files for PyTorch and some related projects


Ebuild files and necessary patches for PyTorch

The project contains a portage directory subtree, which can be used for building and merging PyTorch in Gentoo-based distros.

How to use

AMD ROCm repository setup

This step is required only if AMD ROCm support is going to be enabled (the rocm USE flag). Please head over to the Current repository setup chapter below if you don't plan to build PyTorch against ROCm.

If you have a recent AMD GPU and would like to use it in PyTorch, you probably want to build PyTorch with AMD ROCm support. This requires registering an additional external portage repository with the ROCm infrastructure by @justxi.

Probably the easiest way to register an external portage repository in the system is to add it as an overlay. In short, create a small file in the /etc/portage/repos.conf directory (as root):

cat >> /etc/portage/repos.conf/justxi-rocm.conf << EOF
[justxi-rocm]
location = /var/db/repos/justxi-rocm
sync-type = git
sync-uri = https://github.com/justxi/rocm
auto-sync = yes
EOF
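If you'd prefer to sanity-check the overlay definition before touching /etc/portage, the same here-document can be written to a temporary file first. This is only a convenience sketch; the file portage actually reads is /etc/portage/repos.conf/justxi-rocm.conf:

```shell
# Sketch: write the overlay definition to a temporary file and
# sanity-check it before copying it into /etc/portage/repos.conf/ as root.
conf="$(mktemp)"
cat > "$conf" << EOF
[justxi-rocm]
location = /var/db/repos/justxi-rocm
sync-type = git
sync-uri = https://github.com/justxi/rocm
auto-sync = yes
EOF
# Basic checks: the section header is present and git sync is configured.
grep -q '^\[justxi-rocm\]$' "$conf" && grep -q '^sync-type = git$' "$conf" \
    && echo "repos.conf entry OK"
# Then, as root: cp "$conf" /etc/portage/repos.conf/justxi-rocm.conf
```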

Current repository setup

Now add our current portage repository to the system:

cat >> /etc/portage/repos.conf/aclex-pytorch.conf << EOF
[aclex-pytorch]
location = /var/db/repos/aclex-pytorch
sync-type = git
sync-uri = https://github.com/aclex/pytorch-ebuild
auto-sync = yes
EOF

Afterwards, sync the changes (emerge --sync, eix-sync, etc.) and merge the packages with the usual utilities, e.g. emerge.
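As a concrete command sequence (the package atom below is an assumption, not confirmed by this README; check the overlay tree for the exact category/name), the sync-and-merge step might look like:

```shell
# Sync all auto-sync repositories registered in /etc/portage/repos.conf/
emaint sync -a        # or: emerge --sync, eix-sync

# Merge PyTorch. The atom is an assumption; verify the real
# category/package name in the overlay first, e.g.:
#   ls /var/db/repos/aclex-pytorch
emerge --ask sci-libs/pytorch
```

Both commands need to be run as root.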

What's inside

  • libtorch (C++ core of PyTorch)
  • system-wide installation
  • Python binding (i.e. PyTorch itself) linked to the built libtorch instance (i.e. no additional rebuild)
  • BLAS selection
  • building official documentation
  • torchvision (CPU and CUDA support only at the moment)
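The feature list above is selected through USE flags. A hypothetical /etc/portage/package.use entry might look as follows (only the rocm flag is confirmed by this README; the other flag names and the package atoms are assumptions, verify them with `equery uses <atom>` or by reading the ebuilds in the overlay):

```
# /etc/portage/package.use/pytorch -- flag and atom names below are
# assumptions, check the ebuilds in the overlay for the real ones
sci-libs/pytorch rocm doc
sci-libs/torchvision cuda
```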

Some questions on ROCm support

How can I make use of ROCm inside PyTorch? It mimics CUDA inside PyTorch (and libtorch), so you can use the same code snippets you normally use to work with CUDA, e.g. .cuda(), torch.cuda.is_available() == True, etc.

Can I still build CUDA support along with ROCm support enabled? No, this is not currently possible. PyTorch can only be built with one of them at a time.

Is it experimental or official support? As the corresponding chapter on the homepage reads, PyTorch ROCm support appears to be official.

About


License: GNU General Public License v2.0


Languages

Language: Shell 100.0%