mjs271 / koPP_particleCode

Kokkos-enabled, Performance-portable Particle Code

This repository contains a random walk particle tracking code with SPH-style mass transfers, written in C++ and designed for parallel performance portability using Kokkos.

The only mandatory dependency is cmake; however, python3 is needed for the testing/plotting/verification capability, along with some Python packages, including numpy, matplotlib, and a few others. On Mac, your easiest option is to use Homebrew and then

  • brew install cmake
  • brew install python
  • pip install numpy
  • pip install matplotlib
  • etc.

Note that if the Python scripts still don't work after package installation, you may need to use pip3 in the commands above. If you're looking to run the unit/verification tests that involve Python scripts, you'll likely have to install more Python packages until the errors stop. Also, there are a handful of Jupyter notebooks that were used for creating the verification tests and may be helpful for visualizing or verifying future changes. This dependency can be taken care of via Homebrew using

  • brew install jupyter
  • brew install notebook

Then run the notebook with jupyter-notebook <notebook>.ipynb, which will open a browser window.
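
For reference, the macOS setup above boils down to something like the following sketch (it assumes Homebrew's python provides pip3; append any extra packages the test scripts complain about to the pip3 line):

  # core build dependency
  brew install cmake
  # python3, for the testing/plotting/verification scripts
  brew install python
  # packages used by those scripts (add others here as needed)
  pip3 install numpy matplotlib
  # optional: Jupyter, for the verification/plotting notebooks
  brew install jupyter
  # open a notebook in a browser window
  jupyter-notebook <notebook>.ipynb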

Important Notes:

  1. All of the above dependencies can be avoided by using the Docker build instructions below. However, first you must install Docker 🙃.
  2. Simply downloading the zip file from the repository will not include the third-party libraries (Kokkos, Kokkos Kernels, yaml-cpp, ArborX), as they are git submodules. For that reason, the best bet is to clone the repository, as below in the build instructions.

Docker Build Instructions

  1. Clone the repository:
    • If you use https (this is the case if you haven't set up a GitHub SSH key):
      • git clone --recurse-submodules -j8 https://github.com/mjs271/koPP_particleCode.git
    • If you use ssh:
      • git clone --recurse-submodules -j8 git@github.com:mjs271/koPP_particleCode.git
    • Note: the -j8 is a parallel flag, allowing git to fetch up to 8 submodules in parallel.
  2. cd koPP_particleCode
  • To build and run using Docker (recommended for ease; the full sequence is recapped after step 3):

    1. Start docker.
      • Trust me: you'll forget it eventually and lose 15 minutes of your life debugging :)
    2. docker build -t ko_pt .
      • The default build is debug for now. However, if you want to explicitly choose the build mode, use:
        • docker build --build-arg BUILD_TYPE_IN=<debug, release> -t ko_pt .
    3. docker run -it --rm ko_pt bash
      • This will enter a bash terminal in a Docker container running Ubuntu.
        • To exit, type exit or press ctrl + d.
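
    For copy-paste convenience, the full Docker workflow from a fresh clone looks like this (using the https clone URL from above; swap in the ssh URL if that's your setup):

      # clone with submodules (Kokkos, Kokkos Kernels, yaml-cpp, ArborX)
      git clone --recurse-submodules -j8 https://github.com/mjs271/koPP_particleCode.git
      cd koPP_particleCode
      # build the image (add --build-arg BUILD_TYPE_IN=release for a release build)
      docker build -t ko_pt .
      # start an interactive container (removed on exit because of --rm)
      docker run -it --rm ko_pt bash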

    Some Notes on the Docker Build

    1. If you wish to run Jupyter notebooks from within the container, run the container using:
      • docker run -it -p 8888:8888 ko_pt bash
        • Note that the -p <host-port>:<container-port> flag maps a port inside the container to one on your host machine (well, technically it publishes the container's port), in this case 8888 to 8888.
      • Once you've generated the results, run the Jupyter notebook using:
        1. If you chose the 8888 port mapping above, you can use a macro that I've defined to run the notebook:
          • djupyter <notebook-name>.ipynb
        2. Otherwise:
          • jupyter notebook --ip 0.0.0.0 --no-browser --allow-root <notebook-name>.ipynb
      • Now, in your machine's web browser, either:
        • Copy and paste one of the URLs containing a token into the browser (the final one works for me most reliably).
        • Go to localhost:<host-port>, where <host-port> is 8888 if you're using my macro. You will be prompted for a password or token, which is given in the container's terminal at the end of one of the URLs referenced above.
      • You will be presented with a directory structure in the browser, where you can select the desired notebook and run it (this workflow is recapped after these notes).
    2. If you make changes to the source code from within the container, they will not propagate back to the source tree on your host machine, nor will they persist after exiting and restarting the container. If you want to make persistent changes, I would recommend:
      • Modify/test within the container.
      • Once you've got things working, modify the external source code.
      • Rebuild and restart the container, which will now contain these changes (docker build -t ko_pt . && docker run -it --rm ko_pt bash).
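
    As a recap of note 1, the port-mapped Jupyter workflow looks roughly like the following sketch (using the 8888:8888 mapping from above; substitute your own ports and notebook name):

      # on the host: publish the container's port 8888 to the host's port 8888
      docker run -it -p 8888:8888 ko_pt bash
      # inside the container, after generating results (or use the djupyter macro):
      jupyter notebook --ip 0.0.0.0 --no-browser --allow-root <notebook-name>.ipynb
      # then, on the host, open localhost:8888 in a browser and paste the token
      # printed at the end of one of the URLs in the container's terminal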

Building from Source

  1. Clone the repository, as given above.
  2. Edit the config.sh script to accommodate your build goals and development environment.
  3. mkdir build && cd build
  4. Run the config script
    • ../config.sh
    • Advanced Maneuver: If you know what you're doing and want to cut down on build times by pre-installing the libraries that are referenced in the "machine-dependent build options" section of config.sh, add a logic block for your machine to that section, with the full knowledge that you are on your own from here 👷.
  5. Run the CMake-generated makefiles and install the project (the full sequence is recapped after this list).
    • make -j install
    • Note: similar to the clone step above, the -j flag runs make in parallel; a bare -j places no limit on the number of jobs, and make -jN will use N jobs.
    • Note: the current behavior, as given in config.sh, is to install to the directory from which the config script is run (probably build). As such, you can re-build/install/run from the same place without changing directories repeatedly.
      • If you want to customize this behavior, you can change the line near the top of config.sh that reads export INSTALL_LOCATION="./" to indicate a directory other than build (./ = the directory from which config.sh is run).
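
Putting steps 1-5 together, a from-scratch source build looks roughly like the sketch below (assuming the default INSTALL_LOCATION in config.sh; edit the script for your compiler and parallel back end before configuring):

  # clone with submodules, as above
  git clone --recurse-submodules -j8 https://github.com/mjs271/koPP_particleCode.git
  cd koPP_particleCode
  # edit config.sh for your machine, then configure out-of-source
  mkdir build && cd build
  ../config.sh
  # build and install (installs into the build directory by default)
  make -j install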

Testing and Running

  1. Run the unit/verification tests to ensure the code is running properly.
    • make test
    • If tests fail, check out <install location>/Testing/Temporary/LastTest.log to get some info, and if it's not a straightforward issue, like a missing python package, then reach out or file a GitHub issue.
  2. To run some basic examples, take a look in the build/src/examples directory. There you will find three examples in the directories MT_only, RWMT, and RW_only (MT = mass transfer, RW = random walk); a sketch of the workflow follows this list.
    • cd <example name>
    • ./run_<example name>.sh
    • To plot/examine the data, run the associated Jupyter notebook.
      • jupyter-notebook plot_<example name>.ipynb, and a web browser window should open to use the notebook.
      • This is another spot where you may run into missing python packages. If so, see above.
    • The examples can be modified by editing the input file <example name>/data/<example name>_input.yaml.
      • Note that if you edit the input file or run script here, in the build directory, it will be overwritten by the original (in the source tree, e.g., koPP_particleCode/src/examples) after doing another make install.
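
As a sketch of the workflow above, using the RW_only example (the other examples work the same way, with their own run scripts, notebooks, and input files; paths assume the default install into build):

  # from the build/install directory, run the unit/verification tests
  make test
  # run one of the examples
  cd src/examples/RW_only
  ./run_RW_only.sh
  # plot/examine the results in the associated notebook
  jupyter-notebook plot_RW_only.ipynb
  # the example's parameters live in data/RW_only_input.yaml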

Building for OpenMP (CPU)

  • Building with OpenMP does work on my personal Mac running Monterey (earlier versions also worked on Mojave) with g++ versions 11 and 12 and libomp versions 11 and 14, as well as on a couple of Linux workstations, using various versions of the g++ compiler.
    • Note: if OpenMP was installed via Homebrew, brew info libomp will give the version.
  • In order to build for OpenMP, (un)comment the relevant lines in config.sh so that export USE_OPENMP=True, and ensure the proper compiler variable is set in the "compiler options" section, namely MAC_OMP_CPP (a sketch follows this list).
    • In the current iteration of the config file, this is option #2 in the "parallel accelerator options" section.
  • If you are on an Apple machine, it is recommended to use the g++ compiler from the GNU Compiler Collection (GCC) and a compatible OpenMP (Apple's default clang++ doesn't ship with OpenMP). The easiest way to achieve this on Mac is with Homebrew, by running
    • brew install gcc
    • brew install libomp
  • Finally, know that OpenMP functionality can be finicky on Mac, so reach out if you have issues.
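
As referenced above, the OpenMP-relevant pieces of config.sh end up looking something like the sketch below. The variable names are the ones mentioned above, but the exact form of the compiler line (and the g++ version) is an assumption; check the comments in config.sh for your machine:

  # "parallel accelerator options": option #2, OpenMP on CPU
  export USE_OPENMP=True
  # "compiler options": point the Mac OpenMP compiler variable at Homebrew's g++
  # (the value here is a guess; use whichever g++ Homebrew installed)
  export MAC_OMP_CPP=g++-12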

Building for CUDA (GPU)

  • In principle, this should be as easy as (un)commenting the relevant lines in the "parallel accelerator options" section of config.sh so that export USE_CUDA=True (option #4) and setting the GPU_ARCHITECTURE variable (see the Architecture Keywords section of the Kokkos Wiki's build guide for more info). A sketch follows this list.
  • However, everything gets harder once GPUs are involved, so this may not be for the faint of heart and could involve some pain if you're using a machine I haven't already tested on.
  • If you have a non-NVIDIA GPU, I have no idea, but... maybe it'll work. ¯\_(ツ)_/¯
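
For reference, the CUDA case looks something like the sketch below in config.sh. The architecture keyword shown is only an example (for an NVIDIA V100); pick yours from the Kokkos Architecture Keywords referenced above:

  # "parallel accelerator options": option #4, CUDA on GPU
  export USE_CUDA=True
  # Kokkos architecture keyword for your GPU (example: VOLTA70 for an NVIDIA V100)
  export GPU_ARCHITECTURE=VOLTA70
  # then configure and build as usual
  mkdir build && cd build && ../config.sh && make -j install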

Issues?

Please feel free to reach out or file an issue if you run into problems!
