
⛰️ RockyML - A High-Performance Scientific Computing Framework for Non-smooth Machine Learning Problems




📔 Documentation: amirabbasasadi.github.io/RockyML

Tutorials

Zagros Tutorials

Components

Zagros

Design Goals:

  • Providing a language called Dena for designing arbitrarily complex optimizers by combining:
    • Modular and parallel search strategies: genetic algorithms, PSO, EDA, ...
    • Communication strategies for distributed optimization on top of MPI
    • Analyzer strategies for analyzing objective functions
    • Blocking strategies for block optimization
    • Logging strategies for tracking optimization experiments on the local system or a remote server
  • Hybrid parallelism: multi-threading within each node and message passing across nodes via MPI (see the MPI sketch following the example below)
  • Block optimization, which lets memory-intensive optimizers scale to a large number of variables
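
For example, the following program uses Dena to compose a simple evolutionary optimizer and minimize a 100-dimensional Rastrigin benchmark. It creates a container of 300 candidate solutions, initializes it uniformly, and then runs 500 iterations of Gaussian mutation and probabilistic differential-evolution crossover, logging the best solution to a CSV file:
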
#include <mpi.h>
#include <rocky/zagros/benchmark.h>
#include <rocky/zagros/flow.h>

using namespace rocky::zagros;
using namespace rocky::zagros::dena;

int main(int argc, char* argv[]){
    MPI_Init(&argc, &argv);
    
    // define the optimization problem
    const int dim = 100;
    benchmark::rastrigin<float> problem(dim);

    // recording the result of optimization
    local_log_handler log_handler("result.csv");

    // define the optimizer
    auto optimizer = container::create("A", 300)
                    >> init::uniform("A") 
                    >> run::n_times(500,
                            mutate::gaussian("A")
                            >> run::with_probability(0.2,
                                crossover::differential_evolution("A")
                            )
                            >> log::local::best("A", log_handler)
                        );

    // create a runtime for executing the optimizer 
    basic_runtime<float, dim> runtime(&problem);
    runtime.run(optimizer);

    MPI_Finalize();
    return 0;
}
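
The example above optimizes a built-in benchmark; in practice you supply your own objective function. Below is a minimal sketch of one. The header path, base class, and method name (rocky/zagros/system.h, system<T>, objective) are assumptions about the Zagros interface made for illustration; consult the documentation for the exact signature.

#include <rocky/zagros/system.h> // header path assumed

// A custom objective: the sphere function sum(x_i^2).
// NOTE: system<T> and objective() are assumed names here,
// not verified against the current RockyML API.
template<typename T>
class sphere_system : public rocky::zagros::system<T> {
public:
    explicit sphere_system(int dim) : dim_(dim) {}
    virtual T objective(T* params) {
        T sum = 0;
        for (int i = 0; i < dim_; i++)
            sum += params[i] * params[i];
        return sum;
    }
private:
    int dim_;
};

An instance of sphere_system<float> would then be passed to basic_runtime in place of the benchmark problem. The program itself is compiled and launched like any MPI application, for example with mpirun -np 4 ./optimizer (exact flags depend on your MPI distribution).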

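To make the hybrid-parallelism goal concrete, here is a generic MPI-plus-threads sketch of the execution model Zagros targets. It uses only standard MPI and std::thread, not the RockyML API: one process per node exchanges messages while worker threads search in shared memory.

#include <mpi.h>
#include <thread>
#include <vector>

int main(int argc, char* argv[]){
    // Request a threading level where worker threads may run but
    // only the main thread performs MPI calls
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Within a node: several independent search threads
    const int n_threads = 4;
    std::vector<std::thread> workers;
    for(int t = 0; t < n_threads; t++)
        workers.emplace_back([rank, t]{
            // ... run a search strategy on a shard of the population ...
        });
    for(auto& w : workers) w.join();

    // Across nodes: ranks would exchange their best solutions here,
    // e.g. via MPI_Allreduce on objective values

    MPI_Finalize();
    return 0;
}
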
Etna (Work in progress)

Building blocks for designing non-differentiable neural networks

  • Fast, low overhead, and thread-safe
  • Various components:
    • Standard deep learning layers
    • Discrete and integer layers
    • Combinatorial layers
    • Stochastic layers
    • Dynamic layers


Publications

If you use RockyML in your research, please cite it as follows:

@software{RockyML,
  author = {Asadi, Amirabbas},
  doi = {10.5281/zenodo.7612838},
  month = {2},
  title = {{RockyML, A Scientific Computing Framework for Non-smooth Machine Learning Problems}},
  url = {https://github.com/amirabbasasadi/RockyML},
  year = {2023}
}


License: Apache License 2.0

