kvantas / nebullvm

Plug and play modules to boost the performance of your AI systems πŸš€

Home Page: https://www.nebuly.com/


Nebullvm is an ecosystem of plug-and-play modules for boosting the performance of your AI systems. The optimization modules are stack-agnostic and work with any library.

The performance of language, vision, and generative models depends strongly on input data/prompting, model architecture, and hardware. These are not independent factors, and making optimal choices on all fronts is hard. Our open-source modules help you combine these factors automatically, bringing incredibly fast and efficient AI systems to your fingertips.

If you like the idea, give us a star to show your support for the project β­

Documentation

The full documentation is available here and covers:

  • Installation
  • Getting started (quick view and examples)
  • Notebooks
  • Ecosystem and integrations
  • Product structure

What can this help with?

Our optimization modules are designed to be easily integrated into your system, providing a quick and seamless boost to its performance. Simply plug and play to start realizing the benefits of optimized performance right away:

βœ… Speedster: Automatically apply the best set of SOTA optimization techniques to achieve the maximum inference speed-up on your hardware.
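
A minimal sketch of a typical Speedster workflow on a PyTorch model. The ResNet-50 model, random input data, and parameter values are illustrative, and the exact `optimize_model` signature may differ between versions, so check the Speedster documentation before using it:

```python
# Illustrative sketch only -- verify the exact API against the Speedster docs.
import torch
import torchvision.models as models
from speedster import optimize_model  # assumes the Speedster module is installed

# 1. Provide any model; ResNet-50 is just an example.
model = models.resnet50()

# 2. Provide a few sample batches so the optimizer can profile the model.
input_data = [((torch.randn(1, 3, 224, 224),), torch.tensor([0])) for _ in range(100)]

# 3. Let Speedster try its optimization techniques and return the fastest
#    model whose accuracy stays within the given metric drop threshold.
optimized_model = optimize_model(
    model,
    input_data=input_data,
    optimization_time="constrained",
    metric_drop_ths=0.05,
)

# 4. The optimized model is used like the original one.
with torch.no_grad():
    prediction = optimized_model(torch.randn(1, 3, 224, 224))
```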

βœ… OpenAlphaTensor: Increase the computational performance of an AI model with custom-generated matrix multiplication algorithms.
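
The algorithms in question belong to the same family as Strassen's classic construction: low-rank decompositions of the matrix multiplication tensor that trade multiplications for additions. The snippet below is not the OpenAlphaTensor API; it is a plain NumPy illustration of Strassen's 7-multiplication scheme for 2x2 blocks, to show what a "custom-generated matrix multiplication algorithm" looks like in practice:

```python
# NOT the OpenAlphaTensor API -- just Strassen's classic 7-multiplication
# scheme for multiplying two matrices split into 2x2 blocks.
import numpy as np

def strassen_2x2_blocks(A, B):
    """Multiply two (2n x 2n) matrices with 7 block multiplications instead of 8."""
    n = A.shape[0] // 2
    A11, A12, A21, A22 = A[:n, :n], A[:n, n:], A[n:, :n], A[n:, n:]
    B11, B12, B21, B22 = B[:n, :n], B[:n, n:], B[n:, :n], B[n:, n:]

    # The 7 block products (each could itself be computed recursively).
    M1 = (A11 + A22) @ (B11 + B22)
    M2 = (A21 + A22) @ B11
    M3 = A11 @ (B12 - B22)
    M4 = A22 @ (B21 - B11)
    M5 = (A11 + A12) @ B22
    M6 = (A21 - A11) @ (B11 + B12)
    M7 = (A12 - A22) @ (B21 + B22)

    # Recombine the products into the four output blocks.
    C11 = M1 + M4 - M5 + M7
    C12 = M3 + M5
    C21 = M2 + M4
    C22 = M1 - M2 + M3 + M6
    return np.block([[C11, C12], [C21, C22]])

A, B = np.random.rand(4, 4), np.random.rand(4, 4)
assert np.allclose(strassen_2x2_blocks(A, B), A @ B)
```

OpenAlphaTensor aims to generate decompositions of this kind automatically, tuned to the target hardware.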

βœ… Forward-Forward: The Forward-Forward algorithm is a method for training deep neural networks that replaces backpropagation's forward and backward passes with two forward passes.
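
A minimal PyTorch sketch of the idea: each layer is trained with a local "goodness" objective on positive and negative data, so no gradients ever flow between layers. This is an illustration of the algorithm under those assumptions, not the implementation shipped in this repository:

```python
# Layer-local Forward-Forward training sketch (illustrative, not the repo's code).
import torch
import torch.nn as nn

class FFLayer(nn.Module):
    def __init__(self, in_dim, out_dim, threshold=2.0, lr=0.03):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.act = nn.ReLU()
        self.threshold = threshold
        self.opt = torch.optim.Adam(self.parameters(), lr=lr)

    def forward(self, x):
        # Normalize the input so only the direction of the activity is passed on.
        x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
        return self.act(self.linear(x))

    def train_step(self, x_pos, x_neg):
        # "Goodness" is the mean squared activation of the layer.
        g_pos = self.forward(x_pos).pow(2).mean(dim=1)
        g_neg = self.forward(x_neg).pow(2).mean(dim=1)
        # Push goodness above the threshold for positive data, below for negative.
        loss = torch.log1p(torch.exp(torch.cat([
            self.threshold - g_pos,
            g_neg - self.threshold,
        ]))).mean()
        self.opt.zero_grad()
        loss.backward()  # gradients stay local to this layer
        self.opt.step()
        # Detach before handing activities to the next layer: two forward
        # passes, no backward pass through the stack.
        return self.forward(x_pos).detach(), self.forward(x_neg).detach()

# Two layers trained greedily, layer by layer, on toy positive/negative batches.
layers = [FFLayer(784, 500), FFLayer(500, 500)]
x_pos, x_neg = torch.rand(32, 784), torch.rand(32, 784)
for layer in layers:
    x_pos, x_neg = layer.train_step(x_pos, x_neg)
```

At inference time, the per-layer goodness scores are typically accumulated to score each candidate label, rather than using a softmax head.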

Next modules and roadmap

We are actively working to incorporate the following modules, requested by members of our community, into upcoming releases:

  • GPU partitioner: Effortlessly maximize the utilization of GPU resources in a Kubernetes cluster through real-time dynamic partitioning and elastic quotas.
  • Promptify: Effortlessly personalize generative-model APIs from OpenAI, Cohere, and HF to your specific writing style and context by leveraging human feedback.
  • CloudSurfer: Automatically discover the optimal cloud configuration and hardware on AWS, GCP and Azure to run your AI models.
  • OptiMate: Interactive tool guiding savvy users in achieving the best inference performance out of a given model / hardware setup.
  • TrainingSim: Easily simulate the training of large AI models on a distributed infrastructure to predict training behaviours without actual implementation.

Contributing

As an open source project in a rapidly evolving field, we welcome contributions of all kinds, including new features, improved infrastructure, and better documentation. If you're interested in contributing, please see the links below for more information on how to get involved.


Join the community | Contribute to the library

About


License: Apache License 2.0


Languages

  • Python: 74.2%
  • Jupyter Notebook: 19.5%
  • CMake: 5.5%
  • Shell: 0.6%
  • Dockerfile: 0.3%