Fast serverless LLM inference, in Rust.
Repository on GitHub: https://github.com/atoma-network/atoma-infer