
llm-random

We are LLM-Random, a research group at IDEAS NCBR (Warsaw, Poland). We develop this repository and use it to conduct our research. To learn more about us and our work, check out our blog: llm-random.github.io.

Publications, preprints, and blog posts

  • Scaling Laws for Fine-Grained Mixture of Experts (arxiv)
  • MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts (arxiv, blogpost)
  • Mixture of Tokens: Efficient LLMs through Cross-Example Aggregation (arxiv, blogpost)

Development (WIP)

Getting started

In the root directory, run ./start-dev.sh. This will create a virtual environment, install the requirements, and set up git hooks.
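For orientation, the steps the script is described as performing can be sketched manually as below. This is a sketch, not the script's actual contents: the requirements file name and hooks directory are assumptions.

```shell
# Manual equivalent of the steps ./start-dev.sh is described as performing.
# 1) Create and activate a virtual environment:
python3 -m venv .venv
. .venv/bin/activate
# 2) Install the requirements (file name assumed):
#    pip install -r requirements.txt
# 3) Set up git hooks (hooks directory name assumed):
git init -q .
git config core.hooksPath .githooks
git config core.hooksPath   # prints the configured hooks path when set
```

If anything fails, prefer re-running ./start-dev.sh itself, since it encodes the project's actual setup.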

Running Experiments (WIP)

Experiments config

Use the baseline configuration in configs/test/test_baseline.yaml as a template. Based on it, create a new experiment config and put it in lizrd/scripts/run_configs.
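The workflow above amounts to copying the template and editing it. A minimal sketch (paths are taken from this README; the mkdir/printf lines only stand in for the real repository contents, and "my_experiment.yaml" is a hypothetical name):

```shell
# Sketch: start a new experiment config from the baseline template.
mkdir -p configs/test lizrd/scripts/run_configs              # these already exist in the real repo
printf 'batch_size: 64\n' > configs/test/test_baseline.yaml  # stand-in for the real template
# Copy the template under a new name, then edit it:
cp configs/test/test_baseline.yaml lizrd/scripts/run_configs/my_experiment.yaml
```

The copied file is then passed as the config path to the local or remote run commands below.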

Running Locally

python -m lizrd.grid path/to/config

Running Remotely

bash scripts/run_exp_remotely.sh <remote_cluster_name> scripts/run_configs/<your_config>

License

This project is licensed under the terms of the Apache License, Version 2.0.

Copyright 2023 LLM-Random Authors

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
