Kwaai-AI-Lab / kwaai-alignment

Repo for alignment at @KWAAI-AI-Lab


Kwaai Alignment Research Lab

🤝 Meetings: TBD


FAQ:

What are the goals of this group?

Our goal this year is to produce a survey paper on alignment research and to run small experiments.

How can I contribute to this group?

Right now, we’re working on the survey and small experiments. If you’d like to help out with either, reach out and we can get you started.

What if I don’t know anything about alignment?

That's okay! We welcome people from all backgrounds. If you're interested in this area and willing to help, that's great.

Contributions: I want to run a small experiment I read about in a paper. Can I add my code and findings somewhere?

Yes: create a new branch, add your changes to /Implementations using the format below, and open a PR.

Folder Structure

Please create a folder named {experiment_name} inside the /Implementations directory. The structure should look like this:

/Implementations
│
└───{experiment_name}
│   │   README.md
│   │   relevant_paper_1.pdf
│   │   relevant_paper_2.pdf
│   │
│   └───code
│       │   script_1.py
│       │   script_2.py
│       │   ...

Feel free to add more specific instructions or scripts based on the particular experiment and its requirements.
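The layout above can be scaffolded with a short script. This is a minimal sketch: `scaffold` is a hypothetical helper (not part of this repo), and the experiment name is a placeholder you would replace with your own.

```python
from pathlib import Path

def scaffold(repo_root: str, experiment_name: str) -> Path:
    """Create the /Implementations/{experiment_name} layout described above.

    Hypothetical helper for illustration; returns the experiment folder.
    """
    exp = Path(repo_root) / "Implementations" / experiment_name
    (exp / "code").mkdir(parents=True, exist_ok=True)  # nested dirs in one call
    (exp / "README.md").touch(exist_ok=True)           # placeholder README
    return exp

# Example: scaffold(".", "my_experiment") creates
# ./Implementations/my_experiment/{README.md, code/}
```

Drop your paper PDFs next to the README and your scripts under `code/`, then commit the folder on a new branch and open a PR.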

Useful Reads to Get Up to Speed:

  • Research Priorities for Robust and Beneficial AI (2015)
  • Concrete Problems in AI Safety (2016)
  • Unsolved Problems in ML Safety (2022)

Alignment Forums:


Institutions:



Languages

Python 87.1%, PureBasic 12.9%