Ansh Radhakrishnan (anshradh)



Company: @anthropics

Location: NYC

Home Page: anshradhakrishnan.com


Ansh Radhakrishnan's repositories

trl_custom

Applying Reinforcement Learning from Human Feedback (RLHF) to language models to teach them to write short-story responses to writing prompts.

Language: Jupyter Notebook · License: Apache-2.0 · Stargazers: 13 · Issues: 0
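
A minimal sketch of what a single PPO step with the trl library might look like for this kind of project. The exact constructor, config fields, and generate() behavior vary across trl releases, and the gpt2 model, the prompt, and the constant reward are placeholders rather than this repo's setup.

```python
# Sketch of one RLHF/PPO step with trl (API details differ between trl versions).
import torch
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

config = PPOConfig(model_name="gpt2", batch_size=1, mini_batch_size=1)
tokenizer = AutoTokenizer.from_pretrained(config.model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLMWithValueHead.from_pretrained(config.model_name)
ref_model = AutoModelForCausalLMWithValueHead.from_pretrained(config.model_name)
ppo_trainer = PPOTrainer(config, model, ref_model, tokenizer)

prompt = "Write a short story about a lighthouse keeper."
query = tokenizer(prompt, return_tensors="pt").input_ids[0]
# Depending on the trl version, generate() may include the prompt tokens in its
# output; if so, slice them off before passing the response to step().
response = ppo_trainer.generate(query, max_new_tokens=64,
                                pad_token_id=tokenizer.eos_token_id)[0]

# Stand-in reward; a real run would score the story with a learned reward model.
reward = torch.tensor(1.0)
stats = ppo_trainer.step([query], [response], [reward])
```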

Dalle-Mini-RL

Fine-tuning DALL·E Mini with RL so that it avoids producing NSFW images.

Language: Python · Stargazers: 1 · Issues: 0

dalle-mini

DALL·E Mini - Generate images from a text prompt

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

deep_learning_curriculum

Language model alignment-focused deep learning curriculum

Language: Python · Stargazers: 0 · Issues: 0

elk

Keeping language models honest by directly eliciting knowledge encoded in their activations, building on "Discovering latent knowledge in language models without supervision" (Burns et al., 2022).

Language: Python · License: NOASSERTION · Stargazers: 0 · Issues: 0
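
Burns et al.'s method (CCS) trains an unsupervised probe on paired activations with a consistency-plus-confidence loss. A small PyTorch sketch of that objective follows; the linear probe, hidden size, and random activations are purely illustrative rather than taken from this repo.

```python
# Sketch of the CCS objective from "Discovering latent knowledge..." (Burns et al., 2022).
import torch
import torch.nn as nn

hidden_size = 768  # illustrative activation width
probe = nn.Sequential(nn.Linear(hidden_size, 1), nn.Sigmoid())

acts_pos = torch.randn(128, hidden_size)  # activations on x+ (statement phrased as true)
acts_neg = torch.randn(128, hidden_size)  # activations on x- (statement phrased as false)

p_pos, p_neg = probe(acts_pos), probe(acts_neg)

# Consistency: the probe should treat x+ and x- as negations, p(x+) ~= 1 - p(x-).
consistency = (p_pos - (1 - p_neg)).pow(2).mean()
# Confidence: penalize the degenerate solution where everything sits at 0.5.
confidence = torch.min(p_pos, p_neg).pow(2).mean()
(consistency + confidence).backward()
```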

equinox

Callable PyTrees and filtered transforms => neural networks in JAX. https://docs.kidger.site/equinox/

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0
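
A short sketch of the idea in the description, a model as a callable PyTree combined with filtered transforms; the layer sizes and data here are illustrative.

```python
# An Equinox model is a callable PyTree; eqx.filter_grad differentiates only the
# array leaves (parameters) and leaves static fields alone.
import jax
import jax.numpy as jnp
import equinox as eqx

model = eqx.nn.MLP(in_size=2, out_size=1, width_size=32, depth=2,
                   key=jax.random.PRNGKey(0))

@eqx.filter_jit
@eqx.filter_grad
def grad_loss(model, x, y):
    pred = jax.vmap(model)(x)
    return jnp.mean((pred - y) ** 2)

x, y = jnp.ones((8, 2)), jnp.zeros((8, 1))
grads = grad_loss(model, x, y)  # a PyTree with the same structure as `model`
```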

fancy_einsum

Einsum with einops-style variable names.

Language: Python · License: MIT · Stargazers: 0 · Issues: 0
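
A one-liner showing the named-axis style; the tensor names and shapes are just for illustration.

```python
# fancy_einsum accepts whole-word axis names instead of single letters.
import torch
from fancy_einsum import einsum

activations = torch.randn(4, 10, 768)  # (batch, seq, hidden)
weights = torch.randn(768, 512)        # (hidden, out)
out = einsum("batch seq hidden, hidden out -> batch seq out", activations, weights)
print(out.shape)  # torch.Size([4, 10, 512])
```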

flax_minimal_gpt

A minimal implementation of a GPT-style transformer in Flax, written mostly for learning purposes.

Language: Python · Stargazers: 0 · Issues: 0
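
A sketch of what one pre-LayerNorm block of such a model might look like in Flax; the layer sizes and structure are illustrative, not the repo's actual code.

```python
# Illustrative GPT-style block in Flax: causal self-attention plus an MLP, with
# residual connections and pre-LayerNorm.
import jax
import jax.numpy as jnp
import flax.linen as nn

class Block(nn.Module):
    d_model: int = 128
    n_heads: int = 4

    @nn.compact
    def __call__(self, x):  # x: (batch, seq, d_model)
        mask = nn.make_causal_mask(jnp.ones(x.shape[:2]))  # causal attention mask
        x = x + nn.SelfAttention(num_heads=self.n_heads)(nn.LayerNorm()(x), mask=mask)
        h = nn.gelu(nn.Dense(4 * self.d_model)(nn.LayerNorm()(x)))
        return x + nn.Dense(self.d_model)(h)

x = jnp.zeros((1, 16, 128))
params = Block().init(jax.random.PRNGKey(0), x)
y = Block().apply(params, x)  # (1, 16, 128)
```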

littlebookofsemaphores

Python solutions to the puzzles in The Little Book of Semaphores.

Language: Python · Stargazers: 0 · Issues: 0
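
For flavor, a Python solution to the book's rendezvous puzzle, where a1 must happen before b2 and b1 before a2; the print statements stand in for the puzzle's "statements".

```python
# Rendezvous: each thread signals its arrival, then waits for the other.
import threading

a_arrived = threading.Semaphore(0)
b_arrived = threading.Semaphore(0)

def thread_a():
    print("a1")
    a_arrived.release()  # signal that A has arrived
    b_arrived.acquire()  # wait for B
    print("a2")

def thread_b():
    print("b1")
    b_arrived.release()
    a_arrived.acquire()
    print("b2")

ta, tb = threading.Thread(target=thread_a), threading.Thread(target=thread_b)
ta.start(); tb.start(); ta.join(); tb.join()
```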

ml-interviews-book-answers

Answers to https://huyenchip.com/ml-interviews-book/

Stargazers: 0 · Issues: 0

mlab2_pre_exercises

Pre-course exercises for the August 2022 MLAB cohort.

Language: Python · Stargazers: 0 · Issues: 0

Module-0

Module 0 - Fundamentals

Language: Python · Stargazers: 0 · Issues: 0

Module-1

Module 1 - Autodifferentiation

Language: Python · Stargazers: 0 · Issues: 0
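
As an illustration of the module's topic rather than the repo's code, a toy reverse-mode autodiff over scalars:

```python
# Toy reverse-mode autodiff: each op records its parents and local derivatives,
# and backward() pushes gradients through the recorded graph (chain rule).
class Scalar:
    def __init__(self, value, parents=()):
        self.value, self.parents, self.grad = value, parents, 0.0

    def __add__(self, other):
        return Scalar(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Scalar(self.value * other.value, [(self, other.value), (other, self.value)])

    def backward(self, upstream=1.0):
        self.grad += upstream
        for parent, local_grad in self.parents:
            parent.backward(upstream * local_grad)

x, y = Scalar(2.0), Scalar(3.0)
z = x * y + x        # dz/dx = y + 1, dz/dy = x
z.backward()
print(x.grad, y.grad)  # 4.0 2.0
```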

Module-2

Module 2 - Tensors

Language: Python · Stargazers: 0 · Issues: 0

Module-3

Module 3 - Efficiency

Language: Python · Stargazers: 0 · Issues: 0

numpy-100

100 numpy exercises (with solutions)

Language: Python · License: MIT · Stargazers: 0 · Issues: 0
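
A representative exercise of the kind the collection covers, chosen for illustration rather than quoted from the notebook:

```python
# Reverse a vector and find the indices of non-zero elements.
import numpy as np

v = np.arange(10)
print(v[::-1])                            # [9 8 7 6 5 4 3 2 1 0]
print(np.nonzero([1, 2, 0, 0, 4, 0])[0])  # [0 1 4]
```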

pytorch

Tensors and Dynamic neural networks in Python with strong GPU acceleration

License: NOASSERTION · Stargazers: 0 · Issues: 0
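
A tiny example of the tagline, dynamic autograd with optional GPU placement (guarded, since a GPU may not be available):

```python
# The computation graph is built as operations run, then differentiated in reverse.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(3, requires_grad=True, device=device)
y = (x ** 2).sum()
y.backward()
print(torch.allclose(x.grad, 2 * x))  # True: d(sum(x^2))/dx = 2x
```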

Sorting-Transformer-Interp

A mechanistic interpretability project analyzing how a simple transformer learns to sort a sequence of 10 digits.

Language: Python · Stargazers: 0 · Issues: 0
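
A sketch of the task setup implied by the description, sequences of 10 digits paired with their sorted order; this is purely illustrative, and the repo's actual data pipeline may differ.

```python
# Generate (unsorted, sorted) digit-sequence pairs for a toy sorting task.
import torch

def make_batch(batch_size=64, seq_len=10, n_digits=10):
    x = torch.randint(0, n_digits, (batch_size, seq_len))
    y, _ = torch.sort(x, dim=-1)
    return x, y

x, y = make_batch()
print(x[0].tolist(), "->", y[0].tolist())
```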