Owen Dugan's repositories

QuantFluNNAnalysis

Data processing and neural network training for the QuantifiedFlu project

Language: Python · Stars: 2 · Issues: 1

state-spaces

Sequence Modeling with Structured State Spaces

Language: Jupyter Notebook · License: Apache-2.0 · Stars: 2 · Issues: 0
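
For orientation, the core primitive behind structured state-space models is a discretized linear state-space recurrence. Below is a minimal NumPy sketch of that recurrence with made-up matrices; it is illustrative only, not the repository's optimized S4 implementation.

```python
# Sketch of the linear state-space recurrence underlying S4-style models:
#   x_k = A_bar @ x_{k-1} + B_bar * u_k,   y_k = C @ x_k
import numpy as np

def ssm_scan(A_bar, B_bar, C, u):
    """Sequentially (RNN-style) evaluate the state-space layer on input series u."""
    x = np.zeros(A_bar.shape[0])
    ys = []
    for u_k in u:
        x = A_bar @ x + B_bar * u_k    # state update
        ys.append(C @ x)               # readout
    return np.array(ys)

rng = np.random.default_rng(0)
N = 4                                  # state size (arbitrary for the demo)
A_bar = 0.9 * np.eye(N)                # stable discretized state matrix
B_bar = rng.normal(size=N)
C = rng.normal(size=N)
print(ssm_scan(A_bar, B_bar, C, rng.normal(size=16)).shape)  # (16,)
```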

OccamNet_Versions

A compilation of all OccamNet versions

qiskift

A package for quantum fault tolerance in Qiskit

Language: Jupyter Notebook · Stars: 1 · Issues: 0

srbench

A living benchmark framework for symbolic regression

Language: Java · License: GPL-3.0 · Stars: 1 · Issues: 0

diffrax

Numerical differential equation solvers in JAX. Autodifferentiable and GPU-capable. https://docs.kidger.site/diffrax/

Language: Python · License: Apache-2.0 · Stars: 0 · Issues: 0
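
A minimal usage sketch along the lines of the project's documented diffeqsolve entry point, solving dy/dt = -y with the Tsit5 solver (the time span and step size are arbitrary):

```python
import jax.numpy as jnp
import diffrax

def vector_field(t, y, args):
    return -y                          # dy/dt = -y

term = diffrax.ODETerm(vector_field)
solver = diffrax.Tsit5()               # explicit Runge-Kutta solver
sol = diffrax.diffeqsolve(term, solver, t0=0.0, t1=3.0, dt0=0.1,
                          y0=jnp.array(1.0))
print(sol.ts, sol.ys)                  # by default, the state at t1 only
```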

Gym-Snake

An OpenAI Gym environment for reinforcement learning

Language: Python · Stars: 0 · Issues: 0
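
As a Gym environment, it plugs into the standard reset/step loop. The sketch below uses a hypothetical env id ("snake-v0"), since the registered name isn't given here; check the repository for the real one.

```python
import gym

env = gym.make("snake-v0")             # hypothetical id, for illustration only
obs = env.reset()
done = False
total_reward = 0.0
while not done:
    action = env.action_space.sample()             # random policy for the demo
    obs, reward, done, info = env.step(action)     # classic Gym step API
    total_reward += reward
print("episode return:", total_reward)
```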

jax-flows

Normalizing Flows in JAX 🌊

Language: Python · License: MIT · Stars: 0 · Issues: 0
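
Independent of this library's specific API (not shown here), the technique it implements rests on the change-of-variables rule, log p(x) = log p(z) + log|det J|. A minimal JAX sketch of an affine flow's log-density under a standard-normal base distribution:

```python
import jax.numpy as jnp
from jax.scipy.stats import norm

def affine_forward(x, log_scale, shift):
    z = (x - shift) * jnp.exp(-log_scale)      # invertible map x -> z
    log_det = -jnp.sum(log_scale)              # log|det dz/dx|
    return z, log_det

def log_prob(x, log_scale, shift):
    z, log_det = affine_forward(x, log_scale, shift)
    return jnp.sum(norm.logpdf(z)) + log_det   # change-of-variables rule

x = jnp.array([0.5, -1.0])
print(log_prob(x, jnp.zeros(2), jnp.zeros(2)))  # identity flow: N(0, I) density
```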

levanter

Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax

Language: Python · License: Apache-2.0 · Stars: 0 · Issues: 0

mixed_autonomy_intersections

[ITSC 2021] Reinforcement Learning for Mixed Autonomy Intersections

Stars: 0 · Issues: 0

nanoGPT

The simplest, fastest repository for training/finetuning medium-sized GPTs.

Language: Python · License: MIT · Stars: 0 · Issues: 0

netket

Machine learning algorithms for many-body quantum systems

License: Apache-2.0 · Stars: 0 · Issues: 0

pytorch

Tensors and Dynamic neural networks in Python with strong GPU acceleration

License: NOASSERTION · Stars: 0 · Issues: 0
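
The tagline in one runnable snippet: a tensor with autograd, placed on the GPU when one is available.

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(3, 3, device=device, requires_grad=True)
loss = (x ** 2).sum()      # dynamic graph built on the fly
loss.backward()            # autograd computes d(loss)/dx
print(torch.allclose(x.grad, 2 * x.detach()))  # True: gradient of sum of squares
```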

qst-cgan

Quantum state tomography with conditional generative adversarial networks

Language: Jupyter Notebook · License: MIT · Stars: 0 · Issues: 0

qst-nn

Classification and reconstruction of optical quantum states with deep neural networks

License: MIT · Stars: 0 · Issues: 0

RWKV-LM

RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), combining the best of RNNs and transformers: great performance, fast inference, low VRAM use, fast training, "infinite" ctx_len, and free sentence embeddings.

Language: Python · License: Apache-2.0 · Stars: 0 · Issues: 0
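
A minimal, numerically naive sketch of the sequential (RNN-style) WKV recurrence behind that claim, assuming the formulation in the RWKV paper; the repository itself computes this in parallel with custom kernels and with numerical safeguards this demo omits.

```python
import numpy as np

def wkv(w, u, k, v):
    """w: decay rate (> 0), u: bonus for the current token, k, v: (T,) series."""
    T = len(k)
    out = np.empty(T)
    num = 0.0   # running decayed sum of exp(k_i) * v_i over past tokens
    den = 0.0   # running decayed sum of exp(k_i) over past tokens
    for t in range(T):
        # current token enters with the extra bonus u
        out[t] = (num + np.exp(u + k[t]) * v[t]) / (den + np.exp(u + k[t]))
        # decay the past, then fold token t into the running sums
        num = np.exp(-w) * num + np.exp(k[t]) * v[t]
        den = np.exp(-w) * den + np.exp(k[t])
    return out

rng = np.random.default_rng(0)
print(wkv(0.5, 0.1, rng.normal(size=8), rng.normal(size=8)))
```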

v202

Proceedings of ICML 2023

Stars: 0 · Issues: 0