mtanghu

User data from GitHub: https://github.com/mtanghu

GitHub: @mtanghu


Organizations
b01lers
TheDuckAI

mtanghu's repositories

Transformer-Trader

Investigation into whether Transformers and self-supervised learning could be used to trade currency markets

Language: Jupyter Notebook | Stargazers: 5 | Issues: 1 | Issues: 2

LEAP

LEAP: Linear Explainable Attention in Parallel for causal language modeling with O(1) path length, and O(1) inference

Language: Jupyter Notebook | License: CC0-1.0 | Stargazers: 4 | Issues: 1 | Issues: 16
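A minimal sketch of the general linear-attention trick that makes O(1)-state inference possible (cumulative sums replace the full attention matrix). This is a generic illustration, not necessarily LEAP's exact formulation; the feature map and shapes are assumptions.

```python
# Generic causal linear attention via running sums (not LEAP's exact method):
# the state needed for the next token is constant-size, hence O(1) inference.
import torch
import torch.nn.functional as F

def causal_linear_attention(q, k, v):
    """q, k, v: (batch, seq_len, dim); feature map here is a simple elu + 1."""
    q, k = F.elu(q) + 1, F.elu(k) + 1
    kv = torch.cumsum(k.unsqueeze(-1) * v.unsqueeze(-2), dim=1)   # (B, T, d, d) running sums
    k_sum = torch.cumsum(k, dim=1)                                # (B, T, d)
    num = torch.einsum('btd,btde->bte', q, kv)
    den = torch.einsum('btd,btd->bt', q, k_sum).unsqueeze(-1) + 1e-6
    return num / den

def step(q_t, k_t, v_t, state):
    """Single-token inference: update a constant-size state instead of re-attending."""
    kv, k_sum = state
    q_t, k_t = F.elu(q_t) + 1, F.elu(k_t) + 1
    kv = kv + k_t.unsqueeze(-1) * v_t.unsqueeze(-2)
    k_sum = k_sum + k_t
    out = (q_t.unsqueeze(-2) @ kv).squeeze(-2) / ((q_t * k_sum).sum(-1, keepdim=True) + 1e-6)
    return out, (kv, k_sum)
```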

DNI-RNN

Decoupled Neural Interfaces (Jaderberg et al. 2017) mini-package for easy integration with PyTorch RNNs

Language: Python | License: MIT | Stargazers: 2 | Issues: 1 | Issues: 0
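A minimal sketch of the synthetic-gradient idea from Jaderberg et al. (2017) that this package wraps: a small network predicts dL/dh for a hidden state so the RNN can be updated before the true gradient arrives. The module names and shapes below are illustrative assumptions, not this package's API.

```python
# Synthetic gradients, bare-bones sketch (not this package's API):
# a small net predicts the gradient of the loss w.r.t. a hidden state.
import torch
import torch.nn as nn

hidden_size = 64
rnn_cell = nn.GRUCell(32, hidden_size)
grad_predictor = nn.Linear(hidden_size, hidden_size)   # outputs a synthetic dL/dh

x_t = torch.randn(8, 32)
h_prev = torch.zeros(8, hidden_size)

h_t = rnn_cell(x_t, h_prev)
synthetic_grad = grad_predictor(h_t.detach())

# Update the RNN immediately with the predicted gradient instead of waiting
# for backprop through future timesteps.
h_t.backward(synthetic_grad)

# When the true gradient for h_t later becomes available, the predictor is
# trained to match it (placeholder signal here).
true_grad = torch.randn_like(synthetic_grad)
predictor_loss = ((synthetic_grad - true_grad) ** 2).mean()
```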

Active-Passive-Losses

[ICML2020] Normalized Loss Functions for Deep Learning with Noisy Labels

Language: Python | License: MIT | Stargazers: 0 | Issues: 0 | Issues: 0
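A sketch of the paper's normalized-loss idea applied to cross entropy (the loss at the labeled class divided by the sum of that loss over all classes), following Ma et al. (ICML 2020); it may differ in details from this repo's implementation.

```python
# Normalized cross-entropy in the spirit of Ma et al. (ICML 2020):
# CE at the given label divided by the sum of CE over every class.
import torch
import torch.nn.functional as F

def normalized_cross_entropy(logits, targets):
    log_probs = F.log_softmax(logits, dim=-1)                      # (batch, classes)
    ce_label = -log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    ce_all = -log_probs.sum(dim=-1)                                # CE summed over all classes
    return (ce_label / ce_all).mean()

logits = torch.randn(16, 10)
targets = torch.randint(0, 10, (16,))
loss = normalized_cross_entropy(logits, targets)
```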

Attention-Advice

Transformers with learned advice vectors

License: CC0-1.0 | Stargazers: 0 | Issues: 1 | Issues: 0

awd-lstm-lm

LSTM and QRNN Language Model Toolkit for PyTorch

Language: Python | License: BSD-3-Clause | Stargazers: 0 | Issues: 0 | Issues: 0

blockchain_video

COM 217 video presentation code for an explainer on how blockchain works, made with Manim

Language: Jupyter Notebook | Stargazers: 0 | Issues: 0 | Issues: 0
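A tiny stand-alone illustration (not from this repo) of the hash-linking idea the video explains: each block stores the previous block's hash, so tampering with an earlier block invalidates everything after it.

```python
# Hash-linked "blockchain" in a few lines, purely for illustration.
import hashlib
import json

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = [{"index": 0, "data": "genesis", "prev_hash": "0" * 64}]
for i, data in enumerate(["tx: alice->bob 5", "tx: bob->carol 2"], start=1):
    chain.append({"index": i, "data": data, "prev_hash": block_hash(chain[-1])})

# Verification: every stored prev_hash must equal the recomputed hash of the prior block.
valid = all(chain[i]["prev_hash"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))
print(valid)  # True until any earlier block is modified
```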

Citadel-Central-Datathon-Fall21

2nd-place winning analysis of smoking data for the Citadel Central Datathon of Fall 2021 (final report included)

Language: Jupyter Notebook | Stargazers: 0 | Issues: 1 | Issues: 0

datasets

🤗 The largest hub of ready-to-use datasets for ML models with fast, easy-to-use and efficient data manipulation tools

Language: Python | License: Apache-2.0 | Stargazers: 0 | Issues: 0 | Issues: 0
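Typical usage of the upstream 🤗 datasets library, for context; nothing here is specific to this fork, and the dataset name is just an example.

```python
# Load a ready-to-use dataset and apply a simple transform.
from datasets import load_dataset

dataset = load_dataset("imdb")                 # downloads and caches the IMDB dataset
print(dataset["train"][0]["text"][:100])
dataset = dataset.map(lambda ex: {"n_words": len(ex["text"].split())})
```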

Rethinking-Neural-Computation

Draft & experiments for an alternative approach to neuro-symbolic AI that allows for "thinking fast and slow"

Language: Jupyter Notebook | License: CC0-1.0 | Stargazers: 0 | Issues: 1 | Issues: 0

URF

URF: Unsupervised Random Forest fork that uses scikit-learn instead of pycluster for a ~100x speedup

Language: Python | License: Apache-2.0 | Stargazers: 0 | Issues: 0 | Issues: 0
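A sketch of the usual unsupervised-random-forest recipe with scikit-learn, as I understand this fork's approach: train a forest to separate real rows from column-wise-permuted synthetic rows, then read pairwise similarity off shared leaves. Function and variable names are illustrative, not the fork's actual code.

```python
# Unsupervised random forest proximity with scikit-learn (illustrative sketch).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def unsupervised_rf_proximity(X, n_estimators=100, random_state=0):
    rng = np.random.default_rng(random_state)
    # Synthetic data: permute each column independently, destroying joint structure.
    X_synth = np.column_stack([rng.permutation(X[:, j]) for j in range(X.shape[1])])
    X_all = np.vstack([X, X_synth])
    y_all = np.r_[np.ones(len(X)), np.zeros(len(X_synth))]

    forest = RandomForestClassifier(n_estimators=n_estimators, random_state=random_state)
    forest.fit(X_all, y_all)

    # Proximity = fraction of trees in which two real rows fall in the same leaf.
    leaves = forest.apply(X)                               # (n_samples, n_trees)
    return (leaves[:, None, :] == leaves[None, :, :]).mean(axis=-1)

X = np.random.rand(50, 4)
similarity = unsupervised_rf_proximity(X)                  # 1 - similarity can serve as a distance
```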

DeepSpeed

DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.

Language: Python | License: MIT | Stargazers: 0 | Issues: 0 | Issues: 0
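A minimal sketch of wrapping a model with deepspeed.initialize; the config values are illustrative assumptions, not a recommended setup (see the DeepSpeed docs for real configurations).

```python
# Wrapping a model with DeepSpeed (config values are placeholders).
import torch.nn as nn
import deepspeed

model = nn.Linear(512, 512)
ds_config = {
    "train_batch_size": 32,
    "fp16": {"enabled": True},
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
}

model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)
# Training then goes through model_engine(...), model_engine.backward(loss), model_engine.step().
```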

dni-pytorch

Decoupled Neural Interfaces using Synthetic Gradients for PyTorch

Language: Python | License: MIT | Stargazers: 0 | Issues: 0 | Issues: 0

EconML

ALICE (Automated Learning and Intelligence for Causation and Economics) is a Microsoft Research project aimed at applying Artificial Intelligence concepts to economic decision making. One of its goals is to build a toolkit that combines state-of-the-art machine learning techniques with econometrics in order to bring automation to complex causal inference problems. To date, the ALICE Python SDK (econml) implements orthogonal machine learning algorithms such as the double machine learning work of Chernozhukov et al. This toolkit is designed to measure the causal effect of some treatment variable(s) t on an outcome variable y, controlling for a set of features x.

Language: Jupyter Notebook | License: NOASSERTION | Stargazers: 0 | Issues: 0 | Issues: 0
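A minimal double-machine-learning sketch with econml's LinearDML on synthetic data, illustrating the "effect of treatment T on outcome Y, controlling for X and W" workflow described above; the data-generating process and settings are made up for illustration.

```python
# Double ML with econml on synthetic data (illustrative only).
import numpy as np
from econml.dml import LinearDML

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))                              # effect-modifying features
W = rng.normal(size=(n, 2))                              # confounders to control for
T = X[:, 0] + W[:, 0] + rng.normal(size=n)               # treatment
Y = 2.0 * T + X[:, 1] + W[:, 1] + rng.normal(size=n)     # true effect of T on Y is 2

est = LinearDML()
est.fit(Y, T, X=X, W=W)
print(est.effect(X[:5]))                                 # estimated treatment effects, ~2
```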

Fastformer

A PyTorch & Keras implementation and demo of Fastformer.

Language: Jupyter Notebook | Stargazers: 0 | Issues: 0 | Issues: 0

flash-attention

Fast and memory-efficient exact attention

Language: Python | License: BSD-3-Clause | Stargazers: 0 | Issues: 0 | Issues: 0
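A typical call into the upstream flash_attn package (nothing specific to this fork); it assumes half-precision q/k/v tensors on a CUDA device in (batch, seqlen, nheads, headdim) layout.

```python
# Calling FlashAttention's fused exact-attention kernel.
import torch
from flash_attn import flash_attn_func

batch, seqlen, nheads, headdim = 2, 1024, 8, 64
q = torch.randn(batch, seqlen, nheads, headdim, dtype=torch.float16, device="cuda")
k = torch.randn_like(q)
v = torch.randn_like(q)

out = flash_attn_func(q, k, v, causal=True)   # exact attention, tiled so the full (T x T) matrix is never materialized
```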

flops-profiler

pytorch-profiler

Language: Python | License: MIT | Stargazers: 0 | Issues: 0 | Issues: 0

hccpy

A Python implementation of Hierarchical Condition Categories

Language: Python | License: Apache-2.0 | Stargazers: 0 | Issues: 0 | Issues: 0

martingale

Quick simulation to see how martingale betting would play out under realistic conditions (i.e. a finite but large bankroll), as well as with the finite stopping condition removed

Language: Jupyter Notebook | License: CC0-1.0 | Stargazers: 0 | Issues: 1 | Issues: 0
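A sketch of the kind of simulation the description refers to: martingale (double-after-loss) betting against a finite but large bankroll. The win probability, bankroll, and round count below are illustrative assumptions, not the repo's settings.

```python
# Martingale betting against a finite bankroll (parameters are illustrative).
import numpy as np

def martingale_run(bankroll=1_000_000, base_bet=1, p_win=18 / 38, n_rounds=10_000, seed=0):
    rng = np.random.default_rng(seed)
    bet = base_bet
    for _ in range(n_rounds):
        if bet > bankroll:          # cannot cover the doubled bet: effectively ruined
            break
        if rng.random() < p_win:
            bankroll += bet
            bet = base_bet          # reset after a win
        else:
            bankroll -= bet
            bet *= 2                # double after a loss
    return bankroll

finals = [martingale_run(seed=s) for s in range(100)]
print(np.mean(finals), np.min(finals))
```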

Mega-pytorch

Implementation of Mega, the Single-head Attention with Multi-headed EMA architecture that currently holds SOTA on Long Range Arena

Language: Python | License: MIT | Stargazers: 0 | Issues: 0 | Issues: 0
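A simplified sketch of the multi-headed damped EMA that gives Mega its name (new input weighted by α, previous state decayed by 1 − α⊙δ); this illustrates the recurrence only and is not this repo's implementation.

```python
# Multi-headed damped EMA, written as an explicit loop for clarity.
import torch

def multihead_damped_ema(x, alpha, delta):
    """x: (batch, seq_len, heads); alpha, delta: (heads,) in (0, 1)."""
    h = torch.zeros(x.shape[0], x.shape[-1])
    out = []
    for t in range(x.shape[1]):
        # New input weighted by alpha, previous state decayed by (1 - alpha * delta).
        h = alpha * x[:, t] + (1 - alpha * delta) * h
        out.append(h)
    return torch.stack(out, dim=1)

x = torch.randn(2, 16, 4)
y = multihead_damped_ema(x, alpha=torch.rand(4), delta=torch.rand(4))
```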

pytorch

Tensors and Dynamic neural networks in Python with strong GPU acceleration

Language: C++ | License: NOASSERTION | Stargazers: 0 | Issues: 0 | Issues: 0

RWKV-CUDA

The CUDA version of the RWKV language model ( https://github.com/BlinkDL/RWKV-LM )

Language: Cuda | Stargazers: 0 | Issues: 0 | Issues: 0

RWKV-LM

RWKV is an RNN with transformer-level performance. It can be trained directly like a GPT (parallelizable), combining the best of RNNs and transformers: great performance, fast inference, low VRAM use, fast training, "infinite" ctx_len, and free sentence embeddings.

Language: Python | License: Apache-2.0 | Stargazers: 0 | Issues: 0 | Issues: 0
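A heavily simplified, numerically unstabilized sketch of a WKV-style recurrence, only to show why RWKV inference is RNN-like (each new token updates a constant-size state). The decay/bonus parametrization here is an assumption; the actual repo uses a stabilized CUDA kernel.

```python
# Simplified WKV-style recurrence: constant work and memory per generated token.
import torch

def wkv_step(k_t, v_t, state, decay, bonus):
    """k_t, v_t: (dim,) for the current token; decay in (0, 1), bonus: learned (dim,)."""
    num, den = state
    # Output mixes the running history with a "bonus" weight for the current token.
    out = (num + torch.exp(bonus + k_t) * v_t) / (den + torch.exp(bonus + k_t))
    # Decay the history and fold the current token into the state.
    num = decay * num + torch.exp(k_t) * v_t
    den = decay * den + torch.exp(k_t)
    return out, (num, den)

dim = 8
state = (torch.zeros(dim), torch.zeros(dim))
decay, bonus = torch.rand(dim), torch.randn(dim)
for _ in range(5):
    k_t, v_t = torch.randn(dim), torch.randn(dim)
    out, state = wkv_step(k_t, v_t, state, decay, bonus)
```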

smart-on-fhir-tutorial

SMART on FHIR developer tutorial

Language: JavaScript | Stargazers: 0 | Issues: 0 | Issues: 0

sru

Training RNNs as Fast as CNNs (https://arxiv.org/abs/1709.02755)

Language: Python | License: NOASSERTION | Stargazers: 0 | Issues: 0 | Issues: 0
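The usage pattern expected from the upstream sru package, where an SRU acts roughly as a drop-in replacement for nn.LSTM with (seq_len, batch, input_size) inputs; check the repo for the exact API and state shapes.

```python
# SRU as an LSTM-like drop-in; its heavy matrix multiplies are independent
# across timesteps, which is what makes training CNN-fast.
import torch
from sru import SRU

rnn = SRU(input_size=128, hidden_size=128, num_layers=2)
x = torch.randn(50, 16, 128)        # (seq_len, batch, input_size), like nn.LSTM
output, state = rnn(x)              # output: (50, 16, 128)
```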

tinygrad

You like pytorch? You like micrograd? You love tinygrad! ❤️

Language: Python | License: MIT | Stargazers: 0 | Issues: 0 | Issues: 0