Daniel Shats's repositories
MoCoV2_CIFAR10
Training MoCoV2 on the CIFAR10 Dataset
hot_budr_snake
A snake on the lookout for some good ol' hot_budr
Unofficial-Poster-Template-for-Technion-Computer-Science
Unofficial Poster Template for Technion Computer Science. Copied from the UChicago version here: https://www.overleaf.com/latex/templates/unofficial-poster-template-for-uchicago-computer-science/kbbmbdxwbypb
fuse-drug
FuseMedML-based molecular biochemistry library for drug discovery/repurposing
fuse-med-ml
A Python framework accelerating ML-based discovery in the medical field by encouraging code reuse. Batteries included :)
improved-neural-algorithm-of-artistic-style
Improving style transfer of VGG using adversarial training
budr_blog
My fastpages blog
ChessGameOfficial
This is the official repository for the COP3503 Group_33 Chess Game
chi_squared
Implementation of Pearson's chi-squared test in Python
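As a rough illustration of what such a test computes (a hedged sketch only; the function name and example counts below are mine, not the repo's API):

```python
# Minimal sketch of Pearson's chi-squared goodness-of-fit test.
# Illustrative only: names and example counts are assumptions, not the repo's code.
from scipy.stats import chi2

def chi_squared_statistic(observed, expected):
    """Sum of (O - E)^2 / E over all categories."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

observed = [48, 35, 15, 3]   # observed counts per category
expected = [50, 30, 15, 6]   # expected counts under the null hypothesis
stat = chi_squared_statistic(observed, expected)
p_value = chi2.sf(stat, df=len(observed) - 1)   # survival function = 1 - CDF
print(f"chi2 = {stat:.3f}, p = {p_value:.3f}")
```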
full_attention
Full attention for neural networks
gaeble-diffusion
A latent text-to-image diffusion model
Habit-Tracking-App
This repository is dedicated to maintaining and updating the Software Engineering desktop application
imagenette_starter
Starter kit for Imagenette.
Monty-Hall
I was having trouble understanding the Monty Hall problem, so I wrote a program to confirm it for myself. About halfway through writing the program, I finally realized how the damn thing works. I would definitely recommend that anyone else having trouble with the Monty Hall problem (or any problem, for that matter) write it out in code.
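For anyone curious, a minimal simulation in the spirit of that description might look like this (an illustrative sketch, not the repository's actual code):

```python
# Monty Hall simulation: estimate win rates for staying vs. switching.
# Illustrative sketch; not the repository's actual code.
import random

def play(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)      # door hiding the car
        pick = random.randrange(3)     # contestant's first choice
        # Host opens a door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining closed door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(f"stay:   {play(switch=False):.3f}")   # ~1/3
print(f"switch: {play(switch=True):.3f}")    # ~2/3
```

Switching converges to ~2/3 because the initial pick is wrong 2/3 of the time, and in exactly those cases the host's reveal leaves the car behind the remaining door.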
more_better
MORE CRITERION MORE BETTERRR
Neural-Network-Architecture-Diagrams
Diagrams for visualizing neural network architecture (Created with diagrams.net)
pdbpp
pdb++, a drop-in replacement for pdb (the Python debugger)
pi_opencv
Just my first time playing with OpenCV on a Raspberry Pi
Project_1_Lists
A collection of C++ list classes I built for my data structures class. They include iterators, which is super cool!
PyTorch-YOLOv3
Minimal PyTorch implementation of YOLOv3
RWKV-LM
RWKV-2 is an RNN with transformer-level performance. It can be trained directly like a GPT transformer (parallelizable), combining the best of RNNs and transformers: great performance, fast inference, low VRAM use, fast training, "infinite" ctx_len, and free sentence embeddings.
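For intuition, recurrent ("RNN-mode") inference in this family of models boils down to carrying a running weighted average of past values with a per-channel exponential decay. The sketch below is a simplified, un-stabilized version of that style of time-mixing recurrence; the exact formulation differs between RWKV versions, so treat the shapes and names as assumptions:

```python
# Simplified WKV-style time-mixing recurrence (no numerical stabilization).
# Assumed shapes/names for illustration; not this repo's exact formulation.
import torch

def wkv_recurrent(w, u, k, v):
    # k, v: (T, C) keys/values; w: (C,) positive per-channel decay; u: (C,) current-token bonus
    T, C = k.shape
    num = torch.zeros(C)   # running numerator:   decayed sum of exp(k_i) * v_i
    den = torch.zeros(C)   # running denominator: decayed sum of exp(k_i)
    out = []
    for t in range(T):
        e_cur = torch.exp(u + k[t])                     # current token, weighted by bonus u
        out.append((num + e_cur * v[t]) / (den + e_cur))
        decay = torch.exp(-w)                           # fold token t into the decayed history
        num = decay * (num + torch.exp(k[t]) * v[t])
        den = decay * (den + torch.exp(k[t]))
    return torch.stack(out)

T, C = 8, 4
out = wkv_recurrent(torch.rand(C), torch.zeros(C), torch.randn(T, C), torch.randn(T, C))
print(out.shape)  # torch.Size([8, 4]) -- constant memory per step at inference time
```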