송승훈's repositories
Awesome-Prompting-on-Vision-Language-Model
This repo lists relevant papers summarized in our survey paper: A Systematic Survey of Prompt Engineering on Vision-Language Foundation Models.
my-eda-project
Repository for EDA project team 4. 🎲 Board game trend analysis 🎲
backprop-alts
This repository has implementations of various alternatives to backpropagation for training neural networks.
custom_model
Experiments in customizing Gemma.
CustomizedSpikeGPT
Customized fork of the original model, an implementation of "SpikeGPT: Generative Pre-trained Language Model with Spiking Neural Networks".
decision-transformer
Official codebase for Decision Transformer: Reinforcement Learning via Sequence Modeling.
deeplearning-carla-project
My deep learning project using the CARLA simulator.
DI-engine
OpenDILab Decision AI Engine
jepa
Experiments in Joint Embedding Predictive Architectures (JEPAs).
memformer
Implementation of Memformer, a memory-augmented Transformer, in PyTorch
MemTorch
A Simulation Framework for Memristive Deep Learning Systems
minimalRL
Implementations of basic RL algorithms in minimal lines of code (PyTorch-based)!
Multi-Modal-Transformer
This repository collects a wide range of multi-modal transformer architectures, including image transformers, video transformers, image-language transformers, video-language transformers, and self-supervised learning models. It also gathers useful tutorials and tools from these related domains.
pynput
Sends virtual input commands
pywinauto
Windows GUI Automation with Python (based on text properties)
RWKV-LM
RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), so it combines the best of RNNs and transformers: great performance, fast inference, low VRAM use, fast training, "infinite" ctx_len, and free sentence embeddings.
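As a minimal sketch of why inference is fast and memory-light, here is one recurrent step loosely following the RWKV-4 time-mixing (WKV) recurrence in RNN mode. The function name, shapes, and simplified decay parameterization are illustrative assumptions, not taken from the repo, and RWKV's numerical-stability tricks are omitted:

```python
import torch

def wkv_step(k, v, state, w, u):
    """One recurrent step of a simplified WKV aggregation (illustrative).

    k, v:  current token's per-channel key and value vectors
    state: (a, b) = decayed running sums of e^{k_i} * v_i and e^{k_i}
    w:     per-channel decay rate (positive); u: bonus weight for the current token
    """
    a, b = state
    exp_ku = torch.exp(u + k)            # current token gets an extra "bonus" weight
    wkv = (a + exp_ku * v) / (b + exp_ku)  # weighted average over all history, O(channels)
    decay = torch.exp(-w)                # exponential forgetting of the past
    a = decay * a + torch.exp(k) * v     # fold the current token into the running sums
    b = decay * b + torch.exp(k)
    return wkv, (a, b)
```

Because each token only updates fixed-size running sums, per-token cost is constant regardless of context length; at training time the same weighted sums can be computed in parallel across the sequence, which is the "trained like a GPT" claim.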
Spiking-Neural-Network
Pure Python implementation of a spiking neural network (SNN)
Spiking-Neural-Network-SNN-with-PyTorch-where-Backpropagation-engenders-STDP
What about coding a spiking neural network in an automatic differentiation framework? In SNNs there is a time axis: the network sees data over time, and instead of conventional activation functions, neurons emit spikes when their pre-activation crosses a threshold. Pre-activation values constantly fade if the neurons aren't excited enough.
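A minimal sketch of that idea in PyTorch (class names, the decay constant, and the surrogate-gradient shape are my own illustrative choices, not the repo's code): a leaky integrate-and-fire layer whose membrane potential decays each step, fires a spike past a threshold, and uses a surrogate gradient so autograd can train through the non-differentiable spike:

```python
import torch

class SpikeFn(torch.autograd.Function):
    """Heaviside spike in the forward pass, surrogate gradient in the backward pass."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()          # spike when pre-activation crosses the threshold

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # Surrogate: a triangular window around the threshold stands in for the
        # zero-almost-everywhere true derivative of the step function.
        return grad_out * torch.clamp(1.0 - v.abs(), min=0.0)

class LIFLayer(torch.nn.Module):
    def __init__(self, in_dim, out_dim, decay=0.9, threshold=1.0):
        super().__init__()
        self.fc = torch.nn.Linear(in_dim, out_dim)
        self.decay, self.threshold = decay, threshold

    def forward(self, x_seq):           # x_seq: (time, batch, in_dim)
        v = torch.zeros(x_seq.size(1), self.fc.out_features, device=x_seq.device)
        spikes = []
        for x in x_seq:                 # the time axis the description mentions
            v = self.decay * v + self.fc(x)       # potential fades without excitation
            s = SpikeFn.apply(v - self.threshold)
            v = v * (1.0 - s)                     # reset neurons that fired
            spikes.append(s)
        return torch.stack(spikes)      # spike train: (time, batch, out_dim)
```

Stacking such layers and, say, summing output spikes over time yields a rate-coded prediction that ordinary PyTorch optimizers can train end to end.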
thought-and-memory
Investigations into transformers with memory.