Can Goksen (cangoksen)

Company: Microsoft

Location: Seattle, WA

Twitter: @KnockturnalNed

Can Goksen's starred repositories

transformers

🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.

Language: Python · License: Apache-2.0 · Stargazers: 132,927 · Watchers: 1,117 · Issues: 15,854
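
As a quick taste of the library, a minimal sketch using its pipeline API; with no checkpoint specified, the task alias falls back to a library default, so the printed output below is illustrative:

```python
# Minimal sketch of the transformers pipeline API. "sentiment-analysis" is a
# built-in task alias; with no model given, the library's default checkpoint
# is downloaded, so the exact label/score printed is illustrative.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
result = classifier("Starred repositories are a decent proxy for interests.")
print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```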

stable-diffusion

A latent text-to-image diffusion model

Language: Jupyter Notebook · License: NOASSERTION · Stargazers: 67,758 · Watchers: 558 · Issues: 711

cpython

The Python programming language

Language: Python · License: NOASSERTION · Stargazers: 62,734 · Watchers: 1,519 · Issues: 69,374

openai-cookbook

Examples and guides for using the OpenAI API
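
The cookbook's recipes build on the OpenAI Python SDK; a minimal hedged sketch in the v1+ client style, assuming OPENAI_API_KEY is set in the environment and with the model name chosen only for illustration:

```python
# Sketch against the OpenAI Python SDK (v1+ client style); assumes
# OPENAI_API_KEY is set in the environment. The model name below is an
# assumption -- substitute whatever model your account has access to.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name, for illustration only
    messages=[{"role": "user", "content": "Summarize FFT convolution in one sentence."}],
)
print(response.choices[0].message.content)
```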

FastChat

An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.

Language: Python · License: Apache-2.0 · Stargazers: 36,585 · Watchers: 347 · Issues: 1,778

tinygrad

You like pytorch? You like micrograd? You love tinygrad! ❤️

Language: Python · License: MIT · Stargazers: 26,419 · Watchers: 273 · Issues: 731

flash-attention

Fast and memory-efficient exact attention

Language: Python · License: BSD-3-Clause · Stargazers: 13,633 · Watchers: 115 · Issues: 1,047
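
The memory efficiency comes from never materializing the full attention matrix; a NumPy sketch of the online-softmax blockwise idea behind it (a drastic simplification of the fused CUDA kernels the repo actually ships; the helper name is illustrative):

```python
# Sketch of online-softmax attention over key/value blocks, the core idea
# behind memory-efficient exact attention: the full (n x n) score matrix is
# never materialized, only running statistics per block.
import numpy as np

def blockwise_attention(q, K, V, block=128):
    """Exact attention for a single query vector q against K, V in blocks."""
    d = q.shape[-1]
    m = -np.inf                   # running max of scores (numerical stability)
    l = 0.0                       # running softmax denominator
    acc = np.zeros(V.shape[-1])   # running weighted sum of values
    for start in range(0, K.shape[0], block):
        s = K[start:start + block] @ q / np.sqrt(d)  # scores for this block
        m_new = max(m, s.max())
        scale = np.exp(m - m_new)                    # rescale old accumulators
        p = np.exp(s - m_new)
        l = l * scale + p.sum()
        acc = acc * scale + p @ V[start:start + block]
        m = m_new
    return acc / l

rng = np.random.default_rng(0)
K, V = rng.normal(size=(1024, 64)), rng.normal(size=(1024, 64))
q = rng.normal(size=64)
out = blockwise_attention(q, K, V)
# matches the naive all-at-once computation:
w = np.exp(K @ q / 8 - (K @ q / 8).max())
assert np.allclose(out, (w / w.sum()) @ V)
```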

tortoise-tts

A multi-voice TTS system trained with an emphasis on quality

Language: Jupyter Notebook · License: Apache-2.0 · Stargazers: 13,000 · Watchers: 172 · Issues: 515

ML-Papers-of-the-Week

🔥Highlighting the top ML papers every week.

attention-is-all-you-need-pytorch

A PyTorch implementation of the Transformer model in "Attention is All You Need".

Language: Python · License: MIT · Stargazers: 8,779 · Watchers: 96 · Issues: 181
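
The operation at the heart of that paper is scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ/√d_k)V; a minimal PyTorch sketch:

```python
# Scaled dot-product attention as defined in the paper:
# Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
import math
import torch

def scaled_dot_product_attention(Q, K, V, mask=None):
    d_k = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / math.sqrt(d_k)
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))
    return torch.softmax(scores, dim=-1) @ V

Q = torch.randn(2, 8, 10, 64)   # (batch, heads, seq, d_k)
K, V = torch.randn_like(Q), torch.randn_like(Q)
print(scaled_dot_product_attention(Q, K, V).shape)  # torch.Size([2, 8, 10, 64])
```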

cuda-samples

Samples for CUDA developers demonstrating features of the CUDA Toolkit

Language: C · License: NOASSERTION · Stargazers: 6,197 · Watchers: 121 · Issues: 238

DiT

Official PyTorch Implementation of "Scalable Diffusion Models with Transformers"

Language: Python · License: NOASSERTION · Stargazers: 6,078 · Watchers: 45 · Issues: 80

cutlass

CUDA Templates for Linear Algebra Subroutines

Language: C++ · License: NOASSERTION · Stargazers: 5,453 · Watchers: 103 · Issues: 1,078
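
CUTLASS decomposes GEMM into hierarchies of tiles; a conceptual NumPy sketch of that blocking idea (the library itself expresses it as C++ templates over thread blocks, warps, and tensor cores; the helper name is illustrative):

```python
# Conceptual sketch of tiled matrix multiplication -- the decomposition that
# CUTLASS's templates implement across CUDA thread blocks and warps. Tiling
# keeps working sets small enough to live in fast memory.
import numpy as np

def tiled_matmul(A, B, tile=64):
    M, K = A.shape
    _, N = B.shape
    C = np.zeros((M, N), dtype=A.dtype)
    for i in range(0, M, tile):          # tile over rows of C
        for j in range(0, N, tile):      # tile over columns of C
            for k in range(0, K, tile):  # accumulate over the K dimension
                C[i:i+tile, j:j+tile] += A[i:i+tile, k:k+tile] @ B[k:k+tile, j:j+tile]
    return C

A, B = np.random.rand(256, 192), np.random.rand(192, 128)
assert np.allclose(tiled_matmul(A, B), A @ B)
```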

linux-surface

Linux Kernel for Surface Devices

GLIP

Grounded Language-Image Pre-training

Language: Python · License: MIT · Stargazers: 2,176 · Watchers: 46 · Issues: 171

Codex-CLI

CLI tool that uses Codex to turn natural language commands into their Bash/ZShell/PowerShell equivalents

Language: Python · License: MIT · Stargazers: 1,991 · Watchers: 32 · Issues: 85

calib_challenge

The comma.ai Calibration Challenge!

pytorch-lr-finder

A learning rate range test implementation in PyTorch

Language: Python · License: MIT · Stargazers: 916 · Watchers: 14 · Issues: 61
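
The underlying range test (after Leslie Smith) sweeps the learning rate exponentially over a short run and records the loss; a self-contained sketch of that loop, leaving the repo's own wrapper API aside (model and batches here are stand-ins):

```python
# Sketch of the learning-rate range test: sweep the LR exponentially over a
# short run, record (lr, loss) pairs, and pick an LR a bit below the point
# where the loss starts to diverge. Model and data are stand-ins.
import torch
import torch.nn as nn

model = nn.Linear(10, 1)
criterion = nn.MSELoss()
start_lr, end_lr, num_iter = 1e-6, 1.0, 100
optimizer = torch.optim.SGD(model.parameters(), lr=start_lr)
gamma = (end_lr / start_lr) ** (1 / num_iter)  # multiplicative LR step

history, lr = [], start_lr
for _ in range(num_iter):
    x, y = torch.randn(32, 10), torch.randn(32, 1)  # stand-in batch
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    history.append((lr, loss.item()))
    lr *= gamma
    for group in optimizer.param_groups:
        group["lr"] = lr
```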

Efficient-3DCNNs

PyTorch implementation of "Resource Efficient 3D Convolutional Neural Networks", with code and pretrained models.

Language: Python · License: MIT · Stargazers: 767 · Watchers: 14 · Issues: 42
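
A standard resource-efficiency trick in this line of work is the depthwise-separable 3D convolution; an illustrative PyTorch sketch, not the repo's exact block:

```python
# Depthwise-separable 3D convolution: a per-channel spatial(-temporal) conv
# followed by a 1x1x1 channel-mixing conv, using far fewer parameters and
# FLOPs than a dense Conv3d. Illustrative, not the repo's exact block.
import torch
import torch.nn as nn

class DepthwiseSeparableConv3d(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # depthwise: one 3x3x3 filter per input channel (groups=in_ch)
        self.depthwise = nn.Conv3d(in_ch, in_ch, 3, padding=1, groups=in_ch)
        # pointwise: 1x1x1 conv mixes channels
        self.pointwise = nn.Conv3d(in_ch, out_ch, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

x = torch.randn(1, 16, 8, 32, 32)  # (batch, channels, frames, H, W)
print(DepthwiseSeparableConv3d(16, 32)(x).shape)  # torch.Size([1, 32, 8, 32, 32])
```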

fft-conv-pytorch

Implementation of 1D, 2D, and 3D FFT convolutions in PyTorch. Much faster than direct convolutions for large kernel sizes.

Language: Python · License: MIT · Stargazers: 472 · Watchers: 8 · Issues: 14
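
The speedup follows from the convolution theorem: pointwise multiplication of spectra replaces sliding dot products. A 1D sketch with torch.fft (the helper name is illustrative):

```python
# 1D FFT convolution via the convolution theorem: multiply spectra instead of
# sliding a kernel. Zero-padding to n + k - 1 turns the FFT's circular
# convolution into an ordinary (linear) one.
import torch

def fft_conv1d(signal, kernel):
    n = signal.shape[-1] + kernel.shape[-1] - 1
    S = torch.fft.rfft(signal, n=n)
    K = torch.fft.rfft(kernel, n=n)
    return torch.fft.irfft(S * K, n=n)

x = torch.randn(4096)
k = torch.randn(512)
out = fft_conv1d(x, k)  # "full" convolution; the win over direct
                        # convolution grows with kernel size
```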

awesome-lifelong-continual-learning

A list of papers, blogs, datasets and software in the field of lifelong/continual machine learning

expRNN

Optimization with orthogonal constraints and on general manifolds

Language: Python · License: MIT · Stargazers: 124 · Watchers: 6 · Issues: 5
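
The core parametrization: an orthogonal matrix written as the matrix exponential of a skew-symmetric matrix, so plain gradient steps on the free parameter stay on the manifold. A minimal sketch:

```python
# Orthogonal parametrization via the exponential map: A - A^T is
# skew-symmetric, and expm of a skew-symmetric matrix is orthogonal, so W
# stays orthogonal under unconstrained updates to A.
import torch

A = torch.randn(64, 64) * 0.1          # free parameter (would carry requires_grad)
W = torch.linalg.matrix_exp(A - A.T)   # orthogonal weight

print(torch.allclose(W @ W.T, torch.eye(64), atol=1e-4))  # True
```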

fasth

Code for the article "What if Neural Networks had SVDs?", to be presented as a spotlight paper at NeurIPS 2020.

Language: Python · License: MIT · Stargazers: 69 · Watchers: 4 · Issues: 4
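
The paper keeps an explicit SVD, W = U diag(s) Vᵀ, with U and V as products of Householder reflections; the repo's contribution is computing those products fast. A naive O(d³) sketch of the parametrization (helper name illustrative):

```python
# SVD-parametrized weight W = U diag(s) V^T, with U and V built as products
# of Householder reflections H = I - 2 v v^T / ||v||^2, which are exactly
# orthogonal. This naive version is O(d^3); the repo computes it fast.
import torch

def householder_product(vs):
    d = vs.shape[1]
    W = torch.eye(d)
    for v in vs:
        v = v / v.norm()
        W = W - 2.0 * torch.outer(v, v @ W)  # left-apply H = I - 2 v v^T
    return W

d = 32
U = householder_product(torch.randn(d, d))
V = householder_product(torch.randn(d, d))
s = torch.rand(d) + 0.5                      # singular values, directly learnable
W = (U * s) @ V.T                            # U @ diag(s) @ V.T

print(torch.allclose(U @ U.T, torch.eye(d), atol=1e-5))  # True
```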

knowledge-distillation-for-unet

An implementation of knowledge distillation for segmentation: a small (student) UNet is trained from a larger (teacher) UNet, reducing the size of the network while achieving performance similar to the heavier model.
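
A sketch of the standard distillation loss (temperature-softened KL divergence against the teacher plus the usual supervised term, following Hinton et al.); for segmentation it is applied per pixel, shown per sample here for brevity:

```python
# Standard knowledge-distillation loss: temperature-softened KL against the
# teacher's logits, blended with the ordinary supervised cross-entropy.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.5):
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                      # T^2 keeps gradient scale comparable
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1 - alpha) * hard

s = torch.randn(8, 10)               # student logits (batch, classes)
t = torch.randn(8, 10)               # teacher logits
y = torch.randint(0, 10, (8,))
print(distillation_loss(s, t, y))
```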

fftw-cufftw-benchmark

Benchmark for popular FFT libraries: fftw, cufftw, and cufft

Language: C++ · License: Apache-2.0 · Stargazers: 15 · Watchers: 4 · Issues: 0

GPU-research-FFT-OpenACC-CUDA

Case studies are a modern, interdisciplinary, and valuable teaching practice that plays a critical role in developing new skills and forming new knowledge. This research studies the behavior and performance of two interdisciplinary and widely adopted scientific kernels, a Fast Fourier Transform and matrix multiplication, each implemented in the two currently most popular many-core programming models, CUDA and OpenACC. A Fast Fourier Transform (FFT) samples a signal over a period of time and divides it into its frequency components, computing the Discrete Fourier Transform (DFT) of a sequence. Unlike the traditional approach to computing a DFT, FFT algorithms reduce the complexity of the problem from O(n²) to O(n log₂ n). Matrix multiplication is a cornerstone routine in mathematics, artificial intelligence, and machine learning. This research also shows that the nature of the problem plays a crucial role in determining which many-core model provides the greatest performance benefit.

Language: Cuda · License: MIT · Stargazers: 10 · Watchers: 1 · Issues: 0
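
The complexity claim is easy to verify empirically by comparing a naive O(n²) matrix DFT with NumPy's FFT:

```python
# The O(n^2) -> O(n log n) claim, checked empirically: a naive DFT applied as
# an explicit n x n matrix against NumPy's FFT. Both compute the same result.
import numpy as np

n = 1024
x = np.random.rand(n)
k = np.arange(n)
F = np.exp(-2j * np.pi * np.outer(k, k) / n)  # DFT matrix, O(n^2) to apply

assert np.allclose(F @ x, np.fft.fft(x))      # same transform, different cost
```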

LayerOut

A new regularization technique that stochastically freezes layers of deep neural networks during training.

Language: Python · Stargazers: 4 · Watchers: 2 · Issues: 0
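
A minimal sketch of that idea: before each training step, freeze whole layers at random by toggling requires_grad (the freeze probability below is an illustrative choice, not the paper's schedule):

```python
# Sketch of stochastic layer freezing: randomly disable gradients for whole
# layers before each step, so frozen layers skip that update. The probability
# p is illustrative, not the paper's schedule.
import random
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, 1))

def stochastically_freeze(model, p=0.5):
    for layer in model:
        if isinstance(layer, nn.Linear):
            frozen = random.random() < p
            for param in layer.parameters():
                param.requires_grad = not frozen

stochastically_freeze(model)  # call once per step/epoch, before optimizer.step()
```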

DeepGalerkinMethod

Based on: https://arxiv.org/abs/1811.08782. Our writeup: https://github.com/Dahoas/DeepGalerkinMethod/blob/master/DPDEs.pdf

Language: Python · Stargazers: 1 · Watchers: 1 · Issues: 0
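
The Deep Galerkin Method trains a network to minimize the PDE residual at randomly sampled points rather than on a mesh; a tiny illustrative sketch for u''(x) = -sin(x) on [0, π] with zero boundary values (exact solution u(x) = sin(x)), not the repo's setup:

```python
# Tiny sketch of the Deep Galerkin idea: fit u_theta by minimizing the PDE
# residual at sampled interior points plus a boundary penalty. The problem
# below (u'' = -sin x, u(0) = u(pi) = 0) is illustrative only.
import math
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.rand(256, 1) * math.pi        # random interior collocation points
    x.requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    residual = ((d2u + torch.sin(x)) ** 2).mean()   # enforce u'' = -sin(x)
    xb = torch.tensor([[0.0], [math.pi]])
    boundary = (net(xb) ** 2).mean()                # enforce u = 0 at both ends
    loss = residual + boundary
    opt.zero_grad()
    loss.backward()
    opt.step()
```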