There are 16 repositories under the kolmogorov-arnold-networks topic.
A PyTorch implementation of Generative Pre-trained Transformers (GPTs) using Kolmogorov-Arnold Networks (KANs) for language modeling
This project is dedicated to the implementation and research of Kolmogorov-Arnold convolutional networks. The repository includes implementations of 1D, 2D, and 3D convolutions with different kernels, ResNet-like and DenseNet-like models, training code based on accelerate/PyTorch, as well as scripts for experiments with CIFAR-10 and Tiny ImageNet.
A demonstration of how to use Kolmogorov-Arnold Networks (KANs) for classification and regression tasks.
Improved LBFGS and LBFGS-B optimizers in PyTorch.
Testing KAN-based text generation GPT models
Combines B-Splines (BS) and Radial Basis Functions (RBF) in Kolmogorov-Arnold Networks (KANs)
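The repository's exact formulation is not shown on this page; as a rough sketch of the general idea, a KAN edge activation that mixes the two bases can evaluate a B-spline expansion and a Gaussian RBF expansion of the same scalar input and sum them. All knots, centers, widths, and coefficients below are illustrative choices, not values from the repo:

```python
import numpy as np

def bspline_basis(x, knots, degree):
    """Cox-de Boor recursion; returns a (len(x), len(knots)-degree-1) basis matrix."""
    x = np.asarray(x, dtype=float)
    # degree-0 bases: indicators of the half-open knot intervals
    B = np.array([(x >= knots[i]) & (x < knots[i + 1])
                  for i in range(len(knots) - 1)], dtype=float).T
    for d in range(1, degree + 1):
        B_new = np.zeros((len(x), len(knots) - d - 1))
        for i in range(len(knots) - d - 1):
            left = knots[i + d] - knots[i]
            right = knots[i + d + 1] - knots[i + 1]
            if left > 0:
                B_new[:, i] += (x - knots[i]) / left * B[:, i]
            if right > 0:
                B_new[:, i] += (knots[i + d + 1] - x) / right * B[:, i + 1]
        B = B_new
    return B

def rbf_basis(x, centers, width):
    """Gaussian radial basis functions evaluated at x."""
    x = np.asarray(x, dtype=float)
    return np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)

# One edge function phi(x): spline part + RBF part with random (stand-in
# for learnable) coefficients. Uniform knots -3..7 make the cubic splines
# a partition of unity on [0, 4].
knots = np.arange(-3.0, 8.0)
centers = np.linspace(0.0, 4.0, 5)
x = np.linspace(0.5, 3.5, 20)
rng = np.random.default_rng(0)
B = bspline_basis(x, knots, 3)        # shape (20, 7)
R = rbf_basis(x, centers, 1.0)        # shape (20, 5)
phi = B @ rng.normal(size=7) + R @ rng.normal(size=5)
```

In a full KAN layer, each input-output edge would carry its own coefficient vectors, trained by gradient descent; the spline part gives local control while the RBF part adds smooth global bumps.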
KANs for text classification on GLUE tasks
Experiments on using Kolmogorov-Arnold Networks (KAN) on Graph Learning
KAN meets Gram Polynomials
This is the repo for the MixKABRN Neural Network (Mixture of Kolmogorov-Arnold Bit Retentive Networks): an attempt to first adapt it for training on text, and later adjust it for other modalities.
This is a GPT model from nanoGPT but with a twist of KAN:)
DL model deployment using Docker, API deployment with FastAPI, and MLOps using WandB for the overhead-mnist dataset
Just experimenting with KANs in PyTorch
Generative Adversarial Networks (GANs) using Kolmogorov-Arnold Network Layers (KANLs)
An implementation of the KAN architecture using learnable activation functions for knowledge distillation on the MNIST handwritten digits dataset. The project demonstrates distilling a three-layer teacher KAN model into a more compact two-layer student model, comparing the performance impacts of distillation versus non-distilled models.
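The repository's own loss is not reproduced on this page; the description matches standard soft-target knowledge distillation, which can be sketched as below. The temperature `T` and mixing weight `alpha` are hypothetical defaults, not values taken from the repo:

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.3):
    """alpha * hard-label cross-entropy + (1 - alpha) * T^2 * KL(teacher || student).

    T and alpha are illustrative hyperparameters, not from the repository."""
    n = len(labels)
    p_s = softmax(student_logits / T)          # softened student distribution
    p_t = softmax(teacher_logits / T)          # softened teacher distribution
    kl = np.mean(np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=1))
    hard = -np.mean(np.log(softmax(student_logits)[np.arange(n), labels] + 1e-12))
    return alpha * hard + (1 - alpha) * T**2 * kl

# demo on random logits for an 8-example, 10-class batch
rng = np.random.default_rng(0)
student = rng.normal(size=(8, 10))
teacher = rng.normal(size=(8, 10))
labels = rng.integers(0, 10, size=8)
loss = distillation_loss(student, teacher, labels)
```

The T² factor keeps the gradient scale of the soft term comparable to the hard term as the temperature grows; in the MNIST setup described above, the teacher logits would come from the frozen three-layer KAN and the student logits from the two-layer one.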