Logesh kumar's repositories
Bespoke_LLMs_TALK
Slides and code for "Bespoke LLMs: Adapting OSS LLMs for Your Use Case".
Interpretable-NLP-Talk
Hack Session on Interpreting NLP models at DataHack Summit 2019.
language-detection-fasttext
Sample code illustrating the usage of fastText's pre-trained language identification model.
Colab_Transformers
Colab notebooks and code for training and running inference with BERT and other transformer models from the pytorch-transformers library.
NLP_paper_notes
My notes and observations on interesting NLP papers. My interests are representation learning, metric learning, and information retrieval.
Multilabel-Transformers
A repository for fine-tuning transformers on multilabel classification tasks (based on the Transformers library).
FastTokenizersWrapper
A wrapper for Hugging Face's Tokenizers library so it can be used with existing versions of the Hugging Face Transformers library.
ABSA-BERT-QA
Aspect-based sentiment analysis framed as a QA problem.
bigcode-evaluation-harness
A framework for the evaluation of autoregressive code generation language models.
flax-sentence-embeddings
Shared code for training sentence embeddings with Flax / JAX
How-to-do-more-with-less-data
Blog posts and demos on using active learning to achieve good ML performance with less data.
infinitylogesh.github.io
My Blog
keras-scaffolding
Scaffolding for Keras and TensorFlow with built-in callbacks, metrics, and logging.
open-interpreter
A natural language interface for computers
probability_simulations
Simulations of probability problems and concepts
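As a hedged illustration of the kind of simulation such a repository might contain (the function name and parameters here are hypothetical, not taken from the repo), a Monte Carlo estimate of the classic birthday problem can be sketched in a few lines of standard-library Python:

```python
import random

def birthday_collision_prob(n_people, trials=100_000, seed=0):
    """Monte Carlo estimate of the probability that at least two of
    n_people share a birthday (assuming 365 equally likely days)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # Draw a random birthday for each person
        birthdays = [rng.randrange(365) for _ in range(n_people)]
        # A collision occurred if any birthday repeats
        hits += len(set(birthdays)) < len(birthdays)
    return hits / trials
```

For 23 people the estimate should land near the analytical value of about 0.507.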
santacoder-finetuning
Fine-tune SantaCoder for code/text generation.
sentence-transformers
Sentence Embeddings with BERT & XLNet
Speculative-Sampling
Implementation of speculative sampling as described in "Accelerating Large Language Model Decoding with Speculative Sampling" by DeepMind.
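The core accept/reject rule of speculative sampling can be sketched in plain Python (this is an illustrative sketch of the published algorithm, not code from the repository; the function name is hypothetical). A draft distribution q proposes a token; it is accepted with probability min(1, p(x)/q(x)) against the target distribution p, and on rejection a token is drawn from the normalized residual max(0, p - q). This rule guarantees the output token is distributed exactly according to p:

```python
import random

def speculative_step(p, q, rng):
    """One accept/reject step of speculative sampling.

    p: target-model token probabilities (list of floats summing to 1)
    q: draft-model token probabilities (same length)
    Returns a token index distributed exactly according to p.
    """
    # Draft model proposes a token from q
    x = rng.choices(range(len(q)), weights=q)[0]
    # Accept with probability min(1, p[x] / q[x])
    if rng.random() < min(1.0, p[x] / q[x]):
        return x
    # On rejection, resample from the residual distribution max(0, p - q)
    residual = [max(0.0, pi - qi) for pi, qi in zip(p, q)]
    z = sum(residual)
    return rng.choices(range(len(p)), weights=[r / z for r in residual])[0]
```

Running this step many times with a fixed p and q and tallying the outputs recovers p empirically, which is the property that lets a cheap draft model accelerate decoding without changing the target model's output distribution.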
TensorRT-LLM
TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.