hkxIron's repositories


allennlp

An open-source NLP research library, built on PyTorch.

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

Book-Mathmatical-Foundation-of-Reinforcement-Learning

This is the homepage of a new book entitled "Mathematical Foundations of Reinforcement Learning."

Stargazers: 0 · Issues: 0

Chinese-LLaMA-Alpaca

Chinese LLaMA & Alpaca large language models, with local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

cs231n.github.io

Public facing notes page

License: MIT · Stargazers: 0 · Issues: 0

gisting

Learning to Compress Prompts with Gist Tokens - https://arxiv.org/abs/2304.08467

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

lit-llama

Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed.

License: Apache-2.0 · Stargazers: 0 · Issues: 0

LLaMA-Factory

Easy-to-use LLM fine-tuning framework (LLaMA, BLOOM, Mistral, Baichuan, Qwen, ChatGLM)

License: Apache-2.0 · Stargazers: 0 · Issues: 0

llama2.c

Inference Llama 2 in one file of pure C

License: MIT · Stargazers: 0 · Issues: 0

LLM-Tuning

Tuning LLMs with no tears💦, sharing LLM-tools with love❤️.

Stargazers: 0 · Issues: 0

LoRA

Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"

License: MIT · Stargazers: 0 · Issues: 0
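
As a rough illustration of the low-rank update that loralib implements, here is a minimal PyTorch sketch. It is not the library's actual API: the class name LoRALinear and the r/alpha defaults are illustrative assumptions. The idea is a frozen pretrained weight plus a trainable B·A product scaled by alpha/r.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update: W x + (alpha/r) * B A x."""
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)                           # freeze the pretrained weight
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)   # down-projection
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))         # up-projection, zero-init
        self.scaling = alpha / r

    def forward(self, x):
        # The base path uses the frozen weight; only lora_A and lora_B receive gradients.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

x = torch.randn(4, 64)
layer = LoRALinear(64, 128)
print(layer(x).shape)  # torch.Size([4, 128])
```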

minbpe

Minimal, clean code for the Byte Pair Encoding (BPE) algorithm commonly used in LLM tokenization.

License: MIT · Stargazers: 0 · Issues: 0
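
For context, the core merge loop of the BPE algorithm the description refers to fits in a few lines. The sketch below is a simplified stand-in under byte-level tokenization, not minbpe's actual code, and the function names are my own.

```python
from collections import Counter

def get_pair_counts(ids):
    """Count adjacent token-id pairs in the sequence."""
    return Counter(zip(ids, ids[1:]))

def merge(ids, pair, new_id):
    """Replace every occurrence of `pair` with the new token id."""
    out, i = [], 0
    while i < len(ids):
        if i < len(ids) - 1 and (ids[i], ids[i + 1]) == pair:
            out.append(new_id)
            i += 2
        else:
            out.append(ids[i])
            i += 1
    return out

def train_bpe(text, num_merges):
    """Learn `num_merges` merge rules over raw UTF-8 bytes."""
    ids = list(text.encode("utf-8"))
    merges = {}
    for new_id in range(256, 256 + num_merges):
        counts = get_pair_counts(ids)
        if not counts:
            break
        pair = max(counts, key=counts.get)   # most frequent adjacent pair
        ids = merge(ids, pair, new_id)
        merges[pair] = new_id
    return merges

print(train_bpe("aaabdaaabac", 3))
```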

mistral-src

Reference implementation of Mistral AI 7B v0.1 model.

License: Apache-2.0 · Stargazers: 0 · Issues: 0

nanoGPT

The simplest, fastest repository for training/finetuning medium-sized GPTs.

Language: Python · License: MIT · Stargazers: 0 · Issues: 0
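
The next-token training loop behind "training/finetuning medium-sized GPTs" can be sketched roughly as follows. The tiny stand-in model, random data, and hyperparameters are placeholders, not nanoGPT's actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Tiny stand-in "GPT": embedding -> one transformer layer with a causal mask -> vocab logits.
vocab_size, block_size, d_model = 256, 32, 64
embed = nn.Embedding(vocab_size, d_model)
layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
head = nn.Linear(d_model, vocab_size)
params = list(embed.parameters()) + list(layer.parameters()) + list(head.parameters())
opt = torch.optim.AdamW(params, lr=3e-4)

data = torch.randint(0, vocab_size, (1024,))     # fake byte-level corpus
mask = nn.Transformer.generate_square_subsequent_mask(block_size)  # causal attention mask

def get_batch(batch_size=8):
    ix = torch.randint(0, len(data) - block_size - 1, (batch_size,))
    x = torch.stack([data[i:i + block_size] for i in ix])
    y = torch.stack([data[i + 1:i + block_size + 1] for i in ix])   # targets shifted by one
    return x, y

for step in range(100):
    x, y = get_batch()
    logits = head(layer(embed(x), src_mask=mask))
    loss = F.cross_entropy(logits.view(-1, vocab_size), y.view(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```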

NBCE

Naive Bayes-based Context Extension

Language: Python · Stargazers: 0 · Issues: 0
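
A plausible reading of "Naive Bayes-based Context Extension" is that per-context next-token distributions are pooled Naive-Bayes style: sum the conditional log-probabilities over context chunks and subtract (n−1) times the prior. The sketch below encodes only that pooling rule; it is my interpretation under placeholder logits, not the repository's implementation.

```python
import torch
import torch.nn.functional as F

def nbce_pool(context_logits, prior_logits):
    """
    Naive Bayes pooling of next-token distributions:
        log p(t | c_1..c_n) ∝ sum_i log p(t | c_i) - (n - 1) * log p(t)
    context_logits: (n_contexts, vocab) logits, one row per context chunk
    prior_logits:   (vocab,) logits from an empty / generic prompt
    """
    n = context_logits.shape[0]
    log_probs = F.log_softmax(context_logits, dim=-1)   # log p(t | c_i)
    log_prior = F.log_softmax(prior_logits, dim=-1)     # log p(t)
    return log_probs.sum(dim=0) - (n - 1) * log_prior   # unnormalized log-scores

# Toy example with a 5-token vocabulary and 3 context chunks.
ctx = torch.randn(3, 5)
prior = torch.randn(5)
print(nbce_pool(ctx, prior).argmax())  # greedy next token under the pooled distribution
```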

numpy-ml

Machine learning, in numpy

Language: Python · License: GPL-3.0 · Stargazers: 0 · Issues: 1

SimCSE-Pytorch

Implementation of SimCSE + ESimCSE on Chinese datasets

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

SimCSE_princeton

EMNLP'2021: SimCSE: Simple Contrastive Learning of Sentence Embeddings https://arxiv.org/abs/2104.08821

Language: Python · License: MIT · Stargazers: 0 · Issues: 0
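
The unsupervised SimCSE objective from the paper pairs two dropout views of each sentence and applies an in-batch contrastive (InfoNCE) loss. The sketch below shows that loss with placeholder embeddings; in a real setup z1 and z2 come from running the same encoder (e.g. BERT) twice on the batch so only the dropout masks differ. It is not the repository's code.

```python
import torch
import torch.nn.functional as F

def simcse_loss(z1, z2, temperature=0.05):
    """
    Unsupervised SimCSE objective: z1[i] and z2[i] are two dropout views of the
    same sentence; every other sentence in the batch serves as a negative.
    z1, z2: (batch, dim) sentence embeddings
    """
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    sim = z1 @ z2.T / temperature            # (batch, batch) cosine similarities
    labels = torch.arange(z1.size(0))        # positives sit on the diagonal
    return F.cross_entropy(sim, labels)

# Toy usage with random embeddings standing in for two encoder passes.
batch, dim = 8, 32
z1, z2 = torch.randn(batch, dim), torch.randn(batch, dim)
print(simcse_loss(z1, z2))
```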

Skywork

Skywork series models are pre-trained on 3.2TB of high-quality multilingual (mainly Chinese and English) and code data. We have open-sourced the model, training data, evaluation data, evaluation methods, etc.

License: NOASSERTION · Stargazers: 0 · Issues: 0