dsj96's repositories

PPR-master

An implementation of the POI recommendation model PPR.

Language: Python · License: MIT · Stargazers: 10 · Issues: 1 · Issues: 2

alpaca-lora

Instruct-tune LLaMA on consumer hardware

Language: Jupyter Notebook · License: Apache-2.0 · Stargazers: 0 · Issues: 0

AlpacaDataCleaned

Alpaca dataset from Stanford, cleaned and curated

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

bert_score

BERT score for text generation

License: MIT · Stargazers: 0 · Issues: 0

Book2_Beauty-of-Data-Visualization

Book_2 "The Beauty of Visualization" (《可视之美》) | Iris-series books (鸢尾花书): from basic arithmetic to machine learning. PDF drafts and Jupyter notebooks are being uploaded; the files will go through at least two more rounds of major revision, so please download the latest version. Feedback is welcome, thank you.

Language: Jupyter Notebook · Stargazers: 0 · Issues: 0

camel

🐫 CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society (NeurIPS 2023) https://www.camel-ai.org

License: Apache-2.0 · Stargazers: 0 · Issues: 0

ChatGPT-Next-Web

A well-designed cross-platform ChatGPT UI (Web / PWA / Linux / Windows / macOS). One-click setup of your own cross-platform ChatGPT app.

License: MIT · Stargazers: 0 · Issues: 0

COMET

A Neural Framework for MT Evaluation

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

DPR

Dense Passage Retriever (DPR) is a set of tools and models for open-domain Q&A tasks.

License: NOASSERTION · Stargazers: 0 · Issues: 0

easy-rl

A Chinese-language reinforcement learning tutorial (the "Mushroom Book"); read online at https://datawhalechina.github.io/easy-rl/

License: NOASSERTION · Stargazers: 0 · Issues: 0

GPT-4-LLM

Instruction Tuning with GPT-4

License: Apache-2.0 · Stargazers: 0 · Issues: 0

joeynmt

Minimalist NMT for educational purposes

License: Apache-2.0 · Stargazers: 0 · Issues: 0

llama2.c

Inference Llama 2 in one file of pure C

License: MIT · Stargazers: 0 · Issues: 0

MEGABYTE-pytorch

Implementation of MEGABYTE (Predicting Million-byte Sequences with Multiscale Transformers) in PyTorch

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

MetaGPT

🌟 The multi-agent framework: given a one-line requirement, it returns a PRD, design, tasks, and a repo.

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

mt-bigscience

Evaluation results for Machine Translation within the BigScience project

Language: Jupyter Notebook · Stargazers: 0 · Issues: 0

neural-compressor

Intel® Neural Compressor (formerly the Intel® Low Precision Optimization Tool) provides unified APIs for network-compression techniques — such as low-precision quantization, sparsity, pruning, and knowledge distillation — across different deep learning frameworks, targeting optimal inference performance.

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

NLP-progress

Repository to track the progress in Natural Language Processing (NLP), including the datasets and the current state-of-the-art for the most common NLP tasks.

License: MIT · Stargazers: 0 · Issues: 0

Pareto-Mutual-Distillation

Implementation of Pareto-Mutual-Distillation (paper: Towards Higher Pareto Frontier in Multilingual Machine Translation)

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

peft

🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

prize

A prize for finding tasks that cause large language models to show inverse scaling

License: CC-BY-4.0 · Stargazers: 0 · Issues: 0

Prompt-Engineering-Guide

🐙 Guides, papers, lectures, notebooks, and resources for prompt engineering

License: MIT · Stargazers: 0 · Issues: 0

ReAct

[ICLR 2023] ReAct: Synergizing Reasoning and Acting in Language Models

License: MIT · Stargazers: 0 · Issues: 0

ReAgent

A platform for reasoning systems (reinforcement learning, contextual bandits, etc.)

Language: Python · License: BSD-3-Clause · Stargazers: 0 · Issues: 0

Rememberer

Rememberer & RLEM

License: Apache-2.0 · Stargazers: 0 · Issues: 0

SCM4LLMs

Self-Controlled Memory System for LLMs

License: MIT · Stargazers: 0 · Issues: 0

trlx

A repo for distributed training of language models with Reinforcement Learning from Human Feedback (RLHF)

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

unilm

Large-scale self-supervised pre-training across tasks, languages, and modalities

License: MIT · Stargazers: 0 · Issues: 0