littlehacker26

Company: Beijing Institute of Technology

Location: Beijing


littlehacker26's repositories

Discriminator-Cooperative-Unlikelihood-Prompt-Tuning

The code implementation of the EMNLP 2022 paper "DisCup: Discriminator Cooperative Unlikelihood Prompt-tuning for Controllable Text Generation".

2018-NanJing-AI-Application-Competition

A description of the solution approach for the competition problems, together with the project source code.

Residual_Memory_Transformer

This repository contains code, data, checkpoints, and training and evaluation instructions for the paper: Controllable Text Generation with Residual Memory Transformer

PaperList

Notes on papers I have read.

ACL2021MF

Source Code For ACL 2021 Paper "Mention Flags (MF): Constraining Transformer-based Text Generators"

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

adavae

VAE with adaptive parameter-efficient GPT-2s for language modeling

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

awesome-phd-advice

Collection of advice for prospective and current PhD students

License: MIT · Stargazers: 0 · Issues: 0

baichuan-7B

A large-scale 7B pretrained language model developed by BaiChuan-Inc.

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

BIThesis

📖 An unofficial collection of LaTeX templates for Beijing Institute of Technology, including undergraduate and graduate thesis templates and more. 🎉 (For more documentation, see the wiki and the manuals in the releases.)

License: LPPL-1.3c · Stargazers: 0 · Issues: 0

COCON_ICLR2021

PyTorch implementation of CoCon: A Self-Supervised Approach for Controlled Text Generation.

License: Apache-2.0 · Stargazers: 0 · Issues: 0

CommonGen

A Constrained Text Generation Challenge Towards Generative Commonsense Reasoning

License: MIT · Stargazers: 0 · Issues: 0

DExperts

Code associated with the ACL 2021 DExperts paper.

Stargazers: 0 · Issues: 0

diasenti

Conversational Multimodal Emotion Recognition

Language: Python · Stargazers: 0 · Issues: 0

HuatuoGPT

HuatuoGPT, Towards Taming Language Models To Be a Doctor. (An Open Medical GPT)

License: Apache-2.0 · Stargazers: 0 · Issues: 0

LLaMA-Factory

Efficiently Fine-Tune 100+ LLMs in WebUI (ACL 2024)

License: Apache-2.0 · Stargazers: 0 · Issues: 0

Mengzi

Mengzi Pretrained Models

License: Apache-2.0 · Stargazers: 0 · Issues: 0

OpenRLHF

An Easy-to-use, Scalable and High-performance RLHF Framework (70B+ PPO Full Tuning & Iterative DPO & LoRA & Mixtral)

License: Apache-2.0 · Stargazers: 0 · Issues: 0

P-tuning

A novel method for tuning language models. Code and datasets for the paper "GPT Understands, Too".

License: MIT · Stargazers: 0 · Issues: 0
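
The entry above names continuous prompt tuning. The sketch below is a minimal illustration of the core idea only, assuming a frozen GPT-2 backbone from the transformers library: a handful of prepended "virtual token" embeddings are the only trainable parameters. It is not the repository's implementation (P-tuning additionally uses a prompt encoder), and the hyperparameters are illustrative.

```python
import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
for p in model.parameters():
    p.requires_grad = False  # freeze the backbone; only the prompt embeddings are trained

n_virtual = 10
prompt_embeds = nn.Parameter(torch.randn(n_virtual, model.config.n_embd) * 0.02)
optimizer = torch.optim.Adam([prompt_embeds], lr=1e-3)

def prompt_tuning_loss(text: str) -> torch.Tensor:
    ids = tokenizer.encode(text, return_tensors="pt")        # (1, T)
    tok_embeds = model.transformer.wte(ids)                   # (1, T, d)
    # Prepend the trainable virtual tokens to the real token embeddings.
    inputs = torch.cat([prompt_embeds.unsqueeze(0), tok_embeds], dim=1)
    # Ignore the loss on the virtual-token positions (-100 is the ignore index).
    labels = torch.cat(
        [torch.full((1, n_virtual), -100, dtype=torch.long), ids], dim=1
    )
    return model(inputs_embeds=inputs, labels=labels).loss

loss = prompt_tuning_loss("GPT understands, too.")
loss.backward()
optimizer.step()
```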

PLMpapers

Must-read papers on pre-trained language models.

License: MIT · Stargazers: 0 · Issues: 0

PPLM

Plug and Play Language Model (PPLM) implementation. Allows steering the topic and attributes of GPT-2 models.

License: Apache-2.0 · Stargazers: 0 · Issues: 0
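
PPLM itself steers generation by perturbing the model's hidden activations with gradients from an attribute model. As a much simpler stand-in that only conveys the general idea of attribute-controlled decoding (and is not the repository's method), the sketch below biases the logits of a topic bag-of-words during greedy decoding with a GPT-2 from the transformers library; the topic words and bias value are illustrative.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# A toy "topic" defined as a bag of words (illustrative, not PPLM's attribute model).
topic_words = ["science", "physics", "experiment", "theory"]
topic_ids = {tid for w in topic_words for tid in tokenizer.encode(" " + w)}

def steered_generate(prompt: str, steps: int = 30, bias: float = 4.0) -> str:
    ids = tokenizer.encode(prompt, return_tensors="pt")
    for _ in range(steps):
        with torch.no_grad():
            logits = model(ids).logits[0, -1]          # next-token logits
        logits[list(topic_ids)] += bias                # push probability mass toward the topic
        next_id = torch.argmax(logits).view(1, 1)      # greedy pick for simplicity
        ids = torch.cat([ids, next_id], dim=1)
    return tokenizer.decode(ids[0])

print(steered_generate("The issue is that"))
```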

Progressive-Hint

This is the official implementation of "Progressive-Hint Prompting Improves Reasoning in Large Language Models"

Stargazers: 0 · Issues: 0
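
As the paper's title suggests, the idea is to feed the model's previous answers back into the prompt as hints and re-ask until the answer stabilizes. The loop below is a rough sketch of that pattern, not the repository's code: `ask_llm` is a placeholder for any LLM call, and the exact hint wording and stopping rule used in the paper may differ.

```python
def ask_llm(prompt: str) -> str:
    # Placeholder: plug in any LLM API or local model call here.
    raise NotImplementedError

def progressive_hint(question: str, max_rounds: int = 4) -> str:
    hints = []
    answer = ask_llm(question)
    for _ in range(max_rounds):
        hints.append(answer)
        hinted = f"{question}\n(Hint: the answer is near to {', '.join(hints)}.)"
        new_answer = ask_llm(hinted)
        if new_answer == answer:  # two consecutive identical answers -> stop
            return new_answer
        answer = new_answer
    return answer
```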

Residual-EBM

Code for Residual Energy-Based Models for Text Generation in PyTorch.

License: MIT · Stargazers: 0 · Issues: 0

self-refine

LLMs can generate feedback on their work, use it to improve the output, and repeat this process iteratively.

License: Apache-2.0 · Stargazers: 0 · Issues: 0
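
A minimal sketch of the generate-feedback-refine loop described above. `generate` is a placeholder for any LLM call (not the repository's API), and the prompts and stopping criterion are illustrative.

```python
def generate(prompt: str) -> str:
    # Placeholder: plug in any LLM API or local model call here.
    raise NotImplementedError

def self_refine(task: str, max_iters: int = 3) -> str:
    draft = generate(f"Solve the following task:\n{task}")
    for _ in range(max_iters):
        feedback = generate(
            f"Task:\n{task}\n\nDraft answer:\n{draft}\n\n"
            "Point out concrete problems with the draft, or reply 'LGTM' if it is good."
        )
        if "LGTM" in feedback:  # stop once the model finds nothing to fix
            break
        draft = generate(
            f"Task:\n{task}\n\nDraft answer:\n{draft}\n\n"
            f"Feedback:\n{feedback}\n\nRewrite the answer, fixing the issues above."
        )
    return draft
```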

transformers

🤗 Transformers: State-of-the-art Natural Language Processing for PyTorch and TensorFlow 2.0.

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0
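
For reference, a minimal usage example of the library's standard `pipeline` API (the model choice here is illustrative):

```python
from transformers import pipeline

# Load a small text-generation pipeline; any causal LM checkpoint works here.
generator = pipeline("text-generation", model="gpt2")
result = generator("Controllable text generation is", max_new_tokens=20)
print(result[0]["generated_text"])
```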