Jianing Wang's starred repositories
Langchain-Chatchat
Langchain-Chatchat (formerly Langchain-ChatGLM): a local knowledge-base question-answering application built on Langchain and LLMs such as ChatGLM
flash-attention
Fast and memory-efficient exact attention
llama-recipes
Scripts for fine-tuning Llama2 with composable FSDP & PEFT methods, covering single- and multi-node GPU setups. Supports default and custom datasets for applications such as summarization and question answering, and a number of candidate inference solutions such as HF TGI and vLLM for local or cloud deployment. Includes demo apps showcasing Llama2 for WhatsApp & Messenger.
WukongCRM-11.0-JAVA
WukongCRM: a CRM system with a decoupled front end and back end, built on the Spring Cloud Alibaba microservice architecture with Vue and ElementUI
direct-preference-optimization
Reference implementation for DPO (Direct Preference Optimization)
Wukong_Accounting
Wukong Financial Management System (Wukong FS): voucher management, ledger management, balance sheet, cash flow statement, income statement, and more. Ushering in a new era of digital and intelligent finance.
llm-hallucination-survey
Reading list on hallucination in LLMs. Check out our new survey paper: "Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models"
Awesome-LLM-for-RecSys
Survey: a collection of awesome papers and resources on recommender system topics related to large language models (LLMs).
Awesome-Language-Model-on-Graphs
A curated list of papers and resources based on "Large Language Models on Graphs: A Comprehensive Survey".
GraphWriter
Code for "Text Generation from Knowledge Graphs with Graph Transformers"
Finetune-ChatGLM2-6B
Full-parameter fine-tuning of ChatGLM2-6B, with support for efficient multi-turn dialogue fine-tuning.
Table-Fact-Checking
Data and Code for ICLR2020 Paper "TabFact: A Large-scale Dataset for Table-based Fact Verification"
InstructUIE
Universal information extraction with instruction learning
InstructGLM
Language is All a Graph Needs
IGB-Datasets
The largest real-world open-source graph dataset. Work done under the IBM-Illinois Discovery Accelerator Institute and Amazon Research Awards, in collaboration with NVIDIA Research.
LLMs-as-Zero-Shot-Conversational-RecSys
Evaluation data, LLM query code, and results for "Large Language Models as Zero-Shot Conversational Recommenders" at CIKM 2023.
ChatGLM2-Tuning
Fine-tuning of ChatGLM2-6B, including full-parameter, parameter-efficient, and quantization-aware training; supports instruction fine-tuning, multi-turn dialogue fine-tuning, and more.