Fangkai Jiao (SparkJiao)

Company: NTU-NLP & I2R, A*STAR, Singapore

Location: Singapore

Home Page: jiaofangkai.com

Fangkai Jiao's starred repositories

dify

Dify is an open-source LLM app development platform. Dify's intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features and more, letting you quickly go from prototype to production.

Language: TypeScript | License: NOASSERTION | Stargazers: 38760 | Issues: 298 | Issues: 2869

llama-recipes

Scripts for fine-tuning Meta Llama 3 with composable FSDP & PEFT methods, covering single- and multi-node GPU setups. Supports default and custom datasets for applications such as summarization and Q&A, plus a number of candidate inference solutions such as HF TGI and vLLM for local or cloud deployment. Includes demo apps that showcase Meta Llama 3 for WhatsApp & Messenger.

Language: Jupyter Notebook | License: NOASSERTION | Stargazers: 10699 | Issues: 88 | Issues: 296
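As a rough illustration of the PEFT-style fine-tuning these recipes wrap (this sketch is not taken from the repository; the checkpoint id and hyperparameters are assumptions), a LoRA setup with Hugging Face `peft` typically looks like:

```python
# Minimal LoRA fine-tuning setup sketch; checkpoint id and hyperparameters
# are illustrative assumptions, and the training loop itself is omitted.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model_name = "meta-llama/Meta-Llama-3-8B"  # assumed checkpoint id
model = AutoModelForCausalLM.from_pretrained(model_name)

# Wrap the base model with LoRA adapters so only small low-rank matrices
# are trained while the original weights stay frozen.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```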

FlexGen

Running large language models on a single GPU for throughput-oriented scenarios.

Language: Python | License: Apache-2.0 | Stargazers: 9089 | Issues: 109 | Issues: 81

llm-foundry

LLM training code for Databricks foundation models

Language: Python | License: Apache-2.0 | Stargazers: 3884 | Issues: 49 | Issues: 368

Megatron-DeepSpeed

Ongoing research on training transformer language models at scale, including BERT & GPT-2

Language: Python | License: NOASSERTION | Stargazers: 1759 | Issues: 24 | Issues: 171

Emu

Emu Series: Generative Multimodal Models from BAAI

Language: Python | License: Apache-2.0 | Stargazers: 1578 | Issues: 21 | Issues: 85

MOSS-RLHF

MOSS-RLHF

Language: Python | License: Apache-2.0 | Stargazers: 1235 | Issues: 34 | Issues: 51

MeZO

[NeurIPS 2023] MeZO: Fine-Tuning Language Models with Just Forward Passes. https://arxiv.org/abs/2305.17333

Language: Python | License: MIT | Stargazers: 1003 | Issues: 20 | Issues: 33
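For context, a simplified sketch of the zeroth-order (SPSA-style) update that MeZO is built on is shown below. This is not the repository's code; the learning rate and perturbation scale are illustrative. Each step uses two forward passes to estimate a directional derivative and then moves the weights along the same random direction, so no backward pass or activation storage is required.

```python
# Simplified MeZO-style step: estimate a directional derivative from two
# forward passes, then update along the same random direction z.
import torch

@torch.no_grad()
def mezo_step(params, loss_fn, lr=1e-6, eps=1e-3, seed=0):
    def perturb(scale):
        torch.manual_seed(seed)          # the same z sequence is re-sampled each call
        for p in params:
            z = torch.randn_like(p)
            p.add_(scale * eps * z)

    perturb(+1); loss_plus = loss_fn()   # forward pass at theta + eps*z
    perturb(-2); loss_minus = loss_fn()  # forward pass at theta - eps*z
    perturb(+1)                          # restore the original parameters

    grad_proj = (loss_plus - loss_minus) / (2 * eps)

    torch.manual_seed(seed)              # regenerate the same z for the update
    for p in params:
        z = torch.randn_like(p)
        p.add_(-lr * grad_proj * z)
```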

Megatron-LLaMA

Best practice for training LLaMA models in Megatron-LM

Language: Python | License: NOASSERTION | Stargazers: 573 | Issues: 5 | Issues: 59

LongBench

LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding

Language: Python | License: MIT | Stargazers: 550 | Issues: 6 | Issues: 63

VQ-VAE

Minimalist implementation of VQ-VAE in PyTorch

Language: Python | License: BSD-3-Clause | Stargazers: 481 | Issues: 8 | Issues: 15
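The core of any VQ-VAE is the quantization bottleneck: encoder outputs are snapped to their nearest codebook vectors, and a straight-through estimator lets gradients bypass the non-differentiable argmin. A minimal sketch (not code from this repository; the codebook size, dimension, and commitment weight are assumptions):

```python
# Minimal vector-quantization layer with a straight-through estimator.
import torch
import torch.nn as nn

class VectorQuantizer(nn.Module):
    def __init__(self, num_codes=512, code_dim=64):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)

    def forward(self, z_e):                      # z_e: (batch, code_dim)
        # Distance to every codebook entry, then nearest index.
        d = torch.cdist(z_e, self.codebook.weight)
        idx = d.argmin(dim=-1)
        z_q = self.codebook(idx)

        # Codebook loss plus commitment loss (weight 0.25 is an assumption).
        vq_loss = (z_q - z_e.detach()).pow(2).mean() \
                + 0.25 * (z_e - z_q.detach()).pow(2).mean()

        # Straight-through: forward uses z_q, backward flows through z_e.
        z_q = z_e + (z_q - z_e).detach()
        return z_q, vq_loss, idx
```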

BLoRA

Batched LoRAs

MathGLM

Official PyTorch implementation of MathGLM

tree-of-thought-puzzle-solver

The Tree of Thoughts (ToT) framework for solving complex reasoning tasks using LLMs

SearchAnything

A semantic local search engine powered by AI models.

Language: Python | License: MIT | Stargazers: 249 | Issues: 10 | Issues: 6

Program-of-Thoughts

Data and Code for Program of Thoughts (TMLR 2023)

Language: Python | License: MIT | Stargazers: 213 | Issues: 6 | Issues: 9
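The idea, in a toy sketch rather than the repository's code: instead of writing a free-form chain of thought, the model emits a short Python program whose execution yields the answer. Here `generate_program` stands in for an LLM call and the `ans` variable convention is an assumption.

```python
# Toy Program-of-Thoughts loop: the LLM writes a program, we execute it.
def run_program_of_thoughts(question: str, generate_program) -> float:
    program = generate_program(question)     # e.g. "ans = (120 - 45) / 3"
    scope: dict = {}
    exec(program, {}, scope)                 # sandboxing is omitted in this sketch
    return scope["ans"]                      # convention: answer stored in `ans`
```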

RAP

Reasoning with Language Model is Planning with World Model

Language: PDDL | License: MIT | Stargazers: 127 | Issues: 3 | Issues: 8

Awesome-LegalAI-Resources

This repository aims to collect all LegalAI data to facilitate the development of intelligent justice systems

llama-ssp

Experiments on speculative sampling with Llama models
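For reference, a simplified version of the speculative sampling loop such experiments rely on is sketched below. It is not the repository's code: `draft_probs` and `target_probs` are assumed helpers that return next-token distributions for a prefix, and the sketch omits the bonus token sampled when every draft token is accepted.

```python
# Simplified speculative sampling: a small draft model proposes k tokens,
# the large target model verifies them and resamples on the first rejection.
import torch

def speculative_step(prefix, draft_probs, target_probs, k=4):
    # 1) Draft model proposes k tokens autoregressively.
    proposed, q = [], []
    ctx = list(prefix)
    for _ in range(k):
        qi = draft_probs(ctx)                    # distribution over the vocabulary
        tok = torch.multinomial(qi, 1).item()
        proposed.append(tok); q.append(qi); ctx.append(tok)

    # 2) Target model scores the same positions (one batched forward in practice).
    accepted = []
    for i, tok in enumerate(proposed):
        pi = target_probs(list(prefix) + accepted)
        if torch.rand(()) < min(1.0, (pi[tok] / q[i][tok]).item()):
            accepted.append(tok)                 # draft token kept
        else:
            # Rejected: resample from the residual distribution (p - q)+.
            residual = torch.clamp(pi - q[i], min=0)
            residual /= residual.sum()
            accepted.append(torch.multinomial(residual, 1).item())
            break
    return accepted
```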

Glot500

Glot500: Scaling Multilingual Corpora and Language Models to 500 Languages -- ACL 2023

Language: Python | License: NOASSERTION | Stargazers: 96 | Issues: 8 | Issues: 7

reStructured-Pretraining

reStructured Pre-training

gdGPT

Train LLMs (BLOOM, LLaMA, Baichuan2-7B, ChatGLM3-6B) with DeepSpeed pipeline parallelism. Faster than ZeRO/ZeRO++/FSDP.

Language: Python | License: Apache-2.0 | Stargazers: 89 | Issues: 1 | Issues: 8

M3Exam

Data and code for paper "M3Exam: A Multilingual, Multimodal, Multilevel Benchmark for Examining Large Language Models"

Hrrformer

Hrrformer: A Neuro-symbolic Self-attention Model (ICML23)

CoTConsistency

The released data for paper "Measuring and Improving Chain-of-Thought Reasoning in Vision-Language Models".

IDOL

Repo for paper "IDOL: Indicator-oriented Logic Pre-training for Logical Reasoning" accepted to the Findings of ACL 2023

Language: Python | License: Apache-2.0 | Stargazers: 25 | Issues: 2 | Issues: 2

IDOL

Repo for paper "IDOL: Indicator-oriented Logic Pre-training for Logical Reasoning" accepted to the Findings of ACL 2023

Language: Python | License: Apache-2.0 | Stargazers: 1 | Issues: 0 | Issues: 0