Junkun Yuan (junkunyuan)

Company: Zhejiang University

Location: Hangzhou, Zhejiang, China

Home Page: https://junkunyuan.github.io/

Junkun Yuan's starred repositories

llama

Inference code for Llama models

Language: Python | License: NOASSERTION | Stargazers: 54420 | Watchers: 514 | Issues: 936

Open-Assistant

OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and can retrieve information dynamically to do so.

Language: Python | License: Apache-2.0 | Stargazers: 36863 | Watchers: 429 | Issues: 1641

evals

Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.

Language: Python | License: NOASSERTION | Stargazers: 14426 | Watchers: 268 | Issues: 203

ChatGLM3

ChatGLM3 series: open bilingual chat LLMs | open-source bilingual dialogue language models

Language: Python | License: Apache-2.0 | Stargazers: 13148 | Watchers: 99 | Issues: 758

Qwen

The official repository of Qwen (通义千问), a chat & pretrained large language model proposed by Alibaba Cloud.

Language: Python | License: Apache-2.0 | Stargazers: 12774 | Watchers: 98 | Issues: 1032

llama-recipes

Scripts for fine-tuning Meta Llama 3 with composable FSDP & PEFT methods, covering single- and multi-node GPU setups. Supports default and custom datasets for applications such as summarization and Q&A, along with a number of inference solutions such as HF TGI and vLLM for local or cloud deployment. Includes demo apps showcasing Meta Llama 3 for WhatsApp & Messenger.

Language: Jupyter Notebook | License: NOASSERTION | Stargazers: 10616 | Watchers: 86 | Issues: 296

PaLM-rlhf-pytorch

Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the PaLM architecture. Basically ChatGPT, but with PaLM.

Language: Python | License: MIT | Stargazers: 7651 | Watchers: 143 | Issues: 46

CogVLM

A state-of-the-art open visual language model | multimodal pretrained model

Language: Python | License: Apache-2.0 | Stargazers: 5693 | Watchers: 66 | Issues: 405

Qwen-VL

The official repository of Qwen-VL (通义千问-VL), a chat & pretrained large vision-language model proposed by Alibaba Cloud.

Language: Python | License: NOASSERTION | Stargazers: 4407 | Watchers: 49 | Issues: 400

open_flamingo

An open-source framework for training large multimodal models.

Language: Python | License: MIT | Stargazers: 3591 | Watchers: 47 | Issues: 173

NExT-GPT

Code and models for NExT-GPT: Any-to-Any Multimodal Large Language Model

Language: Python | License: BSD-3-Clause | Stargazers: 3096 | Watchers: 60 | Issues: 91

Video-LLaVA

Video-LLaVA: Learning United Visual Representation by Alignment Before Projection

Language: Python | License: Apache-2.0 | Stargazers: 2721 | Watchers: 26 | Issues: 172

zjuthesis

Zhejiang University Graduation Thesis LaTeX Template

Language: TeX | License: MIT | Stargazers: 2489 | Watchers: 15 | Issues: 307

AutoCrawler

Multiprocess image web crawler for Google and Naver (Selenium)

Language: Python | License: Apache-2.0 | Stargazers: 1580 | Watchers: 45 | Issues: 45

Qwen-Audio

The official repository of Qwen-Audio (通义千问-Audio), a chat & pretrained large audio-language model proposed by Alibaba Cloud.

Language: Python | License: NOASSERTION | Stargazers: 1281 | Watchers: 25 | Issues: 62

MOSS-RLHF

MOSS-RLHF

Language: Python | License: Apache-2.0 | Stargazers: 1235 | Watchers: 34 | Issues: 51

MiniGPT-5

Official implementation of paper "MiniGPT-5: Interleaved Vision-and-Language Generation via Generative Vokens"

Language: Python | License: Apache-2.0 | Stargazers: 833 | Watchers: 12 | Issues: 40

LSTM-FCN

Codebase for the paper "LSTM Fully Convolutional Networks for Time Series Classification"

TPA-LSTM

Temporal Pattern Attention for Multivariate Time Series Forecasting

LLaVA-Plus-Codebase

LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills

Language: Python | License: Apache-2.0 | Stargazers: 670 | Watchers: 11 | Issues: 24

Woodpecker

✨✨Woodpecker: Hallucination Correction for Multimodal Large Language Models. The first work to correct hallucinations in MLLMs.

refer

Referring Expression Datasets API

Language: Jupyter Notebook | License: Apache-2.0 | Stargazers: 429 | Watchers: 7 | Issues: 19

awesome-multi-modal-reinforcement-learning

A curated list of Multi-Modal Reinforcement Learning resources (continually updated)

GPT-Fathom

GPT-Fathom is an open-source and reproducible LLM evaluation suite, benchmarking 10+ leading open-source and closed-source LLMs as well as OpenAI's earlier models on 20+ curated benchmarks under aligned settings.

Language: Python | License: MIT | Stargazers: 345 | Watchers: 1 | Issues: 6

LLaVA-Interactive-Demo

LLaVA-Interactive-Demo

Language: Python | License: Apache-2.0 | Stargazers: 338 | Watchers: 16 | Issues: 8

LLaVA-RLHF

Aligning LMMs with Factually Augmented RLHF

Language: Python | License: GPL-3.0 | Stargazers: 286 | Watchers: 8 | Issues: 32

Awesome_Multimodel_LLM

Awesome_Multimodel is a curated GitHub repository that provides a comprehensive collection of resources for Multimodal Large Language Models (MLLMs). It covers datasets, tuning techniques, in-context learning, visual reasoning, foundational models, and more. Stay updated with the latest advancements.

PALI3

Implementation of PaLI-3 from the paper "PaLI-3 Vision Language Models: Smaller, Faster, Stronger"

Language: Python | License: MIT | Stargazers: 131 | Watchers: 5 | Issues: 5

MMC

[NAACL 2024] MMC: Advancing Multimodal Chart Understanding with LLM Instruction Tuning

HAP

[NeurIPS 2023] HAP: Structure-Aware Masked Image Modeling for Human-Centric Perception