Charlie Cheng-Jie Ji's starred repositories
LLMs-from-scratch
Implementing a ChatGPT-like LLM in PyTorch from scratch, step by step
openapi-generator
OpenAPI Generator allows generation of API client libraries (SDK generation), server stubs, documentation and configuration automatically given an OpenAPI Spec (v2, v3)
MediaCrawler
Crawlers for Xiaohongshu notes and comments, Douyin videos and comments, Kuaishou videos and comments, Bilibili videos and comments, and Weibo posts and comments
helm
Holistic Evaluation of Language Models (HELM), a framework to increase the transparency of language models (https://arxiv.org/abs/2211.09110). This framework is also used to evaluate text-to-image models in Holistic Evaluation of Text-to-Image Models (HEIM) (https://arxiv.org/abs/2311.04287).
llama3-jailbreak
A trivial programmatic Llama 3 jailbreak. Sorry Zuck!
buffer-of-thought-llm
Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models
tree-diffusion
Diffusion on syntax trees for program synthesis
arena-hard-auto
Arena-Hard-Auto: An automatic LLM benchmark.
athina-evals
Python SDK for running evaluations on LLM generated responses
ai-benchmarks
Benchmarking suite for popular AI APIs
experiments
Open sourced predictions, execution logs, trajectories, and results from model inference + evaluation runs on the SWE-bench task.
code-rag-bench
CodeRAG-Bench: Can Retrieval Augment Code Generation?
imgsys-public
imgsys backend
LatestEval
Latest Evaluation Toolkit (LatestEval): assessing language models with the latest, uncontaminated materials.
local_function_calling
A Python implementation that lets you use the gorilla-llm/gorilla-openfunctions-v2 LLM to perform function calling via the OpenAI protocol. It extends the capabilities of the local model by enabling it to generate function arguments and execute functions based on the provided specifications.
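The OpenAI function-calling protocol mentioned above works by sending the model a JSON Schema tool specification and dispatching the tool call it returns. A minimal sketch of that flow (the `get_weather` function, the tool spec, and the hard-coded model reply are all illustrative assumptions, not code from the repository; a real client would obtain the tool call from the local gorilla-openfunctions-v2 server):

```python
import json

# Hypothetical local function the model may call (illustrative, not from the repo).
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

# OpenAI-style tool specification: function name, description, and
# JSON Schema for its parameters.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# A model reply in the OpenAI tool-call format (hard-coded here; a real
# client would receive this from the model's chat-completion response).
tool_call = {"name": "get_weather", "arguments": json.dumps({"city": "Berlin"})}

# Dispatch: look up the named function and execute it with the parsed arguments.
registry = {"get_weather": get_weather}
result = registry[tool_call["name"]](**json.loads(tool_call["arguments"]))
print(result)  # Sunny in Berlin
```

The key design point is that the model never executes anything itself: it only emits a name and a JSON arguments string, and the client is responsible for validating and running the call.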