ShayneC's repositories
AndroidAOP
🔥🔥🔥 AndroidAOP is an AOP framework built specifically for Android: a single annotation can request permissions, switch threads, prevent rapid repeated clicks, monitor all click events at once, observe lifecycles, and more. It does not use AspectJ, and you can also build your own custom AOP code with it.
autoprompt
AutoPrompt: Automatic Prompt Construction for Masked Language Models.
Awesome-Chinese-LLM
A curated collection of open-source Chinese large language models, focusing on smaller models that can be privately deployed and trained at lower cost, covering base models, vertical-domain fine-tunes and applications, datasets, tutorials, and more.
btrace
🔥🔥 btrace (AKA RheaTrace) is a high-performance Android trace tool based on Perfetto. It supports automatically defining custom events while building the APK, and uses bhook to provide more native events such as Render, Binder, and IO.
ChatGLM-Finetuning
Fine-tuning of ChatGLM-6B and ChatGLM2-6B for specific downstream tasks, covering Freeze, LoRA, P-tuning, full-parameter fine-tuning, and more.
Chinese-LLaMA-Alpaca
Chinese LLaMA & Alpaca large language models, with local CPU/GPU training and deployment.
Chinese-LLaMA-Alpaca-2
Phase two of the Chinese LLaMA-2 & Alpaca-2 large model project, including 16K long-context models.
CLIP
CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image.
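As a quick illustration of that tagline, here is a minimal zero-shot sketch using the openai/CLIP package; it assumes torch and Pillow are installed, and "photo.png" is a hypothetical local image path:

```python
# Minimal zero-shot classification sketch with the openai/CLIP package.
# "photo.png" is a hypothetical local image path.
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

image = preprocess(Image.open("photo.png")).unsqueeze(0).to(device)
text = clip.tokenize(["a diagram", "a dog", "a cat"]).to(device)

with torch.no_grad():
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1)

print(probs)  # relevance of each text snippet to the image
```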
clippinator
AI programming assistant
excalidraw
Virtual whiteboard for sketching hand-drawn like diagrams
GLM-130B
GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023)
LangChain-Chinese-Getting-Started-Guide
A Chinese-language getting-started tutorial for LangChain.
LLaMA-Factory
Easy-to-use LLM fine-tuning framework (LLaMA, BLOOM, Mistral, Baichuan, Qwen, ChatGLM)
LLaVA
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
mitmproxy
An interactive TLS-capable intercepting HTTP proxy for penetration testers and software developers.
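For a flavor of how mitmproxy is scripted, here is a minimal addon sketch that logs and tags each intercepted request; the "x-intercepted" header name is just an illustrative choice:

```python
# Save as addon.py and run: mitmproxy -s addon.py
from mitmproxy import http


def request(flow: http.HTTPFlow) -> None:
    # Runs for each client request passing through the proxy.
    print(f"{flow.request.method} {flow.request.pretty_url}")
    flow.request.headers["x-intercepted"] = "true"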
MobileAgent
Mobile-Agent: Autonomous Multi-Modal Mobile Device Agent with Visual Perception
modelscope-agent
ModelScope-Agent: An agent framework connecting models in ModelScope with the world
OpenAGI
OpenAGI: When LLM Meets Domain Experts
peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
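A minimal LoRA sketch with PEFT, wrapping a small causal LM so that only the low-rank adapter weights are trainable; the model name and hyperparameters here are illustrative, not recommendations:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")
config = LoraConfig(
    r=8,                        # rank of the update matrices
    lora_alpha=16,              # scaling factor
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only a small fraction of all weights
```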
ragas
Evaluation framework for your Retrieval Augmented Generation (RAG) pipelines
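A sketch of scoring a single RAG sample with ragas, following the 0.1-era API; the interface has shifted across releases, so treat this as illustrative (the default metrics also call an LLM judge and expect an OPENAI_API_KEY):

```python
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness, answer_relevancy

# A toy one-row evaluation set: question, generated answer, retrieved contexts.
dataset = Dataset.from_dict({
    "question": ["What is the capital of France?"],
    "answer": ["The capital of France is Paris."],
    "contexts": [["Paris is the capital and largest city of France."]],
})

result = evaluate(dataset, metrics=[faithfulness, answer_relevancy])
print(result)  # dict-like mapping of metric name to score
```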
RestGPT
An LLM-based autonomous agent controlling real-world applications via RESTful APIs
screen_annotation
The Screen Annotation dataset consists of pairs of mobile screenshots and their annotations. The annotations are in text format and describe the UI elements present on the screen: their type, location, OCR text, and a short description. It was introduced in the paper "ScreenAI: A Vision-Language Model for UI and Infographics Understanding".
screen_qa
The ScreenQA dataset was introduced in the paper "ScreenQA: Large-Scale Question-Answer Pairs over Mobile App Screenshots". It contains ~86K question-answer pairs collected by human annotators for ~35K screenshots from Rico, and should be used to train and evaluate models capable of screen content understanding via question answering.
Sliver
ByteDance's Sliver implementation for collecting Java function call stacks.
sonic-server
🎉 Back end of the Sonic cloud real-device platform.
Test-Agent
China's first large-model tool for the software-testing industry; experience the transformation AIGC brings to testing!
testable-mock
A different approach to writing mocks that makes unit testing simpler.
Video-LLaVA
Video-LLaVA: Learning United Visual Representation by Alignment Before Projection
vscode-coverage-gutters
Display test coverage generated by lcov and XML reports; works with many languages.
wireguard-android
Mirror only. Official repository is at https://git.zx2c4.com/wireguard-android