JackeyLove1's starred repositories
MediaCrawler
Crawlers for Xiaohongshu notes and comments, Douyin videos and comments, Kuaishou videos and comments, Bilibili videos and comments, and Weibo posts and comments
MoneyPrinterTurbo
Generate high-definition short videos with one click using AI large language models.
gpt-researcher
GPT-based autonomous agent that conducts comprehensive online research on any given topic
node_exporter
Exporter for machine metrics
FreeAskInternet
FreeAskInternet is a completely free, private, locally running search aggregator and answer generator that uses multiple LLMs, with no GPU required. The user asks a question, and the system performs a multi-engine search, passes the combined results to an LLM, and generates an answer grounded in those search results. It is entirely free to use.
Douyin_TikTok_Download_API
🚀 Douyin_TikTok_Download_API is an out-of-the-box, high-performance asynchronous data-crawling tool for Douyin, Kuaishou, TikTok, and Bilibili. It supports API calls, online batch parsing, and downloads.
TensorRT-LLM
TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.
bob-plugin-openai-translator
A Bob plugin for text translation, text polishing, and grammar correction based on the ChatGPT API. Let's welcome a new era that needs no Tower of Babel! Licensed under CC BY-NC-SA 4.0.
nginx-tutorial
A minimalist Nginx tutorial intended to help beginners get started with Nginx quickly.
tree-of-thought-llm
[NeurIPS 2023] Tree of Thoughts: Deliberate Problem Solving with Large Language Models
awesome-typescript
A collection of awesome TypeScript resources for client-side and server-side development
page-assist
Use your locally running AI models to assist you in your web browsing
nvidia_gpu_exporter
NVIDIA GPU exporter for Prometheus using the nvidia-smi binary
write-you-a-vector-db
A Vector Database Tutorial (over CMU-DB's BusTub system)
Streamline-Analyst
An AI agent powered by LLMs that streamlines the entire process of data analysis. 🚀
comfy-server
A ComfyUI server that makes calling the ComfyUI API as easy as sending a message
React-TypeScript
A React + TypeScript practice project; hands-on practice is the best way to uncover problems. Keep going!
gpu-monitoring-docker-compose
Docker Compose file to set up NVIDIA GPU monitoring on a single server