Vikesh Tiwari's starred repositories
screenshot-to-code
Drop in a screenshot and convert it to clean code (HTML/Tailwind/React/Vue)
Stirling-PDF
Locally hosted web application that allows you to perform various operations on PDF files
open-webui
User-friendly WebUI for LLMs (Formerly Ollama WebUI)
full-stack-fastapi-template
Full-stack, modern web application template using FastAPI, React, SQLModel, PostgreSQL, Docker, GitHub Actions, automatic HTTPS, and more.
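For a sense of the kind of app this template scaffolds, here is a minimal sketch of a FastAPI endpoint backed by SQLModel. The Hero model and the SQLite URL are illustrative placeholders, not taken from the template itself (the real template wires up PostgreSQL).

```python
from fastapi import FastAPI
from sqlmodel import Field, Session, SQLModel, create_engine, select

# Illustrative model; the template generates its own models and migrations.
class Hero(SQLModel, table=True):
    id: int | None = Field(default=None, primary_key=True)
    name: str

# Placeholder database URL; the template uses PostgreSQL instead.
engine = create_engine("sqlite:///demo.db")
SQLModel.metadata.create_all(engine)

app = FastAPI()

@app.get("/heroes")
def list_heroes() -> list[Hero]:
    # Open a short-lived session per request and return all rows.
    with Session(engine) as session:
        return session.exec(select(Hero)).all()
```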
nginx-proxy-manager
Docker container for managing Nginx proxy hosts with a simple, powerful interface
system-design
A resource to help you become good at system design.
llama_parse
Parse files for optimal RAG
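A minimal sketch of calling the llama-parse Python client to turn a document into markdown for a RAG pipeline, assuming the package's documented LlamaParse/load_data interface; the API key and file path below are placeholders.

```python
from llama_parse import LlamaParse  # pip install llama-parse

# Placeholder API key and file path; result_type="markdown" requests
# structured markdown output, which tends to chunk well for RAG.
parser = LlamaParse(api_key="llx-...", result_type="markdown")
documents = parser.load_data("./example.pdf")

# Each returned document carries the parsed text.
print(documents[0].text[:500])
```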
statusnook
Effortlessly deploy a status page and start monitoring endpoints in minutes
data-connectors
LLM-ready data connectors
openai-github-copilot
GitHub Copilot ➜ OpenAI API proxy. Serverless!
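Assuming the proxy exposes an OpenAI-compatible /v1 endpoint, any standard OpenAI SDK can be pointed at it by overriding the base URL; the URL and token below are placeholders.

```python
from openai import OpenAI

# Placeholder base URL and token: send requests to the proxy instead of
# api.openai.com, authenticating with a Copilot token.
client = OpenAI(
    base_url="https://copilot-proxy.example.com/v1",
    api_key="ghu_placeholder_copilot_token",
)

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Explain a reverse proxy in one sentence."}],
)
print(resp.choices[0].message.content)
```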
LLM-Load-Unload-Ollama
A simple demonstration of how to keep an LLM loaded in memory for a prolonged time, or unload it immediately after inference, when running it via Ollama. See the sketch below.
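The mechanism comes down to Ollama's keep_alive request parameter: -1 (or a duration string like "24h") keeps the model resident after the call, while 0 unloads it as soon as the response finishes. A minimal sketch against a local Ollama server; the model name is a placeholder.

```python
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"

# keep_alive=-1 keeps the model loaded indefinitely after this call.
requests.post(OLLAMA_URL, json={
    "model": "llama3",          # placeholder model name
    "prompt": "Say hello.",
    "stream": False,
    "keep_alive": -1,
})

# keep_alive=0 unloads the model immediately after inference completes.
requests.post(OLLAMA_URL, json={
    "model": "llama3",
    "prompt": "Say goodbye.",
    "stream": False,
    "keep_alive": 0,
})
```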