Lawrence M Stewart's starred repositories
developer-roadmap
Interactive roadmaps, guides and other educational content to help developers grow in their careers.
system-design-101
Explains complex systems using visuals and simple terms. Helps you prepare for system design interviews.
lobe-chat
🤯 Lobe Chat - an open-source, modern-design AI chat framework. Supports multiple AI providers (OpenAI / Claude 3 / Gemini / Ollama / Azure / DeepSeek), knowledge base (file upload / knowledge management / RAG), multimodality (Vision/TTS), and a plugin system. One-click FREE deployment of your private ChatGPT / Claude application.
Scrapegraph-ai
Python scraper based on AI
incubator-answer
A Q&A platform software for teams at any scale. Whether it's a community forum, help center, or knowledge management platform, you can always count on Apache Answer.
ml-engineering
Machine Learning Engineering Open Book
open_llama
OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA 7B trained on the RedPajama dataset
h2o-llmstudio
H2O LLM Studio - a framework and no-code GUI for fine-tuning LLMs. Documentation: https://docs.h2o.ai/h2o-llmstudio/
llm-attacks
Universal and Transferable Attacks on Aligned Language Models
llama2.mojo
Inference Llama 2 in one file of pure 🔥
AutoPrompt
A framework for prompt tuning using Intent-based Prompt Calibration
lm-format-enforcer
Enforce the output format (JSON Schema, Regex, etc.) of a language model
hallucination-leaderboard
Leaderboard Comparing LLM Performance at Producing Hallucinations when Summarizing Short Documents
willow-inference-server
Open source, local, and self-hosted highly optimized language inference server supporting ASR/STT, TTS, and LLM across WebRTC, REST, and WS
news-crawl
News crawling with StormCrawler - stores content as WARC
arctic_shift
Making Reddit data accessible to researchers, moderators and everyone else. Interact with the data through large dumps, an API or web interface.
CoT-Collection
[EMNLP 2023] The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning