gitworkflows's repositories
cover-agent
CodiumAI Cover-Agent: An AI-Powered Tool for Automated Test Generation and Code Coverage Enhancement! 💻🤖🧪🐞
depstubber
Depstubber generates type-correct stubs for Go dependencies, for use in testing
brew
🍺 The missing package manager for macOS (or Linux)
cve-bin-tool
The CVE Binary Tool helps you determine if your system includes known vulnerabilities. You can scan binaries for over 200 common, vulnerable components (openssl, libpng, libxml2, expat and others), or if you know the components used, you can get a list of known vulnerabilities associated with an SBOM or a list of components and versions.
devbox
Instant, easy, and predictable development environments
experts
Experts.js is the easiest way to create and deploy OpenAI's Assistants and link them together as Tools to create advanced Multi AI Agent Systems with expanded memory and attention to detail.
explore
Community-curated topic and collection pages on GitHub
fastapi
FastAPI framework, high performance, easy to learn, fast to code, ready for production
gpt-computer-assistant
GPT-4o for Windows, macOS, and Linux
homebrew-cask
🍻 A CLI workflow for the administration of macOS applications distributed as binaries
issue-labeler
An action for automatically labelling issues
PatrowlHearsData
Open-Source Vulnerability Intelligence Center - Unified source of vulnerability, exploit and threat Intelligence feeds
pr-agent
🚀CodiumAI PR-Agent: An AI-Powered 🤖 Tool for Automated Pull Request Analysis, Feedback, Suggestions and More! 💻🔍
rawsec-cybersecurity-inventory
An inventory of cybersecurity tools and resources that aims to help people find everything related to cybersecurity.
rengine
reNgine is an automated reconnaissance framework for web applications, focused on a highly configurable, streamlined recon process via Engines, recon data correlation and organization, continuous monitoring, database backing, and a simple yet intuitive user interface. reNgine makes it easy for penetration testers to gather reconnaissance.
vllm
A high-throughput and memory-efficient inference and serving engine for LLMs