teocns

Location: Anywhere around the world

teocns's repositories

telescope-frecency.nvim

A telescope.nvim extension that offers intelligent prioritization when selecting files from your editing history.

Language: Lua · License: MIT · Stargazers: 2 · Issues: 0

aider

aider is AI pair programming in your terminal

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0
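
A rough sketch of driving aider from Python via its scripting interface rather than the terminal (not taken from this fork's docs); the file name, model name, and prompt are placeholders, and it assumes `pip install aider-chat` plus an API key in the environment:

```python
# Hedged sketch: scripting aider instead of using the interactive CLI.
# Assumes `pip install aider-chat` and OPENAI_API_KEY set; app.py and the
# prompt below are illustrative placeholders.
from aider.coders import Coder
from aider.models import Model

model = Model("gpt-4o")                                # any model aider supports
coder = Coder.create(main_model=model, fnames=["app.py"])

# Ask for a change; aider edits app.py and, by default, commits the result.
coder.run("add a --verbose flag to the CLI")
```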

puppetask

Execute containerized Chromium Puppeteer tasks

Language: Dockerfile · Stargazers: 0 · Issues: 1 · Issues: 0

aifs

Local semantic search. Stupidly simple.

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0
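
A hedged sketch of what "stupidly simple" looks like in practice, assuming the package is installed with `pip install aifs`; the query and the `docs/` path are illustrative, and the shape of the return value is an assumption based on the upstream README:

```python
# Hedged sketch of aifs usage (local semantic search over a directory).
# Assumes `pip install aifs`; the query and path are placeholders.
from aifs import search

# Semantic search over the files under docs/; results is assumed to be a
# list of the most relevant text chunks.
results = search("how do we rotate the API keys?", path="docs/")
for chunk in results:
    print(chunk)
```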

AutoGPT

AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools so that you can focus on what matters.

Language: JavaScript · License: MIT · Stargazers: 0 · Issues: 0

charts-bitnami

Bitnami Helm Charts

License: NOASSERTION · Stargazers: 0 · Issues: 0

charts-ot

A repository that will contain Helm charts following best practices and security practices.

Stargazers: 0 · Issues: 0

chrome-power-app

The first open-source fingerprint browser.

License: AGPL-3.0 · Stargazers: 0 · Issues: 0

danswer

Ask Questions in natural language and get Answers backed by private sources. Connects to tools like Slack, GitHub, Confluence, etc.

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

devika

Devika is an Agentic AI Software Engineer that can understand high-level human instructions, break them down into steps, research relevant information, and write code to achieve the given objective. Devika aims to be a competitive open-source alternative to Devin by Cognition AI.

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

kubeflow-pipelines

Machine Learning Pipelines for Kubeflow

License: Apache-2.0 · Stargazers: 0 · Issues: 0

kuberay

A toolkit to run Ray applications on Kubernetes

License: Apache-2.0 · Stargazers: 0 · Issues: 0

llm-on-ray

Pretrain, finetune and serve LLMs on Intel platforms with Ray

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

MemGPT

Teaching LLMs memory management for unbounded context 📚🦙

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

ml-metadata

For recording and retrieving metadata associated with ML developer and data scientist workflows.

License: Apache-2.0 · Stargazers: 0 · Issues: 0

neovim-ayu

Ayu theme for Neovim.

Language: Lua · License: GPL-3.0 · Stargazers: 0 · Issues: 0

open-interpreter

A natural language interface for computers

Language: Python · License: AGPL-3.0 · Stargazers: 0 · Issues: 0
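
A minimal sketch of using open-interpreter from Python rather than the `interpreter` CLI, assuming `pip install open-interpreter` and a configured API key; the prompt is illustrative:

```python
# Hedged sketch: open-interpreter's Python entry point.
# Assumes `pip install open-interpreter` and an API key in the environment.
from interpreter import interpreter

# Keep the default behaviour of asking for confirmation before any
# generated code is executed locally.
interpreter.auto_run = False

interpreter.chat("Plot the sizes of the files in the current directory")
```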

OpenDevin

🐚 OpenDevin: Code Less, Make More

Language: Jupyter Notebook · License: MIT · Stargazers: 0 · Issues: 0

plandex

An AI coding engine for complex tasks

Language: Go · License: AGPL-3.0 · Stargazers: 0 · Issues: 0

ray

Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.

License: Apache-2.0 · Stargazers: 0 · Issues: 0
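
A minimal sketch of the core Ray pattern the framework is built around (remote tasks returning futures), assuming `pip install ray`; the function and values are illustrative:

```python
# Hedged minimal Ray sketch: fan a function out across local workers.
# Assumes `pip install ray`.
import ray

ray.init()  # starts a local cluster when no address is given

@ray.remote
def square(x: int) -> int:
    return x * x

# .remote() returns object refs (futures) immediately; ray.get() blocks
# until the results are ready.
futures = [square.remote(i) for i in range(8)]
print(ray.get(futures))  # [0, 1, 4, 9, 16, 25, 36, 49]
```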

redis

A production-optimized Redis Docker image

Stargazers: 0 · Issues: 0

redis-operator

A Golang-based Redis operator that creates and oversees Redis standalone/cluster/replication/sentinel setups on top of Kubernetes.

Language: Go · License: Apache-2.0 · Stargazers: 0 · Issues: 0

streamlit-agent

Reference implementations of several LangChain agents as Streamlit apps

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

streamlit-cheat-sheet

A cheat sheet for Streamlit

Language: Python · License: MIT · Stargazers: 0 · Issues: 0
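
A minimal sketch of the Streamlit basics such a cheat sheet covers, assuming `pip install streamlit` and launching with `streamlit run app.py`; the widget labels are illustrative:

```python
# Hedged sketch of a minimal Streamlit app (save as app.py and run with
# `streamlit run app.py`). Assumes `pip install streamlit`.
import streamlit as st

st.title("Demo")
name = st.text_input("Your name")

# st.button returns True on the rerun triggered by the click.
if st.button("Greet"):
    st.write(f"Hello, {name}!")
```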

super-rag

Super-performant RAG pipeline for AI Agents.

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

SWE-agent

SWE-agent takes a GitHub issue and tries to automatically fix it, using GPT-4, or your LM of choice. It solves 12.29% of bugs in the SWE-bench evaluation set and takes just 1.5 minutes to run.

License: MIT · Stargazers: 0 · Issues: 0

upstreaming-to-vllm

A high-throughput and memory-efficient inference and serving engine for LLMs

License: Apache-2.0 · Stargazers: 0 · Issues: 0
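
A minimal sketch of the vLLM offline-inference API that this fork builds on, assuming `pip install vllm` and a GPU; the model name and prompt are illustrative placeholders:

```python
# Hedged sketch of vLLM offline inference.
# Assumes `pip install vllm` and an available GPU; model and prompt are
# placeholders.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, max_tokens=64)

outputs = llm.generate(["The capital of France is"], params)
for out in outputs:
    print(out.outputs[0].text)
```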