Josh Minor (jishminor)


Company:Arm

Location:Austin, TX



Organizations
smarter-project

Josh Minor's repositories

ee445l-linux

Setup repository for completing the EE445L labs on Linux

Language:C Stargazers:10 Issues:3 Issues:0

atomic

Wrapper types for sync/atomic which enforce atomic access

Language:Go License:MIT Stargazers:0 Issues:0 Issues:0
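The idea behind such a wrapper is worth a sketch: by keeping the underlying value unexported and routing every access through `sync/atomic`, the type makes non-atomic reads and writes impossible at compile time. This is a minimal illustration of the pattern, not the repository's actual API — the `Int64` type and `concurrentCount` helper are hypothetical:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// Int64 wraps an int64 so it can only be touched via atomic operations.
// The field is unexported, so callers cannot bypass the methods.
type Int64 struct{ v int64 }

func (a *Int64) Load() int64       { return atomic.LoadInt64(&a.v) }
func (a *Int64) Store(n int64)     { atomic.StoreInt64(&a.v, n) }
func (a *Int64) Add(n int64) int64 { return atomic.AddInt64(&a.v, n) }

// concurrentCount increments a shared counter from many goroutines;
// because every access is atomic, no updates are lost.
func concurrentCount(workers int) int64 {
	var c Int64
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.Add(1)
		}()
	}
	wg.Wait()
	return c.Load()
}

func main() {
	fmt.Println(concurrentCount(100)) // prints 100
}
```

With a plain `int64` field and `c.v++` in the goroutines, the same program could print fewer than 100 due to lost updates; the wrapper rules that out.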

backend

Common source, scripts and utilities for creating Triton backends.

Language:C++ License:BSD-3-Clause Stargazers:0 Issues:0 Issues:0

client

Triton Python and C++ client libraries and examples, plus client examples for Go, Java, and Scala.

Language:C++ License:BSD-3-Clause Stargazers:0 Issues:0 Issues:0

common

Common source, scripts and utilities shared across all Triton repositories.

Language:C++ License:BSD-3-Clause Stargazers:0 Issues:0 Issues:0

cpu

cpu command in Go, inspired by the Plan 9 cpu command

Language:Go License:BSD-3-Clause Stargazers:0 Issues:0 Issues:0

docker-slim

DockerSlim (docker-slim): minify your Docker container image by up to 30x (even more for compiled languages) and make it more secure, without changing anything in the image (free and open source)

Language:Go License:Apache-2.0 Stargazers:0 Issues:1 Issues:0

ee445l_web

All things web for 445l

Language:JavaScript Stargazers:0 Issues:1 Issues:0

fluent-plugin-prometheus

A Fluentd plugin that collects metrics and exposes them for Prometheus.

Language:Ruby License:Apache-2.0 Stargazers:0 Issues:0 Issues:0
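The plugin's basic shape can be sketched as a Fluentd config fragment: a `prometheus` input serves an HTTP endpoint for Prometheus to scrape, and a `prometheus` filter turns record traffic into metrics. This is a hedged illustration based on the plugin's documented directives — the tag `mytag.**`, the metric name, and the port are illustrative, not taken from this repository:

```
# Expose collected metrics over HTTP for Prometheus to scrape
<source>
  @type prometheus
  bind 0.0.0.0
  port 24231
  metrics_path /metrics
</source>

# Count records flowing through the pipeline as a Prometheus counter
<filter mytag.**>
  @type prometheus
  <metric>
    name fluentd_records_total
    type counter
    desc Total number of records processed
  </metric>
</filter>
```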

kubefed

Kubernetes Cluster Federation

Language:Go License:Apache-2.0 Stargazers:0 Issues:0 Issues:0

llama.cpp

Port of Facebook's LLaMA model in C/C++

License:MIT Stargazers:0 Issues:0 Issues:0

model_analyzer

Triton Model Analyzer is a CLI tool that helps you understand the compute and memory requirements of models served by Triton Inference Server.

Language:Python License:Apache-2.0 Stargazers:0 Issues:0 Issues:0

server

The Triton Inference Server provides an optimized cloud and edge inferencing solution.

Language:C++ License:BSD-3-Clause Stargazers:0 Issues:0 Issues:0

Tool-Solutions

Tutorials & examples for Arm software development tools.

License:Apache-2.0 Stargazers:0 Issues:0 Issues:0

tvm

Open deep learning compiler stack for CPUs, GPUs, and specialized accelerators

Language:Python License:Apache-2.0 Stargazers:0 Issues:0 Issues:0