Ayush Goel (goelayu)

Company: Hewlett Packard Labs

Location: Santa Clara, CA

Home Page: goelayu.github.io

Twitter: @goelayu_sh

Ayush Goel's repositories

Jawa

JavaScript-Aware Web Archive Crawler (JAWA) (OSDI '22)

Language: OpenEdge ABL · License: GPL-3.0 · Stargazers: 12 · Issues: 2 · Issues: 1

Sprinter

A web crawler that leverages compute memoization to enable lightning-fast crawling of pages without sacrificing fidelity (see the memoization sketch below)

Language: JavaScript · Stargazers: 2 · Issues: 1 · Issues: 0
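A rough illustration of the compute-memoization idea behind Sprinter: cache the result of a deterministic computation, keyed by its code and inputs, so identical work seen on a later page can be replayed instead of re-executed. This is a minimal Python sketch; the function names and cache layout are hypothetical and are not Sprinter's actual API.

    import hashlib
    import json

    # Hypothetical in-memory cache mapping a fingerprint of (code, inputs) to a stored result.
    _memo_cache = {}

    def fingerprint(code: str, inputs: dict) -> str:
        # Key a computation by its code and inputs (assumes both fully determine the result).
        payload = json.dumps({"code": code, "inputs": inputs}, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

    def run_or_replay(code: str, inputs: dict, execute):
        # Replay a memoized result if the same computation was already performed on an
        # earlier page; otherwise execute it once and record the outcome.
        key = fingerprint(code, inputs)
        if key in _memo_cache:
            return _memo_cache[key]
        result = execute(code, inputs)
        _memo_cache[key] = result
        return result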

javascript-call-graph

Approximate call graph builder for JavaScript

Language: JavaScript · Stargazers: 1 · Issues: 1 · Issues: 0

mahimahi

Web performance measurement toolkit (see the usage sketch below)

Language: C++ · License: GPL-3.0 · Stargazers: 1 · Issues: 0 · Issues: 0
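mahimahi is driven from the command line; one common way to script a measurement is to launch a fetch inside one of its network-emulation shells. A minimal Python sketch, assuming mahimahi and curl are installed and mm-delay is on PATH (mm-delay adds a fixed per-packet delay and runs the given command inside the emulated environment):

    import subprocess

    def fetch_with_delay(url: str, delay_ms: int = 50) -> str:
        # Run curl inside a mahimahi delay shell: mm-delay <delay-ms> <command...>
        cmd = ["mm-delay", str(delay_ms), "curl", "-s", url]
        result = subprocess.run(cmd, capture_output=True, text=True, check=True)
        return result.stdout

    if __name__ == "__main__":
        body = fetch_with_delay("http://example.com", delay_ms=100)
        print(len(body), "bytes fetched under added delay")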

blaze

Automatic push policy generation via reinforcement learning

Language: Python · License: MIT · Stargazers: 0 · Issues: 0 · Issues: 0

closure-compiler

A JavaScript checker and optimizer.

Language: Java · License: Apache-2.0 · Stargazers: 0 · Issues: 0 · Issues: 0

DeepSpeed

DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective (see the initialization sketch below).

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0 · Issues: 0
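A minimal sketch of the usual deepspeed.initialize pattern, assuming a CUDA-capable machine and illustrative (not project-specific) config values; training is normally launched with the deepspeed CLI launcher.

    import torch
    import deepspeed

    # Toy model; any torch.nn.Module is wrapped the same way.
    model = torch.nn.Linear(1024, 1024)

    # Illustrative DeepSpeed config: Adam optimizer, FP16, and ZeRO stage 1.
    ds_config = {
        "train_batch_size": 32,
        "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
        "fp16": {"enabled": True},
        "zero_optimization": {"stage": 1},
    }

    # initialize() returns an engine that handles data parallelism, optimizer
    # sharding, and mixed precision; training then calls engine.backward(loss)
    # and engine.step() instead of loss.backward() and optimizer.step().
    engine, optimizer, _, _ = deepspeed.initialize(
        model=model,
        model_parameters=model.parameters(),
        config=ds_config,
    )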

goelayu.github.io

GitHub Pages template for academic personal websites, forked from mmistakes/minimal-mistakes

Language: CSS · License: MIT · Stargazers: 0 · Issues: 0 · Issues: 0

misc

Miscellaneous Linux scripts

Language: Shell · Stargazers: 0 · Issues: 0 · Issues: 0

DeepSpeed-MII

MII makes low-latency and high-throughput inference possible, powered by DeepSpeed (see the pipeline sketch below).

License: Apache-2.0 · Stargazers: 0 · Issues: 0 · Issues: 0
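A minimal sketch of the pipeline helper that recent MII releases expose for local inference; the model name is a placeholder and a supported GPU is assumed.

    import mii

    # Load a Hugging Face model behind MII's optimized inference pipeline.
    pipe = mii.pipeline("mistralai/Mistral-7B-v0.1")

    # The pipeline accepts a batch of prompts and returns generated completions.
    responses = pipe(["DeepSpeed is", "Seattle is"], max_new_tokens=64)
    for r in responses:
        print(r)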

goelayu.github.bck

GitHub Pages template for academic personal websites, forked from mmistakes/minimal-mistakes

Language: JavaScript · License: MIT · Stargazers: 0 · Issues: 0 · Issues: 0

TAJS

Type Analyzer for JavaScript

Language: Java · License: Apache-2.0 · Stargazers: 0 · Issues: 0 · Issues: 0

tracelib

A Node.js library that provides Chrome DevTools trace models to parse arbitrary trace logs.

License: Apache-2.0 · Stargazers: 0 · Issues: 0 · Issues: 0

vllm

A high-throughput and memory-efficient inference and serving engine for LLMs (see the usage sketch below)

License: Apache-2.0 · Stargazers: 0 · Issues: 0 · Issues: 0
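A minimal sketch of vLLM's offline inference API; the model name is a placeholder and any supported Hugging Face causal LM can be substituted.

    from vllm import LLM, SamplingParams

    # Load the model once; vLLM manages KV-cache memory with paged attention.
    llm = LLM(model="facebook/opt-125m")
    params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

    # generate() batches prompts and returns one RequestOutput per prompt.
    outputs = llm.generate(["Web crawling at scale requires"], params)
    for out in outputs:
        print(out.outputs[0].text)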