Wing Tang Wong's repositories
attention_sinks
Extend existing LLMs far beyond their original training length with constant memory usage, without retraining
autogen-ui
Web UI for AutoGen (a framework for multi-agent LLM applications)
ch57x-keyboard-tool
Utility for programming small keyboards based on the CH57x chip
ChatDev
Create customized software from a natural-language idea (through multi-agent collaboration)
ctransformers
Python bindings for Transformer models implemented in C/C++ using the GGML library.
FLIPPER_flipper-application-catalog
Flipper Application Catalog
FLIPPER_Flipper-iOS-App
iOS mobile app to rule the entire Flipper family
FLIPPER_flipperzero-firmware
Flipper Zero firmware source code
FLIPPER_flipperzero-toolchain
Flipper Zero Embedded Toolchain
FLIPPER_flipperzero-ufbt
Compact tool for building and debugging applications for Flipper Zero.
FLIPPER_flipperzero-ufbt-action
Official ufbt Action wrapper for building Flipper Zero applications
FLIPPER_libusb_stm32
Lightweight USB device stack for STM32 microcontrollers
FLIPPER_qFlipper
qFlipper — desktop application for updating Flipper Zero firmware via PC
GodotSteam
An open-source and fully functional Steamworks SDK / API module and plug-in for the Godot Game Engine.
OTHER_3D_goo-engine
Custom build of Blender with extra non-photorealistic rendering (NPR) features.
OTHER_BUS_LPC_LpcAnalyzer
Low Pin Count (LPC) Analyzer for Saleae Logic
OTHER_BUS_LPC_verilog-lpc-module
LPC (Low Pin Count) interface peripheral module in pure Verilog
OTHER_EricLLM
A fast batching API for serving LLMs
OTHER_PY32_MCU-Flash-Tools
Simple ISP flash tools for various microcontrollers
OTHER_PY32_py32f0-template
GNU GCC SDK, template, and examples for the Puya PY32F002A, PY32F003, and PY32F030
OTHER_SD_multidiffusion-upscaler-for-automatic1111
Tiled Diffusion and Tiled VAE optimizations, licensed under CC BY-NC-SA 4.0
OTHER_SD_sd-webui-regional-prompter
Set separate prompts for divided regions of the image
Retrieval-based-Voice-Conversion-WebUI
As little as 10 minutes of voice data can be used to train a good voice conversion (VC) model!
streaming-llm
Efficient Streaming Language Models with Attention Sinks
vllm
A high-throughput and memory-efficient inference and serving engine for LLMs