biyuehuang's repositories
microphone_chatglm2
Microphone input (PyAudio) > ASR (whisper-medium) > chat (chatglm2-6b)
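The capture > ASR > chat flow above can be sketched as a three-stage pipeline. This is a minimal sketch: every stage body is a placeholder (a real build would capture with PyAudio, transcribe with whisper-medium, and generate with chatglm2-6b), and all function names here are assumptions, not the repo's API.

```python
# Sketch of the microphone -> ASR -> chat pipeline.
# All stage implementations are placeholders; only the stage
# composition mirrors the description above.

def capture_audio() -> bytes:
    # Placeholder for a PyAudio microphone capture loop.
    return b"\x00" * 16000  # stand-in for one second of raw samples

def transcribe(audio: bytes) -> str:
    # Placeholder for whisper-medium speech-to-text.
    return "hello"

def chat(prompt: str) -> str:
    # Placeholder for a chatglm2-6b generate() call.
    return f"echo: {prompt}"

def pipeline() -> str:
    # Compose the three stages in the order the description lists them.
    return chat(transcribe(capture_audio()))

if __name__ == "__main__":
    print(pipeline())
```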
LLM_PC_UI_CPU
Run an LLM on a CPU with 16 GB of memory
MediaSDK_Windows
Test decode/encode performance on Intel Architecture (IA) platforms
OpenVINO_NNCF
Supports static-shape models (e.g., ONNX, TensorFlow, YOLOv5 .pt) and models with two inputs
AI_training_by_iTEX
Train ResNet-50 with Intel® Extension for TensorFlow* (ITEX) on multiple Intel Arc dGPUs
audiollm_Arc_Ubuntu
Run an LLM on a dGPU with audio input and output
audiollm_MTL_Arc_Windows
Run an LLM on an MTL iGPU or Arc dGPU on Windows with audio input and output
benchmark_for_LLM_CPU_Arc
Benchmark LLMs at INT4/FP16 precision on Intel Core, SPR, and Arc.
codeshell_VScode_MTL_win
Run codeshell-7B-Chat on an MTL iGPU through VS Code with 32 GB of memory
digtal_human
Technical solution combining ASR, TTS, an LLM, and lip sync
Enable_Arc_on_Xeon
Enable multiple dGPUs (e.g., two Arc A770 cards) on a Xeon server
LLM_Arc_UI
Run chatglm2-6b, llama2-13b, and starcoder-15.5b on an Arc dGPU
mlc-llm-for-Arc
Run mlc-llm on an Intel Arc dGPU
OpenVINO_benchmark_win_ubuntu
Test all OpenVINO IR (.xml) model files in the current directory
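Testing every IR model in a folder usually means globbing for .xml files and invoking OpenVINO's benchmark_app on each. A minimal sketch of that loop; the function names are assumptions, and it only prints the commands rather than executing them:

```python
import glob
import os

def find_xml_models(folder: str) -> list:
    # Collect every OpenVINO IR (.xml) file in the folder, sorted
    # so benchmarks run in a reproducible order.
    return sorted(glob.glob(os.path.join(folder, "*.xml")))

def benchmark_commands(folder: str, device: str = "CPU") -> list:
    # Build one benchmark_app invocation per model; printing the
    # commands instead of running them keeps this sketch side-effect free.
    return ["benchmark_app -m %s -d %s" % (m, device)
            for m in find_xml_models(folder)]

if __name__ == "__main__":
    for cmd in benchmark_commands("."):
        print(cmd)
```

To actually run each command, pipe the list through `subprocess.run` on a machine with OpenVINO installed.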
startAI
Hello AI
windows_exe_autotest
Automated testing of Windows .exe applications, covering buttons, sliders, text/number fields, checkboxes, and pages.
YOLOX-DLStreamer
Run YOLOX model inference with DLStreamer, displaying results or writing them to a JSON file
LLM_family
Including ollama, vllm, xft, ipex-llm, and tensorrt-llm
SD_OV_pipeline
Stable Diffusion pipeline with OpenVINO
stream_on_Linux
STREAM memory bandwidth benchmark on Linux
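A STREAM-style copy bandwidth measurement can be approximated in pure Python. This is a rough sketch under stated assumptions: the buffer size and repeat count are arbitrary, and real STREAM uses compiled loops, so expect lower numbers here.

```python
import time

def copy_bandwidth_gbs(size_mb: int = 64, repeats: int = 5) -> float:
    # Time repeated copies of a large buffer and report the best
    # observed bandwidth, STREAM "Copy" style: each copy reads the
    # source and writes the destination, so 2x bytes move per pass.
    src = bytes(size_mb * 1024 * 1024)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        dst = bytearray(src)  # bytearray(src) forces a real copy
        best = min(best, time.perf_counter() - t0)
        del dst
    moved = 2 * size_mb * 1024 * 1024  # bytes read plus bytes written
    return moved / best / 1e9

if __name__ == "__main__":
    print("copy bandwidth: %.2f GB/s" % copy_bandwidth_gbs())
```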
wenet_ov2023.1
Run WeNet with OpenVINO 2023.1