Intel Corporation's repositories
scikit-learn-intelex
Intel(R) Extension for Scikit-learn is a seamless way to speed up your Scikit-learn application
compute-runtime
Intel® Graphics Compute Runtime for oneAPI Level Zero and OpenCL™ Driver
ad-rss-lib
Library implementing the Responsibility Sensitive Safety model (RSS) for Autonomous Vehicles
intel-sgx-ssl
Intel® Software Guard Extensions SSL
intel-xpu-backend-for-triton
OpenAI Triton backend for Intel® GPUs
mlir-extensions
Intel® Extension for MLIR. A staging ground for MLIR dialects and tools for Intel devices using the MLIR toolchain.
cartwheel-ffmpeg
Intel developer staging area for unmerged upstream patch contributions to FFmpeg
onnxruntime
ONNX Runtime: cross-platform, high-performance scoring engine for ML models
intel-device-plugins-for-kubernetes
Collection of Intel device plugins for Kubernetes
intel-inb-manageability
The Intel® In-Band Manageability Framework enables an administrator to perform critical device management operations remotely, over-the-air, from the cloud. It also publishes telemetry, critical events, and logs from an IoT device to the cloud, enabling the administrator to take corrective action if and when necessary. The framework is designed to be modular and flexible, ensuring the solution scales across preferred cloud service providers (for example, Azure* IoT Central, ThingsBoard, and so on).
intel-technology-enabling-for-openshift
This project focuses on enabling and innovating Intel's enterprise AI and cloud-native foundation for the Red Hat OpenShift Container Platform (RHOCP), including Intel data center hardware features, the Intel technology-enhanced AI platform, and provisioning of the referenced AI workloads for OpenShift.
vscode-tcf-debug
Visual Studio Code Target Communication Framework (TCF) Debugger Extension
cartwheel-gstreamer
Intel developer staging area for unmerged upstream patch contributions to the GStreamer monorepo
Speech-to-Text-Analytics-System
Speech-to-Text Analytics System
onnxruntime-inference-examples
Examples for using ONNX Runtime for machine learning inferencing.