There are 22 repositories under the float16 topic.
Toolkit for efficient experimentation with Speech Recognition, Text2Speech and NLP
Up to 200x Faster Dot Products & Similarity Metrics — for Python, Rust, C, JS, and Swift, supporting f64, f32, f16 real & complex, i8, and bit vectors, using SIMD on AVX2, AVX-512, NEON, SVE, & SVE2 📐
Stage 3 IEEE 754 half-precision floating-point ponyfill
:dart: Accumulated Gradients for TensorFlow 2
Half-float library for C and for the z80
Code for testing the native float16 matrix multiplication performance on Tesla P100 and V100 GPU based on cublasHgemm
PyTorch half precision gemm lib w/ fused optional bias + optional relu/gelu
TFLite applications: optimized .tflite models (lightweight and low-latency) and code to run them directly on your microcontroller!
The main purpose of this library is to provide functions for converting to and from half-precision (16-bit) floating-point numbers. It also provides functions for basic arithmetic and comparison of half floats.
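The 16-bit format such conversion libraries target is IEEE 754 binary16: 1 sign bit, 5 exponent bits (bias 15), and 10 fraction bits. A minimal Python sketch of decoding a half-precision bit pattern — the helper name is illustrative, not this library's API:

```python
def half_bits_to_float(bits: int) -> float:
    """Decode a 16-bit half-precision bit pattern (illustrative helper)."""
    sign = (bits >> 15) & 0x1    # 1 sign bit
    exp = (bits >> 10) & 0x1F    # 5 exponent bits, bias 15
    frac = bits & 0x3FF          # 10 fraction bits
    if exp == 0:                 # zero and subnormals: no implicit leading 1
        val = frac * 2.0 ** -24
    elif exp == 0x1F:            # all-ones exponent: infinities and NaNs
        val = float("inf") if frac == 0 else float("nan")
    else:                        # normal numbers: implicit leading 1
        val = (1 + frac / 1024) * 2.0 ** (exp - 15)
    return -val if sign else val

# 0x3C00 encodes 1.0; 0x7BFF is the largest finite half, 65504.0
print(half_bits_to_float(0x3C00), half_bits_to_float(0x7BFF))
```

Encoding in the other direction additionally needs a rounding rule (typically round-to-nearest-even), which is where such libraries differ.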
Half-precision floating-point mathematical constants.
Difference between one and the smallest value greater than one that can be represented as a half-precision floating-point number.
The bias of a half-precision floating-point number's exponent.
Maximum half-precision floating-point number.
Maximum safe half-precision floating-point integer.
Minimum safe half-precision floating-point integer.
Half-precision floating-point positive infinity.
Smallest positive half-precision floating-point subnormal number.
Square root of half-precision floating-point epsilon.
Cube root of half-precision floating-point epsilon.
Half-precision floating-point negative infinity.
Smallest positive normalized half-precision floating-point number.
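All of the constants listed above follow directly from binary16's 5 exponent bits (bias 15) and 10 fraction bits. A quick Python sketch deriving them — the names mirror the packages above but are local assumptions, not those packages' exports:

```python
# Derived from the IEEE 754 binary16 layout: 1 sign, 5 exponent (bias 15), 10 fraction bits
FLOAT16_EXPONENT_BIAS = 15                          # 2**(5 - 1) - 1
FLOAT16_EPS = 2.0 ** -10                            # gap between 1 and the next half
FLOAT16_SQRT_EPS = FLOAT16_EPS ** 0.5               # 2**-5 = 0.03125
FLOAT16_CBRT_EPS = FLOAT16_EPS ** (1 / 3)
FLOAT16_MAX = (2 - 2.0 ** -10) * 2.0 ** 15          # largest finite half: 65504.0
FLOAT16_MAX_SAFE_INTEGER = 2 ** 11                  # 2048; beyond this, integers skip
FLOAT16_MIN_SAFE_INTEGER = -(2 ** 11)
FLOAT16_SMALLEST_NORMAL = 2.0 ** -14                # smallest positive normal
FLOAT16_SMALLEST_SUBNORMAL = 2.0 ** -24             # smallest positive subnormal
FLOAT16_PINF = float("inf")
FLOAT16_NINF = float("-inf")

print(FLOAT16_MAX, FLOAT16_EPS)  # 65504.0 0.0009765625
```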
Fast half-precision floating-point operations for C++