adknudson / BFloat.jl

Julia implementation of Google's Brain Float

BFloat

A Julia implementation of Google's Brain Float16 (BFloat16). A BFloat16, as the name suggests, uses 16 bits to represent a floating-point number. Like a single-precision number it uses 8 bits for the exponent, but it has only 7 bits for the mantissa. It is effectively a truncated Float32, which makes conversion back and forth between the two cheap. The tradeoff is reduced precision (only about 2-3 decimal digits).
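Because a BFloat16 shares the Float32 sign and exponent layout, the conversion described above comes down to keeping or dropping the low 16 mantissa bits. The sketch below illustrates that bit-level idea in plain Julia; the function names are illustrative only, not the package's API, and the truncation shown ignores rounding.

```julia
# Minimal sketch of the Float32 <-> BFloat16 relationship described above.
# Illustrative helper names, not this package's API; truncation ignores rounding.

# A Float32 has 1 sign bit, 8 exponent bits, and 23 mantissa bits.
# Keeping the top 16 bits (sign + exponent + 7 mantissa bits) gives the BFloat16 pattern.
truncate_to_bf16_bits(x::Float32) = UInt16(reinterpret(UInt32, x) >> 16)

# Converting back is just padding the low 16 bits with zeros.
widen_to_float32(bits::UInt16) = reinterpret(Float32, UInt32(bits) << 16)

x  = 3.14159f0
bf = truncate_to_bf16_bits(x)   # 0x4049
y  = widen_to_float32(bf)       # ≈ 3.140625, about 2-3 decimal digits of precision
```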

About

License: MIT License


Languages

Language: Julia 100.0%