Tiiiger / QPyTorch

Low Precision Arithmetic Simulation in PyTorch


Simple test error for sum

drcut opened this issue · comments

Hi, I am using QPyTorch on a V100 (CUDA 9) with PyTorch 1.1.0. I ran a simple test, but the assertion fails.
Is my understanding of how to use QPyTorch wrong? Thanks!
```python
import torch
from qtorch import FloatingPoint
from qtorch.quant import Quantizer

# Simulated fp16: 5 exponent bits, 10 mantissa bits
bit_16 = FloatingPoint(exp=5, man=10)
weight_quant = Quantizer(forward_number=bit_16, backward_number=None,
                         forward_rounding="nearest", backward_rounding="nearest").cuda()

a = torch.rand([10]).cuda()
b = torch.rand([10]).cuda()
print("add:.................")
res1 = a.half() + b.half()                               # native fp16 add
res2 = weight_quant(weight_quant(a) + weight_quant(b))   # QPyTorch-simulated fp16 add
print(res1)
print(res2)
assert res1.equal(res2.half())                           # this assertion fails
```

Hi @drcut, I am looking into this. Thank you for pointing it out.

Hi @drcut, this is due to a difference between the rounding modes of QPyTorch and PyTorch.

In short, QPyTorch rounds to nearest with ties away from zero, while PyTorch rounds to nearest with ties to even.

If you are not familiar with rounding modes, you can observe the difference by quantizing these two constants, each of which lies exactly halfway between two representable fp16 values:

```
QPyTorch: 1.00048828125 -> 1.0010
PyTorch:  1.00048828125 -> 1.0

QPyTorch: 1.00146484375 -> 1.0020
PyTorch:  1.00146484375 -> 1.0020
```
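
For concreteness, here is a minimal snippet (using the same `Quantizer` API as in the report above) that reproduces the two tie cases:

```python
import torch
from qtorch import FloatingPoint
from qtorch.quant import Quantizer

bit_16 = FloatingPoint(exp=5, man=10)
quant = Quantizer(forward_number=bit_16, backward_number=None,
                  forward_rounding="nearest", backward_rounding="nearest").cuda()

# Both constants lie exactly halfway between two adjacent fp16 values,
# so only the tie-breaking rule decides the result.
x = torch.tensor([1.00048828125, 1.00146484375]).cuda()
print(quant(x))   # QPyTorch (ties away from zero): [1.0010, 1.0020]
print(x.half())   # PyTorch  (ties to even):        [1.0000, 1.0020]
```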

I am looking into how to conform to PyTorch's standard.
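
In the meantime, if you only need the standard fp16 format, a possible workaround (just a sketch, not an official QPyTorch API) is to lean on PyTorch's own cast, which already rounds ties to even:

```python
import torch

def quantize_fp16_rne(x: torch.Tensor) -> torch.Tensor:
    # Round-to-nearest-even fp16 quantization via PyTorch's native cast.
    # Only valid for the (exp=5, man=10) format; QPyTorch is still needed
    # for other exponent/mantissa widths.
    return x.half().float()
```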


Thanks! That's really helpful.