joennlae / halutmatmul

Hashed Lookup Table based Matrix Multiplication (halutmatmul) - Stella Nera accelerator

Performance of halut matmul_online

leomem opened this issue · comments

Hi, I am testing the example Python code on an Intel Xeon box. Basically, np.matmul(A_test, B) and hm.matmul_online(A_test) are each executed 1000 times to compare the runtimes. I expected halutmatmul to be much faster; however, it turned out to take much longer.
Total time taken to np matmul 1000 times: 0.05877375602722168 seconds
Total time taken to halut matmul 1000 times: 1.6328861713409424 seconds
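The timing loop was essentially the following (a sketch with assumed shapes; the halut side is shown as a comment, since `hm` is a fitted halutmatmul object):

```python
import time
import numpy as np

A_test = np.random.rand(1024, 64).astype(np.float32)  # assumed shapes
B = np.random.rand(64, 32).astype(np.float32)

start = time.perf_counter()
for _ in range(1000):
    np.matmul(A_test, B)
np_elapsed = time.perf_counter() - start

# The halut side would be timed the same way, e.g.:
# for _ in range(1000):
#     hm.matmul_online(A_test)
```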

Is there anything I am missing? Thanks!

Hi :-) Thank you for the question.

I get your thinking :-)

np.matmul

np.matmul dispatches to highly optimised routines: each hardware manufacturer provides SIMD-optimised BLAS libraries, which are then linked into NumPy.

The linking in numpy happens around here:
https://github.com/numpy/numpy/blob/2970735a38b1a1142ab7fd0a14b906611448277e/numpy/_core/src/common/npy_cblas_base.h#L406

For reference, here is the sgemm documentation of the MKL BLAS library used on your Xeon box:
https://www.intel.com/content/www/us/en/docs/onemkl/developer-reference-c/2023-0/cblas-gemm-001.html
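You can check which BLAS implementation your NumPy build is actually linked against (on an MKL-backed install the output will mention mkl):

```python
import numpy as np

# Prints the build configuration, including the linked
# BLAS/LAPACK libraries (e.g. MKL, OpenBLAS).
np.show_config()
```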

halutmatmul

I did some (very simple) optimization in Python for halutmatmul:

With numba:

```python
import numba
import numpy as np
from numba import prange

@numba.jit(parallel=True, nopython=True)
def read_luts_opt(
    A_raveled: np.ndarray,
    A_shape: tuple[int, int],
    B_luts: np.ndarray,
    total_result: np.ndarray,
) -> np.ndarray:
    # For each output column, gather the LUT entries selected by the
    # encoded inputs and sum over the codebook dimension.
    for i in prange(len(B_luts)):
        read_lut = B_luts[i].ravel()[A_raveled].reshape(A_shape)
        read_lut = read_lut.sum(axis=-1)
        total_result[i] = read_lut
    return total_result
```
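For intuition, the same lookup-and-accumulate step can be sketched in plain NumPy (synthetic shapes and data, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
M, C, K, N = 8, 4, 16, 3   # rows, codebooks, prototypes per codebook, output cols
B_luts = rng.random((N, C, K)).astype(np.float32)  # one lookup table per output col
A_enc = rng.integers(0, K, size=(M, C))            # encoded inputs: prototype indices

# Offset each codebook so flat (raveled) indexing selects entry k of codebook c.
A_raveled = (A_enc + np.arange(C) * K).ravel()

out = np.empty((N, M), dtype=np.float32)
for i in range(N):
    read_lut = B_luts[i].ravel()[A_raveled].reshape(M, C)
    out[i] = read_lut.sum(axis=-1)  # accumulate over codebooks
```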

The function is compiled just in time: run one warmup call of hm.matmul_online first to trigger the JIT compilation, then run it 1000 times for the timing. It should already be faster.
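The warmup pattern is generic for any JIT-compiled function (a minimal sketch, not specific to halutmatmul):

```python
import time

def benchmark(fn, n=1000, warmup=1):
    """Time n calls of fn, excluding warmup calls that absorb
    one-time costs such as JIT compilation."""
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(n):
        fn()
    return time.perf_counter() - start
```

You would then time the halut side with something like `benchmark(lambda: hm.matmul_online(A_test))`, so the numba compilation happens in the warmup call rather than inside the measured loop.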

But in the end, you will probably not beat the BLAS implementation in terms of speed from Python. That is why we argue for very simple custom hardware support (see the paper).

I hope this helps :-)

Thanks for the information. Very useful.