Parallelization of the PC loop
devinamatthews opened this issue · comments
Many applications (e.g. in machine learning) end up with matrix products that have small(ish) `m` and `n` dimensions but a large `k`. For these cases, one would need to parallelize the `pc` loop to take advantage of threading in an efficient way. An example application is #536. The downside of parallelizing the `pc` loop is that extra workspace is required to store the partial C matrices, which then have to be reduced (probably in parallel).
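For concreteness, the partial-C-plus-reduction scheme could be sketched roughly like this (a sequential simulation with made-up names, not actual BLIS code):

```c
#include <string.h>

/* Hypothetical sketch of parallelizing the k dimension: each of NT_PC
   "threads" multiplies its own k-slice of A and B into a private C buffer
   with beta = 0, and the partials are then reduced into the real C. */
enum { M = 2, N = 2, K = 8, NT_PC = 4 };

static void gemm_slice( const double *a, const double *b, double *c,
                        int k0, int k1 )
{
    /* c += a[:, k0:k1] * b[k0:k1, :], row-major */
    for ( int i = 0; i < M; i++ )
        for ( int j = 0; j < N; j++ )
            for ( int k = k0; k < k1; k++ )
                c[ i*N + j ] += a[ i*K + k ] * b[ k*N + j ];
}

void gemm_pc_parallel( const double *a, const double *b, double *c )
{
    double partial[ NT_PC ][ M*N ];
    memset( partial, 0, sizeof( partial ) ); /* beta = 0 for the partials */

    /* In a threaded build each iteration would run on its own thread group. */
    for ( int t = 0; t < NT_PC; t++ )
        gemm_slice( a, b, partial[ t ], t*K/NT_PC, ( t + 1 )*K/NT_PC );

    /* Reduction: sum the partial C matrices into the real C (axpym-like);
       this step could itself be parallelized over m and n. */
    for ( int t = 0; t < NT_PC; t++ )
        for ( int i = 0; i < M*N; i++ )
            c[ i ] += partial[ t ][ i ];
}
```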
To support parallelizing this loop, I propose the following plan:
- Determine the maximum size `max_size_c` of the temporary C matrices that we are willing to store. This could be a constant (e.g. 8MiB) or it could depend on the cache blocking parameters in some way (e.g. not larger than the sum of the A block and B panel sizes).
- Modify the automatic partitioning code to set the parallelism of the PC loop to the maximum value `nt_pc` (which is a factor of `nt`?) for which `(nt_pc-1)*m*n <= max_size_c`, and then assign the remaining ways of parallelism based on `nt -> nt/nt_pc`. Alternatively, the current algorithm could be extended to handle `nt_ic`, `nt_jc`, and `nt_pc` simultaneously while also obeying the `(nt_pc-1)*m*n <= max_size_c` restriction. If the user sets `BLIS_PC_NT` then that can be used as-is (without obeying the size restriction?).
- In `bli_gemm_blk_var3.c`:
  - The main thread in each thread group (after splitting for parallelism in the `pc` direction), except the "main" thread group, should check out a block from the existing pool for C, which is of size `m*n`.
  - The main thread group gets the "real" C and keeps `beta` as-is; all other threads get a chunk of the C buffer and set `beta = 0`.
  - Parallelize the `pc` loop as in `var1` and `var2`.
  - After the loop, reduce (sum) the additional blocks back into the main block. This can be parallelized over `m` and `n`, using e.g. `axpym`.
  - If the blocks of C should be of size at most `m_c` along the `m` dimension, then the reduction step must happen in `bli_gemm_blk_var1.c`.
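The `nt_pc` selection described above might look something like the following (a sketch only; `choose_nt_pc` and its parameters are hypothetical, not an existing BLIS function):

```c
#include <stddef.h>

/* Hypothetical sketch of the proposed nt_pc selection: pick the largest
   factor of nt such that the extra workspace of (nt_pc-1)*m*n elements,
   scaled by the element size, fits within max_size_c bytes. */
static int choose_nt_pc( int nt, size_t m, size_t n,
                         size_t elem_size, size_t max_size_c )
{
    int best = 1;
    for ( int f = 1; f <= nt; f++ )
    {
        if ( nt % f != 0 ) continue; /* nt_pc must be a factor of nt */
        if ( ( size_t )( f - 1 ) * m * n * elem_size <= max_size_c )
            best = f;
    }
    return best;
}
```

The remaining ways of parallelism would then be assigned from `nt / nt_pc`.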
> The main thread should check out a block from the existing pool for C, which is of size `(nt_pc-1)*m*n`, and broadcast to the other threads.

Since the main thread takes the "real" C and the original `beta`, couldn't the broadcast be replaced by simply resetting each other thread's part to zero?
We should also be mindful of floating-point accuracy (typically for `float`), since the reduction changes the rounding order.
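A tiny illustration of the rounding-order concern (not BLIS code): summing the same three `float` values in two different orders gives two different results, because `1e8f + 1.0f` rounds back to `1e8f`:

```c
/* Sum the elements of x in the order given by the index array `order`.
   Demonstrates that float reductions are not associative. */
float sum_in_order( const float *x, const int *order, int n )
{
    float s = 0.0f;
    for ( int i = 0; i < n; i++ )
        s += x[ order[ i ] ];
    return s;
}
```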
@hominhquan by broadcast I just mean a pointer broadcast. Only the main thread includes the `beta*C` part.
@fgvanzee I welcome your comments.
Sure, though I'm going to need some time to think about this and how it compares and contrasts with my own ideas.
> ii. 1 if `n > n_c`

@devinamatthews Can you explain the thinking behind this?
I'm reconsidering this now, but my thinking was that a) you only want to parallelize over k if n is smallish anyway, and b) you only want to check out a block for C and do a reduction once. But on second thought I don't think there's really a downside to b), and maybe you do still want to parallelize over k for some bizarre situation like m=20, n=5000, k=10000.
> and maybe you do still want to parallelize over k for some bizarre situation like m=20, n=5000, k=10000.

I think if @rvdg were here, he would likely say that this isn't bizarre at all. We want to have maximum flexibility in the dimensions along which we parallelize, and oftentimes that means parallelizing two dimensions simultaneously, especially when you have large numbers of cores. And if only one of m and n is large (and k is also large), that means (potentially) parallelizing k.
Agreed. Drop that requirement.
I think I know how to solve this for generalized problems. It builds upon the idea of carouseling, which @tlrmchlsmth developed and prototyped years ago. The key feature of that solution is that it avoids using large workspace, but it will also require significant non-trivial changes to the overall matrix multiplication algorithm.
(Historical aside: I found Tyler's original implementation to be a bit rough around the edges. I also did not fully understand it at the time, and those two things contributed to it going unmerged. But the idea, which was delightfully simple, stuck in my head.)
So, I think it's time to reimplement Tyler's carouseling. I'll likely start off in the `gemmlike` sandbox so that I can get a bird's-eye view of what parts of the Goto algorithm need changing, which will then inform if and how I can cleanly port the idea to the conventional code path.
@fgvanzee let's talk about this; IIRC carouseling doesn't take full advantage of parallelism in both dimensions.
Tuesday then.
> @fgvanzee let's talk about this; IIRC carouseling doesn't take full advantage of parallelism in both dimensions.
The thing about carouseling is that it doesn't actually increase the number of FMA instructions that you can execute in parallel. Instead, the advantage comes from not messing up your block sizes. So instead of just parallelizing along the `m` dimension and cutting the matrix up into too-small partitions, we also parallelize along the `k` dimension. And then the carousel mechanism just manages the concurrency.
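If I understand the rotation correctly, the schedule could be sketched like this (hypothetical, not Tyler's actual implementation): at step `s`, thread `t` works on partition `(t + s) % nt_pc`, so no two threads ever touch the same partition at the same step, and concurrency is managed without extra workspace:

```c
/* Hypothetical carousel-style schedule: NT_PC threads rotate through
   NT_PC partitions; at each step the mapping thread -> partition is a
   cyclic shift, so assignments within a step never collide, and each
   thread visits every partition exactly once over NT_PC steps. */
enum { NT_PC = 4 };

int carousel_partition( int thread, int step )
{
    return ( thread + step ) % NT_PC;
}
```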
Anyway, I think there is probably enough interesting stuff here to explore the tradeoffs between the two approaches and situations where each does better than the other!