ycjuan / libffm

A Library for Field-aware Factorization Machines

Does parallel operation of train function in ffm.cpp ensure thread safety?

heekyungyoon opened this issue

Regarding the train function in ffm.cpp (lines 228-375), I have a question about thread safety.

Below are lines 288-312:
    #if defined USEOMP
    #pragma omp parallel for schedule(static) reduction(+: tr_loss)
    #endif
    for(ffm_int ii = 0; ii < (ffm_int)order.size(); ii++)
    {
        ffm_int i = order[ii];

        ffm_float y = tr->Y[i];

        ffm_node *begin = &tr->X[tr->P[i]];
        ffm_node *end = &tr->X[tr->P[i+1]];

        ffm_float r = R_tr[i];

        ffm_float t = wTx(begin, end, r, *model);

        ffm_float expnyt = exp(-y*t);

        tr_loss += log(1+expnyt);

        ffm_float kappa = -y*expnyt/(1+expnyt);

        wTx(begin, end, r, *model, kappa, param.eta, param.lambda, true);
    }

I'm new to OpenMP parallel operations, and I'm curious whether this ensures thread safety for the wTx call at the very bottom:

    wTx(begin, end, r, *model, kappa, param.eta, param.lambda, true);

Since wTx with do_update = true updates the model weights, it seems it could interfere with other threads updating the same weights (see the sketch below).

Looking forward to your reply.
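
To make the concern concrete, here is a minimal, self-contained OpenMP demo (hypothetical code, not from libffm) in which every iteration performs an unsynchronized read-modify-write on a shared value, the same access pattern as the weight updates inside wTx:

    // race_demo.cpp -- build: g++ -fopenmp -O2 race_demo.cpp -o race_demo
    #include <cstdio>

    int main()
    {
        // One shared "weight", standing in for the shared FFM model.
        double w = 0.0;
        const int n_updates = 1000000;

        #pragma omp parallel for schedule(static)
        for(int i = 0; i < n_updates; i++)
        {
            // Unsynchronized read-modify-write: two threads can read the same
            // old value, each add to it, and one of the two updates is lost.
            w += 1.0;
        }

        // With OMP_NUM_THREADS > 1 this typically prints less than 1000000.
        printf("w = %.0f (expected %d)\n", w, n_updates);
        return 0;
    }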

No, it is not thread safe.
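
As context: when do_update is true, each thread writes into the shared model weights, and nothing in the loop serializes two threads that touch the same feature, so updates can be lost. Lock-free parallel SGD ("Hogwild-style") deliberately tolerates such races for throughput, at the cost of run-to-run reproducibility. Purely as an illustration (not a libffm patch), a minimal sketch of serializing each scalar update with an OpenMP atomic:

    // atomic_update.cpp -- build: g++ -fopenmp -O2 atomic_update.cpp -o atomic_update
    #include <cstdio>

    int main()
    {
        double w = 0.0;
        const int n_updates = 1000000;

        #pragma omp parallel for schedule(static)
        for(int i = 0; i < n_updates; i++)
        {
            // The atomic makes the read-modify-write indivisible,
            // so no update is lost under any thread count.
            #pragma omp atomic
            w += 1.0;
        }

        printf("w = %.0f (expected %d)\n", w, n_updates);
        return 0;
    }

Note that atomics slow contended updates down considerably, and they still do not fix the floating-point accumulation order, so results can differ slightly between runs even with this change.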