zhihu / cuBERT

Fast implementation of BERT inference directly on NVIDIA (CUDA, CUBLAS) and Intel MKL


output softmax in addition to logits

levyfan opened this issue

As discussed in #20, a softmax output can be added after the logits, the same as in

https://github.com/google-research/bert/blob/d66a146741588fb208450bde15aa7db143baaa69/run_classifier.py#L608

logits = tf.matmul(output_layer, output_weights, transpose_b=True)
logits = tf.nn.bias_add(logits, output_bias)
probabilities = tf.nn.softmax(logits, axis=-1)
log_probs = tf.nn.log_softmax(logits, axis=-1)

The final output could include logits, probabilities, and log_probs at the same time.
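
For reference, probabilities and log_probs are pure post-processing of the logits, so they can also be derived on the caller side as long as the logits are exposed. Below is a minimal NumPy sketch of that derivation (the function name softmax_outputs is hypothetical and not part of cuBERT's API); it uses the standard max-subtraction trick for numerical stability.

import numpy as np

def softmax_outputs(logits):
    # logits is assumed to have shape (batch_size, num_labels),
    # e.g. the values returned by the classifier output layer.
    # Subtract the per-row maximum before exponentiating for numerical stability.
    shifted = logits - np.max(logits, axis=-1, keepdims=True)
    # log_softmax = shifted - log(sum(exp(shifted)))
    log_probs = shifted - np.log(np.sum(np.exp(shifted), axis=-1, keepdims=True))
    probabilities = np.exp(log_probs)
    return probabilities, log_probs

# Example: two examples, three labels.
logits = np.array([[2.0, 1.0, 0.1],
                   [0.5, 0.5, 0.5]])
probs, log_probs = softmax_outputs(logits)
print(probs)      # each row sums to 1
print(log_probs)  # elementwise log of probs

Computing all three outputs inside the library instead would save the extra pass over the data, which is the motivation for this issue.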