openai / finetune-transformer-lm

Code and model for the paper "Improving Language Understanding by Generative Pre-Training"

Home Page: https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf

Have you ever tried your model on the ROCStories training dataset?

Brandonnogithub opened this issue · comments

I trained this model on the ROCStories training data (constructing the wrong endings by random sampling) and evaluated it on the test data.
The result is only about 60% accuracy, while a common embedding model can reach 65%+.
I'm not sure whether I'm using this model the right way.
Have you ever tried this?
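
For reference, here is one way the "wrong ending by random" construction might look. This is a minimal sketch, not code from this repo: it assumes each ROCStories entry is a list of five sentences, and it samples the distractor ending from a different story, producing (context, ending 1, ending 2, label) tuples in the two-choice Story Cloze format.

```python
import random

def make_cloze_examples(stories, seed=0):
    """Build Story Cloze-style examples from five-sentence ROCStories.

    Each example pairs the first four sentences (the context) with the
    true fifth sentence and a "wrong" ending sampled at random from a
    different story. The label is the index (0 or 1) of the true ending.
    """
    rng = random.Random(seed)
    examples = []
    for i, story in enumerate(stories):
        context, true_ending = story[:4], story[4]
        # Sample a distractor ending from a story other than this one.
        j = rng.randrange(len(stories) - 1)
        if j >= i:
            j += 1
        wrong_ending = stories[j][4]
        # Randomize which slot holds the true ending, as in the
        # Story Cloze test format.
        if rng.random() < 0.5:
            examples.append((context, true_ending, wrong_ending, 0))
        else:
            examples.append((context, wrong_ending, true_ending, 1))
    return examples
```

One caveat with this construction: randomly sampled endings are often topically unrelated to the context, so the training task can be much easier than the real Story Cloze test (whose wrong endings were written to be plausible), which may partly explain a low test accuracy.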