NUSTM / ChatGPT-Sentiment-Evaluation

Can ChatGPT really understand the opinions, sentiments, and emotions contained in the text? We provide a preliminary evaluation.


Performance Record Documents

Steven-1124 opened this issue

Hi there!

Your work is excellent!

To manually check and understand the prediction results of the different models, I am eager to find the documents that store this information.

I went through your work and read the code on GitHub. I understand the raw data (true labels) stored in the standard data files. However, there are many files named "50_test", "100_test", "train", "test", and "dev" under different folders, and I do not understand how they relate to each other. My guess is that "50_test" and "100_test" were generated by extracting 50 and 100 lines from "dev", respectively.

I am trying to understand these files in order to find the documents that record the detailed prediction results of the different models on the different tasks. Could you kindly help me get a clearer picture of this?

I would really appreciate it if you could explain either the logic behind the file names and how they relate to each other, or where I can find the documents storing the prediction results of the different models!

Hi, thanks for your attention and kind words.

Sorry for the confusion. Initially, OpenAI had not released the ChatGPT API, so we attempted to benchmark its performance on part of the test set; for example, 50_test means we sampled 50 examples from the test set. Along the way, OpenAI released the ChatGPT API, so we decided to benchmark ChatGPT on the full test set. You can refer to our paper for the detailed statistics of the test set for each task.
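For illustration only, here is a minimal sketch of how such a subsample could be drawn, assuming the full test set is stored one example per line; the file names are placeholders, not necessarily the repository's actual layout:

```python
import random

# Hypothetical illustration: draw 50 examples from a full test file that
# stores one example per line. The file names and layout are assumptions,
# not necessarily the ones used in this repository.
random.seed(42)  # fixed seed so the subsample is reproducible

with open("test", "r", encoding="utf-8") as f:
    lines = [line.rstrip("\n") for line in f if line.strip()]

subsample = random.sample(lines, k=50)

with open("50_test", "w", encoding="utf-8") as f:
    f.write("\n".join(subsample) + "\n")
```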

If you have any further questions, please do not hesitate to leave a comment.

Thanks for your immediate response! Do you mean that you store the prediction results in the 50_test files? And, under the different prefix folders, does each 50_test file record the results (not only the evaluation metrics, but also the details, like the sentences and the predicted labels) for a different task and dataset? Also, if 50_test holds the results of ChatGPT, what about the predictions of the other models?

> Sorry for the confusion. Initially, OpenAI had not released the ChatGPT API, so we attempted to benchmark its performance on part of the test set; for example, 50_test means we sampled 50 examples from the test set.

I'm very glad to hear that! I will check it soon, but by the way, can I access the details of the prediction results (as I mentioned above) from there?

> Along the way, OpenAI released the ChatGPT API, so we decided to benchmark ChatGPT on the full test set. You can refer to our paper for the detailed statistics of the test set for each task.

Hi, 50_test is the set obtained by sampling 50 test examples from the full test set; it provides only the original review sentences and their corresponding labels.
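If it helps with manual checking, a small hypothetical snippet for reading such a file and scoring your own predictions against the gold labels might look like this; the tab-separated "sentence TAB label" layout and the predictions file are assumptions for illustration, not the repository's actual format:

```python
# Hypothetical illustration: compare your own predictions against the gold
# labels in a 50_test-style file. The "sentence<TAB>label" layout and the
# predictions file name are assumptions, not the repository's format.
def load_pairs(path):
    pairs = []
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            if line.strip():
                sentence, label = line.rstrip("\n").rsplit("\t", 1)
                pairs.append((sentence, label))
    return pairs

gold = load_pairs("50_test")

with open("my_predictions.txt", encoding="utf-8") as f:
    predictions = [line.strip() for line in f if line.strip()]

correct = sum(pred == label for pred, (_, label) in zip(predictions, gold))
print(f"accuracy: {correct / len(gold):.2%}")
```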

We recommend rerunning the evaluation experiments to obtain the predictions yourselves. This can be done through the OpenAI Playground, the chat window, or the API. You will find that ChatGPT is quite powerful on these tasks, especially in some long-tail domains.
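As a rough sketch of the API route with the openai Python package, something like the following could work; the model name, prompt wording, and label set here are illustrative and not the exact setup from our paper:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def predict_sentiment(sentence: str) -> str:
    """Ask the chat model for a one-word sentiment label.

    The prompt and label set are illustrative; they are not the exact
    prompts used in the paper.
    """
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "user",
                "content": (
                    "Classify the sentiment of this review as positive, "
                    "negative, or neutral. Reply with one word only.\n\n"
                    f"Review: {sentence}"
                ),
            },
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip().lower()

print(predict_sentiment("The battery life is amazing, but the screen scratches easily."))
```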

Thanks so much! I will check it! It has been very helpful talking to you!