We conduct a preliminary evaluation of ChatGPT/GPT-4 for machine translation. [V1] [V2-ArXiv] [V3-ArXiv]
This repository presents the main findings and releases the evaluated test sets as well as the translation outputs, to support replication of the study.
Updates for GPT-4
We update the translation performance of GPT-4, which shows substantial improvements over ChatGPT. Refer to [ParroT] for the COMET metric results.
Please kindly cite the papers of the data sources if you use any of them.
- Flores: The FLORES-101 Evaluation Benchmark for Low-Resource and Multilingual Machine Translation
- WMT19 Biomedical: Findings of the WMT 2019 Biomedical Translation Shared Task: Evaluation for Medline Abstracts and Biomedical Terminologies
- WMT20 Robustness: Findings of the WMT 2020 Shared Task on Machine Translation Robustness
We ask ChatGPT itself for advice on prompts that can trigger its translation ability:
Summarized prompts:
- Tp1:
Translate these sentences from [SRC] to [TGT]:
- Tp2:
Answer with no quotes. What do these sentences mean in [TGT]?
- Tp3:
Please provide the [TGT] translation for these sentences:
Table 1: Comparison of different prompts for ChatGPT to perform Chinese-to-English (Zh⇒En) translation.
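The prompt templates above can be instantiated mechanically. The following Python sketch fills the [SRC]/[TGT] placeholders and appends the sentences to translate; the `build_prompt` helper is our own illustration and not part of this repository.

```python
# Summarized prompt templates Tp1-Tp3 from Table 1.
TEMPLATES = {
    "Tp1": "Translate these sentences from [SRC] to [TGT]:",
    "Tp2": "Answer with no quotes. What do these sentences mean in [TGT]?",
    "Tp3": "Please provide the [TGT] translation for these sentences:",
}

def build_prompt(template_id, src_lang, tgt_lang, sentences):
    """Fill the [SRC]/[TGT] placeholders and append the sentences to translate."""
    header = (TEMPLATES[template_id]
              .replace("[SRC]", src_lang)
              .replace("[TGT]", tgt_lang))
    return header + "\n" + "\n".join(sentences)

# Example: a Tp3 prompt for Chinese-to-English translation.
print(build_prompt("Tp3", "Chinese", "English", ["你好，世界。"]))
```

Note that Tp2 and Tp3 only mention the target language, so the [SRC] replacement is simply a no-op for those templates.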
We evaluate translations among four languages, namely German, English, Romanian, and Chinese, considering the effects of both resource availability and language family:
For distant languages, we explore a strategy named Pivot Prompting, which asks ChatGPT to translate the source sentence into a high-resource pivot language first and then into the target language. Accordingly, we adjust the Tp3 prompt as follows:
- Tp3-pivot:
Please provide the [PIV] translation first and then the [TGT] translation for these sentences one by one:
Table 3: Performance of ChatGPT with pivot prompting. New results are obtained from the updated ChatGPT version on 2023.01.31. LR: length ratio.
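The Tp3-pivot template can be instantiated in the same way as the other prompts. A minimal sketch, assuming English as the pivot language; the `build_pivot_prompt` helper is illustrative only and not part of this repository:

```python
# Tp3-pivot template: the model outputs the pivot translation first,
# then the target translation, sentence by sentence.
PIVOT_TEMPLATE = ("Please provide the [PIV] translation first and then the "
                  "[TGT] translation for these sentences one by one:")

def build_pivot_prompt(pivot_lang, tgt_lang, sentences):
    """Fill the [PIV]/[TGT] placeholders and append the sentences to translate."""
    header = (PIVOT_TEMPLATE
              .replace("[PIV]", pivot_lang)
              .replace("[TGT]", tgt_lang))
    return header + "\n" + "\n".join(sentences)

# Example: German-to-Chinese translation via English as the pivot.
print(build_pivot_prompt("English", "Chinese", ["Guten Morgen!"]))
```

The intermediate pivot output stays inside a single query, so no second round trip to the model is needed.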
We evaluate the translation robustness of ChatGPT on biomedical abstracts, Reddit comments, and crowdsourced speech:
We acknowledge that this report is far from complete; several aspects could make it more reliable in the future:
- Coverage of Test Data: Currently, we randomly select 50 samples from each test set for evaluation, due to the response delay of ChatGPT. While some GitHub projects try to automate the access process, they are vulnerable to browser refreshes and network issues. An official API from OpenAI may be a better choice once available.
- Reproducibility Issue: By querying ChatGPT multiple times, we find that results for the same query can vary across trials, which introduces randomness into the evaluation. For more reliable results, it is best to repeat the translation several times for each test set and report the averaged result.
- Evaluation Metric: The results here are calculated by automatic metrics with single references, which may not properly reflect some characteristics of translation, e.g., nativeness. Human evaluation could provide more insight when comparing ChatGPT with commercial translation systems.
- Translation Abilities: This report focuses only on multilingual translation and translation robustness. Other translation abilities could be evaluated further, e.g., constrained machine translation and document-level machine translation.
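The repeat-and-average protocol suggested under the reproducibility point can be sketched as follows. The `aggregate_trials` helper and the example scores are hypothetical; any automatic metric (e.g., BLEU) could supply the per-trial numbers:

```python
from statistics import mean, stdev

def aggregate_trials(scores):
    """Average per-trial metric scores and report the spread across trials."""
    avg = mean(scores)
    spread = stdev(scores) if len(scores) > 1 else 0.0
    return avg, spread

# Three hypothetical BLEU scores from repeated ChatGPT queries on one test set:
trials = [24.7, 25.3, 24.9]
avg, spread = aggregate_trials(trials)
print(f"BLEU: {avg:.2f} +/- {spread:.2f}")
```

Reporting the spread alongside the mean makes the trial-to-trial randomness of ChatGPT's outputs visible rather than hiding it in a single number.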
Please kindly cite our report if you find it helpful:
@article{jiao2023ischatgpt,
  title = {Is ChatGPT A Good Translator? A Preliminary Study},
  author = {Wenxiang Jiao and Wenxuan Wang and Jen-tse Huang and Xing Wang and Zhaopeng Tu},
  journal = {arXiv preprint},
  year = {2023}
}