chenghuzi / doc-optuna-chinese

Chinese documentation for the hyperparameter optimization framework Optuna

Some review comments on FAQ

belldandyxtq opened this issue

  1. msgstr "某某库可以和 Optuna 配合使用吗?(某某是你最爱的机器学习库)"

    How about (某某是你常用的机器学习库)?
    Also, we may want to be consistent about terminology throughout: 机器学习库 or ML库.
    "Optuna 和绝大多数ML库兼容,并且很容易同他们配合使用。"

msgstr "如何定义带有独有参数的目标函数?"

I am not sure about 独有参数; I only got the idea because I had read the English version first.
How about 额外参数, to be consistent with
"其次,你可以用 ``lambda`` 或者 ``functools.partial`` 来创建带有额外参数的函数(闭包)。 "

msgstr "首先,如下的可调用的 objective 类具有这个功能:"

How about 首先,如下例,可调用的 objective 类具有这个功能:

"其次,你可以用 ``lambda`` 或者 ``functools.partial`` 来创建带有额外参数的函数(闭包)。 "

I think 或者 feels better here.
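To make the callable-class and `functools.partial` patterns discussed above concrete, here is a minimal sketch; the `data` and `target` arguments, their values, and the `Objective` class name are made up for illustration:

```python
import functools

# A hypothetical objective that needs extra arguments besides the trial.
def objective(trial, data, target):
    x = trial.suggest_float("x", -10, 10)
    return (x - target) ** 2 + len(data)

# Bind the extra arguments so the optimizer sees a one-argument callable,
# e.g. study.optimize(bound, n_trials=100).
bound = functools.partial(objective, data=[1, 2, 3], target=3.0)

# Equivalent callable-class approach:
class Objective:
    def __init__(self, data, target):
        self.data = data
        self.target = target

    def __call__(self, trial):
        x = trial.suggest_float("x", -10, 10)
        return (x - self.target) ** 2 + len(self.data)
```

A `lambda trial: objective(trial, data, target)` closure would work the same way as the `partial`.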

msgstr "如果你想能保存和恢复 study 的化,将 SQLite 作为本地存储也是很方便的:"

  1. Typo: 的化 should be 的话.
  2. How about 你也可以很方便地使用SQLite来储存到本地磁盘:

"Optuna 会保存超参数和对应的目标函数值到存储对象中,但是它不会存储诸如机器学习模型或者网络权重这样的中间对象。"

How about Optuna 会保存超参数和对应的目标函数值,但是它不会存储诸如机器学习模型或者网络权重这样的中间数据。
The same 机器学习库 vs. ML库 consistency question applies here.

"在保存模型的时候,我们推荐将 :obj:`optuna.trial.Trial.number` 一同存储。这样易于之后确认对应的 trial."

. => 。

msgstr "但是这么做的话有两个风险。"

I think it means 即使如此,请仍注意以下两点

msgstr "你也可以在日志信息里找到失败的 trial."

It seems to just repeat the previous line.
How about 你也可以通过查看trial的状态来找到他们

msgstr "如何用两块 GPU 同时对两个 trial 进行求值?"

How about 如何在两块 GPU 同时跑不同的两个 trial

msgstr "如果你想能保存和恢复 study 的化,将 SQLite 作为本地存储也是很方便的:"

1. Typo: 的化 should be `的话`.

2. How about `你也可以很方便地使用SQLite来储存到本地磁盘:`

Indeed active voice sounds better, but how about 你可以轻松地将 SQLite 用作本地存储。 and remove in the first part.

"在保存模型的时候,我们推荐将 :obj:`optuna.trial.Trial.number` 一同存储。这样易于之后确认对应的 trial."

. => 。

This is a complex and long-standing typography issue, especially when we're using both English and Chinese characters in the same article ... Here I follow a rule: use 。 for all sentences ending with a Chinese character, and . in other cases.

But this is not the only way, we can discuss this later.

Is the FAQ unfinished? I found that https://optuna.readthedocs.io/en/latest/faq.html#how-do-i-avoid-running-out-of-memory-oom-when-optimizing-studies is missing.

In the doc version I used (v1.4?), there's no out-of-memory section. We can probably add it later.

Partially resolved in this commit.