facebook / Ax

Adaptive Experimentation Platform

Home Page: https://ax.dev

save the state

Fa20 opened this issue · comments

commented

Hello.

Is there any way to save the state, so that if we stop the run and want to continue later we do not need to start from the beginning? This is for the MOO (multi-objective optimization) case.

@Fa20 are you aware of our storage tutorial (link)? Closing this out because I think it should address your question; however, please comment or reopen if you require further assistance.

commented

```python
import torch
from ax.service.ax_client import AxClient, ObjectiveProperties

# The snippet assumes a `branin_currin` test function is in scope;
# in the Ax MOO tutorial it comes from BoTorch:
from botorch.test_functions.multi_objective import BraninCurrin

branin_currin = BraninCurrin(negate=True).to(dtype=torch.double)

ax_client = AxClient()
ax_client.create_experiment(
    name="moo_experiment",
    parameters=[
        {
            "name": f"x{i+1}",
            "type": "range",
            "bounds": [0.0, 1.0],
        }
        for i in range(2)
    ],
    objectives={
        "a": ObjectiveProperties(minimize=False, threshold=branin_currin.ref_point[0]),
        "b": ObjectiveProperties(minimize=False, threshold=branin_currin.ref_point[1]),
    },
    overwrite_existing_experiment=True,
    is_test=True,
)


def evaluate(parameters):
    evaluation = branin_currin(
        torch.tensor([parameters.get("x1"), parameters.get("x2")])
    )
    return {"a": (evaluation[0].item(), 0.0), "b": (evaluation[1].item(), 0.0)}
```
Run the trials and create checkpoints:

```python
num_trials = 16
checkpoint_interval = 1

for i in range(num_trials):
    parameters, trial_index = ax_client.get_next_trial()
    ax_client.complete_trial(trial_index=trial_index, raw_data=evaluate(parameters))
    '''
    if (i + 1) % checkpoint_interval == 0:
        objectives = ax_client.experiment.optimization_config.objective.objectives
        frontier = compute_posterior_pareto_frontier(
            experiment=ax_client.experiment,
            data=ax_client.experiment.fetch_data(),
            primary_objective=objectives[1].metric,
            secondary_objective=objectives[0].metric,
            absolute_metrics=["a", "b"],
            num_points=2,
        )
        render(plot_pareto_frontier(frontier, CI_level=0.90))
        print(f"Checkpoint after {i+1} trials")
    '''
```

For example, is it possible to update this code to save the current state and use, say, the first 3 trials as a starting point instead of starting from the beginning? The idea is to stop the code, rerun it, and reuse the information about the previous trials.

Yes, by loading experiment state, you will have access to the trials that were available on that experiment when it was last saved.

commented

@bernardbeckerman Thanks for your answer. Does this mean I should load the saved file inside the `for _ in range(...)` loop? And does it mean we need to create two experiments: (1) `experiment = Experiment(...)` with `save_experiment(experiment, "experiments/experiment.json")` to save the results, and (2) an `AxClient()` with `ax_client.create_experiment(name="moo_experiment", ...)`? Is there a tutorial that shows the exact steps with the above code for MOO?

commented

@bernardbeckerman I checked the link you shared, and it seems it only shows how to save experiment results and load them. My question is: how can we use this saved file inside our experiment if I want to run 3 trials, save the results, then reuse them and run another 3 trials? Could you please give an example of how this can be done inside the AxClient experiment?

How about this section of the Service API tutorial (link)?

commented

@bernardbeckerman do you mean it should look like this:

```python
import numpy as np
from ax.service.ax_client import AxClient, ObjectiveProperties
from ax.utils.measurement.synthetic_functions import hartmann6
from ax.utils.notebook.plotting import init_notebook_plotting, render

init_notebook_plotting()

try:
    ax_client = AxClient.load_from_json_file("experiment_state.json")
    print("Loaded existing experiment.")
except FileNotFoundError:
    ax_client = AxClient()
    ax_client.create_experiment(
        name="hartmann_test_experiment",
        parameters=[
            {"name": "x1", "type": "range", "bounds": [0.0, 1.0], "value_type": "float", "log_scale": False},
            {"name": "x2", "type": "range", "bounds": [0.0, 1.0]},
            {"name": "x3", "type": "range", "bounds": [0.0, 1.0]},
            {"name": "x4", "type": "range", "bounds": [0.0, 1.0]},
            {"name": "x5", "type": "range", "bounds": [0.0, 1.0]},
            {"name": "x6", "type": "range", "bounds": [0.0, 1.0]},
        ],
        objectives={"hartmann6": ObjectiveProperties(minimize=True)},
        parameter_constraints=["x1 + x2 <= 2.0"],
        outcome_constraints=["l2norm <= 1.25"],
    )
    print("Created new experiment.")


def evaluate(parameterization):
    x = np.array([parameterization.get(f"x{i+1}") for i in range(6)])
    return {"hartmann6": (hartmann6(x), 0.0), "l2norm": (np.sqrt((x**2).sum()), 0.0)}


iterations = 5

for i in range(iterations):
    parameterization, trial_index = ax_client.get_next_trial()
    ax_client.complete_trial(trial_index=trial_index, raw_data=evaluate(parameterization))

    # Save the experiment state to a JSON file
    ax_client.save_to_json_file("experiment_state.json")
    print(f"Saved experiment state after iteration {i+1}.")


best_parameters, values = ax_client.get_best_parameters()
print(f"Best parameters after iteration {i+1}: {best_parameters}")
print(f"Corresponding values: {values}")
```

That is, we run for five iterations, then rerun, and it will load the last results, and so on? Will this ensure we get the same results as running 25 iterations at once without this save-and-load method? Because I got different results.

It looks like you've got the gist: save after each iteration, and then when you reload you'll have access to all prior data on the experiment, as well as the state of the generation strategy, so the optimization should pick up just as it left off.

> Will this ensure we get the same results as running 25 iterations at once without this save-and-load method? Because I got different results.

Ax generally doesn't guarantee identical results between runs, since there is some randomness in our point-selection algorithms, so I wouldn't be concerned if you're seeing small discrepancies in the `Corresponding values` printed after a set number of iterations. If you're seeing a large discrepancy, however, please open a new issue with the code that creates the discrepancy.