pytorch / ignite

High-level library to help with training and evaluating neural networks in PyTorch flexibly and transparently.

Home Page: https://pytorch-ignite.ai

How can I implement multiple updates of the discriminator before a single update of the generator in train_step(engine, data)?

yuanxiqd opened this issue · comments

As stated in the title, how can I implement multiple updates of the discriminator before a single update of the generator in train_step(engine, data)?

As a common strategy for training GANs, we can run multiple discriminator updates before a single generator update to improve training performance. So, how can this strategy be realized within the Ignite framework?

@yuanxiqd thanks for asking this question!
I'd say this can be implemented similarly to other training tasks, for example:

from ignite.engine import Engine

generator = ...
discriminator = ...

optimizer_generator = ...
criterion_generator = ...

optimizer_discriminator = ...
criterion_discriminator = ...

num_discriminator_steps = 3

def training_step(engine, batch):
    # first call multiple updates of discriminator
    for i in range(num_discriminator_steps):
        discriminator_fwd_bwd_pass(batch, generator, discriminator, optimizer_discriminator, criterion_discriminator)

    generator_fwd_bwd_pass(batch, generator, discriminator, optimizer_generator, criterion_generator)

trainer = Engine(training_step)

train_loader = ...
max_epochs = ...
trainer.run(train_loader, max_epochs=max_epochs)
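The two helpers above are placeholders. Here is a minimal sketch of what they might contain for a vanilla GAN; the BCE-with-logits objective and the `latent_dim` attribute on the generator are assumptions for illustration, not part of Ignite's API:

```python
import torch
import torch.nn as nn

def discriminator_fwd_bwd_pass(batch, generator, discriminator,
                               optimizer_discriminator, criterion_discriminator):
    real = batch
    noise = torch.randn(real.size(0), generator.latent_dim)
    fake = generator(noise).detach()  # block grads from flowing into the generator
    optimizer_discriminator.zero_grad()
    # real samples should be classified as 1, generated samples as 0
    loss = (criterion_discriminator(discriminator(real), torch.ones(real.size(0), 1))
            + criterion_discriminator(discriminator(fake), torch.zeros(real.size(0), 1)))
    loss.backward()
    optimizer_discriminator.step()
    return loss.item()

def generator_fwd_bwd_pass(batch, generator, discriminator,
                           optimizer_generator, criterion_generator):
    noise = torch.randn(batch.size(0), generator.latent_dim)
    fake = generator(noise)
    optimizer_generator.zero_grad()
    # the generator is rewarded when the discriminator labels its output "real"
    loss = criterion_generator(discriminator(fake), torch.ones(batch.size(0), 1))
    loss.backward()
    optimizer_generator.step()
    return loss.item()
```

The `.detach()` in the discriminator pass is what keeps its updates from touching the generator's weights, mirroring the `toggle_grad` trick used in the CycleGAN example below.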

Here is a working CycleGAN example, https://github.com/pytorch/ignite/blob/master/examples/notebooks/CycleGAN_with_nvidia_apex.ipynb, where the training step is:

def update_fn(engine, batch):
    generator_A2B.train()
    generator_B2A.train()
    discriminator_A.train()
    discriminator_B.train()

    real_a = convert_tensor(batch['A'], device=device, non_blocking=True)
    real_b = convert_tensor(batch['B'], device=device, non_blocking=True)
    
    # Update generators:

    # Disable grads computation for the discriminators:
    toggle_grad(discriminator_A, False)
    toggle_grad(discriminator_B, False)    
        
    fake_b = generator_A2B(real_a)
    rec_a = generator_B2A(fake_b)
    fake_a = generator_B2A(real_b)
    rec_b = generator_A2B(fake_a)
    decision_fake_a = discriminator_A(fake_a)
    decision_fake_b = discriminator_B(fake_b)

    # Compute loss for generators and update generators
    # loss_a2b = GAN loss: mean (D_B(G(x)) − 1)^2 + Forward cycle loss: || F(G(x)) - x ||_1    
    loss_a2b = compute_loss_generator(decision_fake_b, real_a, rec_a, lambda_value)    

    # loss_b2a = GAN loss: mean (D_A(F(x)) − 1)^2 + Backward cycle loss: || G(F(y)) - y ||_1
    loss_b2a = compute_loss_generator(decision_fake_a, real_b, rec_b, lambda_value)

    # total generators loss:
    loss_generators = loss_a2b + loss_b2a

    optimizer_G.zero_grad()    
    with amp.scale_loss(loss_generators, optimizer_G, loss_id=0) as scaled_loss:
        scaled_loss.backward()
    optimizer_G.step()

    decision_fake_a = rec_a = decision_fake_b = rec_b = None
    
    # Update discriminators:

    # Enable grads computation for the discriminators:
    toggle_grad(discriminator_A, True)
    toggle_grad(discriminator_B, True)

    decision_real_a, decision_fake_a = discriminator_forward_pass(discriminator_A, real_a, fake_a.detach(), fake_a_buffer)    
    decision_real_b, decision_fake_b = discriminator_forward_pass(discriminator_B, real_b, fake_b.detach(), fake_b_buffer)    
    # Compute loss for discriminators and update discriminators
    # loss_a = mean (D_a(y) − 1)^2 + mean D_a(F(x))^2
    loss_a = compute_loss_discriminator(decision_real_a, decision_fake_a)

    # loss_b = mean (D_b(y) − 1)^2 + mean D_b(G(x))^2
    loss_b = compute_loss_discriminator(decision_real_b, decision_fake_b)
    
    # total discriminators loss:
    loss_discriminators = 0.5 * (loss_a + loss_b)
    
    optimizer_D.zero_grad()
    with amp.scale_loss(loss_discriminators, optimizer_D, loss_id=1) as scaled_loss:
        scaled_loss.backward()
    optimizer_D.step()
    
    return {
        "loss_generators": loss_generators.item(),
        "loss_generator_a2b": loss_a2b.item(),
        "loss_generator_b2a": loss_b2a.item(),
        "loss_discriminators": loss_discriminators.item(),
        "loss_discriminator_a": loss_a.item(),
        "loss_discriminator_b": loss_b.item(),
    }
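For reference, `toggle_grad` above typically just flips `requires_grad` on every parameter of a model; this is a sketch of what a common implementation looks like (check the linked notebook for the exact version):

```python
import torch.nn as nn

def toggle_grad(model: nn.Module, requires_grad: bool) -> None:
    # Enable or disable gradient computation for all parameters of a model,
    # so discriminator weights are frozen while the generators update.
    for p in model.parameters():
        p.requires_grad_(requires_grad)
```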

Let me know if this answers your question.

@vfdev-5 Thanks so much for your detailed answer. As you said, an inner loop can be inserted into training_step(engine, batch). But I have a question here: does the loop use different batches or just the same one? Besides, I still have a question about how to configure an adaptive strategy for the number of discriminator updates before a single generator update. For example, if I want to update the discriminator 100 times for each of the first 25 generator updates, but only 10 times for each of the remaining generator updates, how can I do this? Sorry about this, but I was really stuck on it for a long time.

But I have a question here: does the loop use different batches or just the same one?

In the example above they use the same batch. If you want to get more batches from the dataloader you can also do the following:

from ignite.engine import Engine, Events

max_epochs = 5
# We define a data loader, here just a large iterator such that we do not need to handle resets
data_iterator = iter([v * 1.234 for v in range(50)])

def train_step(engine, _):
    # we fetch batches directly from data_iterator
    # so we can fetch any number of batches we need
    batch1 = next(data_iterator)
    batch2 = next(data_iterator)
    print(f"{engine.state.epoch} / {engine.state.max_epochs} | {engine.state.iteration} - batches: {batch1} {batch2}", flush=True)

trainer = Engine(train_step)

# We do not pass data into the run function but just define the epoch length
trainer.run(max_epochs=max_epochs, epoch_length=4)

Output:

1 / 5 | 1 - batches: 0.0 1.234
1 / 5 | 2 - batches: 2.468 3.702
1 / 5 | 3 - batches: 4.936 6.17
1 / 5 | 4 - batches: 7.404 8.638
2 / 5 | 5 - batches: 9.872 11.106
2 / 5 | 6 - batches: 12.34 13.574
2 / 5 | 7 - batches: 14.808 16.042
2 / 5 | 8 - batches: 17.276 18.509999999999998
3 / 5 | 9 - batches: 19.744 20.978
3 / 5 | 10 - batches: 22.212 23.445999999999998
3 / 5 | 11 - batches: 24.68 25.914
3 / 5 | 12 - batches: 27.148 28.381999999999998
4 / 5 | 13 - batches: 29.616 30.85
4 / 5 | 14 - batches: 32.084 33.318
4 / 5 | 15 - batches: 34.552 35.786
4 / 5 | 16 - batches: 37.019999999999996 38.254
5 / 5 | 17 - batches: 39.488 40.722
5 / 5 | 18 - batches: 41.956 43.19
5 / 5 | 19 - batches: 44.424 45.658
5 / 5 | 20 - batches: 46.891999999999996 48.126

Besides, I still have a question about how to configure an adaptive strategy for the number of discriminator updates before a single generator update. For example, if I want to update the discriminator 100 times for each of the first 25 generator updates, but only 10 times for each of the remaining generator updates, how can I do this?

This can be done in various ways, for example by handling it inside the training_step:

from ignite.engine import Engine, Events


def training_step(engine, batch):
    print(f"- {engine.state.epoch} / {engine.state.max_epochs} | {engine.state.iteration}")
    # first call multiple updates of discriminator
    # we get the number of discriminator steps from trainer state:    
    for i in range(engine.state.num_discriminator_steps):
        print("-- discriminator_fwd_bwd_pass(batch, generator, discriminator, optimizer_discriminator, criterion_discriminator)")

    print("--- generator_fwd_bwd_pass(batch, generator, discriminator, optimizer_generator, criterion_generator)")

trainer = Engine(training_step)
trainer.state.num_discriminator_steps = 4

# Now let's define a handler that will change num_discriminator_steps once we get 25 updates of the generator
# assuming that on one iteration we update the generator once
@trainer.on(Events.ITERATION_COMPLETED(once=25))
def update_num_discriminator_steps():
    trainer.state.num_discriminator_steps = 1


@trainer.on(Events.ITERATION_COMPLETED(once=25 + 10))
def zero_num_discriminator_steps():
    trainer.state.num_discriminator_steps = 0


train_loader = range(50)
trainer.run(train_loader, max_epochs=2)

Output:

- 1 / 2 | 1
-- discriminator_fwd_bwd_pass(batch, generator, discriminator, optimizer_discriminator, criterion_discriminator)
-- discriminator_fwd_bwd_pass(batch, generator, discriminator, optimizer_discriminator, criterion_discriminator)
-- discriminator_fwd_bwd_pass(batch, generator, discriminator, optimizer_discriminator, criterion_discriminator)
-- discriminator_fwd_bwd_pass(batch, generator, discriminator, optimizer_discriminator, criterion_discriminator)
--- generator_fwd_bwd_pass(batch, generator, discriminator, optimizer_generator, criterion_generator)
- 1 / 2 | 2
-- discriminator_fwd_bwd_pass(batch, generator, discriminator, optimizer_discriminator, criterion_discriminator)
-- discriminator_fwd_bwd_pass(batch, generator, discriminator, optimizer_discriminator, criterion_discriminator)
-- discriminator_fwd_bwd_pass(batch, generator, discriminator, optimizer_discriminator, criterion_discriminator)
-- discriminator_fwd_bwd_pass(batch, generator, discriminator, optimizer_discriminator, criterion_discriminator)
--- generator_fwd_bwd_pass(batch, generator, discriminator, optimizer_generator, criterion_generator)
- 1 / 2 | 3
-- discriminator_fwd_bwd_pass(batch, generator, discriminator, optimizer_discriminator, criterion_discriminator)
-- discriminator_fwd_bwd_pass(batch, generator, discriminator, optimizer_discriminator, criterion_discriminator)
-- discriminator_fwd_bwd_pass(batch, generator, discriminator, optimizer_discriminator, criterion_discriminator)
-- discriminator_fwd_bwd_pass(batch, generator, discriminator, optimizer_discriminator, criterion_discriminator)
--- generator_fwd_bwd_pass(batch, generator, discriminator, optimizer_generator, criterion_generator)
...
- 1 / 2 | 24
-- discriminator_fwd_bwd_pass(batch, generator, discriminator, optimizer_discriminator, criterion_discriminator)
-- discriminator_fwd_bwd_pass(batch, generator, discriminator, optimizer_discriminator, criterion_discriminator)
-- discriminator_fwd_bwd_pass(batch, generator, discriminator, optimizer_discriminator, criterion_discriminator)
-- discriminator_fwd_bwd_pass(batch, generator, discriminator, optimizer_discriminator, criterion_discriminator)
--- generator_fwd_bwd_pass(batch, generator, discriminator, optimizer_generator, criterion_generator)
- 1 / 2 | 25
-- discriminator_fwd_bwd_pass(batch, generator, discriminator, optimizer_discriminator, criterion_discriminator)
-- discriminator_fwd_bwd_pass(batch, generator, discriminator, optimizer_discriminator, criterion_discriminator)
-- discriminator_fwd_bwd_pass(batch, generator, discriminator, optimizer_discriminator, criterion_discriminator)
-- discriminator_fwd_bwd_pass(batch, generator, discriminator, optimizer_discriminator, criterion_discriminator)
--- generator_fwd_bwd_pass(batch, generator, discriminator, optimizer_generator, criterion_generator)
- 1 / 2 | 26
-- discriminator_fwd_bwd_pass(batch, generator, discriminator, optimizer_discriminator, criterion_discriminator)
--- generator_fwd_bwd_pass(batch, generator, discriminator, optimizer_generator, criterion_generator)
- 1 / 2 | 27
-- discriminator_fwd_bwd_pass(batch, generator, discriminator, optimizer_discriminator, criterion_discriminator)
--- generator_fwd_bwd_pass(batch, generator, discriminator, optimizer_generator, criterion_generator)
- 1 / 2 | 28
-- discriminator_fwd_bwd_pass(batch, generator, discriminator, optimizer_discriminator, criterion_discriminator)
--- generator_fwd_bwd_pass(batch, generator, discriminator, optimizer_generator, criterion_generator)
- 1 / 2 | 29
-- discriminator_fwd_bwd_pass(batch, generator, discriminator, optimizer_discriminator, criterion_discriminator)
--- generator_fwd_bwd_pass(batch, generator, discriminator, optimizer_generator, criterion_generator)
...
- 1 / 2 | 34
-- discriminator_fwd_bwd_pass(batch, generator, discriminator, optimizer_discriminator, criterion_discriminator)
--- generator_fwd_bwd_pass(batch, generator, discriminator, optimizer_generator, criterion_generator)
- 1 / 2 | 35
-- discriminator_fwd_bwd_pass(batch, generator, discriminator, optimizer_discriminator, criterion_discriminator)
--- generator_fwd_bwd_pass(batch, generator, discriminator, optimizer_generator, criterion_generator)
- 1 / 2 | 36
--- generator_fwd_bwd_pass(batch, generator, discriminator, optimizer_generator, criterion_generator)
- 1 / 2 | 37
--- generator_fwd_bwd_pass(batch, generator, discriminator, optimizer_generator, criterion_generator)
...
- 2 / 2 | 98
--- generator_fwd_bwd_pass(batch, generator, discriminator, optimizer_generator, criterion_generator)
- 2 / 2 | 99
--- generator_fwd_bwd_pass(batch, generator, discriminator, optimizer_generator, criterion_generator)
- 2 / 2 | 100
--- generator_fwd_bwd_pass(batch, generator, discriminator, optimizer_generator, criterion_generator)

Sorry about this, but I was really stuck on it for a long time.

No problem, thanks for asking! Feel free to ask other questions here or on our Discord.

@vfdev-5 Thanks for your great help~ Due to the adaptive strategy of updating the discriminator multiple times in train_step(engine, _), a simple question naturally arises: each epoch may consume a different number of batches, which implies the effective epoch length differs from epoch to epoch. However, we have to predefine epoch_length as a constant in trainer.run(max_epochs=max_epochs, epoch_length=epoch_length). As a result, it seems difficult to reconcile these two settings. Do you have any idea how trainer.run() can automatically iterate through all batches from the first to the last?

The epoch length is either defined by the size of the dataloader or by the provided epoch_length argument, and it is a constant value.
trainer.state.iteration goes from 1 to epoch_length * max_epochs.
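To make the counting concrete, here is the arithmetic in plain Python (no Ignite needed; just how the global iteration counter relates to epoch_length and max_epochs):

```python
epoch_length = 2
max_epochs = 4

# trainer.state.iteration is a global counter; it does not reset per epoch
total_iterations = epoch_length * max_epochs

def epoch_of(iteration: int) -> int:
    # the 1-based epoch a given global iteration belongs to, as Ignite counts it
    return (iteration - 1) // epoch_length + 1

assert total_iterations == 8
assert epoch_of(1) == 1 and epoch_of(2) == 1
assert epoch_of(3) == 2
assert epoch_of(8) == 4
```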

Do you have any idea how trainer.run() can automatically iterate through all batches from the first to the last?

I'm not sure I understand your question, sorry. Can you reword it or give more details?

Sorry for the confusion. My code is as follows.

from ignite.engine import Engine, Events

if __name__ == '__main__':
    max_epochs = 4
    dataset = range(12)
    def training_step(engine, _):
        print(f"- {engine.state.epoch} / {engine.state.max_epochs} | {engine.state.iteration}")
        # first call multiple updates of discriminator
        # we get the number of discriminator steps from trainer state:
        print('-' * 120)
        for i in range(engine.state.num_discriminator_steps):
            print(f"- batch1: {next(engine.state.dataiter)}")
            print(
                "-- discriminator_fwd_bwd_pass(batch, generator, discriminator, optimizer_discriminator, criterion_discriminator)")

        print(f"- batch2: {next(engine.state.dataiter)}")
        print("--- generator_fwd_bwd_pass(batch, generator, discriminator, optimizer_generator, criterion_generator)")
        print('-'*120)


    trainer = Engine(training_step)
    trainer.state.num_discriminator_steps = 2
    trainer.state.dataiter = iter(dataset)


    # Now let's define a handler that will change num_discriminator_steps once we get 25 updates of the generator
    # assuming that on one iteration we update the generator once
    @trainer.on(Events.DATALOADER_STOP_ITERATION)
    def init_data_iter():
        # print('Epoch completed')
        trainer.state.dataiter = iter(dataset)

    @trainer.on(Events.ITERATION_COMPLETED(once=2))
    def update_num_discriminator_steps():
        trainer.state.num_discriminator_steps = 1


    @trainer.on(Events.ITERATION_COMPLETED(once=2 + 3))
    def zero_num_discriminator_steps():
        trainer.state.num_discriminator_steps = 0


    # train_loader = range(50)
    # trainer.run(train_loader, max_epochs=2)
    trainer.run(max_epochs=max_epochs,epoch_length=2)

Output:

- 1 / 4 | 1
------------------------------------------------------------------------------------------------------------------------
- batch1: 0
-- discriminator_fwd_bwd_pass(batch, generator, discriminator, optimizer_discriminator, criterion_discriminator)
- batch1: 1
-- discriminator_fwd_bwd_pass(batch, generator, discriminator, optimizer_discriminator, criterion_discriminator)
- batch2: 2
--- generator_fwd_bwd_pass(batch, generator, discriminator, optimizer_generator, criterion_generator)
------------------------------------------------------------------------------------------------------------------------
- 1 / 4 | 2
------------------------------------------------------------------------------------------------------------------------
- batch1: 3
-- discriminator_fwd_bwd_pass(batch, generator, discriminator, optimizer_discriminator, criterion_discriminator)
- batch1: 4
-- discriminator_fwd_bwd_pass(batch, generator, discriminator, optimizer_discriminator, criterion_discriminator)
- batch2: 5
--- generator_fwd_bwd_pass(batch, generator, discriminator, optimizer_generator, criterion_generator)
------------------------------------------------------------------------------------------------------------------------
- 2 / 4 | 3
------------------------------------------------------------------------------------------------------------------------
- batch1: 0
-- discriminator_fwd_bwd_pass(batch, generator, discriminator, optimizer_discriminator, criterion_discriminator)
- batch2: 1
--- generator_fwd_bwd_pass(batch, generator, discriminator, optimizer_generator, criterion_generator)
------------------------------------------------------------------------------------------------------------------------
- 2 / 4 | 4
------------------------------------------------------------------------------------------------------------------------
- batch1: 2
-- discriminator_fwd_bwd_pass(batch, generator, discriminator, optimizer_discriminator, criterion_discriminator)
- batch2: 3
--- generator_fwd_bwd_pass(batch, generator, discriminator, optimizer_generator, criterion_generator)
------------------------------------------------------------------------------------------------------------------------
- 3 / 4 | 5
------------------------------------------------------------------------------------------------------------------------
- batch1: 0
-- discriminator_fwd_bwd_pass(batch, generator, discriminator, optimizer_discriminator, criterion_discriminator)
- batch2: 1
--- generator_fwd_bwd_pass(batch, generator, discriminator, optimizer_generator, criterion_generator)
------------------------------------------------------------------------------------------------------------------------
- 3 / 4 | 6
------------------------------------------------------------------------------------------------------------------------
- batch2: 2
--- generator_fwd_bwd_pass(batch, generator, discriminator, optimizer_generator, criterion_generator)
------------------------------------------------------------------------------------------------------------------------
- 4 / 4 | 7
------------------------------------------------------------------------------------------------------------------------
- batch2: 0
--- generator_fwd_bwd_pass(batch, generator, discriminator, optimizer_generator, criterion_generator)
------------------------------------------------------------------------------------------------------------------------
- 4 / 4 | 8
------------------------------------------------------------------------------------------------------------------------
- batch2: 1
--- generator_fwd_bwd_pass(batch, generator, discriminator, optimizer_generator, criterion_generator)
------------------------------------------------------------------------------------------------------------------------

In my code, max_epochs is set to 4 and epoch_length is set to 2. From the output, it can be observed that only 6 batches were used to train the model in the 1st epoch, 4 batches in the 2nd epoch, 3 batches in the 3rd epoch, and only 2 batches in the last epoch, with the iterator restarting from batch 0 every epoch. My program cannot traverse all 12 batches. Can you understand this? Mr. @vfdev-5

Thanks for providing the details, @yuanxiqd!
It seems there is a bug with the DATALOADER_STOP_ITERATION event, as it should not be triggered when no data is provided to Engine.run.

Here is how to make the code work so that it traverses all 12 batches:

from ignite.engine import Engine, Events


max_epochs = 4
dataset = range(12)


def training_step(engine, _):
    print(f"- {engine.state.epoch} / {engine.state.max_epochs} | {engine.state.iteration}")
    # first call multiple updates of discriminator
    # we get the number of discriminator steps from trainer state:
    print('-' * 120)
    for i in range(engine.state.num_discriminator_steps):
        print(f"- batch1: {next(engine.state.dataiter)}")
        print(
            "-- discriminator_fwd_bwd_pass(batch, generator, discriminator, optimizer_discriminator, criterion_discriminator)")

    print(f"- batch2: {next(engine.state.dataiter)}")
    print("--- generator_fwd_bwd_pass(batch, generator, discriminator, optimizer_generator, criterion_generator)")
    print('-'*120)


def make_inf_data_iter(data):
    data_iter = iter(data)
    while True:
        try:
            yield next(data_iter)
        except StopIteration:
            data_iter = iter(data)
        


trainer = Engine(training_step)
trainer.state.num_discriminator_steps = 2
trainer.state.dataiter = make_inf_data_iter(dataset)


# Now let's define a handler that will change num_discriminator_steps once we get 25 updates of the generator
# assuming that on one iteration we update the generator once
# @trainer.on(Events.DATALOADER_STOP_ITERATION)
# def init_data_iter():
#     print('Dataiter completed -> reinit')
#     trainer.state.dataiter = iter(dataset)

@trainer.on(Events.ITERATION_COMPLETED(once=2))
def update_num_discriminator_steps():
    trainer.state.num_discriminator_steps = 1


@trainer.on(Events.ITERATION_COMPLETED(once=2 + 3))
def zero_num_discriminator_steps():
    trainer.state.num_discriminator_steps = 0

trainer.run(max_epochs=max_epochs, epoch_length=2)

Output:

- 1 / 4 | 1
------------------------------------------------------------------------------------------------------------------------
- batch1: 0
-- discriminator_fwd_bwd_pass(batch, generator, discriminator, optimizer_discriminator, criterion_discriminator)
- batch1: 1
-- discriminator_fwd_bwd_pass(batch, generator, discriminator, optimizer_discriminator, criterion_discriminator)
- batch2: 2
--- generator_fwd_bwd_pass(batch, generator, discriminator, optimizer_generator, criterion_generator)
------------------------------------------------------------------------------------------------------------------------
- 1 / 4 | 2
------------------------------------------------------------------------------------------------------------------------
- batch1: 3
-- discriminator_fwd_bwd_pass(batch, generator, discriminator, optimizer_discriminator, criterion_discriminator)
- batch1: 4
-- discriminator_fwd_bwd_pass(batch, generator, discriminator, optimizer_discriminator, criterion_discriminator)
- batch2: 5
--- generator_fwd_bwd_pass(batch, generator, discriminator, optimizer_generator, criterion_generator)
------------------------------------------------------------------------------------------------------------------------
- 2 / 4 | 3
------------------------------------------------------------------------------------------------------------------------
- batch1: 6
-- discriminator_fwd_bwd_pass(batch, generator, discriminator, optimizer_discriminator, criterion_discriminator)
- batch2: 7
--- generator_fwd_bwd_pass(batch, generator, discriminator, optimizer_generator, criterion_generator)
------------------------------------------------------------------------------------------------------------------------
- 2 / 4 | 4
------------------------------------------------------------------------------------------------------------------------
- batch1: 8
-- discriminator_fwd_bwd_pass(batch, generator, discriminator, optimizer_discriminator, criterion_discriminator)
- batch2: 9
--- generator_fwd_bwd_pass(batch, generator, discriminator, optimizer_generator, criterion_generator)
------------------------------------------------------------------------------------------------------------------------
- 3 / 4 | 5
------------------------------------------------------------------------------------------------------------------------
- batch1: 10
-- discriminator_fwd_bwd_pass(batch, generator, discriminator, optimizer_discriminator, criterion_discriminator)
- batch2: 11
--- generator_fwd_bwd_pass(batch, generator, discriminator, optimizer_generator, criterion_generator)
------------------------------------------------------------------------------------------------------------------------
- 3 / 4 | 6
------------------------------------------------------------------------------------------------------------------------
- batch2: 0
--- generator_fwd_bwd_pass(batch, generator, discriminator, optimizer_generator, criterion_generator)
------------------------------------------------------------------------------------------------------------------------
- 4 / 4 | 7
------------------------------------------------------------------------------------------------------------------------
- batch2: 1
--- generator_fwd_bwd_pass(batch, generator, discriminator, optimizer_generator, criterion_generator)
------------------------------------------------------------------------------------------------------------------------
- 4 / 4 | 8
------------------------------------------------------------------------------------------------------------------------
- batch2: 2
--- generator_fwd_bwd_pass(batch, generator, discriminator, optimizer_generator, criterion_generator)
------------------------------------------------------------------------------------------------------------------------
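As a side note, when the data is a plain reusable sequence, `itertools.cycle` behaves like `make_inf_data_iter` above. The hand-rolled generator is still preferable with a shuffling DataLoader, because `cycle` caches the elements of the first pass and would replay the same batch order forever, whereas calling `iter(data)` again re-shuffles:

```python
import itertools

dataset = range(12)
cycled = itertools.cycle(dataset)

first_pass = [next(cycled) for _ in range(12)]
second_pass = [next(cycled) for _ in range(12)]
# cycle replays the cached elements in the same order on every pass
assert first_pass == second_pass == list(range(12))
```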

Okay, I see. Thank you so much. One more question: is there any setting with which I can start a new epoch when some event has been triggered?

@yuanxiqd I'll close this issue as answered; feel free to reopen if needed.