commanded / commanded

Use Commanded to build Elixir CQRS/ES applications


Aggregate side effects

slashdotdash opened this issue · comments

Allow an aggregate to optionally return a list of one or more side effects in addition to any events. These effects would be run once the events have been committed to storage. Effects would be defined as a standard module-function-args tuple.

defmodule MyApp.Counter do
  def handle(%{count: count}, %Increment{}) do
    events = [%Increased{amount: 1}]

    if count == 9 do
      effects = [{MyApp.AdminMailer, :dispatch, [%{threshold: 10}]}]

      {:ok, {events, effects}}
    else
      {:ok, events}
    end
  end
end

Side effects would have at-most-once semantics, meaning there is no guarantee that they will run. This is because the events are committed before the effects are dispatched, and it is always possible for the dispatch to fail or crash.
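As a rough illustration of that ordering guarantee, an effect runner might look like the sketch below. This is not Commanded code: `append_to_stream/2` is a placeholder for the real event store write, and the at-most-once behaviour falls out of committing first and only then firing the effects.

```elixir
defmodule EffectRunner do
  # Hypothetical sketch: commit events first, then fire effects.
  # If the process crashes after the commit, the effects are simply
  # lost, which is what gives the proposal at-most-once semantics.
  def commit_and_run(stream, events, effects) do
    :ok = append_to_stream(stream, events)

    # Run each {module, function, args} tuple in a detached task so a
    # crashing effect cannot roll back the already-committed events.
    Enum.each(effects, fn {mod, fun, args} ->
      Task.start(fn -> apply(mod, fun, args) end)
    end)

    :ok
  end

  # Placeholder for the actual event store append.
  defp append_to_stream(_stream, _events), do: :ok
end
```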

Inspired by Pachyderm's entity side effects.

Are there any guarantees about when the side effects are run? E.g. could we just throw them in an async task?

The only guarantee is that they run after the events have been persisted. An async task would be ok.

Is there any reason to do this rather than, say, use a process manager? In your example, wouldn't you want to send the email at most once? How is that enforced if you are just calling a function? Every time you replay the event log against that aggregate, your example would end up sending an email as far as I can tell.

If you get in the habit of describing side effects outside of the command/event system, you get into situations where you can't provide any guarantees about those effects, which really limits their usefulness in my opinion.

I'd expect a better approach here is to use a process manager which tracks whether or not an email has been sent for the counter, ensuring it is only ever sent once, and providing a mechanism by which the attempt to send the email could be retried (using the error handling mechanism of process managers).

I'd definitely be interested to hear in what situations you would prefer the system you are proposing vs the more structured form of process managers (or just event handlers generally). I might be misunderstanding some of the pieces here, so at a minimum I expect I'll learn something :)
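For context, the process-manager approach described above might be sketched roughly as follows. The module, event, and command names (`ThresholdReached`, `NotifyAdmin`, `MyApp.App`) are hypothetical; the callbacks follow Commanded's `ProcessManager` behaviour.

```elixir
defmodule MyApp.ThresholdNotifier do
  # Hypothetical sketch of the suggested process manager: it is started
  # by the event that crosses the threshold, dispatches one notification
  # command, and records in its own (persisted) state that the email was
  # requested, so a replay cannot send it twice.
  use Commanded.ProcessManagers.ProcessManager,
    application: MyApp.App,
    name: "ThresholdNotifier"

  defstruct [:email_sent?]

  # Start a process instance when the threshold event occurs.
  def interested?(%ThresholdReached{counter_id: id}), do: {:start, id}
  def interested?(_event), do: false

  # Dispatch the notification command at most once.
  def handle(%__MODULE__{email_sent?: true}, %ThresholdReached{}), do: []

  def handle(%__MODULE__{}, %ThresholdReached{counter_id: id}) do
    %NotifyAdmin{counter_id: id}
  end

  # Track in the process manager's own state that the email was requested.
  def apply(%__MODULE__{} = pm, %ThresholdReached{}) do
    %__MODULE__{pm | email_sent?: true}
  end
end
```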

@bitwalker I think the intention here is for this to be based on the execute/2 callback, not apply/2, so the side effects would not rerun on aggregate rebuild. I think the logic Ben intends here is:

events = execute_result |> Enum.filter(&is_struct/1)
side_effects = execute_result |> Enum.filter(&is_tuple/1)

:ok = store(events)
:ok = run(side_effects)

The reason why one would prefer this to using an event handler might be that they want it to be command based, not event based. Personally, I'm not sure about this approach (even though I have had cases where I considered it).

For anyone who wants to have this effect, I would suggest having a context module that is responsible for dispatching the commands and returning the execution result; they can then use that to execute any side effects. It adds a bit more code, but then you have complete control of how that side effect is executed.
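A context module along those lines might look like the sketch below. `MyApp.Counters`, the command structs, and the mailer call are placeholders; the `returning: :execution_result` dispatch option is what exposes the produced events to the caller.

```elixir
defmodule MyApp.Counters do
  # Hypothetical context module: dispatch the command, inspect the
  # resulting events, and run any side effects from application code,
  # where we have full control over retries, tasks, etc.
  alias Commanded.Commands.ExecutionResult

  def increment(counter_id) do
    command = %Increment{counter_id: counter_id}

    with {:ok, %ExecutionResult{events: events}} <-
           MyApp.App.dispatch(command, returning: :execution_result) do
      run_side_effects(events)
    end
  end

  # Decide here, not in the aggregate, which events trigger effects.
  defp run_side_effects(events) do
    if Enum.any?(events, &match?(%Increased{}, &1)) do
      # e.g. MyApp.AdminMailer.dispatch(%{threshold: 10})
    end

    :ok
  end
end
```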

Overall I am not against this; I can see some use cases. But I would rather we build an exactly-once execution guarantee somehow, if possible.

I am failing to understand when I would use that feature.

From my perspective, and based on the example you shared, in Event Sourcing or event-driven systems, Event Handlers are meant to be used here.

Coupling the Command Handler to the Event Processing concern sounds like a suboptimal idea.

Hello,

I see this as a middleware behaviour where an aggregate could call a next function if the event is successfully handled. I don't see this as command-based behaviour because commands are not part of the event storing mechanism. It is close to what a process manager should do, but sometimes a process manager is heavy for simple tasks.

As I understand the proposition, it would allow an aggregate to trigger a simple function which isn't part of the Commanded system. Meaning that this module could be a simple GenServer without a stream nor complex lifetime logic associated: a sidekick cast which should never fail and has no real interaction with the system itself (if it dispatched commands, it would be a process manager).

The difference seems thin, and I don't know if it would be a good choice to add complexity for the corner case where this proposition can be useful.

But maybe I'm missing something that @slashdotdash could point out to us!

This behavior, or something like it, is very useful in applying cross-system events without having to modify all command handlers.

For instance, say you want to emit a notification Event every time certain Aggregate state is mutated.
It is unwieldy and very prone to failure to check the apply function of every Command. Worse, it is easy to forget this class of business logic when adding new Commands (and impossible to write preventative tests for).

However, if you have a process that says "when this mutates, let me know", the problem becomes trivial.
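That kind of "when this mutates, let me know" process maps onto Commanded's event handler behaviour; a minimal sketch (module and event names are hypothetical):

```elixir
defmodule MyApp.MutationNotifier do
  # Hypothetical event handler: subscribes to the event stream and
  # reacts to every mutation of interest, regardless of which command
  # produced it, so new commands need no extra wiring.
  use Commanded.Event.Handler,
    application: MyApp.App,
    name: "MutationNotifier"

  # Called for each matching persisted event; returning :ok acknowledges it.
  def handle(%Increased{amount: amount}, _metadata) do
    # e.g. publish a notification, send an email, update a cache...
    IO.puts("counter increased by #{amount}")
    :ok
  end
end
```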