Is it a bad practice for an event handler to depend on a projector's completion?
ajacquierbret opened this issue
Hi folks,
Following the Conduit sample app design, I wrote an event handler that listens for `EventA` and dispatches `EventB` in response to that event.

`EventA` should add a record to the persistence layer using `ProjectorA`. `EventB` should add some records to the persistence layer using `ProjectorB`, depending on up-to-date records persisted by `ProjectorA`.
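In Commanded (which Conduit is built on), a handler typically reacts to an event by dispatching a command, and the target aggregate then emits the next event. A minimal sketch of this workflow, where `MyApp.App`, `CommandB`, and the field names are all hypothetical:

```elixir
defmodule MyApp.WorkflowHandler do
  # Hypothetical handler in the Conduit style: react to EventA by
  # dispatching a command whose aggregate will emit EventB.
  use Commanded.Event.Handler,
    application: MyApp.App,
    name: "MyApp.WorkflowHandler"

  alias MyApp.App
  alias MyApp.Commands.CommandB
  alias MyApp.Events.EventA

  def handle(%EventA{id: id}, _metadata) do
    # EventB itself is produced by the aggregate that handles CommandB.
    App.dispatch(%CommandB{source_id: id})
  end
end
```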
The problem is that, from my understanding, strong consistency doesn't guarantee that the event handler for `EventA` will be executed only after `ProjectorA` has persisted the associated record.
What do you think is the best solution in this situation?
- Is this design considered bad because I create a dependency between two handlers (the `EventA` handler and `ProjectorA`)? If so, how would you rewrite this workflow?
- Should I use the `after_update/3` callback from `commanded_ecto_projections` to make sure that `ProjectorA` has persisted its record?
- Should I explicitly wait for the completion of `ProjectorA` inside the `EventA` handler?
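On the `after_update/3` question: `commanded_ecto_projections` invokes that callback after the projection's transaction has committed, so work placed there can rely on the projected record being readable. A sketch of what that might look like, with all module and field names hypothetical:

```elixir
defmodule MyApp.ProjectorA do
  use Commanded.Projections.Ecto,
    application: MyApp.App,
    repo: MyApp.Repo,
    name: "ProjectorA"

  alias MyApp.Events.EventA

  project %EventA{} = event, _metadata, fn multi ->
    Ecto.Multi.insert(multi, :table_a, %MyApp.Projections.TableA{id: event.id})
  end

  # Called only after the projection transaction has committed, so the
  # TableA record is guaranteed to be persisted at this point.
  def after_update(%EventA{} = event, _metadata, _changes) do
    MyApp.App.dispatch(%MyApp.Commands.CommandB{source_id: event.id})
  end

  def after_update(_event, _metadata, _changes), do: :ok
end
```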
Here are some further explanations of this issue and my software design concerns:
I have two aggregates: `AggregateA` and `AggregateB`. Both of them have their respective `TableA` and `TableB` projections, linked by a many-to-many relationship through a `TableATableB` join table.

If an instance of `AggregateA` is created or updated, and therefore its `TableA` projection too, the app must either create or delete `TableATableB` relationships with `TableB`, and vice versa.
As of now, this workflow is actually done by the event handlers of `EventA` and `EventB`:

- When the event handler receives an `EventA`, it queries the database to retrieve all `TableB` records using `TableA` and computes which relationships should be updated. But because the event handler has no guarantee that `TableA` has actually been persisted, it fails intermittently.
In order for the creation/removal of `TableATableB` relationships not to depend on a projector, and therefore on database inserts, `EventA` could carry a list of related `TableB` records, and its projector would just have to insert both the `TableA` and the related `TableATableB` projections in a single transaction, and vice versa. This also ensures that if any operation fails, the whole transaction is rolled back.
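The single-transaction approach described above could look roughly like this with `Ecto.Multi`, assuming `EventA` carries a `table_b_ids` list and the projection schemas are named as below:

```elixir
defmodule MyApp.ProjectorA do
  use Commanded.Projections.Ecto,
    application: MyApp.App,
    repo: MyApp.Repo,
    name: "ProjectorA"

  alias MyApp.Events.EventA
  alias MyApp.Projections.{TableA, TableATableB}

  # The TableA row and its join rows are written atomically: if any
  # insert fails, the whole multi rolls back and nothing is persisted.
  project %EventA{} = event, _metadata, fn multi ->
    multi
    |> Ecto.Multi.insert(:table_a, %TableA{id: event.id})
    |> Ecto.Multi.insert_all(
      :table_a_table_b,
      TableATableB,
      Enum.map(event.table_b_ids, &%{table_a_id: event.id, table_b_id: &1})
    )
  end
end
```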
But if I do that, `AggregateB` would have no idea about the relationships stored in the state of `AggregateA`; conversely, if `AggregateB` has been commanded to perform the same action from its side, `AggregateA` will have no idea about the relationships stored in the state of `AggregateB`. This leads to up-to-date relationships in the persistence layer, but with both aggregate states out of sync. If one day my requirements are to retrieve the exact relationships between `AggregateA` and `AggregateB`, I'll be in trouble.
How would one deal with it?
This also raises the question of which state should be the source of truth in a relationship between two aggregates, and which events are responsible for updating the relationship projections.
"It Depends",
Yes, avoid dependencies between Projectors (Event Handler) and Processors (Event Handler).
An Event Handler should be as sufficient as possible by itself, since the Event Handler is a linearization factor, otherwise it may be difficult to reason and implement the system.
That means creating a Processor own its Projections as well. The anti-pattern (especially as the system grows) is to use somebody else Projection.
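One way to read that advice in code: instead of the `EventB` handler querying `TableA` (a projection owned by `ProjectorA`), it can build its own private lookup table from the same events, so it never races against another handler. A sketch, with every name hypothetical:

```elixir
defmodule MyApp.ProcessorB do
  use Commanded.Projections.Ecto,
    application: MyApp.App,
    repo: MyApp.Repo,
    name: "ProcessorB"

  alias MyApp.Events.{EventA, EventB}

  # Private lookup owned by this processor, built directly from EventA,
  # so it never depends on ProjectorA having run first.
  project %EventA{} = event, _metadata, fn multi ->
    Ecto.Multi.insert(multi, :lookup, %MyApp.ProcessorB.Lookup{id: event.id})
  end

  project %EventB{} = event, _metadata, fn multi ->
    # Decisions here read only from the lookup this module owns.
    Ecto.Multi.run(multi, :related, fn repo, _changes ->
      {:ok, repo.get(MyApp.ProcessorB.Lookup, event.source_id)}
    end)
  end
end
```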
Once again, "It Depends".