grassdog / eventstore

CQRS event store using PostgreSQL for persistence

EventStore

CQRS event store implemented in Elixir. Uses PostgreSQL (v9.5 or later) as the underlying storage engine.

MIT License


Getting started

EventStore is available in Hex and can be installed as follows:

  1. Add eventstore to your list of dependencies in mix.exs:
def deps do
  [{:eventstore, "~> 0.8.1"}]
end
  2. Ensure eventstore is started before your application:
def application do
  [applications: [:eventstore]]
end
  3. Add an eventstore config entry containing the PostgreSQL connection details to each environment's mix config file (e.g. config/dev.exs):
config :eventstore, EventStore.Storage,
  username: "postgres",
  password: "postgres",
  database: "eventstore_dev",
  hostname: "localhost",
  pool_size: 10
  4. Create the EventStore database and tables using the mix task:
mix event_store.create

Using the EventStore

Writing to a stream

Create a unique identity for each stream. It must be a string. This example uses the uuid package.

stream_uuid = UUID.uuid4

Set the expected version of the stream. This is used for optimistic concurrency. A new stream will be created when the expected version is zero.

expected_version = 0

Build a list of events to persist. The data and metadata fields will be serialized to binary data using your configured serializer, which must implement the EventStore.Serializer behaviour.

events = [
  %EventStore.EventData{
    event_type: "ExampleEvent",
    data: %ExampleEvent{key: "value"},
    metadata: %{user: "someuser@example.com"}
  }
]

Append the events to the stream.

:ok = EventStore.append_to_stream(stream_uuid, expected_version, events)
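
If the stream already exists at a different version the append is rejected, protecting against concurrent writers. A minimal sketch of handling the conflict, assuming the conflict is reported as an {:error, :wrong_expected_version} tuple:

# append with optimistic concurrency, handling a version conflict
# (the :wrong_expected_version error atom is an assumption)
case EventStore.append_to_stream(stream_uuid, expected_version, events) do
  :ok ->
    :ok

  {:error, :wrong_expected_version} ->
    # another writer got there first: re-read the stream, rebuild state,
    # then retry the append with the latest stream version
    {:error, :conflict}
end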

Reading from a stream

Read all events from the stream, starting at the stream's first event.

{:ok, events} = EventStore.read_stream_forward(stream_uuid)
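
To page through a long stream, bound the read. A sketch, assuming read_stream_forward also accepts an optional start version and batch size:

# read up to 100 events, starting from the given stream version
# (the optional start version and count arguments are assumptions)
start_version = 0
count = 100

{:ok, events} = EventStore.read_stream_forward(stream_uuid, start_version, count)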

Reading from all streams

Read all events from all streams.

# defaults to reading the first 1,000 events from all streams
{:ok, events} = EventStore.read_all_streams_forward()

Stream from all streams

Stream all events from all streams.

events = EventStore.stream_all_forward() |> Enum.to_list()
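
Because stream_all_forward returns a lazy stream, events can be filtered and transformed without loading the entire store into memory. A sketch, assuming each recorded event exposes its deserialized payload in a data field:

# lazily pick out the first ten ExampleEvent payloads from all streams
EventStore.stream_all_forward()
|> Stream.map(fn recorded_event -> recorded_event.data end)
|> Stream.filter(fn
  %ExampleEvent{} -> true
  _other -> false
end)
|> Enum.take(10)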

Subscribe to streams

Subscriptions to a stream guarantee at-least-once delivery of every persisted event. Each subscription may be independently paused, then later resumed from where it stopped. A subscription can be created to receive events published from a single logical stream or from all streams.

Events are received in batches after being persisted to storage. Each batch contains events from a single stream only and with the same correlation id.

Subscriptions must be uniquely named and support a single subscriber. Attempting to connect two subscribers to the same subscription will return an error.

By default a subscription starts from the origin of its single stream, or of all streams, so it will receive every event already persisted as well as new events. You can optionally specify a start position (see the sketch after this list):

  • :origin - subscribe to events from the start of the stream (identical to using 0). This is the default.
  • :current - subscribe to events from the current version.
  • an integer - subscribe from an exact position: a stream version for a single stream subscription, or an event id for an all streams subscription.
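
A sketch of subscribing from the current position instead of the origin; the start_from option name is an assumption:

# only receive events appended after the subscription is created
# (the start_from option is an assumption)
{:ok, subscription} =
  EventStore.subscribe_to_all_streams("example_subscription", self(), start_from: :current)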

Ack received events

Receipt of each event by the subscriber must be acknowledged. This allows the subscription to resume on failure without missing an event.

The subscriber receives an {:events, events} tuple containing the published events. Acknowledgements are sent to the subscription returned when subscribing to the stream, by sending an {:ack, last_seen_event_id} tuple to the subscription process. A subscriber can confirm receipt of each event in a batch by sending multiple acks, one per event, or confirm receipt of the whole batch with a single ack for the last event.

A subscriber will not receive further published events until it has confirmed receipt of all received events. This provides back pressure to the subscription, preventing the subscriber from being overwhelmed with messages if it cannot keep up. The subscription will buffer events until the subscriber is ready to receive, or an overflow occurs, at which point it moves into catch-up mode and replays events from storage until the subscriber has caught up.

Subscribe to events appended to all streams.

{:ok, subscription} = EventStore.subscribe_to_all_streams("example_subscription", self())

receive do
  {:events, events} ->
    IO.inspect(events)
    EventStore.ack(subscription, events)
end
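
To confirm each event in a batch individually, as described above, send one ack per event. A sketch, assuming EventStore.ack/2 also accepts a single recorded event:

receive do
  {:events, events} ->
    # acknowledge events one at a time rather than as a batch
    # (per-event ack via EventStore.ack/2 is an assumption)
    Enum.each(events, fn event ->
      IO.inspect(event)
      EventStore.ack(subscription, event)
    end)
end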

Unsubscribe from a stream.

:ok = EventStore.unsubscribe_from_all_streams("example_subscription")

Example subscriber

# An example subscriber
defmodule Subscriber do
  use GenServer

  def start_link do
    GenServer.start_link(__MODULE__, [])
  end

  def received_events(subscriber) do
    GenServer.call(subscriber, :received_events)
  end

  def init(events) do
    # subscribe to events from all streams
    {:ok, subscription} = EventStore.subscribe_to_all_streams("example_subscription", self())

    {:ok, %{events: events, subscription: subscription}}
  end

  def handle_info({:events, events}, %{events: existing_events, subscription: subscription} = state) do
    # confirm receipt of received events
    EventStore.ack(subscription, events)

    {:noreply, %{state | events: existing_events ++ events}}
  end

  def handle_call(:received_events, _from, %{events: events} = state) do
    {:reply, events, state}
  end
end

Start your subscriber process, which subscribes to all streams in the event store.

{:ok, subscriber} = Subscriber.start_link()

Event serialization

The default serialization of event data and metadata uses Erlang's external term format. This is not a recommended serialization format for production usage.

You must implement the EventStore.Serializer behaviour to provide your preferred serialization format. The example serializer below serializes event data and metadata to JSON using the Poison library.

defmodule JsonSerializer do
  @moduledoc """
  A serializer that uses the JSON format.
  """

  @behaviour EventStore.Serializer

  @doc """
  Serialize given term to JSON binary data.
  """
  def serialize(term) do
    Poison.encode!(term)
  end

  @doc """
  Deserialize given JSON binary data to the expected type.
  """
  def deserialize(binary, config) do
    type = case Keyword.get(config, :type, nil) do
      nil -> []
      type -> type |> String.to_existing_atom |> struct
    end
    Poison.decode!(binary, as: type)
  end
end
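
Note that this deserializer resolves the target struct from the :type value using String.to_existing_atom/1 and struct/1, so the stored event type must be the fully-qualified module name of an already-loaded struct. A sketch of event data compatible with this serializer, assuming the configured event_type is passed through to deserialize/2 as the :type option:

# the event_type names the struct module so deserialization can rebuild it
%EventStore.EventData{
  event_type: "Elixir.ExampleEvent",
  data: %ExampleEvent{key: "value"},
  metadata: %{user: "someuser@example.com"}
}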

Configure your serializer by setting the serializer option in the mix environment configuration file (e.g. config/dev.exs).

config :eventstore, EventStore.Storage,
  serializer: JsonSerializer,
  # ...

Benchmarking performance

Run the benchmark suite using mix with the bench environment, as configured in config/bench.exs. Logging is disabled for benchmarking.

MIX_ENV=bench mix do es.reset, app.start, bench

Example output:

## AppendEventsBench
append events, single writer                  100   10170.26 µs/op
append events, 10 concurrent writers           20   85438.80 µs/op
append events, 100 concurrent writers           2   1102006.00 µs/op
## ReadEventsBench
read events, single reader                   1000   1578.10 µs/op
read events, 10 concurrent readers            100   16799.80 µs/op
read events, 100 concurrent readers            10   167397.30 µs/op

Used in production?

Yes, this event store is being used in production.

PostgreSQL is used for the underlying storage, providing strong guarantees that data is stored safely: it is ACID-compliant and transactional, with a proven architecture and a strong reputation for reliability, data integrity, and correctness.

You can use any standard PostgreSQL tool to manage the event store data.

Contributing

Pull requests that contribute new or improved features, or extend the documentation, are most welcome.

Please follow the existing coding conventions, or refer to the Elixir style guide.

You should include unit tests to cover any changes.
