gspandy / flowing-retail

Event- and domain-driven order fulfilment using Kafka or Rabbit as Event Bus and Java, Spring Boot & Camunda for the microservices


Order fulfillment sample application demonstrating concepts in the context of DDD and Microservices.

This sample application shows how to implement

  • a simple order fulfillment system

in the context of

  • Domain Driven Design (DDD)
  • Event Driven Architecture (EDA)
  • Microservices (µS)

Note: There is also an example demonstrating stateful resilience patterns when using REST communication. It has its own README with details: payment-rest.


Overview

Flowing Retail simulates a simple order fulfillment system. The business logic is separated into the following services (shown as a context map):

Microservices

Concrete technologies/frameworks:

  • Java
  • Spring Boot
  • Spring Cloud Streams
  • Camunda
  • Apache Kafka

Architecture

This results in the following architecture:

Microservices

Communication of services

The services have to collaborate in order to implement the overall business capability of order fulfillment. There are many ways for services to communicate; this example focuses on:

  • Asynchronous communication via Apache Kafka
  • Event-driven wherever appropriate
  • Sending commands in cases where you want another service to do something. This requires that events be translated into commands by the component responsible for that decision, which in our case is the Order service:

Events and Commands
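The event-to-command translation can be sketched as follows. This is an illustrative model with hypothetical type and message names, not code from this repository (the actual services exchange JSON messages over Kafka):

```java
import java.util.Map;

// Hypothetical message envelope; the real project serializes messages as JSON.
record Message(String type, String name, Map<String, String> payload) {}

class OrderMessageTranslator {
    // The Order service owns the decision of what happens after an event:
    // it turns an incoming event into the command it wants executed next.
    static Message translate(Message event) {
        if (event.name().equals("OrderPlacedEvent")) {
            // After an order is placed, the Order service commands
            // the Payment service to retrieve the payment.
            return new Message("command", "RetrievePaymentCommand", event.payload());
        }
        throw new IllegalArgumentException("Unknown event: " + event.name());
    }
}
```

The point of the sketch is that the event producer stays ignorant of downstream steps; only the Order service knows that a placed order leads to a payment command.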

Potentially long running services and distributed orchestration

Long running services typically allow for a better service API. For example, Payment might resolve problems with the credit card itself, which could even involve asking the customer to provide a new credit card if theirs has expired. So the service might have to wait for days or weeks, making it long running. This requires handling state, which is why a state machine like Camunda is used.

An important point is that this state machine (or workflow engine, in this case) is a library used within one service. It runs embedded within the Spring Boot application, and if other services need the same capability, they run engines of their own. It is an autonomous team decision whether to use such a framework, and which one:

Events and Commands
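To make the "long running means state" point concrete, here is an illustrative plain-Java state machine for a payment that waits for a new credit card. The names are hypothetical and this is deliberately not the Camunda API; it only shows why the state must be persisted between messages:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative states for a payment that may wait days for customer input.
enum PaymentState { STARTED, WAITING_FOR_NEW_CARD, COMPLETED }

class PaymentStateMachine {
    // In a real workflow engine (e.g. Camunda) this map would be a database
    // table, because the service may restart while payments are waiting.
    private final Map<String, PaymentState> store = new HashMap<>();

    void start(String paymentId) {
        store.put(paymentId, PaymentState.STARTED);
    }

    void cardExpired(String paymentId) {
        // The process now waits, potentially for weeks, for the customer.
        store.put(paymentId, PaymentState.WAITING_FOR_NEW_CARD);
    }

    void newCardProvided(String paymentId) {
        if (store.get(paymentId) == PaymentState.WAITING_FOR_NEW_CARD) {
            store.put(paymentId, PaymentState.COMPLETED);
        }
    }

    PaymentState state(String paymentId) {
        return store.get(paymentId);
    }
}
```

In the real services this state lives in the engine's database, so a waiting payment survives a service restart.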

Run the application

You can run the application in one of three ways:

  • Docker Compose with pre-built images from Docker Hub (simplest)
  • Build (Maven) and start via Docker Compose
  • Build (Maven) and start manually (including Zookeeper, Kafka)

Hint on using Camunda Enterprise Edition

For Camunda there is an enterprise edition available with [additional features in Cockpit](https://camunda.com/products/cockpit/#/features) (the monitoring tool). It is quite handy to use this when playing around with the example, and you can easily switch to the enterprise edition.

Note that you do not need the enterprise edition to run the examples; the community edition works fine, you just cannot see and do as much in Camunda Cockpit.

Docker Compose with pre-built Docker images

  • Change into the docker-dist folder:
cd docker-dist
  • Start using Docker Compose:
docker-compose up

If you like, you can also connect to Kafka from your local Docker host machine. Because of Kafka's so-called advertised endpoints, you have to map the Kafka container hostname to localhost: Kafka's cluster manager (ZooKeeper) hands clients its view of the Kafka cluster, which contains this hostname, even if you initially connected via localhost.

For example, on Windows append this entry to C:\Windows\System32\drivers\etc\hosts:

127.0.0.1 kafkaserver

On Linux, edit /etc/hosts accordingly.

Docker Compose with local build of Docker images

  • Download or clone the source code
  • Run a full maven build
mvn install
  • Build Docker images and start them up
docker-compose build
docker-compose up

If you want to connect to Kafka from your local Docker host machine, apply the same hosts-file mapping described above (127.0.0.1 kafkaserver).

Manual start (Kafka, mvn exec:java)

  • Download or clone the source code
  • Run a full maven build
mvn install
  • Install and start Kafka on the standard port
  • Create topic "flowing-retail"
kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic flowing-retail
  • You can list all topics with:
kafka-topics.sh --list --zookeeper localhost:2181
  • Start the different microservice components one by one via Spring Boot, e.g.
mvn -f checkout exec:java
mvn -f order exec:java
...

You can also import the projects into your favorite IDE and start the following class yourself:

checkout/io.flowing.retail.java.CheckoutApplication
...

About


License: Apache License 2.0


Languages

Java 89.4%, HTML 6.5%, JavaScript 4.1%