Study of using Avro as the serialization mechanism for messages and events.
The focus was on investigating the effort needed to disable Akka's default Java serializer. I looked at two parts.
The first part covers the persistent events. The application needs to be able to deal with all previously persisted versions of these events; I primarily looked at up-casting as the way to interpret old events.
The second part covers the commands. For now I am only interested in the current version, but this will bring some challenges.
- start Cassandra on port 9042. I use Docker for that, see the shell-script in the root of the project.
- start the server
If using a Mac, like I do, make sure that you can use Docker.
I use docker-machine with the default image, aptly called `default`.
If it isn't running already (check with `docker-machine ls`), start it with `docker-machine start default`.
Make sure the environment variables of the VM on which your container will run are available.
Do this through the `eval "$(docker-machine env default)"` command.
Execute the shell script `./cassandra.sh`.
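The Mac-specific steps above can be combined into a small script (a sketch; it assumes the `default` docker-machine and the `cassandra.sh` script mentioned above):

```shell
#!/usr/bin/env sh
# Start the docker-machine VM if it is not already running
docker-machine status default | grep -q Running || docker-machine start default

# Make the VM's Docker environment variables available in this shell
eval "$(docker-machine env default)"

# Start Cassandra on port 9042 via the script in the project root
./cassandra.sh
```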
I just execute `nl.codestar.api.Server` from my IDE (IntelliJ), which by default uses the configuration for `server-1`.
You can supply a command-line argument, e.g. `nl.codestar.api.Server server-2`.
The first command-line argument is the name of the file on the classpath that will be loaded.
See below why it does not work with sbt yet.
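A minimal sketch of how the first argument could select the configuration name (a hypothetical helper, not the actual `Server` code; the `server-1` default comes from the behaviour described above):

```scala
// Hypothetical sketch: select the configuration name from the first
// command-line argument, falling back to "server-1".
object ConfigSelection {
  def configName(args: Array[String]): String =
    args.headOption.getOrElse("server-1")
}

// The real server would then load that file from the classpath,
// e.g. with Typesafe Config: ConfigFactory.load(ConfigSelection.configName(args))
```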
The following resources were handy.
- akka-serialization-test: Study on akka-serialization using Google Protocol Buffers, Kryo and Avro
- Akka Serialization
For the events (of Akka Persistence), different versions of the same event have been made; these are 'up-casted' to the latest version when read. This way the domain logic only needs to know the latest version of an event. The up-casting is done using defaults defined in the Avro schema.
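To illustrate how defaults drive the up-casting, here is a hypothetical v2 schema for an appointment event: say `location` was added in v2 with a default, so events written with a v1 schema that lacked the field resolve to `location = "unknown"` when read. All names here are made up for the example.

```json
{
  "type": "record",
  "name": "AppointmentCreated",
  "namespace": "nl.codestar.events",
  "fields": [
    { "name": "id", "type": "string" },
    { "name": "start", "type": "string" },
    { "name": "location", "type": "string", "default": "unknown" }
  ]
}
```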
In the current project you can see the different versions of the commands, as they are kept around for testing. Under normal circumstances you would only have the Avro schemas (in `src/main/resources/`).
As a starting point, see the `AppointmentEventAvroSerializer` class.
For now the `AvroCommandSerializer` just uses a list of classes to serialize. There is no versioning possibility for the commands.
No up-casting is supported for commands. The implication is that, when using Akka Persistence in an Akka cluster, new versions of commands cannot be deployed without downtime:
new nodes will not understand the old commands and vice versa. So for a cluster you will also need versioning of commands, or the introduction of new commands.
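Binding commands to the `AvroCommandSerializer` (and disabling the default Java serialization, as investigated here) could look roughly like this in `application.conf`; the package path and command class name are assumptions:

```
akka.actor {
  allow-java-serialization = off

  serializers {
    # assumed package for the serializer class mentioned above
    avro-command = "nl.codestar.api.serialization.AvroCommandSerializer"
  }

  serialization-bindings {
    # hypothetical command class
    "nl.codestar.api.commands.CreateAppointment" = avro-command
  }
}
```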
This needs to be improved, but the types do not line up yet.
When preparing this README, I updated the `build.sbt` so that it should be runnable with `sbt run`.
This gives the error shown below.
```
2017-12-30 14:03:33,641 - akka.event.EventStreamUnsubscriber -> ERROR[appointmentSystem-akka.actor.default-dispatcher-2] EventStreamUnsubscriber - swallowing exception during message send
java.lang.ClassNotFoundException: scala.Int
    at sbt.internal.inc.classpath.ClasspathFilter.loadClass(ClassLoaders.scala:74)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:348)
    at akka.actor.ReflectiveDynamicAccess.$anonfun$getClassFor$1(ReflectiveDynamicAccess.scala:21)
    at scala.util.Try$.apply(Try.scala:209)
    at akka.actor.ReflectiveDynamicAccess.getClassFor(ReflectiveDynamicAccess.scala:20)
    at akka.serialization.Serialization.$anonfun$bindings$3(Serialization.scala:333)
    at scala.collection.TraversableLike$WithFilter.$anonfun$map$2(TraversableLike.scala:739)
    at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:231)
    at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:462)
    at scala.collection.TraversableLike$WithFilter.map(TraversableLike.scala:738)
    at akka.serialization.Serialization.<init>(Serialization.scala:331)
```