When working with a distributed system, or simply with a system that calls components other than itself (a database, a queue, a REST endpoint, ...), it is difficult to understand what happens without digging deep into the code. For example, if a call takes a long time, we need metrics to understand where and by how much the process is lagging. Zipkin is an answer to this problem: it allows tracing calls across a distributed system.
This project is not at all about how to manage the system's elements; it is only about the connectivity between those components.
- HTTP call

| | direct call | nested call |
|---|---|---|
| Java | with Spring MVC | with RestTemplate |
| | | with HttpClient |
| NodeJS | with Express | with Axios |
| PHP | | |
- Message queue producer

| | ActiveMQ | RabbitMQ | Kafka |
|---|---|---|---|
| Java | with Spring Sleuth and JMS | with Spring Sleuth and JMS | |
| NodeJS | | | with KafkaJS |
| PHP | | | |
- Message queue consumer

| | ActiveMQ | RabbitMQ | Kafka |
|---|---|---|---|
| Java | with Spring Sleuth and JMS | with Spring Sleuth and JMS | |
| NodeJS | | | with KafkaJS |
| PHP | | | |
- Database

| | MySQL |
|---|---|
| Java | with P6Spy |
| | with Driver interceptor |
| NodeJS | |
| PHP | |
This demo illustrates the simplest use case: a Java application acting as a web server. All calls to the endpoint are traced in Zipkin.
```shell
docker-compose -f _docker-compose/java-basic.yml up
```
Services available:

| Service name | URL |
|---|---|
| zipkin | http://[MY_HOST]:9411 |
| java-basic-frontend | http://[MY_HOST]:8080 |
Calling java-basic-frontend gets you a serialized date. This simple HTTP call is traced inside Zipkin.
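Pointing the application at Zipkin is plain Spring configuration; a minimal sketch (property names from Spring Cloud Sleuth 2.x; the application name and zipkin host are assumptions matching the compose setup):

```properties
spring.application.name=java-basic-frontend
# where spans are reported
spring.zipkin.baseUrl=http://zipkin:9411
# sample every request (fine for a demo, too much for production)
spring.sleuth.sampler.probability=1.0
```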
This demo illustrates nested HTTP calls in Java using RestTemplate. Once the main endpoint is called, it calls another service. All calls to any endpoint are traced in Zipkin.
```shell
docker-compose -f _docker-compose/java-resttemplate.yml up
```
Services available:

| Service name | URL |
|---|---|
| zipkin | http://[MY_HOST]:9411 |
| java-basic-frontend | http://[MY_HOST]:8081 |
| java-resttemplate-frontend | http://[MY_HOST]:8080 |
Calling java-resttemplate-frontend will call java-basic-frontend. A serialized date is then passed back from java-basic-frontend to java-resttemplate-frontend, and finally to the user. All HTTP calls are traced in Zipkin. The server acting as the backend for the nested HTTP call is the one used in the java-basic demo.
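Under the hood, the nested call is tied to the original one because Sleuth propagates the trace context between services using B3 HTTP headers. A minimal sketch of what the forwarded headers look like (the id values below are illustrative; real ones are random hex generated per request):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class B3Headers {
    // Build the B3 headers a traced client adds to an outgoing HTTP request.
    static Map<String, String> b3Headers(String traceId, String spanId, String parentSpanId) {
        Map<String, String> headers = new LinkedHashMap<>();
        headers.put("X-B3-TraceId", traceId);       // same for every span in the trace
        headers.put("X-B3-SpanId", spanId);         // id of the current operation
        if (parentSpanId != null) {
            headers.put("X-B3-ParentSpanId", parentSpanId); // absent on the root span
        }
        headers.put("X-B3-Sampled", "1");           // 1 = report this trace to Zipkin
        return headers;
    }

    public static void main(String[] args) {
        b3Headers("80f198ee56343ba8", "e457b5a2e4d86bd1", "05e3ac9a4f6e3b90")
                .forEach((k, v) -> System.out.println(k + ": " + v));
    }
}
```

Zipkin stitches spans sharing a trace id into one timeline, which is why the java-basic span appears nested under the java-resttemplate span in the UI.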
This demo is the same as the java-resttemplate one.
```shell
docker-compose -f _docker-compose/java-httpclient.yml up
```
Services available:

| Service name | URL |
|---|---|
| zipkin | http://[MY_HOST]:9411 |
| java-basic-frontend | http://[MY_HOST]:8081 |
| java-httpclient-frontend | http://[MY_HOST]:8080 |
Only the implementation for the nested call is modified. Instead of using Spring's RestTemplate, Apache's HttpClient is used.
This demo illustrates how Spring Sleuth decorates JmsTemplate and JmsListener.
```shell
docker-compose -f _docker-compose/java-activemq.yml up
```
Services available:

| Service name | URL |
|---|---|
| zipkin | http://[MY_HOST]:9411 |
| activemq | http://[MY_HOST]:8161 |
| java-activemq-frontend | http://[MY_HOST]:8080 |
| java-activemq-consumer | not reachable |
Calling java-activemq-frontend will send a message to the message queue. You can check that the message is correctly sent through the ActiveMQ UI (default credentials: admin/admin). A Java application is defined to consume messages from the queue. Consumption is very simple: it dumps the message to standard output. Sending and consuming messages are traced through Zipkin. For each call to java-activemq-frontend, you should observe 3 spans: two for the java-activemq-frontend endpoint and its send to the queue, and one for the java-activemq-consumer message consumption.
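No tracing code is needed in the application itself; having Sleuth on the classpath is enough for JmsTemplate and @JmsListener to be decorated. A typical Maven setup might look like the following (versions omitted, assuming they are managed by the Spring Cloud BOM):

```xml
<!-- traces JmsTemplate sends and @JmsListener receives automatically -->
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-sleuth</artifactId>
</dependency>
<!-- reports the collected spans to the Zipkin server -->
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-sleuth-zipkin</artifactId>
</dependency>
```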
This demo is the same as the java-activemq one. This time, 2 consumers are started.
```shell
docker-compose -f _docker-compose/java-activemq-multiple-consumers.yml up
```
Services available:

| Service name | URL |
|---|---|
| zipkin | http://[MY_HOST]:9411 |
| activemq | http://[MY_HOST]:8161 |
| java-activemq-frontend | http://[MY_HOST]:8080 |
| java-activemq-consumer-1 | not reachable |
| java-activemq-consumer-2 | not reachable |
It would have been great to be able to use docker-compose scale to scale java-activemq-consumer up (or down). But there does not seem to be a simple way to expose the container ID as an environment variable (in order to change the application name for the demo's purposes).
This demo illustrates how Spring Sleuth decorates KafkaTemplate and KafkaListener.
```shell
docker-compose -f _docker-compose/java-kafka.yml up
```
Services available:

| Service name | URL |
|---|---|
| zipkin | http://[MY_HOST]:9411 |
| kafka | not reachable |
| zookeeper (needed for kafka) | not reachable |
| java-kafka-frontend | http://[MY_HOST]:8080 |
| java-kafka-consumer | not reachable |
Calling java-kafka-frontend will send a message to the message queue (the topic is topicBackend). The frontend service returns the raw Kafka result from the send. A Java application is defined to consume messages from the queue (the topic is topicBackend and the groupId is my-java-consumer-group). Consumption is very simple: it dumps the message to standard output. Sending and consuming messages are traced through Zipkin. For each call to java-kafka-frontend, you should observe 4 spans: two for the java-kafka-frontend endpoint and its send to the queue, and two for the java-kafka-consumer message consumption.
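On the consumer side this boils down to standard Spring Kafka configuration; a sketch of the relevant properties (the bootstrap server host is an assumption matching the compose file, the group id comes from the text above):

```properties
# where the broker lives inside the compose network
spring.kafka.bootstrap-servers=kafka:9092
# consumers sharing this group id split the partitions of topicBackend
spring.kafka.consumer.group-id=my-java-consumer-group
```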
This demo illustrates how P6Spy is used through Brave instrumentation to decorate a JDBC datasource.
```shell
docker-compose -f _docker-compose/java-mysqlp6spy.yml up
```
Services available:

| Service name | URL |
|---|---|
| zipkin | http://[MY_HOST]:9411 |
| mysql | port 3306 is accessible |
| adminer | http://[MY_HOST]:8081 (for demo only, not needed) |
| java-mysql-frontend | http://[MY_HOST]:8080 |
Application java-mysql-frontend offers 2 endpoints:
- GET / : retrieves customers from the database
- POST / : creates customers in the database
Calling java-mysql-frontend with a POST will trigger multiple JDBC calls in order to insert 5 new customers. In the Zipkin UI, calls to the database are traced and you can see how many database calls are made (especially those for the Hibernate sequence).
Calling java-mysql-frontend with a GET will trigger only one JDBC call in order to fetch the customers.
For each traced database call, we can check which DB query is made, along with its parameters.
An interesting thing to notice is how little you have to change in the project to make it work with P6Spy; there is no need to change any code:
- change JDBC driver
- change JDBC URL
- add a property file for P6Spy
- add P6Spy Maven dependency
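Concretely, those four changes are pure configuration; a sketch of what they might look like for a Spring Boot datasource (the P6Spy driver class and the jdbc:p6spy: URL prefix come from P6Spy, the Brave module name from brave-instrumentation-p6spy; host and schema names are assumptions):

```properties
# application.properties — swap the real driver for the P6Spy wrapper
spring.datasource.driver-class-name=com.p6spy.engine.spy.P6SpyDriver
spring.datasource.url=jdbc:p6spy:mysql://mysql:3306/demo

# spy.properties — route captured JDBC events to the Brave/Zipkin reporter
modulelist=brave.p6spy.TracingP6Factory
```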
This demo illustrates how Brave instrumentation decorates the JDBC driver with interceptors.
```shell
docker-compose -f _docker-compose/java-mysqlinstrumentation.yml up
```
This demo is the same as the java-mysqlp6spy one.
Services available:

| Service name | URL |
|---|---|
| zipkin | http://[MY_HOST]:9411 |
| mysql | port 3306 is accessible |
| adminer | http://[MY_HOST]:8081 (for demo only, not needed) |
| java-mysql-frontend | http://[MY_HOST]:8080 |
An interesting thing to notice is how little you have to change in the project to make it work with the driver interceptor; there is no need to change any code:
- add parameter to JDBC URL
- add brave-instrumentation Maven dependency
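Here too the change is configuration only; with MySQL Connector/J 5.x the interceptor is declared directly in the JDBC URL (the interceptor class comes from brave-instrumentation-mysql; the URL parameter name differs for Connector/J 8.x, and the host, schema, and service name below are assumptions):

```properties
spring.datasource.url=jdbc:mysql://mysql:3306/demo?statementInterceptors=brave.mysql.TracingStatementInterceptor&zipkinServiceName=mysql-demo
```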
The difference between P6Spy and the driver interceptor is the level of tracing:
- P6Spy wraps the driver and traces what goes out of it
- the driver interceptor traces what is happening inside the driver
This demo illustrates the simplest use case: a NodeJS application acting as a web server (Express). All calls to the endpoint are traced in Zipkin.
```shell
docker-compose -f _docker-compose/nodejs-basic.yml up
```
Services available:

| Service name | URL |
|---|---|
| zipkin | http://[MY_HOST]:9411 |
| nodejs-basic-frontend | http://[MY_HOST]:9000 |
Calling nodejs-basic-frontend gets you a serialized date. This simple HTTP call is traced inside Zipkin. As debug is enabled for the demo, you can see details for the HTTP call to Zipkin.
This demo illustrates nested HTTP calls in NodeJS using Axios. Once the main endpoint is called, it calls another service. All calls to any endpoint are traced in Zipkin.
```shell
docker-compose -f _docker-compose/nodejs-axios.yml up
```
Services available:

| Service name | URL |
|---|---|
| zipkin | http://[MY_HOST]:9411 |
| nodejs-basic-frontend | http://[MY_HOST]:9001 |
| nodejs-axios-frontend | http://[MY_HOST]:9000 |
Calling nodejs-axios-frontend will call nodejs-basic-frontend. A serialized date is then passed back from nodejs-basic-frontend to nodejs-axios-frontend, and finally to the user. All HTTP calls are traced in Zipkin. The server acting as the backend for the nested HTTP call is the one used in the nodejs-basic demo.
As a bonus, you can try the fibonacci endpoint to trigger latency:
- http://[MY_HOST]:9000/fibonacci?count=40
This demo illustrates how ZipkinJS can be used to decorate a KafkaJS client.
```shell
docker-compose -f _docker-compose/nodejs-kafkajs.yml up
```
Services available:

| Service name | URL |
|---|---|
| zipkin | http://[MY_HOST]:9411 |
| kafka | not reachable |
| zookeeper (needed for kafka) | not reachable |
| nodejs-kafkajs-frontend | http://[MY_HOST]:9000 |
| nodejs-kafkajs-consumer | not reachable |
Calling nodejs-kafkajs-frontend will send a message to the message queue (the topic is topicBackend). The frontend service returns OK. A NodeJS application is defined to consume messages from the queue (the topic is topicBackend and the groupId is my-nodejs-consumer-group). Consumption is very simple: it dumps the message to standard output. Sending and consuming messages are traced through Zipkin. For each call to nodejs-kafkajs-frontend, you should observe 4 spans: two for the nodejs-kafkajs-frontend endpoint and its send to the queue, and two for the nodejs-kafkajs-consumer message consumption.
- introduction to distributed tracing : https://speakerdeck.com/adriancole/introduction-to-distributed-tracing-and-zipkin-at-devopsdays-singapore
- example for Java Spring : https://github.com/openzipkin/sleuth-webmvc-example
- tools and example for NodeJS (it contains multiple instrumentations) : https://github.com/openzipkin/zipkin-js-example
- tools and example for PHP : https://github.com/openzipkin/zipkin-php
- plug nginx with zipkin : https://medium.com/opentracing/how-to-enable-nginx-for-distributed-tracing-9479df18b22c