This repository dives into five different logging patterns:
- Parse: Take the log files of your applications and extract the relevant pieces of information.
- Send: Add a log appender to send out your events directly without persisting them to a log file.
- Structure: Write your events in a structured file, which you can then centralize.
- Containerize: Keep track of short-lived containers and configure their logging correctly.
- Orchestrate: Stay on top of your logs even when services are short lived and dynamically allocated on Kubernetes.
The slides for this talk are available on my website.
Before everything else, fetch the dependencies (or update them if you have run this before):
$ docker run --rm --interactive --tty --volume $PWD/app:/app composer:1.9.1 install
- Bring up the Elastic Stack:
$ docker-compose up --build
- Rerun the PHP application to generate more logs:
$ docker restart php_app
- Remove the Elastic Stack and its volumes:
$ docker-compose down -v
- Start the demo:
$ docker-compose up --build
- Look at the code: which pattern are we building with log statements here?
- Look at Management -> Index Management in Kibana.
- How many log events should we have? 40. But we have 43 entries instead, so something is wrong here.
- See the `_grokparsefailure` in the `tags` field. Enable the multiline rules in Filebeat; it should automatically refresh, and when you run the application again, it should now only collect 40 events.
- Show that this works as expected now and drill down to the errors to see that emojis work throughout the stack.
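As a rough sketch of what such multiline rules look like in Filebeat (the path and pattern here are assumptions for illustration, not copied from this repository's configuration):

```
filebeat.inputs:
  - type: log
    paths:
      - /var/log/php/*.log   # assumed location of the application log
    # Treat every line that does not start with "[" as a continuation
    # of the previous event, so stack traces stay in one document.
    multiline.pattern: '^\['
    multiline.negate: true
    multiline.match: after
```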
- Copy a log line and parse it with the Grok Debugger in Kibana, for example with the pattern `^\[%{TIMESTAMP_ISO8601:timestamp}\]%{SPACE}\{"memory":%{NUMBER:memory}`. Show https://github.com/logstash-plugins/logstash-patterns-core/blob/master/patterns/grok-patterns to get started; we can copy the rest of the pattern from logstash.conf.
- Show the Data Visualizer in Machine Learning by uploading the log file. The output is actually quite good already, but we are sticking to our manual rules for now.
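Wired into Logstash, that pattern sits in a filter block roughly like the following sketch (the full pattern in logstash.conf covers more fields than shown here):

```
filter {
  grok {
    # Extract the timestamp and the memory counter from the start of the line;
    # any event that does not match is tagged with _grokparsefailure.
    match => {
      "message" => '^\[%{TIMESTAMP_ISO8601:timestamp}\]%{SPACE}\{"memory":%{NUMBER:memory}'
    }
  }
}
```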
- Find the log statements in Kibana's Discover view for the parse index.
- Show the Logstash pipeline in Kibana's Monitoring view and the other components in Monitoring.
- Create a vertical bar chart visualization on the `log.level` field.
- Describe how the logs would be missing from the first run, since no connection to Elasticsearch would have been established yet.
- Skip the approach after discussing its downsides.
- Run the application, show the data in the structure index, and filter to `fields.application: "php"`.
- Show the PHP configuration for JSON, since it is a little more complicated than the others.
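With Monolog, the JSON setup boils down to attaching a formatter to a handler; this is a minimal sketch of the idea (the channel name and log path are placeholders, not this repository's actual values):

```
<?php
use Monolog\Logger;
use Monolog\Handler\StreamHandler;
use Monolog\Formatter\JsonFormatter;

// Write every event as a single JSON line, which Filebeat can
// decode directly without any grok parsing.
$handler = new StreamHandler('/var/log/php/app.json');
$handler->setFormatter(new JsonFormatter());

$logger = new Logger('php');
$logger->pushHandler($handler);
$logger->warning('Request finished', ['memory' => memory_get_usage()]);
```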
- Point to https://github.com/elastic/ecs for the naming conventions and its PHP implementation https://github.com/elastic/ecs-logging-php.
- Show the results in `fields.application: "php-ecs"` and discuss the deeper integrations, including their tradeoff of tighter coupling.
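For reference, an ECS-formatted event is still one JSON line per event, just with standardized field names; a hand-written illustrative example (the values and version number are made up, not captured from this demo):

```
{
  "@timestamp": "2020-01-01T12:00:00.000Z",
  "log.level": "warning",
  "log.logger": "php-ecs",
  "message": "Request finished",
  "ecs.version": "1.2.0"
}
```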
- Show the metadata we are collecting now.
- Point to the ingest pipeline and show how everything ties together. Also discuss the need for Logstash and why many use cases are fine with ingest nodes.
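An ingest pipeline can host the same parsing logic as a grok processor inside Elasticsearch; a sketch of what that looks like (the pipeline name is a made-up example, and the pattern is shortened):

```
PUT _ingest/pipeline/php-logs
{
  "description": "Parse the PHP application log",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": ["^\\[%{TIMESTAMP_ISO8601:timestamp}\\]%{SPACE}\\{\"memory\":%{NUMBER:memory}"]
      }
    }
  ]
}
```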
- Filter down to `container.name : "php_app"` and point out the hinting that stops the multiline statements from being broken up.
- Point out how you could break up the output into two indices, docker-* and docker-php-*.
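The hinting works through labels on the container, which Filebeat's hints-based autodiscover picks up (assuming `hints.enabled: true` in the Filebeat configuration); a docker-compose sketch:

```
services:
  php_app:
    labels:
      # Same multiline rules as before, but attached to the container,
      # so short-lived instances are configured automatically.
      co.elastic.logs/multiline.pattern: '^\['
      co.elastic.logs/multiline.negate: 'true'
      co.elastic.logs/multiline.match: 'after'
```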
- Show the new Logs UI (adapt the pattern to match the right index).