
Application Logging on OpenShift with Elasticsearch, Fluentd, and Kibana

OpenShift provides a preconfigured EFK (Elasticsearch, Fluentd, and Kibana) stack that DevOps teams can use to aggregate all container logs. You install the EFK stack by using the ansible-playbook command from the openshift-ansible repository. After the installation completes, the EFK deployments reside in the openshift-logging namespace of the OpenShift cluster.

Set up two separate EFK stacks: one with EFK deployments dedicated exclusively to OpenShift and Kubernetes logs, and the other for user applications.

There are several advantages to having a separate ops EFK stack.

  • It is much easier to find application logs in Kibana because they are not buried under the Kubernetes logs that are generated constantly.

  • It provides more flexibility for memory allocation, because you can independently assign memory to each of the EFK deployments.

Install openshift-logging

Installing EFK on OpenShift requires an OpenShift host inventory file and the logging playbook from the openshift-ansible repository. After you create the inventory file, add the following Ansible variables to the [OSEv3:vars] section of the inventory file.

openshift_logging_use_ops=True
openshift_logging_es_ops_nodeselector={"node-role.kubernetes.io/infra":"true"}
openshift_logging_es_nodeselector={"node-role.kubernetes.io/infra":"true"}
openshift_logging_es_ops_memory_limit=5G
openshift_logging_es_memory_limit=3G
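
For reference, these variables sit alongside the host definitions in the inventory file. The following is a minimal sketch only; the host names, node groups, and deployment type are illustrative assumptions and depend entirely on your cluster topology.

[OSEv3:children]
masters
nodes

[OSEv3:vars]
openshift_deployment_type=openshift-enterprise
openshift_logging_use_ops=True
openshift_logging_es_ops_nodeselector={"node-role.kubernetes.io/infra":"true"}
openshift_logging_es_nodeselector={"node-role.kubernetes.io/infra":"true"}
openshift_logging_es_ops_memory_limit=5G
openshift_logging_es_memory_limit=3G

[masters]
master.example.com

[nodes]
master.example.com openshift_node_group_name='node-config-master-infra'
node1.example.com openshift_node_group_name='node-config-compute'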

OpenShift provides many other Ansible variables for advanced fine-tuning of the EFK stack. Detailed information about the installation and configuration can be found in the OpenShift documentation under Aggregating Container Logs.

Let’s examine each of the variables in detail.

  • Setting openshift_logging_use_ops=True instructs Ansible to install two EFK stacks, one of them being the dedicated ops deployment.

  • openshift_logging_es_nodeselector and openshift_logging_es_ops_nodeselector are two variables required by the openshift-logging ansible-playbook to install Elasticsearch. You usually set these variables to select the infra nodes.

  • openshift_logging_es_memory_limit and openshift_logging_es_ops_memory_limit are self-explanatory and can be set according to your preference. For stable operation, allocate at least 2 GB of memory for each of the Elasticsearch deployments. If memory is not specified explicitly, OpenShift allocates 16 GB of memory for each Elasticsearch deployment by default. It is highly recommended to install openshift-logging on systems with at least 32 GB of RAM when openshift_logging_use_ops is set to true. After the installation, you can verify the limits that were applied, as shown in the check after this list.
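
One way to confirm that the memory limits took effect is to inspect the Limits section of the Elasticsearch pods. The pod name below is taken from the sample output later in this guide and will differ in your cluster; this is only a quick sanity check.

oc get pods -n openshift-logging | grep logging-es
oc describe pod logging-es-data-master-iay9qoim-4-cbtjg -n openshift-logging | grep -A 3 Limits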

After all the variables are set in the inventory file, run the ansible-playbook command to install the EFK stack onto the current OpenShift cluster.

ansible-playbook -i <inventory_file> openshift-ansible/playbooks/openshift-logging/config.yml -e openshift_logging_install_logging=true

The installation process can take a few minutes to complete. After the installation completes without any errors, you can see the following pods running in the openshift-logging namespace.

[root@rhel-2EFK ~]# oc get pods -n openshift-logging

NAME                                          READY     STATUS      RESTARTS   AGE
logging-curator-1565163000-9fvpf              0/1       Completed   0          20h
logging-curator-ops-1565163000-5l5tx          0/1       Completed   0          20h
logging-es-data-master-iay9qoim-4-cbtjg       2/2       Running     0          3d
logging-es-ops-data-master-hsmsi5l8-3-vlrgs   2/2       Running     0          3d
logging-fluentd-vssj2                         1/1       Running     1          3d
logging-kibana-2-tplkv                        2/2       Running     6          4d
logging-kibana-ops-1-bgl8k                    2/2       Running     2          3d

You can see that Elasticsearch, Kibana, and Curator each have two sets of pods: the main one and a secondary set with the -ops postfix. Fluentd is the exception, because the split between application logs and OpenShift operations logs happens inside the single Fluentd instance. All of the node system logs, along with the logs from the default, openshift, and openshift-infra projects, are considered operations (or ops) logs and are aggregated to the ops Elasticsearch server. The logs from any other namespace are aggregated to the main Elasticsearch server.

The openshift-logging ansible-playbook also exposes two routes for external access to the Kibana and ops Kibana web consoles.

[root@rhel-2EFK ~]# oc get routes -n openshift-logging

NAME                 HOST/PORT                             PATH      SERVICES             PORT      TERMINATION          WILDCARD
logging-kibana       kibana.apps.9.37.135.153.nip.io                 logging-kibana       <all>     reencrypt/Redirect   None
logging-kibana-ops   kibana-ops.apps.9.37.135.153.nip.io             logging-kibana-ops   <all>     reencrypt/Redirect   None

If you examine the logging-kibana-ops URL, all the operations logs generated by OpenShift and Kubernetes are visible on Kibana’s Discover page.

Figure 1: Kibana ops page with operation log entries

View application logs on Kibana

Before you use logging-kibana for application logs, make sure that the application is deployed in a namespace that is NOT one of the ops namespaces. The ops namespaces are default, openshift, and openshift-infra.
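
For example, creating and deploying to a new user project keeps the application logs out of the ops stack. The project name and image reference below are hypothetical placeholders, not part of this guide's setup.

oc new-project my-app
oc new-app <your-application-image> -n my-app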

To take full advantage of Kibana’s dashboard functions, output the application logs in JSON format. Kibana can then process the data from each individual field of the JSON object to create customized visualizations for that field.
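
As an illustration, a single JSON log record might look like the following line. The field names here are purely illustrative and depend on your application or logging framework; Kibana indexes whatever fields your JSON objects contain.

{"datetime":"2019-08-07T14:02:55.159Z","loglevel":"INFO","module":"com.example.app.PaymentService","message":"Order 10452 processed successfully"}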

Open the Kibana dashboard page by using the route URL https://kibana.apps.9.37.135.153.nip.io. Log in with your OpenShift username and password; the page then redirects you to Kibana’s Discover page, where the newest logs of the selected index are streamed. Select the project.* index to view the application logs generated by the deployed application.

Figure 2: Kibana page with the application log entries

The project.* index contains only a set of default fields at the start, which does not include all of the fields from the deployed application’s JSON log object. Therefore, the index needs to be refreshed so that all the fields from the application’s log object are available to Kibana.

To refresh the index, click on the Management option on the left pane.

Click Index Patterns, and find the project.* index. Then, click the refresh fields button, which is on the right. After Kibana is updated with all the available fields in the project.* index, import the preconfigured dashboards to view the application logs.

Figure 3: Index refresh button on Kibana

To import the dashboard and its associated objects, navigate back to the Management page and click Saved Objects. Click Import and select the dashboard file. When prompted, click the Yes, overwrite all option.

Head back to the Dashboard page and enjoy viewing the logs on the dashboard that you imported.

Figure 4: Kibana dashboard for Open Liberty application logs

Reinstalling and uninstalling openshift-logging

If you need to change the installed EFK stack, rerun the ansible-playbook installation command with the updated Ansible variable values in the inventory file. It is always safe to reinstall openshift-logging by running this command. If aggregated container logging is no longer needed in the current cluster, you can use the same ansible-playbook command to uninstall the openshift-logging feature by setting the openshift_logging_install_logging variable to False.
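
Following the same pattern as the installation command shown earlier, the uninstall invocation would look like this:

ansible-playbook -i <inventory_file> openshift-ansible/playbooks/openshift-logging/config.yml -e openshift_logging_install_logging=false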
