How to monitor OpenShift ElasticSearch logging with ElasticHQ

Ricardo Zanini
4 min read · Mar 26, 2019

OpenShift ships the EFK stack for handling aggregate logging. Aggregate logging refers to the logs of the OpenShift internal services and of the containers where your applications are deployed, so it's very important to keep an eye on this service and make sure everything is working as intended. In this article I'll show you an alternative approach for monitoring the logging ElasticSearch (ES) instance.

ElasticHQ monitoring OpenShift Logging ElasticSearch

Background Story

OpenShift 3.11 introduced Prometheus with exporters for the services shipped in the standard OpenShift installation (HAProxy, ElasticSearch, etc.). This way an operator can have the OpenShift internal services monitored by Prometheus.

Prior to 3.11, the only way to monitor the internal ES service was to query the health API directly, for example:

$ oc exec $es_pod -- curl -s --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert --cacert /etc/elasticsearch/secret/admin-ca https://localhost:9200/_cat/health?v

To have something monitoring the ES, you could create a script to execute the API calls from time to time and export the results somewhere else, then build a view on top of them or notify the administrators that something is not right with their indices.
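For illustration, here is a minimal sketch of such a check, assuming it runs with access to the logging project and reusing the admin secret paths from the command above (the pod label selector and the alerting command are placeholders):

#!/bin/bash
# naive periodic health check for the logging ES (illustrative sketch only)
ES_POD=$(oc -n logging get pods -l component=es -o jsonpath='{.items[0].metadata.name}')
STATUS=$(oc -n logging exec "$ES_POD" -- curl -s \
  --key /etc/elasticsearch/secret/admin-key \
  --cert /etc/elasticsearch/secret/admin-cert \
  --cacert /etc/elasticsearch/secret/admin-ca \
  "https://localhost:9200/_cat/health?h=status")
# export the result or alert if the cluster is not green
if [ "$STATUS" != "green" ]; then
  echo "$(date -u) logging ES cluster status is $STATUS" | mail -s "ES alert" admins@example.com
fi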

A friend of mine told me about ElasticHQ and how nice its dashboard was. He was very interested in giving it a try by deploying it on OpenShift to monitor the internal ES service. He had OpenShift 3.9 in production, where Prometheus was still in tech preview. Why not give ElasticHQ a try? After all, the installation was just a matter of pushing a new container. Wasn't it? Not quite.

Understanding ElasticSearch Deployment

In the standard installation, the internal logging ES doesn't expose the health API directly, so the only way to query it is by curling the localhost URL from inside the ES container (as demonstrated in the previous snippet).

This approach is necessary because only the ES administrator has access to the health API. Even if you expose the ES API via a new route, the health endpoints won't be accessible, not even to a user with the cluster-admin role.
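To illustrate the point, this is roughly what such an attempt would look like (the route hostname is a placeholder, and the request is rejected because no administrator client certificate is presented):

# hypothetical attempt: expose the logging ES service and call it with an OpenShift token
oc -n logging create route passthrough logging-es --service=logging-es --port=9200
curl -k -H "Authorization: Bearer $(oc whoami -t)" \
  "https://logging-es-logging.apps.example.com/_cat/health?v"
# the request is denied: the health endpoints require the ES admin client certificate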

The only way to access this API is by using the ES administrator certificate and key, which are available in the logging namespace in a secret named logging-elasticsearch. By mounting this secret into a container inside the logging namespace, it's possible to access the health API via the ES service (svc):

curl -s --key admin-key --cert admin-cert --cacert admin-ca https://logging-es.logging.svc.cluster.local:9200/_cat/health?v

Unfortunately (at the time of this writing), ElasticHQ doesn't support client certificate authentication, which makes it harder to connect ElasticHQ to the OpenShift logging ES.

Deploying ElasticHQ on OpenShift

Given that ElasticHQ can't connect to an endpoint protected by client certificates, and to avoid exposing the ES API outside the cluster for security reasons, our approach was to add a reverse proxy between the ElasticHQ container and the ES service. This proxy is responsible for authenticating with the ES administrator keys and forwarding requests to the ES container:

ElasticHQ deployment on OpenShift

The proxy implementation used was NGINX, for which Red Hat provides a supported container image. The NGINX configuration was pretty straightforward once the secrets had been mounted into the container's filesystem:

location / {
  proxy_set_header X-Forwarded-For $remote_addr;
  # forward every request to the internal logging ES service
  proxy_pass https://logging-es.logging.svc.cluster.local:9200;
  # authenticate with the ES administrator certificates mounted from the secret
  proxy_ssl_trusted_certificate /opt/app-root/src/elasticsearch/secret/admin-ca;
  proxy_ssl_certificate /opt/app-root/src/elasticsearch/secret/admin-cert;
  proxy_ssl_certificate_key /opt/app-root/src/elasticsearch/secret/admin-key;
}

The root location proxies every request to the ES service using the administrator keys. This way, the monitoring tool can easily connect to the proxy internally and make the right API calls to retrieve the ES metrics.
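One way to get those certificate files in place is to mount the logging-elasticsearch secret into the proxy container with oc set volume. A minimal sketch, assuming the deployment is named elastichq and the proxy container es-proxy (both names are placeholders):

oc -n logging set volume deployment/elastichq --add \
  --name=elasticsearch-secret \
  --type=secret \
  --secret-name=logging-elasticsearch \
  --containers=es-proxy \
  --mount-path=/opt/app-root/src/elasticsearch/secret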

To deploy this container along with the ElasticHQ one, we implemented a sidecar container pattern:

Deploy components of an application into a separate process or container to provide isolation and encapsulation.

In this use case, that means deploying a pod with two containers: the proxy, responsible for interfacing with the secured ES service, and the monitoring container, which delivers the desired functionality. To read more about how multiple containers are managed together within a pod, see the Kubernetes documentation.
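Here is a minimal sketch of that pod layout, including the secret mount from the previous step. The image references are placeholders (the custom NGINX image would carry the configuration shown above), and the real manifests live in the GitHub repository linked at the end:

cat <<'EOF' | oc -n logging apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elastichq
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elastichq
  template:
    metadata:
      labels:
        app: elastichq
    spec:
      containers:
      # sidecar: NGINX reverse proxy holding the ES administrator certificates
      - name: es-proxy
        image: example/nginx-es-proxy:latest
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: elasticsearch-secret
          mountPath: /opt/app-root/src/elasticsearch/secret
      # main container: the ElasticHQ UI, reaching the proxy over localhost:8080
      - name: elastichq
        image: elastichq/elasticsearch-hq
        ports:
        - containerPort: 5000
      volumes:
      - name: elasticsearch-secret
        secret:
          secretName: logging-elasticsearch
EOF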

The proxy container exposes port 8080 only inside the pod, which means that only the containers in the same pod are able to connect to it. This avoids unnecessary exposure of the ElasticSearch API: only the ElasticHQ backend service can reach it.
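A quick way to verify that the proxy is forwarding requests correctly, assuming a pod name of elastichq-1-abcde and that curl is available in the proxy container (both assumptions for illustration):

oc -n logging exec elastichq-1-abcde -c es-proxy -- \
  curl -s "http://localhost:8080/_cat/health?v"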

Connecting the ElasticHQ container to the proxy was pretty simple: just a matter of adding the http://localhost:8080 URL to the dashboard:

ElasticHQ connection

Voilà! You now have your OpenShift logging ES monitored by ElasticHQ. Play around with the dashboard to discover the functionality it exposes.

ElasticHQ on OpenShift: Metrics View

Even with Prometheus in place, ElasticHQ can add value to your daily operations, saving you the time of building your own monitoring view or even letting you aggregate other ElasticSearch clusters into the same dashboard.

A fully functional ElasticHQ deployment on OpenShift, with all the technical details, is available in this GitHub repository (star us!). If you have any issues, please let me know by opening an issue there or leaving a comment on this article.

See you next time!
