kubernetes, elasticsearch, operators

Kubernetes Elasticsearch Operator

Just after I wrote the Stateful Applications on Kubernetes post, which focuses on StatefulSets in general, I started to look into Kubernetes operators. An operator combines a custom API object, registered as a Custom Resource Definition (CRD), with a controller that encodes custom business logic for operating a particular service, in this case Elasticsearch. This post goes through the Elasticsearch operator in more detail to show you why using an operator is probably a better idea than using plain StatefulSets or Deployments to create a production-ready Elasticsearch cluster on top of Kubernetes.

StatefulSet or Elasticsearch Operator?

When I started to look into operators, I asked the above question on Twitter, referencing the author of the most widely used Elasticsearch Kubernetes deployment resource. The repository is here, and I highly recommend checking it out to learn more about Elasticsearch deployment on top of Kubernetes.

Check this tweet to see the full conversation history!

At first, I couldn't decide which approach was better. I had a feeling that using operators wasn't beneficial at all, and that an operator would introduce more issues when handling an Elasticsearch cluster in production. Also, if you compare the stars of the above repository with those of the Elasticsearch operator, you will see that people still use StatefulSets more than this operator. Or, they don't like the idea of operators.

Elasticsearch Operator

"An Operator represents human operational knowledge in software to reliably manage an application."

In the end, I decided to go with the operator and made my first contribution to it. I also created a Helm chart for easier deployment.

The operator can do things that are not available with a StatefulSet. It utilizes different Kubernetes resources in the background to do things in a more automated fashion and adds some additional features:

  • S3 snapshots of indexes
  • Automatic TLS - the operator automatically generates secrets
  • Spread loads across zones
  • Support for Kibana and Cerebro
  • Instrumentation with statsd

Let's deploy Elasticsearch operator first:

⚡ helm repo add akomljen-charts \
     https://raw.githubusercontent.com/komljen/helm-charts/master/charts/

⚡ helm install --name es-operator \
     --namespace kube-system \
     akomljen-charts/elasticsearch-operator
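
Before creating any clusters, it is worth confirming that the operator pod is actually up. A quick sketch (the grep pattern and label selector below are assumptions based on the release and chart names; adjust them if your deployment uses different labels):

⚡ kubectl get pods -n kube-system | grep es-operator

⚡ kubectl logs -n kube-system -l app=elasticsearch-operator

If the pod is in a crash loop, the logs usually tell you why before you go any further.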

As I mentioned, the operator registers a custom Kubernetes resource, or CRD. After you deploy it, you can check that the new resource exists:

⚡ kubectl get CustomResourceDefinition
NAME                                         AGE
elasticsearchclusters.enterprises.upmc.com   3d

Moreover, you can check the details of this CRD with:

⚡ kubectl describe CustomResourceDefinition elasticsearchclusters.enterprises.upmc.com
...
Spec:
  Group:  enterprises.upmc.com
  Names:
    Kind:       ElasticsearchCluster
    List Kind:  ElasticsearchClusterList
    Plural:     elasticsearchclusters
    Singular:   elasticsearchcluster
  Scope:        Namespaced
  Version:      v1
...

As you can see, we have a new kind of resource, ElasticsearchCluster. So, now you can create an Elasticsearch cluster using just one YAML file that describes it. Here is an example:

apiVersion: enterprises.upmc.com/v1
kind: ElasticsearchCluster
metadata:
  name: example-es-cluster
spec:
  kibana:
    image: docker.elastic.co/kibana/kibana-oss:6.1.3
  cerebro:
    image: upmcenterprises/cerebro:0.6.8
  elastic-search-image: upmcenterprises/docker-elasticsearch-kubernetes:6.1.3_0
  client-node-replicas: 3
  master-node-replicas: 2
  data-node-replicas: 3
  network-host: 0.0.0.0
  zones:
  - us-east-2b
  data-volume-size: 10Gi
  java-options: "-Xms512m -Xmx512m"
  snapshot:
    scheduler-enabled: false
    bucket-name: elasticsnapshots99
    cron-schedule: "@every 2m"
    image: upmcenterprises/elasticsearch-cron:0.0.4
  storage:
    type: gp2
    storage-class-provisioner: kubernetes.io/aws-ebs
  resources:
    requests:
      memory: 512Mi
      cpu: 500m
    limits:
      memory: 1024Mi
      cpu: '1'

That's it, a complete Elasticsearch cluster ready for use. If you want to add another one, you create a new YAML file and push it to the Kubernetes API. One operator can manage multiple Elasticsearch clusters. I will try to answer some questions you may have:

What should a well-written operator look like?

An excellent example is the Prometheus operator from CoreOS, the company that introduced the operator pattern in the first place.

Can I use a custom Elasticsearch Docker image?

The Elasticsearch Docker image used by this operator is built in several layers. You can check the repositories in this exact order:

  1. Base image
  2. Kubernetes ready image
  3. Operator ready image

In the end, you have one image, upmcenterprises/docker-elasticsearch-kubernetes, which is the default. You can use that official image, but you can also build your own. From the links above, you can check all the environment variables and other things that your image needs to incorporate to work correctly. It is pretty easy.
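
If all you need is an extra plugin or a small tweak, you can base a custom image on the default one instead of rebuilding the whole layer chain, and then point the elastic-search-image field of the cluster spec at it. A minimal sketch, where the repository name and the plugin choice are mine, and which assumes the Elasticsearch bin directory is on the image's PATH:

FROM upmcenterprises/docker-elasticsearch-kubernetes:6.1.3_0
RUN elasticsearch-plugin install --batch analysis-icu

⚡ docker build -t myrepo/docker-elasticsearch-kubernetes:6.1.3_0-icu .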

Ok, so it all looks good, but are there any cons?

Well, yes, a few:

  • It is an open source project, so you should be prepared to fix things on your own
  • You need to learn how it works (it's an additional tool)
  • I would like to see support for different Java options for master, data, and client nodes
  • There is no zone awareness yet, so primary and replica shards can all end up scheduled into the same zone

Summary

I referenced only one operator, which I found useful for Elasticsearch; there are probably many more. What I would like to see is companies like Elastic starting to embrace Kubernetes and eventually developing operators and Helm charts themselves. Otherwise, I'm afraid we will end up with multiple operators for the same software rather than improvements to the existing ones. What can you do about it? Start contributing to and using existing operators, just as with any other open source project. Operators are here to stay. Stay tuned for the next one.