Here is the second post of my monthly update series. Today is day 1 of the KubeCon + CloudNativeCon conference in Copenhagen, Denmark, and all eyes will be on it for a few days. Unfortunately, I will not be there, but I will follow the events and write about them over the next few weeks. Before that, I want to share some news that I found interesting and important.
My Updates
As usual, I will start with my own updates. In April I was mostly researching Kubernetes monitoring solutions, and I wrote two related blog posts. The first one is about collecting Kubernetes cluster metrics with Prometheus, and the second one is about collecting Kubernetes logs with the EFK stack. By implementing Prometheus and the EFK stack you will probably cover 80% of your cluster's monitoring needs. I also had some time to write about how to deal with persistent storage with deployments and stateful sets. That post received good comments.
Just blogged: Get Kubernetes Logs with Elasticsearch, FluentBit and Kibana in 5 Minutes https://t.co/hr5ubss3fa #EFK #Monitoring
— Alen Komljen (@alenkomljen) April 22, 2018
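On the persistent storage topic, the key difference is that a StatefulSet can provision a dedicated volume per replica through volumeClaimTemplates, while a Deployment's replicas typically share a single PersistentVolumeClaim. Here is a minimal sketch of the StatefulSet side; the names, image, sizes and storage class are made up for illustration, and the full reasoning is in the blog post itself:

```yaml
# Minimal illustration only - names, image and storage class are assumptions.
# Each StatefulSet replica gets its own PersistentVolumeClaim via
# volumeClaimTemplates, unlike a Deployment where replicas share one PVC.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: demo-db
spec:
  serviceName: demo-db
  replicas: 2
  selector:
    matchLabels:
      app: demo-db
  template:
    metadata:
      labels:
        app: demo-db
    spec:
      containers:
        - name: demo-db
          image: redis:4.0
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: gp2   # assumes an AWS EBS storage class exists
        resources:
          requests:
            storage: 1Gi
```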
Conduit v0.4.0 Released
I wrote about Kubernetes service meshes a few months ago. In that post, I explained what a service mesh is and which problems it tries to solve. I also mentioned Conduit and Istio, probably the two most popular solutions on Kubernetes. While Istio is more popular, has more features and seems like a better deal, not all users are happy with it. I found an interesting comment on Reddit where one user described a bad experience with Istio so far.
I have a feeling that Conduit will become my choice when it comes to service meshes. Stability and ease of use will probably matter more than the sheer number of features, where Istio is much stronger at the moment. Also, don't forget that Conduit is a younger project.
Last week the new Conduit version was released, and the updated roadmap looks really good.
k8s Spot Rescheduler
I was trying to solve the problem of managing multiple auto scaling groups on AWS, where one group runs on-demand instances and the others run spot instances. Once you scale up the spot instance group, you want to move pods off the on-demand instances so that their group can scale down. Of course, this scaling is automatic with Cluster Autoscaler. I stumbled upon the k8s spot rescheduler, which does exactly that, and I also created a Helm chart for easier deployment. The Helm chart is not yet merged into the official charts, but deploying the k8s spot rescheduler is easy, and I encourage you to try it if you want to solve the same problem I did.
Testing the k8s spot rescheduler by @pusher and it works nicely. Also sent a pull request for a Helm chart https://t.co/b6BHYdaQCn
— Alen Komljen (@alenkomljen) April 12, 2018
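If you want to try the same setup, the main prerequisite is that the two node groups can be told apart, which is normally done with node labels. The label key below is illustrative, not the project's default, so check the k8s spot rescheduler README for the exact values it expects. A preferred node affinity like this one nudges workloads onto spot nodes whenever capacity is available, while still allowing them to fall back to on-demand nodes:

```yaml
# Sketch only - the label key is an assumption, not the rescheduler's default.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      affinity:
        nodeAffinity:
          # Prefer (but do not require) nodes labeled as spot instances.
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              preference:
                matchExpressions:
                  - key: node-role.kubernetes.io/spot-worker
                    operator: Exists
      containers:
        - name: web
          image: nginx:1.13
```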
Vault Operator by CoreOS
Once again CoreOS has created a new operator, this time for deploying Vault. I have mentioned operators a couple of times already, and I'm even more convinced that they are the future of how we deal with complex stateful applications on top of Kubernetes. I didn't have time to try it out yet, but I'm looking forward to it. Great job, CoreOS!
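The idea behind an operator is that you describe the stateful application with a custom resource and let the controller handle the operational details. From memory of the project README, a Vault cluster managed by the operator is declared with something like the manifest below; treat the API group, kind and fields as assumptions and verify them against the vault-operator documentation before use.

```yaml
# Illustrative only - apiVersion, kind and fields are recalled from the
# vault-operator README and may not match the project exactly.
apiVersion: vault.security.coreos.com/v1alpha1
kind: VaultService
metadata:
  name: example-vault
  namespace: default
spec:
  nodes: 2            # number of Vault nodes the operator should run
  version: "0.9.1-0"  # Vault image version managed by the operator
```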
Kubernetes Security Guide by Sysdig
Of course, security is important, especially if you are running a Kubernetes cluster. Most of the guides out there that show you how to spin up a Kubernetes cluster ASAP don't include any security hardening. And that is OK, because who would try a new piece of software if it meant running hundreds of commands and following a huge guide just to get started? Once you know what you want from your cluster, you should take care of security. There is a lot of material online, but Sysdig recently published a great Kubernetes security guide.
#Kubernetes in-depth #security guide: RBAC, API certificates, Pod Security Policy, network policies and much more. Stay tuned for additional chapters over the next few weeks. https://t.co/9zbVioDWFk pic.twitter.com/670mIxQNXi
— Sysdig (@sysdig) April 9, 2018
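To give a flavor of what the guide walks through, RBAC is a good place to start: instead of handing out cluster-admin, you grant narrowly scoped permissions per namespace. Here is a minimal sketch of a read-only role and its binding; the namespace, names and subject are placeholders, not anything from the guide itself:

```yaml
# Illustration only - namespace, names and subject are placeholders.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: dev
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: dev
subjects:
  - kind: User
    name: jane            # placeholder user from your identity provider
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```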
Cilium 1.0
If you haven't heard about the Cilium CNI, you should definitely read more about it. Last week they released version 1.0. This is a big deal because most CNIs rely on iptables, which was not designed for huge numbers of containers. BPF is a game changer, and Cilium has made BPF-based networking ready for Kubernetes. I'm looking forward to trying it soon.
Announcing Cilium 1.0: Bringing the BPF Revolution to Kubernetes Networking and Security
- BPF-based networking, security & load balancing
- Identity based security at packet & API call level
- Service mesh datapath
- CNI/CMM plugin https://t.co/nX2BRQRBby pic.twitter.com/akjmVgz1Ey
— Cilium (@ciliumproject) April 24, 2018
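What makes Cilium interesting beyond raw performance is that policies can go up to the API call level, not just IPs and ports. The sketch below follows the structure of the Cilium documentation examples; the labels, port and path are made up, and you should double-check the CRD version against the release you install:

```yaml
# Illustration only - labels, port and path are placeholders; verify the CRD
# version (cilium.io/v2 around the 1.0 release) against your installation.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-get-public
spec:
  endpointSelector:
    matchLabels:
      app: backend
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend
      toPorts:
        - ports:
            - port: "80"
              protocol: TCP
          rules:
            http:
              - method: GET
                path: "/public"
```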