kubernetes, service mesh

Kubernetes Service Mesh

A few months ago my colleague asked me what I think about integrating Linkerd into our new application running on Kubernetes. My first thought was, heck, aren't Kubernetes Services and Ingress enough? You can do a lot of stuff with them. Having a service mesh seemed like overhead to me. You often have some API which is available only on the internal network. But this is not the case anymore with modern apps. The API is probably exposed to the Internet as well, and you will get a lot of traffic to it. You want more control over the traffic that goes to this API. Maybe you want to support many API versions, do canary deployments, and watch and keep track of each request that comes in. This is where a service mesh comes into play. It doesn't matter whether you want to use Linkerd, Istio, or the recently announced Conduit; the principles are almost the same.

Why service mesh?

A service mesh is not something that came up with Kubernetes, but Kubernetes clearly makes it easier to integrate one into your environment. Two logical components make up a service mesh. We already have pods, which are designed to run many containers, and a sidecar is the perfect example of a container that extends and enhances the main container in a pod. In a service mesh, the sidecar is the service proxy, or data plane.
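
As a rough sketch, here is what a pod with a service-proxy sidecar next to the main container looks like. The names and images are placeholders, not a real mesh; in practice the sidecar container is usually injected for you:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    # The main application container
    - name: app
      image: example/my-app:1.0
      ports:
        - containerPort: 8080
    # The service proxy (data plane) sidecar, deployed next to the app;
    # it intercepts traffic going in and out of the pod
    - name: proxy
      image: example/service-proxy:0.1
```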

Service mesh is a critical component of cloud-native.

To better understand the service mesh, you need to understand the terms proxy and reverse proxy. A proxy, in a nutshell, receives traffic and forwards it somewhere else. A reverse proxy receives traffic from many clients and forwards it to various services; all the clients talk to one proxy instance. Think of the data plane as a reverse proxy. Ingress is also one of those proxies, used to expose services in Kubernetes. It is a great technology, but you can't do that much with it. Ingress can terminate SSL and apply some rewrite rules, and that is pretty much it. The same is true of Kubernetes Services. What if you want to do some more complicated routing?
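
For comparison, here is roughly the full extent of what Ingress gives you: TLS termination plus a rewrite rule. This is a minimal sketch assuming the NGINX ingress controller; the host, secret, and service names are made up:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    # Rewrite the matched path before it reaches the backend
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
    - hosts:
        - api.example.com
      secretName: api-tls    # SSL terminates here, at the Ingress
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /api
            backend:
              serviceName: api-svc
              servicePort: 80
```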

Here are a few other things that a service mesh is capable of:

  • Load balancing
  • Fine-grained traffic policies
  • Service discovery
  • Service monitoring
  • Tracing
  • Routing
  • Secure service-to-service communication
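
To make the fine-grained traffic policies concrete, here is a sketch of a canary deployment with an Istio VirtualService that sends 10% of traffic to a new version. The service name is made up, and the v1/v2 subsets are assumed to be defined in a matching DestinationRule:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: api-canary
spec:
  hosts:
    - api-svc
  http:
    - route:
        # 90% of requests keep going to the stable version
        - destination:
            host: api-svc
            subset: v1
          weight: 90
        # 10% of requests are routed to the canary
        - destination:
            host: api-svc
            subset: v2
          weight: 10
```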

Besides sidecar proxies, all service mesh solutions have some kind of controller which defines how the sidecar containers should work. The service mesh control plane is the central place to manage the mesh and the service proxies. Because the control plane records a lot of network information, it is also a network monitoring tool.

So, why a service mesh? The answer is simple: you can do all of the above without making changes to your code. It saves time and money. And, most importantly, you will not skip some testing because it is too complicated to begin with. You can even simulate different scenarios for how your service will react to failures with features like Istio fault injection.
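
For example, here is a sketch of Istio fault injection that delays half of the requests to a (made up) service, so you can watch how its clients handle slowness:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: api-fault
spec:
  hosts:
    - api-svc
  http:
    - fault:
        delay:
          percent: 50       # inject the fault into half of the requests
          fixedDelay: 5s    # each affected request is delayed 5 seconds
      route:
        - destination:
            host: api-svc
```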

Conduit and Istio

At the beginning, I mentioned a few great solutions for creating a service mesh on Kubernetes. In the future, there might be many others. Each product I mentioned tries to solve these problems in its own way, and they overlap in some areas, of course. Let's focus only on Conduit and Istio.

Buoyant, the company that created Linkerd, also created the Conduit service mesh, and probably the same team works on both. Why another service mesh from the same company? Because Buoyant created Linkerd with more than just Kubernetes in mind, so it is the more generic solution. Also, it runs on the JVM, which means it can be heavy. Remember, each pod gets one more container, a sidecar. So they started to work on Conduit, which is designed specifically for Kubernetes. Conduit is written in Go (the control plane) and Rust (a native service proxy) to be ultra-lightweight, fast, and secure. You can define retries and timeouts, get instrumentation and encryption (TLS), and allow or deny requests according to the relevant policy. Also, it comes with a nice dashboard:


Or, if you prefer command-line tools, you can list some stats:

⚡  conduit stat deployments
emojivoto/emoji-svc          1.1rps        100.00%           0ms           0ms
emojivoto/voting-svc         0.2rps        100.00%           0ms           0ms

The Conduit getting started guide is great, so please try it yourself. To learn more about Conduit, check the official docs.

Istio currently supports Kubernetes and Nomad, with more to come in the future. Istio is more comparable to Linkerd than to Conduit because it is multi-platform. It manages traffic flow across microservices, enforces policies, and aggregates telemetry data. Like Conduit, Istio is written in Go to be lightweight, but unlike Conduit, it employs Envoy as the service proxy. To see how everything fits together, check the Istio architecture diagram:


What I like about Istio is the support for automatic sidecar injection. Chances are that you already use Helm to deploy your apps, so injecting the sidecar manually into Kubernetes config files is not an option. To install Istio on Kubernetes, check the quick start guide. For all other information about Istio, please check the official docs.
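
With manual injection, you run your manifests through istioctl kube-inject before applying them. With automatic injection, you just mark the namespace and Istio's sidecar injector adds the proxy to every new pod in it. A sketch of the namespace label (the namespace name is made up):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-apps
  labels:
    # Istio's sidecar injector webhook watches for this label and
    # injects the Envoy sidecar into every pod created here
    istio-injection: enabled
```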

Both products are open source, too. Whichever service mesh better suits your needs, they are both pretty easy to try. You will not spend more than five minutes getting things running. I encourage you to try both. Istio at this point can do a lot more than Conduit. Also, don't forget that both are in the Alpha/Beta phase, so they will probably change over time, with more features to come.


I hope I gave you a nice introduction to service meshes. This post wasn't meant to be a comparison between Linkerd, Conduit, and Istio. I listed some of their features so you can get an idea of what a service mesh brings to the table. I would say the future is bright, and I can't wait to start using this for cloud-native application deployments.


Alen Komljen

Building and automating infrastructure with Docker, Kubernetes, kops, Helm, Rancher, Terraform, Ansible, SaltStack, Jenkins, AWS, GKE and many others.
  • Sarajevo