Most users, when starting to learn Kubernetes, will at some point need to expose resources outside the cluster. It is like a Hello World example in the Kubernetes world, and in most cases the solution is an ingress controller. Think of ingress as a reverse proxy: it sits between a Kubernetes service and the Internet and provides name-based routing, SSL termination, and other goodies. When approaching this problem, users often choose Nginx, and the reason is simple: it is all over the place, and almost every article about ingress refers to it, mostly because Nginx has been around almost from the start. I was referring to it in my blog posts as well. But the situation is quite different today, as we have some great alternatives. Welcome to the Heptio Contour ingress controller.
The Rise of CRDs
Before talking about Contour and how it differs from Nginx, or any other "standard" ingress controller, I have to mention Custom Resource Definitions, or CRDs. I mention them a lot on this blog, but you really need to appreciate how easy it is to extend Kubernetes with custom resources.
As the name suggests, with custom resources you can define additional objects and extend your Kubernetes cluster with new features. The Contour team did a great job introducing the IngressRoute object, which doesn't depend on the standard ingress. I encourage you to take a look at the design doc to learn more. This means that the team behind Contour can extend its functionality without depending on the whole community, while at the same time giving us new ideas. In the end, we can expect that some of those things will end up in upstream Kubernetes as well. Maybe an ingress v2 😉.
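Once Contour is deployed (more on that below), the new type behaves like any built-in API resource. A quick way to confirm the CRD is registered, assuming the standard plural name:

$ kubectl get crd ingressroutes.contour.heptio.com
$ kubectl get ingressroute --all-namespaces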
Deployment
I created a Helm chart for the Contour deployment. The chart installs Contour and the Envoy proxy as a single Deployment, with both running in the same pod. We could have those separate, or even run Envoy as a DaemonSet; maybe I will add that as an option to the Helm chart later. I know, I also need to add a README.
Some notes:
- If you are running on-premises, you can expose the Envoy proxy as a NodePort service and then access your services on each k8s node (a sketch of this follows these notes).
- When running in the cloud, you will have an additional component that sits between the Envoy proxy and the Internet: a load balancer. If you are running on AWS, the preferred load balancer is the NLB which, compared to the classic ELB, doesn't terminate the connection and has lower latency. Also, it is cheaper.
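For the on-premises case, one way to switch to a NodePort is a simple patch; this is a sketch that assumes you would rather patch the service than wait for a chart option, and it reuses the heptio-contour service name shown later in this post:

# switch the Envoy service to NodePort and print the allocated port
$ kubectl patch svc heptio-contour -n ingress -p '{"spec": {"type": "NodePort"}}'
$ kubectl get svc heptio-contour -n ingress -o jsonpath='{.spec.ports[0].nodePort}'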
Let's deploy the Contour ingress controller with the Envoy proxy, using an NLB since my cluster is running on AWS:
$ helm repo add akomljen-charts https://raw.githubusercontent.com/komljen/helm-charts/master/charts/
$ helm install --name heptio \
--namespace ingress \
--set proxy.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-type"=nlb \
akomljen-charts/contour
$ kubectl get pod -n ingress --selector=app=contour
NAME READY STATUS RESTARTS AGE
heptio-contour-7b7694f98d-cxfnx 2/2 Running 0 1m
NOTE: If you are running k8s v1.9 or lower, NLB will not work! More info here.
If everything goes well, you should have an ELB/NLB provisioned for your cluster. You can get its address with:
$ kubectl get svc heptio-contour -o jsonpath='{.status.loadBalancer.ingress[0].hostname}' -n ingress
a00950ebcfd0411e740ee0207cf10ce8-1089949860.eu-west-1.nlb.amazonaws.com
And then use this address to create a wildcard DNS record *.test.example.com in Route53.
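If you prefer the AWS CLI over the console, here is a minimal sketch; the hosted zone ID is a placeholder you need to replace, and a plain CNAME works here just as well as a Route53 alias record:

$ cat > wildcard.json <<EOF
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "*.test.example.com",
      "Type": "CNAME",
      "TTL": 300,
      "ResourceRecords": [{"Value": "a00950ebcfd0411e740ee0207cf10ce8-1089949860.eu-west-1.nlb.amazonaws.com"}]
    }
  }]
}
EOF
$ aws route53 change-resource-record-sets --hosted-zone-id <your-zone-id> --change-batch file://wildcard.json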
NOTE: ExternalDNS is a project you might want to look at, but it is out of scope for this post; the above wildcard DNS record will be fine for ingress testing.
Example Workloads
You can now run different workloads and use ingress route objects to create ingress rules. Of course, standard ingress is also supported. Let's test a few examples. First I need to run a test app. I will create a simple web app based on the dockersamples/static-site docker image. This is an Nginx container that displays a unique name, which will help us identify which app we are accessing. Let's create a deployment:
$ cat > blog.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blog
spec:
  selector:
    matchLabels:
      app: blog
  replicas: 3
  template:
    metadata:
      labels:
        app: blog
    spec:
      containers:
      - name: blog
        image: dockersamples/static-site
        env:
        - name: AUTHOR
          value: blog
        ports:
        - containerPort: 80
EOF
And a service:
$ cat > s1.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  labels:
    app: blog
  name: s1
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: blog
EOF
Let's create both:
$ kubectl apply -f blog.yaml -f s1.yaml
$ kubectl get po --selector=app=blog
NAME READY STATUS RESTARTS AGE
blog-5d4d466cc7-4vh9l            1/1       Running   0          10s
blog-5d4d466cc7-fj489            1/1       Running   0          10s
blog-5d4d466cc7-wndhn            1/1       Running   0          10s
Ok, so the app is running and we can expose it now. Let's say I want to have this service available on app.test.example.com:
$ cat > main.yaml <<EOF
apiVersion: contour.heptio.com/v1beta1
kind: IngressRoute
metadata:
  name: main
spec:
  virtualhost:
    fqdn: app.test.example.com
  routes:
  - match: /
    services:
    - name: s1
      port: 80
EOF
$ kubectl apply -f main.yaml
$ kubectl get ingressroute main -o jsonpath='{.status.currentStatus}'
valid
If you try to access app.test.example.com, you should get the static site page displaying the AUTHOR value.
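You can also check from the command line without waiting for DNS to propagate, by pointing curl at the NLB address directly; the grep pattern is an assumption about the page content, since the image renders the AUTHOR value:

$ curl -s -H 'Host: app.test.example.com' \
    http://a00950ebcfd0411e740ee0207cf10ce8-1089949860.eu-west-1.nlb.amazonaws.com/ | grep blog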
Nothing special here, but let's try a different path now. Instead of match: / set match: /blog and apply the changes. If you try to access app.test.example.com/blog it will not work. This is expected, because the service itself doesn't serve the /blog path. You can resolve this issue by rewriting the prefix to /. Just add prefixRewrite: "/" and apply the changes again:
spec:
  routes:
  - match: /blog
    prefixRewrite: "/"
    services:
    - name: s1
      port: 80
Now it should work again. The big difference, compared to the standard ingress object, is the ability to set a prefix rewrite per route. This is not possible with Nginx because it configures rewrites through annotations, which apply to the whole ingress object. You could do some workaround, but it's messy.
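For contrast, here is a sketch of roughly the same rule with the Nginx ingress controller; note how the rewrite-target annotation lives on the object's metadata and therefore applies to every path in it:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: blog
  annotations:
    # applies to all paths below, not per route
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: app.test.example.com
    http:
      paths:
      - path: /blog
        backend:
          serviceName: s1
          servicePort: 80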
All the above is not much different from standard ingress. The key features of the ingress route are:
- Better support for multi-team Kubernetes clusters
- Delegation of routing configuration for a path or namespace
- Multiple services within a single route
- Support for defining service weighting and load-balancing strategy, with no annotations (see the sketch after this list)
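Here is that sketch: a hedged example of per-service weights and a load-balancing strategy on a single route, based on the IngressRoute spec; the 90/10 split and the canary host name are made up for illustration:

apiVersion: contour.heptio.com/v1beta1
kind: IngressRoute
metadata:
  name: canary
spec:
  virtualhost:
    fqdn: canary.test.example.com
  routes:
  - match: /
    services:
    - name: s1
      port: 80
      weight: 90
    - name: s2
      port: 80
      weight: 10
      # strategy is set per upstream service
      strategy: WeightedLeastRequest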
Probably the most interesting Contour feature is the ability to delegate one route to another. Basically, you can connect multiple ingress route objects to work as one. In the above example, you might want to delegate the / path to another ingress route object. That object can also live in a different namespace.
Let's create another deployment, app2, this time in the test namespace:
$ cat > app2.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app2
spec:
  selector:
    matchLabels:
      app: app2
  replicas: 3
  template:
    metadata:
      labels:
        app: app2
    spec:
      containers:
      - name: app2
        image: dockersamples/static-site
        env:
        - name: AUTHOR
          value: app2
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: app2
  name: s2
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: app2
EOF
$ kubectl apply -f app2.yaml -n test
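Note that the test namespace has to exist before the apply; create it with kubectl create namespace test if needed. A quick sanity check that the pods landed where expected:

$ kubectl get po -n test --selector=app=app2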
I already have the ingress route main in the default namespace, and now I want that ingress route to delegate the / path to an ingress route in the test namespace:
$ cat > delegate-from-main.yaml <<EOF
apiVersion: contour.heptio.com/v1beta1
kind: IngressRoute
metadata:
  name: delegate-from-main
spec:
  routes:
  - match: /
    services:
    - name: s2
      port: 80
EOF
$ kubectl apply -f delegate-from-main.yaml -n test
$ kubectl get ingressroute delegate-from-main -o jsonpath='{.status.currentStatus}' -n test
orphaned
As you can see, the status is orphaned because this ingress route doesn't define a virtual host. The last step is to edit the existing main ingress route in the default namespace and add a delegate rule:
apiVersion: contour.heptio.com/v1beta1
kind: IngressRoute
metadata:
  name: main
spec:
  virtualhost:
    fqdn: app.test.example.com
  routes:
  - match: /blog
    prefixRewrite: "/"
    services:
    - name: s1
      port: 80
  - match: /
    delegate:
      name: delegate-from-main
      namespace: test
And if you check the status of the new ingress route, it has changed from orphaned to valid. Finally, app.test.example.com will point to app2 and app.test.example.com/blog to blog.
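The same jsonpath queries as before confirm it:

$ kubectl get ingressroute main -o jsonpath='{.status.currentStatus}'
valid
$ kubectl get ingressroute delegate-from-main -n test -o jsonpath='{.status.currentStatus}'
valid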
There are other interesting features that I didn't cover here (a sketch of two of them follows this list):
- The ability to run health checks from the Envoy proxy (completely separate from k8s health checks)
- The ability to add weights to different routes (canary deployments)
- Support for different load-balancing strategies
- WebSocket support
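As a taste of the first and last item, here is a hedged sketch of a route with an Envoy health check and WebSockets enabled, based on the IngressRoute spec; the /healthz path is an assumption about the backend:

spec:
  routes:
  - match: /
    enableWebsockets: true
    services:
    - name: s2
      port: 80
      healthCheck:
        # assumes the backend serves a health endpoint at /healthz
        path: /healthz
        intervalSeconds: 5
        timeoutSeconds: 2
        unhealthyThresholdCount: 3
        healthyThresholdCount: 5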
So, is there anything missing? Most users rely on automatic Let's Encrypt SSL with cert-manager. Unfortunately, cert-manager will not work with the ingress route object yet. For more details please check this issue. In any case, you can still use Contour with standard ingress objects and have SSL (a sketch of that follows).
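For completeness, a minimal sketch of such a standard ingress with TLS; it assumes cert-manager's ingress-shim is running with a default issuer configured, and the annotation and secret name here are illustrative:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: blog-tls
  annotations:
    # assumes cert-manager ingress-shim picks this up and issues the cert
    kubernetes.io/tls-acme: "true"
spec:
  tls:
  - hosts:
    - app.test.example.com
    secretName: app-test-example-com-tls
  rules:
  - host: app.test.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: s1
          servicePort: 80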
Summary
I hope I gave you some ideas to consider when evaluating Contour as your default ingress controller and embracing the ingress route object. The more we use it, the better it gets. Stay tuned for the next one!