Connecting Kubernetes clusters together with Istio

Chris Haessig
5 min read · Jul 1, 2021

Update ( 11/1/21 ): Istio’s multicluster mode is a better way to set this up. I created a blog about it here.

After a couple of years of running large Kubernetes clusters with 1000+ nodes, things started to slow down for us. The Kubernetes API responded slower and slower, and it was tough to manage so many namespaces with hundreds of deployments, RBAC rules, secrets and configs. After getting some help from AWS, we agreed we needed to start breaking our deployments off into dedicated EKS clusters.

But how do we do this with the Istio we run in production? Istio is great because all traffic can use STRICT mTLS, meaning no packet reaches its destination unless the correct certificate is presented and validated. We can literally expose an open-to-the-world load balancer and no one can scrape the endpoint without presenting the proper certificates... sweet, we want to use this to connect many clusters together.

Getting started

To get this working, I will use the namespace named “secure” in both clusters. I created two EKS clusters in AWS on different VPCs peered together, and also created an AWS NLB between them. When installing Istio, make sure to use the same CA for both clusters, otherwise workloads in one cluster will not trust certificates issued in the other.
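I won't cover generating the certificates here, but as a rough sketch of Istio's plugin-CA approach (the file names below are the ones Istio's docs use; adjust them to whatever you generated), you would load intermediates that chain up to the same root into istio-system on both clusters before installing Istio:

# Create the cacerts secret that istiod signs workload certs from.
# Run this against BOTH clusters, using intermediate certs that
# chain up to the same root-cert.pem.
kubectl create namespace istio-system
kubectl -n istio-system create secret generic cacerts \
  --from-file=ca-cert.pem \
  --from-file=ca-key.pem \
  --from-file=root-cert.pem \
  --from-file=cert-chain.pem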

You can enable STRICT mTLS per namespace (of course this blog assumes you are already running Istio on your Kubernetes clusters; I am running 1.8.5).

I first create an NLB with a static IP address, which we can do by setting the aws-load-balancer-eip-allocations annotation on the Service. We don’t need to use an NLB (it could be a NodePort, for example), but it makes auto discovery easier for us in the future.

# Enable strict mTLS on namespace
---
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: secure
spec:
  mtls:
    mode: STRICT
# Create NLB
---
apiVersion: v1
kind: Service
metadata:
  name: cluster-load-balancer
  namespace: istio-system
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    service.beta.kubernetes.io/aws-load-balancer-type: nlb-ip
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: "TCP"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: "8443"
    service.beta.kubernetes.io/aws-load-balancer-eip-allocations: "eipalloc-<your id>"
spec:
  ports:
  - port: 443
    targetPort: 8443
    protocol: TCP
  type: LoadBalancer
  selector:
    app: istio-ingressgateway

Let’s log in to the destination cluster and set it up first.
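If both clusters live in the same kubeconfig, that is just a context switch (the context names here are made up):

# Point kubectl at the destination cluster (cluster 2)
kubectl config get-contexts
kubectl config use-context cluster-2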

We add a Gateway resource to accept HTTPS traffic on port 8443; this traffic will be coming from Istio on the source cluster.

# Gateway CRD on destination
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: mtls
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 8443
      name: https
      protocol: HTTPS
    hosts:
    - "*"
    tls:
      mode: ISTIO_MUTUAL

I need a test backend to connect to, so I will launch an nginx pod and a Kubernetes service that points to it.

kubectl -n secure create deploy nginx --image=nginx --port=80
kubectl -n secure expose deploy nginx --port 80

Once the traffic hits the gateway, the virtual service will take over and route to nginx on port 80.

---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: nginx
  namespace: secure
spec:
  gateways:
  - mtls
  hosts:
  - newlb.secure.svc.cluster.local
  http:
  - route:
    - destination:
        host: nginx.secure.svc.cluster.local
        port:
          number: 80

Everything on the destination cluster should be good to go.
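If you want to sanity check the destination side, one option (assuming istioctl is installed and matches your 1.8.x control plane) is to confirm the ingress gateway actually opened the 8443 listener:

# Grab the ingress gateway pod name and dump its listener on 8443
INGRESS_POD=$(kubectl -n istio-system get pod -l istio=ingressgateway \
  -o jsonpath='{.items[0].metadata.name}')
istioctl -n istio-system proxy-config listeners $INGRESS_POD --port 8443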

Configuring the source cluster.

What we want Istio to do is encrypt any traffic destined for the newlb.secure.svc.cluster.local service and send it to the secondary cluster.

We can do this by creating a headless service and an Endpoints object pointing to the IP address of the NLB we just created (this is why it needed to be static), which will then forward to the destination cluster. The path looks like this: pod -> k8s service / Endpoints -> NLB -> (cluster 2) ingress controller on 8443 -> nginx on 80.

First enable STRICT mTLS on the secure namespace.

Then create the service and endpoint.

# Enable strict mTLS on namespace secure
---
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: secure
spec:
  mtls:
    mode: STRICT
# headless service
---
apiVersion: v1
kind: Service
metadata:
  name: newlb
  namespace: secure
spec:
  ports:
  - name: https
    port: 443
---
apiVersion: v1
kind: Endpoints
metadata:
  name: newlb
  namespace: secure
subsets:
- addresses:
  - ip: < ip of the NLB >
  ports:
  - name: https
    port: 443
    protocol: TCP

We then create a destination rule on the source cluster to apply mTLS to the packets that leave the pod.

---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: newlb
  namespace: secure
spec:
  host: newlb.secure.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL

Inspecting traffic to make sure we are protected.

But what does this rule actually mean? ISTIO_MUTUAL tells the client sidecar to originate mutual TLS, using Istio-issued certificates, for any traffic it sends to newlb.secure.svc.cluster.local:

---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: newlb
  namespace: secure
spec:
  host: newlb.secure.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL

Let’s do a quick test.

Running ngrep on the destination ingress controller, we can see the traffic coming in encrypted, because that is what the destination rule is applying.
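For reference, the capture was something along these lines, run from a node (or a privileged debug pod) that can see the ingress gateway traffic; ngrep being available there is an assumption:

# Sniff traffic arriving on the gateway port. With ISTIO_MUTUAL the payload
# is TLS gibberish; with the DISABLE test below you can read the plain HTTP.
ngrep -d any -W byline '' 'tcp port 8443'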

If I disable the traffic policy on the source cluster (just as a test), I see the traffic coming in as clear text.

---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: newlb
  namespace: secure
spec:
  host: newlb.secure.svc.cluster.local
  trafficPolicy:
    tls:
      mode: DISABLE

So we know the rule is working. Make sure to revert the TLS mode back to ISTIO_MUTUAL.

Testing it out !

We should now be able to access the nginx webserver from the source cluster (cluster 1). The packets will leave the pod encrypted, go through the NLB, be processed by the destination (cluster 2) ingress controller, and get handed off to nginx, which runs with strict mTLS. The whole transit is protected by mTLS from prying eyes.

I launch a test pod and run curl newlb.secure.svc.cluster.local:443, and nginx returns successfully!
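For reference, a quick way to run that test (assuming sidecar injection is enabled on the secure namespace, since it is the client’s Envoy sidecar that applies the DestinationRule and originates the mTLS):

# Throwaway curl pod in the secure namespace
kubectl -n secure run curl-test -it --rm --image=curlimages/curl -- sh
# Inside the pod: the sidecar upgrades this plain HTTP call to mTLS
curl -v newlb.secure.svc.cluster.local:443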

Profit ?
