Multi-cluster setup with Istio

Chris Haessig
Oct 18, 2021

As everyone knows, Istio is a great service mesh. The power it adds to Kubernetes means I would never even consider running k8s again without it.

It is becoming more and more common to run multiple Kubernetes clusters, though. How do we connect them together? This post has you covered.

Creating the certificates

mTLS is almost a default now in Istio (not really, but you should seriously enable it). It’s important that Istio shares the same CA certificates with all the other clusters we will connect. This keeps the encryption between clusters happy when we set it all up.

Using openssl, we can create the CA here.

openssl req -newkey rsa:2048 -nodes -keyout root-key.pem -x509 -days 36500 -out root-cert.pem
openssl genrsa -out ca-key.pem 2048
openssl req -new -key ca-key.pem -out ca-cert.csr -sha256
openssl x509 -req -days 36500 -in ca-cert.csr -sha256 -CA root-cert.pem -CAkey root-key.pem -CAcreateserial -out ca-cert.pem -extensions v3_req
cp ca-cert.pem cert-chain.pem

Upload our certs to Kubernetes in all clusters before we even install Istio:

kubectl create secret generic cacerts -n istio-system \
--from-file=cluster1/ca-cert.pem \
--from-file=cluster1/ca-key.pem \
--from-file=cluster1/root-cert.pem \
--from-file=cluster1/cert-chain.pem
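
Before uploading, it’s worth a quick sanity check that the intermediate cert really chains up to the root (optional, but cheap):

# Verify ca-cert.pem was signed by root-cert.pem; should print "ca-cert.pem: OK"
openssl verify -CAfile root-cert.pem ca-cert.pem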

Approach #2

If you are lazy like me, you can also skip generating the certs manually. Just read the cacerts secret in k8s from cluster 1 after installing Istio and copy it to cluster 2. Boom, same CA.
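
A rough sketch of that copy (context names are placeholders, and the jq step just strips cluster-specific metadata so the apply works cleanly):

# Export the CA secret from cluster 1, drop metadata specific to that
# cluster, then create it on cluster 2
kubectl --context="<context_name_1>" -n istio-system get secret cacerts -o json \
  | jq 'del(.metadata.uid, .metadata.resourceVersion, .metadata.creationTimestamp)' \
  | kubectl --context="<context_name_2>" -n istio-system apply -f -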

Installing Istio

Go to the Istio install page and grab istioctl. We will use the default profile but need to add a couple more options. I am using Istio version 1.11 for macOS. Of course, use whichever version you wish.

Untar the downloaded file, then go into manifests/profiles/. Let’s edit the default.yaml profile. Profiles, just like they sound, are a way to group settings together.
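
If you want to see every knob a profile exposes before editing, istioctl can dump one for you:

# Dump the built-in default profile to stdout for reference
istioctl profile dump default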

We need to tell Istio to use multi-cluster mode. We also want to give it some unique cluster information so it knows which cluster it is in.

Cluster 1

Make the changes below to default.yaml (or whatever you named your profile). We need to change the clusterName key for each cluster.

...
meshConfig:
  defaultConfig:
    proxyMetadata:
      ISTIO_META_DNS_CAPTURE: "true"
      ISTIO_META_DNS_AUTO_ALLOCATE: "true"
...
global:
  meshID: mesh1
  multiCluster:
    enabled: true
    clusterName: cluster1
  network: network1
...

Cluster 2

...
meshConfig:
  defaultConfig:
    proxyMetadata:
      ISTIO_META_DNS_CAPTURE: "true"
      ISTIO_META_DNS_AUTO_ALLOCATE: "true"
...
global:
  meshID: mesh1
  multiCluster:
    enabled: true
    clusterName: cluster2
  network: network1
...

Now we can install Istio on both clusters.

# cluster 1 (default1.yaml is where our edited profile is defined)
istioctl manifest install -f default1.yaml
# cluster 2
istioctl manifest install -f default2.yaml

Make sure Istio is running on both clusters without failures.

kubectl -n istio-system get pods
NAME                                    READY   STATUS    RESTARTS   AGE
istio-egressgateway-6c9c945447-wq4qf    1/1     Running   0          46h
istio-ingressgateway-6f9c7ffd8b-fsfqv   1/1     Running   0          46h
istiod-67fb45b754-dktvh                 1/1     Running   0          46h

Create remote secrets on both clusters

Both of these Kubernetes clusters are in the same VPC in this example; make sure port 443 is open so the two clusters can talk to each other’s API servers.
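
A quick sanity check, assuming you can reach the VPC (the API server address is a placeholder):

# Confirm cluster 2's API server answers on 443; /version is readable
# anonymously on default cluster configs
curl -k https://<cluster2-api-server>:443/version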

Running the commands below, we create the remote secrets Istio needs: cluster 1 gets credentials to talk to cluster 2 via its API server, and vice versa. Now all services on both clusters will know about each other.

bin/istioctl x create-remote-secret \
   --context="<context_name_1>" \
   --name=cluster1 | \
   kubectl apply -f - --context="<context_name_2>"

bin/istioctl x create-remote-secret \
   --context="<context_name_2>" \
   --name=cluster2 | \
   kubectl apply -f - --context="<context_name_1>"

If you read the secret that we generated, it becomes clear what happened.

kubectl -n istio-system get secret istio-remote-secret-<name>  -o yaml

We can see cluster 1 has all the info to talk to cluster 2 and can do service discovery, and vice versa.
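
Trimmed down, the secret looks something like this (values elided; the embedded kubeconfig is what istiod uses to watch the remote API server):

apiVersion: v1
kind: Secret
metadata:
  name: istio-remote-secret-cluster2
  namespace: istio-system
  labels:
    istio/multiCluster: "true"
  annotations:
    networking.istio.io/cluster: cluster2
stringData:
  cluster2: |
    # a full kubeconfig for cluster 2's API server lives here
    apiVersion: v1
    kind: Config
    ...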

Launch some deployments on both sides.

Label the namespaces for Istio injection. We will use the namespace test on cluster 1 and secure on cluster 2.

# Cluster 1
kubectl label namespace test istio-injection=enabled --overwrite
# Cluster 2
kubectl label namespace secure istio-injection=enabled --overwrite
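
Small assumption here: the test and secure namespaces already exist. If these are fresh clusters, create them first:

# Create the namespaces before labeling (context names as before)
kubectl --context="<context_name_1>" create namespace test
kubectl --context="<context_name_2>" create namespace secure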

Then we will launch some test deployments on both clusters.

# Cluster 1
kubectl -n test create deploy shell --image=nginx --port 80
# Cluster 2
kubectl -n secure create deploy lax --image=nginx --port 80
kubectl -n secure expose deploy lax

Looking at the istiod pod logs, we should see that cluster 1 saw the lax endpoint we created on cluster 2.

istiod-67fb45b754-dktvh discovery 2021-10-18T07:58:08.700207Z info ads Full push, new service secure/lax.secure.svc.cluster.local
istiod-67fb45b754-dktvh discovery 2021-10-18T07:58:08.700221Z info ads Full push, service accounts changed, lax.secure.svc.cluster.local

Testing it out

Let’s see if we can access the lax service from cluster 1. Keep in mind there is no lax service on cluster 1; the request should hit cluster 2.

# Exec into pod shell on cluster 1.
kubectl -n test exec shell-646599f59c-r2s6r -- curl lax.secure.svc.cluster.local -s
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>

It works!

mTLS enabled

We should enable mTLS on both clusters and see if it still works.

Cluster 1

apiVersion: "security.istio.io/v1beta1"
kind: "PeerAuthentication"
metadata:
  name: mtls
  namespace: test
spec:
  mtls:
    mode: STRICT

Cluster 2

apiVersion: "security.istio.io/v1beta1"
kind: "PeerAuthentication"
metadata:
  name: mtls
  namespace: secure
spec:
  mtls:
    mode: STRICT
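
Save those as files and apply each to its own cluster (the file names here are just what I’d call them):

# Apply the STRICT mTLS policies; mtls-cluster1.yaml / mtls-cluster2.yaml
# are the manifests above, names are arbitrary
kubectl --context="<context_name_1>" apply -f mtls-cluster1.yaml
kubectl --context="<context_name_2>" apply -f mtls-cluster2.yaml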

Still good

kubectl -n test exec shell-646599f59c-r2s6r -- curl lax.secure.svc.cluster.local -s
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>

High Availability

What’s also really cool about this: if you have a k8s service with the same name and namespace on both clusters, traffic can route to both.

I will create a “lax” deploy on cluster 1 as well (remember the secure namespace has to exist, and you’ll want it labeled for injection, on cluster 1 too).

kubectl -n secure create deploy lax --image=nginx --port 80
kubectl -n secure expose deploy lax
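
To convince yourself the endpoints from both clusters actually got merged, you can ask Envoy directly (the pod name is the shell pod from earlier):

# List the endpoints Envoy knows for the lax service; expect pod IPs
# from both clusters
istioctl proxy-config endpoint shell-646599f59c-r2s6r -n test \
  --cluster "outbound|80||lax.secure.svc.cluster.local"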

Feel free to delete one on either cluster and it should still work.

Look into Locality Load Balancing for more advanced controls.

Troubleshooting

  1. I tried this on an existing cluster that did not have this configured originally. These steps all work, but I needed to restart existing pods for the change to take effect. New services seem to work fine after that.
  2. You should see the endpoint discovery in the logs.

Should look like this

istioctl pc all shell-646599f59c-r2s6r -n test | grep lax
lax.secure.svc.cluster.local    80    -    outbound    EDS
172.20.218.71  80    Trans: raw_buffer; App: HTTP    Route: lax.secure.svc.cluster.local:80
172.20.218.71  80    ALL                             Cluster: outbound|80||lax.secure.svc.cluster.local

3. The CA serial should match on both clusters; check it if mTLS is not working.

istioctl pc all <pod> | grep -i ROOTCA
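
The ROOTCA check above works per-pod; you can also pull the CA serial straight out of the cacerts secrets and compare, assuming you created them as at the start of this post:

# The serial printed for each cluster should be identical
kubectl --context="<context_name_1>" -n istio-system get secret cacerts \
  -o jsonpath='{.data.ca-cert\.pem}' | base64 --decode | openssl x509 -noout -serial
kubectl --context="<context_name_2>" -n istio-system get secret cacerts \
  -o jsonpath='{.data.ca-cert\.pem}' | base64 --decode | openssl x509 -noout -serial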

Profit?
