Multi-cluster setup with Istio
As everyone knows, Istio is a great service mesh. The power it adds to Kubernetes means I would never consider running k8s again without it.
It is becoming more and more common to run multiple Kubernetes clusters, though. How do we connect Istio so that all clusters can talk to each other? This post has you covered.
Creating the certificates
mTLS is almost a default now in Istio (not really, but you should seriously enable it). It's important that Istio shares the same CA certificates with every other cluster where Istio is installed. This keeps the encryption between clusters happy when we wire this all up.
We can take the shell script from Istio's website here to generate the certs, or just use openssl:
openssl req -newkey rsa:2048 -nodes -keyout root-key.pem -x509 -days 36500 -out root-cert.pem
openssl genrsa -out ca-key.pem 2048
openssl req -new -key ca-key.pem -out ca-cert.csr -sha256
openssl x509 -req -days 36500 -in ca-cert.csr -sha256 -CA root-cert.pem -CAkey root-key.pem -CAcreateserial -out ca-cert.pem -extensions v3_req
cp ca-cert.pem cert-chain.pem
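If you'd rather not answer openssl's interactive prompts, here is the same sequence with `-subj` supplied inline (the subject values are just examples, not anything Istio requires), plus a quick verify step to confirm the intermediate actually chains up to the root before you upload anything:

```shell
# Same cert chain as above, but non-interactive (-subj values are illustrative)
openssl req -newkey rsa:2048 -nodes -keyout root-key.pem -x509 -days 36500 \
  -subj "/O=Istio/CN=Root CA" -out root-cert.pem
openssl genrsa -out ca-key.pem 2048
openssl req -new -key ca-key.pem -subj "/O=Istio/CN=Intermediate CA" \
  -out ca-cert.csr -sha256
openssl x509 -req -days 36500 -in ca-cert.csr -sha256 -CA root-cert.pem \
  -CAkey root-key.pem -CAcreateserial -out ca-cert.pem
cp ca-cert.pem cert-chain.pem
# Sanity check: the intermediate should verify against the root
openssl verify -CAfile root-cert.pem ca-cert.pem   # should print: ca-cert.pem: OK
```

If the verify step does not print OK, fix the chain before creating the secret; a mismatched chain is painful to debug once Istio is up.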
Upload the certs to Kubernetes in all clusters before we even install Istio:
kubectl create secret generic cacerts -n istio-system \
--from-file=cluster1/ca-cert.pem \
--from-file=cluster1/ca-key.pem \
--from-file=cluster1/root-cert.pem \
--from-file=cluster1/cert-chain.pem
Approach #2
If you are lazy like me, you can also install Istio without generating the certs manually. Just take the cacerts secret from cluster 1 after you install Istio and copy it to cluster 2. Boom, same CA.
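Assuming your kubeconfig has contexts for both clusters (the context names below are placeholders, and the secret name follows the description above), the copy is a one-liner:

```shell
# Export the CA secret from cluster 1 and apply it to cluster 2
kubectl --context="<context_name_1>" -n istio-system get secret cacerts -o yaml \
  | kubectl --context="<context_name_2>" -n istio-system apply -f -
```

This assumes the istio-system namespace already exists on cluster 2; create it first if it doesn't.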
Installing Istio
Go to the Istio install page and grab istioctl. We will use the default profile but need to add a couple more options. I am using 1.11, found here, for macOS. Of course, substitute your own platform and version.
Untar it, then go into manifests/profiles/. Let's edit the default.yaml profile.
We need to tell Istio to use multicluster mode. We also want to give it some unique cluster information so it knows which cluster it is in. You may want to create new files, one per cluster.
Cluster 1
Add this to the default.yaml profile file (or whatever file you are calling these). The only thing that differs between the two clusters is the clusterName.
...
meshConfig:
  defaultConfig:
    proxyMetadata:
      ISTIO_META_DNS_CAPTURE: "true"
      ISTIO_META_DNS_AUTO_ALLOCATE: "true"
...
global:
  meshID: mesh1
  multiCluster:
    enabled: true
    clusterName: cluster1
  network: network1
...
Cluster 2
...
meshConfig:
  defaultConfig:
    proxyMetadata:
      ISTIO_META_DNS_CAPTURE: "true"
      ISTIO_META_DNS_AUTO_ALLOCATE: "true"
...
global:
  meshID: mesh1
  multiCluster:
    enabled: true
    clusterName: cluster2
  network: network1
...
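If you would rather not touch the stock profiles at all, the same fragments can be pieced together into a standalone IstioOperator file. This is a sketch assembled from the settings above (for cluster 2 here); pass it to istioctl with -f instead of editing default.yaml:

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: default
  meshConfig:
    defaultConfig:
      proxyMetadata:
        ISTIO_META_DNS_CAPTURE: "true"
        ISTIO_META_DNS_AUTO_ALLOCATE: "true"
  values:
    global:
      meshID: mesh1
      multiCluster:
        enabled: true
        clusterName: cluster2
      network: network1
```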
Now we can install Istio on both clusters.
# cluster 1
istioctl manifest install -f default1.yaml
# cluster 2
istioctl manifest install -f default2.yaml
Make sure it all came up and Istio is happy on both:
kubectl -n istio-system get pods
NAME                                    READY   STATUS    RESTARTS   AGE
istio-egressgateway-6c9c945447-wq4qf    1/1     Running   0          46h
istio-ingressgateway-6f9c7ffd8b-fsfqv   1/1     Running   0          46h
istiod-67fb45b754-dktvh                 1/1     Running   0          46h
Create remote secrets on both clusters
Both of these Kubernetes clusters are on the same VPC in my case; make sure port 443 is open between cluster 1 and cluster 2.
Running the commands below will create the remote secret Istio needs to talk to the other cluster's API server. Once applied, all services on both clusters will know about each other.
bin/istioctl x create-remote-secret --context="<context_name_1>" --name=cluster1 | kubectl apply -f - --context="<context_name_2>"
bin/istioctl x create-remote-secret --context="<context_name_2>" --name=cluster2 | kubectl apply -f - --context="<context_name_1>"
If you read the secret that was just created, it becomes clear what happened:
kubectl -n istio-system get secret istio-remote-secret-<name> -o yaml
We can see cluster 1 has all the info to talk to cluster 2 and can do service discovery, and vice versa.
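To go one step further (a sketch; keep your own value in place of <name>): the secret's data key is the cluster name you passed to --name, and its value is a base64-encoded kubeconfig, so you can decode it to see exactly which API server address and credentials istiod will use:

```shell
# Decode the embedded kubeconfig; the data key matches the --name you used
kubectl -n istio-system get secret istio-remote-secret-<name> \
  -o jsonpath='{.data.<name>}' | base64 -d
```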
Launch some deployments on both sides.
Tag the namespaces for Istio injection. We will use namespace test on cluster 1 and secure on cluster 2:
# Cluster 1
kubectl label namespace test istio-injection=enabled --overwrite
# Cluster 2
kubectl label namespace secure istio-injection=enabled --overwrite
Then we will launch some test deployments:
# Cluster 1
kubectl -n test create deploy shell --image=nginx --port 80
# Cluster 2
kubectl -n secure create deploy lax --image=nginx --port 80
kubectl -n secure expose deploy lax
Looking at the istiod logs on cluster 1, we should see that it discovered the lax endpoint we created above:
istiod-67fb45b754-dktvh discovery 2021-10-18T07:58:08.700207Z info ads Full push, new service secure/lax.secure.svc.cluster.local
istiod-67fb45b754-dktvh discovery 2021-10-18T07:58:08.700221Z info ads Full push, service accounts changed, lax.secure.svc.cluster.local
Testing it out
Let's see if we can access the lax service from cluster 1. Keep in mind there is no lax service on cluster 1; it should hit cluster 2.
kubectl -n test exec shell-646599f59c-r2s6r -- curl lax.secure.svc.cluster.local -s
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
It works!
mTLS enabled
We should enable mTLS on both clusters and see if it still works.
Cluster 1
apiVersion: "security.istio.io/v1beta1"
kind: "PeerAuthentication"
metadata:
  name: mtls
  namespace: test
spec:
  mtls:
    mode: STRICT
Cluster 2
apiVersion: "security.istio.io/v1beta1"
kind: "PeerAuthentication"
metadata:
  name: mtls
  namespace: secure
spec:
  mtls:
    mode: STRICT
Still good
kubectl -n test exec shell-646599f59c-r2s6r -- curl lax.secure.svc.cluster.local -s
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
High Availability
What's also really cool about this is that if you have a service with the same name on both clusters, you can route to both.
I will create a "lax" deploy on cluster 1 also (note it goes in a secure namespace on cluster 1 too, so the full service name matches):
kubectl -n secure create deploy lax --image=nginx --port 80
kubectl -n secure expose deploy lax
Feel free to delete one on either side and it should still work.
Look into Locality Load Balancing for more control.
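As a sketch of where that leads (the host is from this post, but the regions are illustrative, and note that outlier detection is required before locality load balancing kicks in), a DestinationRule like this prefers local endpoints and fails over across regions:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: lax-locality
  namespace: secure
spec:
  host: lax.secure.svc.cluster.local
  trafficPolicy:
    outlierDetection:          # required for locality LB to activate
      consecutive5xxErrors: 5
      interval: 30s
      baseEjectionTime: 30s
    loadBalancer:
      localityLbSetting:
        enabled: true
        failover:              # regions here are illustrative
        - from: us-west-2
          to: us-east-1
```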
Troubleshooting
- I tried this on an existing cluster that was not originally configured this way. These steps all work, but I needed to restart the pods for it to take effect. New services seem to work fine after that.
- You should see the discovered endpoint using istioctl; if it's not in there, then Istio can't discover it or route traffic to it. Check the logs and/or firewall settings.
It should look like this:
istioctl pc all shell-646599f59c-r2s6r -n test | grep lax
lax.secure.svc.cluster.local 80 - outbound EDS
172.20.218.71 80 Trans: raw_buffer; App: HTTP Route: lax.secure.svc.cluster.local:80
172.20.218.71 80 ALL Cluster: outbound|80||lax.secure.svc.cluster.local
- The CA serial should match on both clusters:
istioctl pc all <pod> | grep -i ROOTCA
Profit?