Install an Istio mesh across multiple Kubernetes clusters.
In this lab, we explore some of Istio's multi-cluster capabilities. Istio has a few different approaches to multi-cluster, as you can see in the documentation, but we recommend you choose an approach that favors running multiple control planes (starting with one per cluster and optimizing from there).
Multi-cluster service-mesh architectures are extremely important for enabling high availability, failover, isolation, and regulatory compliance. We have found that teams that choose to run a single large cluster with a single Istio control plane (or try to run a single Istio control plane across multiple clusters) have issues with tenancy, blast radius, and the overall long-term stability of their platform. If you run multiple clusters, you will likely need your service mesh deployed accordingly.
In the model we explore in this lab, we will deploy an individual Istio control plane per cluster and then connect the clusters by sharing remote-access credentials for cross-cluster endpoint discovery.
This is the approach suggested in the Multi-Primary documentation on the Istio website; however, there are some serious drawbacks to this approach, specifically in the area of security posture.
We will walk through this setup and then discuss the drawbacks at the end.
Let's dive in!
In this lab, we have three clusters configured for you. The contexts are: cluster1, cluster2, and cluster3. Note: commands in this challenge are run in the Cluster 1 tab unless otherwise directed.
Let's install Istio across multiple clusters. Note, the various clusters are referred to as $CLUSTER1, $CLUSTER2, and $CLUSTER3.
We will create the namespaces for each installation in their respective clusters. Note that we also label each namespace with the network it belongs to; Istio uses this network designator to know whether service-to-service communication can (or should) cross a network boundary. Perform the following prep for our installation:
kubectl --context ${CLUSTER1} create ns istio-system
kubectl --context ${CLUSTER1} label namespace istio-system topology.istio.io/network=network1
kubectl --context ${CLUSTER2} create ns istio-system
kubectl --context ${CLUSTER2} label namespace istio-system topology.istio.io/network=network2
kubectl --context ${CLUSTER3} create ns istio-system
kubectl --context ${CLUSTER3} label namespace istio-system topology.istio.io/network=network3
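You can quickly confirm the network labels took effect, for example on cluster 1:
kubectl --context ${CLUSTER1} get ns istio-system --show-labels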
Before we actually install Istio, we'll make sure each cluster uses an intermediate signing certificate rooted in the same root CA (see Challenge 04 [Certificate Rotation] for more on certificates and rotation). We set up these secrets ahead of time, named cacerts, in the istio-system namespace.
kubectl --context ${CLUSTER1} create secret generic cacerts -n istio-system \
--from-file=labs/05/certs/cluster1/ca-cert.pem \
--from-file=labs/05/certs/cluster1/ca-key.pem \
--from-file=labs/05/certs/cluster1/root-cert.pem \
--from-file=labs/05/certs/cluster1/cert-chain.pem
kubectl --context ${CLUSTER2} create secret generic cacerts -n istio-system \
--from-file=labs/05/certs/cluster2/ca-cert.pem \
--from-file=labs/05/certs/cluster2/ca-key.pem \
--from-file=labs/05/certs/cluster2/root-cert.pem \
--from-file=labs/05/certs/cluster2/cert-chain.pem
kubectl --context ${CLUSTER3} create secret generic cacerts -n istio-system \
--from-file=labs/05/certs/cluster3/ca-cert.pem \
--from-file=labs/05/certs/cluster3/ca-key.pem \
--from-file=labs/05/certs/cluster3/root-cert.pem \
--from-file=labs/05/certs/cluster3/cert-chain.pem
When the Istio control plane comes up in each of these clusters, it will use this intermediate signing CA to issue workload certificates to each running application and its associated sidecar. Although the workloads in each cluster receive leaf certificates signed by different intermediate CAs, those intermediates are all rooted in the same root CA, so traffic can be verified and trusted across clusters.
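If you want to convince yourself of this, one quick check (assuming openssl is available in your environment) is to compare the root-certificate fingerprints across the three cert directories used above; they should all be identical:
for c in cluster1 cluster2 cluster3
do
  openssl x509 -in labs/05/certs/$c/root-cert.pem -noout -fingerprint -sha256
done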
Now that we've configured the namespaces and the signing certificates for each control plane, we will use the istioctl CLI to install the IstioOperator for the control plane, use istioctl again to install the east-west gateway, and lastly apply the Gateway resource that exposes the east-west gateway.
istioctl install -y --context ${CLUSTER1} -f labs/05/istio/cluster1.yaml
istioctl install -y --context ${CLUSTER1} -f labs/05/istio/ew-gateway1.yaml
kubectl --context=${CLUSTER1} apply -n istio-system -f labs/05/istio/expose-services.yaml
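We won't reproduce the lab files here, but for reference, cluster1.yaml is likely along these lines (a sketch modeled on Istio's standard multi-primary, multi-network installation; the actual lab file may differ, and ew-gateway1.yaml is typically a second IstioOperator that deploys the istio-eastwestgateway on the same network):
# Sketch only: mesh/cluster/network identity for cluster 1 (hypothetical values)
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster1
      network: network1
The meshID, clusterName, and network values are what tie the three control planes together into one logical mesh while telling Istio that cross-cluster traffic has to traverse a network boundary.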
You can see we install the Istio control plane as well as an east-west gateway that will be used to connect traffic between clusters. You can run the following command to verify that these components came up successfully:
kubectl --context ${CLUSTER1} get pods -n istio-system
# OUTPUT
NAME                                     READY   STATUS    RESTARTS   AGE
istio-eastwestgateway-85c5855c76-6swpx   1/1     Running   0          131m
istiod-76ddb688f7-2trh9                  1/1     Running   0          132m
If that looks good, let's finish by doing the same installation to clusters 2 and 3:
istioctl install -y --context ${CLUSTER2} -f labs/05/istio/cluster2.yaml
istioctl install -y --context ${CLUSTER2} -f labs/05/istio/ew-gateway2.yaml
kubectl --context=${CLUSTER2} apply -n istio-system -f labs/05/istio/expose-services.yaml
istioctl install -y --context ${CLUSTER3} -f labs/05/istio/cluster3.yaml
istioctl install -y --context ${CLUSTER3} -f labs/05/istio/ew-gateway3.yaml
kubectl --context=${CLUSTER3} apply -n istio-system -f labs/05/istio/expose-services.yaml
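The expose-services.yaml manifest applied in each cluster tells the east-west gateway to pass cross-cluster traffic through on its TLS port. It likely looks something like the following (a sketch based on Istio's stock cross-network-gateway sample, not necessarily the exact lab file):
# Sketch only: expose in-mesh services across the network boundary
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: cross-network-gateway
spec:
  selector:
    istio: eastwestgateway
  servers:
  - port:
      number: 15443
      name: tls
      protocol: TLS
    tls:
      mode: AUTO_PASSTHROUGH
    hosts:
    - "*.local"
AUTO_PASSTHROUGH means the gateway routes on the SNI of the incoming mTLS connection without terminating it, which is what preserves end-to-end mTLS between sidecars in different clusters.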
At this point, all we've done is install separate Istio control planes into each of the clusters. We could deploy workloads into each cluster, but they would be isolated within their own mesh. Let's see what steps are needed to connect the various control planes and service-mesh networks.
In this section, we will use the istioctl CLI to automate setting up connectivity between each of the clusters. We do this by running istioctl x create-remote-secret for each of the other peer clusters. Let's run the following commands and then explore what happened:
istioctl x create-remote-secret --context=${CLUSTER2} --name=cluster2 | kubectl apply -f - --context=${CLUSTER1}
istioctl x create-remote-secret --context=${CLUSTER3} --name=cluster3 | kubectl apply -f - --context=${CLUSTER1}
These two commands create a secret for each of cluster 2 and cluster 3 and store it in the istio-system namespace in cluster 1. After running the above commands, let's verify what was created:
kubectl --context ${CLUSTER1} get secret -n istio-system
# OUTPUT
NAME                                          TYPE                                  DATA   AGE
cacerts                                       Opaque                                4      135m
default-token-npwbd                           kubernetes.io/service-account-token   3      135m
istio-eastwestgateway-service-account-token   kubernetes.io/service-account-token   3      135m
istio-reader-service-account-token-pwf7h      kubernetes.io/service-account-token   3      135m
istio-remote-secret-cluster2                  Opaque                                1      134m
istio-remote-secret-cluster3                  Opaque                                1      134m
istiod-service-account-token-gqlwc            kubernetes.io/service-account-token   3      135m
istiod-token-n6hs8                            kubernetes.io/service-account-token   3      135m
You can see a secret called istio-remote-secret-cluster2 and another called istio-remote-secret-cluster3. Let's take a look at one of these secrets:
kubectl --context ${CLUSTER1} get secret istio-remote-secret-cluster2 -n istio-system -o yaml
Woah! That's a lot of... hidden stuff... let's take a look at what is in the cluster2 key in the secret:
kubectl --context ${CLUSTER1} get secret istio-remote-secret-cluster2 -n istio-system -o jsonpath="{.data.cluster2}" | base64 --decode
You should see an output like this (with the keys/certs redacted in this listing for brevity):
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: < CERT DATA REDACTED >
    server: https://10.128.0.37:7002
  name: cluster2
contexts:
- context:
    cluster: cluster2
    user: cluster2
  name: cluster2
current-context: cluster2
kind: Config
preferences: {}
users:
- name: cluster2
  user:
    token: < TOKEN DATA REDACTED >
Interesting... isn't this a bit familiar??
If you said "this is a kubeconfig file for accessing a cluster", you would be correct!
For this form of multi-cluster, Istio creates a kubeconfig for each of the remote clusters with direct access to the Kubernetes API of the remote cluster. We will come back to this at the end of the lab, but for now, let's continue enabling endpoint discovery (through the Kubernetes API) for each of the other clusters:
istioctl x create-remote-secret --context=${CLUSTER1} --name=cluster1 | kubectl apply -f - --context=${CLUSTER2}
istioctl x create-remote-secret --context=${CLUSTER3} --name=cluster3 | kubectl apply -f - --context=${CLUSTER2}
istioctl x create-remote-secret --context=${CLUSTER1} --name=cluster1 | kubectl apply -f - --context=${CLUSTER3}
istioctl x create-remote-secret --context=${CLUSTER2} --name=cluster2 | kubectl apply -f - --context=${CLUSTER3}
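If you're curious about exactly how much access these remote secrets grant, you can ask one of the clusters what Istio's reader service account (istio-reader-service-account, the account istioctl x create-remote-secret uses by default) is allowed to do:
kubectl --context ${CLUSTER2} auth can-i --list --as=system:serviceaccount:istio-system:istio-reader-service-account
The output should line up with the "mostly read-only" caveat we'll revisit in the drawbacks discussion at the end of the lab.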
At this point, multi-cluster service and endpoint discovery has been enabled. We don't have any sample apps deployed yet, though, so let's do that next.
Let's set up a couple of sample applications to test our multi-cluster connectivity. First, let's create the right namespaces and label them for injection on all of the clusters:
kubectl --context ${CLUSTER1} create ns sample
kubectl label --context=${CLUSTER1} namespace sample istio-injection=enabled
kubectl --context ${CLUSTER2} create ns sample
kubectl label --context=${CLUSTER2} namespace sample istio-injection=enabled
kubectl --context ${CLUSTER3} create ns sample
kubectl label --context=${CLUSTER3} namespace sample istio-injection=enabled
Next, let's deploy the helloworld service into each of the clusters. We apply the Service everywhere, then deploy the v1 workload to clusters 1 and 2 and the v2 workload to cluster 3:
kubectl apply --context=${CLUSTER1} -f labs/05/istio/helloworld.yaml -l service=helloworld -n sample
kubectl apply --context=${CLUSTER2} -f labs/05/istio/helloworld.yaml -l service=helloworld -n sample
kubectl apply --context=${CLUSTER3} -f labs/05/istio/helloworld.yaml -l service=helloworld -n sample
kubectl apply --context=${CLUSTER1} -f labs/05/istio/helloworld.yaml -l version=v1 -n sample
kubectl apply --context=${CLUSTER2} -f labs/05/istio/helloworld.yaml -l version=v1 -n sample
kubectl apply --context=${CLUSTER3} -f labs/05/istio/helloworld.yaml -l version=v2 -n sample
Lastly, let's deploy a sample "sleep" application into each cluster that we will use as the "client" to connect to services across clusters:
kubectl apply --context=${CLUSTER1} -f labs/05/istio/sleep.yaml -n sample
kubectl apply --context=${CLUSTER2} -f labs/05/istio/sleep.yaml -n sample
kubectl apply --context=${CLUSTER3} -f labs/05/istio/sleep.yaml -n sample
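Before making any calls, it's worth confirming that the pods came up with their sidecars injected (each pod should show 2/2 ready containers). For example, in cluster 1:
kubectl --context ${CLUSTER1} get pods -n sample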
From the sleep client in cluster 1, let's make a call to the helloworld service:
kubectl exec --context ${CLUSTER1} -n sample -c sleep deploy/sleep -- curl -sS helloworld.sample:5000/hello
# OUTPUT
Hello version: v1, instance: helloworld-v1-776f57d5f6-b946k
You should see the helloworld that's deployed in the same cluster respond. Let's take a look at the endpoints that the sleep pod's Istio service proxy knows about:
istioctl --context $CLUSTER1 pc endpoints deploy/sleep.sample --cluster "outbound|5000||helloworld.sample.svc.cluster.local"
# OUTPUT
ENDPOINT             STATUS    OUTLIER CHECK   CLUSTER
10.132.0.102:15443   HEALTHY   OK              outbound|5000||helloworld.sample.svc.cluster.local
10.132.0.112:15443   HEALTHY   OK              outbound|5000||helloworld.sample.svc.cluster.local
10.42.0.16:5000      HEALTHY   OK              outbound|5000||helloworld.sample.svc.cluster.local
You can see from this output that Istio's service proxy knows about three endpoints for the helloworld service. One of the endpoints is local (10.42.x.x) while two of them are remote (10.132.x.x). What are those remote IP addresses?
Those are the External IP addresses of the east-west gateways in cluster 2 and cluster 3! Let's verify:
Cluster 2:
kubectl --context $CLUSTER2 get svc -n istio-system
# OUTPUT
NAME                    TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)                                                            AGE
istiod                  ClusterIP      10.43.26.28    <none>         15010/TCP,15012/TCP,443/TCP,15014/TCP                              4m19s
istio-eastwestgateway   LoadBalancer   10.43.146.20   10.132.0.102   15021:31687/TCP,15443:30582/TCP,15012:32477/TCP,15017:30127/TCP   4m4s
Cluster 3:
kubectl --context $CLUSTER3 get svc -n istio-system
# OUTPUT
NAME                    TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                                                            AGE
istiod                  ClusterIP      10.43.210.62    <none>         15010/TCP,15012/TCP,443/TCP,15014/TCP                              4m1s
istio-eastwestgateway   LoadBalancer   10.43.159.182   10.132.0.112   15021:30165/TCP,15443:31056/TCP,15012:32060/TCP,15017:32290/TCP   3m48s
Istio has discovered helloworld services running on cluster 2 and cluster 3 and has correctly configured the remote endpoints to be the east-west gateways in those clusters. The east-west gateway in each cluster can then route to its local helloworld services while maintaining an end-to-end mTLS connection (there's a quick way to spot-check this after the load-balancing test below). Let's verify the connectivity works as we expect:
For each cluster, let's make a call to the helloworld service a few times and watch the load get balanced across clusters:
echo "Calling from cluster 1"
for i in {1..10}
do
kubectl exec --context ${CLUSTER1} -n sample -c sleep deploy/sleep -- curl -sS helloworld.sample:5000/hello
done
echo "Calling from cluster 2"
for i in {1..10}
do
kubectl exec --context ${CLUSTER2} -n sample -c sleep deploy/sleep -- curl -sS helloworld.sample:5000/hello
done
echo "Calling from cluster 3"
for i in {1..10}
do
kubectl exec --context ${CLUSTER3} -n sample -c sleep deploy/sleep -- curl -sS helloworld.sample:5000/hello
done
You should see responses from helloworld instances in all three clusters. For example, the output for cluster 3 looks like this:
# OUTPUT
Calling from cluster 3
Hello version: v1, instance: helloworld-v1-776f57d5f6-sbc2j
Hello version: v2, instance: helloworld-v2-54df5f84b-4vdjq
Hello version: v2, instance: helloworld-v2-54df5f84b-4vdjq
Hello version: v1, instance: helloworld-v1-776f57d5f6-5tr8n
Hello version: v1, instance: helloworld-v1-776f57d5f6-sbc2j
Hello version: v2, instance: helloworld-v2-54df5f84b-4vdjq
Hello version: v1, instance: helloworld-v1-776f57d5f6-5tr8n
Hello version: v1, instance: helloworld-v1-776f57d5f6-5tr8n
Hello version: v1, instance: helloworld-v1-776f57d5f6-sbc2j
Hello version: v1, instance: helloworld-v1-776f57d5f6-sbc2j
You can match the instance names to the helloworld pods running in each cluster:
kubectl get po -A --context ${CLUSTER1} | grep helloworld
# OUTPUT
sample    helloworld-v1-776f57d5f6-5tr8n   2/2   Running   0   6m30s
kubectl get po -A --context ${CLUSTER2} | grep helloworld
# OUTPUT
sample    helloworld-v1-776f57d5f6-sbc2j   2/2   Running   0   6m25s
kubectl get po -A --context ${CLUSTER3} | grep helloworld
# OUTPUT
default   helloworld-f85896cd8-g8x4d       2/2   Running   0   144m
sample    helloworld-v2-54df5f84b-4vdjq    2/2   Running   0   6m10s
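As mentioned above, here's a quick way to spot-check the mTLS claim: dump the certificates loaded into the sleep proxy and look at the ROOTCA entry. Because every intermediate chains to the same root, the ROOTCA should be identical if you repeat the command against ${CLUSTER2} and ${CLUSTER3}:
istioctl --context ${CLUSTER1} proxy-config secret deploy/sleep.sample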
In this lab we explored one of Istio's multi-cluster approaches, called "Multi-Primary", with endpoint discovery. As we saw, with this approach we create tokens/kubeconfigs in each of the clusters that give direct Kubernetes API access to each of the remote clusters. We have found organizations are hesitant to give direct Kubernetes API access to every cluster in a multi-cluster deployment. The access is "read-only" (for the most part... not completely), and it is a major risk: if one of the clusters gets compromised, does the attacker now have access to all of the other clusters in the fleet?
Another drawback we've seen with this approach is the lack of control over what constitutes a service across multiple clusters. Using the endpoint-discovery mechanism, services are matched by name and must be in the same namespace. What if services don't run in the same namespace across clusters? This scenario is quite common, and it would not work with the multi-cluster approach presented here.
At Solo.io, we have built our service mesh on Istio but NOT relying on this endpoint-discovery model. We still use the multi-primary control-plane deployment, but instead of proliferating Kubernetes credentials to every cluster, we use a global service-discovery mechanism that informs Istio which services to federate and how. This approach uses ServiceEntry resources and does not rely on sharing Kubernetes credentials across clusters. Additionally, with Gloo Mesh, you get more control over which services are included in a "virtual destination" by specifying labels, namespaces, and clusters, and services can reside in different namespaces. We also handle much of the certificate federation and rotation for you to minimize the operational burden. Please see the Gloo Mesh workshop for more details about how this works.
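To give a sense of the shape of that configuration (a purely conceptual sketch, not what Gloo Mesh actually generates, and not something to apply in this lab), a ServiceEntry that registers a remote service behind another cluster's east-west gateway looks roughly like this:
# Conceptual sketch only: reach a remote helloworld via cluster 2's east-west gateway,
# without sharing any Kubernetes credentials. Hostname, addresses, and the supporting
# gateway/TLS configuration here are illustrative.
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: helloworld-cluster2
  namespace: sample
spec:
  hosts:
  - helloworld.sample.mesh   # hypothetical mesh-wide hostname
  location: MESH_INTERNAL
  ports:
  - number: 5000
    name: http
    protocol: HTTP
  resolution: STATIC
  endpoints:
  - address: 10.132.0.102    # cluster 2's east-west gateway (from the output above)
    ports:
      http: 15443            # the gateway's TLS passthrough port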
Thanks for joining our Advanced Istio Workshop for Operationalizing Istio for Day 2! We hope it showed you powerful ways of leveraging Istio in your environment. That's not the end, however. The next step is to go to the next level and see what best practices for operating Istio in multi-tenant, multi-cluster architectures look like with Gloo Mesh. Please register for an upcoming Gloo Mesh workshop, which shows how to simplify the operation and usage of Istio.