All the cool kids run Istio Ambient

Chris Haessig
6 min read · Feb 22, 2025


Istio is fantastic; I have been using it for a long time and love it. The #1 complaint I hear, though, is that the sidecar is so intrusive.

Well, no more! Istio 1.24 has been released, and we can run Istio completely sidecarless (is that a word?) with ambient mode. Seriously, it’s easy. This blog will show you how to do it.

KIND

As always, let’s start with a kind cluster. I’ll call mine chris.

kind create cluster --name chris

Download istioctl from the Istio website. I highly recommend you grab 1.24 or later.

istioctl version 
Istio is not present in the cluster: no running Istio pods in namespace "istio-system"
client version: 1.24.

Now let’s install Istio with the ambient profile. We’ll also install an Istio ingress gateway with Helm.

istioctl install --set profile=ambient
# Assumes the Istio Helm repo is already added:
#   helm repo add istio https://istio-release.storage.googleapis.com/charts
helm install istio-ingressgateway istio/gateway -n istio-system

We now have Istio running, but what now? What are these ztunnel things?

kubectl -n istio-system get pods
NAME READY STATUS RESTARTS AGE
istio-cni-node-2q8pn 1/1 Running 0 156m
istio-cni-node-bndr7 1/1 Running 0 156m
istio-cni-node-c58kv 1/1 Running 0 156m
istio-cni-node-c7kjv 1/1 Running 0 156m
istio-cni-node-f65m5 1/1 Running 0 156m
istio-cni-node-prmgf 1/1 Running 0 156m
istio-ingressgateway-6c5bf6c59-sbfsm 1/1 Running 0 128m
istiod-dc9fdf7b8-8v9w2 1/1 Running 0 157m
ztunnel-46r8j 1/1 Running 0 151m
ztunnel-96cc7 1/1 Running 0 151m
ztunnel-9zh28 1/1 Running 0 151m
ztunnel-kqcvg 1/1 Running 0 151m
ztunnel-s48jq 1/1 Running 0 151m
ztunnel-vjk8b 1/1 Running 0 151m

So, instead of each pod running its own sidecar, a DaemonSet runs one pod per node called ztunnel.

From ChatGPT:

In Istio Ambient Mesh, ztunnel (zero-trust tunnel) is a lightweight, per-node proxy responsible for handling mTLS encryption, authentication, and traffic tunneling between workloads. It eliminates the need for sidecars by securely routing traffic through a shared infrastructure, improving performance and reducing resource overhead.

The Istio CNI redirects traffic to ztunnel using iptables.

Create an httpbin deployment

Let’s start by creating a web namespace and labeling it so Istio knows it’s ambient.

kubectl create ns web
kubectl label namespace web istio.io/dataplane-mode=ambient
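The same enrollment can be written declaratively. Here is a minimal sketch of the equivalent Namespace manifest; the label is the only Istio-specific part:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: web
  labels:
    # Puts every pod in this namespace into ambient mode; the Istio CNI
    # starts redirecting their traffic through the node's ztunnel.
    istio.io/dataplane-mode: ambient
```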

Install the Kubernetes Gateway API CRDs.

kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.0/standard-install.yaml

We also want to handle layer 7 in the future, so we create a waypoint proxy. More about this later.

istioctl waypoint apply -n web
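Under the hood this command just creates a Gateway resource using the istio-waypoint GatewayClass. A sketch of roughly what gets applied (the exact fields come from what current Istio releases generate, so treat this as illustrative):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: waypoint
  namespace: web
  labels:
    istio.io/waypoint-for: service   # this waypoint fronts services
spec:
  gatewayClassName: istio-waypoint   # tells istiod to program a waypoint proxy
  listeners:
  - name: mesh
    port: 15008     # HBONE port used for mesh-internal tunneling
    protocol: HBONE
```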

Now let’s create an httpbin deployment. (Not sure why I went with the nginx name, but I am sticking with it.)

kubectl -n web create deploy nginx --image=kennethreitz/httpbin
kubectl -n web expose deploy nginx --port 80

We create a Gateway.

---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: http
  namespace: istio-system
spec:
  gatewayClassName: istio
  listeners:
  - name: http
    hostname: "nginx.web"
    port: 8080
    protocol: HTTP
    allowedRoutes:
      namespaces:
        from: All

This will create an ingress pod listening on port 8080 and looking for traffic with the host nginx.web.

istioctl -n istio-system pc listener http-istio-645b4849cc-f75bv
ADDRESSES PORT MATCH DESTINATION
0 ALL Cluster: connect_originate
0.0.0.0 8080 ALL Route: http.8080
0.0.0.0 15021 ALL Inline Route: /healthz/ready*
0.0.0.0 15090 ALL Inline Route: /stats/prometheus*

Now we create an HTTPRoute pointing to the nginx service in the web namespace we created.

I also added a rule so traffic only reaches my nginx service if the header “chris: washere” is set. I tied the HTTPRoute to both the gateway and the nginx service.

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http
  namespace: web
spec:
  parentRefs:
  - name: http
    namespace: istio-system
  - group: ""
    kind: Service
    name: nginx
    port: 80
  hostnames: ["nginx.web"]
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
      headers:
      - name: "chris"
        value: "washere"
    backendRefs:
    - name: nginx
      port: 80

So we should be all set, but looking at our pods, there are no sidecars?

kubectl -n web get pods 
NAME READY STATUS RESTARTS AGE
nginx-6f79cdd766-r49b4 1/1 Running 0 99m
waypoint-594b764d7-lww95 1/1 Running 0 128

Depending on the flow, traffic goes through the ztunnels for layer 4 concerns like mTLS and is processed by the waypoint for layer 7 concerns like header matching. No more sidecars on the pods!

Giving it a shot

We port-forward to the http-istio deployment, which was created when we applied the Gateway.

kubectl -n istio-system port-forward deploy/http-istio 8080

We then use curl to see if our host matching works.

curl localhost:8080/headers -H 'Host: nginx.web'

HTTP/1.1 404

We got a 404 because we did not pass our “chris: washere” header. Let’s add it.

curl localhost:8080/headers -H 'Host: nginx.web' -H 'chris: washere'             
{
"headers": {
"Accept": "*/*",
"Chris": "washere",
"Host": "nginx.web",
"User-Agent": "curl/8.7.1",
"X-Envoy-Attempt-Count": "1",
"X-Envoy-Decorator-Operation": "nginx.web.svc.cluster.local:80/*",
"X-Envoy-Internal": "true",
"X-Envoy-Peer-Metadata": "ChoKCkNMVVNURVJfSUQSDBoKS3ViZXJuZXRlcwpuCgZMQUJFTFMSZCpiCi8KH3NlcnZpY2UuaXN0aW8uaW8vY2Fub25pY2FsLW5hbWUSDBoKaHR0cC1pc3RpbwovCiNzZXJ2aWNlLmlzdGlvLmlvL2Nhbm9uaWNhbC1yZXZpc2lvbhIIGgZsYXRlc3QKJQoETkFNRRIdGhtodHRwLWlzdGlvLTY0NWI0ODQ5Y2MtZjc1YnYKGwoJTkFNRVNQQUNFEg4aDGlzdGlvLXN5c3RlbQpTCgVPV05FUhJKGkhrdWJlcm5ldGVzOi8vYXBpcy9hcHBzL3YxL25hbWVzcGFjZXMvaXN0aW8tc3lzdGVtL2RlcGxveW1lbnRzL2h0dHAtaXN0aW8KHQoNV09SS0xPQURfTkFNRRIMGgpodHRwLWlzdGlv",
"X-Envoy-Peer-Metadata-Id": "router~10.244.2.6~http-istio-645b4849cc-f75bv.istio-system~istio-system.svc.cluster.local"
}
}

It worked!
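As an aside, that X-Envoy-Peer-Metadata header is base64-encoded protobuf describing the peer workload. You can peek at the embedded strings by decoding it; here is a sketch using a shortened blob (just the first fields of the one above), with the non-printable protobuf framing bytes mapped to dots:

```shell
# Decode the (truncated) peer metadata; framing bytes become dots.
echo 'ChoKCkNMVVNURVJfSUQSDBoKS3ViZXJuZXRlcw==' | base64 -d | tr -c '[:print:]' '.'
# Prints: ....CLUSTER_ID....Kubernetes
```

Run the same pipeline on the full header value from your own response to see LABELS, NAME, NAMESPACE, OWNER, and WORKLOAD_NAME too.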

Istio is now working without a sidecar. Let’s create a new pod called shell in the web namespace.

kubectl -n web create deploy shell --image nginx

Let’s try to hit our nginx pod from the shell pod. This is just to show the traffic will go through the waypoint.

We exec into the pod and run curl.

kubectl -n web exec -it shell-<id> -- bash

curl nginx.web/headers -H 'Host: nginx.web' -H 'chris: washere' -A "chrisagent"
{
"headers": {
"Accept": "*/*",
"Chris": "washere",
"Host": "nginx.web",
"User-Agent": "curl/7.88.1",
"X-Envoy-Attempt-Count": "1",
"X-Envoy-Decorator-Operation": "nginx.web.svc.cluster.local:80/*",
"X-Envoy-Peer-Metadata": "ChoKCkNMVVNURVJfSUQSDBoKS3ViZXJuZXRlcwp5CgZMQUJFTFMSbyptCg4KA2FwcBIHGgVzaGVsbAoqCh9zZXJ2aWNlLmlzdGlvLmlvL2Nhbm9uaWNhbC1uYW1lEgcaBXNoZWxsCi8KI3NlcnZpY2UuaXN0aW8uaW8vY2Fub25pY2FsLXJldmlzaW9uEggaBmxhdGVzdAogCgROQU1FEhgaFnNoZWxsLTU2ZGY1YzVkOWYtdzY2dmcKFAoJTkFNRVNQQUNFEgcaBXNoZWxsCkcKBU9XTkVSEj4aPGt1YmVybmV0ZXM6Ly9hcGlzL2FwcHMvdjEvbmFtZXNwYWNlcy9zaGVsbC9kZXBsb3ltZW50cy9zaGVsbAoYCg1XT1JLTE9BRF9OQU1FEgcaBXNoZWxs",
"X-Envoy-Peer-Metadata-Id": "sidecar~10.244.4.4~shell-56df5c5d9f-w66vg.shell~shell.svc.cluster.local"
}
}

Pod-to-pod worked! Looking at the waypoint logs, we can see it did in fact go through the waypoint.


# Look for the user agent chrisagent to cut down on noise.

stern -n web waypoint | grep "chrisagent"

waypoint-689565d8fb-wjg2h istio-proxy ':authority', 'nginx.web'
waypoint-689565d8fb-wjg2h istio-proxy ':path', '/headers'
waypoint-689565d8fb-wjg2h istio-proxy ':method', 'GET'
waypoint-689565d8fb-wjg2h istio-proxy ':scheme', 'http'
waypoint-689565d8fb-wjg2h istio-proxy 'user-agent', 'chrisagent'
waypoint-689565d8fb-wjg2h istio-proxy 'accept', '*/*'
waypoint-689565d8fb-wjg2h istio-proxy 'chris', 'washere'
waypoint-689565d8fb-wjg2h istio-proxy 'x-forwarded-proto', 'http'
waypoint-689565d8fb-wjg2h istio-proxy 'x-request-id', '7f19f858-9dca-4e86-a035-d69eca79d328'
waypoint-689565d8fb-wjg2h istio-proxy thread=24

Tools

Here are some istioctl commands to check whether a service is connected to a waypoint.

Make sure the waypoint is deployed in the web namespace.


istioctl waypoint list -n web
NAME REVISION PROGRAMMED
waypoint default True

Make sure the service and workload are connected to the waypoint (it should show in the WAYPOINT column).

istioctl ztunnel-config service

NAMESPACE SERVICE NAME SERVICE VIP WAYPOINT ENDPOINTS
default kubernetes 10.96.0.1 None 1/1
istio-system http-istio 10.96.13.113 None 1/1
istio-system istio-ingressgateway 10.96.6.134 None 1/1
istio-system istiod 10.96.72.72 None 1/1
istio-system kiali 10.96.156.151 None 1/1
istio-system prometheus 10.96.106.86 None 1/1
kube-system kube-dns 10.96.0.10 None 2/2
web nginx 10.96.175.197 waypoint 1/1
web waypoint 10.96.91.8 None 1/1

istioctl ztunnel-config workload

NAMESPACE POD NAME ADDRESS NODE WAYPOINT PROTOCOL
default kubernetes 172.18.0.2 None TCP
istio-system ztunnel-9zh28 10.244.0.5 chris-control-plane None TCP
istio-system ztunnel-kqcvg 10.244.4.2 chris-worker2 None TCP
istio-system ztunnel-s48jq 10.244.1.2 chris-worker3 None TCP
istio-system ztunnel-vjk8b 10.244.5.2 chris-worker5 None TCP
local-path-storage local-path-provisioner-7577fdbbfb-fhtgv 10.244.0.2 chris-control-plane None TCP
shell shell-595557859f-9dcsx 10.244.2.10 chris-worker4 None HBONE
web nginx-7bf6474f7c-rhf2v 10.244.4.5 chris-worker2 waypoint HBONE
web waypoint-689565d8fb-wjg2h 10.244.5.10 chris-worker5 None TCP

Where is my Envoy config?

Wondering where the Envoy config lives and where the header matching happens? Port-forward to the waypoint proxy and dump the config from Envoy’s admin interface.

kubectl -n web port-forward waypoint-594b764d7-lww95 15000
# Envoy's admin API serves the running config:
curl -s localhost:15000/config_dump

And here is where the magic happens.

"route_config": {
  "name": "inbound-vip|80|http|nginx.web.svc.cluster.local",
  "virtual_hosts": [
    {
      "name": "inbound|http|80",
      "domains": [
        "*"
      ],
      "routes": [
        {
          "match": {
            "prefix": "/",
            "case_sensitive": true,
            "headers": [
              {
                "name": "chris",
                "string_match": {
                  "exact": "washere"
                }
              }
            ]

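The full config_dump is thousands of lines, so a quick grep narrows it down to the header-match stanza. The heredoc below stands in for a real dump so the command runs as-is; against a live waypoint you would pipe `curl -s localhost:15000/config_dump` into the same grep:

```shell
# grep with trailing context pulls the header match out of a config dump.
# The heredoc is a stand-in sample; with a live waypoint use:
#   curl -s localhost:15000/config_dump | grep -A4 '"headers"'
cat <<'EOF' | grep -A4 '"headers"'
{
  "match": {
    "prefix": "/",
    "headers": [
      { "name": "chris", "string_match": { "exact": "washere" } }
    ]
  }
}
EOF
```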
See how easy that was!
