I'm having oh-so-much fun debugging a TLS-encrypted, client-certificate-protected service running on Kubernetes 1.20, and was looking for a way to "see inside" the ClusterIP service itself.
This: -
Lab 01 - Kubernetes Networking, using Service Types, Ingress and Network Policies to Control Application Access
provided some really useful insight: -
The HelloWorld Service is accessible now but only within the cluster. To expose a Service onto an external IP address, you have to create a ServiceType other than ClusterIP. Apps inside the cluster can access a pod by using the in-cluster IP of the service or by sending a request to the name of the service. When you use the name of the service, kube-proxy looks up the name in the cluster DNS provider and routes the request to the in-cluster IP address of the service.
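As a quick aside, it's easy to see that in-cluster access in action by spinning up a throwaway pod and hitting the Service by name. This is just a sketch; the pod name and curl image are my own choices, and I'm assuming the helloworld Service listens on port 8080, as per the describe output further down: -

# curl-test and curlimages/curl are arbitrary choices; any image with curl on board will do
$ kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- curl -s http://helloworld.default.svc.cluster.local:8080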
To allow external traffic into a Kubernetes cluster, you need a NodePort ServiceType. If you set the type field of Service to NodePort, Kubernetes allocates a port in the range 30000-32767. Each node proxies the assigned NodePort (the same port number on every Node) into your Service.
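For reference, the declarative equivalent of such a Service looks something like this; a sketch only, with the port values and selector matching the helloworld Service described below: -

apiVersion: v1
kind: Service
metadata:
  name: helloworld
spec:
  type: NodePort
  selector:
    app: helloworld
  ports:
  - port: 8080
    targetPort: http-server
    # nodePort: 31777 - optional; omit it and Kubernetes allocates one from 30000-32767

In my case, though, there's no need to re-create anything, as the existing Service can simply be patched in place.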
Patch the existing Service for helloworld to type: NodePort,
$ kubectl patch svc helloworld -p '{"spec": {"type": "NodePort"}}'
service/helloworld patched
Describe the Service again,
$ kubectl describe svc helloworld
Name: helloworld
Namespace: default
Labels: app=helloworld
Annotations: <none>
Selector: app=helloworld
Type: NodePort
IP: 172.21.161.255
Port: <unset> 8080/TCP
TargetPort: http-server/TCP
NodePort: <unset> 31777/TCP
Endpoints: 172.30.153.79:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
In this example, Kubernetes allocated a NodePort with port value 31777. For you, this is likely to be a different port in the range 30000-32767.
You can now connect to the service via the public IP address of any worker node in the cluster, and traffic gets forwarded to the service, which uses service discovery and the Service's selector to deliver the request to the assigned pod. With this piece in place, we now have a complete pipeline for load-balancing external client requests across all the nodes in the cluster.
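Better still, rather than scraping the describe output, the allocated port can be pulled out directly via a JSONPath query; this assumes that the Service has a single port entry, as is the case here: -

$ kubectl get svc helloworld --output jsonpath='{.spec.ports[0].nodePort}'
31777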
With that, I can (temporarily) patch my ClusterIP service to a NodePort, and then poke into it from the outside, using the K8s Node's external IP: -
kubectl get node nodename --output json | jq -r '.status.addresses'
and the newly allocated NodePort, e.g. 31777.
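Putting it all together, something like this should do the trick; note that nodename is a placeholder for the actual node name, the jq filter picks out the node's ExternalIP address, and client.pem / client.key stand in for whatever client certificate and key the service actually expects: -

# nodename, client.pem and client.key are all placeholders
$ externalIP=$(kubectl get node nodename --output json | jq -r '.status.addresses[] | select(.type=="ExternalIP") | .address')
$ nodePort=$(kubectl get svc helloworld --output jsonpath='{.spec.ports[0].nodePort}')
$ openssl s_client -connect ${externalIP}:${nodePort} -cert client.pem -key client.key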
Lovely!