Saturday, 16 October 2021

Yay, VMware Fusion and macOS Big Sur - no longer "NAT good friends" - forgive the double negative and the terrible pun ...

After macOS 11 Big Sur was released in 2020, VMware updated their Fusion product to v12 and, sadly, managed to break Network Address Translation (NAT), as per their release notes: -

VMware Fusion 12 Release Notes

Unable to access port forwarding on a NAT virtual machine, if the traffic is routed through the host IP stack on Big Sur hosts

On Big Sur hosts, if user configures NAT port forwarding in Fusion, the service running in the virtual machine is unavailable on the macOS host using localhost:exposedPort, hostIP:exposedPort, or 127.0.0.1:exposedPort; Port forwarding is also not accessible inside a NAT virtual machine using hostIP:exposedPort.

Thankfully, with Fusion 12.2.0, this is now resolved: -

VMware Fusion 12.2.0 Release Notes

Resolved Issues

Unable to access port forwarding on a NAT virtual machine if the traffic is routed through the host IP stack on Big Sur hosts

On Big Sur hosts, if a user configures NAT port forwarding in Fusion, the service running in the virtual machine is unavailable on the macOS host using localhost:exposedPort, hostIP:exposedPort, or 127.0.0.1:exposedPort; Port forwarding is also not accessible inside a NAT virtual machine using hostIP:exposedPort.

This issue is now resolved.

I updated this AM, and am now running Fusion 12.2.0 on macOS 11.6 and all is well - my Windows 10 VM happily shares my Mac's connection using NAT, and can tunnel via my Cisco AnyConnect VPN, which is nice ....

Wednesday, 13 October 2021

For my future self - don't try and use crictl to deploy pods into an existing Kubernetes cluster

I'm doing some work with the Kata Containers v2 runtime, and was trying to test it using crictl.

First I created a pair of YAML documents: -

tee podsandbox-config.yaml <<EOF
metadata:
  attempt: 1
  name: busybox-sandbox
  namespace: default
  uid: hdishd83djaidwnduwk28bcsb
log_directory: /tmp
linux:
  namespaces:
    options: {}
EOF
tee container-config.json <<EOF
{
  "metadata": {
      "name": "busybox"
  },
  "image":{
      "image": "busybox"
  },
  "command": [
      "top"
  ],
  "log_path":"busybox.log",
  "linux": {
  }
}
EOF

and then I used the first of those to create a Pod Sandbox: -

sandbox=$(crictl runp -r kata podsandbox-config.yaml)

However, the resulting pod soon disappeared - I just managed to capture its state before it went: -

crictl pods | grep kata

9fafade8c3216       23 seconds ago      NotReady            busybox-sandbox                                  default             1                   kata

65fc059b8129d       40 minutes ago      Ready               nginx-kata                                       default             0                   kata

and inspected the NotReady pod: -

crictl inspectp 9fafade8c3216 | jq .status.state

"SANDBOX_NOTREADY"

Thankfully someone else had hit this issue over in the cri-o project: -


specifically this comment: -

I guess you might be using crictl to create pod/container on a running kubernetes node. Kubelet deletes unwanted containers/pods, please don't do that on a running kubernetes node.

See https://kubernetes.io/docs/tasks/debug-application-cluster/crictl/#example-crictl-commands

Ah, yes, that'd be it !
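For next time: the kubelet-friendly route is to let Kubernetes itself own the pod, selecting Kata via a RuntimeClass. A minimal sketch, assuming a RuntimeClass named kata already exists in the cluster ( the nginx-kata pod above suggests it does ): -

```yaml
# Hypothetical manifest - apply with kubectl apply -f, and the kubelet
# will own the pod rather than garbage-collecting it.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-kata
  namespace: default
spec:
  runtimeClassName: kata
  containers:
  - name: busybox
    image: busybox
    command: ["top"]
```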

Back to K8s ..........

Aiding my memory - parsing Kubernetes using JQ etc.

So I was looking for a way to munge a Kubernetes Compute Node configuration to extract its external/public IP.

I know I can do it using purely K8s: -

kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'

123.45.6.78

but I also wanted to do it via jq and here it is: -

kubectl get node `kubectl get nodes | tail -1 | awk '{print $1}'` --output json | jq -r '.status.addresses[] | select(.type=="ExternalIP") .address'

123.45.6.78

which is nice!
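As an aside, the nested kubectl call can be avoided by piping all of the nodes' JSON through a single jq filter. Here it is, run against a hypothetical sample of the kubectl get nodes --output json shape, so the filter can be seen working: -

```shell
# Illustrative sample of the JSON that `kubectl get nodes --output json` emits;
# the jq filter walks every node's addresses in one pass.
nodes_json='{"items":[{"status":{"addresses":[
  {"type":"InternalIP","address":"10.144.213.225"},
  {"type":"ExternalIP","address":"123.45.6.78"}]}}]}'
echo "$nodes_json" | jq -r '.items[].status.addresses[] | select(.type=="ExternalIP") | .address'
# prints 123.45.6.78
```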

Tinkering with Istio and Envoy on IBM Kubernetes Service via macOS

Whilst I've been aware of Istio for some years, I've never really played with it.

Well, today that's changing ...

I'm following this tutorial guide: -

Getting Started

and starting by installing the CLI tool / installation file on my Mac: -

curl -L https://istio.io/downloadIstio | sh -

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   102  100   102    0     0     72      0  0:00:01  0:00:01 --:--:--    72
100  4549  100  4549    0     0   2693      0  0:00:01  0:00:01 --:--:--  2693
Downloading istio-1.11.3 from https://github.com/istio/istio/releases/download/1.11.3/istio-1.11.3-osx.tar.gz ...
Istio 1.11.3 Download Complete!
Istio has been successfully downloaded into the istio-1.11.3 folder on your system.
Next Steps:
See https://istio.io/latest/docs/setup/install/ to add Istio to your Kubernetes cluster.
To configure the istioctl client tool for your workstation,
add the /Users/hayd/istio-1.11.3/bin directory to your environment path variable with:
export PATH="$PATH:/Users/hayd/istio-1.11.3/bin"
Begin the Istio pre-installation check by running:
istioctl x precheck 
Need more information? Visit https://istio.io/latest/docs/setup/install/ 

and adding the installation directory to my path: -

export PATH="$PATH:$HOME/istio-1.11.3/bin"

and validating the istioctl tool: -

which istioctl

/Users/hayd/istio-1.11.3/bin/istioctl

istioctl version

no running Istio pods in "istio-system"
1.11.3

and then installing it into my K8s 1.20 cluster: -

istioctl install --set profile=demo -y

✔ Istio core installed                                                                                                                                                                  
✔ Istiod installed                                                                                                                                                                       
✔ Ingress gateways installed                                                                                                                                                             
✔ Egress gateways installed                                                                                                                                                              
✔ Installation complete                                                                                                                                                                  
Thank you for installing Istio 1.11.  Please take a few minutes to tell us about your install/upgrade experience!  https://forms.gle/asdsdasdas

and checked the now running pods: -

kubectl get pods -A

NAMESPACE      NAME                                         READY   STATUS    RESTARTS   AGE
ibm-system     addon-catalog-source-2x7hj                   1/1     Running   0          42h
ibm-system     catalog-operator-578f7c8857-666wd            1/1     Running   0          42h
ibm-system     olm-operator-6c45d79d96-pjtmr                1/1     Running   0          42h
istio-system   istio-egressgateway-5fdc76bf94-v5dpg         1/1     Running   0          59s
istio-system   istio-ingressgateway-6bd7764b48-rr4fp        1/1     Running   0          59s
istio-system   istiod-675949b7c5-zqg6w                      1/1     Running   0          74s
kube-system    calico-kube-controllers-78ccd56cd7-wqgtf     1/1     Running   0          42h
kube-system    calico-node-pg6vv                            1/1     Running   0          42h
kube-system    calico-typha-ddd44968b-86cgs                 1/1     Running   0          42h
kube-system    calico-typha-ddd44968b-ffxmt                 0/1     Pending   0          42h
kube-system    calico-typha-ddd44968b-mqjrb                 0/1     Pending   0          42h
kube-system    coredns-7fc9f85d9c-5rwwv                     1/1     Running   0          42h
kube-system    coredns-7fc9f85d9c-bxtts                     1/1     Running   0          42h
kube-system    coredns-7fc9f85d9c-qk6gv                     1/1     Running   0          42h
kube-system    coredns-autoscaler-9cccfb98d-mw9qj           1/1     Running   0          42h
kube-system    dashboard-metrics-scraper-7c75dcd466-d5b9f   1/1     Running   0          42h
kube-system    ibm-keepalived-watcher-856k6                 1/1     Running   0          42h
kube-system    ibm-master-proxy-static-10.144.213.225       2/2     Running   0          42h
kube-system    kubernetes-dashboard-659cd5b798-thd57        1/1     Running   0          42h
kube-system    metrics-server-b7bc76594-4fdg2               2/2     Running   0          42h
kube-system    vpn-546847fcbf-dzzml                         1/1     Running   0          42h

and added the appropriate label for Envoy sidecar proxies: -

kubectl label namespace default istio-injection=enabled

namespace/default labeled

and then deployed the sample Bookinfo application: -

cd ~/istio-1.11.3/

kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml

service/details created
serviceaccount/bookinfo-details created
deployment.apps/details-v1 created
service/ratings created
serviceaccount/bookinfo-ratings created
deployment.apps/ratings-v1 created
service/reviews created
serviceaccount/bookinfo-reviews created
deployment.apps/reviews-v1 created
deployment.apps/reviews-v2 created
deployment.apps/reviews-v3 created
service/productpage created
serviceaccount/bookinfo-productpage created
deployment.apps/productpage-v1 created


and verified the created services: -

kubectl get services

NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
details       ClusterIP   172.21.9.104     <none>        9080/TCP   34m
kubernetes    ClusterIP   172.21.0.1       <none>        443/TCP    43h
productpage   ClusterIP   172.21.149.123   <none>        9080/TCP   34m
ratings       ClusterIP   172.21.233.195   <none>        9080/TCP   34m
reviews       ClusterIP   172.21.163.74    <none>        9080/TCP   34m

and pods: -

kubectl get pods

NAME                              READY   STATUS    RESTARTS   AGE
details-v1-79f774bdb9-zvnzr       2/2     Running   0          34m
productpage-v1-6b746f74dc-swnwk   2/2     Running   0          34m
ratings-v1-b6994bb9-kspd6         2/2     Running   0          34m
reviews-v1-545db77b95-bwdmz       2/2     Running   0          34m
reviews-v2-7bf8c9648f-h2nsl       2/2     Running   0          34m
reviews-v3-84779c7bbc-x2v2l       2/2     Running   0          34m

before testing the application: -

kubectl exec "$(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}')" -c ratings -- curl -sS productpage:9080/productpage | grep -o "<title>.*</title>"

<title>Simple Bookstore App</title>

and then configured the Istio gateway: -

kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml

gateway.networking.istio.io/bookinfo-gateway created
virtualservice.networking.istio.io/bookinfo created

and ran the istioctl analysis: -

istioctl analyze

✔ No validation issues found when analyzing namespace: default.

and set the INGRESS_PORT and SECURE_INGRESS_PORT variables: -

export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')

export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')
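Those jsonpath lookups can be cross-checked with jq. A sketch, run against a hypothetical fragment of the istio-ingressgateway Service JSON ( the port numbers are illustrative ): -

```shell
# Illustrative Service spec fragment; the real thing comes from
# `kubectl -n istio-system get service istio-ingressgateway -o json`.
svc='{"spec":{"ports":[{"name":"http2","nodePort":30588},{"name":"https","nodePort":31390}]}}'
echo "$svc" | jq -r '.spec.ports[] | select(.name=="http2") | .nodePort'
# prints 30588
```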

and grabbed the external IP of my K8s Compute Node into the INGRESS_HOST variable: -

export INGRESS_HOST=$(kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}')

and set the GATEWAY_URL variable: -

export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT

and then hit the sample application: -

curl $(echo "http://$GATEWAY_URL/productpage")

which returns a bunch of HTML 🤣

I also hit the same URL via a real browser: -





And, finally, deploy and access the Dashboard: -

kubectl apply -f samples/addons

serviceaccount/grafana created
configmap/grafana created
service/grafana created
deployment.apps/grafana created
configmap/istio-grafana-dashboards created
configmap/istio-services-grafana-dashboards created
deployment.apps/jaeger created
service/tracing created
service/zipkin created
service/jaeger-collector created
serviceaccount/kiali created
configmap/kiali created
clusterrole.rbac.authorization.k8s.io/kiali-viewer created
clusterrole.rbac.authorization.k8s.io/kiali created
clusterrolebinding.rbac.authorization.k8s.io/kiali created
role.rbac.authorization.k8s.io/kiali-controlplane created
rolebinding.rbac.authorization.k8s.io/kiali-controlplane created
service/kiali created
deployment.apps/kiali created
serviceaccount/prometheus created
configmap/prometheus created
clusterrole.rbac.authorization.k8s.io/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
service/prometheus created
deployment.apps/prometheus created

kubectl rollout status deployment/kiali -n istio-system

Waiting for deployment "kiali" rollout to finish: 0 of 1 updated replicas are available...
deployment "kiali" successfully rolled out

istioctl dashboard kiali

http://localhost:20001/kiali

which popped up a browser ....

Having thrown some traffic at the application: -

for i in $(seq 1 100); do curl -s -o /dev/null "http://A.B.C.D:30588/productpage"; done

I could then see the application/flow/throughput etc. via the dashboard: -



To conclude, the Getting Started guide is really rather peachy, and definitely worth following through ....

Hacking go.mod files - there IS a better way

TL;DR: for our project, we're making heavy use of Go and, more specifically, modules: -

Modules are how Go manages dependencies.

A module is a collection of packages that are released, versioned, and distributed together. Modules may be downloaded directly from version control repositories or from module proxy servers.

A module is identified by a module path, which is declared in a go.mod file, together with information about the module’s dependencies. The module root directory is the directory that contains the go.mod file. The main module is the module containing the directory where the go command is invoked.

Each package within a module is a collection of source files in the same directory that are compiled together. A package path is the module path joined with the subdirectory containing the package (relative to the module root). For example, the module "golang.org/x/net" contains a package in the directory "html". That package’s path is "golang.org/x/net/html".

Source: Go Modules

In our specific use case, we have a go.mod file which contains the replace directive: -

A replace directive replaces the contents of a specific version of a module, or all versions of a module, with contents found elsewhere. The replacement may be specified with either another module path and version, or a platform-specific file path.

Source: replace directive

We had a requirement to update this replace directive, and have it point at a specific fork of a GitHub project rather than defaulting to the "main" upstream repository.

So, using the documentation's example: -

replace golang.org/x/net v1.2.3 => example.com/fork/net v1.4.5

we wanted to change from fork to, say, foobar AND specify a different version e.g. v1.5.7

Now it's perfectly easy to do this merely by hand-editing go.mod e.g. vi go.mod ....

BUT

there is ( as ever ) a better way ....

go mod edit --replace golang.org/x/net=example.com/foobar/net@v1.5.7

which gives us this in go.mod : -

replace golang.org/x/net => example.com/foobar/net v1.5.7

which is a much nicer approach ...

TL;DR: go mod edit is your friend!

Kubernetes - tailing Pod logs - TIL

I saw this on Twitter a few days back: -


but hadn't yet got around to playing with it.

I thought I'd test it with a brand new K8s 1.20 cluster running on IBM Kubernetes Service (IKS).

Firstly, I queried the pods that were running: -

kubectl get pods -A

NAMESPACE     NAME                                         READY   STATUS    RESTARTS   AGE

ibm-system    addon-catalog-source-2x7hj                   1/1     Running   0          41h
ibm-system    catalog-operator-578f7c8857-666wd            1/1     Running   0          41h
ibm-system    olm-operator-6c45d79d96-pjtmr                1/1     Running   0          41h
kube-system   calico-kube-controllers-78ccd56cd7-wqgtf     1/1     Running   0          41h
kube-system   calico-node-pg6vv                            1/1     Running   0          41h
kube-system   calico-typha-ddd44968b-86cgs                 1/1     Running   0          41h
kube-system   calico-typha-ddd44968b-ffxmt                 0/1     Pending   0          41h
kube-system   calico-typha-ddd44968b-mqjrb                 0/1     Pending   0          41h
kube-system   coredns-7fc9f85d9c-5rwwv                     1/1     Running   0          41h
kube-system   coredns-7fc9f85d9c-bxtts                     1/1     Running   0          41h
kube-system   coredns-7fc9f85d9c-qk6gv                     1/1     Running   0          41h
kube-system   coredns-autoscaler-9cccfb98d-mw9qj           1/1     Running   0          41h
kube-system   dashboard-metrics-scraper-7c75dcd466-d5b9f   1/1     Running   0          41h
kube-system   ibm-keepalived-watcher-856k6                 1/1     Running   0          41h
kube-system   ibm-master-proxy-static-10.144.213.225       2/2     Running   0          41h
kube-system   kubernetes-dashboard-659cd5b798-thd57        1/1     Running   0          41h
kube-system   metrics-server-b7bc76594-4fdg2               2/2     Running   0          41h
kube-system   vpn-546847fcbf-dzzml                         1/1     Running   0          41h

and chose the metrics-server-b7bc76594-4fdg2 pod, because it's running two containers: -

kubectl get pod metrics-server-b7bc76594-4fdg2 --namespace kube-system --output json | jq -r .spec.containers[].name

metrics-server
metrics-server-nanny

Grabbing the labels for this pod: -

kubectl get pod metrics-server-b7bc76594-4fdg2 --namespace kube-system --output json | jq .metadata.labels

{
  "cert-checksum": "6457ed123878693e37fbde8b4a0abf966ae050c3",
  "k8s-app": "metrics-server",
  "pod-template-hash": "b7bc76594",
  "version": "v0.4.4"
}

and choosing the k8s-app label, I followed Lili's advice: -

kubectl logs --namespace kube-system --selector k8s-app=metrics-server --container metrics-server --tail=-1

I1011 17:10:21.606350       1 dynamic_serving_content.go:130] Starting serving-cert::/etc/metrics-server-certs/tls.crt::/etc/metrics-server-certs/tls.key
I1011 17:10:21.606421       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1011 17:10:21.626860       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1011 17:10:21.646520       1 secure_serving.go:197] Serving securely on [::]:4443
I1011 17:10:21.627139       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I1011 17:10:21.647378       1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
I1011 17:10:21.647922       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I1011 17:10:21.606461       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I1011 17:10:21.686383       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I1011 17:10:22.007270       1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController 
I1011 17:10:22.046837       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
I1011 17:10:22.126237       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 

and: -

kubectl logs --namespace kube-system --selector k8s-app=metrics-server --container metrics-server-nanny --tail=-1

ERROR: logging before flag.Parse: I1011 17:10:03.951537       1 pod_nanny.go:68] Invoked by [/pod_nanny --config-dir=/etc/config --cpu=40m --extra-cpu=0.5m --memory=40Mi --extra-memory=4Mi --threshold=5 --deployment=metrics-server --container=metrics-server --poll-period=300000 --estimator=exponential --use-metrics=true]
ERROR: logging before flag.Parse: I1011 17:10:03.951645       1 pod_nanny.go:69] Version: 1.8.12
ERROR: logging before flag.Parse: I1011 17:10:03.951673       1 pod_nanny.go:85] Watching namespace: kube-system, pod: metrics-server-b7bc76594-4fdg2, container: metrics-server.
ERROR: logging before flag.Parse: I1011 17:10:03.951684       1 pod_nanny.go:86] storage: MISSING, extra_storage: 0Gi
ERROR: logging before flag.Parse: I1011 17:10:03.968655       1 pod_nanny.go:116] cpu: 40m, extra_cpu: 0.5m, memory: 40Mi, extra_memory: 4Mi
ERROR: logging before flag.Parse: I1011 17:10:03.968697       1 pod_nanny.go:145] Resources: [{Base:{i:{value:40 scale:-3} d:{Dec:<nil>} s:40m Format:DecimalSI} ExtraPerNode:{i:{value:5 scale:-4} d:{Dec:<nil>} s: Format:DecimalSI} Name:cpu} {Base:{i:{value:41943040 scale:0} d:{Dec:<nil>} s: Format:BinarySI} ExtraPerNode:{i:{value:4194304 scale:0} d:{Dec:<nil>} s:4Mi Format:BinarySI} Name:memory}]

In other words, I used the --selector switch to grab logs for the two containers by their label, rather than knowing/caring about the pod name.

I could've done much the same BUT via a much more complicated route using grep and awk : -

kubectl logs --namespace kube-system $(kubectl get pods --namespace kube-system | grep metrics-server | awk '{print $1}')  -c metrics-server

I1011 17:10:21.606350       1 dynamic_serving_content.go:130] Starting serving-cert::/etc/metrics-server-certs/tls.crt::/etc/metrics-server-certs/tls.key
I1011 17:10:21.606421       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1011 17:10:21.626860       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1011 17:10:21.646520       1 secure_serving.go:197] Serving securely on [::]:4443
I1011 17:10:21.627139       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I1011 17:10:21.647378       1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
I1011 17:10:21.647922       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I1011 17:10:21.606461       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I1011 17:10:21.686383       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I1011 17:10:22.007270       1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController 
I1011 17:10:22.046837       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
I1011 17:10:22.126237       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 


so Lili's approach is WAY more streamlined!

Tuesday, 12 October 2021

More fun with keyctl on Ubuntu

Following on from my earlier post: -

Fun with keyctl on Ubuntu

I started seeing the same Bad message exception when adding a certificate into a keyring: -

keyctl padd asymmetric foo @u < ~/ssl/server.crt 

add_key: Bad message

even though the required kernel module was loaded: -

lsmod |grep pkcs

pkcs8_key_parser       16384  0

and this appeared to be a valid certificate: -

file ~/ssl/server.crt

/home/hayd/ssl/server.crt: PEM certificate

openssl verify -verbose -CAfile ~/ssl/etcd-ca.crt ~/ssl/server.crt

/home/hayd/ssl/server.crt: OK

so, as per the above, the certificate is stored in Privacy Enhanced Mail (PEM) format: -

PEM or Privacy Enhanced Mail is a Base64 encoded DER certificate. PEM certificates are frequently used for web servers as they can easily be translated into readable data using a simple text editor. Generally when a PEM encoded file is opened in a text editor, it contains very distinct headers and footers.


Jumping to the conclusion that keyctl may require a different format, e.g. Distinguished Encoding Rules (DER), instead: -

DER (Distinguished Encoding Rules) is a binary encoding for X.509 certificates and private keys. Unlike PEM, DER-encoded files do not contain plain text statements such as -----BEGIN CERTIFICATE-----. DER files are most commonly seen in Java contexts.


I regenerated the certificate: -

openssl x509 -req -extfile <(printf "subjectAltName=DNS:localhost,DNS:genctl-etcd-cluster.genctl.svc,DNS:genctl-etcd-cluster-client.genctl.svc") -days 365 -in ~/ssl/server.csr -CA ~/ssl/etcd-ca.crt -CAkey ~/ssl/etcd-ca.key -CAcreateserial -out ~/ssl/server.der -outform der

in DER format ( via -outform der ) and verified it: -

file ~/ssl/server.der

/home/hayd/ssl/server.der: data
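Worth noting: regenerating wasn't strictly necessary, as openssl x509 can convert an existing PEM certificate straight to DER. A sketch, using the same paths as above: -

```shell
# Convert the existing PEM certificate to DER, then parse the DER copy
# back to prove that it round-trips.
openssl x509 -in ~/ssl/server.crt -outform der -out ~/ssl/server.der
openssl x509 -in ~/ssl/server.der -inform der -noout -subject
```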

and then imported it using keyctl : -

export description="Test1"

keyctl padd asymmetric $description @u < ~/ssl/server.der

526852507

and validated thusly: -

keyctl list @u

1 key in keyring:
526852507: --als--v  1000  1000 asymmetric: Test1

Nice!

Thursday, 7 October 2021

Fun with keyctl on Ubuntu

One of my friends is tinkering with keyctl and had a few questions about the Linux kernel modules e.g. pkcs8_key_parser.

So I ran through an end-to-end setup to grow my own understanding, with thanks to: -

keyring-ima-signer

for enabling me to grow my understanding 🤣🤣

What OS do I have ?

lsb_release -a

No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 20.04.3 LTS
Release: 20.04
Codename: focal

What kernel am I running ?

uname -a

Linux ubuntu 5.4.0-84-generic #94-Ubuntu SMP Thu Aug 26 20:27:37 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

Install the keyutils package

sudo apt install keyutils

Set a subject for the public key

export subject="/C=GB/O=IBM/CN="`hostname`

Set the description for the keyring entry

export description="Test1"

Generate a RSA private key

openssl genrsa | openssl pkcs8 -topk8 -nocrypt -outform DER -out privatekey.der

Generating RSA private key, 2048 bit long modulus (2 primes)
....................+++++
.....................+++++
e is 65537 (0x010001)
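Before handing the DER blob to keyctl, it can be sanity-checked with openssl pkey. A quick sketch: -

```shell
# Parse the PKCS#8 DER output back; a valid key prints its details.
openssl pkey -inform DER -in privatekey.der -noout -text | head -n 1
```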

Generate a public key/certificate

openssl req -x509 -key privatekey.der -out certificate.pem -days 365 -keyform DER -subj $subject

Add private key to keyring

keyctl padd asymmetric $description @u <privatekey.der

add_key: Bad message

Load required key parser module

sudo modprobe pkcs8_key_parser

Verify module load

lsmod |grep key

pkcs8_key_parser       16384  0

Add private key to keyring - second attempt

keyctl padd asymmetric $description @u <privatekey.der

676878733

Validate keyring

keyctl list @u

1 key in keyring:
676878733: --als--v  1000  1000 asymmetric: Test1

Wednesday, 6 October 2021

Podman broke my IBM Cloud

As per my earlier posts, I'm all over Podman at the moment.

Having removed Docker Desktop, I then realised that this broke the IBM Cloud CLI tool, specifically the Container Registry plugin: -

ic cr login

FAILED

Could not locate 'docker'. Check your 'docker' installation and path.

Of course, Docker isn't installed, and thus is no longer in the path: -

which docker

which returns NADA!

But Podman is there: -

which podman

/usr/local/bin/podman

so I created an alias/shortcut/symbolic link between the two: -

ln -s `which podman` /usr/local/bin/docker

so now Docker is back: -

which docker

/usr/local/bin/docker

and is remarkably similar to Podman: -

docker --version

docker version 3.4.0

podman --version

podman version 3.4.0
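The symlink mechanics in miniature, using a scratch directory and a stand-in script rather than the real binaries: -

```shell
# A stand-in "podman" script, symlinked as "docker" - same file, two names.
dir=$(mktemp -d)
printf '#!/bin/sh\necho "podman version 3.4.0"\n' > "$dir/podman"
chmod +x "$dir/podman"
ln -s "$dir/podman" "$dir/docker"
"$dir/docker"
# prints podman version 3.4.0
```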

and now the cloud is happy: -

ic cr login

Logging in to 'us.icr.io'...

Logged in to 'us.icr.io'.


OK


Word to the wise - check your serial

I'm transitioning from an iPhone 8 Plus to an iPhone 13, and had wiped the older phone ( having remembered to turn off Find My... and also log out from iCloud ).

All was good, and the new iPhone 13 is oh-so-shiny; I'm a very happy camper ...

However, as I'm handing down the older model to a family member, I wanted to go get the battery replaced.

The Apple site asks, perhaps quite wisely, for a serial or an International Mobile Equipment Identity (IMEI) number.

Try getting that from the Settings page on a wiped, yet-to-be-setup iPhone ...

Thankfully, this article had the answer: -

Find the serial number or IMEI on your iPhone, iPad or iPod touch


It's on the SIM tray; just grab a handy paperclip, and you're good to go !

Monday, 4 October 2021

Podman - pruning

So, back in the Docker days, I wrote a basic little script called prune.sh which would ... prune containers that had exited: -

#!/usr/bin/env bash
echo "Removing Docker containers that have Exited"
docker rm `docker ps -a|grep Exited|awk '{print $1}'`
echo "Completed"

so I wanted to do much the same now that I'm using podman.

My starting position is that I've got two containers, one recently exited and one running: -

podman ps -a

CONTAINER ID  IMAGE                                                               COMMAND               CREATED         STATUS                     PORTS                   NAMES
413adb675c62  docker.io/library/hello-world:latest                                /hello                37 seconds ago  Exited (0) 38 seconds ago  0.0.0.0:8080->8080/tcp  hopeful_vaughan
408ce1f9513f  us.icr.io/demo_time/hello_world_nginx_june_2021:latest  nginx -g daemon o...  32 seconds ago  Up 32 seconds ago          0.0.0.0:8443->443/tcp   zealous_brattain

so I want to remove them both.

Now obviously I don't want to be bothered typing in the container ID or name, so let's just get a list of the IDs: -

podman ps -a | grep -v CONTAINER | awk '{print $1}'

413adb675c62
408ce1f9513f
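For illustration, here's that same grep / awk pipeline run against a canned copy of the table above; note that podman ps also has a --quiet / -q switch which prints the bare IDs directly: -

```shell
# The header-stripping pipeline against canned `podman ps -a` output;
# `podman ps -aq` achieves the same without the pipeline.
ps_out='CONTAINER ID  IMAGE                                 COMMAND  CREATED         STATUS      PORTS                   NAMES
413adb675c62  docker.io/library/hello-world:latest  /hello   37 seconds ago  Exited (0)  0.0.0.0:8080->8080/tcp  hopeful_vaughan'
echo "$ps_out" | grep -v CONTAINER | awk '{print $1}'
# prints 413adb675c62
```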

and then use that in a loop to remove the unwanted containers: -

for i in $(podman ps -a | grep -v CONTAINER | awk '{print $1}'); do podman rm $i; done

413adb675c62
Error: cannot remove container 408ce1f9513f8056497d9e6353dd9b210c59d38eafe30c698f202ba6f240babe as it is running - running or paused containers cannot be removed without force: container state improper

so that's a 50% success.

Perhaps I need to add in a podman stop before the podman rm like this: -

podman ps -a

CONTAINER ID  IMAGE                                                               COMMAND               CREATED             STATUS                         PORTS                   NAMES
408ce1f9513f  us.icr.io/demo_time/hello_world_nginx_june_2021:latest  nginx -g daemon o...  7 minutes ago       Exited (0) 49 seconds ago      0.0.0.0:8443->443/tcp   zealous_brattain
43a6797e1036  docker.io/library/hello-world:latest                                /hello                About a minute ago  Exited (0) About a minute ago  0.0.0.0:8080->8080/tcp  hopeful_robinson

for i in $(podman ps -a | grep -v CONTAINER | awk '{print $1}'); do podman stop $i && podman rm $i; done

408ce1f9513f
408ce1f9513f
43a6797e1036
43a6797e1036

podman ps -a

CONTAINER ID  IMAGE       COMMAND     CREATED     STATUS      PORTS       NAMES

I can do much the same for the images, if I want my prune to be really really ruthless: -

for i in $(podman images | grep -v REPOSITORY | awk '{print $1}'); do podman rmi $i; done

Untagged: docker.io/library/hello-world:latest
Deleted: feb5d9fea6a5e9606aa995e879d862b825965ba48de054caab5ef356dc6b3412
Untagged: us.icr.io/demo_time/hello_world_nginx_june_2021:latest
Deleted: c5318a40be88ede4e70c8c11f552a765c1c8aa5965ebd428da0b4766c2546968

So here's my final script: -

#!/usr/bin/env bash

echo "Removing containers"

for i in $(podman ps -a | grep -v CONTAINER | awk '{print $1}'); do podman stop $i && podman rm $i; done

echo "Removing images"

for i in $(podman images | grep -v REPOSITORY | awk '{print $1}'); do podman rmi $i; done

echo "Done"

~/prune.sh
 
Removing containers
854a1937c347
854a1937c347
ERRO[12181] accept tcp [::]:8443: use of closed network connection 
95ef5827777e
95ef5827777e
Removing images
Untagged: docker.io/library/hello-world:latest
Deleted: feb5d9fea6a5e9606aa995e879d862b825965ba48de054caab5ef356dc6b3412
Untagged: us.icr.io/demo_time/hello_world_nginx_june_2021:latest
Deleted: c5318a40be88ede4e70c8c11f552a765c1c8aa5965ebd428da0b4766c2546968
Done

podman ps -a

CONTAINER ID  IMAGE       COMMAND     CREATED     STATUS      PORTS       NAMES

podman images

REPOSITORY  TAG         IMAGE ID    CREATED     SIZE
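For what it's worth, podman's own flags can do all of the scraping for you: -a/--all selects every container or image, -q prints bare IDs, and --force stops running containers first. A hedged rewrite of prune.sh along those lines (assuming podman 3.x flags, and guarded so it's a no-op on a machine without podman): -

```shell
#!/usr/bin/env bash
# prune.sh revisited: -a/--all selects everything, --force stops running
# containers first, so no grep/awk scraping is needed at all.
# Assumes podman 3.x flags; guarded so this is a no-op without podman.
prune() {
  if command -v podman >/dev/null 2>&1; then
    echo "Removing containers"
    podman rm --all --force >/dev/null
    echo "Removing images"
    podman rmi --all --force >/dev/null
    echo "Done"
  else
    echo "podman not found - skipping"
  fi
}
out=$(prune)
echo "$out"
```

Similarly, podman images -q emits image IDs rather than repository names, and podman system prune --all is the nuclear one-liner.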

And there's more - podman in action

Following on from my two earlier posts: -

Podman - my first time

Podman and Homebrew and Docker - Permission to launch ...

here we go, using Podman to run a container from a "Here's one I created earlier" container image that hosts Nginx on the internal container port of 443 using SSL/TLS : -

Starting position - no containers nor images

podman ps -a

CONTAINER ID  IMAGE       COMMAND     CREATED     STATUS      PORTS       NAMES

podman images

REPOSITORY  TAG         IMAGE ID    CREATED     SIZE

Logging into IBM Container Registry

export APIKEY="<THIS IS WHERE MY API KEY GOES>"

echo $APIKEY | podman login us.icr.io --username iamapikey --password-stdin

Login Succeeded!

Pulling image

podman pull us.icr.io/demo_time/hello_world_nginx_june_2021:latest

Trying to pull us.icr.io/demo_time/hello_world_nginx_june_2021:latest...

Getting image source signatures
Checking if image destination supports signatures
Copying blob sha256:5843afab387455b37944e709ee8c78d7520df80f8d01cf7f861aae63beeddb6b
Copying blob sha256:0dc18a5274f2c43405a2ecccd3b10c159e3141b963a899c1f8127fd921a919dc
Copying blob sha256:48a0ee941dcdebbf017f21b46c5dd6f6ee81f8086e9347e852a067cf6f18209a
Copying blob sha256:2446243a1a3fbd03fffa8180f51dee385c4c5dbd91a84ebcdb6958f0e42cf764
Copying blob sha256:cbf0756b41fb647e1222f78d79397c27439b0c3a9b27aafbdd34aa5b72bd6a49
Copying blob sha256:c72750a979b985e3c3d6299106d90b0cff7e0b833a53ac02fcb7d76bd5fe4066
Copying blob sha256:48a0ee941dcdebbf017f21b46c5dd6f6ee81f8086e9347e852a067cf6f18209a
Copying blob sha256:45b6990e7dbfc9c43a357f0eb0ff074f159ed75c6ed865d0d9dad33a028cc2a2
Copying blob sha256:cbf0756b41fb647e1222f78d79397c27439b0c3a9b27aafbdd34aa5b72bd6a49
Copying blob sha256:5e158c5bf01f5e088f575e2fbc228bf6412be3c3c203d27d8a54e81eb9dc469e
Copying blob sha256:5843afab387455b37944e709ee8c78d7520df80f8d01cf7f861aae63beeddb6b
Copying blob sha256:2446243a1a3fbd03fffa8180f51dee385c4c5dbd91a84ebcdb6958f0e42cf764
Copying blob sha256:2a7c6912841852e1c853229bd6a6e02035b47a39aec2e98d5a2b0168a843d879
Copying blob sha256:c72750a979b985e3c3d6299106d90b0cff7e0b833a53ac02fcb7d76bd5fe4066
Copying blob sha256:449e432369550bb7d8e8d7424208c98b20e2fa419c885b5786523597afe613f1
Copying blob sha256:5e158c5bf01f5e088f575e2fbc228bf6412be3c3c203d27d8a54e81eb9dc469e
Copying blob sha256:0dc18a5274f2c43405a2ecccd3b10c159e3141b963a899c1f8127fd921a919dc
Copying blob sha256:747e67851ee5fae34759ef37ad7aa7fc1a3f547a47d949ba03fcf6a8aa391146
Copying blob sha256:45b6990e7dbfc9c43a357f0eb0ff074f159ed75c6ed865d0d9dad33a028cc2a2
Copying blob sha256:2a7c6912841852e1c853229bd6a6e02035b47a39aec2e98d5a2b0168a843d879
Copying blob sha256:747e67851ee5fae34759ef37ad7aa7fc1a3f547a47d949ba03fcf6a8aa391146
Copying blob sha256:0217b8cca4864fe2a874053cae58c1d3d195dc5763fb081b1939e241c4f58ed3
Copying blob sha256:449e432369550bb7d8e8d7424208c98b20e2fa419c885b5786523597afe613f1
Copying blob sha256:b6f423348fcd82b9ce715e06704d4ab65f5a7ae41ddc2c4fff8806a66c57ee93
Copying blob sha256:0217b8cca4864fe2a874053cae58c1d3d195dc5763fb081b1939e241c4f58ed3
Copying blob sha256:b6f423348fcd82b9ce715e06704d4ab65f5a7ae41ddc2c4fff8806a66c57ee93
Copying config sha256:c5318a40be88ede4e70c8c11f552a765c1c8aa5965ebd428da0b4766c2546968
Writing manifest to image destination
Storing signatures
c5318a40be88ede4e70c8c11f552a765c1c8aa5965ebd428da0b4766c2546968

Verify pull

podman images

REPOSITORY                                                   TAG         IMAGE ID      CREATED      SIZE
us.icr.io/demo_time/hello_world_nginx_june_2021  latest      c5318a40be88  2 weeks ago  36.8 MB

Create a container

Note that we're using the --detach CLI parameter to run it as a daemon and the --publish parameter to map host port 8443 to the container's internal port 443: -

podman run --detach --publish 8443:443 us.icr.io/demo_time/hello_world_nginx_june_2021

1ac8b1b735d9c1407a143e09f71a86d39ed27b12777a4c2425f1196ae21b9f50

Verify running container

podman ps

CONTAINER ID  IMAGE                                                               COMMAND               CREATED         STATUS             PORTS                  NAMES
1ac8b1b735d9  us.icr.io/demo_time/hello_world_nginx_june_2021:latest  nginx -g daemon o...  26 seconds ago  Up 26 seconds ago  0.0.0.0:8443->443/tcp  heuristic_euclid

Validate HTTPS listener

netstat -an | grep 8443

tcp46      0      0  *.8443                 *.*                    LISTEN     
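netstat isn't always installed (and its output varies by platform); bash's /dev/tcp pseudo-device offers a crude portable probe by attempting a real TCP connection. A self-contained sketch, with a throwaway Python listener standing in for the container's published port: -

```shell
#!/usr/bin/env bash
# Probe a port via bash's /dev/tcp pseudo-device; a throwaway Python
# listener stands in here for the container's published port (8443).
python3 -m http.server 8443 --bind 127.0.0.1 >/dev/null 2>&1 &
srv=$!
sleep 1
if (echo >/dev/tcp/127.0.0.1/8443) 2>/dev/null; then
  probe="open"
else
  probe="closed"
fi
kill "$srv" 2>/dev/null
echo "port 8443 is $probe"
```

lsof -iTCP:8443 -sTCP:LISTEN is another option on macOS.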

Validate HTTPS endpoint

openssl s_client -connect localhost:8443 </dev/null

...

SSL handshake has read 2262 bytes and written 289 bytes
---
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384
Server public key is 4096 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
    Protocol  : TLSv1.2
    Cipher    : ECDHE-RSA-AES256-GCM-SHA384
...

Test Nginx from the CLI

curl --insecure https://localhost:8443

- Note that we use the --insecure CLI parameter because Nginx is presenting a self-signed SSL certificate that cURL won't automagically trust

<html>
  <head>
    <title>Hello World</title>
  </head>
  <body>
    <div class="info">
      <p>
        <h2>
          <span>Welcome to IBM Hyper Protect ...</span>
        </h2>
      </p>
      <p>
        <h2>
          <span>Message of the Day .... Drink More Herbal Tea!!</span>
        </h2>
      </p>
      <p>
        <h2>
          <span>( and, of course, Hello World! )</span>
        </h2>
      </p>
    </div>
  </body>
</html>
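Rather than switching verification off, curl can be pointed at the certificate itself with --cacert. The trust logic is easy to demonstrate locally with a throwaway self-signed certificate (filenames illustrative): -

```shell
#!/usr/bin/env bash
# Generate a throwaway self-signed cert (standing in for the one Nginx
# presents) and show that it verifies only against itself - which is
# exactly what curl's --cacert flag arranges
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem \
  -days 1 -subj "/CN=localhost" 2>/dev/null
verify=$(openssl verify -CAfile cert.pem cert.pem)
echo "$verify"
rm -f key.pem cert.pem
```

Against the real container you'd capture the presented certificate first, e.g. openssl s_client -connect localhost:8443 </dev/null 2>/dev/null | openssl x509 > server.pem, and then curl --cacert server.pem https://localhost:8443.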

Test Nginx from a browser

- Note that I'm using Firefox as Chrome has decided that it's just too secure to allow self-signed certificates 😁





Stop the container

podman stop 1ac8b1b735d9

ERRO[7790] accept tcp [::]:8443: use of closed network connection 
1ac8b1b735d9

Remove the container

podman rm 1ac8b1b735d9

1ac8b1b735d9

Remove the image

podman rmi us.icr.io/demo_time/hello_world_nginx_june_2021:latest

Untagged: us.icr.io/demo_time/hello_world_nginx_june_2021:latest
Deleted: c5318a40be88ede4e70c8c11f552a765c1c8aa5965ebd428da0b4766c2546968

Podman and Homebrew and Docker - Permission to launch ...

Following my earlier post: -

Podman - my first time

and harking back to an older post: -

Homebrew on macOS - Docker says "No" - well, kinda

I thought I'd update Homebrew: -

brew upgrade

Updating Homebrew...
==> Auto-updated Homebrew!
Updated 2 taps (homebrew/core and homebrew/cask).
==> New Formulae
ca-certificates                            clickhouse-odbc                            cmake-docs                                 texlive
==> Updated Formulae
Updated 187 formulae.
==> New Casks
plistedplus
==> Updated Casks
Updated 116 casks.
==> Upgrading 14 outdated packages:
git-lfs 2.13.3 -> 3.0.1
coreutils 8.32 -> 9.0
podman 3.4.0 -> 3.4.0_1
yq 4.9.6 -> 4.13.3
putty 0.75 -> 0.76
maven 3.8.1 -> 3.8.2
htop 3.0.5 -> 3.1.0
openjdk 16.0.1 -> 17
openssl@1.1 1.1.1l -> 1.1.1l_1
kubernetes-cli 1.21.2 -> 1.22.2
fltk 1.3.6 -> 1.3.7
libzip 1.8.0 -> 1.8.0_1
helm 3.6.1 -> 3.7.0
jpeg-turbo 2.1.0 -> 2.1.1
...
Removing: /Users/hayd/Library/Caches/Homebrew/libidn2_bottle_manifest--2.3.1... (5.2KB)
Removing: /Users/hayd/Library/Caches/Homebrew/guile_bottle_manifest--3.0.7... (6.3KB)
Removing: /Users/hayd/Library/Caches/Homebrew/rust_bottle_manifest--1.51.0... (6.4KB)
Removing: /Users/hayd/Library/Caches/Homebrew/node_bottle_manifest--16.2.0... (9.6KB)
Removing: /Users/hayd/Library/Caches/Homebrew/fltk_bottle_manifest--1.3.6... (4.9KB)
Removing: /Users/hayd/Library/Caches/Homebrew/yq_bottle_manifest--4.9.3... (4.3KB)
Removing: /Users/hayd/Library/Caches/Homebrew/python@3.8_bottle_manifest--3.8.10... (13.7KB)
Removing: /Users/hayd/Library/Caches/Homebrew/node_bottle_manifest--16.1.0... (9KB)
Removing: /Users/hayd/Library/Caches/Homebrew/libtasn1_bottle_manifest--4.17.0... (4.5KB)
Removing: /Users/hayd/Library/Caches/Homebrew/yq_bottle_manifest--4.8.0... (4.3KB)
Removing: /Users/hayd/Library/Caches/Homebrew/gnutls_bottle_manifest--3.6.16... (11KB)
Removing: /Users/hayd/Library/Caches/Homebrew/putty_bottle_manifest--0.75... (4.2KB)
Removing: /Users/hayd/Library/Caches/Homebrew/pyenv_bottle_manifest--1.2.27... (16.0KB)
Removing: /Users/hayd/Library/Caches/Homebrew/kubernetes-cli_bottle_manifest--1.21.1... (4.3KB)
Removing: /Users/hayd/Library/Caches/Homebrew/python@3.9_bottle_manifest--3.9.5... (12.5KB)
Removing: /Users/hayd/Library/Caches/Homebrew/pyenv_bottle_manifest--2.0.0... (16.6KB)
Removing: /Users/hayd/Library/Caches/Homebrew/libssh2_bottle_manifest--1.9.0_1... (5.8KB)
Removing: /Users/hayd/Library/Logs/Homebrew/fdupes... (64B)
Removing: /Users/hayd/Library/Logs/Homebrew/pcre2... (64B)
Error: Permission denied @ apply2files - /usr/local/lib/docker/cli-plugins

The computer said "No".

So we've got some hangover from Docker Desktop, which I'd uninstalled by dragging Docker.app from the Applications folder to the Trashcan.

Thankfully, I remembered the previous incident of this on my 2014 Mac mini.

Knowing that this was permission-related, I checked the offending folder - /usr/local/lib/docker - as follows: -

ls -al /usr/local/lib/docker/

total 0
drwxr-xr-x    3 root  admin    96 15 Sep 17:25 .
drwxrwxr-x  294 hayd  admin  9408  1 Oct 14:52 ..
lrwxr-xr-x    1 root  admin    55 15 Sep 17:25 cli-plugins -> /Applications/Docker.app/Contents/Resources/cli-plugins

and just fixed up the permissions: -

sudo chown -R hayd:admin /usr/local/lib
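For the record, the root cause here is a symlink left pointing into the now-trashed Docker.app, so an arguably more surgical fix would have been sudo rm -rf /usr/local/lib/docker rather than a blanket chown. A dangling link still answers to -L but fails -e, which makes such leftovers easy to hunt down (paths below are illustrative): -

```shell
#!/usr/bin/env bash
# Recreate the situation: a symlink into an application that no longer
# exists, then scan for links whose target has gone away
mkdir -p demo/docker
ln -sf /Applications/Docker.app/Contents/Resources/cli-plugins demo/docker/cli-plugins
found=""
for f in demo/docker/*; do
  # -L: it is a symlink; ! -e: its target does not resolve
  if [ -L "$f" ] && [ ! -e "$f" ]; then
    found="$f"
    echo "dangling: $f"
  fi
done
rm -rf demo
```

On GNU find, find /usr/local/lib -xtype l does the same hunt in one go.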

and then re-ran the Brew upgrade: -

brew upgrade

and now it's happy: -

Bash completion has been installed to:
  /usr/local/etc/bash_completion.d
==> coreutils
Commands also provided by macOS and the commands dir, dircolors, vdir have been installed with the prefix "g".
If you need to use these commands with their normal names, you can add a "gnubin" directory to your PATH with:
  PATH="/usr/local/opt/coreutils/libexec/gnubin:$PATH"

Sadly, despite the supposed upgrade to podman, the version still reports as before: -

podman --version

podman version 3.4.0

podman version

Client:
Version:      3.4.0
API Version:  3.4.0
Go Version:   go1.17.1
Built:        Thu Sep 30 19:44:31 2021
OS/Arch:      darwin/amd64

Server:
Version:      3.3.1
API Version:  3.3.1
Go Version:   go1.16.6
Built:        Mon Aug 30 21:46:36 2021
OS/Arch:      linux/amd64

and the initial JSON-related issue persists: -

podman run hello-world

Error: error preparing container c30900631b5e91c564a3c8093dc11ff975bd09b02d156b95f6ef243844548320 for attach: error configuring network namespace for container c30900631b5e91c564a3c8093dc11ff975bd09b02d156b95f6ef243844548320: error adding pod musing_kirch_musing_kirch to CNI network "podman": unexpected end of JSON input

Sigh!!!

Podman - my first time

Now that things are a-changin' with Docker Desktop, I wanted to give podman a try.

I'm running macOS 11.6 Big Sur and have Homebrew installed: -

brew --version

Homebrew 3.2.14
Homebrew/homebrew-core (git revision 161852c41e5; last commit 2021-10-01)
Homebrew/homebrew-cask (git revision 486624192d; last commit 2021-10-01)

so I simply installed podman via brew install podman.

Having installed it, I checked the version of podman thusly: -

podman version

Client:
Version:      3.4.0
API Version:  3.4.0
Go Version:   go1.17.1
Built:        Thu Sep 30 19:44:31 2021
OS/Arch:      darwin/amd64
Server:
Version:      3.3.1
API Version:  3.3.1
Go Version:   go1.16.6
Built:        Mon Aug 30 21:46:36 2021
OS/Arch:      linux/amd64

Starting at the beginning, I tried ( and sadly failed ) to run the stock Hello World image: -

podman run hello-world

Resolved "hello-world" as an alias (/etc/containers/registries.conf.d/000-shortnames.conf)
Trying to pull docker.io/library/hello-world:latest...
Getting image source signatures
Copying blob sha256:2db29710123e3e53a794f2694094b9b4338aa9ee5c40b930cb8063a1be392c54
Copying blob sha256:2db29710123e3e53a794f2694094b9b4338aa9ee5c40b930cb8063a1be392c54
Copying config sha256:feb5d9fea6a5e9606aa995e879d862b825965ba48de054caab5ef356dc6b3412
Writing manifest to image destination
Storing signatures
Error: error preparing container f8dee00901c488f0362af2ffe6e8736a41a609890bfc2f3f8863349932cb050d for attach: error configuring network namespace for container f8dee00901c488f0362af2ffe6e8736a41a609890bfc2f3f8863349932cb050d: error adding pod confident_panini_confident_panini to CNI network "podman": unexpected end of JSON input

Thankfully, GitHub had the answer: -


which says in part: -

Should be fixed once podman 3.4 lands in CoreOS, as workaround you have to forward at least one port, e.g. -p 8080.

This is a duplicate of: -


which says, in part: -

TLDR, the bug is that you cannot use the machine plugin without ports.

Given this, I tried the circumvention ( even though Hello World doesn't actually need a port ) : -

podman run -p 8080:8080 hello-world

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

~ $ ERRO[170262] accept tcp [::]:8080: use of closed network connection 

Given the GitHub issues, and noting that the "server" side of Podman is still 3.3.1 : -

podman version

Client:
Version:      3.4.0
API Version:  3.4.0
Go Version:   go1.17.1
Built:        Thu Sep 30 19:44:31 2021
OS/Arch:      darwin/amd64

Server:
Version:      3.3.1
API Version:  3.3.1
Go Version:   go1.16.6
Built:        Mon Aug 30 21:46:36 2021
OS/Arch:      linux/amd64

I'll keep an eye on the issue etc. and watch for a fix on the server side ....

Wednesday, 29 September 2021

Tinkering with Kubernetes Networking - Today I'm Learning ....

I'm having oh-so-much fun debugging a TLS-encrypted, client-certificate-protected service running on Kubernetes 1.20, and was looking for a way to "see inside" the ClusterIP service itself.

This: -

Lab 01 - Kubernetes Networking, using Service Types, Ingress and Network Policies to Control Application Access

provided some really useful insight: -

The HelloWorld Service is accessible now but only within the cluster. To expose a Service onto an external IP address, you have to create a ServiceType other than ClusterIP. Apps inside the cluster can access a pod by using the in-cluster IP of the service or by sending a request to the name of the service. When you use the name of the service, kube-proxy looks up the name in the cluster DNS provider and routes the request to the in-cluster IP address of the service.


To allow external traffic into a kubernetes cluster, you need a NodePort ServiceType. If you set the type field of Service to NodePort, Kubernetes allocates a port in the range 30000-32767. Each node proxies the assigned NodePort (the same port number on every Node) into your Service.

Patch the existing Service for helloworld to type: NodePort,

$ kubectl patch svc helloworld -p '{"spec": {"type": "NodePort"}}'

service/helloworld patched

Describe the Service again,

$ kubectl describe svc helloworld

Name:                     helloworld
Namespace:                default
Labels:                   app=helloworld
Annotations:              <none>
Selector:                 app=helloworld
Type:                     NodePort
IP:                       172.21.161.255
Port:                     <unset>  8080/TCP
TargetPort:               http-server/TCP
NodePort:                 <unset>  31777/TCP
Endpoints:                172.30.153.79:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

In this example, Kubernetes added a NodePort with port value 31777. For everyone, this is likely to be a different port in the range 30000-32767.

You can now connect to the service via the public IP address of any worker node in the cluster and traffic gets forwarded to the service, which uses service discovery and the selector of the Service to deliver the request to the assigned pod. With this piece in place we now have a complete pipeline for load balancing external client requests to all the nodes in the cluster.

With that, I can (temporarily) patch my ClusterIP service to a NodePort, and then poke into it from the outside, using the K8s Node's external IP: -

kubectl get node nodename --output json | jq -r .status.addresses

and the newly allocated NodePort e.g. 31777.
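Pulling just the ExternalIP out of that node JSON is a one-line jq filter; here it runs against a canned fragment of the node document (addresses made up): -

```shell
#!/usr/bin/env bash
# Canned subset of 'kubectl get node <nodename> --output json', enough to
# show the jq filter selecting the ExternalIP entry
node_json='{"status":{"addresses":[
  {"type":"InternalIP","address":"10.184.92.42"},
  {"type":"ExternalIP","address":"169.51.204.11"}]}}'
ext_ip=$(echo "$node_json" \
  | jq -r '.status.addresses[] | select(.type=="ExternalIP") | .address')
echo "$ext_ip"
```

curl against that address and the NodePort (e.g. http://169.51.204.11:31777) should then reach the service from outside, and once the debugging is done, kubectl patch svc helloworld -p '{"spec": {"type": "ClusterIP"}}' flips the service back.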

Lovely!
