Monday 17 May 2021

Calico Node, more like Calico No

I spent a happy few hours over the weekend trying to work out why my Kubernetes 1.21 cluster wasn't behaving as expected.

I was seeing a bunch o' weirdness whereby certain pods weren't able to access certain services; it manifested specifically when I was trying/failing to create a DataVolume using the KubeVirt Containerised Data Importer (CDI) capability.

I was seeing exceptions such as: -

Error from server (InternalError): error when creating "create_volume.yaml": Internal error occurred: failed calling webhook "datavolume-mutate.cdi.kubevirt.io": Post "https://cdi-api.cdi.svc:443/datavolume-mutate?timeout=30s": dial tcp 10.102.58.243:443: i/o timeout

from: -

kubectl apply -f create_volume.yaml
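
For context, create_volume.yaml holds a CDI DataVolume definition. I've not reproduced mine here, but a minimal manifest looks something like this - a sketch based upon the CDI examples, using the stock KubeVirt demo image rather than necessarily the one I used: -

cat <<EOF | kubectl apply -f -
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: registry-image-datavolume
spec:
  source:
    registry:
      # the stock CDI demo image - a placeholder, not necessarily my actual source
      url: "docker://kubevirt/fedora-cloud-container-disk-demo"
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 5Gi
EOF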

After much digging and DNS debugging, including using BusyBox to resolve various K8s services: -

kubectl run -it --rm --restart=Never busybox --image=gcr.io/google-containers/busybox -- nslookup cdi-api.cdi

Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      cdi-api.cdi
Address 1: 10.102.58.243 cdi-api.cdi.svc.cluster.local
pod "busybox" deleted

kubectl run -it --rm --restart=Never busybox --image=gcr.io/google-containers/busybox -- nslookup kubernetes.default

Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
pod "busybox" deleted

All of which looked OK, so something inspired me to look at Calico Node, which is the networking layer overlaying my cluster: -

kubectl get pods -A|grep calico

kube-system   calico-kube-controllers-bf965bfd8-hg82b          1/1     Running   0          58m
kube-system   calico-node-8zkvt                                0/1     Running   0          7m24s
kube-system   calico-node-srmj6                                0/1     Running   0          7m47s

Noticing that both calico-node pods were showing 0/1 rather than 1/1 - meaning that the pods were running but failing their readiness probes - I dug further: -

kubectl describe pod `kubectl get pods -A|grep calico-node|awk '{print $2}'` --namespace kube-system

which, in part, showed: -

  Warning  Unhealthy  9m44s  kubelet            Readiness probe failed: calico/node is not ready: felix is not ready: Get "http://localhost:9099/readiness": dial tcp 127.0.0.1:9099: connect: connection refused
  Warning  Unhealthy  9m42s  kubelet            Readiness probe failed: calico/node is not ready: BIRD is not ready: Error querying BIRD: unable to connect to BIRDv4 socket: dial unix /var/run/calico/bird.ctl: connect: connection refused
  Warning  Unhealthy  9m32s  kubelet            Readiness probe failed: 2021-05-17 11:33:08.203 [INFO][197] confd/health.go 180: Number of node(s) with BGP peering established = 0
calico/node is not ready: BIRD is not ready: BGP not established with 10.51.16.137
  Warning  Unhealthy  9m22s  kubelet  Readiness probe failed: 2021-05-17 11:33:18.195 [INFO][231] confd/health.go 180: Number of node(s) with BGP peering established = 0
calico/node is not ready: BIRD is not ready: BGP not established with 10.51.16.137
  Warning  Unhealthy  9m12s  kubelet  Readiness probe failed: 2021-05-17 11:33:28.278 [INFO][268] confd/health.go 180: Number of node(s) with BGP peering established = 0
calico/node is not ready: BIRD is not ready: BGP not established with 10.51.16.137
  Warning  Unhealthy  9m2s  kubelet  Readiness probe failed: 2021-05-17 11:33:38.334 [INFO][301] confd/health.go 180: Number of node(s) with BGP peering established = 0
calico/node is not ready: BIRD is not ready: BGP not established with 10.51.16.137
  Warning  Unhealthy  8m52s  kubelet  Readiness probe failed: 2021-05-17 11:33:48.182 [INFO][319] confd/health.go 180: Number of node(s) with BGP peering established = 0
calico/node is not ready: BIRD is not ready: BGP not established with 10.51.16.137
  Warning  Unhealthy  8m42s  kubelet  Readiness probe failed: 2021-05-17 11:33:58.266 [INFO][356] confd/health.go 180: Number of node(s) with BGP peering established = 0
calico/node is not ready: BIRD is not ready: BGP not established with 10.51.16.137
  Warning  Unhealthy  8m32s  kubelet  Readiness probe failed: 2021-05-17 11:34:08.185 [INFO][377] confd/health.go 180: Number of node(s) with BGP peering established = 0
calico/node is not ready: BIRD is not ready: BGP not established with 10.51.16.137
  Warning  Unhealthy  4m42s (x23 over 8m22s)  kubelet  (combined from similar events): Readiness probe failed: 2021-05-17 11:37:58.234 [INFO][1014] confd/health.go 180: Number of node(s) with BGP peering established = 0
calico/node is not ready: BIRD is not ready: BGP not established with 10.51.16.137

Knowing that my firewall configuration - iptables - was clean n' green, in that I'd opened up the Border Gateway Protocol (BGP) port 179 on both the Control Plane and Compute nodes: -

iptables -A INPUT -p tcp -m tcp --dport 179 -j ACCEPT
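
As a quick sanity check that BGP traffic could actually flow, one can probe port 179 from the peer node - assuming netcat is installed: -

# probe TCP 179 (BGP) on the other node - 10.51.16.137 being the peer from the events above
nc -vz 10.51.16.137 179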

I looked back through my notes, and remembered the issue with IP_AUTODETECTION_METHOD and the Calico Node daemonset.

I checked the daemonset: -

kubectl get daemonset -A

NAMESPACE     NAME           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   calico-node    2         2         0       2            0           kubernetes.io/os=linux   64m
kube-system   kube-proxy     2         2         2       2            2           kubernetes.io/os=linux   112m
kubevirt      virt-handler   1         1         1       1            1           kubernetes.io/os=linux   30m

and noticed that the calico-node daemonset was, like the pods, showing as unready (0 rather than 2 in the READY column).

I inspected the offending daemonset: -

kubectl get daemonset/calico-node -n kube-system --output json | jq '.spec.template.spec.containers[].env[] | select(.name | startswith("IP"))'

{
  "name": "IP",
  "value": "autodetect"
}

noting that the IP_AUTODETECTION_METHOD environment variable wasn't specifically set.
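
For reference, the Calico documentation describes several valid values for that variable, the common ones being: -

# IP_AUTODETECTION_METHOD options, per the Calico documentation: -
#   first-found          - the first valid interface found (the behaviour when unset)
#   interface=eth.*      - the first address on an interface matching the regex
#   can-reach=8.8.8.8    - the interface that can reach the given IP or domain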

Given that the VMs that host my K8s nodes have TWO network adapters, eth0 and eth1, and that I want Calico Node to use eth0, which carries the private IP, I explicitly set that: -

kubectl set env daemonset/calico-node -n kube-system IP_AUTODETECTION_METHOD=interface=eth0

daemonset.apps/calico-node env updated
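
Note that kubectl set env amends the pod template, so the daemonset rolls out replacement calico-node pods; that rollout can be watched via: -

kubectl rollout status daemonset/calico-node -n kube-system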

and then validated the change: -

kubectl get daemonset/calico-node -n kube-system --output json | jq '.spec.template.spec.containers[].env[] | select(.name | startswith("IP"))'

{
  "name": "IP",
  "value": "autodetect"
}
{
  "name": "IP_AUTODETECTION_METHOD",
  "value": "interface=eth0"
}

More importantly, the Calico Node pods are happy: -

kubectl get pods -A

NAMESPACE     NAME                                             READY   STATUS    RESTARTS   AGE
cdi           cdi-apiserver-6b87945b8d-dww25                   1/1     Running   0          120m
cdi           cdi-deployment-86c6d76d98-7cxlv                  1/1     Running   0          120m
cdi           cdi-operator-5757c84894-xhw6r                    1/1     Running   0          120m
cdi           cdi-uploadproxy-79dd97b4d5-lvd72                 1/1     Running   0          120m
kube-system   calico-kube-controllers-bf965bfd8-hg82b          1/1     Running   0          154m
kube-system   calico-node-8llqm                                1/1     Running   0          75m
kube-system   calico-node-j9rdb                                1/1     Running   0          75m
kube-system   coredns-558bd4d5db-fml9w                         1/1     Running   0          3h21m
kube-system   coredns-558bd4d5db-gg8dm                         1/1     Running   0          3h21m
kube-system   etcd-grouched1.fyre.ibm.com                      1/1     Running   0          3h22m
kube-system   kube-apiserver-grouched1.fyre.ibm.com            1/1     Running   0          3h22m
kube-system   kube-controller-manager-grouched1.fyre.ibm.com   1/1     Running   1          3h22m
kube-system   kube-proxy-47txj                                 1/1     Running   0          3h19m
kube-system   kube-proxy-hg7f8                                 1/1     Running   0          3h21m
kube-system   kube-scheduler-grouched1.fyre.ibm.com            1/1     Running   0          3h22m
kubevirt      virt-api-58999dff54-c8mch                        1/1     Running   0          120m
kubevirt      virt-api-58999dff54-gs8pm                        1/1     Running   0          120m
kubevirt      virt-controller-5c68c56896-l2rp7                 1/1     Running   0          120m
kubevirt      virt-controller-5c68c56896-phrt9                 1/1     Running   0          120m
kubevirt      virt-handler-85dhc                               1/1     Running   0          120m
kubevirt      virt-operator-78f65c88d4-ldtgj                   1/1     Running   0          123m
kubevirt      virt-operator-78f65c88d4-tmxhs                   1/1     Running   0          123m

as is the daemonset: -

kubectl get daemonset -A

NAMESPACE     NAME           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   calico-node    2         2         2       2            2           kubernetes.io/os=linux   154m
kube-system   kube-proxy     2         2         2       2            2           kubernetes.io/os=linux   3h23m
kubevirt      virt-handler   1         1         1       1            1           kubernetes.io/os=linux   120m

and I can now create my DataVolume: -

kubectl apply -f create_volume.yaml

datavolume.cdi.kubevirt.io/registry-image-datavolume created

Thursday 13 May 2021

Grubbing about with Grub on Ubuntu 20.04

Whilst updating my Ubuntu 20.04 virtual machine (actually an LPAR running on an IBM z15 box): -

apt-get update && apt-get --with-new-pkgs upgrade -y 

I saw this: -

...
/usr/sbin/update-grub: not found
...

When I checked for the missing script: -

ls -al /usr/sbin/update-grub

I saw: -

ls: cannot access '/usr/sbin/update-grub': No such file or directory

Following this fine link - update-grub command not found - I created the missing script, quoting the heredoc delimiter so that "$@" is written to the file verbatim rather than being expanded by my current shell: -

cat <<'EOF' | tee /usr/sbin/update-grub
#!/bin/sh
set -e
exec grub-mkconfig -o /boot/grub/grub.cfg "$@"
EOF

and set it to execute: -

chmod +x /usr/sbin/update-grub

and validated: -

ls -al /usr/sbin/update-grub

-rwx------ 1 root root 62 May 13 11:42 /usr/sbin/update-grub

and then ran it manually: -

/usr/sbin/update-grub

Sourcing file `/etc/default/grub'
Sourcing file `/etc/default/grub.d/init-select.cfg'
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-5.4.0-73-generic
Found initrd image: /boot/initrd.img-5.4.0-73-generic
Found linux image: /boot/vmlinuz-5.4.0-72-generic
Found initrd image: /boot/initrd.img-5.4.0-72-generic
Found linux image: /boot/vmlinuz-5.4.0-70-generic
Found initrd image: /boot/initrd.img-5.4.0-70-generic
done

which looked good.

I again ran the update: -

apt-get update && apt-get --with-new-pkgs upgrade -y 

and everything updated as expected.

For reference, on another box, where grub wasn't properly installed, I searched the Apt cache for the relevant packages: -

apt-cache search grub

...
grub-common - GRand Unified Bootloader (common files)
grub-ipxe - Network booting from GRUB using iPXE
grub-legacy-ec2 - Handles update-grub for ec2 instances
...

and then installed grub-common as follows: -

apt-get install -y grub-common
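
As a belt n' braces check, dpkg can confirm which installed package actually owns the script: -

dpkg -S /usr/sbin/update-grub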

Again, all is now good, which is nice.

Tuesday 11 May 2021

Tinkering with containerd on Linux - "cannot delete a non stopped container"

I'm getting to grips with containerd as an alternative container runtime, having spent much of the past 5 years tinkering with Docker etc.

I'm using an IBM Cloud Virtual Server, running Ubuntu 18.04.5 LTS, to do this work.

There are a number of command-line interface (CLI) tools that one can use with containerd, including the one that ships alongside it - ctr.

ctr help
NAME:
   ctr - 
        __
  _____/ /______
 / ___/ __/ ___/
/ /__/ /_/ /
\___/\__/_/
containerd CLI
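
For the record, I'd pulled the image from my IBM Cloud Container Registry namespace beforehand; the pull looks something like this - a sketch, in that the --user flag (with an IAM API key) is only needed for private registries: -

# <api-key> is a placeholder for an IBM Cloud IAM API key
ctr image pull --user iamapikey:<api-key> us.icr.io/dave/ubuntu_x86:latest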

Having pulled an image, I validated it as follows: -

ctr image list

REF                                               TYPE                                                 DIGEST                                                                  SIZE      PLATFORMS   LABELS 

us.icr.io/dave/ubuntu_x86:latest application/vnd.docker.distribution.manifest.v2+json sha256:7a76e38256731356de234124f1d1078134c3e0cf02082b376977d7b0131d4195 289.1 MiB linux/amd64 -     

I created/launched a container: -

ctr run --net-host -d --rm -t us.icr.io/dave/ubuntu_x86:latest k8s_control_plane

using the -d switch to run it in the background (as a daemon).

I could see the resulting process running: -

ps auxw | grep k8s_control_plane | grep -v grep

root      3643  0.0  0.1 111968  7140 ?        Sl   14:37   0:00 /usr/bin/containerd-shim-runc-v2 -namespace default -id k8s_control_plane -address /run/containerd/containerd.sock

Having finished my tinker, I tried to remove the container: -

ctr container del k8s_control_plane

ERRO[0000] failed to delete container "k8s_control_plane"  error="cannot delete a non stopped container: {running 0 0001-01-01 00:00:00 +0000 UTC}"
ctr: cannot delete a non stopped container: {running 0 0001-01-01 00:00:00 +0000 UTC}

Head scratching time .....

So, equating this to Docker, I'd done the equivalent of docker run and then tried to do docker rm without first doing docker stop.

But how do I stop a running container using ctr?

Well, ctr has the concept of tasks: -

ctr task list

TASK                 PID     STATUS    
k8s_control_plane    3670    RUNNING

Therefore, I need to stop that task: -

ctr task kill k8s_control_plane

ctr task list

TASK                 PID     STATUS    
k8s_control_plane    3670    STOPPED

and then delete the container: -

ctr container del k8s_control_plane

and then validate: -

ctr container list

CONTAINER    IMAGE    RUNTIME    

ctr task list

TASK    PID    STATUS    
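
As a crib for future me, my working mapping from Docker to ctr looks roughly like this - my own notes, rather than gospel: -

# docker run ...  ->  ctr run ...          (creates a container plus a running task)
# docker ps       ->  ctr task list        (tasks being the running processes)
# docker stop     ->  ctr task kill        (signals the task; ctr task del if it lingers)
# docker rm       ->  ctr container del    (only once the task is stopped)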

More to follow .....

Monday 10 May 2021

Kubernetes - debugging the API Server - and there's more

Whilst trying to work out why my Kubernetes API Server (kube-apiserver) was crashing n' burning every 8-10 minutes, I found a few more useful things at which to look ...

kubectl get --raw='/readyz?verbose'

[+]ping ok
[+]log ok
[+]etcd ok
[+]informer-sync ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]shutdown ok
readyz check passed
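
The API Server exposes sibling endpoints too; /livez (and the older, now-deprecated /healthz) can be queried in exactly the same way: -

kubectl get --raw='/livez?verbose'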

kubectl describe pod `kubectl get pods --namespace kube-system | grep apiserver | awk '{print $1}'` --namespace kube-system

...
Events:
  Type    Reason   Age    From     Message
  ----    ------   ----   ----     -------
  Normal  Pulled   5m17s  kubelet  Container image "k8s.gcr.io/kube-apiserver:v1.21.0" already present on machine
  Normal  Created  5m17s  kubelet  Created container kube-apiserver
  Normal  Started  5m16s  kubelet  Started container kube-apiserver

PS I managed to resolve the kube-apiserver crash issue ... ask me how?

Friday 7 May 2021

Debugging Kubernetes - some things for me to remember ....

Just making a few notes of things to which I need to refer back on a frequent basis ...

Firstly, debugging kubelet when it's running as a systemd service on Ubuntu ....

journalctl -u kubelet -f

-- Logs begin at Fri 2021-05-07 08:26:28 UTC. --
May 07 08:35:18 49b3d0c825c0 kubelet[8810]: I0507 08:35:18.383575    8810 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-local-net-dir\" (UniqueName: \"kubernetes.io/host-path/f2f7f941-a88d-421d-919f-48d8faedfb4e-host-local-net-dir\") pod \"calico-node-2csfs\" (UID: \"f2f7f941-a88d-421d-919f-48d8faedfb4e\") "
May 07 08:35:18 49b3d0c825c0 kubelet[8810]: I0507 08:35:18.383603    8810 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f2f7f941-a88d-421d-919f-48d8faedfb4e-policysync\") pod \"calico-node-2csfs\" (UID: \"f2f7f941-a88d-421d-919f-48d8faedfb4e\") "
May 07 08:35:18 49b3d0c825c0 kubelet[8810]: I0507 08:35:18.383638    8810 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f2f7f941-a88d-421d-919f-48d8faedfb4e-var-lib-calico\") pod \"calico-node-2csfs\" (UID: \"f2f7f941-a88d-421d-919f-48d8faedfb4e\") "
...
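
When I want a recent slice rather than a live tail, --since does the trick: -

journalctl -u kubelet --since "10 minutes ago" --no-pager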


systemctl status kubelet

● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Fri 2021-05-07 08:33:57 UTC; 3min 34s ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 8810 (kubelet)
    Tasks: 14 (limit: 4915)
   CGroup: /system.slice/kubelet.service
           └─8810 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.y
...

In my specific case, I'm running K8s on a pair of Ubuntu containers (which are running on runq on my IBM Z box), using the containerd runtime.

Previously, I'd been using the Docker runtime, and was trying/failing to get logs for specific containers into text files ....

In that context, I discovered (via Google, of course) that I could grab the container ID: -

id=$(docker ps -a | grep "kube-apiserver " | awk '{print $1}')

logfile=$(docker inspect --format='{{.LogPath}}' $id)

cat $logfile

...
{"log":"I0427 04:21:53.055167       1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\n","stream":"stderr","time":"2021-04-27T04:21:53.055185496Z"}
{"log":"I0427 04:22:35.661984       1 client.go:360] parsed scheme: \"passthrough\"\n","stream":"stderr","time":"2021-04-27T04:22:35.662047076Z"}
{"log":"I0427 04:22:35.662127       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  \u003cnil\u003e 0 \u003cnil\u003e}] \u003cnil\u003e \u003cnil\u003e}\n","stream":"stderr","time":"2021-04-27T04:22:35.662147916Z"}
{"log":"I0427 04:22:35.662164       1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\n","stream":"stderr","time":"2021-04-27T04:22:35.662181568Z"}
{"log":"I0427 04:23:12.096739       1 client.go:360] parsed scheme: \"passthrough\"\n","stream":"stderr","time":"2021-04-27T04:23:12.096825215Z"}
{"log":"I0427 04:23:12.096837       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  \u003cnil\u003e 0 \u003cnil\u003e}] \u003cnil\u003e \u003cnil\u003e}\n","stream":"stderr","time":"2021-04-27T04:23:12.096877914Z"}
...
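
Of course, docker logs would capture that same stream into a text file, which is what I was after in the first place: -

docker logs $id > kube-apiserver.log 2>&1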

I've not yet worked out how to do the same in the world of containerd and the ctr command-line tool but ....

I know the logs are located in /var/log/containers/, so I can simply grab the most recent kube-apiserver logs: -

ls -altrct /var/log/containers/ | grep kube-apiserver | awk '{print $9}'

kube-apiserver-988bcfa294be_kube-system_kube-apiserver-7bb8147ef0be60950cdea7647fd09c404691e3de1d7e40db7d9296b953770cb1.log
kube-apiserver-988bcfa294be_kube-system_kube-apiserver-b66ced00d4c180d11c87df7cd017cb906c0cf6f2d6956f792c82b541d5aab580.log

cat /var/log/containers/kube-apiserver-988bcfa294be_kube-system_kube-apiserver-b66ced00d4c180d11c87df7cd017cb906c0cf6f2d6956f792c82b541d5aab580.log

...
2021-05-07T08:53:40.999115308Z stderr F I0507 08:53:40.998261       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
2021-05-07T08:54:09.450483534Z stderr F I0507 08:54:09.450253       1 client.go:360] parsed scheme: "endpoint"
2021-05-07T08:54:09.450624554Z stderr F I0507 08:54:09.450600       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
2021-05-07T08:54:09.505302623Z stderr F I0507 08:54:09.505253       1 client.go:360] parsed scheme: "endpoint"
2021-05-07T08:54:09.505314868Z stderr F I0507 08:54:09.505280       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
...
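
A note for future me: crictl, which talks to the CRI socket directly, looks like it should give me the Docker-style experience under containerd - something like this, though I've yet to test it, so treat it as a sketch: -

crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps | grep kube-apiserver
# <container-id> is a placeholder for the ID returned by the previous command
crictl logs <container-id>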


Other than that, it's also worth noting that I can simply use good ole Linux commands to inspect kubelet etc., as per this: -

ps -ef | grep /usr/bin/kubelet | grep -v grep

root     11644     1  1 08:33 ?        00:00:24 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cni-conf-dir=/etc/cni/net.d/ --container-runtime=remote --container-runtime-endpoint=/run/containerd/containerd.sock --pod-infra-container-image=k8s.gcr.io/pause:3.4.1

ps -ef | grep kube-apiserver | grep -v grep

root     29843 12033 11 08:53 ?        00:00:28 kube-apiserver --advertise-address=172.16.84.4 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction,AlwaysPullImages,SecurityContextDeny --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-account-signing-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key --profiling=false --audit-log-path=/var/log/apiserver/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --anonymous-auth=false --encryption-provider-config=/etc/kubernetes/pki/secrets.yml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384

etc.

Visual Studio Code - Wow 🙀

Why did I not know that I can merely hit [cmd] [p] to bring up a search box allowing me to search for files across my project, e.g. a repo cloned from GitHub ...