Tuesday, 29 November 2022

TIL - Searching back through zsh history on macOS

A friend just showed me a rather nifty CLI hack on macOS.

With an iTerm session going, hit [ctrl] [r] and search back through the zsh history.

So, as per the screenshot, I hit that sequence and typed in kube - at which point zsh showed me all of my recent kubectl commands, allowing me to keep hitting [ctrl] [r] to cycle back through only those commands: -

[ screenshot: zsh reverse history search in iTerm, filtered to recent kubectl commands ]

which is nice
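
As an aside, if [ctrl] [r] ever appears to do nothing, it's worth confirming that zsh actually has that key bound to incremental history search - on a default zsh setup, bindkey should report something like: -

bindkey '^R'

"^R" history-incremental-search-backward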

Thanks Jordan 😁

YIL - Where does Apple keep its podcasts on macOS ?

I wanted to grab a copy of a bunch of podcasts that I'd downloaded via the Apple Podcasts app ( I usually listen to them on my iPhone, but they're also replicated on my MacBook ).

A quick Google led me here: -

/Users/hayd/Library/Group Containers/243LU875E5.groups.com.apple.podcasts/Library/Cache

which is, I think you'll agree, a memorable file path ... 😁

Also, who doesn't love spaces in paths ? Microsoft Windows and C:\Program Files, I'm looking at you.

Anyway, having added double quotes to the path to protect myself ...

cd "/Users/hayd/Library/Group Containers/243LU875E5.groups.com.apple.podcasts/Library/Cache"

ls -al

total 281064
drwxr-xr-x@  15 hayd  staff       480 28 Nov 15:15 .
drwx------    8 hayd  staff       256 28 Nov 13:58 ..
-rw-------@   1 hayd  staff  14405359 28 Nov 13:57 12039B90-D8F4-4E7C-A72F-B12FD9446AD0.mp3
-rw-------@   1 hayd  staff  14480550 28 Nov 13:57 3A3F9C52-29D5-4078-A8DD-D72709ED8570.mp3
-rw-------@   1 hayd  staff  14177014 28 Nov 13:57 5F2546E2-38B6-4943-91AA-1B1F629F1DEF.mp3
-rw-------@   1 hayd  staff  14692089 28 Nov 14:07 6A29D77B-CAA8-4973-A401-71E766C50FFD.mp3
-rw-------@   1 hayd  staff   1303411 28 Nov 13:57 7F3A1811-C194-4EEA-9952-86BF3C7262CA.mp3
-rw-------@   1 hayd  staff  14174282 28 Nov 14:13 A3C5E109-4E10-4A89-97A1-393C40D7159B.mp3
-rw-------@   1 hayd  staff  14146202 14 Nov 18:13 D031B8D2-D555-4EB5-9256-AA4E7BE625C7.mp3
-rw-------@   1 hayd  staff  14000627 28 Nov 13:57 DDD7F857-DAEE-42DD-9979-C7661E4DDA1E.mp3
-rw-------@   1 hayd  staff  14253164 28 Nov 14:13 EAC34548-0661-449E-8772-26FCBB59AAE2.mp3
-rw-------@   1 hayd  staff  13801565 28 Nov 13:58 EE5E9ECD-B637-46D0-8078-CE191E73304C.mp3
-rw-------@   1 hayd  staff  14449933 28 Nov 14:01 EED252F0-3A26-4490-AA6B-06419DFA4A62.mp3
drwxr-xr-x@ 202 hayd  staff      6464 28 Nov 15:14 IMImageStore-Default
drwxr-xr-x@   3 hayd  staff        96 28 Nov 15:30 JSStoreDataProvider
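
From there, it's just a case of copying the MP3s somewhere rather more memorable - something along these lines ( the destination directory here is purely an example ): -

mkdir -p ~/Podcasts

cp "/Users/hayd/Library/Group Containers/243LU875E5.groups.com.apple.podcasts/Library/Cache/"*.mp3 ~/Podcasts/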

Sorted

PS YIL == Yesterday I Learned ( 'cos it was yesterday, when I learned this 🤣)

Monday, 28 November 2022

IBM Cloud Kubernetes Service - where's my KUBECONFIG ?

As much as anything, this is a reminder of where the KUBECONFIG ( Kubernetes configuration ) gets persisted, by default, when I retrieve the cluster config using the IBM Cloud CLI tool.

So, I have a cluster called davehay-cluster-24112022 in my account, which I spun up last week, targeting version 1.25.4_1522. This cluster has a unique ID of cdvpi5320u9g36cvpjrg.

I can retrieve this cluster configuration using a command such as: -

ibmcloud cs cluster config --cluster davehay-cluster-24112022

or, even: -

ibmcloud cs cluster config --cluster cdvpi5320u9g36cvpjrg --admin --network

( if I want to (a) admin the cluster and (b) get the Calico network configuration )

When I run this command, I get some helpful output reminding me where things get stored: -

OK
The configuration for cdvpi5320u9g36cvpjrg was downloaded successfully.
Network Config:
/Users/hayd/.bluemix/plugins/container-service/clusters/davehay-cluster-24112022-cdvpi5320u9g36cvpjrg-admin/calicoctl.cfg

Added context for cdvpi5320u9g36cvpjrg to the current kubeconfig file.
You can now execute 'kubectl' commands against your cluster. For example, run 'kubectl get nodes'.
If you are accessing the cluster for the first time, 'kubectl' commands might fail for a few seconds while RBAC synchronizes.

The important bit is this path: -

/Users/hayd/.bluemix/plugins/container-service/clusters/davehay-cluster-24112022-cdvpi5320u9g36cvpjrg-admin

If I inspect that subdirectory: -

ls -al

I see a bunch of files: -

total 64
drwxr-x---  10 hayd  staff   320 28 Nov 14:41 .
drwxr-x---   3 hayd  staff    96 28 Nov 14:41 ..
-rw-r--r--   1 hayd  staff  1679 28 Nov 14:41 admin-key.pem
-rw-r--r--   1 hayd  staff  1350 28 Nov 14:41 admin.pem
-rw-r--r--   1 hayd  staff  1188 28 Nov 14:41 ca-aaa00-davehay-cluster-24112022.pem
-rw-r--r--   1 hayd  staff  1188 28 Nov 14:41 ca.pem
-rw-r--r--   1 hayd  staff   230 28 Nov 14:41 calicoctl.cfg
-rw-r--r--   1 hayd  staff   135 28 Nov 14:41 calicoctl.cfg.template
-rw-r--r--   1 hayd  staff   628 28 Nov 14:41 kube-config-aaa00-davehay-cluster-24112022.yml
-rw-r--r--   1 hayd  staff   628 28 Nov 14:41 kube-config.yaml

including kube-config.yaml.
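
If I want a quick peek inside that file - without dumping the embedded certificate data - kubectl will summarise it for me ( run from within that directory ): -

kubectl config view --kubeconfig=kube-config.yaml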

I can then set up my kubectl environment: -

export KUBECONFIG=/Users/hayd/.bluemix/plugins/container-service/clusters/davehay-cluster-24112022-cdvpi5320u9g36cvpjrg-admin/kube-config.yaml

and then run kubectl commands such as: -

kubectl get nodes -A

NAME         STATUS   ROLES    AGE     VERSION
10.240.0.5   Ready    <none>   3d22h   v1.25.4+IKS

As an alternative, I could do this: -

ibmcloud cs cluster config --cluster cdvpi5320u9g36cvpjrg --admin --output YAML > ~/k8s.yaml

export KUBECONFIG=~/k8s.yaml

which writes the cluster configuration out to a single file of my choosing.
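
Either way, before running anything destructive, it's worth double-checking which cluster kubectl is actually pointing at: -

kubectl config current-context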


Container images and Software Bill Of Materials (SBOM)

Today, I'll mainly be reading about, and tinkering with, Software Bill Of Materials (SBOM), in the context of container images.

I'm starting with this: -

Generate the SBOM for Docker images

A Software Bill Of Materials (SBOM) is analogous to a packing list for a shipment. It lists all the components that make up the software, or were used to build it. For container images, this includes the operating system packages that are installed (for example, ca-certificates) along with language-specific packages that the software depends on (for example, Log4j). The SBOM could include a subset of this information or even more details, like the versions of components and their source.

and this: -

How to Use “docker sbom” to Index Your Docker Image’s Packages

Software supply chain security has become topical in the wake of high profile dependency-based attacks. Producing an SBOM for your software artifacts can help you identify weaknesses and trim down the number of packages you rely on.

A new Docker feature integrates support for SBOM generation into the docker CLI. This lets you produce an SBOM alongside your build, then distribute it to consumers of your image.

and am now building the sbom-cli-plugin on my Mac and Ubuntu boxes ....
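
Once the plugin is built and installed, usage should be as simple as pointing docker sbom at an image - for example ( using alpine purely as an illustration ): -

docker sbom alpine:3.16

which should pull the image ( if need be ) and list the packages it finds - the plugin uses Syft under the covers.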


Thursday, 24 November 2022

K8s networking - where's my Flannel ?

Whilst setting up a new "vanilla" Kubernetes (K8s) cluster across two Ubuntu 20.04.5 VMs, I kept hitting a networking - aka Flannel - issue.

Having created the cluster using kubeadm init as per the following: -

export ip_address=$(ifconfig eth0 | grep inet | awk '{print $2}')

kubeadm init --apiserver-advertise-address=$ip_address --pod-network-cidr=172.20.0.0/16 --cri-socket unix:///run/containerd/containerd.sock

and having added Flannel as my Container Network Interface (CNI), as follows: -

curl -sL -o /tmp/kube-flannel.yml https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

kubectl apply -f /tmp/kube-flannel.yml

I was having problems with the kube-flannel pods not starting.
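
A quick way to see why is to pull the logs from the Flannel DaemonSet's pods - assuming, as here, that the manifest deploys into the kube-flannel namespace: -

kubectl -n kube-flannel logs -l app=flannel --tail=20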

Initially, I thought it was because I was specifying the --pod-network-cidr switch which, I'd read, was only required for Calico CNI.

Therefore, I reset the cluster using kubeadm reset and re-ran the init as follows: -

kubeadm init --apiserver-advertise-address=$ip_address --cri-socket unix:///run/containerd/containerd.sock

but, this time around, the Flannel pods failed with: -

pod cidr not assigned

I resorted to Google, and found this: -

pod cidr not assgned #728

in the Flannel repo on GitHub.

One response said: -

The node needs to have a podCidr. Can you check if it does - kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'

When I checked: -

kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'

nothing was returned, which was worrying.

I then read on ...

Did you see this note in the kubeadm docs

    There are pod network implementations where the master also plays a role in allocating a set of network address space for each node. When using flannel as the pod network (described in step 3), specify --pod-network-cidr=10.244.0.0/16. This is not required for any other networks besides Flannel.

So, third time lucky: -

kubeadm init --apiserver-advertise-address=$ip_address --pod-network-cidr=10.244.0.0/16 --cri-socket unix:///run/containerd/containerd.sock

and ... IT WORKED! 
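
Re-running the earlier check should now return a CIDR per node - something like this ( the exact ranges are carved out of the --pod-network-cidr value ): -

kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'

10.244.0.0/24 10.244.1.0/24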

kubectl get nodes

NAME                  STATUS   ROLES           AGE   VERSION
acarids1.foobar.com   Ready    control-plane   16m   v1.25.4
acarids2.foobar.com   Ready    <none>          14m   v1.25.4

kubectl get pods -A

NAMESPACE      NAME                                            READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-p9rxb                           1/1     Running   0          13m
kube-flannel   kube-flannel-ds-ztbj7                           1/1     Running   0          13m
kube-system    coredns-565d847f94-r8fdp                        1/1     Running   0          15m
kube-system    coredns-565d847f94-x4qhk                        1/1     Running   0          15m
kube-system    etcd-acarids1.foobar.com.                       1/1     Running   2          16m
kube-system    kube-apiserver-acarids1.foobar.com.             1/1     Running   2          16m
kube-system    kube-controller-manager-acarids1.foobar.com.    1/1     Running   0          16m
kube-system    kube-proxy-2nzbd                                1/1     Running   0          14m
kube-system    kube-proxy-jcwzr                                1/1     Running   0          15m
kube-system    kube-scheduler-acarids1.foobar.com.             1/1     Running   2          16m

Don't Panic - kubelet won't start but ....

Whilst building a new "vanilla" Kubernetes 1.25.4 cluster, I'd started the kubelet service via: -

systemctl start kubelet.service

and then decided to check how it was doing: -

systemctl status kubelet.service

● kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: activating (auto-restart) (Result: exit-code) since Thu 2022-11-24 01:04:45 PST; 9s ago
       Docs: https://kubernetes.io/docs/home/
    Process: 19526 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS >
   Main PID: 19526 (code=exited, status=1/FAILURE)

which was slightly worrying ....

I checked the system logs ( this is an Ubuntu 20.04.5 LTS box ): -

cat /var/log/syslog

which, in part, reported: -

Nov 24 01:11:04 acarids2 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Nov 24 01:11:04 acarids2 kubelet[20446]: E1124 01:11:04.390575   20446 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml"
Nov 24 01:11:04 acarids2 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 24 01:11:04 acarids2 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 24 01:11:14 acarids2 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 67.
Nov 24 01:11:14 acarids2 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.

specifically this: -

open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml"
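
As an aside, rather than trawling /var/log/syslog, the same story is rather easier to follow in real time via the kubelet's own journal: -

journalctl -u kubelet.service -f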

At that point, common sense prevailed ....

This was very early on in the build process, and I'd NOT yet initialised the K8s API Server ( on the Control Plane node ) and, therefore, NOT yet joined the Compute Node to the yet-to-be-started API Server.

Therefore, until I finished creating the cluster, and joining the Compute Node, what did I expect ?

Once I ran kubeadm init on the Control Plane node, and kubeadm join on the Compute Node, all was well: -

systemctl status kubelet.service

which looks happier: -

● kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: active (running) since Thu 2022-11-24 01:08:29 PST; 8min ago
       Docs: https://kubernetes.io/docs/home/
   Main PID: 21417 (kubelet)
      Tasks: 16 (limit: 9442)
     Memory: 40.7M
     CGroup: /system.slice/kubelet.service
             └─21417 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubele>

Yay!

Tuesday, 22 November 2022

TIL - Docker secrets and BuildKit

Today I was initially struggling to build a container image using Docker BuildKit, via: -

DOCKER_BUILDKIT=1 docker build

and was somewhat confused by a reference to: -

cat /run/secrets/SECRET.TXT

in the Dockerfile, given that I didn't have a file called /run/secrets/SECRET.TXT.

Thankfully, this article came to my rescue: -

Don’t leak your Docker image’s build secrets

where I use a new ( to me ) Docker CLI argument - --secret - to specify the ID of, and path to, the file on my local file-system that contains the secret.
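
In outline, it looks something like this ( the file and secret names here are purely illustrative ): -

DOCKER_BUILDKIT=1 docker build --secret id=SECRET.TXT,src=./secret.txt .

with the Dockerfile ( using the # syntax=docker/dockerfile:1 frontend ) mounting the secret for the duration of a single RUN instruction: -

RUN --mount=type=secret,id=SECRET.TXT cat /run/secrets/SECRET.TXT

the point being that the secret is only available to that one instruction, and never ends up baked into an image layer.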

Easy when you know ?

Reminder - installing podman and skopeo on Ubuntu 22.04

This follows on from: -

Lest I forget - how to install pip on Ubuntu

I had reason to install podman and skopeo on an Ubuntu box: -

lsb_rel...