Friday 30 December 2022

Weirdness with Go

Whilst tinkering ( my fave word ) with a Go project, I was trying to force a complete rebuild of the module via go mod tidy.

Now I started by removing go.mod and go.sum : -

rm go.mod
rm go.sum

And, when I ran go mod tidy I noticed that the command was: -

(a) failing
(b) looking at a whole slew of OTHER Go projects, including those outside of the directory within which I was running the command, namely : -

/root/go/src/github.com/docker/sbom-cli-plugin

Now, of course, note what I did previously i.e. rm go.mod 

THAT WAS MY DOWNFALL :-)

From this: -

Why does 'go mod tidy' record indirect and test dependencies in my 'go.mod'?

<snip>

go mod tidy updates your current go.mod to include the dependencies needed for tests in your module — if a test fails, we must know which dependencies were used in order to reproduce the failure.

</snip>

Guess what I didn't have ?

So, the key was to re-create go.mod via: -

go mod init

and then re-run the make command that had initially populated go.mod - which also ran go mod tidy - and then ... everything was OK.

Phew !


Tuesday 20 December 2022

Giving Twitter the bird ...

After 15 years or so, I decided to jump out of the Twitter-verse, and join Mastodon ...

I'm probably following the herd ( well, a mastodon is kinda like a mammoth, which must've been a herd critter ), but ... here I am

@davehay@infosec.exchange 

https://infosec.exchange/@davehay

Let's see what happens next .....

Friday 2 December 2022

Reminder - how to split strings in a shell script

There's almost certainly 73 different ways to do this, but this worked for me

The problem to be solved ... I have a string containing three container architectures: -

export architecture="amd64 s390x arm64"

and I want to split it into three, each on a newline

This does the needful: -

echo $architecture | tr " " "\n"

rather nicely
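So, for a space-separated string like the one above, the full round trip looks like this:

```shell
# Split a space-separated string onto separate lines.
export architecture="amd64 s390x arm64"

echo $architecture | tr " " "\n"
```

If the string were semicolon-delimited - e.g. "amd64;s390x;arm64" - the very same trick works with tr ";" "\n".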

Tuesday 29 November 2022

TIL - Searching back through zsh history on macOS

A friend just showed me a rather nifty CLI hack on macOS.

With an iTerm session going, hit [ctrl] [r] and search back through zsh history

So, as per the screenshot, I hit that sequence and typed in kube - at which point zsh showed me all of my recent kubectl commands, allowing me to use [ctrl] [r] to toggle back through only those commands: -

which is nice

Thanks Jordan 😁

YIL - Where does Apple keep its podcasts on macOS ?

I wanted to grab a copy of a bunch of podcasts that I'd downloaded via the Apple Podcasts app ( I usually listen to them on my iPhone, but they're also replicated on my MacBook ).

A quick Google led me here: -

/Users/hayd/Library/Group Containers/243LU875E5.groups.com.apple.podcasts/Library/Cache

which is, I think you'll agree, a memorable file path ... 😁

Also, who doesn't love spaces in paths ? Microsoft Windows and C:\Program Files, I'm looking at you.

Anyway, having added double quotes to the path to protect myself ...

cd "/Users/hayd/Library/Group Containers/243LU875E5.groups.com.apple.podcasts/Library/Cache"

ls -al

total 281064
drwxr-xr-x@  15 hayd  staff       480 28 Nov 15:15 .
drwx------    8 hayd  staff       256 28 Nov 13:58 ..
-rw-------@   1 hayd  staff  14405359 28 Nov 13:57 12039B90-D8F4-4E7C-A72F-B12FD9446AD0.mp3
-rw-------@   1 hayd  staff  14480550 28 Nov 13:57 3A3F9C52-29D5-4078-A8DD-D72709ED8570.mp3
-rw-------@   1 hayd  staff  14177014 28 Nov 13:57 5F2546E2-38B6-4943-91AA-1B1F629F1DEF.mp3
-rw-------@   1 hayd  staff  14692089 28 Nov 14:07 6A29D77B-CAA8-4973-A401-71E766C50FFD.mp3
-rw-------@   1 hayd  staff   1303411 28 Nov 13:57 7F3A1811-C194-4EEA-9952-86BF3C7262CA.mp3
-rw-------@   1 hayd  staff  14174282 28 Nov 14:13 A3C5E109-4E10-4A89-97A1-393C40D7159B.mp3
-rw-------@   1 hayd  staff  14146202 14 Nov 18:13 D031B8D2-D555-4EB5-9256-AA4E7BE625C7.mp3
-rw-------@   1 hayd  staff  14000627 28 Nov 13:57 DDD7F857-DAEE-42DD-9979-C7661E4DDA1E.mp3
-rw-------@   1 hayd  staff  14253164 28 Nov 14:13 EAC34548-0661-449E-8772-26FCBB59AAE2.mp3
-rw-------@   1 hayd  staff  13801565 28 Nov 13:58 EE5E9ECD-B637-46D0-8078-CE191E73304C.mp3
-rw-------@   1 hayd  staff  14449933 28 Nov 14:01 EED252F0-3A26-4490-AA6B-06419DFA4A62.mp3
drwxr-xr-x@ 202 hayd  staff      6464 28 Nov 15:14 IMImageStore-Default
drwxr-xr-x@   3 hayd  staff        96 28 Nov 15:30 JSStoreDataProvider

Sorted

PS YIL == Yesterday I Learned ( 'cos it was yesterday, when I learned this 🤣)
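As a quick sketch of why the quotes matter ( the throwaway directory and file names below are my own invention ):

```shell
# A directory with a space in its name - only the quoted path works.
tmp=$(mktemp -d)
mkdir "$tmp/My Podcasts"
touch "$tmp/My Podcasts/episode.mp3"

ls "$tmp/My Podcasts"    # quoted - lists episode.mp3
# ls $tmp/My Podcasts    # unquoted - two arguments, "No such file or directory"
```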

Monday 28 November 2022

IBM Cloud Kubernetes Service - where's my KUBECONFIG ?

As much as anything, this is a reminder of where the KUBECONFIG ( Kubernetes configuration ) gets persisted, by default, when I retrieve the cluster config using the IBM Cloud CLI tool.

So, I have a cluster called davehay-cluster-24112022 in my account, which I spun up last week, targeting version 1.25.4_1522. This cluster has a unique ID of cdvpi5320u9g36cvpjrg.

I can retrieve this cluster configuration using a command such as: -

ibmcloud cs cluster config --cluster davehay-cluster-24112022

or, even: -

ibmcloud cs cluster config --cluster cdvpi5320u9g36cvpjrg --admin --network

( if I want to (a) admin the cluster and (b) get the Calico network configuration )

When I run this command, I get a helpful output reminding me where things get stored: -

OK
The configuration for cdvpi5320u9g36cvpjrg was downloaded successfully.
Network Config:
/Users/hayd/.bluemix/plugins/container-service/clusters/davehay-cluster-24112022-cdvpi5320u9g36cvpjrg-admin/calicoctl.cfg

Added context for cdvpi5320u9g36cvpjrg to the current kubeconfig file.
You can now execute 'kubectl' commands against your cluster. For example, run 'kubectl get nodes'.
If you are accessing the cluster for the first time, 'kubectl' commands might fail for a few seconds while RBAC synchronizes.

Most notably, this is the important bit: -

/Users/hayd/.bluemix/plugins/container-service/clusters/davehay-cluster-24112022-cdvpi5320u9g36cvpjrg-admin

If I inspect that subdirectory: -

ls -al

I see a bunch of files: -

total 64
drwxr-x---  10 hayd  staff   320 28 Nov 14:41 .
drwxr-x---   3 hayd  staff    96 28 Nov 14:41 ..
-rw-r--r--   1 hayd  staff  1679 28 Nov 14:41 admin-key.pem
-rw-r--r--   1 hayd  staff  1350 28 Nov 14:41 admin.pem
-rw-r--r--   1 hayd  staff  1188 28 Nov 14:41 ca-aaa00-davehay-cluster-24112022.pem
-rw-r--r--   1 hayd  staff  1188 28 Nov 14:41 ca.pem
-rw-r--r--   1 hayd  staff   230 28 Nov 14:41 calicoctl.cfg
-rw-r--r--   1 hayd  staff   135 28 Nov 14:41 calicoctl.cfg.template
-rw-r--r--   1 hayd  staff   628 28 Nov 14:41 kube-config-aaa00-davehay-cluster-24112022.yml
-rw-r--r--   1 hayd  staff   628 28 Nov 14:41 kube-config.yaml

including kube-config.yaml.

I can then setup my kubectl environment: -

export KUBECONFIG=/Users/hayd/.bluemix/plugins/container-service/clusters/davehay-cluster-24112022-cdvpi5320u9g36cvpjrg-admin/kube-config.yaml

and then run kubectl commands such as: -

kubectl get nodes -A

NAME         STATUS   ROLES    AGE     VERSION
10.240.0.5   Ready    <none>   3d22h   v1.25.4+IKS

As an alternative, I could do this: -

ibmcloud cs cluster config --cluster cdvpi5320u9g36cvpjrg --admin --output YAML > ~/k8s.yaml

export KUBECONFIG=~/k8s.yaml

which does the same job without the long default path.


Container images and Software Bill Of Materials (SBOM)

Today, I'll mainly be reading about, and tinkering with, Software Bill Of Materials (SBOM), in the context of container images.

I'm starting with this: -

Generate the SBOM for Docker images

A Software Bill Of Materials (SBOM) is analogous to a packing list for a shipment. It lists all the components that make up the software, or were used to build it. For container images, this includes the operating system packages that are installed (for example, ca-certificates) along with language-specific packages that the software depends on (for example, Log4j). The SBOM could include a subset of this information or even more details, like the versions of components and their source.

and this: -

How to Use “docker sbom” to Index Your Docker Image’s Packages

Software supply chain security has become topical in the wake of high profile dependency-based attacks. Producing an SBOM for your software artifacts can help you identify weaknesses and trim down the number of packages you rely on.

A new Docker feature integrates support for SBOM generation into the docker CLI. This lets you produce an SBOM alongside your build, then distribute it to consumers of your image.

and am now building the sbom-cli-plugin on my Mac and Ubuntu boxes ....


Thursday 24 November 2022

K8s networking - where's my Flannel ?

Whilst setting up a new "vanilla" Kubernetes (K8s) cluster across two Ubuntu 20.04.5 VMs, I kept hitting a networking issue.

Having created the cluster using kubeadm init as per the following: -

export ip_address=$(ifconfig eth0 | grep inet | awk '{print $2}')

kubeadm init --apiserver-advertise-address=$ip_address --pod-network-cidr=172.20.0.0/16 --cri-socket unix:///run/containerd/containerd.sock

and having added Flannel as my Container Network Interface (CNI), as follows: -

curl -sL -o /tmp/kube-flannel.yml https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

kubectl apply -f /tmp/kube-flannel.yml

I was having problems with the kube-flannel pods not starting.

Initially, I thought it was because I was specifying the --pod-network-cidr switch which, I'd read, was only required for Calico CNI.

Therefore, I reset the cluster using kubeadm reset and re-ran the init as follows: -

kubeadm init --apiserver-advertise-address=$ip_address --cri-socket unix:///run/containerd/containerd.sock

but, this time around, the Flannel pods failed with: -

pod cidr not assigned

I resorted to Google, and found this: -

pod cidr not assgned #728

in the Flannel repo on GitHub.

One response said: -

The node needs to have a podCidr. Can you check if it does - kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'

When I checked: -

kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'

nothing was returned, which was worrying.

I then read on ...

Did you see this note in the kubeadm docs

    There are pod network implementations where the master also plays a role in allocating a set of network address space for each node. When using flannel as the pod network (described in step 3), specify --pod-network-cidr=10.244.0.0/16. This is not required for any other networks besides Flannel.

So, third time lucky: -

kubeadm init --apiserver-advertise-address=$ip_address --pod-network-cidr=10.244.0.0/16 --cri-socket unix:///run/containerd/containerd.sock

and ... IT WORKED! 

kubectl get nodes

NAME                  STATUS   ROLES           AGE   VERSION
acarids1.foobar.com   Ready    control-plane   16m   v1.25.4
acarids2.foobar.com   Ready    <none>          14m   v1.25.4

kubectl get pods -A

NAMESPACE      NAME                                            READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-p9rxb                           1/1     Running   0          13m
kube-flannel   kube-flannel-ds-ztbj7                           1/1     Running   0          13m
kube-system    coredns-565d847f94-r8fdp                        1/1     Running   0          15m
kube-system    coredns-565d847f94-x4qhk                        1/1     Running   0          15m
kube-system    etcd-acarids1.foobar.com.                       1/1     Running   2          16m
kube-system    kube-apiserver-acarids1.foobar.com.             1/1     Running   2          16m
kube-system    kube-controller-manager-acarids1.foobar.com.    1/1     Running   0          16m
kube-system    kube-proxy-2nzbd                                1/1     Running   0          14m
kube-system    kube-proxy-jcwzr                                1/1     Running   0          15m
kube-system    kube-scheduler-acarids1.foobar.com.             1/1     Running   2          16m

Don't Panic - kubelet won't start but ....

Whilst building a new "vanilla" Kubernetes 1.25.4 cluster, I'd started the kubelet service via: -

systemctl start kubelet.service

and then decided to check how it was doing: -

systemctl status kubelet.service

● kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: activating (auto-restart) (Result: exit-code) since Thu 2022-11-24 01:04:45 PST; 9s ago
       Docs: https://kubernetes.io/docs/home/
    Process: 19526 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS >
   Main PID: 19526 (code=exited, status=1/FAILURE)

which was slightly worrying ....

I checked the system logs ( this is an Ubuntu 20.04.5 LTS box ) : -

cat /var/log/syslog

which, in part, reported: -

Nov 24 01:11:04 acarids2 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Nov 24 01:11:04 acarids2 kubelet[20446]: E1124 01:11:04.390575   20446 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml"
Nov 24 01:11:04 acarids2 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 24 01:11:04 acarids2 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 24 01:11:14 acarids2 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 67.
Nov 24 01:11:14 acarids2 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.

specifically this: -

open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml"

At that point, common sense prevailed ....

This was very early on in the build process, and I'd NOT yet initialised the K8s API Server ( on the Control Plane node ) and, therefore, NOT yet joined the Compute Node to the yet-to-be-started API Server.

Therefore, until I finished creating the cluster, and joining the Compute Node, what did I expect ?

Once I ran kubeadm init on the Control Plane node, and kubeadm join, on the Compute Node, all was well: -

systemctl status kubelet.service

which looks happier: -

● kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: active (running) since Thu 2022-11-24 01:08:29 PST; 8min ago
       Docs: https://kubernetes.io/docs/home/
   Main PID: 21417 (kubelet)
      Tasks: 16 (limit: 9442)
     Memory: 40.7M
     CGroup: /system.slice/kubelet.service
             └─21417 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubele>

Yay!

Tuesday 22 November 2022

TIL - Docker secrets and BuildKit

Today I was initially struggling to build a container image using Docker BuildKit, via : -

DOCKER_BUILDKIT=1 docker build

and was somewhat confused by a reference to: -

cat /run/secrets/SECRET.TXT

in the Dockerfile, given that I didn't have a file called /run/secrets/SECRET.TXT.

Thankfully, this article came to my rescue: -

Don’t leak your Docker image’s build secrets

where I use a new ( to me ) Docker CLI argument - --secret - to specify the ID of, and path to, the file on my local file-system that contains the secret.

Easy when you know ?
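Pulling the pieces together, here's a minimal sketch of the pattern - the alpine base image and the file layout are my choices, not from the article, while SECRET.TXT follows the naming above:

```shell
# Stage a throwaway build context containing a secret file.
workdir=$(mktemp -d)
cd "$workdir"
echo "s3kr3t" > secret.txt

# The RUN step mounts the secret at /run/secrets/SECRET.TXT for that step
# only - it is never baked into an image layer.
cat > Dockerfile <<'EOF'
# syntax=docker/dockerfile:1
FROM alpine
RUN --mount=type=secret,id=SECRET.TXT cat /run/secrets/SECRET.TXT
EOF

echo "Now run: DOCKER_BUILDKIT=1 docker build --secret id=SECRET.TXT,src=secret.txt ."
```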

Thursday 6 October 2022

Getting the architecture right

I'm building container images using Docker and Podman, and wanted to provide the consuming engineer with a way to specify the architecture - e.g. amd64 or s390x - of a dependent download from within the Dockerfile itself.

The Dockerfile was hard-coded to pull down the amd64 version; which doesn't work too well on my IBM Z s390x boxes.

Therefore, I set an argument within the Dockerfile, via the ARG parameter, which I left defaulted to amd64 but which allowed the engineer to then over-ride that argument when they run the docker build or podman build command.

This over-ride works via the --build-arg option of the build command.

However, I wanted to be even more lazy and allow the OS upon which the command was being run to set the architecture.

I started with the following: -

docker build --build-arg ARCH=$(uname -p) -f Dockerfile .

which worked on my IBM Z box ( running Ubuntu 18.04.06 ).

However, on my x86-64 box, running Ubuntu 20.04.5, this failed because, it transpires, uname -p returns x86_64 rather than amd64.

However, StackExchange came to my rescue : -

On a Debian-based system, the bullet-proof way of determining the architecture, as appropriate for use in a package’s file name, is
dpkg --print-architecture
Note that architecture-independent packages use 'all' there, and you'd have to know that in advance.

So I changed my command to: -

docker build --build-arg ARCH=$(dpkg --print-architecture) -f Dockerfile .

which did the trick.

For the record, docker build and podman build are, in this context, the same
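As an aside, when dpkg isn't available ( e.g. on non-Debian boxes ), a small shell mapping from uname -m to Docker's architecture names does a similar job - the function name and mapping table below are my own sketch:

```shell
# Map "uname -m" machine names onto Docker/Go architecture names.
map_arch() {
  case "$1" in
    x86_64)  echo amd64 ;;
    aarch64) echo arm64 ;;
    *)       echo "$1" ;;    # s390x, ppc64le etc. already match
  esac
}

map_arch "$(uname -m)"
```

which slots straight into the build command: docker build --build-arg ARCH=$(map_arch "$(uname -m)") -f Dockerfile .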

Friday 23 September 2022

More fun with pip

Again with the Python and pip fun, this time on my Mac, where commands such as: 

pip3 list

and: -

pip3 install --upgrade pip

were failing with: -

WARNING: There was an error checking the latest version of pip.

Long story short, the error was actually more obvious than I'd realised ....

In essence, it was actually telling me what was going wrong ...

Looking in indexes:.......

where the location was an internal repository for **ONE** single Python module, which would never have the things I was trying to install e.g. pip itself

I updated the config: -

vi ~/.pip/pip.conf

and commented out the aberrant index ... 

And now we're off to the races ...

Installing pylint on Linux - there's more than one way ...

I'm running some Travis builds of a project which includes Python modules, and wanted to manually run the pylint linting tool across my GitHub repo, to compare/contrast what Travis was telling me.

I'm running Ubuntu 20.04.5 LTS so just went ahead and installed via the APT package manager: -

apt-get update && apt-get install -y pylint

which seemed fine.

Then I noticed that the results differed - when I checked Travis, I was using pylint 2.15.2 whereas, on Ubuntu, I was way behind the curve: -

pylint --version

pylint 2.4.4
astroid 2.3.3
Python 3.8.10 (default, Jun 22 2022, 20:18:18)
[GCC 9.4.0]

My mistake ? Using APT ...

I uninstalled pylint 

apt-get remove -y pylint

and installed it using the Package Installer for Python, aka pip

I did notice that the installation location was different: -

- apt-get install ->>> /usr/bin/pylint

- pip  ->>> /usr/local/bin/pylint

so had to restart my shell to pick up the new version: -

pylint --version

pylint 2.15.3
astroid 2.12.10
Python 3.8.10 (default, Jun 22 2022, 20:18:18)
[GCC 9.4.0]

which is better ( part of my planned change will be to have the Travis job use 2.15.3 as well )
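Incidentally, restarting the shell isn't the only option - bash caches command locations, and hash -r flushes that cache ( zsh has rehash ). A sketch:

```shell
# Flush the shell's cached command lookups so a freshly installed binary
# earlier in $PATH ( e.g. /usr/local/bin/pylint ) is picked up immediately.
hash -r
command -v pylint || echo "pylint not on PATH here"
```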

Right, onwards for some linting .....


Saturday 10 September 2022

Been a while - back with some wget fun

Whilst digging into some Kubernetes testing, validating a fix that I'd made to my cluster, I was deploying both Nginx and Busybox pods.

The Nginx pods were deployed across all my Compute Nodes, to provide a web server tier, and I was using Busybox, again across all the Compute Nodes, to validate that Nginx connectivity was clean and green.

Having obtained the individual IPs of each of the Nginx pods - I'm using Calico Node as my Container Network Interface (CNI) layer - I wanted to hit each Nginx pod from each of the Busybox pods, in turn.

Now, ordinarily, I'd use a command such as: -

curl http://192.168.1.24:80/index.html

to retrieve the sample HTML page that Nginx presents.

However, Busybox is a very cut-down Linux-like environment and, therefore, it doesn't include the curl command.

Thankfully, Busybox does include the wget command, so I was able to use that in a similar manner: -

wget -q --output-document - http://192.168.1.24:80/index.html

which did the trick i.e. dumped the content of index.html to the console ( stdout ).

The key parameters are as follows: -

-q == run wget in quiet mode, only returning the required web page, rather than the normal debug

--output-document == tells wget what and where to return the output

- == tells --output-document to write to stdout rather than a specific file/document

Nice !

Friday 22 July 2022

Grokking grep

A colleague was tinkering with grep and, thanks to him, I discovered a bit more about the trusty little utility.

I had not really explored the -e switch: -

     -e pattern, --regexp=pattern

             Specify a pattern used during the search of the input: an input line is selected if it matches any of the specified patterns.  This option is

             most useful when multiple -e options are used to specify multiple patterns, or when a pattern begins with a dash (‘-’).

but he pointed out that that switch can have problems when parsing, say, a list of strings that are similar but different.

As an example, I have a set of files: -

-rw-r--r--   1 hayd  wheel    0 22 Jul 17:34 dave
-rw-r--r--   1 hayd  wheel    0 22 Jul 17:34 dave_1
-rw-r--r--   1 hayd  wheel    0 22 Jul 17:36 dave_2
-rw-r--r--   1 hayd  wheel    0 22 Jul 17:36 dave_3

If I want to e.g. query a directory for all files containing the word 'dave' I could do this: -

ls -al | grep dave

or, to be more specific: -

ls -al | grep -e dave

both of which return: -

-rw-r--r--   1 hayd  wheel    0 22 Jul 17:34 dave
-rw-r--r--   1 hayd  wheel    0 22 Jul 17:34 dave_1
-rw-r--r--   1 hayd  wheel    0 22 Jul 17:36 dave_2
-rw-r--r--   1 hayd  wheel    0 22 Jul 17:36 dave_3

However, if I only want to return the file that's named 'dave' and ignore the rest, I'm somewhat stymied.

However, grep -w comes to the rescue: -

     -w, --word-regexp
             The expression is searched for as a word (as if surrounded by ‘[[:<:]]’ and ‘[[:>:]]’; see re_format(7)).  This option has no effect if -x is
             also specified.


so I can run: -

ls -al | grep -w dave

and just get this: -

-rw-r--r--   1 hayd  wheel    0 22 Jul 17:34 dave
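This is easy to reproduce with a scratch directory ( same file names as above ):

```shell
# Recreate the four files, then compare plain grep with grep -w.
tmp=$(mktemp -d)
cd "$tmp"
touch dave dave_1 dave_2 dave_3

ls | grep dave       # substring match: all four files
ls | grep -w dave    # word match: just "dave" ( "_" counts as a word character,
                     # so dave_1 etc. are excluded )
```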

Isn't that nifty ?

Wednesday 20 July 2022

To start, press any key .... hey, where's the [Any] key ?

So I was writing some scripts for a demo that I delivered earlier today...

One script will run on my Mac, using the Zsh shell, the other will run on an Ubuntu box, using Bash.

In both cases, I wanted to pause the script and wait for user input before proceeding.

On Ubuntu, I did this: -

read -p "Press [Enter] to continue"

Sadly, however, on macOS 12.4 via zsh, that didn't work ...

read: -p: no coprocess

Thankfully, there's a better / different way: -

read "?Press [Enter] to continue"

which is nice
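To keep a single script working under both shells, I could branch on $ZSH_VERSION - the pause function name below is my own:

```shell
# Prompt-and-wait that works in both bash and zsh.
pause() {
  if [ -n "${ZSH_VERSION:-}" ]; then
    read "?Press [Enter] to continue"     # zsh: prompt embedded after ?
  else
    read -p "Press [Enter] to continue"   # bash: -p prompt flag
  fi
}

echo "" | pause    # feed a newline so this sketch doesn't block
```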

Today I Learned - Shorter redirection, what's not to like ?

So I write a lot of scripts for Bash and Zsh, and often make use of command substitution where, for example, I generate a variable by running another command within parentheses.

As a specific example, I'd query my Floating IP and persist it to a text file for future use: -

ic is floating-ips --output JSON | jq -r '.[].address' > ~/floating_ip.txt

Later on, I'd look to use that Floating IP for, for example, a SSH connection, as follows: -

export floating_ip=$(cat floating_ip.txt)

ssh root@$floating_ip

Authorized uses only. All activity may be monitored and reported.

root@163.61.95.34's password:

However, one of my colleagues did point out that I could reduce the number of characters typed when setting the floating_ip variable: -

export floating_ip=$(< floating_ip.txt)

See, that saves a whole two characters ....

Mind you, I can also reduce things further via: -

ssh root@$(cat floating_ip.txt)
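A side-by-side sketch, using a scratch file and an RFC 5737 documentation address rather than a real IP:

```shell
# Compare $(cat file) with the bash/zsh-only $(< file) shortcut.
ip_file=$(mktemp)
printf '%s' "192.0.2.1" > "$ip_file"

with_cat=$(cat "$ip_file")
with_redir=$(< "$ip_file")    # same result, without forking a cat process

[ "$with_cat" = "$with_redir" ] && echo "identical: $with_redir"
```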

I love shelling !

Friday 29 April 2022

TIL - read-only variables in Linux

A co-worker was seeing an exception: -

 line 8: TMOUT: readonly variable

when trying to SCP a file from a remote Linux box.

I did some digging and found a RedHat article: -

Why does it prompt "bash: TMOUT: readonly variable" when sudo'ing or ssh'ing to the system? 

that said, in part: -

The TMOUT variable is usually defined read-only to avoid users from unsetting or modifying its value. Due to this it's not possible to set it twice.

I reproduced the problem on my Ubuntu 18.04 box: -

-bash: TMOUT: readonly variable

with a pair of scripts: -

The first script: -

cat /etc/profile.d/a.sh

readonly TMOUT=500; export TMOUT

sets TMOUT as readonly

The second script: -

cat /etc/profile.d/b.sh

TMOUT=600; export TMOUT

then tries to override it

which I validated: -

fgrep -R TMOUT /etc/profile.d/

/etc/profile.d/a.sh:readonly TMOUT=500; export TMOUT
/etc/profile.d/b.sh:TMOUT=600; export TMOUT

I left my colleague to dig into /etc etc. and see what was going on, but TIL about read-only variables 
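The behaviour is easy to reproduce in a subshell, so the failed assignment doesn't pollute the current session:

```shell
# Attempting to reassign a readonly variable fails with "readonly variable";
# the parentheses keep both the readonly marking and the error in a subshell.
err=$( ( readonly TMOUT=500; TMOUT=600 ) 2>&1 )
echo "$err"
```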

Having fun and games with Kubernetes networking

I'd forgotten how much I simply enjoy the opportunities for hacking - in the original naive sense of the word - that Kubernetes (K8s) offers.

Today I've been working to find out why CoreDNS didn't work in my cluster - clue, it was containerd that did it

However, I then started seeing: -

failed to allocate for range 0: no IP addresses available in range set: 10.48.131.1-10.48.131.62

from my CoreDNS pods, having "fixed" containerd.

Thankfully, Google Cloud have a doc for that: -

Pods display failed to allocate for range 0: no IP addresses available in range set error message

In part, it required me to stop containerd and kubelet, clear out the previously defined IP address range, and recreate the directory: -

rm -rf /var/lib/cni/networks

mkdir /var/lib/cni/networks

Once I did this, and restarted containerd and kubelet, we're back in the game !

Wednesday 27 April 2022

Munging JSON with JQ - without using grep and awk

Further to a previous post: -

Grep AND awk

I wanted to achieve much the same, but only using jq

So here we go: -

ic is images --output JSON | jq -r '.[] | {Name:.name,Architecture:.operating_system.architecture,Status:.status} | select((.Name | contains("ubuntu")) and (.Architecture | startswith("s390x")) and (.Status | startswith("available")))'

which returns: -

{
  "Name": "ibm-ubuntu-18-04-1-minimal-s390x-3",
  "Architecture": "s390x",
  "Status": "available"
}

as opposed to the alternative: -

ic is images | awk '/s390x/ && /ubuntu/ && /available/'

r018-e3d94080-972f-4f18-8a79-60a12d0b61c2   ibm-ubuntu-18-04-1-minimal-s390x-3                 available    s390x   ubuntu-18-04-s390x                   18.04 LTS Bionic Beaver Minimal Install   2               public       provider     none         -   
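The same select / contains pattern can be tried against a hand-built JSON sample - two fake image records of my own, guarded in case jq isn't installed:

```shell
# Only the first record should survive the filter.
json='[{"name":"ibm-ubuntu-18-04-1-minimal-s390x-3","operating_system":{"architecture":"s390x"},"status":"available"},
       {"name":"ibm-centos-7-amd64","operating_system":{"architecture":"amd64"},"status":"deprecated"}]'

if command -v jq >/dev/null 2>&1; then
  echo "$json" | jq -r '.[]
    | select((.name | contains("ubuntu"))
         and (.operating_system.architecture | startswith("s390x"))
         and (.status | startswith("available")))
    | .name'
else
  echo "jq not installed"
fi
```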

Tuesday 26 April 2022

Git and Ubuntu - not branching out

Whilst trying to make some changes to a GitHub project, using an Ubuntu box, I hit an interesting issue with git switch - namely, that it doesn't work.

So I'm running Ubuntu 18.04.6 LTS: -

lsb_release -a

No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.6 LTS
Release: 18.04
Codename: bionic

and had created a new branch: -

git branch

  test_doc_update
* develop

and then tried to switch to it: -

git switch test_doc_update

but instead got: -

git: 'switch' is not a git command. See 'git --help'.

I checked the version of Git: -

git --version

git version 2.17.1

and even tried upgrading it: -

apt-get update && apt-get upgrade -y git

...

The following packages will be upgraded:
  git git-man
2 upgraded, 0 newly installed, 0 to remove and 4 not upgraded.

...

but to no avail: -

git --version

git version 2.17.1

Instead, as per this - Git: 'switch' is not a git command. See 'git --help' - I used git checkout instead ( git switch only arrived in Git 2.23, so 2.17.1 is simply too old for it ): -

git checkout test_doc_update

Switched to branch 'test_doc_update'

git branch

* test_doc_update
  develop

Sorted !

For the record, I have a more up-to-date version of git on the Mac, via Homebrew

ls -al `which git`

lrwxr-xr-x  1 hayd  admin  28 19 Apr 08:59 /usr/local/bin/git -> ../Cellar/git/2.36.0/bin/git

git --version

git version 2.36.0

with which git switch DOES work
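For completeness, the pre-2.23 flow in a throwaway repo - guarded on git being present, with inline identity config so the commit works anywhere:

```shell
# Create a repo and a branch, then switch with checkout - works on any Git version.
if command -v git >/dev/null 2>&1; then
  repo=$(mktemp -d)
  cd "$repo"
  git init -q .
  git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "init"
  git branch test_doc_update
  git checkout -q test_doc_update
  git rev-parse --abbrev-ref HEAD    # prints: test_doc_update
else
  echo "git not available"
fi
```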

Monday 18 April 2022

Tinkering with arrays in ZSH

Someone had asked: -

if you have a command that returns two values, can you assign each value to a separate variable? For example, I have a command that returns two lines, and I want NAME to be set to the first line and TITLE to the second line. I seem to recall doing this in the past, but I can’t find an example or a note on it.

which made me think about how one might achieve this

I'm using zsh, and looked at dumping the output from a command into an array

I started with the sw_vers command: -

ProductName: macOS
ProductVersion: 12.3.1
BuildVersion: 21E258

and found a way to dump the multiline output into an array: -

array_of_lines=("${(@f)$(sw_vers)}")                                 

and checked each of the elements in the array: -

echo $array_of_lines[1]                                                    

ProductName: macOS

echo $array_of_lines[2]

ProductVersion: 12.3.1

echo $array_of_lines[3]

BuildVersion: 21E258

I also used ${#array_of_lines} to get the size of the array: -

echo ${#array_of_lines}

3

to iterate through the array and dump out each element in turn: -

for ((i = 1; i<=${#array_of_lines}; i++)); do echo $array_of_lines[i]; done

ProductName: macOS
ProductVersion: 12.3.1
BuildVersion: 21E258

Whether this helps, we shall see ....
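For the record, bash ( 4+ ) can do much the same with mapfile / readarray - here with the sw_vers output hard-coded so the sketch runs anywhere:

```shell
# Split multi-line output into a bash array, one element per line.
output=$(printf 'ProductName: macOS\nProductVersion: 12.3.1\nBuildVersion: 21E258')
mapfile -t array_of_lines <<< "$output"

echo "${array_of_lines[1]}"     # second element - bash arrays are 0-based, zsh's are 1-based
echo "${#array_of_lines[@]}"    # size of the array
```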

Thursday 7 April 2022

AirPlay and  TV and macOS Monterey

For those using AirPlay to mirror to an external screen e.g. an Apple TV, you can toggle whether the Screen Mirroring tool shows up in the Menu Bar - or not - via System Preferences -> Dock & Menu Bar -> Screen Mirroring

This is especially useful for me now that we're back in the office in meeting rooms equipped with Apple TV

Wednesday 30 March 2022

iTerm2 and keyboard navigation - Today I Learned

I've been using iTerm2 off and on for a few years now, and find it especially useful when running live demos - one nice feature is that I can present multiple terminal windows in the same screen - windowed windows on a Mac 🤣

Right now, I'm using version 3.4.15 of iTerm, but that's not necessarily important right now.

The thing that was making me go "Hmmmm" was the "feature" that meant that I struggled to jump back along a line of commands using the [option] [arrow] keys ...

My muscle memory is that I can use that key combination to jump back and forth within a long list of commands on a single line to, for example, allow me to edit the command, add a switch etc.

By default, in iTerm, hitting [option] and [left-arrow] would, instead of jumping one word to the left ( as in the default macOS Terminal.app ), append [D - which is a pain

So, here's an example: -

echo `date` && echo "Hello World" && echo "Dave"

I want to jump back to the second echo command, and change the greeting to "Hello World!".

Obviously, that's a very very trivial example, but the point remains the same ...

As ever, the internet helped - my Google Fu found this: -

Using Alt/Cmd + Right/Left Arrow in iTerm

which had a bunch of suggested solutions, the simplest of which was to append: -

bindkey "\e\e[D" backward-word
bindkey "\e\e[C" forward-word

to ~/.zshrc

Once I did this and restarted my shell ( I'm using Zsh obviously ), I was off to the races ....

For the record, I could've simply sourced in the updated ~/.zshrc via source ~/.zshrc


Grep AND awk

The context for this is that I'm working with the IBM Cloud CLI tool, specifically the Infrastructure Services plugin, to find a specific subset of images from which I can instantiate a Virtual Server Instance (VSI).

Given this, I wanted to list the images in a specific region and filter by those that: -

- Run on the IBM Z Linux platform, aka s390x **AND**

- Are based upon Ubuntu **AND**

- Are available rather than deprecated

I started with this: -

ic is images

( where ic is an alias for the ibmcloud plugin )

and then filtered using a set of nested grep calls: -

ic is images | grep s390x | grep ubuntu | grep available

r134-0568bd44-e2b7-4876-bcbf-b74b244d3b66   ibm-ubuntu-18-04-1-minimal-s390x-2                 available    s390x   ubuntu-18-04-s390x                   18.04 LTS Bionic Beaver Minimal Install                      2               public       provider     none         -

r134-77d54c6d-5071-4729-ac79-a26c7404866a   ibm-ubuntu-18-04-6-minimal-s390x-3                 available    s390x   ubuntu-18-04-s390x                   18.04 LTS Bionic Beaver Minimal Install                      2               public       provider     none         -

r134-622ffa9c-47e1-450f-9a02-4f0567b7139f   ibm-ubuntu-20-04-2-minimal-s390x-2                 available    s390x   ubuntu-20-04-s390x                   20.04 LTS Focal Fossa Minimal Install                        2               public       provider     none         -

r134-323e4b0e-118b-4199-8d0b-745c05a75194   ibm-ubuntu-20-04-2-minimal-s390x-enclaved-2        available    s390x   ubuntu-20-04-s390x-enclaved          20.04 LTS Focal Fossa Minimal Install for Secure Execution   2               public       provider     none         -

but that's not very elegant

I knew how to use the OR operator in grep e.g.

ic is images | grep "s390x\|ubuntu"

but that's not what I want; I want AND rather than OR

This helped: -

How to run grep with multiple AND patterns?

In other words, use awk rather than grep, as per this: -

ic is images | awk '/s390x/ && /ubuntu/ && /available/'

r134-0568bd44-e2b7-4876-bcbf-b74b244d3b66   ibm-ubuntu-18-04-1-minimal-s390x-2                 available    s390x   ubuntu-18-04-s390x                   18.04 LTS Bionic Beaver Minimal Install                      2               public       provider     none         -

r134-77d54c6d-5071-4729-ac79-a26c7404866a   ibm-ubuntu-18-04-6-minimal-s390x-3                 available    s390x   ubuntu-18-04-s390x                   18.04 LTS Bionic Beaver Minimal Install                      2               public       provider     none         -

r134-622ffa9c-47e1-450f-9a02-4f0567b7139f   ibm-ubuntu-20-04-2-minimal-s390x-2                 available    s390x   ubuntu-20-04-s390x                   20.04 LTS Focal Fossa Minimal Install                        2               public       provider     none         -

r134-323e4b0e-118b-4199-8d0b-745c05a75194   ibm-ubuntu-20-04-2-minimal-s390x-enclaved-2        available    s390x   ubuntu-20-04-s390x-enclaved          20.04 LTS Focal Fossa Minimal Install for Secure Execution   2               public       provider     none         -

which is much better
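To show why the awk version does the needful, here's a self-contained demo using some made-up image rows ( the names are invented, not real IBM Cloud images ) - the && means a line must match all three patterns to survive: -

```shell
# awk applies each regex to the whole line; && requires every one to match
printf '%s\n' \
  'img-1 ibm-ubuntu-20-04 available s390x' \
  'img-2 ibm-ubuntu-20-04 available amd64' \
  'img-3 ibm-debian-11 available s390x' \
  'img-4 ibm-ubuntu-18-04 deprecated s390x' \
  | awk '/s390x/ && /ubuntu/ && /available/'
# -> img-1 ibm-ubuntu-20-04 available s390x
```

Only the first row matches all three patterns, exactly as the nested greps would have done.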

Friday 25 March 2022

Fun with Pip and Python on macOS 12.3 Monterey

I'm tinkering with a tool that uses pip and python, and was seeing: -

zsh: /usr/local/bin/pip: bad interpreter: /usr/bin/python: no such file or directory

repeatedly e.g.

pip --version                                                                                                                                                                           

zsh: /usr/local/bin/pip: bad interpreter: /usr/bin/python: no such file or directory

pip 22.0.4 from /Users/hayd/.pyenv/versions/3.10.0/lib/python3.10/site-packages/pip (python 3.10)

I've got Python 3.10 installed via Homebrew, using PyEnv: -

which python                                                                                                                                                                            

/Users/hayd/.pyenv/shims/python

python --version                                                                                                                                                                        

Python 3.10.0

as per an earlier post: -


I checked using brew doctor but nothing jumped off the page at me.

head -n1 /usr/local/bin/pip

#!/usr/bin/python

ls -al /usr/bin/python

ls: /usr/bin/python: No such file or directory

I then moved the stale pip out of the way: -

mv /usr/local/bin/pip /usr/local/bin/pip.old

and tested: -

which pip

/Users/hayd/.pyenv/shims/pip

and now we're happy: -

pip --version

pip 22.0.4 from /Users/hayd/.pyenv/versions/3.10.0/lib/python3.10/site-packages/pip (python 3.10)

So, in other words, in the pre-PyEnv days, I had pip pointing at the older Apple-supplied Python 2.7, which has long since been deprecated/removed.

Now, post the mv command, it's pointing at pip via PyEnv, which is nice.
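The moral - a stale shebang - is easy to check for; here's a wee helper ( check_shebang is my own invention, not part of pip or PyEnv ) that reports whether a script's interpreter actually exists: -

```shell
# Report whether the interpreter named in a script's shebang line exists
check_shebang() {
  # Take line 1, strip the leading "#!", and drop any arguments after a space
  interp=$(head -n1 "$1" | sed -e 's/^#!//' -e 's/ .*//')
  if [ -x "$interp" ]; then
    echo "OK: $interp"
  else
    echo "MISSING: $interp"
  fi
}
```

Running it against the old /usr/local/bin/pip would have reported MISSING: /usr/bin/python.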

Thanks, as ever, to StackOverflow: -


Wednesday 23 March 2022

Running Podman on Ubuntu 18.04

 I'm leaving this to remind my future self ...

echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_18.04/ /" | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list

curl -L "https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_18.04/Release.key" | sudo apt-key add -

sudo apt-get update

sudo apt-get -y upgrade

sudo apt-get install -y podman

podman run docker.io/docker/whalesay:latest cowsay Hello World!

when I want to install and test Podman on Ubuntu 18.04

Remembering that we'll end up with an older version of Podman ...

podman version

Version:      3.0.1
API Version:  3.0.0
Go Version:   go1.15.2
Built:        Thu Jan  1 00:00:00 1970
OS/Arch:      linux/amd64


Monday 21 March 2022

Bash and history - where's it gone ?

I logged into an Ubuntu 18.04.6 LTS box and noticed that my Bash history was completely and utterly gone.

Having typed the command: -

history

I got absolutely nothing back.

Knowing that the command really just outputs the content of the ~/.bash_history file, I checked that out: -

ls -al ~/.bash_history

-rw------- 1 root root 9007 Mar 20 13:39 /home/hayd/.bash_history

Notice that my user name is hayd ....

Also notice the user/group ownership of the file .... root ....
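As an aside, that ownership check can be scripted; here's a small sketch ( file_owner is mine, not a built-in ) that copes with both GNU stat ( Linux ) and BSD stat ( macOS ), since the two use different flags: -

```shell
# Print a file's owner; try the GNU stat flag first, fall back to BSD stat
file_owner() {
  stat -c '%U' "$1" 2>/dev/null || stat -f '%Su' "$1"
}
```

So file_owner ~/.bash_history would have printed root prior to the fix.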

I changed the ownership of the file: -

sudo chown hayd:hayd ~/.bash_history

[sudo] password for hayd: 

and logged out and back in again ...

And now I have my history back ...

...
  348  ~/upgrade.sh 
  349  which podman
  350  apt-get install -y podman
  351  ~/upgrade.sh 
  352  ~/createHelloWorld.sh 
  353  history 
  354  lsb_release -a
...

Also, as mentioned elsewhere, I use an alias - hist - to output history without line numbers: -

alias hist='history | cut -c 8-'

...
~/upgrade.sh 
which podman
apt-get install -y podman
~/upgrade.sh 
~/createHelloWorld.sh 
history 
lsb_release -a
...

Wednesday 16 March 2022

ZSH and history - going back in time

OK, so this didn't take long to "fix" ...

As an ex-Bash user, I've had an alias setup on all my Unix boxen to allow me to list out my shell's history, without line numbers.

So, therefore, rather than typing: -

history

and seeing line numbers, such as these: -

  989  podman version
  990  cd
  991  hostname
  992  ls -al ~/.ssh
  993  ls -al ~/.ssh
  994  date
  995  cat ~/.ssh/readme.txt

I have an alias setup: -

hist='history | cut -c 8-'

which returns much the same but without line numbers e.g.

hostname
ls -al ~/.ssh
ls -al ~/.ssh
date
cat ~/.ssh/readme.txt

This alias was setup in ~/.bash_profile and is now set up in ~/.zshenv.

However, I'd noticed that hist would only ever return the last 16 commands ...

This was easily solved: -


...
History accepts a range in zsh entries as [first] [last] arguments, so to get them all run history 0.
...

Therefore, I just needed to update my alias: -

alias hist='history 0 | cut -c 8-'

and now I see everything.....
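For clarity, here's what the cut is actually doing - zsh pads the history number into the first seven columns, so cut -c 8- keeps everything from column eight onwards: -

```shell
# Columns 1-7 hold the right-aligned history number plus padding; drop them
printf '%s\n' '  989  podman version' '  990  cd' | cut -c 8-
# -> podman version
# -> cd
```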

Podman says "No"

Whilst tinkering with podman today, I saw this: -

podman images

Cannot connect to Podman. Please verify your connection to the Linux system using `podman system connection list`, or try `podman machine init` and `podman machine start` to manage a new Linux VM

Error: unable to connect to Podman socket: server API version is too old. Client "4.0.0" server "3.4.4"

I tried: -

podman system connection list

which returned: -

Name                         URI                                                          Identity                                 Default

podman-machine-default       ssh://core@localhost:58173/run/user/1000/podman/podman.sock  /Users/hayd/.ssh/podman-machine-default  true

podman-machine-default-root  ssh://root@localhost:58173/run/podman/podman.sock            /Users/hayd/.ssh/podman-machine-default  false

which wasn't terribly useful, and then checked the machine: -

podman machine list

NAME                     VM TYPE     CREATED      LAST UP            CPUS        MEMORY      DISK SIZE

podman-machine-default*  qemu        2 weeks ago  Currently running  1           2.147GB     10.74GB

Having checked versions: -

podman --version

podman version 4.0.2

I tried restarting the machine: -

podman machine stop

Machine "podman-machine-default" stopped successfully

podman machine start

Starting machine "podman-machine-default"

INFO[0000] waiting for clients...                       

INFO[0000] new connection from  to /var/folders/b5/8vqr9tt54v94jxzs0_k2qq2m0000gn/T/podman/qemu_podman-machine-default.sock 

Waiting for VM ...

INFO[0012] Socket forward established: /Users/hayd/.local/share/containers/podman/machine/podman-machine-default/podman.sock to /run/user/0/podman/podman.sock 

ERRO[0013] Couldn't restablish ssh tunnel on path: /run/user/0/podman/podman.sock: ssh: rejected: connect failed (open failed) 

WARN[0013] API socket failed ping test                  


This machine is currently configured in rootless mode. If your containers

require root permissions (e.g. ports < 1024), or if you run into compatibility

issues with non-podman clients, you can switch using the following command: 


podman machine set --rootful


API forwarding listening on: /Users/hayd/.local/share/containers/podman/machine/podman-machine-default/podman.sock


The system helper service is not installed; the default Docker API socket

address can't be used by podman. If you would like to install it run the

following command:


sudo /usr/local/Cellar/podman/4.0.2/bin/podman-mac-helper install


You can still connect Docker API clients by setting DOCKER_HOST using the

following command in your terminal session:


export DOCKER_HOST='unix:///Users/hayd/.local/share/containers/podman/machine/podman-machine-default/podman.sock'


Machine "podman-machine-default" started successfully

but still saw the same issue: -

podman images

Cannot connect to Podman. Please verify your connection to the Linux system using `podman system connection list`, or try `podman machine init` and `podman machine start` to manage a new Linux VM

Error: unable to connect to Podman socket: server API version is too old. Client "4.0.0" server "3.4.4"

I've installed podman via Homebrew: -

brew info podman

podman: stable 4.0.2 (bottled), HEAD

Tool for managing OCI containers and pods

https://podman.io/

/usr/local/Cellar/podman/4.0.2 (172 files, 48.7MB) *

  Poured from bottle on 2022-03-15 at 12:23:57

From: https://github.com/Homebrew/homebrew-core/blob/HEAD/Formula/podman.rb

License: Apache-2.0

==> Dependencies

Build: go ✘, go-md2man ✘

Required: qemu ✔

==> Options

--HEAD

Install HEAD version

==> Caveats

zsh completions have been installed to:

  /usr/local/share/zsh/site-functions

==> Analytics

install: 10,898 (30 days), 36,534 (90 days), 104,199 (365 days)

install-on-request: 10,891 (30 days), 36,516 (90 days), 104,182 (365 days)

build-error: 13 (30 days)

Nothing was terribly informative, so I chose to nuke the machine: -

podman machine rm

The following files will be deleted:


/Users/hayd/.ssh/podman-machine-default

/Users/hayd/.ssh/podman-machine-default.pub

/Users/hayd/.config/containers/podman/machine/qemu/podman-machine-default.ign

/Users/hayd/.local/share/containers/podman/machine/qemu/podman-machine-default_fedora-coreos-35.20220213.2.0-qemu.x86_64.qcow2

/Users/hayd/.config/containers/podman/machine/qemu/podman-machine-default.json



Are you sure you want to continue? [y/N] y

and create a new one: -

podman machine init

Downloading VM image: fedora-coreos-35.20220305.dev.0-qemu.x86_64.qcow2.xz: done  

Extracting compressed file

Image resized.

Machine init complete

To start your machine run:


podman machine start

podman machine start

Starting machine "podman-machine-default"
INFO[0000] waiting for clients...                       
INFO[0000] new connection from  to /var/folders/b5/8vqr9tt54v94jxzs0_k2qq2m0000gn/T/podman/qemu_podman-machine-default.sock 
Waiting for VM ...
INFO[0018] Socket forward established: /Users/hayd/.local/share/containers/podman/machine/podman-machine-default/podman.sock to /run/user/501/podman/podman.sock 

This machine is currently configured in rootless mode. If your containers
require root permissions (e.g. ports < 1024), or if you run into compatibility
issues with non-podman clients, you can switch using the following command: 

podman machine set --rootful

API forwarding listening on: /Users/hayd/.local/share/containers/podman/machine/podman-machine-default/podman.sock

The system helper service is not installed; the default Docker API socket
address can't be used by podman. If you would like to install it run the
following command:

sudo /usr/local/Cellar/podman/4.0.2/bin/podman-mac-helper install

You can still connect Docker API clients by setting DOCKER_HOST using the
following command in your terminal session:

export DOCKER_HOST='unix:///Users/hayd/.local/share/containers/podman/machine/podman-machine-default/podman.sock'

Machine "podman-machine-default" started successfully

and then: -

podman images

REPOSITORY  TAG         IMAGE ID    CREATED     SIZE

which looked better.

I started a container: -

podman run -it alpine:latest sh

Resolved "alpine" as an alias (/etc/containers/registries.conf.d/000-shortnames.conf)
Trying to pull docker.io/library/alpine:latest...
Getting image source signatures
Copying blob sha256:59bf1c3509f33515622619af21ed55bbe26d24913cedbca106468a5fb37a50c3
Copying blob sha256:59bf1c3509f33515622619af21ed55bbe26d24913cedbca106468a5fb37a50c3
Copying config sha256:c059bfaa849c4d8e4aecaeb3a10c2d9b3d85f5165c66ad3a4d937758128c4d18
Writing manifest to image destination
Storing signatures
/ # uname -a
Linux c4367a60c3d4 5.15.18-200.fc35.x86_64 #1 SMP Sat Jan 29 13:54:17 UTC 2022 x86_64 Linux
/ # exit

and then re-checked the downloaded images: -

podman images

REPOSITORY                TAG         IMAGE ID      CREATED       SIZE
docker.io/library/alpine  latest      c059bfaa849c  3 months ago  5.87 MB

podman version

Client:       Podman Engine
Version:      4.0.2
API Version:  4.0.2
Go Version:   go1.17.8

Built:      Wed Mar  2 14:04:36 2022
OS/Arch:    darwin/amd64

Server:       Podman Engine
Version:      4.0.2
API Version:  4.0.2
Go Version:   go1.16.14

Built:      Thu Mar  3 14:56:56 2022
OS/Arch:    linux/amd64

Visual Studio Code - Wow 🙀

Why did I not know that I can merely hit [cmd] [p] to bring up the Quick Open box, letting me search my project by file name e.g. a repo cloned from GitHub...