Wednesday 29 September 2021

Tinkering with Kubernetes Networking - Today I'm Learning ....

I'm having oh-so-much fun debugging a TLS-encrypted, client-certificate-protected service running on Kubernetes 1.20, and was looking for a way to "see inside" the ClusterIP service itself.

This: -

Lab 01 - Kubernetes Networking, using Service Types, Ingress and Network Policies to Control Application Access

provided some really useful insight: -

The HelloWorld Service is accessible now but only within the cluster. To expose a Service onto an external IP address, you have to create a ServiceType other than ClusterIP. Apps inside the cluster can access a pod by using the in-cluster IP of the service or by sending a request to the name of the service. When you use the name of the service, kube-proxy looks up the name in the cluster DNS provider and routes the request to the in-cluster IP address of the service.

To allow external traffic into a kubernetes cluster, you need a NodePort ServiceType. If you set the type field of Service to NodePort, Kubernetes allocates a port in the range 30000-32767. Each node proxies the assigned NodePort (the same port number on every Node) into your Service.

Patch the existing Service for helloworld to type: NodePort,

$ kubectl patch svc helloworld -p '{"spec": {"type": "NodePort"}}'

service/helloworld patched

Describe the Service again,

$ kubectl describe svc helloworld

Name:                     helloworld
Namespace:                default
Labels:                   app=helloworld
Annotations:              <none>
Selector:                 app=helloworld
Type:                     NodePort
Port:                     <unset>  8080/TCP
TargetPort:               http-server/TCP
NodePort:                 <unset>  31777/TCP
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

In this example, Kubernetes allocated NodePort 31777; for you, this is likely to be a different port in the range 30000-32767.

You can now connect to the service via the public IP address of any worker node in the cluster and traffic gets forwarded to the service, which uses service discovery and the selector of the Service to deliver the request to the assigned pod. With this piece in place we now have a complete pipeline for load balancing external client requests to all the nodes in the cluster.

With that, I can (temporarily) patch my ClusterIP service to a NodePort, and then poke into it from the outside, using the K8s Node's external IP: -

kubectl get node nodename --output json | jq -r .status.addresses

and the newly allocated NodePort e.g. 31777.
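Pulling that together, here's a sketch - assuming jq is installed, and using a canned stand-in for the real kubectl output, with made-up addresses - of grabbing the ExternalIP and building the curl command: -

```shell
# Stand-in for `kubectl get node nodename --output json`; the real command
# emits the same .status.addresses shape. The addresses are made up.
cat <<'EOF' > /tmp/node.json
{"status":{"addresses":[{"type":"InternalIP","address":""},{"type":"ExternalIP","address":""}]}}
EOF

# Pluck out the ExternalIP, then aim curl at it plus the allocated NodePort
NODE_IP=$(jq -r '.status.addresses[] | select(.type=="ExternalIP") | .address' /tmp/node.json)
echo "curl http://${NODE_IP}:31777/"
```

With a real cluster, swap the heredoc for the kubectl command itself.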


Tuesday 28 September 2021

Nuking a pesky K8s namespace from orbit ...

I was having problems with a recalcitrant Kubernetes namespace that just did not want to be deleted ...

kubectl get namespaces | grep -v Active

NAME              STATUS        AGE
foobar            Terminating   7d22h

kubectl get namespace foobar --output json

{
    "apiVersion": "v1",
    "kind": "Namespace",
    "metadata": {
        "creationTimestamp": "2021-09-20T09:30:17Z",
        "deletionTimestamp": "2021-09-27T17:18:57Z",
        "name": "foobar",
        "resourceVersion": "772406",
        "uid": "22863d2f-f956-4da8-bfd5-dd70a4d1685a"
    },
    "spec": {
        "finalizers": [
            "kubernetes"
        ]
    },
    "status": {
        "conditions": [
            {
                "lastTransitionTime": "2021-09-27T17:19:04Z",
                "message": "All resources successfully discovered",
                "reason": "ResourcesDiscovered",
                "status": "False",
                "type": "NamespaceDeletionDiscoveryFailure"
            },
            {
                "lastTransitionTime": "2021-09-27T17:19:04Z",
                "message": "All legacy kube types successfully parsed",
                "reason": "ParsedGroupVersions",
                "status": "False",
                "type": "NamespaceDeletionGroupVersionParsingFailure"
            },
            {
                "lastTransitionTime": "2021-09-27T17:19:04Z",
                "message": "All content successfully deleted, may be waiting on finalization",
                "reason": "ContentDeleted",
                "status": "False",
                "type": "NamespaceDeletionContentFailure"
            },
            {
                "lastTransitionTime": "2021-09-27T17:19:04Z",
                "message": "Some resources are remaining: has 1 resource instances",
                "reason": "SomeResourcesRemain",
                "status": "True",
                "type": "NamespaceContentRemaining"
            },
            {
                "lastTransitionTime": "2021-09-27T17:19:04Z",
                "message": "Some content in the namespace has finalizers remaining: backup-operator-periodic in 1 resource instances",
                "reason": "SomeFinalizersRemain",
                "status": "True",
                "type": "NamespaceFinalizersRemaining"
            }
        ],
        "phase": "Terminating"
    }
}

This: -

Namespace "stuck" as Terminating, How do I remove it?

gave me some inspiration: -

kubectl api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl get -n foobar

NAME                  AGE
foobar-etcd-cluster   7d22h

kubectl get etcdbackups -A

NAMESPACE   NAME                  AGE
foobar      foobar-etcd-cluster   7d22h

kubectl api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl get --show-kind --ignore-not-found -n foobar

NAME                  AGE
foobar-etcd-cluster   7d22h
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use Ingress

This bit was the most useful: -

kubectl get namespace foobar --output json > /tmp/foobar.json

I edited the resulting JSON document: -

vi /tmp/foobar.json

removing the kubernetes finaliser, from this: -

    }, "spec": { "finalizers": [ "kubernetes" ] },
to this: -

    }, "spec": { "finalizers": [ ] },
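As an aside, the same edit can be scripted with jq rather than vi; here's a sketch against a cut-down stand-in for the namespace document: -

```shell
# Minimal stand-in for the stuck namespace JSON saved earlier
cat <<'EOF' > /tmp/foobar.json
{"apiVersion":"v1","kind":"Namespace","metadata":{"name":"foobar"},"spec":{"finalizers":["kubernetes"]}}
EOF

# Empty the spec.finalizers array - the same change made by hand in vi
jq '.spec.finalizers = []' /tmp/foobar.json > /tmp/foobar_patched.json
jq -c .spec /tmp/foobar_patched.json
# → {"finalizers":[]}
```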

and then started the K8s proxy: -

kubectl proxy

in one terminal and, from another terminal, ran cURL: -

cd /tmp

curl -k -H "Content-Type: application/json" -X PUT --data-binary @foobar.json https://localhost:8001/api/v1/namespaces/foobar/finalize

Sadly this responded with: -

curl: (35) error:1400410B:SSL routines:CONNECT_CR_SRVR_HELLO:wrong version number


Well, that wasn't ever gonna work; I was trying to hit the API server using HTTPS rather than HTTP - I guess that kubectl proxy only listens using HTTP.

Therefore, the solution was to hit the endpoint using HTTP rather than HTTPS.


However, once I changed the URL from HTTPS to plain HTTP, all was well: -

curl -k -H "Content-Type: application/json" -X PUT --data-binary @foobar.json

and now all is well: -

kubectl get namespaces

NAME              STATUS   AGE
default           Active   9d
ibm-cert-store    Active   9d
ibm-operators     Active   9d
ibm-system        Active   9d
kube-node-lease   Active   9d
kube-public       Active   9d
kube-system       Active   9d

Monday 27 September 2021

Hello World, what is your version ?

Whilst having a discussion with a colleague about running Eclipse on macOS Big Sur 11.6 with Java 8, I wanted to check whether the latest Eclipse 2021-09 would work with Java 8 ( aka Java 1.8.0_251 ).

So I "wrote" a Hello World to check: -

public class hello {

    public static void main(String[] args) {
        // Prints "Hello, World" to the terminal window.
        System.out.println("Hello, World");
        String version = System.getProperty("java.version");
        System.out.println("JVM version is " + version);
    }
}

I ran this in Eclipse, having also confirmed that I could run it locally: -

java -cp ~/eclipse-workspace/HelloWorld/bin/ hello

Hello, World
JVM version is 1.8.0_251

Tuesday 21 September 2021

Today I Learned - more about Git config

Whilst trying to create a container image from a project on GitHub, I hit an issue with the cloning process of the GH repository ...

Specifically, the repo contains a submodule which, in my simple brain, is where one project/repo includes the content of another project/repo as an internal dependency.

I was cloning the original project using its HTTPS URL rather than, say, via an SSH key pair.

Which is normally fine ...

However, I needed to provide a GitHub Personal Access Token for the clone, as the repo is private/protected.

This works perfectly: -

git clone --recurse-submodules 


Well, to a point .... the repo clone worked BUT the submodule failed to clone, because it, too, wanted a set of credentials ...

Thus I saw: -

Submodule 'submodule-dependency' ( registered for path 'submodule-dependency'
Cloning into '/tmp/project-repo/submodule-dependency'...
Username for '': ^Cwarning: Clone succeeded, but checkout failed.

which was a PITA.

There is, however, a solution AND a happy ending ...

I'd previously used the insteadOf property within my Git config: -

git config --global url."".insteadOf ""

when I wanted to ensure that git clone used my SSH private key for authentication to private/protected repositories ...

There's an equivalent in the world of cloning submodules via HTTPS ...

git config --global url."https://x-access-token:${ACCESS_TOKEN}".insteadOf

which solves the problem nicely.
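For illustration, the complete shape of that mapping probably looks something like this - the GitHub host and the example token are assumptions on my part, so adjust to taste - exercised here against a scratch Git config: -

```shell
# Use a scratch HOME so the demo doesn't touch the real global Git config
export HOME=$(mktemp -d)
ACCESS_TOKEN=example-token

# Map anonymous HTTPS GitHub URLs onto token-authenticated ones
git config --global url."https://x-access-token:${ACCESS_TOKEN}@github.com/".insteadOf "https://github.com/"

# Confirm the token is now (visibly!) persisted in the global config
git config --global --get-regexp 'url\..*\.insteadof'
```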

One thing to note, however: this does mean that the value of the $ACCESS_TOKEN variable is then persisted to the global Git configuration, and can thus be seen via: -

git config --global --list

There is, however, a solution to this: -

git config --global --unset url."https://x-access-token:${ACCESS_TOKEN}".insteadOf

git config --global --unset credential.helper

Easy when you know how ?

Monday 20 September 2021

Doh, Jenkins says " Is a directory"

I'm using Jenkins to build a container image, using a Jenkinsfile hosted in a GitHub project, and was hitting this each and every time I ran my build: - Is a directory
at Method)
at java.nio.file.Files.readAllBytes(
at hudson.FilePath$ReadToString.invoke(
at hudson.FilePath$ReadToString.invoke(
at hudson.FilePath.act(
at hudson.FilePath.act(
at hudson.FilePath.readToString(
at org.jenkinsci.plugins.workflow.cps.CpsScmFlowDefinition.create(
at org.jenkinsci.plugins.workflow.cps.CpsScmFlowDefinition.create(
at hudson.model.ResourceController.execute(
Finished: FAILURE

Like a fool, I dug into the actual Jenkinsfile, looking at the Groovy formatting and conventions, including the embedded Bash script ...

And couldn't find the root cause ...

And then I compared and contrasted my new Jenkins build job with one that actually worked ...

And then realised that it was a PEBCAK 

Yep, I'd specified the Script Path quite correctly BUT hadn't actually specified the name of the Jenkinsfile ...

Ordinarily, that'd probably be OK, since Jenkins assumes that all Jenkinsfiles are called Jenkinsfile.

But... in my case, the Jenkinsfile is called ... build_push_etcd_operator_Jenkinsfile.

In other words, the error message: - Is a directory

was 100% correct; /etcd-operator/ is definitely a directory, not a file ....

Once I updated the path, all was good 😅

Can you say "Doofus" ?  I bet you can .....

Friday 17 September 2021

Today I Learned - how to grep for two strings

So I do a lot of work with IBM Container Registry (ICR) via the IBM Cloud command-line tool.

Having logged into my IBM Cloud account: -

ic login --sso

I then log into my ICR instance: -

ic cr login

and go look at my images.

I'm specifically interested in those images which have vulnerabilities, as scanned by the oh-so-useful built-in Vulnerability Advisor (VA) tool.

In the past, I did this via an unwieldy use of grep -v as per this: -

ic cr images | grep -v "No Issues" | grep -v "Unsupported OS"

effectively parsing out images that have either No Issues or are based upon an Unsupported OS.

Is there a better way, I thought to myself ?

Well, dur, of course there is: -

ic cr images | grep -v "No Issues\|Unsupported OS"

which does the same job but in fewer characters.
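There's also the extended-regex flavour, grep -E, where the alternation needs no backslash; here it is against some canned stand-in output: -

```shell
# Canned stand-in for `ic cr images` output - one image per line
printf 'repo1   No Issues\nrepo2   3 vulnerabilities\nrepo3   Unsupported OS\n' \
  | grep -Ev "No Issues|Unsupported OS"
# → repo2   3 vulnerabilities
```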

Thursday 16 September 2021

Note to self - using the CRI tool - crictl - to clean up unready pods

Purely 'cos I know I'll need this again: -

for i in `crictl pods | grep NotReady | awk '{print $1}'`; do crictl rmp $i; done

which is for use when I've got a bunch of NotReady pods waiting to be nuked.
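The grep/awk part of that one-liner can be sanity-checked without a cluster; here it is against some simulated crictl pods output ( the column layout here is illustrative ): -

```shell
# Simulated `crictl pods` output - the first column is the pod sandbox ID,
# which is what gets fed to `crictl rmp`
printf 'POD ID   CREATED   STATE      NAME\nabc123   1h ago    Ready      good-pod\ndef456   2h ago    NotReady   bad-pod\n' \
  | grep NotReady | awk '{print $1}'
# → def456
```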

More fun with Docker and Homebrew - authentication this time

Following on from my earlier post: -

Homebrew on macOS - Docker says "No" - well, kinda 

I hit a related problem post the installation of Minikube etc.

When I tried to log into Docker Hub: -

docker login -u davidhay1969

I saw this: -


Error saving credentials: error storing credentials - err: exec: "docker-credential-desktop": executable file not found in $PATH, out: ``

This post: -

docker-credential-desktop not installed or not available in PATH

gave me the "solution" - it was another hangover from having Docker Desktop installed, namely this file: -


For whatever reason, it was necessary to edit it, and replace credsStore with credStore
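For reference, the file in question is the Docker CLI configuration - ~/.docker/config.json - and the edit can be sketched like this, against a stand-in copy in /tmp: -

```shell
# Stand-in for the leftover Docker Desktop config file
cat <<'EOF' > /tmp/config.json
{"auths":{},"credsStore":"desktop"}
EOF

# Rename credsStore to credStore, which the docker CLI then ignores,
# so it falls back to storing credentials in the file itself
sed -i.bak 's/"credsStore"/"credStore"/' /tmp/config.json
cat /tmp/config.json
# → {"auths":{},"credStore":"desktop"}
```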

Once I did this, all was well ...

And, of course, docker login completely rewrote that file, removing any reference to either credsStore or credStore ...

Again... HANGOVER!

Wednesday 15 September 2021

Homebrew on macOS - Docker says "No" - well, kinda

Whilst helping out a friend, I was running through a process to get Docker running without Docker Desktop, as per this: -

Run Docker without Docker Desktop on macOS

which makes heavy use of Homebrew.

This has one install tools such as hyperkit and minikube: -

brew install hyperkit

brew install minikube

which is nice.

However, I was seeing interesting results from Homebrew in general ...

As an example when I ran brew upgrade or brew install hyperkit I was seeing errors such as: -

Error: Permission denied @ apply2files - /usr/local/lib/docker/cli-plugins

and: -

==> Pouring kubernetes-cli--1.22.1.big_sur.bottle.tar.gz
Error: The `brew link` step did not complete successfully
The formula built, but is not symlinked into /usr/local
Could not symlink bin/kubectl
Target /usr/local/bin/kubectl
already exists. You may want to remove it:
  rm '/usr/local/bin/kubectl'

This on macOS 11.6 on my good ole 2014 Mac mini ...

The solution was relatively simple ...

At some point in the past, I'd had Docker Desktop installed, which was obviously running as root via sudo etc.

Therefore, having removed Docker Desktop, I needed to clean up the permissions : -

sudo chown -R davidhay:admin /usr/local/lib

Once I did this, all was well ....

Tuesday 14 September 2021

Aide Memoire - Git and SSH rather than HTTPS

So I'm tinkering with a GitHub project and, having cloned it, was trying to build it using the Makefile: -

cd etcd-operator/


which fairly quickly tried/failed to pull in a submodule: -

Cleanup submodule go.mod files
rm -f ./shared-logger/go.mod
Generate submodule go.mod files
git submodule update --init --recursive
Submodule 'shared-logger' ( registered for path 'shared-logger'
Cloning into '/root/etcd-operator/shared-logger'...
Username for '': 

Now I have hit this many times before ...

And there is a solution ....

But I couldn't quite remember it ...

Thankfully, Google had the answer, Google is my friend: -

Go: Make error: unable to update repository: fatal: could not read Username for ‘’: terminal prompts disabled

The solution was to update my Git config: -

git config --global url."".insteadOf ""

In other words, Git needs to know to use my SSH private key for authentication, rather than the default HTTPS URL.
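For GitHub, the usual shape of that mapping is the following - the URLs here are illustrative assumptions rather than my actual values - tried out against a scratch Git config: -

```shell
# Scratch HOME so we don't clobber the real global config
export HOME=$(mktemp -d)

# Rewrite HTTPS GitHub URLs to their SSH equivalents
git config --global url."git@github.com:".insteadOf "https://github.com/"

# Inspect the mapping
git config --global --get url."git@github.com:".insteadOf
# → https://github.com/
```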


Friday 10 September 2021

TIL - building bash using cat and tee and EOF

So I've started to make more use of cat and tee and the EOF symbol when creating documentation describing how one can create various different artefacts from, say, Bash on Linux and macOS.

Typically, I'm focusing upon configuration files e.g. files with .conf and .yaml but, a few days back, I wanted to create a Bash script.

Here's an example of what I'd do for a containerd.conf configuration file: -

cat <<EOF | tee /etc/modules-load.d/containerd.conf

Nice and simple, right ?
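Here's a runnable variant of that pattern, writing to /tmp rather than /etc, with the two kernel modules one typically loads for containerd ( the module names being my assumption ): -

```shell
# Write a small config file via a heredoc piped into tee;
# tee both writes the file and echoes the content back
cat <<EOF | tee /tmp/containerd.conf
overlay
br_netfilter
EOF

cat /tmp/containerd.conf
```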

I then tried the same for a Bash script, here's a trivial example that munges a JSON document - me.json : -

{
    "name": "Dave Hay",
    "eddress": "",
    "github": "davidhay1969"
}

called ./ created thusly: -

cat << EOF | tee
#!/bin/bash
name=$(cat ~/me.json | jq -r .name)
echo "Hello, "$name
EOF

So, when I paste that into a macOS Terminal session, using /bin/bash this is what I see: -

cat << EOF | tee
> #!/bin/bash
> name=$(cat ~/me.json | jq -r .name)
> echo "Hello, "$name
name=Dave Hay
echo "Hello, "

with the resulting script looking a bit ... weird ...


#!/bin/bash
name=Dave Hay
echo "Hello, "

Having made the script executable: -

chmod +x

it doesn't run too well: -


./ line 2: Hay: command not found

Thankfully, when I hit the original issue a few days back, I'd read somewhere ( alas, I didn't grab the actual source ) that the trick is to wrap the first EOF in single quotes; quoting the delimiter stops the shell expanding $(...) and $name while it reads the heredoc. Like this: -

cat << 'EOF' | tee
#!/bin/bash
name=$(cat ~/me.json | jq -r .name)
echo "Hello, "$name
EOF

This is what happens when I paste the above into the same Terminal Bash session: -

cat << 'EOF' | tee
> #!/bin/bash
> name=$(cat ~/me.json | jq -r .name)
> echo "Hello, "$name
name=$(cat ~/me.json | jq -r .name)
echo "Hello, "$name

and the corresponding script looks OK: -


#!/bin/bash
name=$(cat ~/me.json | jq -r .name)
echo "Hello, "$name

and, even better, it actually works: -


Hello, Dave Hay

Can you say "Yay" ?

etcd - base64 isn't the only way

Having written: -

etcd - Today I learned ... 

And there's more - munging base64 in JSON for etcd

I forgot to mention that base64 isn't the only way to write to / read from etcd ...

The encoding is required where we choose to use the gRPC route to etcd via its REST APIs : -

Why gRPC gateway

etcd3 API

v3.5 docs

However, the etcdctl tool does NOT need to use base64 encoding.

Building upon my previous example, where I used the following command: -

jq -r '.[] |= @base64' dave.json > dave_encoded.json

to encode the key and value elements of a JSON document - dave.json - which looks like this: -

{
  "key": "012345",
  "value": {
    "name": "Dave Hay",
    "id": "davehay1969"
  }
}

and produce an alternate version - dave_encoded.json - with the base64 encoded elements therein, and then fed etcd using that encoded document: -

curl -X POST --silent --cacert /root/ssl/ca-cert.pem --cert /root/ssl/client-cert.pem --key /root/ssl/client-key.pem https://localhost:2379/v3/kv/put -d @dave_encoded.json | jq

and then use the encoded key - MDEyMzQ1 - to query etcd  and pull the data back out: -

curl -X POST --silent --cacert /root/ssl/ca-cert.pem --cert /root/ssl/client-cert.pem --key /root/ssl/client-key.pem https://localhost:2379/v3/kv/range -d '{"key":"MDEyMzQ1"}' | jq -r .kvs[].value | base64 -d | jq

{
  "name": "Dave Hay",
  "id": "davehay1969"
}

Well, there is an alternate approach, using etcdctl which is further described here: -

Interacting with etcd

So, given that I know the key ID of 012345 I can quite simply ask etcd to provide the value: -

etcdctl --endpoints=localhost:2379 --cacert="/root/ssl/ca-cert.pem" --cert="/root/ssl/client-cert.pem" --key="/root/ssl/client-key.pem" get 012345 | jq

{
  "name": "Dave Hay",
  "id": "davehay1969"
}

which, I think you'll agree, is way simpler.

Bottom line, knowing that the gRPC APIs use base64 is crucial; knowing that etcdctl exists is also rather neat-o.

Thursday 9 September 2021

And there's more - munging base64 in JSON for etcd

Following on from my earlier post: -

etcd - Today I learned ...

I dug into jq more, and found this: -

base64 decoding function #47

specifically this comment: -

As this is showing up on Google a lot, and good documentation on jq is sparse, here is to everybody who lands here:


echo '{"foo": "Ym9iIGxpa2VzIGFsaWNlCg=="}' | jq '.foo | @base64d'

Or even use it when building new objects:

echo '{"foo": "Ym9iIGxpa2VzIGFsaWNlCg=="}' | jq '{encoded: .foo, decoded: .foo | @base64d}'

dating back to 2018.

This led me to a neat-o mechanism to encode the key and value of my JSON document: -

cat dave.json

{
  "key": "012345",
  "value": {
    "name": "Dave Hay",
    "id": "davehay1969"
  }
}

jq -r '.[] |= @base64' dave.json

{
  "key": "MDEyMzQ1",
  "value": "eyJuYW1lIjoiRGF2ZSBIYXkiLCJpZCI6ImRhdmVoYXkxOTY5In0="
}

I then used this to generate a new JSON document: -

jq -r '.[] |= @base64' dave.json > dave_encoded.json

which I then fed into etcd: -

curl -X POST --silent --cacert /root/ssl/ca-cert.pem --cert /root/ssl/client-cert.pem --key /root/ssl/client-key.pem https://localhost:2379/v3/kv/put -d @dave_encoded.json | jq

{
  "header": {
    "cluster_id": "14841639068965178418",
    "member_id": "10276657743932975437",
    "revision": "2",
    "raft_term": "2"
  }
}

and then confirmed that I could pull the data back out: -

curl -X POST --silent --cacert /root/ssl/ca-cert.pem --cert /root/ssl/client-cert.pem --key /root/ssl/client-key.pem https://localhost:2379/v3/kv/range -d '{"key":"MDEyMzQ1"}' | jq -r .kvs[].value | base64 -d | jq


{
  "name": "Dave Hay",
  "id": "davehay1969"
}

which is nice

See, I told you I'd find a way .....

etcd - Today I learned ...

I've been tinkering with the most recent version of etcd - namely 3.5.0 - having built it from its GitHub project.

My initial and main requirement was to test etcd with SSL/TLS, specifically both server-side X509 certificate-based encryption ( to validate endpoint integrity and on-the-wire data encryption ) AND client-side certificate-based authentication ( to ensure that only authenticated clients could access the etcd service ).

I got this done and dusted, albeit via a self-signed certificate process, with a locally-created Certificate Authority (CA) used to create the keys and certificates presented: -

(a) by the etcd server to its clients

(b) by the client consuming etcd, both via its REST APIs ( driven by curl ) and via the native etcdctl utility.

Now I'm looking at the REST API in more depth, having previously only tested the /version endpoint: -

curl --silent --cacert /root/ssl/ca-cert.pem --cert /root/ssl/client-cert.pem --key /root/ssl/client-key.pem https://localhost:2379/version | jq

{
  "etcdserver": "3.5.0",
  "etcdcluster": "3.5.0"
}

I then wanted to try sending data to, and receiving data from, etcd again using its REST APIs, recognising that the purpose of etcd is: -

etcd is a strongly consistent, distributed key-value store that provides a reliable way to store data that needs to be accessed by a distributed system or cluster of machines.


Thankfully, the etcd documentation does cover this nicely: -

Why gRPC gateway

Interestingly, this uses the POST HTTP method for almost all of the operations, both PUT and GET, which was new to me ...

The documentation does cover this: -

Put and get keys 

Watch keys


Interestingly, we have this: -

The gateway accepts a JSON mapping for etcd’s protocol buffer message definitions. Note that key and value fields are defined as byte arrays and therefore must be base64 encoded in JSON. The following examples use curl, but any HTTP/JSON client should work all the same.

Source: Using gRPC gateway 

Using a test JSON document: -

cat dave.json

{
  "key": "012345",
  "value": {
    "name": "Dave Hay",
    "id": "davehay1969"
  }
}

I need to encode the key and the value into base64; the Q&D way to do this is using jq and base64: -

cat dave.json | jq -r .key | base64

MDEyMzQ1Cg==

cat dave.json | jq -r .value | base64

ewogICJuYW1lIjogIkRhdmUgSGF5IiwKICAiaWQiOiAiZGF2ZWhheTE5NjkiCn0K

and then insert those into a new JSON document: -

cat dave_b64.json

{
  "key": "MDEyMzQ1Cg==",
  "value": "ewogICJuYW1lIjogIkRhdmUgSGF5IiwKICAiaWQiOiAiZGF2ZWhheTE5NjkiCn0K"
}

We can then put that into etcd again using the REST API: -

curl -X POST --silent --cacert /root/ssl/ca-cert.pem --cert /root/ssl/client-cert.pem --key /root/ssl/client-key.pem https://localhost:2379/v3/kv/put -d @dave_b64.json | jq

{
  "header": {
    "cluster_id": "14841639068965178418",
    "member_id": "10276657743932975437",
    "revision": "8",
    "raft_term": "2"
  }
}

and then query etcd to get it back out again ...

Now, for some reason, etcd uses the range endpoint rather than, say, get but that's fine ....

curl -X POST --silent --cacert /root/ssl/ca-cert.pem --cert /root/ssl/client-cert.pem --key /root/ssl/client-key.pem https://localhost:2379/v3/kv/range -d '{"key":"MDEyMzQ1Cg=="}' | jq

{
  "header": {
    "cluster_id": "14841639068965178418",
    "member_id": "10276657743932975437",
    "revision": "8",
    "raft_term": "2"
  },
  "kvs": [
    {
      "key": "MDEyMzQ1Cg==",
      "create_revision": "7",
      "mod_revision": "8",
      "version": "2",
      "value": "ewogICJuYW1lIjogIkRhdmUgSGF5IiwKICAiaWQiOiAiZGF2ZWhheTE5NjkiCn0K"
    }
  ],
  "count": "1"
}

Note that we get back the base64-encoded value field: -

      "value": "ewogICJuYW1lIjogIkRhdmUgSGF5IiwKICAiaWQiOiAiZGF2ZWhheTE5NjkiCn0K"

We can quite easily decode this manually: -

echo "ewogICJuYW1lIjogIkRhdmUgSGF5IiwKICAiaWQiOiAiZGF2ZWhheTE5NjkiCn0K" | base64 -d

{
  "name": "Dave Hay",
  "id": "davehay1969"
}

or decode it on the fly: -

curl -X POST --silent --cacert /root/ssl/ca-cert.pem --cert /root/ssl/client-cert.pem --key /root/ssl/client-key.pem https://localhost:2379/v3/kv/range -d '{"key":"MDEyMzQ1Cg=="}' | jq -r .kvs[].value | base64 -d

{
  "name": "Dave Hay",
  "id": "davehay1969"
}

I'm sure I can script up a process to create the JSON document containing Base64-coded data without too much problem ....
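As a first stab at such a script, here's a sketch that rebuilds dave.json and encodes both fields in one hit using jq's @base64 filter - assuming the dave.json layout shown above: -

```shell
# Recreate the test document
cat <<'EOF' > /tmp/dave.json
{
  "key": "012345",
  "value": {
    "name": "Dave Hay",
    "id": "davehay1969"
  }
}
EOF

# Encode every top-level field to base64 - non-string values are serialised
# to JSON first - giving a payload ready to POST to /v3/kv/put
jq '.[] |= @base64' /tmp/dave.json > /tmp/dave_b64.json
jq -r .key /tmp/dave_b64.json
# → MDEyMzQ1
```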

TL;DR: writing to / reading from etcd feels very achievable, recognising that there's a bit of a learning curve to (a) understand the REST API and (b) handle the Base64 encoding / decoding process ....

Tuesday 7 September 2021

openssl - Get your subject right

I'm tinkering with OpenSSL to create a Certificate Authority, server keys/certificates and client keys/certificates.

Having done all of this, I was then looking to verify the server's certificate - server-cert.pem - again using openssl as follows: -

openssl verify server-cert.pem 

C = GB, O = IBM, CN =
error 18 at 0 depth lookup: self signed certificate
error server-cert.pem: verification failed

Wait, what now ?

Thankfully, this came to my rescue: -

I think you missed this part of the instructions:

Whatever method you use to generate the certificate and key files, the Common Name value used for the server and client certificates/keys must each differ from the Common Name value used for the CA certificate. Otherwise, the certificate and key files will not work for servers compiled using OpenSSL.

When OpenSSL prompts you for the Common Name for each certificate, use different names.

When I created the server's certificate: -

openssl req -newkey rsa:2048 -nodes -keyout server-key.pem -out server-req.pem -subj '/C=GB/O=IBM/

I'd used the same Subject as I used for the Certificate Authority (CA) e.g.

openssl req -new -x509 -nodes -days 365 -key ca-key.pem -out ca-cert.pem -subj '/C=GB/O=IBM/

which is a pretty bad idea.

Once I did it properly: -

openssl req -new -x509 -nodes -days 365 -key ca-key.pem -out ca-cert.pem -subj '/C=GB/O=IBM/CN=etcd_ca'

for the CA and: -

openssl req -newkey rsa:2048 -nodes -keyout server-key.pem -out server-req.pem -subj '/C=GB/O=IBM/CN=etcd_server'

for the server, all was well.
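Putting the whole sequence together - noting that the CA-signing step ( openssl x509 -req ) wasn't shown above, so that invocation is my reconstruction - the happy path looks like this: -

```shell
# Scratch directory for the demo certificates
cd "$(mktemp -d)"

# CA, with its own distinct Common Name (key generated at the same time)
openssl req -new -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout ca-key.pem -out ca-cert.pem -subj '/C=GB/O=IBM/CN=etcd_ca' 2>/dev/null

# Server CSR, with a different Common Name ...
openssl req -newkey rsa:2048 -nodes \
  -keyout server-key.pem -out server-req.pem -subj '/C=GB/O=IBM/CN=etcd_server' 2>/dev/null

# ... signed by the CA
openssl x509 -req -in server-req.pem -CA ca-cert.pem -CAkey ca-key.pem \
  -CAcreateserial -days 365 -out server-cert.pem 2>/dev/null

# Verify against the CA certificate, not the default trust store
openssl verify -CAfile ca-cert.pem server-cert.pem
# → server-cert.pem: OK
```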

For reference, this is from where I started wrt using OpenSSL in this context: -

Thursday 2 September 2021

GitHub Copilot for VSCode Might Make Coding Easier

Saw this over on @podfeet's blog: -

GitHub Copilot for VSCode Might Make Coding Easier

For reference, @podfeet, aka Allison Sheridan, is the host of a number of podcasts, including NosillaCast, Programming by Stealth and Taming the Terminal: -

Podfeet Podcasts

along with Bart Busschots of Let's Talk Apple etc.

I recommend a read of the GitHub Copilot article, and also a listen to / follow of Allison and Bart, and their respective ( and respectable ) podcasts.

Kata Containers - spell checking the docs

I'm working on a change to some of the Kata Containers documentation, and hadn't - until today - realised that the project has a rather neat Spell Check tool that runs against documentation e.g. that written in Markdown.

The tool - - has its own README and relies upon a set of dictionaries, leveraging the hunspell and pandoc projects.

To use/update this on my Ubuntu box, I needed to install those two projects: -

apt-get update && apt-get install -y hunspell

apt-get update && apt-get install -y pandoc

and then clone the Kata test project: -

git clone

I was then able to update the dictionary: -

vi ~/tests/cmd/check-spelling/data/main.txt

to add in the to-be-included words, and then run the appropriate command to have update its own internal dictionary: -

cd ~/tests/cmd/check-spelling

./ make-dict

and then run the tool itself: -

./ check ~/ 

INFO: Spell checking file '/root/'

INFO: Spell check successful for file: '/root/'

This against the world's most simple Markdown file: -

# Context

## Introduction

This is some text.
This is an error - `error message`

but you get the idea :-)

Wednesday 1 September 2021

Apple TV - subtitles on

In the context of a previous post: -

Apple Remote - tell the telly to turn the heck off !

I had a similar requirement - to have Apple TV show subtitles whilst watching Discovery+

The answer ? Simples !

Just press the mic button: -

and ask "Please turn subtitles on"

It worked!

And, perhaps unsurprisingly, the reverse also worked - "Please turn subtitles off"

I ❤️ my Apple TV 

Reminder - Apple Time Machine - where are your logs ?

Want to see what Time Machine is doing ?

If so, run the following Terminal command: -

printf '\e[3J' && log show --predicate 'subsystem == ""' --info --last 6h | grep -F 'eMac' | grep -Fv 'etat' | awk -F']' '{print substr($0,1,19), $NF}' 

which is, in part, parsing the output from the log show command.

Which is nice 🌞

Forgot to include my source for the above: -

Visual Studio Code - Wow 🙀

Why did I not know that I can merely hit [cmd] [p]  to bring up a search box allowing me to search my project e.g. a repo cloned from GitHub...