Thursday, 28 October 2021

RedHat OpenShift Container Platform - commanding the line ..

As per previous posts, I'm working with RedHat OpenShift Container Platform (OCP) a lot at present, and was looking for the most recent command-line tool, namely oc, for my Mac.

RedHat have the downloads for macOS, Windows and Linux here: -

Command-line interface (CLI) tools

so I pulled the appropriate tarball 



which resulted in: -

-rw-r--r--@   1 hayd  staff    41481963 28 Oct 15:35 openshift-client-mac.tar.gz

Having unpacked this: -

ls -al ~/Downloads/openshift-client-mac

total 384664
drwx------@ 5 hayd  staff       160 28 Oct 15:51 .
drwx------@ 7 hayd  staff       224 28 Oct 15:51 ..
-rw-r--r--@ 1 hayd  staff       954  1 Oct 01:41 README.md
-rwxr-xr-x@ 2 hayd  staff  98469344  1 Oct 01:41 kubectl
-rwxr-xr-x@ 2 hayd  staff  98469344  1 Oct 01:41 oc


I checked the version of oc - having had 4.7 previously: -

~/Downloads/openshift-client-mac/oc version

Client Version: 4.9.0

However, I also noted that the bundle includes kubectl.

When I checked the version of that: -

~/Downloads/openshift-client-mac/kubectl version

Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v0.21.0-beta.1", GitCommit:"96e95cef877ba04872b88e4e2597eabb0174d182", GitTreeState:"clean", BuildDate:"2021-10-01T00:41:12Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"darwin/amd64"}

which is out of sync with the version that I'm already using: -

kubectl version

Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.2", GitCommit:"8b5a19147530eaac9476b0ab82980b4088bbc1b2", GitTreeState:"clean", BuildDate:"2021-09-15T21:31:32Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"darwin/amd64"}

I'm not 100% sure why, and will ask around, but... in the meantime, I'll stick with what I have ...
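
For what it's worth, if I do end up putting the bundled kubectl onto my PATH, a quick way to see which one wins when I type the command is: -

which -a kubectl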

IBM Cloud - The computer says "No"

Whilst trying to delete an IBM Cloud Object Storage (COS) instance from the command-line: -

ic resource service-instance-delete e8c79678-2ef8-4c7c-8a89-24ff2e85e691 --force

I saw: -

Deleting service instance e8c79678-2ef8-4c7c-8a89-24ff2e85e691 in resource group default under account DAVID HAY's Account as david_hay@uk.ibm.com...
FAILED
Cannot delete instance or alias, resource keys must first be deleted.


and couldn't quite work out to what it was referring.

Thankfully, the help text for that command had the answer: -

ic resource service-instance-delete --help

FAILED
Incorrect Usage.

NAME:
  service-instance-delete - Delete service instance

USAGE:
  /usr/local/bin/ibmcloud resource service-instance-delete ( NAME | ID ) [-g RESOURCE_GROUP] [-f, --force] [--recursive] [-q, --quiet]
  
OPTIONS:
  -g value     Resource group name
  -f, --force  Force deletion without confirmation
  --recursive  Delete all belonging resources
  -q, --quiet  Suppress verbose output

So I re-ran the delete, this time with the --recursive flag: -

ic resource service-instance-delete e8c79678-2ef8-4c7c-8a89-24ff2e85e691 --force --recursive

Deleting service instance e8c79678-2ef8-4c7c-8a89-24ff2e85e691 in resource group default under account DAVID HAY's Account as david_hay@uk.ibm.com...
OK
Service instance cos_for_roks with ID crn:v1:bluemix:public:cloud-object-storage:global:a/f5e2ac71094077500e0d4b1ef85fdaec:e8c79678-2ef8-4c7c-8a89-24ff2e85e691:: is deleted successfully


Apple Watch - stop talking to me

For some peculiar reason, the Workout app on my Apple Watch started talking to me this morning …

Whilst that's not necessarily a problem, hearing a disembodied voice coming through my AirPods at 0600ish, whilst out on my morning walk, was somewhat disconcerting.

Thankfully a quick Google came to the rescue: -

which led me to the relevant setting in the Apple Watch app on my iPhone.


Quite why this started today, given that I updated to watchOS 8 a week or two back, is a mystery …. 🤷‍♀️

Tuesday, 26 October 2021

Podman and IBM Container Registry - there's more ...

Following on from my most recent Podman-related posts, I'm creating an image to test my RedHat OpenShift Kubernetes Service (ROKS) deployment.

Given that I'm living in the IBM Cloud CLI, I thought I'd try the ic cr command-line: -

ic cr build --no-cache de.icr.io/roks_oct2021/hello_world:latest --file Dockerfile .

FAILED

The 'build' command is deprecated, you must specify the --accept-deprecation option to use this command. For more information see: https://www.ibm.com/cloud/blog/announcements/ibm-cloud-container-registry-deprecating-container-builds

Aw shucks, of course ...

Thankfully, I have Podman ...

podman build --no-cache -t de.icr.io/roks_oct2021/hello_world:latest -f Dockerfile .

and then, having created myself a namespace in ICR: -

ic cr namespace-add roks_oct2021

I can push the image: -

podman push de.icr.io/roks_oct2021/hello_world:latest
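
This assumes that I'd already authenticated to the de.icr.io registry, which I had - either ic cr login or an explicit podman login with an API key does the job, along these lines ( assuming the key is in an APIKEY environment variable, as per my earlier Podman post ): -

echo $APIKEY | podman login de.icr.io --username iamapikey --password-stdin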

and we're off to the races ...

ic cr images

Listing images...


Repository                           Tag      Digest         Namespace      Created          Size    Security status   

de.icr.io/roks_oct2021/hello_world   latest   029263beb4d4   roks_oct2021   43 minutes ago   16 MB   No Issues   


OK

ic cr va de.icr.io/roks_oct2021/hello_world:latest

Checking security issues for 'de.icr.io/roks_oct2021/hello_world:latest'...

Image 'de.icr.io/roks_oct2021/hello_world:latest' was last scanned on Tue Oct 26 12:36:34 UTC 2021
The scan results show that NO ISSUES were found for the image.

OK

ic cr va --extended de.icr.io/roks_oct2021/hello_world:latest

Checking security issues for 'de.icr.io/roks_oct2021/hello_world:latest'...

Image 'de.icr.io/roks_oct2021/hello_world:latest' was last scanned on Tue Oct 26 12:36:34 UTC 2021
The scan results show that NO ISSUES were found for the image.

OK

which is nice !

Ooops, Podman broke my IBM Container Registry - well, kinda

I was digging into an IBM Container Registry (ICR), specifically to look at an image that I'd just built/pushed.

This is on a Mac upon which I've installed Podman, to replace Docker Desktop, as per previous posts.

Having logged into IBM Cloud ( I have a script for that ), I logged into the ICR instance: -

ic cr login

which responded: -

Logging in to 'us.icr.io'...

FAILED

Failed to 'docker login' to 'us.icr.io' with error: Cannot connect to Podman. Please verify your connection to the Linux system using `podman system connection list`, or try `podman machine init` and `podman machine start` to manage a new Linux VM

Error: unable to connect to Podman. failed to create sshClient: Connection to bastion host (ssh://core@localhost:53095/run/user/1000/podman/podman.sock) failed.: dial tcp [::1]:53095: connect: connection refused

At which point, I realised where I'd gone wrong - I'd rebooted my Mac since last I did this, and, at a guess, the Podman Machine doesn't autostart.

I manually started it: -

podman machine start

INFO[0000] waiting for clients...                       

INFO[0000] listening tcp://0.0.0.0:7777                 

INFO[0000] new connection from  to /var/folders/b5/8vqr9tt54v94jxzs0_k2qq2m0000gn/T/podman/qemu_podman-machine-default.sock 

Waiting for VM ...

Machine "podman-machine-default" started successfully

and then attempted to log into ICR: -

ic cr login

Logging in to 'us.icr.io'...

Logged in to 'us.icr.io'.


OK

Next step is to find out how to autostart the Podman Machine "service" .... 
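
My first guess - and it is only a guess at this point, not something I've actually tested - is that a macOS LaunchAgent could do it, by running podman machine start at login. A minimal sketch, using a made-up label, might look like: -

tee ~/Library/LaunchAgents/com.example.podman-machine.plist <<EOF
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <!-- hypothetical label - pick something meaningful -->
  <key>Label</key>
  <string>com.example.podman-machine</string>
  <!-- run "podman machine start" when the agent loads i.e. at login -->
  <key>ProgramArguments</key>
  <array>
    <string>/usr/local/bin/podman</string>
    <string>machine</string>
    <string>start</string>
  </array>
  <key>RunAtLoad</key>
  <true/>
</dict>
</plist>
EOF

launchctl load ~/Library/LaunchAgents/com.example.podman-machine.plist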

Monday, 25 October 2021

IBM Cloud - OCP clusters, Ingress and Certificate Manager

So this is definitely a work-in-progress but I may have resolved an issue that I was seeing with a newly created OpenShift Container Platform (OCP) cluster.

TL;DR; the command ic cs cluster ls showed my cluster state as warning and never as ready.

When I inspected the cluster using ic cs cluster get --cluster $cluster_name I saw: -

Ingress Subdomain:              - †   

Ingress Secret:                 - †   

Ingress Status:                 -   

Ingress Message:                -   

and: -

† Your Ingress subdomain and secret might not be ready yet. For more info by cluster type, see 'https://ibm.biz/ingress-sub' for Kubernetes or 'https://ibm.biz/ingress-sub-ocp' for OpenShift.

and, after a while, this: -

Ingress Message:                Could not upload certificates to Certificate Manager instance. Ensure you have the correct IAM permissions. For more info, see http://ibm.biz/ingress-secret   

I followed the suggested link, which said, in part: -

What’s happening

You create and delete a cluster multiple times, such as for automation purposes.

Every time that you create the cluster, you use either the same name or a name that is very similar to previous names that you used. When you run ibmcloud ks cluster get --cluster <cluster>, your cluster is in a normal state but no Ingress Subdomain or Ingress Secret are available.

Why it’s happening

When you create and delete a cluster that uses the same name multiple times, the Ingress subdomain for that cluster in the format <cluster_name>.<globally_unique_account_HASH>-0000.<region>.containers.appdomain.cloud is registered and unregistered each time.

The certificate for the subdomain is also generated and deleted each time. If you create and delete a cluster with the same name 5 times or more within 7 days, you might reach the Let's Encrypt Duplicate Certificate rate limit, because the same Ingress subdomain and certificate are registered every time that you create the cluster. Because very long cluster names are truncated to 24 characters in the Ingress subdomain for the cluster, you can also reach the rate limit if you use multiple cluster names that have the same first 24 characters.

Given that I'm writing a document guiding one through the process of deploying OCP on IBM Cloud, I have been re-using the same cluster name e.g. roks-oct2021 over and over again during the past few days.

Working on the hypothesis that that's the root cause, I've changed the way that I generate the cluster name for my document: -

export cluster_name="roks_`date +%s`"

which uses the date in epoch format e.g. run the command three times in sequence: -

date +%s

1635176609

date +%s

1635176610

date +%s

1635176611

and note the difference.

I've just deleted and recreated my cluster, and it's looking good thus far: -

ic cs cluster ls

OK
Name              ID                     State       Created          Workers   Location    Version                 Resource Group Name   Provider   
roks_1635176017   c5rcsngf0kf7u096q2e0   deploying   10 minutes ago   2         Frankfurt   4.8.11_1526_openshift   default               vpc-gen2   

The state shows as deploying rather than warning and, even more promisingly, the number of Workers ( Compute Nodes ) shows as 2 rather than 0.

We'll see ....

Today I learned - one reason why one may not be able to authenticate to a RedHat OpenShift cluster running on IBM Cloud ...

Today I will mainly be tinkering with RedHat OpenShift Container Platform (OCP) on IBM Cloud, and am running through a set of steps that I've written to document the end-to-end setup.

Having created a cluster on Friday, I tried to authenticate to it today: -

export apikey="my_api_key_goes_here"

oc login https://control-plane-endpoint-url:31452 -u apikey -p $apikey

but this threw up: -

Error from server (InternalError): Internal error occurred: unexpected response: 500

After a spot of digging, I realised why this was the case.
This was a newly created cluster BUT I'd not yet retrieved the cluster configuration from IBM Cloud.

The cluster name is roks-oct2021 so I needed to run: -

ic cs cluster config --cluster roks-oct2021

OK
The configuration for roks-oct2021 was downloaded successfully.

Added context for roks-oct2021 to the current kubeconfig file.
You can now execute 'kubectl' commands against your cluster. For example, run 'kubectl get nodes'.
If you are accessing the cluster for the first time, 'kubectl' commands might fail for a few seconds while RBAC synchronizes.

Once I'd done this, all was well - I was able to run: -

oc login https://control-plane-endpoint-url:31452 -u apikey -p $apikey

which responded: -

Login successful.

You have access to 66 projects, the list has been suppressed. You can list all projects with 'oc projects'

Using project "default".


Saturday, 16 October 2021

Yay, VMware Fusion and macOS Big Sur - no longer "NAT good friends" - forgive the double negative and the terrible pun ...

After macOS 11 Big Sur was released in 2020, VMware updated their Fusion product to v12 and, sadly, managed to break Network Address Translation (NAT), as per their release notes: -

VMware Fusion 12 Release Notes

Unable to access port forwarding on a NAT virtual machine, if the traffic is routed through the host IP stack on Big Sur hosts

On Big Sur hosts, if user configures NAT port forwarding in Fusion, the service running in the virtual machine is unavailable on the macOS host using localhost:exposedPort, hostIP:exposedPort, or 127.0.0.1:exposedPort; Port forwarding is also not accessible inside a NAT virtual machine using hostIP:exposedPort.

Thankfully, as of now, with Fusion 12.2.0 this is now resolved: -

VMware Fusion 12.2.0 Release Notes

Resolved Issues

Unable to access port forwarding on a NAT virtual machine if the traffic is routed through the host IP stack on Big Sur hosts

On Big Sur hosts, if a user configures NAT port forwarding in Fusion, the service running in the virtual machine is unavailable on the macOS host using localhost:exposedPort, hostIP:exposedPort, or 127.0.0.1:exposedPort; Port forwarding is also not accessible inside a NAT virtual machine using hostIP:exposedPort.

This issue is now resolved.

I updated this AM, and am now running Fusion 12.2.0 on macOS 11.6 and all is well - my Windows 10 VM happily shares my Mac's connection using NAT, and can tunnel via my Cisco AnyConnect VPN, which is nice ....

Wednesday, 13 October 2021

For my future self - don't try and use crictl to deploy pods into an existing Kubernetes cluster

I'm doing some work with the Kata Containers v2 runtime, and was trying to test it using crictl 

First I created a pair of config documents, one YAML and one JSON: -

tee podsandbox-config.yaml <<EOF
metadata:
  attempt: 1
  name: busybox-sandbox
  namespace: default
  uid: hdishd83djaidwnduwk28bcsb
log_directory: /tmp
linux:
  namespaces:
    options: {}
EOF
tee container-config.json <<EOF
{
  "metadata": {
      "name": "busybox"
  },
  "image":{
      "image": "busybox"
  },
  "command": [
      "top"
  ],
  "log_path":"busybox.log",
  "linux": {
  }
}
EOF

and then I used the first of those to create a Pod Sandbox: -

sandbox=$(crictl runp -r kata podsandbox-config.yaml)
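
For what it's worth, the plan was then to use the second of those configs to create and start a container inside that sandbox - a sketch of what would have come next, following the usual crictl flow: -

container=$(crictl create $sandbox container-config.json podsandbox-config.yaml)

crictl start $container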

However, the resulting pod soon disappeared - I managed to check its state before it disappeared: -

crictl pods | grep kata

9fafade8c3216       23 seconds ago      NotReady            busybox-sandbox                                  default             1                   kata

65fc059b8129d       40 minutes ago      Ready               nginx-kata                                       default             0                   kata

and inspected the NotReady pod: -

crictl inspectp 9fafade8c3216 | jq .status.state

"SANDBOX_NOTREADY"

Thankfully someone else had hit this issue over in the cri-o project: -


specifically this comment: -

I guess you might be using crictl to create pod/container on a running kubernetes node. Kubelet deletes unwanted containers/pods, please don't do that on a running kubernetes node.

See https://kubernetes.io/docs/tasks/debug-application-cluster/crictl/#example-crictl-commands

Ah, yes, that'd be it !

Back to K8s ..........

Aiding my memory - parsing Kubernetes using JQ etc.

So I was looking for a way to munge a Kubernetes Compute Node configuration to extract its external/public IP.

I know I can do it using purely K8s: -

kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'

123.45.6.78

but I also wanted to do it via jq and here it is: -

kubectl get node `kubectl get nodes | tail -1 | awk '{print $1}'` --output json | jq -r '.status.addresses[] | select(.type=="ExternalIP") .address'

123.45.6.78

which is nice!
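
As a footnote, if I wanted the ExternalIP of every node, rather than just the last one, jq will happily iterate over the items array without the nested kubectl call - something like: -

kubectl get nodes --output json | jq -r '.items[].status.addresses[] | select(.type=="ExternalIP") .address'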

Tinkering with Istio and Envoy on IBM Kubernetes Service via macOS

Whilst I've been aware of Istio for some years, I've never really played with it.

Well, today that's changing ...

I'm following this tutorial guide: -

Getting Started

and starting by installing the CLI tool / installation file on my Mac: -

curl -L https://istio.io/downloadIstio | sh -

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   102  100   102    0     0     72      0  0:00:01  0:00:01 --:--:--    72
100  4549  100  4549    0     0   2693      0  0:00:01  0:00:01 --:--:--  2693
Downloading istio-1.11.3 from https://github.com/istio/istio/releases/download/1.11.3/istio-1.11.3-osx.tar.gz ...
Istio 1.11.3 Download Complete!
Istio has been successfully downloaded into the istio-1.11.3 folder on your system.
Next Steps:
See https://istio.io/latest/docs/setup/install/ to add Istio to your Kubernetes cluster.
To configure the istioctl client tool for your workstation,
add the /Users/hayd/istio-1.11.3/bin directory to your environment path variable with:
export PATH="$PATH:/Users/hayd/istio-1.11.3/bin"
Begin the Istio pre-installation check by running:
istioctl x precheck 
Need more information? Visit https://istio.io/latest/docs/setup/install/ 

and adding the installation directory to my path: -

export PATH="$PATH:$HOME/istio-1.11.3/bin"

and validating the istioctl tool: -

which istioctl

/Users/hayd/istio-1.11.3/bin/istioctl

istioctl version

no running Istio pods in "istio-system"
1.11.3

and then installed it into my K8s 1.20 cluster: -

istioctl install --set profile=demo -y

✔ Istio core installed                                                                                                                                                                  
✔ Istiod installed                                                                                                                                                                       
✔ Ingress gateways installed                                                                                                                                                             
✔ Egress gateways installed                                                                                                                                                              
✔ Installation complete                                                                                                                                                                  
Thank you for installing Istio 1.11.  Please take a few minutes to tell us about your install/upgrade experience!  https://forms.gle/asdsdasdas

and checked the now running pods: -

kubectl get pods -A

NAMESPACE      NAME                                         READY   STATUS    RESTARTS   AGE
ibm-system     addon-catalog-source-2x7hj                   1/1     Running   0          42h
ibm-system     catalog-operator-578f7c8857-666wd            1/1     Running   0          42h
ibm-system     olm-operator-6c45d79d96-pjtmr                1/1     Running   0          42h
istio-system   istio-egressgateway-5fdc76bf94-v5dpg         1/1     Running   0          59s
istio-system   istio-ingressgateway-6bd7764b48-rr4fp        1/1     Running   0          59s
istio-system   istiod-675949b7c5-zqg6w                      1/1     Running   0          74s
kube-system    calico-kube-controllers-78ccd56cd7-wqgtf     1/1     Running   0          42h
kube-system    calico-node-pg6vv                            1/1     Running   0          42h
kube-system    calico-typha-ddd44968b-86cgs                 1/1     Running   0          42h
kube-system    calico-typha-ddd44968b-ffxmt                 0/1     Pending   0          42h
kube-system    calico-typha-ddd44968b-mqjrb                 0/1     Pending   0          42h
kube-system    coredns-7fc9f85d9c-5rwwv                     1/1     Running   0          42h
kube-system    coredns-7fc9f85d9c-bxtts                     1/1     Running   0          42h
kube-system    coredns-7fc9f85d9c-qk6gv                     1/1     Running   0          42h
kube-system    coredns-autoscaler-9cccfb98d-mw9qj           1/1     Running   0          42h
kube-system    dashboard-metrics-scraper-7c75dcd466-d5b9f   1/1     Running   0          42h
kube-system    ibm-keepalived-watcher-856k6                 1/1     Running   0          42h
kube-system    ibm-master-proxy-static-10.144.213.225       2/2     Running   0          42h
kube-system    kubernetes-dashboard-659cd5b798-thd57        1/1     Running   0          42h
kube-system    metrics-server-b7bc76594-4fdg2               2/2     Running   0          42h
kube-system    vpn-546847fcbf-dzzml                         1/1     Running   0          42h

and added the appropriate label for Envoy sidecar proxies: -

kubectl label namespace default istio-injection=enabled

namespace/default labeled

and then deployed the sample Bookinfo application: -

cd ~/istio-1.11.3/

kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml

service/details created
serviceaccount/bookinfo-details created
deployment.apps/details-v1 created
service/ratings created
serviceaccount/bookinfo-ratings created
deployment.apps/ratings-v1 created
service/reviews created
serviceaccount/bookinfo-reviews created
deployment.apps/reviews-v1 created
deployment.apps/reviews-v2 created
deployment.apps/reviews-v3 created
service/productpage created
serviceaccount/bookinfo-productpage created
deployment.apps/productpage-v1 created


and verified the created services: -

kubectl get services

NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
details       ClusterIP   172.21.9.104     <none>        9080/TCP   34m
kubernetes    ClusterIP   172.21.0.1       <none>        443/TCP    43h
productpage   ClusterIP   172.21.149.123   <none>        9080/TCP   34m
ratings       ClusterIP   172.21.233.195   <none>        9080/TCP   34m
reviews       ClusterIP   172.21.163.74    <none>        9080/TCP   34m

and pods: -

kubectl get pods

NAME                              READY   STATUS    RESTARTS   AGE
details-v1-79f774bdb9-zvnzr       2/2     Running   0          34m
productpage-v1-6b746f74dc-swnwk   2/2     Running   0          34m
ratings-v1-b6994bb9-kspd6         2/2     Running   0          34m
reviews-v1-545db77b95-bwdmz       2/2     Running   0          34m
reviews-v2-7bf8c9648f-h2nsl       2/2     Running   0          34m
reviews-v3-84779c7bbc-x2v2l       2/2     Running   0          34m

before testing the application: -

kubectl exec "$(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}')" -c ratings -- curl -sS productpage:9080/productpage | grep -o "<title>.*</title>"

<title>Simple Bookstore App</title>

and then configured the Istio gateway: -

kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml

gateway.networking.istio.io/bookinfo-gateway created
virtualservice.networking.istio.io/bookinfo created

and ran the istioctl analysis: -

istioctl analyze

✔ No validation issues found when analyzing namespace: default.

and set the INGRESS_PORT and SECURE_INGRESS_PORT variables: -

export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')

export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')

and grabbed the external IP of my K8s Compute Node into INGRESS_HOST: -

export INGRESS_HOST=$(kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}')

and set the GATEWAY_URL variable: -

export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
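
and, as a quick sanity check, echoed the result ( which, in my case, resolves to the A.B.C.D:30588 address used further down ): -

echo $GATEWAY_URL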

and then hit the sample application: -

curl $(echo "http://$GATEWAY_URL/productpage")

which returns a bunch of HTML 🤣
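
To keep that output manageable, the same grep-for-the-title trick from the earlier in-cluster test works just as well here: -

curl -s "http://$GATEWAY_URL/productpage" | grep -o "<title>.*</title>"

which gives the same <title>Simple Bookstore App</title> as before.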

I also hit the same URL via a real browser: -





And, finally, deploy and access the Dashboard: -

kubectl apply -f samples/addons

serviceaccount/grafana created
configmap/grafana created
service/grafana created
deployment.apps/grafana created
configmap/istio-grafana-dashboards created
configmap/istio-services-grafana-dashboards created
deployment.apps/jaeger created
service/tracing created
service/zipkin created
service/jaeger-collector created
serviceaccount/kiali created
configmap/kiali created
clusterrole.rbac.authorization.k8s.io/kiali-viewer created
clusterrole.rbac.authorization.k8s.io/kiali created
clusterrolebinding.rbac.authorization.k8s.io/kiali created
role.rbac.authorization.k8s.io/kiali-controlplane created
rolebinding.rbac.authorization.k8s.io/kiali-controlplane created
service/kiali created
deployment.apps/kiali created
serviceaccount/prometheus created
configmap/prometheus created
clusterrole.rbac.authorization.k8s.io/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
service/prometheus created
deployment.apps/prometheus created

kubectl rollout status deployment/kiali -n istio-system

Waiting for deployment "kiali" rollout to finish: 0 of 1 updated replicas are available...
deployment "kiali" successfully rolled out

istioctl dashboard kiali

http://localhost:20001/kiali

which popped up a browser ....

Having thrown some traffic at the application: -

for i in $(seq 1 100); do curl -s -o /dev/null "http://A.B.C.D:30588/productpage"; done

I could then see the application/flow/throughput etc. via the dashboard: -



To conclude, the Getting Started guide is really rather peachy, and definitely worth following through ....

Hacking go.mod files - there IS a better way

TL;DR; for our project, we're making heavy use of Go and, more specifically, modules: -

Modules are how Go manages dependencies.

A module is a collection of packages that are released, versioned, and distributed together. Modules may be downloaded directly from version control repositories or from module proxy servers.

A module is identified by a module path, which is declared in a go.mod file, together with information about the module’s dependencies. The module root directory is the directory that contains the go.mod file. The main module is the module containing the directory where the go command is invoked.

Each package within a module is a collection of source files in the same directory that are compiled together. A package path is the module path joined with the subdirectory containing the package (relative to the module root). For example, the module "golang.org/x/net" contains a package in the directory "html". That package’s path is "golang.org/x/net/html".

Source: Go Modules

In our specific use case, we have a go.mod file which contains the replace directive: -

A replace directive replaces the contents of a specific version of a module, or all versions of a module, with contents found elsewhere. The replacement may be specified with either another module path and version, or a platform-specific file path.

Source: replace directive

We had a requirement to update this replace directive, and have it point at a specific fork of a GitHub project rather than defaulting to the "main" upstream repository.

So, using the documentation's example: -

replace golang.org/x/net v1.2.3 => example.com/fork/net v1.4.5

we wanted to change from fork to, say, foobar AND specify a different version e.g. v1.5.7

Now it's perfectly easy to do this merely by hand-editing go.mod e.g. vi go.mod ....

BUT

there is ( as ever ) a better way ....

go mod edit --replace golang.org/x/net=example.com/foobar/net@v1.5.7

which gives us this in go.mod : -

replace golang.org/x/net => example.com/foobar/net v1.5.7

which is a much nicer approach ...
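
One follow-up thought - general Go hygiene rather than anything specific to our project - after changing a replace directive it's usually worth refreshing the dependency graph and go.sum: -

go mod tidy

go build ./...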

TL;DR; go mod edit is your friend!

Kubernetes - tailing Pod logs - TIL

I saw a tip from Lili on Twitter a few days back - use a label selector with kubectl logs, rather than a specific pod name: -


but hadn't yet got around to playing with it.

I thought I'd test it with a brand new K8s 1.20 cluster running on IBM Kubernetes Service (IKS).

Firstly, I queried the pods that were running: -

kubectl get pods -A

NAMESPACE     NAME                                         READY   STATUS    RESTARTS   AGE

ibm-system    addon-catalog-source-2x7hj                   1/1     Running   0          41h
ibm-system    catalog-operator-578f7c8857-666wd            1/1     Running   0          41h
ibm-system    olm-operator-6c45d79d96-pjtmr                1/1     Running   0          41h
kube-system   calico-kube-controllers-78ccd56cd7-wqgtf     1/1     Running   0          41h
kube-system   calico-node-pg6vv                            1/1     Running   0          41h
kube-system   calico-typha-ddd44968b-86cgs                 1/1     Running   0          41h
kube-system   calico-typha-ddd44968b-ffxmt                 0/1     Pending   0          41h
kube-system   calico-typha-ddd44968b-mqjrb                 0/1     Pending   0          41h
kube-system   coredns-7fc9f85d9c-5rwwv                     1/1     Running   0          41h
kube-system   coredns-7fc9f85d9c-bxtts                     1/1     Running   0          41h
kube-system   coredns-7fc9f85d9c-qk6gv                     1/1     Running   0          41h
kube-system   coredns-autoscaler-9cccfb98d-mw9qj           1/1     Running   0          41h
kube-system   dashboard-metrics-scraper-7c75dcd466-d5b9f   1/1     Running   0          41h
kube-system   ibm-keepalived-watcher-856k6                 1/1     Running   0          41h
kube-system   ibm-master-proxy-static-10.144.213.225       2/2     Running   0          41h
kube-system   kubernetes-dashboard-659cd5b798-thd57        1/1     Running   0          41h
kube-system   metrics-server-b7bc76594-4fdg2               2/2     Running   0          41h
kube-system   vpn-546847fcbf-dzzml                         1/1     Running   0          41h

and chose the metrics-server-b7bc76594-4fdg2 pod, because it's running two containers: -

kubectl get pod metrics-server-b7bc76594-4fdg2 --namespace kube-system --output json | jq -r .spec.containers[].name

metrics-server
metrics-server-nanny

Grabbing the labels for this pod: -

kubectl get pod metrics-server-b7bc76594-4fdg2 --namespace kube-system --output json | jq .metadata.labels

{
  "cert-checksum": "6457ed123878693e37fbde8b4a0abf966ae050c3",
  "k8s-app": "metrics-server",
  "pod-template-hash": "b7bc76594",
  "version": "v0.4.4"
}

and choosing the k8s-app label, I followed Lili's advice: -

kubectl logs --namespace kube-system --selector k8s-app=metrics-server --container metrics-server --tail=-1

I1011 17:10:21.606350       1 dynamic_serving_content.go:130] Starting serving-cert::/etc/metrics-server-certs/tls.crt::/etc/metrics-server-certs/tls.key
I1011 17:10:21.606421       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1011 17:10:21.626860       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1011 17:10:21.646520       1 secure_serving.go:197] Serving securely on [::]:4443
I1011 17:10:21.627139       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I1011 17:10:21.647378       1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
I1011 17:10:21.647922       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I1011 17:10:21.606461       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I1011 17:10:21.686383       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I1011 17:10:22.007270       1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController 
I1011 17:10:22.046837       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
I1011 17:10:22.126237       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 

and: -

kubectl logs --namespace kube-system --selector k8s-app=metrics-server --container metrics-server-nanny --tail=-1

ERROR: logging before flag.Parse: I1011 17:10:03.951537       1 pod_nanny.go:68] Invoked by [/pod_nanny --config-dir=/etc/config --cpu=40m --extra-cpu=0.5m --memory=40Mi --extra-memory=4Mi --threshold=5 --deployment=metrics-server --container=metrics-server --poll-period=300000 --estimator=exponential --use-metrics=true]
ERROR: logging before flag.Parse: I1011 17:10:03.951645       1 pod_nanny.go:69] Version: 1.8.12
ERROR: logging before flag.Parse: I1011 17:10:03.951673       1 pod_nanny.go:85] Watching namespace: kube-system, pod: metrics-server-b7bc76594-4fdg2, container: metrics-server.
ERROR: logging before flag.Parse: I1011 17:10:03.951684       1 pod_nanny.go:86] storage: MISSING, extra_storage: 0Gi
ERROR: logging before flag.Parse: I1011 17:10:03.968655       1 pod_nanny.go:116] cpu: 40m, extra_cpu: 0.5m, memory: 40Mi, extra_memory: 4Mi
ERROR: logging before flag.Parse: I1011 17:10:03.968697       1 pod_nanny.go:145] Resources: [{Base:{i:{value:40 scale:-3} d:{Dec:<nil>} s:40m Format:DecimalSI} ExtraPerNode:{i:{value:5 scale:-4} d:{Dec:<nil>} s: Format:DecimalSI} Name:cpu} {Base:{i:{value:41943040 scale:0} d:{Dec:<nil>} s: Format:BinarySI} ExtraPerNode:{i:{value:4194304 scale:0} d:{Dec:<nil>} s:4Mi Format:BinarySI} Name:memory}]

In other words, I used the --selector switch to grab logs for the two containers by their label, rather than knowing/caring about the pod name.

I could've done much the same BUT via a much more complicated route using grep and awk : -

kubectl logs --namespace kube-system $(kubectl get pods --namespace kube-system | grep metrics-server | awk '{print $1}')  -c metrics-server

I1011 17:10:21.606350       1 dynamic_serving_content.go:130] Starting serving-cert::/etc/metrics-server-certs/tls.crt::/etc/metrics-server-certs/tls.key
I1011 17:10:21.606421       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1011 17:10:21.626860       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1011 17:10:21.646520       1 secure_serving.go:197] Serving securely on [::]:4443
I1011 17:10:21.627139       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I1011 17:10:21.647378       1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
I1011 17:10:21.647922       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I1011 17:10:21.606461       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I1011 17:10:21.686383       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I1011 17:10:22.007270       1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController 
I1011 17:10:22.046837       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
I1011 17:10:22.126237       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 


so Lili's approach is WAY more streamlined.
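
As an extra - my extrapolation, rather than anything from the original tip - the selector approach also plays nicely with tailing both containers live: -

kubectl logs --namespace kube-system --selector k8s-app=metrics-server --all-containers --follow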

Tuesday, 12 October 2021

More fun with keyctl on Ubuntu

Following on from my earlier post: -

Fun with keyctl on Ubuntu

I started seeing the same Bad message exception when adding a certificate into a keyring: -

keyctl padd asymmetric foo @u < ~/ssl/server.crt 

add_key: Bad message

even though the required kernel module was loaded: -

lsmod |grep pkcs

pkcs8_key_parser       16384  0

and this appeared to be a valid certificate: -

file ~/ssl/server.crt

/home/hayd/ssl/server.crt: PEM certificate

openssl verify -verbose -CAfile ~/ssl/etcd-ca.crt ~/ssl/server.crt

/home/hayd/ssl/server.crt: OK

so, as per the above, the certificate is stored in Privacy Enhanced Mail (PEM) format: -

PEM or Privacy Enhanced Mail is a Base64 encoded DER certificate. PEM certificates are frequently used for web servers as they can easily be translated into readable data using a simple text editor. Generally when a PEM encoded file is opened in a text editor, it contains very distinct headers and footers.


Jumping to the conclusion that keyctl may require a different format, e.g. Distinguished Encoding Rules (DER), instead: -

DER (Distinguished Encoding Rules) is a binary encoding for X.509 certificates and private keys. Unlike PEM, DER-encoded files do not contain plain text statements such as -----BEGIN CERTIFICATE-----. DER files are most commonly seen in Java contexts.


I regenerated the certificate: -

openssl x509 -req -extfile <(printf "subjectAltName=DNS:localhost,DNS:genctl-etcd-cluster.genctl.svc,DNS:genctl-etcd-cluster-client.genctl.svc") -days 365 -in ~/ssl/server.csr -CA ~/ssl/etcd-ca.crt -CAkey ~/ssl/etcd-ca.key -CAcreateserial -out ~/ssl/server.der -outform der

in DER format ( via -outform der ) and verified it: -

file ~/ssl/server.der

/home/hayd/ssl/server.der: data

and then imported it using keyctl : -

export description="Test1"

keyctl padd asymmetric $description @u < ~/ssl/server.der

526852507

and validated thusly: -

keyctl list @u

1 key in keyring:
526852507: --als--v  1000  1000 asymmetric: Test1

Nice!
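
As a side-note, regenerating the certificate wasn't strictly necessary; an existing PEM certificate can be converted to DER after the fact, along these lines: -

openssl x509 -in ~/ssl/server.crt -outform der -out ~/ssl/server.der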

Thursday, 7 October 2021

Fun with keyctl on Ubuntu

One of my friends has been tinkering with keyctl and had a few questions about the Linux kernel modules e.g. pkcs8_key_parser.

So I ran through an end-to-end setup to grow my own understanding, with thanks to: -

keyring-ima-signer

for enabling me to grow my understanding 🤣🤣

What OS do I have ?

lsb_release -a

No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 20.04.3 LTS
Release: 20.04
Codename: focal

What kernel am I running ?

uname -a

Linux ubuntu 5.4.0-84-generic #94-Ubuntu SMP Thu Aug 26 20:27:37 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

Install the keyutils package

sudo apt install keyutils

Set a subject for the public key

export subject="/C=GB/O=IBM/CN="`hostname`

Set the description for the keyring entry

export description="Test1"

Generate a RSA private key

openssl genrsa | openssl pkcs8 -topk8 -nocrypt -outform DER -out privatekey.der

Generating RSA private key, 2048 bit long modulus (2 primes)
....................+++++
.....................+++++
e is 65537 (0x010001)

Generate a public key/certificate

openssl req -x509 -key privatekey.der -out certificate.pem -days 365 -keyform DER -subj $subject

Add private key to keyring

keyctl padd asymmetric $description @u <privatekey.der

add_key: Bad message

Load required key parser module

sudo modprobe pkcs8_key_parser

Verify module load

lsmod |grep key

pkcs8_key_parser       16384  0

Add private key to keyring - second attempt

keyctl padd asymmetric $description @u <privatekey.der

676878733

Validate keyring

keyctl list @u

1 key in keyring:
676878733: --als--v  1000  1000 asymmetric: Test1
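
Make the module load persist across reboots ( optional - and an assumption on my part, rather than something my friend asked for )

echo pkcs8_key_parser | sudo tee /etc/modules-load.d/pkcs8_key_parser.conf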

Wednesday, 6 October 2021

Podman broke my IBM Cloud

As per my earlier posts, I'm all over Podman at the moment.

Having removed Docker Desktop, I then realised that this broke the IBM Cloud CLI tool, specifically the Container Registry plugin: -

ic cr login

FAILED

Could not locate 'docker'. Check your 'docker' installation and path.

Of course, Docker isn't installed, and thus is no longer in the path: -

which docker

which returns NADA!

But Podman is there: -

which podman

/usr/local/bin/podman

so I created an alias/shortcut/symbolic link between the two: -

ln -s `which podman` /usr/local/bin/docker

so now Docker is back: -

which docker

/usr/local/bin/docker

and is remarkably similar to Podman: -

docker --version

docker version 3.4.0

podman --version

podman version 3.4.0

and now the cloud is happy: -

ic cr login

Logging in to 'us.icr.io'...

Logged in to 'us.icr.io'.


OK
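
As an aside, I did consider a shell alias instead: -

alias docker=podman

but that only really covers interactive use - the IBM Cloud CLI almost certainly invokes docker directly, rather than via my shell, so the symbolic link felt like the safer bet.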


Word to the wise - check your serial

I'm transitioning from an iPhone 8 Plus to an iPhone 13, and had wiped the older phone ( having remembered to turn off Find My... and also log out from iCloud ).

All was good, and the new iPhone 13 is oh-so-shiny; I'm a very happy camper ...

However, as I'm handing down the older model to a family member, I wanted to go get the battery replaced.

The Apple site asks, perhaps quite wisely, for a serial or an International Mobile Equipment Identity (IMEI) number.

Try getting that from the Settings page on a wiped, yet-to-be-setup iPhone ...

Thankfully, this article had the answer: -

Find the serial number or IMEI on your iPhone, iPad or iPod touch


It's on the SIM tray; just grab a handy paperclip, and you're good to go !

Monday, 4 October 2021

Podman - pruning

So, back in the Docker days, I wrote a basic little script called prune.sh which would ... prune containers that had exited: -

#!/usr/bin/env bash
echo "Removing Docker containers that have Exited"
docker rm `docker ps -a|grep Exited|awk '{print $1}'`
echo "Completed"

so I wanted to do much the same now that I'm into podman.

My starting position is that I've got two containers, one recently exited and one running: -

podman ps -a

CONTAINER ID  IMAGE                                                               COMMAND               CREATED         STATUS                     PORTS                   NAMES
413adb675c62  docker.io/library/hello-world:latest                                /hello                37 seconds ago  Exited (0) 38 seconds ago  0.0.0.0:8080->8080/tcp  hopeful_vaughan
408ce1f9513f  us.icr.io/demo_time/hello_world_nginx_june_2021:latest  nginx -g daemon o...  32 seconds ago  Up 32 seconds ago          0.0.0.0:8443->443/tcp   zealous_brattain

so I want to remove them both.

Now obviously I don't want to be bothered typing in the container ID or name, so let's just get a list of the IDs: -

podman ps -a | grep -v CONTAINER | awk '{print $1}'

413adb675c62
408ce1f9513f

and then use that in a loop to remove the unwanted containers: -

for i in $(podman ps -a | grep -v CONTAINER | awk '{print $1}'); do podman rm $i; done

413adb675c62
Error: cannot remove container 408ce1f9513f8056497d9e6353dd9b210c59d38eafe30c698f202ba6f240babe as it is running - running or paused containers cannot be removed without force: container state improper

so that's a 50% success.

Perhaps I need to add in a podman stop before the podman rm like this: -

podman ps -a

CONTAINER ID  IMAGE                                                               COMMAND               CREATED             STATUS                         PORTS                   NAMES
408ce1f9513f  us.icr.io/demo_time/hello_world_nginx_june_2021:latest  nginx -g daemon o...  7 minutes ago       Exited (0) 49 seconds ago      0.0.0.0:8443->443/tcp   zealous_brattain
43a6797e1036  docker.io/library/hello-world:latest                                /hello                About a minute ago  Exited (0) About a minute ago  0.0.0.0:8080->8080/tcp  hopeful_robinson

for i in $(podman ps -a | grep -v CONTAINER | awk '{print $1}'); do podman stop $i && podman rm $i; done

408ce1f9513f
408ce1f9513f
43a6797e1036
43a6797e1036

podman ps -a

CONTAINER ID  IMAGE       COMMAND     CREATED     STATUS      PORTS       NAMES

I can do much the same for the images, if I want my prune to be really really ruthless: -

for i in $(podman images | grep -v REPOSITORY | awk '{print $1}'); do podman rmi $i; done

Untagged: docker.io/library/hello-world:latest
Deleted: feb5d9fea6a5e9606aa995e879d862b825965ba48de054caab5ef356dc6b3412
Untagged: us.icr.io/demo_time/hello_world_nginx_june_2021:latest
Deleted: c5318a40be88ede4e70c8c11f552a765c1c8aa5965ebd428da0b4766c2546968

So here's my final script: -

#!/usr/bin/env bash

echo "Removing containers"

for i in $(podman ps -a | grep -v CONTAINER | awk '{print $1}'); do podman stop $i && podman rm $i; done

echo "Removing images"

for i in $(podman images | grep -v REPOSITORY | awk '{print $1}'); do podman rmi $i; done

echo "Done"

~/prune.sh
 
Removing containers
854a1937c347
854a1937c347
ERRO[12181] accept tcp [::]:8443: use of closed network connection 
95ef5827777e
95ef5827777e
Removing images
Untagged: docker.io/library/hello-world:latest
Deleted: feb5d9fea6a5e9606aa995e879d862b825965ba48de054caab5ef356dc6b3412
Untagged: us.icr.io/demo_time/hello_world_nginx_june_2021:latest
Deleted: c5318a40be88ede4e70c8c11f552a765c1c8aa5965ebd428da0b4766c2546968
Done

podman ps -a

CONTAINER ID  IMAGE       COMMAND     CREATED     STATUS      PORTS       NAMES

podman images

REPOSITORY  TAG         IMAGE ID    CREATED     SIZE
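
One afterthought - podman does have built-in housekeeping commands that cover much of this, so a really ruthless prune could arguably be as simple as: -

podman stop --all

podman system prune --all --force

My script above just mirrors the old Docker one, which is why I've left it as-is for now.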

And there's more - podman in action

Following on from my two earlier posts: -

Podman - my first time

Podman and Homebrew and Docker - Permission to launch ...

here we go, using Podman to run a container from a "Here's one I created earlier" container image that hosts Nginx on the internal container port of 443 using SSL/TLS : -

Starting position - no containers nor images

podman ps -a

CONTAINER ID  IMAGE       COMMAND     CREATED     STATUS      PORTS       NAMES

podman images

REPOSITORY  TAG         IMAGE ID    CREATED     SIZE

Logging into IBM Container Registry

export APIKEY="<THIS IS WHERE MY API KEY GOES>"

echo $APIKEY | podman login us.icr.io --username iamapikey --password-stdin

Login Succeeded!

Pulling image

podman pull us.icr.io/demo_time/hello_world_nginx_june_2021:latest

Trying to pull us.icr.io/demo_time/hello_world_nginx_june_2021:latest...

Getting image source signatures
Checking if image destination supports signatures
Copying blob sha256:5843afab387455b37944e709ee8c78d7520df80f8d01cf7f861aae63beeddb6b
Copying blob sha256:0dc18a5274f2c43405a2ecccd3b10c159e3141b963a899c1f8127fd921a919dc
Copying blob sha256:48a0ee941dcdebbf017f21b46c5dd6f6ee81f8086e9347e852a067cf6f18209a
Copying blob sha256:2446243a1a3fbd03fffa8180f51dee385c4c5dbd91a84ebcdb6958f0e42cf764
Copying blob sha256:cbf0756b41fb647e1222f78d79397c27439b0c3a9b27aafbdd34aa5b72bd6a49
Copying blob sha256:c72750a979b985e3c3d6299106d90b0cff7e0b833a53ac02fcb7d76bd5fe4066
Copying blob sha256:48a0ee941dcdebbf017f21b46c5dd6f6ee81f8086e9347e852a067cf6f18209a
Copying blob sha256:45b6990e7dbfc9c43a357f0eb0ff074f159ed75c6ed865d0d9dad33a028cc2a2
Copying blob sha256:cbf0756b41fb647e1222f78d79397c27439b0c3a9b27aafbdd34aa5b72bd6a49
Copying blob sha256:5e158c5bf01f5e088f575e2fbc228bf6412be3c3c203d27d8a54e81eb9dc469e
Copying blob sha256:5843afab387455b37944e709ee8c78d7520df80f8d01cf7f861aae63beeddb6b
Copying blob sha256:2446243a1a3fbd03fffa8180f51dee385c4c5dbd91a84ebcdb6958f0e42cf764
Copying blob sha256:2a7c6912841852e1c853229bd6a6e02035b47a39aec2e98d5a2b0168a843d879
Copying blob sha256:c72750a979b985e3c3d6299106d90b0cff7e0b833a53ac02fcb7d76bd5fe4066
Copying blob sha256:449e432369550bb7d8e8d7424208c98b20e2fa419c885b5786523597afe613f1
Copying blob sha256:5e158c5bf01f5e088f575e2fbc228bf6412be3c3c203d27d8a54e81eb9dc469e
Copying blob sha256:0dc18a5274f2c43405a2ecccd3b10c159e3141b963a899c1f8127fd921a919dc
Copying blob sha256:747e67851ee5fae34759ef37ad7aa7fc1a3f547a47d949ba03fcf6a8aa391146
Copying blob sha256:45b6990e7dbfc9c43a357f0eb0ff074f159ed75c6ed865d0d9dad33a028cc2a2
Copying blob sha256:2a7c6912841852e1c853229bd6a6e02035b47a39aec2e98d5a2b0168a843d879
Copying blob sha256:747e67851ee5fae34759ef37ad7aa7fc1a3f547a47d949ba03fcf6a8aa391146
Copying blob sha256:0217b8cca4864fe2a874053cae58c1d3d195dc5763fb081b1939e241c4f58ed3
Copying blob sha256:449e432369550bb7d8e8d7424208c98b20e2fa419c885b5786523597afe613f1
Copying blob sha256:b6f423348fcd82b9ce715e06704d4ab65f5a7ae41ddc2c4fff8806a66c57ee93
Copying blob sha256:0217b8cca4864fe2a874053cae58c1d3d195dc5763fb081b1939e241c4f58ed3
Copying blob sha256:b6f423348fcd82b9ce715e06704d4ab65f5a7ae41ddc2c4fff8806a66c57ee93
Copying config sha256:c5318a40be88ede4e70c8c11f552a765c1c8aa5965ebd428da0b4766c2546968
Writing manifest to image destination
Storing signatures
c5318a40be88ede4e70c8c11f552a765c1c8aa5965ebd428da0b4766c2546968

Verify pull

podman images

REPOSITORY                                                   TAG         IMAGE ID      CREATED      SIZE
us.icr.io/demo_time/hello_world_nginx_june_2021  latest      c5318a40be88  2 weeks ago  36.8 MB

Create a container

Note that we're using the --detach CLI parameter to run it as a daemon and the --publish CLI parameter to map the container's internal port 443 to port 8443 on the host.

podman run --detach --publish 8443:443 us.icr.io/demo_time/hello_world_nginx_june_2021

1ac8b1b735d9c1407a143e09f71a86d39ed27b12777a4c2425f1196ae21b9f50

Verify running container

podman ps

CONTAINER ID  IMAGE                                                               COMMAND               CREATED         STATUS             PORTS                  NAMES
1ac8b1b735d9  us.icr.io/demo_time/hello_world_nginx_june_2021:latest  nginx -g daemon o...  26 seconds ago  Up 26 seconds ago  0.0.0.0:8443->443/tcp  heuristic_euclid

Validate HTTPS listener

netstat -an | grep 8443

tcp46      0      0  *.8443                 *.*                    LISTEN     

Validate HTTPS endpoint

openssl s_client -connect localhost:8443 </dev/null

...

SSL handshake has read 2262 bytes and written 289 bytes
---
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384
Server public key is 4096 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
    Protocol  : TLSv1.2
    Cipher    : ECDHE-RSA-AES256-GCM-SHA384
...

Test Nginx from the CLI

curl --insecure https://localhost:8443

- Note that we use the --insecure CLI parameter because Nginx is presenting a self-signed SSL certificate that cURL won't automagically trust

<html>
  <head>
    <title>Hello World</title>
  </head>
  <body>
    <div class="info">
      <p>
        <h2>
          <span>Welcome to IBM Hyper Protect ...</span>
        </h2>
      </p>
      <p>
        <h2>
          <span>Message of the Day .... Drink More Herbal Tea!!</span>
        </h2>
      </p>
      <p>
        <h2>
          <span>( and, of course, Hello World! )</span>
        </h2>
      </p>
    </div>
  </body>
</html>

Test Nginx from a browser

- Note that I'm using Firefox as Chrome has decided that it's just too secure to allow self-signed certificates 😁





Stop the container

podman stop 1ac8b1b735d9

ERRO[7790] accept tcp [::]:8443: use of closed network connection 
1ac8b1b735d9

Remove the container

podman rm 1ac8b1b735d9

1ac8b1b735d9

Remove the image

podman rmi us.icr.io/demo_time/hello_world_nginx_june_2021:latest

Untagged: us.icr.io/demo_time/hello_world_nginx_june_2021:latest
Deleted: c5318a40be88ede4e70c8c11f552a765c1c8aa5965ebd428da0b4766c2546968

Note to self - use kubectl to query images in a pod or deployment

In both cases, we use JSON ... For a deployment, we can do this: - kubectl get deployment foobar --namespace snafu --output jsonpath="{...