Wednesday 30 December 2020

Microsoft Visual Studio Code and Go - compiler says "No" - "main redeclared in this block" -

So I had a bit of a face-palm moment this afternoon, whilst discussing how to configure Microsoft Visual Studio Code ( aka VS Code ) with Go on macOS 11 Big Sur.

Having updated Go to the most recent version: -

go version

go version go1.15.6 darwin/amd64

I was trying, and failing, to run a newly created Hello World Go program within VS Code itself: -

hello-world.go 

package main

import "fmt"

func main() {
	fmt.Printf("hello, world\n")
}

However, VS Code just returns: -

# hello
/Users/hayd/go/src/hello/hello.go:5:6: main redeclared in this block
previous declaration at /Users/hayd/go/src/hello/hello-world.go:5:6
Build process exiting with code: 2 signal: null

within the Debug Console and, within the Problems perspective, I see: -


Now it SHOULD be obvious .....

But I'm obviously getting old ....

Can you see what I'm doing wrong ? I bet you can ....

Having validated that the code ran OK from the Terminal: -

cd $GOPATH/src/hello

go run hello-world.go 

hello, world

I again missed the obvious: -

ls -al

total 16
drwxr-xr-x  5 hayd  staff  160 30 Dec 17:27 .
drwxr-xr-x  9 hayd  staff  288 11 Sep 10:50 ..
-rw-r--r--  1 hayd  staff   74 30 Dec 17:27 hello-world.go
-rw-r--r--  1 hayd  staff   74 20 Feb  2020 hello.go
drwxr-xr-x  2 hayd  staff   64 20 Feb  2020 vendor

cat hello.go

package main

import "fmt"

func main() {
fmt.Printf("hello, world\n")
}

cat hello-world.go

package main

import "fmt"

func main() {
fmt.Printf("hello, world\n")
}

To put you, and me, out of our collective miseries, my mistake was ....

HAVING TWO SOURCE FILES IN THE SAME FOLDER, BOTH OF WHICH WERE DEFINING THE MAIN FUNCTION !!!!

Exactly as the VS Code errors were telling me: -

other declaration of main

main redeclared in this block
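
Go compiles every .go file in a folder as part of the same package, so two files that each declare func main() will always collide. For what it's worth, a minimal sketch of keeping both programs, assuming the classic GOPATH layout used above, is to give each one its own folder: -

mkdir $GOPATH/src/hello-world

mv $GOPATH/src/hello/hello-world.go $GOPATH/src/hello-world/

go run $GOPATH/src/hello-world/hello-world.go

In my case, though, the second file was surplus to requirements ....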

Yep, once I nuked the second instance: -

rm hello-world.go

VS Code was happy: -


and the remaining hello.go file ran happily: -


Can you say "Doofus" ? I bet you can ......

Tuesday 22 December 2020

Kubernetes on IBM Z - Flannel says "No"

I hit an interesting issue with a K8s 1.19.2 cluster running across a pair of Ubuntu containers, which were themselves running on an IBM Secure Service Container (SSC) LPAR on an IBM Z box in IBM's cloud.

One of my colleagues had just upgraded the SSC software on that particular LPAR, known as Hosting Appliance, and I was performing some post-upgrade checks.

Having set the KUBECONFIG variable: -

export KUBECONFIG=~/davehay_k8s.conf 

I checked the running pods: -

kubectl get pods --all-namespaces

NAMESPACE          NAME                                                  READY   STATUS      RESTARTS   AGE
default            hello-world-nginx-74bbbf57b4-8kzpb                    0/1     Error       0          25d
default            hello-world-nginx-deploy-to-cluster-dc8x2-pod-8fqzm   0/8     Completed   0          25d
default            hello-world-nginx-source-to-image-b2t7r-pod-l8mmm     0/2     Completed   0          25d
kube-system        coredns-f9fd979d6-tbp67                               0/1     Error       0          26d
kube-system        coredns-f9fd979d6-v9thn                               0/1     Error       0          26d
kube-system        etcd-b23976de6423                                     1/1     Running     1          26d
kube-system        kube-apiserver-b23976de6423                           1/1     Running     1          26d
kube-system        kube-controller-manager-b23976de6423                  1/1     Running     1          26d
kube-system        kube-proxy-cq5sg                                      1/1     Running     1          26d
kube-system        kube-proxy-qfg6v                                      1/1     Running     1          26d
kube-system        kube-scheduler-b23976de6423                           1/1     Running     1          26d
tekton-pipelines   tekton-pipelines-controller-587569588b-jv6hc          0/1     Error       0          25d
tekton-pipelines   tekton-pipelines-webhook-655cf7f8bb-nhlld             0/1     Error       0          25d

and noticed that five pods, including the CoreDNS and Tekton Pipelines components, were all in Error.

Initially, I tried to simply remove the erroring pods: -

kubectl delete pod coredns-f9fd979d6-v9thn --namespace kube-system

kubectl delete pod coredns-f9fd979d6-tbp67 --namespace kube-system

kubectl delete pod tekton-pipelines-controller-587569588b-jv6hc --namespace tekton-pipelines

kubectl delete pod tekton-pipelines-webhook-655cf7f8bb-nhlld --namespace tekton-pipelines

but that didn't seem to help much: -

kubectl get pods --all-namespaces

NAMESPACE          NAME                                                  READY   STATUS              RESTARTS   AGE
default            hello-world-nginx-deploy-to-cluster-dc8x2-pod-8fqzm   0/8     Completed           0          25d
default            hello-world-nginx-source-to-image-b2t7r-pod-l8mmm     0/2     Completed           0          25d
kube-system        coredns-f9fd979d6-xl7nx                               0/1     ContainerCreating   0          7m35s
kube-system        coredns-f9fd979d6-zd62l                               0/1     ContainerCreating   0          7m21s
kube-system        etcd-b23976de6423                                     1/1     Running             1          26d
kube-system        kube-apiserver-b23976de6423                           1/1     Running             1          26d
kube-system        kube-controller-manager-b23976de6423                  1/1     Running             1          26d
kube-system        kube-proxy-cq5sg                                      1/1     Running             1          26d
kube-system        kube-proxy-qfg6v                                      1/1     Running             1          26d
kube-system        kube-scheduler-b23976de6423                           1/1     Running             1          26d
tekton-pipelines   tekton-pipelines-controller-587569588b-mm9d9          0/1     ContainerCreating   0          9m49s
tekton-pipelines   tekton-pipelines-webhook-655cf7f8bb-h772d             0/1     ContainerCreating   0          9m32s

I dug into the health of one of the CoreDNS pods: -

kubectl describe pod coredns-f9fd979d6-zd62l --namespace kube-system

which, in part, showed: -

Type     Reason                  Age                     From               Message
----     ------                  ----                    ----               -------
Normal   Scheduled               4m17s                   default-scheduler  Successfully assigned kube-system/coredns-f9fd979d6-zd62l to b23976de6423
Warning  FailedCreatePodSandBox  4m16s                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "2afd79ad7a4aa926050bdb72affc567949f5dba1f07803020bd64bcbfe2de27b" network for pod "coredns-f9fd979d6-zd62l": networkPlugin cni failed to set up pod "coredns-f9fd979d6-zd62l_kube-system" network: open /run/flannel/subnet.env: no such file or directory
Warning  FailedCreatePodSandBox  4m14s                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "8c628d3e6851acadc25fcd4a4121bd6bbfa6638557a91464fbd724c98bfec40b" network for pod "coredns-f9fd979d6-zd62l": networkPlugin cni failed to set up pod "coredns-f9fd979d6-zd62l_kube-system" network: open /run/flannel/subnet.env: no such file or directory
Warning  FailedCreatePodSandBox  4m12s                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "ac5787fc3163e5216feceedbaaa16862ffea0e79d8ffc70951a531c625bd5424" network for pod "coredns-f9fd979d6-zd62l": networkPlugin cni failed to set up pod "coredns-f9fd979d6-zd62l_kube-system" network: open /run/flannel/subnet.env: no such file or directory
Warning  FailedCreatePodSandBox  4m10s                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "115475e415ad5a74442639f3731a050608d0409191486a518e0b62d5ffde1756" network for pod "coredns-f9fd979d6-zd62l": networkPlugin cni failed to set up pod "coredns-f9fd979d6-zd62l_kube-system" network: open /run/flannel/subnet.env: no such file or directory
Warning  FailedCreatePodSandBox  4m8s                    kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "2b6c81b54301793d87d0e426a35b44c41f44ce9f768d0f85cc89dbb7391baa5b" network for pod "coredns-f9fd979d6-zd62l": networkPlugin cni failed to set up pod "coredns-f9fd979d6-zd62l_kube-system" network: open /run/flannel/subnet.env: no such file or directory
Warning  FailedCreatePodSandBox  4m6s                    kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "9f8279ab67680e7b14d43d4ea109c0440527fccf1d2d06f3737e0c5ff38c9b82" network for pod "coredns-f9fd979d6-zd62l": networkPlugin cni failed to set up pod "coredns-f9fd979d6-zd62l_kube-system" network: open /run/flannel/subnet.env: no such file or directory
Warning  FailedCreatePodSandBox  4m4s                    kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "6788f48b5408ced802f77111e8b1f2968c4368228996e2fb946375638b8ca473" network for pod "coredns-f9fd979d6-zd62l": networkPlugin cni failed to set up pod "coredns-f9fd979d6-zd62l_kube-system" network: open /run/flannel/subnet.env: no such file or directory
Warning  FailedCreatePodSandBox  4m2s                    kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "c1daa71473cdadfeb187962b918902890bfd90981a96907fdc60b2937cc3ece4" network for pod "coredns-f9fd979d6-zd62l": networkPlugin cni failed to set up pod "coredns-f9fd979d6-zd62l_kube-system" network: open /run/flannel/subnet.env: no such file or directory
Warning  FailedCreatePodSandBox  4m                      kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "bee12244bd8e1070633226913f94cb0faae6de820b0f745fd905308b92a22b0d" network for pod "coredns-f9fd979d6-zd62l": networkPlugin cni failed to set up pod "coredns-f9fd979d6-zd62l_kube-system" network: open /run/flannel/subnet.env: no such file or directory
Normal   SandboxChanged          3m53s (x12 over 4m15s)  kubelet            Pod sandbox changed, it will be killed and re-created.
Warning  FailedCreatePodSandBox  3m52s (x4 over 3m58s)   kubelet            (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "3a0c18424d9c640e3d33467866852801982bd07fc919a323e290aae6852f7d04" network for pod "coredns-f9fd979d6-zd62l": networkPlugin cni failed to set up pod "coredns-f9fd979d6-zd62l_kube-system" network: open /run/flannel/subnet.env: no such file or directory
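
The recurring complaint is the missing /run/flannel/subnet.env, the file that the flanneld daemon writes on each node at startup, and which the Flannel CNI plugin then reads. As a quick sanity check, assuming shell access to the affected node, one might run: -

cat /run/flannel/subnet.env

On a healthy node, this shows the FLANNEL_NETWORK / FLANNEL_SUBNET settings that the CNI plugin needs; here, the file simply wasn't there.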

Working on the principle that the problem MIGHT be with the Flannel networking plugin, mainly because no Flannel pods appeared at all in the pod listings above, I redeployed it: -

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml

podsecuritypolicy.policy/psp.flannel.unprivileged created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
clusterrole.rbac.authorization.k8s.io/flannel created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

which made a positive difference: -

kubectl get pods --all-namespaces

NAMESPACE          NAME                                                  READY   STATUS              RESTARTS   AGE
default            hello-world-nginx-deploy-to-cluster-dc8x2-pod-8fqzm   0/8     Completed           0          25d
default            hello-world-nginx-source-to-image-b2t7r-pod-l8mmm     0/2     Completed           0          25d
kube-system        coredns-f9fd979d6-xl7nx                               0/1     ContainerCreating   0          9m5s
kube-system        coredns-f9fd979d6-zd62l                               0/1     ContainerCreating   0          8m51s
kube-system        etcd-b23976de6423                                     1/1     Running             1          26d
kube-system        kube-apiserver-b23976de6423                           1/1     Running             1          26d
kube-system        kube-controller-manager-b23976de6423                  1/1     Running             1          26d
kube-system        kube-flannel-ds-s390x-ttl4n                           1/1     Running             0          4s
kube-system        kube-flannel-ds-s390x-wpx2h                           1/1     Running             0          4s
kube-system        kube-proxy-cq5sg                                      1/1     Running             1          26d
kube-system        kube-proxy-qfg6v                                      1/1     Running             1          26d
kube-system        kube-scheduler-b23976de6423                           1/1     Running             1          26d
tekton-pipelines   tekton-pipelines-controller-587569588b-mm9d9          0/1     ContainerCreating   0          11m
tekton-pipelines   tekton-pipelines-webhook-655cf7f8bb-h772d             0/1     ContainerCreating   0          11m

kubectl get pods --all-namespaces

NAMESPACE          NAME                                                  READY   STATUS      RESTARTS   AGE
default            hello-world-nginx-deploy-to-cluster-dc8x2-pod-8fqzm   0/8     Completed   0          25d
default            hello-world-nginx-source-to-image-b2t7r-pod-l8mmm     0/2     Completed   0          25d
kube-system        coredns-f9fd979d6-xl7nx                               1/1     Running     0          9m56s
kube-system        coredns-f9fd979d6-zd62l                               1/1     Running     0          9m42s
kube-system        etcd-b23976de6423                                     1/1     Running     1          26d
kube-system        kube-apiserver-b23976de6423                           1/1     Running     1          26d
kube-system        kube-controller-manager-b23976de6423                  1/1     Running     1          26d
kube-system        kube-flannel-ds-s390x-ttl4n                           1/1     Running     0          55s
kube-system        kube-flannel-ds-s390x-wpx2h                           1/1     Running     0          55s
kube-system        kube-proxy-cq5sg                                      1/1     Running     1          26d
kube-system        kube-proxy-qfg6v                                      1/1     Running     1          26d
kube-system        kube-scheduler-b23976de6423                           1/1     Running     1          26d
tekton-pipelines   tekton-pipelines-controller-587569588b-mm9d9          1/1     Running     0          12m
tekton-pipelines   tekton-pipelines-webhook-655cf7f8bb-h772d             1/1     Running     0          11m

I then re-ran the script that creates a Tekton deployment: -

# Create the tutorial-service Service Account

kubectl apply -f ./serviceaccounts/create_tutorial_service_account.yaml

# Create the clusterrole and clusterrolebinding

kubectl apply -f ./roles/create_cluster_role.yaml

# Create the Tekton Resource aligned to the Git repository

kubectl apply -f ./resources/git.yaml

# Create the Tekton Task that creates the Docker image from the GitHub repository

kubectl apply -f ./tasks/source-to-image.yaml

# Create the Tekton Task that deploys the built image, using kubectl

kubectl apply -f ./tasks/deploy-using-kubectl.yaml

# Create the Tekton Pipeline that runs the two tasks

kubectl apply -f ./pipelines/build-and-deploy-pipeline.yaml

# Create the Tekton PipelineRun that runs the Pipeline

kubectl apply -f ./runs/pipeline-run.yaml

# Display the Pipeline logs

tkn pipelines logs

and checked the resulting deployment: -

kubectl get deployments

NAME                READY   UP-TO-DATE   AVAILABLE   AGE
hello-world-nginx   1/1     1            1           17m

service: -

kubectl get services

NAME                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
hello-world-nginx   NodePort    10.97.37.211   <none>        80:30674/TCP   17m
kubernetes          ClusterIP   10.96.0.1      <none>        443/TCP        26d

and nodes: -

kubectl get nodes

NAME           STATUS   ROLES    AGE   VERSION
68bc83cf0d09   Ready    <none>   26d   v1.19.2
b23976de6423   Ready    master   26d   v1.19.2

I then used cURL to validate the Nginx pod: -

curl http://192.168.32.142:30674

<html>
  <head>
    <title>Hello World</title>
  </head>
  <body>
    <div class="info">
      <p>
        <h2>
          <span>Welcome to IBM Cloud Kubernetes Service with Hyper Protect ...</span>
        </h2>
      </p>
      <p>
        <h2>
          <span>and your first Docker application built by Tekton Pipelines and Triggers ...</span>
        </h2>
      </p>
      <p>
        <h2>
          <span>Message of the Day .... Drink More Herbal Tea!!</span>
        </h2>
      </p>
      <p>
        <h2>
          <span>( and, of course, Hello World! )</span>
        </h2>
      </p>
    </div>
  </body>
</html>

So we're now good to go ......

Sunday 20 December 2020

Today I learned ... when is Bash not Bash ?

Having written Bash ( Bourne Again Shell ) scripts for the longest time, I couldn't quite work out why some things that worked on my Mac did NOT work on my colleague's Mac, even though he was using my script .....

TL;DR: the major difference was the shell that each of us was using.

For me, it's Bash all the way ...

echo $SHELL

/bin/bash

whereas, for him, that same command returned: -

/bin/zsh

Now I typically write my scripts the same way, with line 1 reading: -

#!/bin/bash

which works for me ....

However, he was essentially trying to run a Bash script from within Zsh, compounded by macOS only shipping an older default version of Bash.

When he ran /bin/bash to switch to Bash rather than ZSH, all was well ....

Thankfully, the internet came to our rescue: -


which says, in part: -

If your scripts start with the line #!/bin/bash they will still be run using bash, even if your default shell is zsh.

I've found the syntax of zsh really close to the one of bash, and I did not pay attention if there was really some incompatibilities. I switched 6 years ago from bash to zsh seamlessly.

and, even more importantly: -

Hard-coding the path to the shell is bad advice, even if it's done frequently. You should use #!/usr/bin/env bash instead, especially on macOS, where the default bash is severely outdated and new versions are virtually always installed into a different path.

Following that second piece of advice, I changed my script's first line to read: -

#!/usr/bin/env bash

and, quelle surprise, it just worked for my colleague, from within a ZSH session.
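
For anyone wanting to double-check which interpreter is actually running a script, here's a minimal sketch ( the file name whichbash.sh is purely illustrative ): -

#!/usr/bin/env bash

# $BASH holds the full path of the Bash binary interpreting this script,
# and $BASH_VERSION its version
echo "Interpreter : $BASH"
echo "Version     : $BASH_VERSION"

On a stock Mac, the old #!/bin/bash shebang reports the venerable Bash 3.2, whereas #!/usr/bin/env bash picks up whichever newer Bash is first on the PATH.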

As I say, TIL !

RESTing rather than wrestling with Jenkins

So this is not quite Today I Learned, as I discovered this a few days back, but that doesn't work quite so well as TIL ....

Jenkins has a rather nifty REST API, where all/most of the common functions can be invoked via cURL commands.

In order to demonstrate this, I'd previously installed Jenkins onto a Virtual Server running Ubuntu, and, having created a basic Hello World Job, ran through the steps to manage the lifecycle of the job via my Mac's CLI.

Firstly, I needed to create an Access Token, using the Jenkins GUI: -

http://192.168.1.10:8080/user/hayd/configure

This generates a nice long hexadecimal string of gobbledegook, which I then use via my CLI invocations: -

Set the environment variables

export USER="hayd"

export ACCESS_TOKEN="11eec6f237adbdc9c61c15b27188d64028"

export JENKINS="http://192.168.1.10:8080"

List the available Jobs

curl --silent --request GET --user $USER:$ACCESS_TOKEN $JENKINS/api/json|jq '.jobs[].name'

of which there's one: -

"HelloWorld"

With that Job name also set as an environment variable: -

export JOB="HelloWorld"

I can then retrieve the Job's configuration: -

curl --silent --request GET --user $USER:$ACCESS_TOKEN $JENKINS/job/$JOB/config.xml --output ~/$(echo $JOB).xml

which returns an XML document: -

ls -al ~/$(echo $JOB).xml

-rw-r--r--  1 hayd  staff  694 20 Dec 14:15 /Users/hayd/HelloWorld.xml

which we can inspect: -

cat ~/$(echo $JOB).xml

<?xml version='1.1' encoding='UTF-8'?>
<project>
  <description>Say Hello</description>
  <keepDependencies>false</keepDependencies>
  <properties/>
  <scm class="hudson.scm.NullSCM"/>
  <canRoam>true</canRoam>
  <disabled>false</disabled>
  <blockBuildWhenDownstreamBuilding>false</blockBuildWhenDownstreamBuilding>
  <blockBuildWhenUpstreamBuilding>false</blockBuildWhenUpstreamBuilding>
  <triggers/>
  <concurrentBuild>false</concurrentBuild>
  <builders>
    <hudson.tasks.Shell>
      <command>#!/bin/bash

export GREETING=&quot;Hello World!&quot;
echo $GREETING</command>
      <configuredLocalRules/>
    </hudson.tasks.Shell>
  </builders>
  <publishers/>
  <buildWrappers/>
</project>

Definitely not the world's most exciting Jenkins Job ...

So let's nuke it ....

Delete the Job

curl --silent --request DELETE --user $USER:$ACCESS_TOKEN $JENKINS/job/$JOB/

This doesn't return anything .....

List the available Jobs

curl --silent --request GET --user $USER:$ACCESS_TOKEN $JENKINS/api/json|jq '.jobs[].name'

This doesn't return anything ..... because there's no longer anything to return ...

Ooops, didn't mean to delete it, where's the backup ?

Create a new job from the XML document

curl --silent --request POST --user $USER:$ACCESS_TOKEN --header 'Content-type: application/xml' $JENKINS/createItem?name=$(echo $JOB) --data @$(echo $JOB).xml

This doesn't return anything .....

List the available Jobs

curl --silent --request GET --user $USER:$ACCESS_TOKEN $JENKINS/api/json|jq '.jobs[].name'

"HelloWorld"

Phew!
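
As a bonus, the same API can trigger a build; a minimal sketch, assuming the Job takes no parameters ( parameterised Jobs use the buildWithParameters endpoint instead ): -

Build the Job

curl --silent --request POST --user $USER:$ACCESS_TOKEN $JENKINS/job/$JOB/build

Again, this doesn't return anything visible, but the build duly appears in the queue.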

TL;DR: almost all of the Jenkins GUI pages have their own REST API, as indicated by the link at the bottom of all/most pages: -


which leads to pages such as: -

http://192.168.1.10:8080/api/


which provides a useful set of suggestions e.g.

  •     Controlling the amount of data you fetch
  •     Create Job
  •     Copy Job
  •     Build Queue
  •     Load Statistics
  •     Restarting Jenkins

For yet more information, check out the Jenkins API documentation: -


and: -

Wednesday 16 December 2020

Connecting to IBM Db2 from SQuirreLSQL on macOS

This came up in a thread from an IBM colleague, and led me to have a tinker with a new-to-me tool, SQuirreLSQL.

Having downloaded and installed the app, making sure to select the IBM Db2 driver: -


and having spun up a Db2 container on one of my Docker boxes: -

docker run -itd --name mydb2 --privileged=true -p 50000:50000 -e LICENSE=accept -e DB2INST1_PASSWORD=p4ssw0rd -e DBNAME=testdb -v ~/db2:/database ibmcom/db2 

( this is a useful inspiration for Db2 on Docker )
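
The container takes a few minutes to initialise the instance and create the database so, before pointing any client at it, it's worth tailing the logs until setup finishes ( the exact wording may vary between image versions ): -

docker logs -f mydb2

# wait for a line along the lines of "Setup has completed" before connecting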

I realised that I also needed to download an IBM Db2 JDBC driver from: -


which resulted in: -

-rw-r--r--@   1 hayd  staff  8341739 16 Dec 10:46 v11.5.5_jdbc_sqlj.tar.gz

I unpacked this: -

tar xvzf ~/Downloads/v11.5.5_jdbc_sqlj.tar.gz -C /tmp

x jdbc_sqlj/
x jdbc_sqlj/license/
x jdbc_sqlj/license/UNIX/
x jdbc_sqlj/license/UNIX/jdbc4_LI_fr
x jdbc_sqlj/license/UNIX/jdbc4_LI_cs
x jdbc_sqlj/license/UNIX/jdbc4_LI_pl
x jdbc_sqlj/license/UNIX/jdbc4_LI_tr
x jdbc_sqlj/license/UNIX/jdbc4_LI_de
x jdbc_sqlj/license/UNIX/jdbc4_LI_zh_TW
x jdbc_sqlj/license/UNIX/jdbc4_LI_in
x jdbc_sqlj/license/UNIX/jdbc4_LI_es
x jdbc_sqlj/license/UNIX/jdbc4_LI_ja
x jdbc_sqlj/license/UNIX/jdbc4_LI_zh
x jdbc_sqlj/license/UNIX/jdbc4_LI_it
x jdbc_sqlj/license/UNIX/jdbc4_LI_pt
x jdbc_sqlj/license/UNIX/jdbc4_LI_en
x jdbc_sqlj/license/UNIX/jdbc4_LI_ko
x jdbc_sqlj/license/Windows/
x jdbc_sqlj/license/Windows/jdbc4_LI_tw.rtf
x jdbc_sqlj/license/Windows/jdbc4_LI_tr.rtf
x jdbc_sqlj/license/Windows/jdbc4_LI_kr.rtf
x jdbc_sqlj/license/Windows/jdbc4_LI_pl.rtf
x jdbc_sqlj/license/Windows/jdbc4_LI_cz.rtf
x jdbc_sqlj/license/Windows/jdbc4_LI_it.rtf
x jdbc_sqlj/license/Windows/jdbc4_LI_br.rtf
x jdbc_sqlj/license/Windows/jdbc4_LI_en.rtf
x jdbc_sqlj/license/Windows/jdbc4_LI_es.rtf
x jdbc_sqlj/license/Windows/jdbc4_LI_cn.rtf
x jdbc_sqlj/license/Windows/jdbc4_LI_fr.rtf
x jdbc_sqlj/license/Windows/jdbc4_LI_de.rtf
x jdbc_sqlj/license/Windows/jdbc4_LI_jp.rtf
x jdbc_sqlj/license/Windows/jdbc4_LI_in.rtf
x jdbc_sqlj/license/jdbc4_notices.txt
x jdbc_sqlj/license/jdbc4_notices.rtf
x jdbc_sqlj/db2_db2driver_for_jdbc_sqlj.zip

unzip /tmp/jdbc_sqlj/db2_db2driver_for_jdbc_sqlj.zip -d /tmp

Archive:  /tmp/jdbc_sqlj/db2_db2driver_for_jdbc_sqlj.zip
  inflating: /tmp/db2jcc4.jar        
  inflating: /tmp/jdbc4_LI_br.rtf    
  inflating: /tmp/jdbc4_LI_cn.rtf    
  inflating: /tmp/jdbc4_LI_cs        
  inflating: /tmp/jdbc4_LI_cz.rtf    
  inflating: /tmp/jdbc4_LI_de        
  inflating: /tmp/jdbc4_LI_de.rtf    
  inflating: /tmp/jdbc4_LI_en        
  inflating: /tmp/jdbc4_LI_en.rtf    
  inflating: /tmp/jdbc4_LI_es        
  inflating: /tmp/jdbc4_LI_es.rtf    
  inflating: /tmp/jdbc4_LI_fr        
  inflating: /tmp/jdbc4_LI_fr.rtf    
  inflating: /tmp/jdbc4_LI_it        
  inflating: /tmp/jdbc4_LI_it.rtf    
  inflating: /tmp/jdbc4_LI_ja        
  inflating: /tmp/jdbc4_LI_jp.rtf    
  inflating: /tmp/jdbc4_LI_ko        
  inflating: /tmp/jdbc4_LI_kr.rtf    
  inflating: /tmp/jdbc4_LI_pl        
  inflating: /tmp/jdbc4_LI_pl.rtf    
  inflating: /tmp/jdbc4_LI_pt        
  inflating: /tmp/jdbc4_LI_tr        
  inflating: /tmp/jdbc4_LI_tr.rtf    
  inflating: /tmp/jdbc4_LI_tw.rtf    
  inflating: /tmp/jdbc4_LI_zh        
  inflating: /tmp/jdbc4_LI_zh_TW     
  inflating: /tmp/jdbc4_REDIST.txt   
  inflating: /tmp/sqlj4.zip          
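
For reference, here are the driver and connection settings I then needed; the values below match the Docker container above, so adjust host, port and credentials to taste: -

Driver class : com.ibm.db2.jcc.DB2Driver
Driver JAR   : /tmp/db2jcc4.jar
JDBC URL     : jdbc:db2://localhost:50000/testdb
User         : db2inst1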

and then added a new JDBC driver to the SQuirreLSQL client: -



and connect to my Db2 on Docker instance: -


and query my database: -


Nice!

Friday 11 December 2020

Want to learn IBM Cloudant ? Check out YouTube ....

Further tinkering with Cloudant and couchimport etc. today, and found this: -

Cloudant Course

This is an eighteen-part training course designed to introduce new users to the IBM Cloudant Database-as-a-Service. It begins with an introduction to the database, its open-source heritage and how it differs from traditional relational data stores. By the end of the course, you will have been introduced to the database's API, bulk operations, querying, aggregation, replication and much more. Some course parts include practical exercises which can be completed on IBM's free Cloudant Lite service or using a self-hosted Apache CouchDB installation.

This course is ideal for developers looking to get started with the IBM Cloudant service, or with Apache CouchDB. Ideally, course participants will have a technical grounding and be familiar with HTTP and JSON.

Each video has full closed captions and links to a textual version of the course.

Thursday 3 December 2020

TIL - how to print from Notes in iOS and iPadOS 14.2

This has baffled me since I upgraded to iOS 14 a while back ...

As an avid user of the Notes app across my various Apple devices - iPhone, iPad, Watch and Mac - I was struggling to work out how to print ( IKR? ) from Notes ....

Google had the answer, Google is my friend ...

Specifically, this: -

How to print Notes on iPhone and iPad


Thanks iMore, you rule!

Thursday 26 November 2020

Back in the day - PuTTY and Windows and RDP

I had an interesting tinker this PM, harking back to a client engagement where we were using PuTTY on Windows to access a bunch of AIX boxen.

In this case, a colleague was running PuTTY on a Windows boxen, via Microsoft's Remote Desktop client, and was trying to work out how to paste text from macOS into the target Unix boxen, via PuTTY.

I set up a Windows 10 VM on one of our hypervisors, accessed it via RDP, and downloaded/installed PuTTY.

Once I'd connected to an Ubuntu boxen from the PuTTY session, I set out to test the options for copy/paste.

Having proven that I could copy text from the Mac using [cmd][c] and paste it into Notepad.exe on the Windows boxen, using [ctrl][v], I tried/failed to do the same within the PuTTY session.

No matter what I did, paste failed to ... paste, in terms of invoking it via a keyboard shortcut.

Whilst the "right mouse button" action worked for me ( I've got the Apple Magic Mouse 2, so there are actually no buttons - the entire mouse is a button ! ), the keyboard failed ....

I dug around in PuTTY's settings for a while, and then found this: -



Once I changed Ctrl + Shift + [C,V] from No action to System Clipboard: -



it just worked.

In other words, I could, for example, go into Visual Studio Code (VSCode) on my Mac, use [cmd] [a] to select some text e.g. ps auxw and then [cmd] [c] to copy it to the clipboard.

I could then toggle back into the Remote Desktop session, using [cmd] [tab], use [option] [tab] within the RDP session to toggle into the PuTTY session, and hit [shift] [control] [v] to paste it into the PuTTY session: -



So a fair few keystrokes to remember ... but .. SUCCESS!

For the record, I'm running: -


and the remote Windows 10 box has: -



Monday 23 November 2020

Tinkering with Spotlight disk indexing in macOS 11 Big Sur

Having upgraded to Big Sur last week, I'd noticed that Spotlight still hadn't completed disk indexing after ~7 days.

I dug in further using Terminal: -

sudo mdutil -E /

/:
Error: Index is already changing state.  Please try again in a moment.

sudo mdutil -i on /

Password:
/:
Indexing enabled. 


sudo mdutil -i off /

/:
Error: Index is already changing state.  Please try again in a moment.

sudo mdutil -s /

/:
Error: unexpected indexing state.  kMDConfigSearchLevelTransitioning

None of this looked particularly good ....

Thankfully, a colleague showed me how to turn indexing off: -

sudo mdutil -a -i off

/:
2020-11-23 11:12:08.988 mdutil[75847:1674629] mdutil disabling Spotlight: / -> kMDConfigSearchLevelFSSearchOnly
Indexing disabled.
/System/Volumes/Data:
2020-11-23 11:12:09.061 mdutil[75847:1674629] mdutil disabling Spotlight: /System/Volumes/Data -> kMDConfigSearchLevelFSSearchOnly
Indexing disabled.
/Volumes/Backups of Dave’s MacBook Pro:
2020-11-23 11:12:10.963 mdutil[75847:1674629] mdutil disabling Spotlight: /Volumes/Backups of Dave’s MacBook Pro -> kMDConfigSearchLevelFSSearchOnly
Indexing enabled. 

and on again: -

sudo mdutil -a -i on

/:
Indexing enabled. 
/System/Volumes/Data:
Indexing enabled. 
/Volumes/Backups of Dave’s MacBook Pro:
Indexing enabled. 

and, now, things are looking better ....

sudo mdutil -s /

Password:
/:
Indexing enabled. 

Spotlight is still eating battery: -



but .... we'll see ....


Saturday 21 November 2020

Synology NAS via Ethernet - more fun n' games

Following on from an earlier ( wow, two years ago ) post: -

Synology DS414 - From Megabits to Gigabits

I was talking with a colleague about the speed of the Ethernet between my Mac ( now a more modern 2018 MacBook Pro ) and my DS414.

I wanted to test, and demonstrate, the speed of the 1 Gb/s Ethernet connection between the two devices: -

MacBook Pro


Synology DS414


Or, via the CLI: -

MacBook Pro

ifconfig en8

en8: flags=8963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
options=6467<RXCSUM,TXCSUM,VLAN_MTU,TSO4,TSO6,CHANNEL_IO,PARTIAL_CSUM,ZEROINVERT_CSUM>
ether 00:e0:4c:68:03:70 
inet6 fe80::14a7:3984:6d54:14f3%en8 prefixlen 64 secured scopeid 0xb 
inet 192.168.1.21 netmask 0xffffff00 broadcast 192.168.1.255
nd6 options=201<PERFORMNUD,DAD>
media: autoselect (1000baseT <full-duplex>)
status: active

Synology DS414

ifconfig eth0

eth0      Link encap:Ethernet  HWaddr 00:11:32:25:58:91  
          inet addr:192.168.1.100  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::211:32ff:fe25:5891/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:90968788 errors:0 dropped:0 overruns:0 frame:0
          TX packets:13099066 errors:0 dropped:19 overruns:0 carrier:0
          collisions:0 txqueuelen:532 
          RX bytes:2857752864 (2.6 GiB)  TX bytes:36719514 (35.0 MiB)
          Interrupt:8 


but, to "prove" the performance between the two, this is what I did: -

Create a 1 GB file

time dd if=/dev/zero of=tstfile bs=1024 count=1024000

1024000+0 records in
1024000+0 records out
1048576000 bytes transferred in 4.880373 secs (214855709 bytes/sec)

real 0m4.886s
user 0m0.470s
sys 0m4.381s

Validate the file

ls -alh tstfile 

-rw-r--r--  1 hayd  staff   1.0G 21 Nov 13:38 tstfile

Upload the file to the NAS

scp -P 8822 -c aes256-cbc tstfile admin@diskstation:~

tstfile                                                                                                                                                                                              100% 1000MB  22.9MB/s   00:43

which, in part, shows an upload speed of ~23 MB/s - which ain't too shabby 
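
For a more direct measure of raw network throughput, without the disk and cipher overhead of scp, iperf3 is worth a look; a minimal sketch, assuming iperf3 is installed at both ends ( e.g. via a community package on the Synology, and via Homebrew on the Mac ): -

# On the NAS
iperf3 -s

# On the Mac
iperf3 -c 192.168.1.100

which should report something rather closer to the full 1 Gb/s line rate, given that the scp figure includes the cost of aes256-cbc encryption.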

Friday 20 November 2020

macOS 11 Big Sur and Kernel Extensions - down the rabbit hole I go ....

I've been having a few discussions with colleagues as we get to grips with the new macOS 11 Big Sur release, especially with regard to the slow evolution away from Kernel Extensions ( aka KExts ).

One particular thread led me here: -

How to configure Kernel Extension settings for Mac

and, specifically this: -

sudo sqlite3 /var/db/SystemPolicyConfiguration/KextPolicy

Password:

SQLite version 3.32.3 2020-06-18 14:16:19

Enter ".help" for usage hints.

sqlite> SELECT * FROM kext_policy;
QED4VVPZWA|com.logitech.manager.kernel.driver|1|Logitech Inc.|5
6HB5Y2QTA3|com.hp.kext.io.enabler.compound|1|HP Inc.|0
Z2SG5H3HC8|net.tunnelblick.tun|1|Jonathan Bullard|5
Z2SG5H3HC8|net.tunnelblick.tap|1|Jonathan Bullard|5
sqlite> ^D

Why did I not know this before ?

There's a whole SQLite database infrastructure inside my Mac ? Wow, who knew ?
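
For a slightly friendlier view, sqlite3's dot-commands can add column headings; a minimal sketch, assuming the kext_policy table follows its usual team_id / bundle_id / allowed / developer_name / flags layout: -

sqlite> .headers on
sqlite> .mode column
sqlite> SELECT team_id, bundle_id, allowed FROM kext_policy;

which makes it rather easier to see which Team ID approved which kext.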

A colleague then pointed out that macOS also has kextstat, which shows which kernel extensions are loaded, and, via this: -

kextstat | grep -v com.apple

Executing: /usr/bin/kmutil showloaded
No variant specified, falling back to release
Index Refs Address            Size       Wired      Name (Version) UUID <Linked Against>

the non-Apple extensions that are loaded or, in my case, NOT !

So, whilst the SQLite database has kexts from Logitech, HP and Tunnelblick listed, none appear to be loaded ...

Which is nice!

Friday 13 November 2020

Inspecting Kubernetes Worker nodes - a Work-in-Progress

I have a need to query a list of Kubernetes Worker Nodes, and ignore the Master Node.

This is definitely a W-I-P, but here's what I've got thus far: -

So we have a list of nodes: -

kubectl get nodes

NAME           STATUS   ROLES    AGE   VERSION
68bc83cf0d09   Ready    <none>   51d   v1.19.2
b23976de6423   Ready    master   51d   v1.19.2

of which I want the one that is NOT the Master.

So I do this: -

kubectl get nodes | awk 'NR>1' | grep -v master | awk '{print $1}'

which gives me this: -

68bc83cf0d09

so that I can do this: -

kubectl describe node 68bc83cf0d09 | grep -i internal

which gives me this: -

  InternalIP:  172.16.84.5

If I combine the two commands together: -

kubectl describe node `kubectl get nodes | awk 'NR>1' | grep -v master | awk '{print $1}'` | grep -i internal

I get what I need: -

  InternalIP:  172.16.84.5

Obviously, there are fifty-seven other ways to achieve the same, including using JSON and JQ: -

kubectl get node `kubectl get nodes | awk 'NR>1' | grep -v master | awk '{print $1}'` --output json | jq

so that I could then use JQ's select statement to find the internal IP .... but that's for another day.....
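
In the meantime, here's a minimal sketch of where that select might go, assuming the master carries the usual node-role.kubernetes.io/master label: -

kubectl get nodes --output json | jq -r '.items[] | select(.metadata.labels["node-role.kubernetes.io/master"] == null) | .status.addresses[] | select(.type == "InternalIP") | .address'

which prints the InternalIP of each non-master node in one go.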

Yet more fun and goodness with Cloudant and couchimport

One of my friends was wondering why couchimport was apparently working BUT not actually working ....

Running a test such as: -

cat cartoon.csv | couchimport --url https://<<SECRET>>-bluemix.cloudantnosqldb.appdomain.cloud --database cartoon

couchimport
-----------
 url         : "https://<<SECRET>>-bluemix.cloudantnosqldb.appdomain.cloud"
 database    : "cartoon"
 delimiter   : "\t"
 buffer      : 500
 parallelism : 1
 type        : "text"
-----------
  couchimport {"documents":0,"failed":8,"total":0,"totalfailed":8,"statusCodes":{"401":1},"latency":475} +0ms
  couchimport Import complete +0ms

In other words, it does something, but reports failed: 8 - and the "401" in statusCodes is the clue; each batch is being rejected as unauthorised.

I had to dig back into my memory AND into the docs to work out what was going on....

Specifically this: -



So, it's a case of "If your name's not down, you're not coming in" ....

If Cloudant or CouchDB ( from whence Cloudant came ) were running elsewhere, we could specify user/password credentials but, given that it's running as SaaS on the IBM Cloud, we need a better way ....

Once I realised ( remembered ) that, we were golden....

In essence, the "key" ( do you see what I did there? ) thing is to set an environment variable with an IBM Cloud API key: -

export IAM_API_KEY="<<TOP SECRET>>"

Here's the end-to-end walkthrough : -

Create data to be imported

vi cartoon.csv

id,givenName,familyName
1,Maggie,Simpson
2,Lisa,Simpson
3,Bart,Simpson
4,Homer,Simpson
5,Fred,Flintstone
6,Wilma,Flintstone
7,Barney,Rubble
8,Betty,Rubble

Set environment variables

export COUCH_URL="https://<<TOP SECRET>>-bluemix.cloudantnosqldb.appdomain.cloud"
export IAM_API_KEY="<<TOP SECRET>>"
export COUCH_DATABASE="cartoon"
export COUCH_DELIMITER=","

Generate Access Token

- This is a script that generates an ACCESS_TOKEN variable for my IBM Cloud API key

source ~/genAccessToken.sh
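
My actual script is specific to my account but, as a minimal sketch, it does something akin to the following, using IBM Cloud's documented IAM token endpoint: -

export ACCESS_TOKEN=$(curl -s -X POST "https://iam.cloud.ibm.com/identity/token" -H "Content-Type: application/x-www-form-urlencoded" -d "grant_type=urn:ibm:params:oauth:grant-type:apikey&apikey=$IAM_API_KEY" | jq -r .access_token)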

Create database

curl -s -k -X PUT -H 'Authorization: Bearer '"$ACCESS_TOKEN" $COUCH_URL/$COUCH_DATABASE | json_pp

{
   "ok" : true
}

Populate database

cat cartoon.csv | couchimport

couchimport
-----------
 url         : "https://<<SECRET>>-bluemix.cloudantnosqldb.appdomain.cloud"
 database    : "cartoon"
 delimiter   : ","
 buffer      : 500
 parallelism : 1
 type        : "text"
-----------
  couchimport {"documents":8,"failed":0,"total":8,"totalfailed":0,"statusCodes":{"201":1},"latency":381} +0ms
  couchimport Import complete +0ms

Create index

curl -s -k -X POST -H 'Authorization: Bearer '"$ACCESS_TOKEN" -H 'Content-type: application/json' $COUCH_URL/$COUCH_DATABASE/_index -d '{
   "index": {
      "fields": [
         "givenName"
      ]
   },
   "name": "givenName-json-index",
   "type": "json"
}'

Query database

curl -s -k -X POST -H 'Authorization: Bearer '"$ACCESS_TOKEN" -H 'Content-type: application/json' $COUCH_URL/$COUCH_DATABASE/_find -d '{
   "selector": {
      "$or": [
         {
            "givenName": "Maggie"
         },
         {
            "givenName": "Lisa"
         }
      ]
   },
   "fields": [
      "givenName",
      "familyName"
   ],
   "sort": [
      {
         "givenName": "asc"
      }
   ]
}'  | json_pp

{
   "bookmark" : "g2wAAAACaAJkAA5zdGFydGtleV9kb2NpZG0AAAAgNGI5YWZhMzZjNTBiNTg4ZTljMWFmMzUxZjQyNzViMGNoAmQACHN0YXJ0a2V5bAAAAAFtAAAABk1hZ2dpZWpq",
   "docs" : [
      {
         "givenName" : "Lisa",
         "familyName" : "Simpson"
      },
      {
         "givenName" : "Maggie",
         "familyName" : "Simpson"
      }
   ]
}

Can you say "Yay" ? I bet you can .....

Thursday 12 November 2020

Random weirdness with OpenSSL on Ubuntu 18.04.5

I hit an interesting problem today, whilst trying to create a self-signed certificate and private key: -

openssl req -subj '/C=GB/O=IBM/CN=david_hay.uk.ibm.com' -new -newkey rsa:4096 -x509 -sha256 -days 365 -nodes -out ~/nginx/nginx.crt -keyout ~/nginx/nginx.key

Can't load /root/.rnd into RNG
4396464178976:error:2406F079:random number generator:RAND_load_file:Cannot open file:../crypto/rand/randfile.c:88:Filename=/root/.rnd
Generating a RSA private key
........................++++
........................++++
writing new private key to '/root/nginx/nginx.key'
-----

on an Ubuntu box: -

lsb_release -a

No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.5 LTS
Release: 18.04
Codename: bionic

( actually it's an Ubuntu container running on an IBM Z box, via the Secure Service Container technology, but that's not the point of the story here ! )

I'd not seen that before ... but I noticed that the missing file was .rnd in my user's home directory - /root.

Taking a punt, I tried creating that file: -

touch ~/.rnd

and re-ran the openssl command: -

openssl req -subj '/C=GB/O=IBM/CN=david_hay.uk.ibm.com' -new -newkey rsa:4096 -x509 -sha256 -days 365 -nodes -out ~/nginx/nginx.crt -keyout ~/nginx/nginx.key

Generating a RSA private key
....................................................................++++
..++++
writing new private key to '/root/nginx/nginx.key'
-----

I'd previously run the same command on a different Ubuntu container: -

lsb_release -a

No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04 LTS
Release: 18.04
Codename: bionic

without similar issues.

Both are running the same version of openssl, namely: -

openssl version

OpenSSL 1.1.1  11 Sep 2018

Using this as a source: -


I used openssl to generate the .rnd file: -

openssl rand -out /root/.rnd -hex 256

and validated that I could still generate the key pair: -

openssl req -subj '/C=GB/O=IBM/CN=david_hay.uk.ibm.com' -new -newkey rsa:4096 -x509 -sha256 -days 365 -nodes -out ~/nginx/nginx.crt -keyout ~/nginx/nginx.key

Generating a RSA private key
.....................................................................++++
..................++++
writing new private key to '/root/nginx/nginx.key'
-----

Weird !
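
For what it's worth, OpenSSL also honours a RANDFILE environment variable when deciding where to look for that seed file, so another workaround, a sketch rather than a recommendation, would be to point it somewhere writable: -

export RANDFILE=/tmp/.rnd

openssl rand -out $RANDFILE -hex 256

before re-running the openssl req command.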

Visual Studio Code - Wow 🙀

Why did I not know that I can merely hit [cmd] [p] to bring up a search box allowing me to search my project e.g. a repo cloned from GitHub...