Wednesday, 30 December 2020

Microsoft Visual Studio Code and Go - compiler says "No" - "main redeclared in this block"

So I had a bit of a face-palm moment this afternoon, whilst discussing how to configure Microsoft Visual Studio Code (aka VS Code) with Go on macOS 11 Big Sur.

Having updated Go to the most recent version: -

go version

go version go1.15.6 darwin/amd64

I was trying, and failing, to run a newly created Hello World Go program within VS Code itself: -

hello-world.go 

package main

import "fmt"

func main() {
	fmt.Printf("hello, world\n")
}

However, VS Code just returns: -

# hello
/Users/hayd/go/src/hello/hello.go:5:6: main redeclared in this block
previous declaration at /Users/hayd/go/src/hello/hello-world.go:5:6
Build process exiting with code: 2 signal: null

within the Debug Console and, within the Problems perspective, I see: -


Now it **SHOULD** be obvious .....

But I'm obviously getting old ....

Can you see what I'm doing wrong ? I bet you can ....

Having validated that the program ran OK from the Terminal: -

cd $GOPATH/src/hello

go run hello-world.go 

hello, world

I again missed the obvious: -

ls -al

total 16
drwxr-xr-x  5 hayd  staff  160 30 Dec 17:27 .
drwxr-xr-x  9 hayd  staff  288 11 Sep 10:50 ..
-rw-r--r--  1 hayd  staff   74 30 Dec 17:27 hello-world.go
-rw-r--r--  1 hayd  staff   74 20 Feb  2020 hello.go
drwxr-xr-x  2 hayd  staff   64 20 Feb  2020 vendor

cat hello.go

package main

import "fmt"

func main() {
	fmt.Printf("hello, world\n")
}

cat hello-world.go

package main

import "fmt"

func main() {
	fmt.Printf("hello, world\n")
}

To put you, and me, out of our collective miseries, my mistake was ....

HAVING TWO SOURCE FILES IN THE SAME FOLDER - AND HENCE IN THE SAME PACKAGE - BOTH OF WHICH DECLARE THE MAIN FUNCTION !!!!

Exactly as the VS Code errors were telling me: -

other declaration of main

main redeclared in this block
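
In Go, every source file in a given folder belongs to the same package, and a package can only declare func main once. So the fix is either to delete one of the duplicates, or to give each program its own folder; a minimal sketch of the latter (the hello-world folder name is purely illustrative): -

mkdir hello-world
mv hello-world.go hello-world/
go run ./hello-world/hello-world.go

hello, world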

Yep, once I nuked the second instance: -

rm hello-world.go

VS Code was happy: -


and the remaining hello.go program ran happily: -


Can you say "Doofus" ? I bet you can ......

Tuesday, 22 December 2020

Kubernetes on IBM Z - Flannel says "No"

I hit an interesting issue with a K8s 1.19.2 cluster running on an IBM Z box, specifically across a pair of Ubuntu containers running on an IBM Secure Service Container (SSC) LPAR in IBM's cloud.

One of my colleagues had just upgraded the SSC software on that particular LPAR, known as Hosting Appliance, and I was performing some post-upgrade checks.

Having set the KUBECONFIG variable: -

export KUBECONFIG=~/davehay_k8s.conf 

I checked the running pods: -

kubectl get pods --all-namespaces

NAMESPACE          NAME                                                  READY   STATUS      RESTARTS   AGE
default            hello-world-nginx-74bbbf57b4-8kzpb                    0/1     Error       0          25d
default            hello-world-nginx-deploy-to-cluster-dc8x2-pod-8fqzm   0/8     Completed   0          25d
default            hello-world-nginx-source-to-image-b2t7r-pod-l8mmm     0/2     Completed   0          25d
kube-system        coredns-f9fd979d6-tbp67                               0/1     Error       0          26d
kube-system        coredns-f9fd979d6-v9thn                               0/1     Error       0          26d
kube-system        etcd-b23976de6423                                     1/1     Running     1          26d
kube-system        kube-apiserver-b23976de6423                           1/1     Running     1          26d
kube-system        kube-controller-manager-b23976de6423                  1/1     Running     1          26d
kube-system        kube-proxy-cq5sg                                      1/1     Running     1          26d
kube-system        kube-proxy-qfg6v                                      1/1     Running     1          26d
kube-system        kube-scheduler-b23976de6423                           1/1     Running     1          26d
tekton-pipelines   tekton-pipelines-controller-587569588b-jv6hc          0/1     Error       0          25d
tekton-pipelines   tekton-pipelines-webhook-655cf7f8bb-nhlld             0/1     Error       0          25d

and noticed that five pods, including the Tekton Pipelines components, were in Error.

Initially, I tried to simply remove the erroring pods: -

kubectl delete pod coredns-f9fd979d6-v9thn --namespace kube-system

kubectl delete pod coredns-f9fd979d6-tbp67 --namespace kube-system

kubectl delete pod tekton-pipelines-controller-587569588b-jv6hc --namespace tekton-pipelines

kubectl delete pod tekton-pipelines-webhook-655cf7f8bb-nhlld --namespace tekton-pipelines

but that didn't seem to help much: -

kubectl get pods --all-namespaces

NAMESPACE          NAME                                                  READY   STATUS              RESTARTS   AGE
default            hello-world-nginx-deploy-to-cluster-dc8x2-pod-8fqzm   0/8     Completed           0          25d
default            hello-world-nginx-source-to-image-b2t7r-pod-l8mmm     0/2     Completed           0          25d
kube-system        coredns-f9fd979d6-xl7nx                               0/1     ContainerCreating   0          7m35s
kube-system        coredns-f9fd979d6-zd62l                               0/1     ContainerCreating   0          7m21s
kube-system        etcd-b23976de6423                                     1/1     Running             1          26d
kube-system        kube-apiserver-b23976de6423                           1/1     Running             1          26d
kube-system        kube-controller-manager-b23976de6423                  1/1     Running             1          26d
kube-system        kube-proxy-cq5sg                                      1/1     Running             1          26d
kube-system        kube-proxy-qfg6v                                      1/1     Running             1          26d
kube-system        kube-scheduler-b23976de6423                           1/1     Running             1          26d
tekton-pipelines   tekton-pipelines-controller-587569588b-mm9d9          0/1     ContainerCreating   0          9m49s
tekton-pipelines   tekton-pipelines-webhook-655cf7f8bb-h772d             0/1     ContainerCreating   0          9m32s

I dug into the health of one of the CoreDNS pods: -

kubectl describe pod coredns-f9fd979d6-zd62l --namespace kube-system

which, in part, showed: -

Type     Reason                  Age                     From               Message
----     ------                  ----                    ----               -------
Normal   Scheduled               4m17s                   default-scheduler  Successfully assigned kube-system/coredns-f9fd979d6-zd62l to b23976de6423
Warning  FailedCreatePodSandBox  4m16s                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "2afd79ad7a4aa926050bdb72affc567949f5dba1f07803020bd64bcbfe2de27b" network for pod "coredns-f9fd979d6-zd62l": networkPlugin cni failed to set up pod "coredns-f9fd979d6-zd62l_kube-system" network: open /run/flannel/subnet.env: no such file or directory
Warning  FailedCreatePodSandBox  4m14s                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "8c628d3e6851acadc25fcd4a4121bd6bbfa6638557a91464fbd724c98bfec40b" network for pod "coredns-f9fd979d6-zd62l": networkPlugin cni failed to set up pod "coredns-f9fd979d6-zd62l_kube-system" network: open /run/flannel/subnet.env: no such file or directory
Warning  FailedCreatePodSandBox  4m12s                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "ac5787fc3163e5216feceedbaaa16862ffea0e79d8ffc70951a531c625bd5424" network for pod "coredns-f9fd979d6-zd62l": networkPlugin cni failed to set up pod "coredns-f9fd979d6-zd62l_kube-system" network: open /run/flannel/subnet.env: no such file or directory
Warning  FailedCreatePodSandBox  4m10s                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "115475e415ad5a74442639f3731a050608d0409191486a518e0b62d5ffde1756" network for pod "coredns-f9fd979d6-zd62l": networkPlugin cni failed to set up pod "coredns-f9fd979d6-zd62l_kube-system" network: open /run/flannel/subnet.env: no such file or directory
Warning  FailedCreatePodSandBox  4m8s                    kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "2b6c81b54301793d87d0e426a35b44c41f44ce9f768d0f85cc89dbb7391baa5b" network for pod "coredns-f9fd979d6-zd62l": networkPlugin cni failed to set up pod "coredns-f9fd979d6-zd62l_kube-system" network: open /run/flannel/subnet.env: no such file or directory
Warning  FailedCreatePodSandBox  4m6s                    kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "9f8279ab67680e7b14d43d4ea109c0440527fccf1d2d06f3737e0c5ff38c9b82" network for pod "coredns-f9fd979d6-zd62l": networkPlugin cni failed to set up pod "coredns-f9fd979d6-zd62l_kube-system" network: open /run/flannel/subnet.env: no such file or directory
Warning  FailedCreatePodSandBox  4m4s                    kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "6788f48b5408ced802f77111e8b1f2968c4368228996e2fb946375638b8ca473" network for pod "coredns-f9fd979d6-zd62l": networkPlugin cni failed to set up pod "coredns-f9fd979d6-zd62l_kube-system" network: open /run/flannel/subnet.env: no such file or directory
Warning  FailedCreatePodSandBox  4m2s                    kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "c1daa71473cdadfeb187962b918902890bfd90981a96907fdc60b2937cc3ece4" network for pod "coredns-f9fd979d6-zd62l": networkPlugin cni failed to set up pod "coredns-f9fd979d6-zd62l_kube-system" network: open /run/flannel/subnet.env: no such file or directory
Warning  FailedCreatePodSandBox  4m                      kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "bee12244bd8e1070633226913f94cb0faae6de820b0f745fd905308b92a22b0d" network for pod "coredns-f9fd979d6-zd62l": networkPlugin cni failed to set up pod "coredns-f9fd979d6-zd62l_kube-system" network: open /run/flannel/subnet.env: no such file or directory
Normal   SandboxChanged          3m53s (x12 over 4m15s)  kubelet            Pod sandbox changed, it will be killed and re-created.
Warning  FailedCreatePodSandBox  3m52s (x4 over 3m58s)   kubelet            (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "3a0c18424d9c640e3d33467866852801982bd07fc919a323e290aae6852f7d04" network for pod "coredns-f9fd979d6-zd62l": networkPlugin cni failed to set up pod "coredns-f9fd979d6-zd62l_kube-system" network: open /run/flannel/subnet.env: no such file or directory
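
The key detail in those events is the missing /run/flannel/subnet.env; that file is written on each node by the flannel daemon when it starts, so its absence strongly suggests that flannel simply isn't running. A hedged check, assuming shell access to a node; on a healthy node the file typically looks something like: -

cat /run/flannel/subnet.env

FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true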

Working on the principle that the problem MIGHT be with the Flannel networking plugin, mainly because no Flannel pods appeared in the pod listing at all, neither running nor failing, I redeployed it: -

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml

podsecuritypolicy.policy/psp.flannel.unprivileged created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
clusterrole.rbac.authorization.k8s.io/flannel created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

which made a positive difference: -

kubectl get pods --all-namespaces

NAMESPACE          NAME                                                  READY   STATUS              RESTARTS   AGE
default            hello-world-nginx-deploy-to-cluster-dc8x2-pod-8fqzm   0/8     Completed           0          25d
default            hello-world-nginx-source-to-image-b2t7r-pod-l8mmm     0/2     Completed           0          25d
kube-system        coredns-f9fd979d6-xl7nx                               0/1     ContainerCreating   0          9m5s
kube-system        coredns-f9fd979d6-zd62l                               0/1     ContainerCreating   0          8m51s
kube-system        etcd-b23976de6423                                     1/1     Running             1          26d
kube-system        kube-apiserver-b23976de6423                           1/1     Running             1          26d
kube-system        kube-controller-manager-b23976de6423                  1/1     Running             1          26d
kube-system        kube-flannel-ds-s390x-ttl4n                           1/1     Running             0          4s
kube-system        kube-flannel-ds-s390x-wpx2h                           1/1     Running             0          4s
kube-system        kube-proxy-cq5sg                                      1/1     Running             1          26d
kube-system        kube-proxy-qfg6v                                      1/1     Running             1          26d
kube-system        kube-scheduler-b23976de6423                           1/1     Running             1          26d
tekton-pipelines   tekton-pipelines-controller-587569588b-mm9d9          0/1     ContainerCreating   0          11m
tekton-pipelines   tekton-pipelines-webhook-655cf7f8bb-h772d             0/1     ContainerCreating   0          11m

kubectl get pods --all-namespaces

NAMESPACE          NAME                                                  READY   STATUS      RESTARTS   AGE
default            hello-world-nginx-deploy-to-cluster-dc8x2-pod-8fqzm   0/8     Completed   0          25d
default            hello-world-nginx-source-to-image-b2t7r-pod-l8mmm     0/2     Completed   0          25d
kube-system        coredns-f9fd979d6-xl7nx                               1/1     Running     0          9m56s
kube-system        coredns-f9fd979d6-zd62l                               1/1     Running     0          9m42s
kube-system        etcd-b23976de6423                                     1/1     Running     1          26d
kube-system        kube-apiserver-b23976de6423                           1/1     Running     1          26d
kube-system        kube-controller-manager-b23976de6423                  1/1     Running     1          26d
kube-system        kube-flannel-ds-s390x-ttl4n                           1/1     Running     0          55s
kube-system        kube-flannel-ds-s390x-wpx2h                           1/1     Running     0          55s
kube-system        kube-proxy-cq5sg                                      1/1     Running     1          26d
kube-system        kube-proxy-qfg6v                                      1/1     Running     1          26d
kube-system        kube-scheduler-b23976de6423                           1/1     Running     1          26d
tekton-pipelines   tekton-pipelines-controller-587569588b-mm9d9          1/1     Running     0          12m
tekton-pipelines   tekton-pipelines-webhook-655cf7f8bb-h772d             1/1     Running     0          11m
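
As an aside, the Flannel manifest creates one DaemonSet per CPU architecture, but on an IBM Z box only the s390x one actually schedules pods, which is why just the kube-flannel-ds-s390x pods, one per node, appear above. A hedged way to confirm this, assuming the usual app=flannel label on the pods: -

kubectl get pods --namespace kube-system --selector app=flannel --output wide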

I then re-ran the script that creates a Tekton deployment: -

# Create the tutorial-service Service Account
kubectl apply -f ./serviceaccounts/create_tutorial_service_account.yaml

# Create the clusterrole and clusterrolebinding
kubectl apply -f ./roles/create_cluster_role.yaml

# Create the Tekton Resource aligned to the Git repository
kubectl apply -f ./resources/git.yaml

# Create the Tekton Task that creates the Docker image from the GitHub repository
kubectl apply -f ./tasks/source-to-image.yaml

# Create the Tekton Task that pushes the built image to Docker Hub
kubectl apply -f ./tasks/deploy-using-kubectl.yaml

# Create the Tekton Pipeline that runs the two tasks
kubectl apply -f ./pipelines/build-and-deploy-pipeline.yaml

# Create the Tekton PipelineRun that runs the Pipeline
kubectl apply -f ./runs/pipeline-run.yaml

# Display the Pipeline logs
tkn pipelines logs
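
As an aside, the state of the resulting PipelineRun can also be watched via the tkn CLI; a hedged example, assuming a reasonably recent tkn: -

tkn pipelinerun list

tkn pipelinerun logs --last -f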

I then checked the resulting deployment: -

kubectl get deployments

NAME                READY   UP-TO-DATE   AVAILABLE   AGE
hello-world-nginx   1/1     1            1           17m

service: -

kubectl get services

NAME                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
hello-world-nginx   NodePort    10.97.37.211   <none>        80:30674/TCP   17m
kubernetes          ClusterIP   10.96.0.1      <none>        443/TCP        26d

and nodes: -

kubectl get nodes

NAME           STATUS   ROLES    AGE   VERSION
68bc83cf0d09   Ready    <none>   26d   v1.19.2
b23976de6423   Ready    master   26d   v1.19.2

I then used cURL to validate the Nginx pod: -

curl http://192.168.32.142:30674

<html>
  <head>
    <title>Hello World</title>
  </head>
  <body>
    <div class="info">
      <p>
        <h2>
          <span>Welcome to IBM Cloud Kubernetes Service with Hyper Protect ...</span>
        </h2>
      </p>
      <p>
        <h2>
          <span>and your first Docker application built by Tekton Pipelines and Triggers ...</span>
        </h2>
      </p>
      <p>
        <h2>
          <span>Message of the Day .... Drink More Herbal Tea!!</span>
        </h2>
      </p>
      <p>
        <h2>
          <span>( and, of course, Hello World! )</span>
        </h2>
      </p>
    </div>
  </body>
</html>
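
For what it's worth, the NodePort in that URL needn't be hard-coded; it can be looked up from the Service itself. A hedged one-liner, assuming the single-port Service shown above: -

NODE_PORT=$(kubectl get service hello-world-nginx --output jsonpath='{.spec.ports[0].nodePort}')

curl http://192.168.32.142:$NODE_PORT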

So we're now good to go ......

Sunday, 20 December 2020

Today I learned ... when is Bash not Bash ?

Having written Bash (Bourne Again Shell) scripts for the longest time, I couldn't quite work out why some things that worked on my Mac did NOT work on my colleague's Mac, even though he was using my script .....

TL;DR; the major difference was the shell that each of us was using.

For me, it's Bash all the way ...

echo $SHELL

/bin/bash

whereas, for him, that same command returned: -

/bin/zsh

Now I typically write my scripts the same way, with line 1 reading: -

#!/bin/bash

which works for me ....

However, he was essentially trying to run a Bash script in ZSH, using the older default version of Bash.

When he ran /bin/bash to switch to Bash rather than ZSH, all was well ....

Thankfully, the internet came to our rescue: -


which says, in part: -

If your scripts start with the line #!/bin/bash they will still be run using bash, even if your default shell is zsh.

I've found the syntax of zsh really close to the one of bash, and I did not pay attention if there was really some incompatibilities. I switched 6 years ago from bash to zsh seamlessly.

and, even more importantly: -

Hard-coding the path to the shell is bad advice, even if it's done frequently. You should use #!/usr/bin/env bash instead, especially on macOS, where the default bash is severely outdated and new versions are virtually always installed into a different path.

Following that second piece of advice, I changed my script's first line to read: -

#!/usr/bin/env bash

and, quelle surprise, it just worked for my colleague, from within a ZSH session.
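
To see why that matters, it's worth comparing the two on a Mac that has a newer Bash installed, e.g. via Homebrew; a minimal sketch: -

# Apple's bundled Bash, which is stuck at version 3.2.x for licensing reasons
/bin/bash --version

# whichever bash appears first in $PATH, e.g. a newer Homebrew-installed Bash 5.x
/usr/bin/env bash --version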

As I say, TIL !

RESTing rather than wrestling with Jenkins

So this is not quite Today I Learned, as I discovered this a few days back, but that doesn't work quite so well as TIL ....

Jenkins has a rather nifty REST API, where all/most of the common functions can be invoked via cURL commands.

In order to demonstrate this, I'd previously installed Jenkins onto a Virtual Server running Ubuntu, and, having created a basic Hello World Job, ran through the steps to manage the lifecycle of the job via my Mac's CLI.

Firstly, I needed to create an Access Token, using the Jenkins GUI: -

http://192.168.1.10:8080/user/hayd/configure

This generates a nice long hexadecimal string of gobbledegook, which I then use via my CLI invocations: -

Set the environment variables

export USER="hayd"

export ACCESS_TOKEN="11eec6f237adbdc9c61c15b27188d64028"

export JENKINS="http://192.168.1.10:8080"

List the available Jobs

curl --silent --request GET --user $USER:$ACCESS_TOKEN $JENKINS/api/json|jq '.jobs[].name'

of which there's one: -

"HelloWorld"

With that Job name also set as an environment variable: -

export JOB="HelloWorld"

I can then retrieve the Job's configuration: -

curl --silent --request GET --user $USER:$ACCESS_TOKEN $JENKINS/job/$JOB/config.xml --output ~/$(echo $JOB).xml

which returns an XML document: -

ls -al ~/$(echo $JOB).xml

-rw-r--r--  1 hayd  staff  694 20 Dec 14:15 /Users/hayd/HelloWorld.xml

which we can inspect: -

cat ~/$(echo $JOB).xml

<?xml version='1.1' encoding='UTF-8'?>
<project>
  <description>Say Hello</description>
  <keepDependencies>false</keepDependencies>
  <properties/>
  <scm class="hudson.scm.NullSCM"/>
  <canRoam>true</canRoam>
  <disabled>false</disabled>
  <blockBuildWhenDownstreamBuilding>false</blockBuildWhenDownstreamBuilding>
  <blockBuildWhenUpstreamBuilding>false</blockBuildWhenUpstreamBuilding>
  <triggers/>
  <concurrentBuild>false</concurrentBuild>
  <builders>
    <hudson.tasks.Shell>
      <command>#!/bin/bash

export GREETING=&quot;Hello World!&quot;
echo $GREETING</command>
      <configuredLocalRules/>
    </hudson.tasks.Shell>
  </builders>
  <publishers/>
  <buildWrappers/>
</project>

Definitely not the world's most exciting Jenkins Job ...

So let's nuke it ....

Delete the Job

curl --silent --request DELETE --user $USER:$ACCESS_TOKEN $JENKINS/job/$JOB/

This doesn't return anything .....

List the available Jobs

curl --silent --request GET --user $USER:$ACCESS_TOKEN $JENKINS/api/json|jq '.jobs[].name'

This doesn't return anything ..... because there's no longer anything to return ...

Ooops, didn't mean to delete it, where's the backup ?

Create a new job from the XML document

curl --silent --request POST --user $USER:$ACCESS_TOKEN --header 'Content-type: application/xml' $JENKINS/createItem?name=$(echo $JOB) --data-binary @$HOME/$(echo $JOB).xml

This doesn't return anything .....

List the available Jobs

curl --silent --request GET --user $USER:$ACCESS_TOKEN $JENKINS/api/json|jq '.jobs[].name'

"HelloWorld"

Phew!
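
And, having restored the Job, I can kick off a build via the same API; a minimal example (note that some Jenkins installations will also insist on a CSRF crumb for POST requests): -

curl --silent --request POST --user $USER:$ACCESS_TOKEN $JENKINS/job/$JOB/build

This doesn't return anything either, but the build duly appears in the Job's build history.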

TL;DR; almost all of the Jenkins GUI pages have their own REST API, as indicated by the link at the bottom of all/most pages: -


which leads to pages such as: -

http://192.168.1.10:8080/api/


which provides a useful set of suggestions e.g.

• Controlling the amount of data you fetch
• Create Job
• Copy Job
• Build Queue
• Load Statistics
• Restarting Jenkins
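
The first of those, controlling the amount of data you fetch, is done via the tree query parameter of the standard Jenkins remote API. For example, to return just the job names: -

curl --silent --user $USER:$ACCESS_TOKEN "$JENKINS/api/json?tree=jobs[name]" | jq '.jobs[].name'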

For yet more information, check out the Jenkins API documentation.



Wednesday, 16 December 2020

Connecting to IBM Db2 from SQuirreLSQL on macOS

This came up in a thread from an IBM colleague, and led me to have a tinker with a new-to-me tool, SQuirreLSQL.

Having downloaded and installed the app, making sure to select the IBM Db2 driver: -


and having spun up a Db2 container on one of my Docker boxes: -

docker run -itd --name mydb2 --privileged=true -p 50000:50000 -e LICENSE=accept -e DB2INST1_PASSWORD=p4ssw0rd -e DBNAME=testdb -v ~/db2:/database ibmcom/db2 

(this is a useful inspiration for Db2 on Docker)

I realised that I also needed to download an IBM Db2 JDBC driver from: -


which resulted in: -

-rw-r--r--@   1 hayd  staff  8341739 16 Dec 10:46 v11.5.5_jdbc_sqlj.tar.gz

I unpacked this: -

tar xvzf ~/Downloads/v11.5.5_jdbc_sqlj.tar.gz -C /tmp

x jdbc_sqlj/
x jdbc_sqlj/license/
x jdbc_sqlj/license/UNIX/
x jdbc_sqlj/license/UNIX/jdbc4_LI_fr
x jdbc_sqlj/license/UNIX/jdbc4_LI_cs
x jdbc_sqlj/license/UNIX/jdbc4_LI_pl
x jdbc_sqlj/license/UNIX/jdbc4_LI_tr
x jdbc_sqlj/license/UNIX/jdbc4_LI_de
x jdbc_sqlj/license/UNIX/jdbc4_LI_zh_TW
x jdbc_sqlj/license/UNIX/jdbc4_LI_in
x jdbc_sqlj/license/UNIX/jdbc4_LI_es
x jdbc_sqlj/license/UNIX/jdbc4_LI_ja
x jdbc_sqlj/license/UNIX/jdbc4_LI_zh
x jdbc_sqlj/license/UNIX/jdbc4_LI_it
x jdbc_sqlj/license/UNIX/jdbc4_LI_pt
x jdbc_sqlj/license/UNIX/jdbc4_LI_en
x jdbc_sqlj/license/UNIX/jdbc4_LI_ko
x jdbc_sqlj/license/Windows/
x jdbc_sqlj/license/Windows/jdbc4_LI_tw.rtf
x jdbc_sqlj/license/Windows/jdbc4_LI_tr.rtf
x jdbc_sqlj/license/Windows/jdbc4_LI_kr.rtf
x jdbc_sqlj/license/Windows/jdbc4_LI_pl.rtf
x jdbc_sqlj/license/Windows/jdbc4_LI_cz.rtf
x jdbc_sqlj/license/Windows/jdbc4_LI_it.rtf
x jdbc_sqlj/license/Windows/jdbc4_LI_br.rtf
x jdbc_sqlj/license/Windows/jdbc4_LI_en.rtf
x jdbc_sqlj/license/Windows/jdbc4_LI_es.rtf
x jdbc_sqlj/license/Windows/jdbc4_LI_cn.rtf
x jdbc_sqlj/license/Windows/jdbc4_LI_fr.rtf
x jdbc_sqlj/license/Windows/jdbc4_LI_de.rtf
x jdbc_sqlj/license/Windows/jdbc4_LI_jp.rtf
x jdbc_sqlj/license/Windows/jdbc4_LI_in.rtf
x jdbc_sqlj/license/jdbc4_notices.txt
x jdbc_sqlj/license/jdbc4_notices.rtf
x jdbc_sqlj/db2_db2driver_for_jdbc_sqlj.zip

unzip /tmp/jdbc_sqlj/db2_db2driver_for_jdbc_sqlj.zip -d /tmp

Archive:  /tmp/jdbc_sqlj/db2_db2driver_for_jdbc_sqlj.zip
  inflating: /tmp/db2jcc4.jar        
  inflating: /tmp/jdbc4_LI_br.rtf    
  inflating: /tmp/jdbc4_LI_cn.rtf    
  inflating: /tmp/jdbc4_LI_cs        
  inflating: /tmp/jdbc4_LI_cz.rtf    
  inflating: /tmp/jdbc4_LI_de        
  inflating: /tmp/jdbc4_LI_de.rtf    
  inflating: /tmp/jdbc4_LI_en        
  inflating: /tmp/jdbc4_LI_en.rtf    
  inflating: /tmp/jdbc4_LI_es        
  inflating: /tmp/jdbc4_LI_es.rtf    
  inflating: /tmp/jdbc4_LI_fr        
  inflating: /tmp/jdbc4_LI_fr.rtf    
  inflating: /tmp/jdbc4_LI_it        
  inflating: /tmp/jdbc4_LI_it.rtf    
  inflating: /tmp/jdbc4_LI_ja        
  inflating: /tmp/jdbc4_LI_jp.rtf    
  inflating: /tmp/jdbc4_LI_ko        
  inflating: /tmp/jdbc4_LI_kr.rtf    
  inflating: /tmp/jdbc4_LI_pl        
  inflating: /tmp/jdbc4_LI_pl.rtf    
  inflating: /tmp/jdbc4_LI_pt        
  inflating: /tmp/jdbc4_LI_tr        
  inflating: /tmp/jdbc4_LI_tr.rtf    
  inflating: /tmp/jdbc4_LI_tw.rtf    
  inflating: /tmp/jdbc4_LI_zh        
  inflating: /tmp/jdbc4_LI_zh_TW     
  inflating: /tmp/jdbc4_REDIST.txt   
  inflating: /tmp/sqlj4.zip          
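
The file that matters from all of that is db2jcc4.jar, as that's what gets registered with SQuirreLSQL. For the record, a hedged sketch of the connection settings that match the docker run command above (the host is wherever Docker is running; db2inst1 is the instance user whose password DB2INST1_PASSWORD sets): -

Driver class : com.ibm.db2.jcc.DB2Driver
JDBC URL     : jdbc:db2://<docker-host>:50000/testdb
User         : db2inst1
Password     : p4ssw0rd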

and then added a new JDBC driver to the SQuirreLSQL client: -



and connected to my Db2 on Docker instance: -


and queried my database: -


Nice!

Friday, 11 December 2020

Want to learn IBM Cloudant ? Check out YouTube ....

Further tinkering with Cloudant and couchimport etc. today, and found this: -

Cloudant Course

This is an eighteen-part training course designed to introduce new users to the IBM Cloudant Database-as-a-Service. It begins with an introduction to the database, its open-source heritage and how it differs from traditional relational data stores. By the end of the course, you will have been introduced to the database's API, bulk operations, querying, aggregation, replication and much more. Some course parts include practical exercises which can be completed on IBM's free Cloudant Lite service or using a self-hosted Apache CouchDB installation.

This course is ideal for developers looking to get started with the IBM Cloudant service, or with Apache CouchDB. Ideally course participants would have a technical grounding and be familiar with HTTP and JSON.

Each video has full closed captions and links to a textual version of the course.

Thursday, 3 December 2020

TIL - how to print from Notes in iOS and iPadOS 14.2

This has baffled me since I upgraded to iOS 14 a while back ...

As an avid user of the Notes app across my various Apple devices - iPhone, iPad, Watch and Mac - I was struggling to work out how to print (IKR?) from Notes ....

Google had the answer, Google is my friend ...

Specifically, this: -

How to print Notes on iPhone and iPad


Thanks iMore, you rule!
