Friday, 20 September 2019

SSH and "Too many authentication failures" - a new one on me

Having created a new user on an Ubuntu 16.04 boxen, I started seeing this: -

Received disconnect from port 22:2: Too many authentication failures
Disconnected from port 22

whilst trying to SSH into the box, using the new account: -

ssh testfest@

even though I was able to SSH using my own account ....

On the target box, I was seeing: -

 Sep 19 16:21:24 ubuntu sshd[192635]: error: maximum authentication attempts exceeded for testfest from port 54324 ssh2 [preauth]
 Sep 19 16:21:24 ubuntu sshd[192635]: Disconnecting: Too many authentication failures [preauth]
 Sep 19 16:21:48 ubuntu su[192609]: pam_unix(su:session): session closed for user testfest

One key (!) difference ....

For my own user, I'm using my SSH private key ...

For this new user, I'm using a password ...

There was a correlation ...

In my Mac's local SSH directory ( ~/.ssh ) I had a file, config, which was set to: -

Host *
 AddKeysToAgent yes
 UseKeychain yes
 IdentityFile ~/.ssh/id_rsa

In broad terms, my Mac was trying to be helpful and send MY private key to assert the identity of this new user ... which wasn't ever going to work ...

I tried moving ~/.ssh/config to ~/.ssh/cheese but to no avail.

As ever, Google had the answer ( and, yes, Google is my friend ) : -

This is usually caused by inadvertently offering multiple ssh keys to the server. The server will reject any key after too many keys have been offered.

You can see this for yourself by adding the -v flag to your ssh command to get verbose output. You will see that a bunch of keys are offered, until the server rejects the connection saying: "Too many authentication failures for [user]". Without verbose mode, you will only see the ambiguous message "Connection reset by peer".

To prevent irrelevant keys from being offered, you have to explicitly specify this in every host entry in the ~/.ssh/config (on the client machine) file by adding IdentitiesOnly like so:

  IdentityFile ~/.ssh/key_for_somehost_rsa
  IdentitiesOnly yes
  Port 22

If you use the ssh-agent, it helps to run ssh-add -D to clear the identities.

Of course, I didn't think to enable verbose mode on the SSH client via ssh -v but ...

I did try the tip of clearing the identities: -

ssh-add -D

and ... IT WORKED!!
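For anyone hitting the same thing, the agent-clearing fix and the per-connection alternative can both be checked without ever touching the server. These are standard OpenSSH client options ( the user and host below are placeholders ), and ssh -G resolves the effective configuration locally without opening a connection: -

```shell
# The fix that worked above - clear every identity from the agent:
#   ssh-add -D
# Per-connection alternative: offer no keys at all and go straight
# to password auth. `ssh -G` prints what WOULD be used, safely:
ssh -G -o IdentitiesOnly=yes \
       -o PreferredAuthentications=password \
       testfest@example.com | grep -E '^(identitiesonly|preferredauthentications) '
```

The grep should show identitiesonly yes and preferredauthentications password, confirming no stray agent keys would be offered.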

Every day, it's a school day !

Friday, 13 September 2019

Yay, we have a new mainframe with which to play .... IBM z15

Sharing an article from Patrick Moorhead, hosted at Forbes: -

IBM Galvanizes Its Place In Secure And Private Workloads With New z15 Platform

In the world of computers, one of the oldest and best-known in the industry is the IBM mainframe, which has existed since the 1960s. This week IBM unveiled the latest addition to its Z mainframe portfolio, a new platform called the “z15”, which was designed with data privacy, security and hybrid multicloud in mind. Let’s take a closer look at the offering, and what it means for IBM’s play for a seat at the secured hybrid cloud table.  

and we have one sitting in the machine-room less than 50 feet away from where I'm sitting right now ....

To say I'm excited is an under-statement .....

More to follow .....

Friday, 6 September 2019

MainframerZ Skills meetup at Mediaocean in London - 2 October 2019

I'll be there, will you ?

MainframerZ Skills meetup at Mediaocean

Join us for our 4th MainframerZ meetup on Wednesday 2nd October. With the success of our last event, we'll be hosted again by Mediaocean near the Tate Modern.

This will be our first themed event, with focus on Z Skills. Come along for a range of lightning talks, discussions, and not forgetting of course, free pizza!

Meet other Z professionals, grow your network, and help continue to shape the future of MainframerZ!

We look forward to meeting new members and welcoming back some of our experienced members (and don't forget your MainframerZ badges!)

Want to share something at the event? Or start a discussion? Get in touch with our organisers, we'd love to hear from you.

Provisional Agenda
6:15 - 6:45 Arrival and registration
7:00 - 9:15 Introductions, talks, pizza and discussion

GitHub and SSH keys - so now I know

I've been using GitHub in earnest for the past 7 months or so, since switching into a development role.

One of the oh-so-lovely things is that I can access my repositories using SSH, making git clone and git remote and git fetch and git rebase so much easier ...

I no longer need to muck about with HTTPS URLs and user IDs and passwords, like a cave person ...

Instead, I merely need to teach the git client about my SSH credentials, and I'm good to go.

This means, in part, generating an SSH public/private key pair using a command such as ssh-keygen as per the following example: -

ssh-keygen -b 4096 -t rsa -f /tmp/foobar -N ""

Generating public/private rsa key pair.
Your identification has been saved in /tmp/foobar.
Your public key has been saved in /tmp/
The key fingerprint is:
SHA256:jdVHYm0U7hceMDZ594LaOhBTz+BC0YPTWtfl6eNfv3I hayd@Daves-MBP
The key's randomart image is:
+---[RSA 4096]----+
|        .=  oO=o.|
|        + Boo=Boo|
|       . *.*.o+++|
|        =+. +o+ +|
|        S+.o  .=.|
|        . . . ...|
|         . .   ..|
|          o  . E+|
|           .  o.+|
+----[SHA256]-----+

and then grab the public key: -

cat /tmp/

ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC+RhkGDj7zr86FLEkmhcQ5+bA9IwFGtdAVwq7bqkVvbbsWv4YtupknAEaao8epLAipZjHGgitlUskBGDlQc4TGTTyOHt6goYIjfetUv9XtWy4gsyF8k69x6NfvPZ/BFvLWSSc0LPH6+jYSs7ZNdzsqoafo7qr/nnjkCvD/raTUkuPgnoWFMAyKGcUbMHjaHHvOYf2DJriFoIlK+hSYO7tBj+Cf5OS1/DgNYHqSM8l3fVspM2fzyz2VAGEMZRsRWBh0CF7nKxc1aWp2gMzZEX5RJ9Lth+gIVIaWCixbuAerh82y4d/7eTHSh9OAOX/QNTwCC+eOTTaS7G/W+PoBSAx8wJi3xZapotOe43UgJ+KE+sRFjGXr/oe8w9IenfWiPiEAhdD9YHsOBCpvQ65zZLH75tGdKQ3Neu2wgP8os6qPTMU9S02wsit3vWsiAgLURRMX9Fat1XTI737B9rwdA/JnFMDf15szN9wg3nypRuvgtAtihhxJH87CT8R23XzlgNhCroYBkprlC+hGXmdyifxCFdSAgBo4xzm2XYRL63WBnfd8MOOLOoxxQDV8EYTW5PCS3grx3Rh07W6Lcs1Jnw6oYBUOpseZQHdzerX0mLuMoJL4uM3ZB+moTyi7UgsbMsBlPWO6xKTbhD3X4ZOhiMpBF/J9dJ3HfeFFnVU6Is6z2w== hayd@Daves-MBP

to the clipboard

PS On macOS, this is a simple matter of running pbcopy < /tmp/ 

We can then navigate to the Settings -> SSH and GPG keys page on GitHub: -

From there we can simply click the New SSH key button, give the to-be-added key a useful name e.g. Dave's MacBook, September 2019 etc. and paste in the public key from the clipboard.

This results in a new key: -

However, there is one small downside - GitHub merely shows the fingerprint of the newly added key: -


which makes it somewhat hard to track back to an actual key pair, especially if one uses a less-than-explanatory name.

However, there is good news ....

This command: -

ssh-keygen -l -E md5 -f /tmp/foobar

can be run against the public OR private key ( they're a pair! ), and returns the fingerprint in the same MD5 format as GitHub uses: -

4096 MD5:19:fd:a6:ff:44:a2:5a:11:06:75:0b:86:4d:c1:88:4c hayd@Daves-MBP (RSA)

Notice that the fingerprint is the same !!
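To see the two formats side by side, here's a quick sketch with a throwaway key ( /tmp/fp_demo is just an example path ): -

```shell
# Generate a disposable key pair, then fingerprint it twice
rm -f /tmp/fp_demo /tmp/fp_demo.pub
ssh-keygen -q -t rsa -b 2048 -f /tmp/fp_demo -N ""
ssh-keygen -l -f /tmp/fp_demo.pub          # default SHA256 fingerprint
ssh-keygen -l -E md5 -f /tmp/fp_demo.pub   # MD5 form, matching GitHub's display
```

Same key, two renderings of the same fingerprint - which is why the -E md5 flag is the missing link when auditing keys against GitHub.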

Now on an audit of my GitHub and GitHub Enterprise accounts ........

*UPDATE* And I can do this to get the fingerprint of ALL the keys: -

for i in ~/.ssh/*.pub; do ssh-keygen -l -E md5 -f "$i"; done

Thanks to this: -

for inspiration.

Friday, 30 August 2019

Announcing IBM Cloud Hyper Protect Virtual Servers – BETA

Something upon which my wider IBM Z team have been working this past few months ....

Announcing IBM Cloud Hyper Protect Virtual Servers – BETA 

YouTube - IBM Cloud Hyper Protect Virtual Servers

IBM Cloud Catalog - Hyper Protect Virtual Servers

Hyper protect line of virtual servers service leveraging the unparalleled security and reliability of Secure Service Containers on IBM Z.


Ability to deploy a Virtual Server in a Secure Service Container ensuring confidentiality of data and code running within the VS

Z Capabilities on the cloud

Ability to deploy workload into the most secure, highly performant, Linux virtual server with extreme vertical scale

Easy to use, open, and flexible

User experience at parity with market leaders both when buying and using the VS; with the openness and flexibility of a public cloud

No Z skills required

Access Z technology without having to purchase, install, and maintain unique hardware

Can you say "Yay" ??

Saturday, 17 August 2019

Ubuntu and Docker - Handling the GUI

Long story very short, I've been building some Docker images from an existing parent image.

As part of my build, I wanted to ensure that the resulting Ubuntu OS was as up-to-date as possible, so I included: -

RUN apt-get update && apt-get upgrade -y

in my Dockerfile as good practice.

However, some of the apt-get upgrade steps invoke the debconf tool "under the covers", which expects interactive input via a very minimal text-based UI.

This doesn't work too well during a docker build, whether performed manually or, as I do, via a Jenkins Pipeline.

Therefore, I had to find a way to suppress the CLI/GUI interactions.

After some digging, via Google ( of course ), I found this: -

If the Dockerfile specifies the following line, the error will disappear:

ENV DEBIAN_FRONTEND=noninteractive

so I tried adding that line: -

ENV DEBIAN_FRONTEND=noninteractive

to my Dockerfile, immediately after the FROM statement.

Guess what ? It worked !
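The same variable can also be scoped to a single command, which is handy outside a Dockerfile too - a small sketch, nothing Docker-specific: -

```shell
# ENV DEBIAN_FRONTEND=noninteractive sets the variable for every later RUN
# instruction; prefixing one command has the same effect for just that command:
DEBIAN_FRONTEND=noninteractive sh -c 'echo "debconf frontend: $DEBIAN_FRONTEND"'
```

Scoping it per-command avoids leaving the variable set in the final image, where it could mask prompts you actually want at runtime.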

Another one bites the dust .....

Tuesday, 13 August 2019

Python Learnings #2 - handling indices from YAML

I was seeing this: -

TypeError: string indices must be integers

whilst processing a list of data in Python, where said data was being read from a YAML file.

It took me a while to work out the problem/solution.

Long story very short, lists in YAML are different to key/value pairs in that lists do NOT have a key, but instead have an index.

Therefore, the YAML needs to look like: -

   - 1
   - 2
   - 3

or: -

   - Lisa
   - Marge
   - Bart
   - Homer

Once I changed my YAML, we were good to go.
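The same error is easy to reproduce without YAML at all - a string indexed with a string key raises exactly this TypeError. A quick sketch using plain Python structures: -

```shell
python3 - <<'EOF'
# A mapping whose value is a STRING, not a list - like YAML that
# lacks the "- " list markers:
data = {"names": "Lisa"}
try:
    data["names"]["name"]            # string indexed with a string key
except TypeError as e:
    print(type(e).__name__)
# With a real list ( YAML "- " items ), integer indices work fine:
data = {"names": ["Lisa", "Marge", "Bart", "Homer"]}
print(data["names"][0])
EOF
```

The first print shows TypeError; the second prints Lisa, because the value is now a proper list with integer indices.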

These also helped: -

Python Learnings #1 - decoding Base64 strings

Long story short, I was seeing exceptions such as this: -

Traceback (most recent call last):
  File "", line 8, in
    decoded_string = base64.b64decode(string).decode('UTF-8')
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb2 in position 0: invalid start byte

and: -

Traceback (most recent call last):
  File "", line 8, in
    decoded_string = base64.b64decode(string).decode('UTF-8')
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb0 in position 25: invalid start byte

with a piece of inherited Python code.

I wasn't 100% clear how the decoder was being used, apart from this: -

decoded_string = base64.b64decode(string).decode('UTF-8')

where string contained, I assumed, a Base64-encoded string.

The actual string was a public key, generated using ssh-keygen : -

ssh-keygen -t rsa -b 4096 -f ~/.ssh/dummy -N ""

I wrote a test harness to simulate the problem and test potential solutions, and was passing the public key into the string variable: -

cat ~/.ssh/ | awk '{print $2}'


Can you see where I was going wrong ?

Yes, the public key is NOT Base64 encoded ....

The solution was to encode the public key: -

cat ~/.ssh/ | awk '{print $2}' | base64


at which point my code started working: -


which returns the original unencoded public key: -


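To make the failure and the fix concrete, here's a self-contained sketch - the key material below is a shortened stand-in, not a real key: -

```shell
python3 - <<'EOF'
import base64
# The second column of an OpenSSH public key is ALREADY base64 - of
# BINARY key data - so decoding it as UTF-8 text blows up:
field = "AAAAB3NzaC1yc2EAAAADAQABAAABgQC7"   # shortened stand-in
try:
    base64.b64decode(field).decode("utf-8")
except UnicodeDecodeError as e:
    print("decode failed:", e.reason)
# Base64-encoding the TEXT first ( the `| base64` in the fix ) means
# a single decode round-trips back to the original column:
encoded = base64.b64encode(field.encode("utf-8")).decode("ascii")
print(base64.b64decode(encoded).decode("utf-8") == field)
EOF
```

The first attempt fails with "invalid start byte" - the same class of error as the original traceback - while the double-encoded version round-trips cleanly.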
Thursday, 8 August 2019

X-Post - Use IBM Cloud Hyper Protect Crypto Services to encrypt VMware disks

One of my colleagues, Chris Poole, recently wrote this: -

Use IBM Cloud Hyper Protect Crypto Services to encrypt VMware disks

IBM Cloud offers integrated VMware solutions. Each virtual machine you stand up has storage coupled to it, which you may want to encrypt. These VMs may host applications and data that contain sensitive information, so you would need to lock it down.

You can encrypt this storage via highly secure, industry-standard algorithms. But this can lead to a key management concern: Where do you keep the keys, and how do you secure them? You can now configure a tight integration between IBM Cloud Hyper Protect Crypto Services (HPCS) and VMware on IBM Cloud. This tutorial shows you how to set this up to ensure that your most sensitive data is protected.

HPCS allows for secure key generation and storage, and takes advantage of an industry-leading hardware security module (HSM). This is the only public-cloud HSM that offers FIPS 140-2 level 4 data protection, which means that it’s highly tamper resistant. Store your keys here, and you can be sure that they’re kept safe from hackers — and even from IBM. No one but you can read them.

Down the rabbit hole with Docker and Kubernetes security

One of the many many fine podcasts to which I listen is The Kubernetes Podcast from Google.

A recent episode, Attacking and Defending Kubernetes, with Ian Coldwater, covered a lot of ground with regard to Docker/Kubernetes security, and led me to Ian's co-presentation from this year's BlackHat conference in Vegas: -

The Path Less Traveled: Abusing Kubernetes Defaults

Kubernetes is a container orchestration framework that is increasingly widely used in enterprise and elsewhere. While the industry is starting to pay some attention to Kubernetes security, there are many attack paths that aren’t well-documented, and are rarely discussed. This lack of information can make your clusters vulnerable.

as well as this: -

Understanding Docker container escapes

Definitely LOTS about which to think .....

PSA The podcast also mentioned some things upon which I'm working .... 😀

IBM and Red Hat:

OpenShift on IBM Cloud
OpenShift coming to Z Series and LinuxONE
Cloud Paks and services

Tuesday, 6 August 2019

SSH - Tinkering with the Known Hosts file

From the department of "I Did Not Know This" ....

Having been doing a LOT with SSH client/server connectivity these past few weeks, I'd seen a lot of this: -

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
Please contact your system administrator.
Add correct host key in /Users/hayd/.ssh/known_hosts to get rid of this message.
Offending ECDSA key in /Users/hayd/.ssh/known_hosts:1
ECDSA host key for has changed and you have requested strict checking.
Host key verification failed.

mainly because I've been creating/deleting/recreating hosts ( containers running on IBM Z ) using the same IP address.

Each time I generate a new container, the unique private (host) key for the SSH daemon on the new container changes, which means that the above warning is back on ...

However, it's still a wrench to see "IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!" each and every time.

My hacky solution was to: -
  • Manually edit ~/.ssh/known_hosts each and every time ...
  • Delete ~/.ssh/known_hosts which is somewhat nuclear 
One of my colleagues gave me a MUCH better way ...

Use the ssh-keygen command to remove ONLY the "offending" host: -

ssh-keygen -f ~/.ssh/known_hosts -R

# Host found: line 1
/Users/hayd/.ssh/known_hosts updated.
Original contents retained as /Users/hayd/.ssh/known_hosts.old

which is WAY better.

For background, here's the Man page: -

-R hostname | [hostname]:port
    Removes all keys belonging to the specified hostname (with optional port number) from a known_hosts file. This option is useful to delete hashed hosts (see the -H option above).
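Here's the same removal run against a throwaway known_hosts file, so nothing real gets touched ( demo.example.com is obviously made up ): -

```shell
# Build a scratch known_hosts containing one fabricated entry
kh=$(mktemp)
rm -f /tmp/kh_demo_key /tmp/kh_demo_key.pub
ssh-keygen -q -t ed25519 -f /tmp/kh_demo_key -N ""
printf 'demo.example.com %s\n' "$(cut -d' ' -f1,2 /tmp/kh_demo_key.pub)" > "$kh"
# Remove ONLY that host's key; a .old backup is kept automatically
ssh-keygen -f "$kh" -R demo.example.com
grep -q demo.example.com "$kh" || echo "entry removed"
```

The rest of the file is left untouched, which is exactly why this beats the "nuclear" delete-the-whole-file approach.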

Wednesday, 31 July 2019

Synology NAS and SSH Ciphers

I've seen this before: -

ssh -i ~/.ssh/id_rsa admin@diskstation

Unable to negotiate with port 22: no matching cipher found. Their offer: aes128-cbc,3des-cbc,aes192-cbc,aes256-cbc

but had forgotten how to debug/solve it.

The NAS, a Synology DS-414, is running: -

DSM 6.2.1-23824 Update 4

and the client is my Mac, running macOS 10.14.6.

This helped: -

ssh error: unable to negotiate with IP: no matching cipher found

which advised running: -

ssh -Q cipher


and then picking one of the ciphers that BOTH the Synology AND the Mac support.

I chose: -

aes256-cbc
as follows: -

ssh -c aes256-cbc -i ~/.ssh/id_rsa admin@diskstation

and was in like Flynn: -

admin@DiskStation:~$ uname -a

Linux DiskStation 3.2.40 #23824 SMP Fri Sep 7 12:49:31 CST 2018 armv7l GNU/Linux synology_armadaxp_ds414
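The cipher overlap can be computed rather than eyeballed; comm over the two sorted lists shows what both ends support ( the server list below is the DS-414's offer, taken from the error message ): -

```shell
# Client-side ciphers compiled into this OpenSSH build
ssh -Q cipher | sort > /tmp/client_ciphers
# The Synology's offer, from "Their offer: ..." in the error
printf '%s\n' aes128-cbc 3des-cbc aes192-cbc aes256-cbc | sort > /tmp/server_ciphers
comm -12 /tmp/client_ciphers /tmp/server_ciphers    # ciphers in BOTH lists
```

Any line in the output is a valid value for ssh -c; note that ssh -Q cipher lists everything the client is built with, not just the ciphers enabled by default.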

For the record, and as I type this, I'm busy updating the Synology to the most recent firmware.

In addition, given this: -

I'm also double-checking my security, in terms of from where one can access the NAS ( hint, ONLY from the LAN, not the WAN ) and also in terms of having a nice long, strong, complex password ......

Or, to put it another way, Patchy McPatchface ( thanks, Bart )

Friday, 26 July 2019

Logging into Docker Hub, or similar container registries, with "special" passwords

Note to self: if your password contains a special character such as a dollar sign ( $ ), ensure that you escape it IF you're putting your password into a Bash environment variable, or passing it as a parameter to a Bash script.

Obviously, NOBODY would do that but .....

So, for a password such as: -


it should be escaped thusly: -


Again, do NOT use passwords in variables or on the CLI ......
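A quick illustration of why ( the password here is a made-up example ): -

```shell
# "pa$$word" in double quotes: $$ expands to the shell's process ID!
p_bad="pa$$word"
# Single quotes, or escaping each dollar, keep it literal:
p_quoted='pa$$word'
p_escaped="pa\$\$word"
printf '%s\n' "$p_quoted" "$p_escaped"
```

Both safe forms print pa$$word verbatim, whereas the double-quoted version silently becomes pa<PID>word - and the registry login just fails with "unauthorized".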

Thursday, 25 July 2019

Python - Yet another way to learn ... Edublocks

I heard about this on this week's Digital Planet podcast, and it struck a chord, in part because I've struggled with Python in the past: -


Joshua Lowe - EduBlocks

Finalist category: BT Young Pioneer

So, what exactly is EduBlocks? It's a drag-and-drop version of the computer coding language Python 3, which allows students to learn the Python syntax with minimal errors, allowing younger children to access Python. This is something that proves a problem around the world, especially in the UK since the introduction of the new computing curriculum back in 2014. The aim of EduBlocks is to remove the barriers faced when making the transition from block-based programs like Scratch to the text-based language Python, easier for students and teachers, as presently there is no drop-in solution that bridges this gap.

Something at which to look ......

Monday, 22 July 2019

IBM WebSphere Liberty Profile and IBM DB2 and Docker - An approach

One of my IBM colleagues asked me about an approach for "versioning" DB2 JDBC drivers within a Docker image that's using those drivers to connect to a DB2 database.

He was looking at options for ensuring that the drivers can be updated to match the target DB2 runtime, without needing to rebuild the entire WebSphere Liberty Profile image each and every time.

There are a number of approaches, including storing the drivers on an external ( to the image/container ) file-system, using a Docker volume.

As an example, something like this: -

docker run -v /host/directory:/container/directory image

In the absence of volumes, we could configure the container to reach out to an external file/web-server at boot-time ( akin to a bootstrap service )

As an alternative, we considered an approach where the image hosting WebSphere Liberty Profile references the JDBC drivers from a DIFFERENT image when it's built, meaning that we CAN rebuild the Liberty image without a huge amount of impact upon the size/layers of the Liberty image itself.

This approach seemed to work ....

I started by pulling the requisite images: -

docker pull websphere-liberty

docker pull store/ibmcorp/db2_developer_c:

and created an environment file to start the DB2 instance within its own container: -

vi ~/.envlist


and started the DB2 database container: -

db2=`docker run -h db2server --name db2server --restart=always --detach  --privileged=true -p 50000:50000 -p 55000:55000 --env-file ~/.envlist -v /db2data:/database store/ibmcorp/db2_developer_c:`

and checked the logs until DB2 came up: -

docker logs $db2 -f

07/19/2019 19:10:30     0   0   SQL1063N  DB2START processing was successful.
SQL1063N  DB2START processing was successful.
(*) Starting TEXT SEARCH service ...
CIE00001 Operation completed successfully. 

and then logged into the running container: -

docker exec -it $db2 /bin/bash

and switched to the db2inst1 user: -

su - db2inst1

Last login: Mon Jul 22 13:44:17 UTC 2019

and listed the DB directory: -

db2 list db directory

 System Database Directory

 Number of entries in the directory = 1

Database 1 entry:

 Database alias                       = SAMPLE
 Database name                        = SAMPLE
 Local database directory             = /database/config/db2inst1
 Database release level               = 14.00
 Comment                              =
 Directory entry type                 = Indirect
 Catalog database partition number    = 0
 Alternate server hostname            =
 Alternate server port number         =

and connected to the SAMPLE DB: -

db2 connect to sample

   Database Connection Information

 Database server        = DB2/LINUXZ64
 SQL authorization ID   = DB2INST1
 Local database alias   = SAMPLE

and query the EMPLOYEE table: -

db2 'select * from employee'

------ ------------ ------- --------------- -------- ------- ---------- -------- ------- --- ---------- ----------- ----------- -----------
000010 CHRISTINE    I       HAAS            A00      3978    01/01/1995 PRES          18 F   08/24/1963   152750.00     1000.00     4220.00
000020 MICHAEL      L       THOMPSON        B01      3476    10/10/2003 MANAGER       18 M   02/02/1978    94250.00      800.00     3300.00
000030 SALLY        A       KWAN            C01      4738    04/05/2005 MANAGER       20 F   05/11/1971    98250.00      800.00     3060.00
000050 JOHN         B       GEYER           E01      6789    08/17/1979 MANAGER       16 M   09/15/1955    80175.00      800.00     3214.00
000060 IRVING       F       STERN           D11      6423    09/14/2003 MANAGER       16 M   07/07/1975    72250.00      500.00     2580.00
000070 EVA          D       PULASKI         D21      7831    09/30/2005 MANAGER       16 F   05/26/2003    96170.00      700.00     2893.00
000090 EILEEN       W       HENDERSON       E11      5498    08/15/2000 MANAGER       16 F   05/15/1971    89750.00      600.00     2380.00
000100 THEODORE     Q       SPENSER         E21      0972    06/19/2000 MANAGER       14 M   12/18/1980    86150.00      500.00     2092.00
000110 VINCENZO     G       LUCCHESSI       A00      3490    05/16/1988 SALESREP      19 M   11/05/1959    66500.00      900.00     3720.00
000120 SEAN                 O'CONNELL       A00      2167    12/05/1993 CLERK         14 M   10/18/1972    49250.00      600.00     2340.00
000130 DELORES      M       QUINTANA        C01      4578    07/28/2001 ANALYST       16 F   09/15/1955    73800.00      500.00     1904.00
000140 HEATHER      A       NICHOLLS        C01      1793    12/15/2006 ANALYST       18 F   01/19/1976    68420.00      600.00     2274.00
000150 BRUCE                ADAMSON         D11      4510    02/12/2002 DESIGNER      16 M   05/17/1977    55280.00      500.00     2022.00
000160 ELIZABETH    R       PIANKA          D11      3782    10/11/2006 DESIGNER      17 F   04/12/1980    62250.00      400.00     1780.00
000170 MASATOSHI    J       YOSHIMURA       D11      2890    09/15/1999 DESIGNER      16 M   01/05/1981    44680.00      500.00     1974.00
000180 MARILYN      S       SCOUTTEN        D11      1682    07/07/2003 DESIGNER      17 F   02/21/1979    51340.00      500.00     1707.00
000190 JAMES        H       WALKER          D11      2986    07/26/2004 DESIGNER      16 M   06/25/1982    50450.00      400.00     1636.00
000200 DAVID                BROWN           D11      4501    03/03/2002 DESIGNER      16 M   05/29/1971    57740.00      600.00     2217.00
000210 WILLIAM      T       JONES           D11      0942    04/11/1998 DESIGNER      17 M   02/23/2003    68270.00      400.00     1462.00
000220 JENNIFER     K       LUTZ            D11      0672    08/29/1998 DESIGNER      18 F   03/19/1978    49840.00      600.00     2387.00
000230 JAMES        J       JEFFERSON       D21      2094    11/21/1996 CLERK         14 M   05/30/1980    42180.00      400.00     1774.00
000240 SALVATORE    M       MARINO          D21      3780    12/05/2004 CLERK         17 M   03/31/2002    48760.00      600.00     2301.00
000250 DANIEL       S       SMITH           D21      0961    10/30/1999 CLERK         15 M   11/12/1969    49180.00      400.00     1534.00
000260 SYBIL        P       JOHNSON         D21      8953    09/11/2005 CLERK         16 F   10/05/1976    47250.00      300.00     1380.00
000270 MARIA        L       PEREZ           D21      9001    09/30/2006 CLERK         15 F   05/26/2003    37380.00      500.00     2190.00
000280 ETHEL        R       SCHNEIDER       E11      8997    03/24/1997 OPERATOR      17 F   03/28/1976    36250.00      500.00     2100.00
000290 JOHN         R       PARKER          E11      4502    05/30/2006 OPERATOR      12 M   07/09/1985    35340.00      300.00     1227.00
000300 PHILIP       X       SMITH           E11      2095    06/19/2002 OPERATOR      14 M   10/27/1976    37750.00      400.00     1420.00
000310 MAUDE        F       SETRIGHT        E11      3332    09/12/1994 OPERATOR      12 F   04/21/1961    35900.00      300.00     1272.00
000320 RAMLAL       V       MEHTA           E21      9990    07/07/1995 FIELDREP      16 M   08/11/1962    39950.00      400.00     1596.00
000330 WING                 LEE             E21      2103    02/23/2006 FIELDREP      14 M   07/18/1971    45370.00      500.00     2030.00
000340 JASON        R       GOUNOT          E21      5698    05/05/1977 FIELDREP      16 M   05/17/1956    43840.00      500.00     1907.00
200010 DIAN         J       HEMMINGER       A00      3978    01/01/1995 SALESREP      18 F   08/14/1973    46500.00     1000.00     4220.00
200120 GREG                 ORLANDO         A00      2167    05/05/2002 CLERK         14 M   10/18/1972    39250.00      600.00     2340.00
200140 KIM          N       NATZ            C01      1793    12/15/2006 ANALYST       18 F   01/19/1976    68420.00      600.00     2274.00
200170 KIYOSHI              YAMAMOTO        D11      2890    09/15/2005 DESIGNER      16 M   01/05/1981    64680.00      500.00     1974.00
200220 REBA         K       JOHN            D11      0672    08/29/2005 DESIGNER      18 F   03/19/1978    69840.00      600.00     2387.00
200240 ROBERT       M       MONTEVERDE      D21      3780    12/05/2004 CLERK         17 M   03/31/1984    37760.00      600.00     2301.00
200280 EILEEN       R       SCHWARTZ        E11      8997    03/24/1997 OPERATOR      17 F   03/28/1966    46250.00      500.00     2100.00
200310 MICHELLE     F       SPRINGER        E11      3332    09/12/1994 OPERATOR      12 F   04/21/1961    35900.00      300.00     1272.00
200330 HELENA               WONG            E21      2103    02/23/2006 FIELDREP      14 F   07/18/1971    35370.00      500.00     2030.00
200340 ROY          R       ALONZO          E21      5698    07/05/1997 FIELDREP      16 M   05/17/1956    31840.00      500.00     1907.00

  42 record(s) selected.

Having validated that the DB2 container was clean-and-green, I then proceeded to download a recent set of DB2 JDBC drivers from here: -

and extracted the relevant JARs to a newly created subdirectory: -

mkdir /db2jars
tar xzvf /tmp/v11.1.4fp4_jdbc_sqlj.tar.gz -C /tmp
unzip /tmp/jdbc_sqlj/ -d /tmp
cp /tmp/db2jcc.jar /db2jars
cp /tmp/db2jcc4.jar /db2jars

and then created a TAR file containing those JARs: -

tar cvf dependency.tar /db2jars
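In miniature, the "jars-only" image is nothing more than a tarball layered onto an empty base, and the tar step can be sanity-checked locally first ( demo paths only; note that -C avoids tar's "Removing leading '/'" warning ): -

```shell
# Stand-in for /db2jars with a dummy driver JAR
rm -rf /tmp/db2jars_demo && mkdir -p /tmp/db2jars_demo
echo "not a real driver" > /tmp/db2jars_demo/db2jcc4.jar
# -C keeps the archive free of leading absolute paths
tar cf /tmp/dependency_demo.tar -C /tmp db2jars_demo
tar tf /tmp/dependency_demo.tar
```

Listing the archive before baking it into the FROM scratch image confirms the JARs land at the paths the later COPY --from expects.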

I then created a Dockerfile: -

vi Dockerfile.DB2

FROM scratch
ADD dependency.tar /

and built an image: -

docker build -t db2jars:latest -f ~/Dockerfile.DB2 .

Sending build context to Docker daemon  1.459GB
Step 1/2 : FROM scratch
Step 2/2 : ADD dependency.tar /
 ---> d7cfd446014e
Successfully built d7cfd446014e
Successfully tagged db2jars:latest

which gave me a Docker image: -

docker images

REPOSITORY                                            TAG                 IMAGE ID            CREATED             SIZE
db2jars                                               latest              69deb549f3f0        5 days ago          8.05MB

I then created a second Dockerfile, for WebSphere Liberty Profile: -

vi ~/Dockerfile.WLP

FROM websphere-liberty:latest

ENV LICENSE accept

COPY --from=db2jars:latest /db2jars /db2jars
COPY JdbcTestDB2.class /
CMD ["java","-cp","/:/db2jars/db2jcc.jar","JdbcTestDB2","","50000","sample","db2inst1","passw0rd"]

and built the image: -

docker build -t wlp:latest -f ~/Dockerfile.WLP .

Sending build context to Docker daemon  1.459GB
Step 1/5 : FROM websphere-liberty:latest
 ---> 5005e127f3b4
Step 2/5 : ENV LICENSE accept
 ---> Using cache
 ---> 4fb20054c1b4
Step 3/5 : COPY --from=db2jars:latest /db2jars /db2jars
 ---> Using cache
 ---> 7e7ba23d46d4
Step 4/5 : COPY JdbcTestDB2.class /
 ---> Using cache
 ---> ffe1ef5dca2c
Step 5/5 : CMD ["java","-cp","/:/db2jars/db2jcc.jar","JdbcTestDB2","","50000","sample","db2inst1 ","Qp455w0rd"]
 ---> Using cache
 ---> 490fac5dabe4
Successfully built 490fac5dabe4
Successfully tagged wlp:latest

I then instantiated the container: -

wlp=`docker run -d -t -p 80:9080 -p 443:9443 wlp:latest`

and checked the logs: -

docker logs $wlp -f

000330 WING LEE
200140 KIM NATZ
200220 REBA JOHN

Magic has occurred ....

For the record, the reason that this happens is in the last line of Dockerfile.WLP: -

CMD ["java","-cp","/:/db2jars/db2jcc.jar","JdbcTestDB2","","50000","sample","db2inst1","passw0rd"]

which uses this Java class: - 

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

class JdbcTestDB2 {
    public static void main(String args[]) {
        // Load the DB2 JCC driver
        try {
            Class.forName("com.ibm.db2.jcc.DB2Driver");
        } catch (ClassNotFoundException e) {
            System.err.println(e);
            System.exit(-1);
        }

        String hostname      = args[0];
        String port          = args[1];
        String dbName        = args[2];
        String userName      = args[3];
        String password      = args[4];
        String sslConnection = "false";

        java.util.Properties properties = new java.util.Properties();
        properties.put("user", userName);
        properties.put("password", password);
        properties.put("sslConnection", sslConnection);

        String url = "jdbc:db2://" + hostname + ":" + port + "/" + dbName;

        try {
            Connection connection = DriverManager.getConnection(url, properties);

            String query = "select EMPNO,FIRSTNME,LASTNAME from DB2INST1.EMPLOYEE";

            Statement statement = connection.createStatement();
            ResultSet rs = statement.executeQuery(query);

            while (rs.next()) {
                System.out.println(rs.getString(1) + " " + rs.getString(2) + " " + rs.getString(3));
            }
            connection.close();
        } catch (java.sql.SQLException e) {
            System.err.println(e);
            System.exit(-1);
        }
    }
}

to connect to the DB2 container, using the IP address of the box hosting the container, on port 50000 ( which is mapped from container to host when we start the DB2 container, via -p 50000:50000 ).

So, when I'm ready to rev the DB2 drivers, I merely need to repeat the above steps to download them, TAR them up, create a new Docker image, and rebuild the WLP image, without making a huge impact upon the size / layers of the image itself.

As I said at the beginning, this is ONE way of solving the problem BUT FUN!!!!

Thursday, 18 July 2019

Jenkins and the Case of the Missing Body

I was repeatedly seeing this: -

java.lang.IllegalStateException: There is no body to invoke

with a Jenkins Pipeline that I was executing; this Pipeline executes whenever one commits new code into a GitHub Enterprise (GHE) repository, with a Pull Request.

To debug this further, I created a dummy GHE repository with a corresponding Jenkinsfile, and a new Jenkins pipeline.

This allowed me to hack/iterate on the code in the GHE web UI, and immediately test the Pipeline within Jenkins itself.

Without wishing to give away the plot, I'll TL;DR and say that the problem was ME ( quelle surprise ).

Here's my initial Jenkinsfile: -

node
    checkout scm
    def givenName = "Dave"
    def familyName = "Hay"
    withEnv(["GIVEN_NAME=${givenName}", "FAMILY_NAME=${familyName}"]) {
        sh '''#!/bin/bash
        echo "Doing it"
        echo $GIVEN_NAME
        echo $FAMILY_NAME
        '''
    }

Can you see the problem ?

It took me a while ....

The node directive is NOT followed by a set of braces, meaning that nothing actually gets done, hence the exception.

The code SHOULD look like this: -

node() {
    checkout scm
    def givenName = "Dave"
    def familyName = "Hay"
    withEnv(["GIVEN_NAME=${givenName}", "FAMILY_NAME=${familyName}"]) {
        sh '''#!/bin/bash
        echo "Doing it"
        echo $GIVEN_NAME
        echo $FAMILY_NAME
        '''
    }
}

In other words, the node() directive needs something to do, hence the need for the braces, which can contain one or more stages(), plus associated directives.

Nice :-)

Tuesday, 16 July 2019

Containers: A Complete Guide

I found this whilst looking for something completely different: -

Containers: A Complete Guide

This guide looks at the importance of containers in cloud computing, highlighting the benefits and showing how containers figure into such technologies as Docker, Kubernetes, Istio, VMs, and Knative.

Quite a nice little introduction ...

Monday, 15 July 2019

Shelling out - fun with Ubuntu shells

I saw this: -

-sh: 2: [: -gt: unexpected operator
-sh: 29: [: -gt: unexpected operator

when logging into an Ubuntu boxen.

I was pretty sure that this'd worked before, but wondered whether my shell was giving me (s)hell ....

I checked what I was currently running: -

echo $SHELL


which is a flavour of the Bourne Again SHell ( BASH ).

I then checked the /etc/passwd file: -

cat /etc/passwd


and realised that I didn't have an explicit shell set.
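That would explain the errors: without an explicit shell, the login defaults to /bin/sh ( dash, on Ubuntu ), and dash reports exactly this kind of "unexpected operator" when a test expression goes wrong - for example, when an unset variable empties out a numeric comparison. Here's a minimal sketch of that failure mode, together with the quoting fix that keeps the test safe in any POSIX shell: -

```shell
# Sketch of the failure: with VALUE unset, dash expands [ $VALUE -gt 0 ]
# to [ -gt 0 ], producing "[: -gt: unexpected operator".
# Quoting with a default value makes the comparison safe:
unset VALUE
if [ "${VALUE:-0}" -gt 0 ]; then
    RESULT="greater"
else
    RESULT="not greater"
fi
echo "${RESULT}"
```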

I upped my authority ( super user do ): -

sudo bash

[sudo] password for hayd: 

and then updated my account: -

usermod --shell /bin/bash hayd

Now /etc/passwd looks OK: -


and I'm now all good to go: -

echo $SHELL


Friday, 12 July 2019

Intro Guide to Dockerfile Best Practices

Not sure how I found this ( it MAY have been Twitter ), but this is rather useful: -

Intro Guide to Dockerfile Best Practices

especially whilst I've been automating the build of Docker images via Jenkins pipelines.

Definitely a few tips to try, such as: -

Tip #4: Remove unnecessary dependencies

Remove unnecessary dependencies and do not install debugging tools. If needed, debugging tools can always be installed later. Certain package managers, such as apt, automatically install packages that are recommended by the user-specified package, unnecessarily increasing the footprint. Apt has the --no-install-recommends flag which ensures that dependencies that were not actually needed are not installed. If they are needed, add them explicitly.
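By way of illustration, tip #4 might look like this in a Dockerfile ( the package names are purely illustrative ): -

```dockerfile
# Install only what is declared; skip apt's "recommended" extras,
# and clear the package cache to keep the layer small
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl ca-certificates && \
    rm -rf /var/lib/apt/lists/*
```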

Go read !

Now Available - IBM Cloud Hyper Protect Virtual Servers

I'm pleased to see one of the IBM Z offerings upon which my Squad are working is now available in the IBM Cloud Experimental Services section of the IBM Cloud Catalog: -

Hyper protect line of virtual servers service leveraging the unparalleled security and reliability of Secure Service Containers on IBM Z.



Ability to deploy a Virtual Server in a Secure Service Container ensuring confidentiality of data and code running within the VS

Z Capabilities on the cloud

Ability to deploy workload into the most secure, highly performant, Linux virtual server with extreme vertical scale

Easy to use, open, and flexible

User experience at parity with market leaders both when buying and using the VS; with the openness and flexibility of a public cloud

No Z skills required

Access Z technology without having to purchase, install, and maintain unique hardware

IBM Cloud Hyper Protect Virtual Servers

Yay us!

Friday, 5 July 2019

Book Review - Left To Our Own Devices, by Margaret E Morris

As mentioned previously, I've been writing a series of book reviews for the British Computer Society (BCS), including: -

Book Review - You'll See This Message When It Is Too Late - The Legal and Economic Aftermath of Cybersecurity Breaches

Rails, Angular, Postgres, and Bootstrap - A Book Review

Kubernetes Microservices with Docker - A Book Review

Book Review - Mastering Puppet Second Edition by Thomas Uphill


So here's the most recent review - as before, for full disclosure, I must mention that BCS kindly provided me with a free hardcopy of the book, albeit a review version: -

Left To Our Own Devices, by Margaret E Morris

If nothing else, the title of this book intrigued me, in part because it reminded me of a Pet Shop Boys track from my youth. More seriously, the subtitle of the book: -

Outsmarting smart technology to reclaim our relationships, health and focus

resonated with a lot of recent media coverage about the impacts, both real and perceived, both positive and negative, of information technology in the modern era.

Whilst I don't claim to have strong opinions about the topic, or be particularly well-informed, apart from as a consumer, I have given thought to my family's use of mobile devices, Internet of Things gadgets, so-called smart home technology etc.

I'd especially considered limits on screen time, impact on sleep patterns, exposure to sources of news, including social media, and my tendency to live in a bubble, self-selecting news and opinions that mirror my own.

Therefore, this book came at precisely the right time, and opened my eyes to a number of use cases of technology, including smart lighting, health tracking ( including the so-called Quantified Self ), social media and messaging, technology as an art-form, self-identity, including gender and sexuality, and technology as a therapist.

Ms Morris illustrates each chapter, of which there are eight, with a large number of individual user stories, taking inspiration and insight from real people, who allow her to share how they use technology, mainly for the positive, but with thought and insight.

Despite the title, and the subtitle, I found the book to be a very positive read; whilst there are definitely shortcomings to an over-use and over-reliance upon technology, the book shows how humans do manage to mostly outsmart their smart technology, and get from it what they need, whether or not that's what the original inventor intended.

I didn't come away with a list of Do's and Don'ts, but a better understanding of how, and why, people choose to use certain technologies, and, therefore, how I can evaluate my own use, and be more qualitative in my choice of technologies.

In conclusion, I strongly recommend this book; it's a relatively short read, coming in at ~130 pages, and is pitched at a high enough level that one doesn't need to be a total geek to get the points raised, whether or not one is a total geek.

Out of 10, I'd give this book 10, mainly for completeness, brevity and for the all-important human touch.

Thursday, 4 July 2019

Docker Registries and Repositories - Is there a difference ? ( Hint, yes, there really is )

This came up in discussion today, and one of my colleagues pointed me here: -

Difference between Docker registry and repository

A Docker registry is a service that stores your Docker images.

A Docker registry could be hosted by a third party, as a public or private registry, like one of the following registries:

    Docker Hub,
    Google Container Registry,
    AWS Container Registry

or you can host the Docker registry yourself.

A Docker repository is a collection of different Docker images with the same name, that have different tags. A tag is an alphanumeric identifier of the image within a repository.

For example, there are many different tags for the official python image; these tags are all members of the official python repository on Docker Hub. Docker Hub is a Docker Registry hosted by Docker.
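To make the naming concrete, here's a sketch that pulls a fully-qualified image name apart into registry, repository and tag ( the image name below is just an illustrative example ): -

```shell
# Sketch: decompose a fully-qualified image name into its three parts
IMAGE="docker.io/library/python:3.8-slim"
REGISTRY="${IMAGE%%/*}"            # everything before the first "/"  -> docker.io
REPOSITORY="${IMAGE#*/}"           # strip the registry prefix
REPOSITORY="${REPOSITORY%:*}"      # strip the tag suffix              -> library/python
TAG="${IMAGE##*:}"                 # everything after the last ":"     -> 3.8-slim
echo "${REGISTRY} ${REPOSITORY} ${TAG}"
```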

To find out more read:

IBM Cloud also helped me here, in that I have an IBM Cloud Container Registry service, aka ICCR, within which I have access to several Repositories, and the ICCR UI helpfully tells me: -

A repository is a set of related images with the same name, but different tags.

which is, as they say, nice 😂

Monday, 1 July 2019

Bash and a sufficiency of input parameters

I hit an interesting quirk in Bash earlier today; I'm passing in a list of command-line parameters to a Bash script, using the $1, $2 etc. input parameter method.

However, I noticed that the TENTH parameter failed, and I ended up with a trailing zero on the end of a string that was actually the FIRST parameter.

It appeared that Bash was stopping at 9, and then simply adding the character '0' to the end of the string provided as the FIRST parameter.

Here's an example: -


export A=$1
export B=$2
export C=$3
export D=$4
export E=$5
export F=$6
export G=$7
export H=$8
export I=$9
export J=$10

echo "The tenth parameter is" $J

When I execute the script: -

~/ 1 2 3 4 5 6 7 8 9 0

I'd expect to see this: -

The tenth parameter is 0

whereas I actually saw this: -

The tenth parameter is 10

As ever, the internet came to the rescue: -

which said, in part: -

Use curly braces to set them off:

echo "${10}"

I updated my script: -


export A=${1}
export B=${2}
export C=${3}
export D=${4}
export E=${5}
export F=${6}
export G=${7}
export H=${8}
export I=${9}
export J=${10}

echo "The tenth parameter is" $J

and now it works as expected: -

The tenth parameter is 0
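The difference is easy to reproduce in isolation, using set -- to fake ten positional parameters: -

```shell
# Sketch: bash parses $10 as ${1} followed by a literal "0",
# whereas ${10} is the real tenth positional parameter
set -- a b c d e f g h i j
TENTH_WRONG="$10"    # first parameter "a" plus literal 0 -> "a0"
TENTH_RIGHT="${10}"  # braces select the tenth parameter  -> "j"
echo "${TENTH_WRONG} ${TENTH_RIGHT}"
```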

To be fair, the article also said: -

You can also iterate over the positional parameters like this:

for arg; do
    echo "$arg"
done

for arg in "$@"; do
    echo "$arg"
done

while (( $# > 0 )); do    # or [ $# -gt 0 ]
    echo "$1"
    shift
done

which I should definitely try .........
