Saturday, 11 July 2020

Modernize a monolithic Node.js application into a microservices architecture using IBM Cloud Pak for Applications

From one of my IBM colleagues, we have this: -

This tutorial shows how to transform a traditional monolithic core banking application, which is implemented in Node.js, into a modern microservices architecture by using IBM Cloud Pak for Applications.

Cloud Pak for Applications speeds the development of applications that are built for Kubernetes by using agile DevOps processes. Running on Red Hat OpenShift, the Cloud Pak provides a hybrid, multicloud foundation that is built on open standards, enabling workloads and data to run anywhere. It integrates two main open source projects: Kabanero and Appsody.



Wednesday, 8 July 2020

Encrypt Kubernetes secrets with IBM Cloud Hyper Protect Crypto Services

This was recently authored by a couple of my IBM colleagues: -

Encrypt Kubernetes secrets with IBM Cloud Hyper Protect Crypto Services

Create a secret in Kubernetes, create a root key in Hyper Protect Crypto Services, and enable KMS encryption in Kubernetes

This tutorial shows you how to encrypt your Kubernetes secrets using IBM Cloud Hyper Protect Crypto Services as the KMS provider. You'll learn how to create a secret in IBM Cloud Kubernetes, create a root key in Hyper Protect Crypto Services, and encrypt the secrets and etcd component of your Kubernetes master with the root key in your Hyper Protect Crypto Services instance.
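For context, the "create a secret in Kubernetes" step is plain kubectl; a minimal sketch, with a hypothetical secret name and key/value: -

# create a throwaway secret ( hypothetical name and value )
kubectl create secret generic demo-secret --from-literal=apiKey=s3cr3t

# confirm it exists ( the data is only base64-encoded until KMS encryption at rest is enabled )
kubectl get secret demo-secret -o yaml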

Tuesday, 30 June 2020

Mockers ride again

I'm tinkering with Java at the moment, using Microsoft Visual Studio Code ( my "new" fave IDE-of-choice ), and am looking at mocking specific HTTP services and calls .....

To that end ....





Monday, 29 June 2020

Apple iMovie and the Ken Burns effect

PSA: If using iMovie to create ... movies from screen recordings taken using [cmd][shift][5], be aware of the auto-cropping, including the so-called Ken Burns effect

https://support.apple.com/kb/PH22923?locale=en_US&viewlocale=en_US

I ended up recording clips several times before I remembered that I'd seen, and solved, this before ....

I wanted a full-screen recording of Terminal ( because .... I live there ) and iMovie was trying to be extra-helpful ......

Tuesday, 16 June 2020

Building Kubernetes - Getting it to Go right ....

I'm currently building Kubernetes from source, on an Ubuntu box.

Having cloned the repo: -

mkdir -p $(go env GOPATH)/src/k8s.io
cd $(go env GOPATH)/src/k8s.io
git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes


I was looking at a specific version of K8s - v1.16.2 : -

git checkout tags/v1.16.2

but, when I started to build any of the K8s binaries e.g. kubectl I saw this: -

make kubectl

cannot find module providing package k8s.io/kubernetes: unrecognized import path "k8s.io": parse https://k8s.io/?go-get=1: no go-import meta tags (meta tag k8s.io/ did not match import path k8s.io)
+++ [0616 01:06:04] Building go targets for linux/amd64:
    ./vendor/k8s.io/code-generator/cmd/deepcopy-gen
go: finding module for package k8s.io/kubernetes/vendor/k8s.io/code-generator/cmd/deepcopy-gen
can't load package: package k8s.io/kubernetes/vendor/k8s.io/code-generator/cmd/deepcopy-gen: no matching versions for query "latest"
!!! [0616 01:06:07] Call tree:
!!! [0616 01:06:07]  1: /root/go/src/k8s.io/kubernetes/hack/lib/golang.sh:714 kube::golang::build_some_binaries(...)
!!! [0616 01:06:07]  2: /root/go/src/k8s.io/kubernetes/hack/lib/golang.sh:853 kube::golang::build_binaries_for_platform(...)
!!! [0616 01:06:07]  3: hack/make-rules/build.sh:27 kube::golang::build_binaries(...)
!!! [0616 01:06:07] Call tree:
!!! [0616 01:06:07]  1: hack/make-rules/build.sh:27 kube::golang::build_binaries(...)
!!! [0616 01:06:07] Call tree:
!!! [0616 01:06:07]  1: hack/make-rules/build.sh:27 kube::golang::build_binaries(...)
Makefile.generated_files:200: recipe for target '_output/bin/deepcopy-gen' failed
make[1]: *** [_output/bin/deepcopy-gen] Error 1
Makefile:559: recipe for target 'generated_files' failed
make: *** [generated_files] Error 2


This led me here: -

can't load package: package k8s.io/kubernetes: cannot find module providing package k8s.io/kubernetes #84224

and here: -

switch build scripts to use go modules, stop requiring $GOPATH #82531

and made me realise that I had a disparity between the version of Go I was using: -

go version

go version go1.14.3 linux/amd64

and the version of K8s itself.

In other words, I was trying to build K8s 1.16.2 using Go 1.14.3 - and each K8s release is built and tested against a specific Go version.

Thankfully, the K8s repo has the answer: -
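( For reference, one rough way to check which Go version a given release expects is to grep its build scripts - a sketch, assuming the minimum_go_version variable exists in that release's hack/lib/golang.sh: )

# check what the release branch expects ...
git checkout tags/v1.16.2
grep -n "minimum_go_version" hack/lib/golang.sh

# ... versus what's installed locally
go version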



Once I realised this, I switched back to the master branch of K8s ( which is the latest, and is happy with a newer Go toolchain ) and re-cloned the repo: -

Cloning into 'kubernetes'...
remote: Enumerating objects: 229, done.
remote: Counting objects: 100% (229/229), done.
remote: Compressing objects: 100% (136/136), done.
remote: Total 1118551 (delta 115), reused 95 (delta 93), pack-reused 1118322
Receiving objects: 100% (1118551/1118551), 688.00 MiB | 25.10 MiB/s, done.
Resolving deltas: 100% (800101/800101), done.
Updating files: 100% (22439/22439), done.


cd kubernetes/
git branch

* master

all was good

make kubectl

+++ [0616 01:15:05] Building go targets for linux/amd64:
    ./vendor/k8s.io/code-generator/cmd/prerelease-lifecycle-gen
+++ [0616 01:15:15] Building go targets for linux/amd64:
    ./vendor/k8s.io/code-generator/cmd/deepcopy-gen
+++ [0616 01:15:31] Building go targets for linux/amd64:
    ./vendor/k8s.io/code-generator/cmd/defaulter-gen
+++ [0616 01:15:45] Building go targets for linux/amd64:
    ./vendor/k8s.io/code-generator/cmd/conversion-gen
+++ [0616 01:16:08] Building go targets for linux/amd64:
    ./vendor/k8s.io/kube-openapi/cmd/openapi-gen
+++ [0616 01:16:27] Building go targets for linux/amd64:
    ./vendor/github.com/go-bindata/go-bindata/go-bindata
+++ [0616 01:16:30] Building go targets for linux/amd64:
    cmd/kubectl


Friday, 5 June 2020

Demo: Z DevOps using Git and Jenkins pipeline - 17 June 2020

The objective of this session is to demonstrate the power of collaborative development and delivery enabled by the IBM Z for DevOps solutions. Application teams can learn how they can modernize their application portfolio using modern technology and tools. We will focus on code, build, automated unit testing and deployment. Production teams can learn how they can extend and automate their existing build and deployment solutions for enterprise applications.
 
This session will demonstrate how IBM Z DevOps and open source tools like Git and Jenkins progress continuous integration, continuous testing and continuous delivery in the z/OS operating environment.
 
Our scenario consists of a "broken" COBOL/CICS/DB2 application. The developer will use IBM DevOps tools to locate and identify the faulty code, make the appropriate fixes and compile/link/bind using a new build automation tool (DBB) that integrates with Git and Jenkins pipeline on z/OS. Once the code is fixed, the developer will push the changes to a Git repository.

In order to demonstrate continuous delivery capabilities, this session will showcase a Jenkins pipeline integrated with IBM tools for the build process, unit testing (using the new zUnit functionality), and deployment of code to a CICS environment (using UrbanCode Deploy).
While our scenario uses a COBOL/CICS/DB2 program as an example, the process is similar for other environments or languages (such as Batch, IMS, or PL/1).

Something to read/do later - learn to use the iter8 toolchain with Kubernetes

A colleague drew my attention to iter8 

Automate canary releases and A/B testing for cloud-native development

via this tutorial: -

Progressively roll out your application in Kubernetes by using the iter8 toolchain

Learn how to set up and use a DevOps toolchain by using iter8 to implement a progressive rollout of a new version of your application.


That's next on my hit list .....

Want to learn COBOL ? Here's how ...

During yesterday's most excellent MainframerZ Meetup, one of our speakers, Will Yates, talked about the COBOL enablement available via the Open Mainframe Project.

Here it is: -

The COBOL Training Course is an open source initiative under the Open Mainframe Project that offers introductory-level educational COBOL materials with modern tooling. 


Remember that COBOL skills are still definitely relevant, despite the language's relative maturity - yesterday's Meetup amplified that ( replays available shortly ) - so get learning !

Thursday, 4 June 2020

Webcast - Keep your data yours – IBM Cloud Hyper Protect Services - 16 June 2020

One of my esteemed colleagues, Stefan Liesche, who is the IBM Distinguished Engineer for IBM Cloud Hyper Protect Services, is presenting next week: -

With the introduction of Hyper Protect Services as part of IBM's public and private cloud offerings, we are putting data protection at the center of our focus. This family of services, containers and IaaS runtimes supports the construction of cloud solutions on cloud native technologies and patterns to inherit many of the protection capabilities of the hyper protection profile.

We are enabling developers to build secure cloud applications seamlessly using a portfolio of cloud services powered by IBM LinuxONE.
It is now easier than ever to fortify data protection with increased customer control of access (including from privileged users) and encryption of data at rest and in flight, and to extend this protection to data in use.


Worth an hour of your time, methinks

Please register here: -

Thursday, 28 May 2020

More on the old ToDo list - tinkering with Red Hat OpenShift Container Platform

Having spent a few years working on "pure" Kubernetes, I also need to get up-to-speed with RH OCP ....

I've spun up a cluster on the IBM Cloud, now need to get around to using/learning it ...

To me, OCP is a distribution of K8s, but it's oh-so-much more ....

Here's at what I'm looking....






Thursday, 21 May 2020

Docker and Node and macOS - Computer Said No

I'm using a Docker container that runs a NodeJS application to lint-check YAML files relating to my new best friend, Tekton CD.

This container is run as follows: -

docker run --volume $(pwd):/foo --rm --interactive tekton-lint /foo/sbs_pipeline.yaml

and should check the YAML for errors such as this: -

Error: Pipeline 'pipeline' references task 'build-task' but the referenced task cannot be found. To fix this, include all the task definitions to the lint task for this pipeline.

Sadly, however, when I ran the container, I saw: -

[Error: EPERM: operation not permitted, open '/foo/foo.yaml'] {

Something told me that this MAY be related to the path of the file that I was checking ....

Note that the container uses Docker volumes, via the --volume switch, mapping FROM $(pwd) on the host TO /foo inside the container.

Therefore, I wondered whether the problem was with the FROM path, which was a GitHub repository cloned locally on my Mac.
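A quick way to test that hypothesis, independently of tekton-lint, is to mount the same path into a stock image and list it - a rough sketch, assuming the alpine image is pullable: -

# if this also fails with EPERM, the problem is the host path / permissions, not the lint tool
docker run --rm --volume $(pwd):/foo alpine ls -la /foo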

Well, I was close .......

For no particular reason, I've chosen to clone GH repositories to a subdirectory of my ~/Documents folder: -

/Users/hayd/Documents/GitHub/....

Can you see where I might be going wrong ??

Yep, macOS is protecting me from myself, by denying Docker full access to my Documents folder ......


Once I changed this ( i.e. gave Docker access to my Documents folder ): -


and restarted Docker .....

All was well 👍🏽



Saturday, 9 May 2020

Running COBOL on ....

There's been a splurge of interest in COBOL recently, and especially in acquiring COBOL skills.

Now I learned COBOL in college in the late 80s, and have barely ever looked at it since, apart from a brief spot of exploration of COBOL on the AS/400 back in the 90s

But now COBOL is cool again ( hint, it never went away )

So, without a Raspberry Pi to hand, here's me running COBOL on ..... an IBM mainframe !

cat /proc/cpuinfo 

vendor_id       : IBM/S390
# processors    : 2
bogomips per cpu: 21881.00
max thread id   : 0
features : esan3 zarch stfle msa ldisp eimm dfp edat etf3eh highgprs te vx vxd vxe gs 
facilities      : 0 1 2 3 4 6 7 8 9 10 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 30 31 32 33 34 35 36 37 38 40 41 42 43 44 45 47 48 49 50 51 52 53 54 57 58 59 60 64 69 71 73 74 75 76 77 78 80 81 82 129 130 131 133 134 135 138 139 146 147 156
cache0          : level=1 type=Data scope=Private size=128K line_size=256 associativity=8
cache1          : level=1 type=Instruction scope=Private size=128K line_size=256 associativity=8
cache2          : level=2 type=Data scope=Private size=4096K line_size=256 associativity=8
cache3          : level=2 type=Instruction scope=Private size=2048K line_size=256 associativity=8
cache4          : level=3 type=Unified scope=Shared size=131072K line_size=256 associativity=32
cache5          : level=4 type=Unified scope=Shared size=688128K line_size=256 associativity=42
processor 0: version = FF,  identification = 4D3F07,  machine = 3906
processor 1: version = FF,  identification = 4D3F07,  machine = 3906

cpu number      : 0
cpu MHz dynamic : 5208
cpu MHz static  : 5208

cpu number      : 1
cpu MHz dynamic : 5208
cpu MHz static  : 5208


uname -a

Linux 766b81312d8b 4.15.0-55-generic #60-Ubuntu SMP Tue Jul 2 18:21:03 UTC 2019 s390x s390x s390x GNU/Linux

cobc --version

cobc (OpenCOBOL) 1.1.0
Copyright (C) 2001-2009 Keisuke Nishida / Roger While
Built    Aug 04 2016 15:56:22
Packaged Feb 06 2009 10:30:55 CET


cat hello.cbl 

       Identification Division.
       Program-ID. sampleCOBOL.

       Data Division.

       Procedure Division.
       Main-Paragraph.
       Display "Hello World!"
       Stop Run.


cobc -x -o hello hello.cbl 

file ./hello

./hello: ELF 64-bit MSB shared object, IBM S/390, version 1 (SYSV), dynamically linked, interpreter /lib/ld6, for GNU/Linux 3.2.0, BuildID[sha1]=351ec732463ff4add4598e53e1533e534a2fdf44, not stripped

./hello 

Hello World!

So, for the record, I'm using a Virtual Server running on an IBM Z box hosted in the IBM Cloud via the IBM Hyper Protect Virtual Servers offering.

Even better, the first one is free ......




Finally, the idea of running COBOL on Ubuntu came from this blog: -


and uses Open COBOL

Other COBOL compilers exist .......

Wednesday, 6 May 2020

Looking up nslookup


I regularly need to install tools and utilities onto newly-built Ubuntu boxes, and was looking for the nslookup tool today ....

I'd tried installing it via net-tools 

apt-get update && apt-get install -y net-tools

but that didn't help ...

I tried/failed to remember how I'd installed it last time ....

Then I found this: -


which reminded me about the apt-cache tool: -

apt-cache search nslookup

dnsutils - Clients provided with BIND
gresolver - graphical tool for performing DNS queries
knot-dnsutils - Clients provided with Knot DNS (kdig, knslookup, knsupdate)
libbot-basicbot-pluggable-perl - extended simple IRC bot for pluggable modules
libnet-nslookup-perl - simple DNS lookup module for perl


apt-get install -y dnsutils

Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
  bind9-host geoip-database libbind9-160 libdns1100 libgeoip1 libirs160 libisc169 libisccc160 libisccfg160 liblwres160
Suggested packages:
  rblcheck geoip-bin
The following NEW packages will be installed:
  bind9-host dnsutils geoip-database libbind9-160 libdns1100 libgeoip1 libirs160 libisc169 libisccc160 libisccfg160 liblwres160
...
Setting up bind9-host (1:9.11.3+dfsg-1ubuntu1.11) ...
Setting up dnsutils (1:9.11.3+dfsg-1ubuntu1.11) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...

which nslookup

/usr/bin/nslookup

nslookup -version

nslookup 9.11.3-1ubuntu1.11-Ubuntu

Job done!!

Saturday, 2 May 2020

Build and deploy a Docker image on Kubernetes using Tekton Pipelines

I've been tinkering with Tekton these past few weeks, and have found a bunch of useful resources including: -


Tekton is an open source project to configure and run continuous integration (CI) and continuous delivery (CD) pipelines within a Kubernetes cluster. In this tutorial, I walk you through basic concepts used by Tekton Pipelines. Then, you get a chance to create a pipeline to build and deploy to a container registry. You also learn how to run the pipeline, check its status, and troubleshoot issues. But before you get started, you must set up a Kubernetes environment with Tekton installed.

I've also been following a pair of Tekton tutorials: -



and, of course, the official Installing Tekton Pipelines
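( For reference, the install itself boils down to applying the released manifest with kubectl - this is, from memory, the command the official docs give: )

# install the latest Tekton Pipelines release into the cluster
kubectl apply --filename https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml

# watch the controller and webhook pods come up
kubectl get pods --namespace tekton-pipelines --watch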

Thursday, 30 April 2020

What Is - from IBM Developers

IBM Developer Advocate Sean Tracey previews a new short-form video series that will explain key technologies and concepts. Build Smart. Join a global community of developers at http://ibm.biz/IBMdeveloperYT

Available via YouTube


Enjoy!

IBM Cloud Hyper Protect - Some tasty new demos

For a little look at the things upon which my team and I are working, I give you .....


These, and more, demos on https://www.ibm.com/demos/  

Nice!

Wednesday, 29 April 2020

IBM Middleware User Community - Upcoming Events

Metrics for the win! Apr 30, 2020 from 2:00 PM to 3:00 PM (ET)

Understanding application behavior is important in any environment. The most efficient way to observe application behavior relies on metrics, key/value pairs of numerical data. This session will compare the capabilities of libraries like Micrometer, OpenTelemetry, and MicroProfile metrics. We’ll also explore how gathered data can be used to observe and understand application behavior in order to determine what should be measured. #dev-series


We will take a dive into understanding what the Reactive Streams specification is by comparing a few of the popular Java API implementations. #dev-series

Tuesday, 21 April 2020

MainframerZ meetup *online* - Mainframe skillZ at home

Join us for our 6th MainframerZ meetup on Wednesday 19th April at 17:30 BST (London). This will be our 2nd event online after the significant success of our first last month.

This event will be centred around the theme of mainframe skills that you can do at home, and will include a range of lightning talks and discussions.

We look forward to meeting new members from around the globe as well as welcoming back some of our experienced members.
Want to share something at the event? Or start a discussion? Get in touch with our organisers, we'd love to hear from you.


Wednesday, 15 April 2020

AirPods - Wax On, Wax Off

I wasn't sure whether my hearing was on the blink, or whether my AirPods gen. 2 were starting to fail

I was finding that all of my podcasts, including @podfeet's dulcet tones, were mainly coming out in the right-hand AirPod, with way less volume in the left-hand AirPod

I thought I'd cleaned them both thoroughly ....

Thankfully, the problem was (alas) earwax, and was resolved by an even more robust clean, specifically using the pointy end of a plastic tooth pick to gently unblock the grills AND the teeny-tiny little hole


The thing that made the most difference was the little hole ....

One thing that helped diagnose the issue was the ability to change the balance on my iPhone, via Settings -> Accessibility -> Audio/Visual


When I posted this to a couple of Slack teams, one of my network said : -

Interesting diagnostic technique for what turned out to be a physical problem. I think that hole is how it listens for ambient noise, even though these aren't the noise cancelling ones. If I recall correctly it's for your voice going to your caller.

which ties up with my own findings.

Someone else said much the same: -

This is a common problem with hearing aids.  I had to pay $350 to replace mine due to that problem among other issues last week. (edited) 

So now I know ......

Saturday, 11 April 2020

VNC Viewer on macOS - Who knew ?

Whilst helping a colleague with a VNC-related question, I discovered that macOS has built-in support for VNC, via the Finder -> Connect to Server option.

VNC URLs look like this: -

vnc://192.168.1.19:5901

Alternatively, one can start the same VNC Viewer client from the command line, via Terminal, with this command: -

open vnc://ubuntu:5901

For the record, I installed TigerVNC on my Ubuntu box: -

dpkg --list | grep tiger

ii  tigervnc-common                            1.7.0+dfsg-8ubuntu2                              amd64        Virtual network computing; Common software needed by servers
ii  tigervnc-standalone-server                 1.7.0+dfsg-8ubuntu2                              amd64        Standalone virtual network computing server

with this xstartup : -

cat ~/.vnc/xstartup 


#!/bin/sh

export XKL_XMODMAP_DISABLE=1
unset SESSION_MANAGER
unset DBUS_SESSION_BUS_ADDRESS

[ -x /etc/vnc/xstartup ] && exec /etc/vnc/xstartup
[ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
xsetroot -solid grey
vncconfig -iconic &

gnome-session &
nautilus &

with the VNC Server's password encoded in: -

-rw------- 1 dave dave 8 Apr 10 15:19 /home/dave/.vnc/passwd

The thing that I initially missed, but then discovered, was that, by default, TigerVNC automatically starts assuming that all client connections will be local i.e. only from the box running the VNC Server itself !!!

This is easily mitigated when one starts the VNC Server: -

vncserver --localhost no

New 'ubuntu18:1 (dave)' desktop at :1 on machine ubuntu18

Starting applications specified in /home/dave/.vnc/xstartup
Log file is /home/dave/.vnc/ubuntu18:1.log

Use xtigervncviewer -SecurityTypes VncAuth,TLSVnc -passwd /home/dave/.vnc/passwd ubuntu18:1 to connect to the VNC server.


with the .PID file: -

cat ~/.vnc/ubuntu18\:1.pid 

7589

and the log file: -

cat ~/.vnc/ubuntu18\:1.log 

Xvnc TigerVNC 1.7.0 - built Dec  5 2017 09:25:01
Copyright (C) 1999-2016 TigerVNC Team and many others (see README.txt)
See http://www.tigervnc.org for information on TigerVNC.
Underlying X server release 11905000, The X.Org Foundation


Sat Apr 11 15:56:09 2020
 vncext:      VNC extension running!
 vncext:      Listening for VNC connections on all interface(s), port 5901
 vncext:      created VNC server for screen 0
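
( As an aside, if you'd rather not have the server listening on all interfaces, a common alternative is to leave it localhost-only and tunnel port 5901 over SSH - a sketch, assuming SSH access to the Ubuntu box: )

# forward local port 5901 to the VNC server's loopback interface
ssh -L 5901:localhost:5901 dave@ubuntu18

# then, on the Mac: -
open vnc://localhost:5901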

For reference, this was a useful insight into TigerVNC

Friday, 10 April 2020

Tinkering with Docker in Bash with a side-order of Sed and Awk

Just tidying up some documentation ....

Want to list all of the containers running on your Docker server ?

docker ps -a | sed 1d | awk '{print $1}'

6153d5b70852
5936a4a0179b
3a11d49a5230
af8705a70abe
05475e542e5d
33648fe5e9d9
d1995a3d141f
d39f9ecc3058
e6310ec76d1f

Why is this useful ?

Because one can then use this inside a script to, say, inspect all containers.

Here's an example: -

for i in `docker ps -a | sed 1d | awk '{print $1}'`; do docker inspect $i; done | grep IPv4Address

                        "IPv4Address": "10.23.2.72"
                        "IPv4Address": "10.23.2.50"
                        "IPv4Address": "10.23.2.76"
                        "IPv4Address": "10.23.2.80"
                        "IPv4Address": "10.23.2.55"
                        "IPv4Address": "10.23.2.64"

or, using jq: -

for i in `docker ps -a | sed 1d | awk '{print $1}'`; do docker inspect $i | jq .[].NetworkSettings.Networks.staticIP.IPAddress; done

"10.23.2.72"
"10.23.2.50"
"10.23.2.76"
"10.23.2.80"
"10.23.2.55"
"10.23.2.64"

for i in `docker ps -a | sed 1d | awk '{print $1}'`; do docker inspect $i | jq .[].NetworkSettings.Networks.bridge.IPAddress; done

"172.31.0.4"
"172.31.0.2"
"172.31.0.3"

PS I've been getting into jq recently, and jqplay.org has been an invaluable resource ...


Docker and DNS - it's all in the network

Last month, my team and I worked through a problem that a client was seeing building Docker images.

Specifically, they were able to build but NOT push the images to Docker Hub.

The main symptom that they were seeing was: -

you are not authorized to perform this operation: server returned 401.

which equates to HTTP 401.

Now we went down all sorts of paths to resolve this, including trying different Docker Hub credentials, changing the parent image ( as specified in the FROM tag within the Dockerfile ) to the most generic - and easily available - image: -

FROM alpine:3.9

but to no avail.

Long story very very short, but it transpired that the DNS configuration was ... misconfigured.

But in a very very subtle way ...

For this particular environment, the /etc/resolv.conf file was being used for DNS resolution, rather than, say, dnsmasq or systemd-resolved.

Which is absolutely fine.

This is what they had: -

nameserver 8.8.8.8
nameserver 4.4.4.4

which, at first glance, looks absolutely fine ....

Given that the issues were with the docker push and that the error was ALWAYS HTTP 401 Not Authorised, we came to the conclusion that the issue arose when the Docker CLI was trying to connect to the Docker Notary Service, as we'd enforced Docker Content Trust via DOCKER_CONTENT_TRUST=1.

We further worked out that there was some kind of timeout / latency issue, and were able to see that the connection FROM the Ubuntu box upon which we were running TO, specifically https://notary.docker.io was taking the longest time to resolve/connect. The traceroute command was definitely our friend here.

It appears that, by default, the Docker CLI has a built-in timeout of 30 seconds, which isn't user-configurable.

After much trial and a lot of error, we realised that the DNS / resolv.conf configuration was wrong ...

Specifically, that the Docker CLI would try and resolve the IP address for notary.docker.io via the first DNS server, which is Google's own 8.8.8.8 service, but that this would take longer than 30 seconds.

At which point, the Docker CLI would try again, but against the other DNS server, 4.4.4.4, which wouldn't respond particularly quickly.

It transpired that 4.4.4.4 is actually an ISP in the USA, namely Level 3, and ... the client was not in the USA ....

Therefore, the connectivity from their network to the Level 3 network was highly latent, probably because Level 3 focus upon serving their local US customers, and treat traffic from geographically remote hosts less favourably.

Now why did we miss this ?

Because 4.4.4.4 looks quite similar to 8.8.4.4 which is Google's second public DNS server, as per Google Public DNS 

All of us tech-heads looked at resolv.conf many many many times, and missed this subtlety.

Once we changed it to 8.8.4.4 all was well: -

nameserver 8.8.8.8
nameserver 8.8.4.4
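( In hindsight, a quick comparison of DNS query times would have surfaced the culprit much sooner - a rough sketch, assuming dig is installed: )

# time a lookup of notary.docker.io against each candidate nameserver
for ns in 8.8.8.8 4.4.4.4 8.8.4.4; do
  echo -n "$ns: "
  dig @$ns notary.docker.io +noall +stats | grep "Query time"
done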

The morals of the story ....
  1. Never assume 
  2. Check everything
  3. Check everything AGAIN

What fun!

Hmmm, cURL, why did you stop responding ?

So, a few weeks back, I was working on a Bash script that uses cURL to drive a set of REST APIs, and parses the responses, separating out the HTTP response code e.g. 200, 201, 301, 403 etc. from the actual response body.

This allowed me to write conditional logic based upon the return code, using IF/ELSE/FI.

This worked, and did the job for which it was intended, and everyone was happy .....

Fast forward a few weeks, and ... the script starts breaking ...

When I dig into the WHAT and WHY, I realise that the HTTP response codes are no longer being parsed from the response, even though the response still includes them.

After some serious digging around, including a major refactor of my code, I think I've worked out what went wrong ....

I don't necessarily know WHY, but I'm gonna blame the REST APIs, working on the assumption that my code didn't change, and therefore something outside my control did ....

At one point, I even wondered whether Bash on my Mac had changed, perhaps as a result of the latest macOS updates, but I was then able to reproduce the problem on an Ubuntu VM, so kinda suspect that Bash is NOT the villain.

Rather than sharing the full script here, I've written a prototype that demonstrates the problem, and then the solution.

So here's the bit of code that fails: -

curl_output=$(curl -s -k http://www.bbc.com --write-out "|%{http_code}")

IFS='|' read -r RESPONSE HTTP_CODE <<< "$curl_output"

echo -e "Response is" $RESPONSE "\n"

echo -e "HTTP Code is" $HTTP_CODE "\n"

Now, to break it down a bit, we've got four lines of code; the first merely calls the BBC website, which actually responds with HTTP 301 ( redirect ).

This grabs the HTTP response code, and appends it to the end of the string, captured within the curl_output variable, prepended by a vertical bar ( | ) separator.

For reference, the HTTP response code is obtained using the cURL parameter --write-out "|%{http_code}".

The second line of code does two things: -

i. Sets the Internal Field Separator (IFS) to the vertical bar ( | )
ii. Uses the read command ( which is a Bash builtin ) to parse the content of the curl_output variable into two sub-variables, RESPONSE and HTTP_CODE, using IFS to separate the two

The third and fourth lines of code merely output the content of the two sub-variables; I'm using echo -e merely to allow me to add the "\n" instruction to the end of the output, to make it look nice.
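( As an aside: if all you need is the status code, and not the body, cURL can hand you just that - a minimal sketch: )

# -o /dev/null discards the body; --write-out prints just the status code ( 301 for the BBC redirect )
curl -s -o /dev/null --write-out "%{http_code}" http://www.bbc.com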

When I run the script, this is what I see: -

~/curlTest.sh 

Curl Output is   301 Moved Permanently

Moved Permanently

The document has moved here.
|301 

First Pass
==========

Response is  

HTTP Code is 

which demonstrates the problem that I was seeing with my real REST APIs.

Note that the output from the cURL command is returned, including the HTTP 301 response code: -

Curl Output is   301 Moved Permanently

Moved Permanently

The document has moved here.
|301 

but also note that, whilst the HTTP response is partially returned within the RESPONSE sub-variable, the HTTP return code is NOT returned via the HTTP_CODE variable.

There's a clue in this behaviour but it took me a while to find it.

I went down a rabbit hole for a while, replacing the use of IFS and the read command, with awk, because who doesn't love awk. This works BUT it does require one to know precisely how many columns are being returned in the output from the cURL command i.e. if I use  awk '{print $3}' I need to be sure that the thing I want will always be in column 3.

But, back to the root cause .... 

Notice that, in the above example, the value of the RESPONSE sub-variable is only PART of the actual curl_output variable, specifically the first piece of HTML: -


This gave me a clue; for some reason, the read command was treating that first element as the entirety of the response, and not seeing the vertical bar separator, meaning that IFS was, in essence, being ignored, meaning that the HTTP_CODE variable never gets populated.

So there's something between: -


and the next element of the curl_output variable: -


such as a space character ( 0x20 ).

I even resorted to using the hexdump and hexedit commands ( via brew install hexedit on macOS and apt-get install -y hexedit on Ubuntu ), to confirm what I was seeing: -

00000000   43 75 72 6C  20 4F 75 74  70 75 74 20  69 73 20 20  3C 21 44 4F  43 54 59 50  45 20 48 54  4D 4C 20 50  55 42 4C 49  43 20 22 2D  2F 2F 49 45  Curl Output is  <!DOCTYPE HTML PUBLIC "-//IE
0000002C   54 46 2F 2F  44 54 44 20  48 54 4D 4C  20 32 2E 30  2F 2F 45 4E  22 3E 20 3C  68 74 6D 6C  3E 3C 68 65  61 64 3E 20  3C 74 69 74  6C 65 3E 33  TF//DTD HTML 2.0//EN"> <html><head> <title>3
00000058   30 31 20 4D  6F 76 65 64  20 50 65 72  6D 61 6E 65  6E 74 6C 79  3C 2F 74 69  74 6C 65 3E  20 3C 2F 68  65 61 64 3E  3C 62 6F 64  79 3E 20 3C  01 Moved Permanently</title> </head><body> <
00000084   68 31 3E 4D  6F 76 65 64  20 50 65 72  6D 61 6E 65  6E 74 6C 79  3C 2F 68 31  3E 20 3C 70  3E 54 68 65  20 64 6F 63  75 6D 65 6E  74 20 68 61  h1>Moved Permanently</h1> <p>The document ha
000000B0   73 20 6D 6F  76 65 64 20  3C 61 20 68  72 65 66 3D  22 68 74 74  70 73 3A 2F  2F 77 77 77  2E 62 62 63  2E 63 6F 6D  2F 22 3E 68  65 72 65 3C  s moved <a href="https://www.bbc.com/">here<
000000DC   2F 61 3E 2E  3C 2F 70 3E  20 3C 2F 62  6F 64 79 3E  3C 2F 68 74  6D 6C 3E 20  7C 33 30 31  20 0A 0A                                            /a>.</p> </body></html> |301 ..

( For reference, the ASCII codes are documented here ).

Interestingly, there are multiple space characters ( 0x20 ) in the output, between DOCTYPE and HTML and PUBLIC etc. and yet ......

I'm not 100% sure why read would treat them differently but ..... c'est la vie.

This led me down a different path: reading up on the read command ( apologies for the punning ) led me to the -d parameter: -

-d delim continue until the first character of DELIM is read, rather than newline

Passing -d '' sets the delimiter to the NUL character, which tells read to consume the entire here-string - embedded whitespace, newlines and all - rather than stopping at the first delimiter it meets.

Thus I changed my script: -

IFS='|' read -d '' -r RESPONSE HTTP_CODE <<< "$curl_output"
echo -e "Response is" $RESPONSE "\n"
echo -e "HTTP Code is" $HTTP_CODE "\n"

adding in -d ''

Having created a script that does both types of read, one without -d and one with -d, this is what I see: -

Curl Output is   301 Moved Permanently

Moved Permanently

The document has moved here.
|301 

First Pass
==========

Response is  

HTTP Code is 

Second Pass
===========

Response is 301 Moved Permanently

Moved Permanently

The document has moved here.
 

HTTP Code is 301 

Note that, in the second pass, I see the entirety of the curl_output variable, specifically that before the vertical bar ( | ) separator in the RESPONSE sub-variable and, more importantly, I see the HTTP response code ( 301 ) in the HTTP_CODE sub-variable.

This is the entire script: -

#!/bin/bash

curl_output=$(curl -s -k http://www.bbc.com --write-out "|%{http_code}")

echo -e "Curl Output is " $curl_output "\n"

echo -e "First Pass"
echo -e "==========\n"

IFS='|' read -r RESPONSE HTTP_CODE <<< "$curl_output"
echo -e "Response is" $RESPONSE "\n"
echo -e "HTTP Code is" $HTTP_CODE "\n"

echo -e "Second Pass"
echo -e "===========\n"

IFS='|' read -d '' -r RESPONSE HTTP_CODE <<< "$curl_output"
echo -e "Response is" $RESPONSE "\n"
echo -e "HTTP Code is" $HTTP_CODE "\n"

Furthermore, whilst writing this post, with this little prototype script, I discovered something else that validated my original hypothesis, that the REST APIs had changed between the original script working and breaking ....

Recognising that the BBC homepage isn't really a good example of a REST API, even though cURL doesn't really care, I changed my script to use a "real" REST API ( I am using the Dummy REST API Example ).


When I run my script, as per the above, I get this: -

Curl Output is  {"status":"success","data":[{"id":"1","employee_name":"Tiger Nixon","employee_salary":"320800","employee_age":"61","profile_image":""},{"id":"2","employee_name":"Garrett Winters","employee_salary":"170750","employee_age":"63","profile_image":""},{"id":"3","employee_name":"Ashton Cox","employee_salary":"86000","employee_age":"66","profile_image":""},{"id":"4","employee_name":"Cedric Kelly","employee_salary":"433060","employee_age":"22","profile_image":""},{"id":"5","employee_name":"Airi Satou","employee_salary":"162700","employee_age":"33","profile_image":""},{"id":"6","employee_name":"Brielle Williamson","employee_salary":"372000","employee_age":"61","profile_image":""},{"id":"7","employee_name":"Herrod Chandler","employee_salary":"137500","employee_age":"59","profile_image":""},{"id":"8","employee_name":"Rhona Davidson","employee_salary":"327900","employee_age":"55","profile_image":""},{"id":"9","employee_name":"Colleen Hurst","employee_salary":"205500","employee_age":"39","profile_image":""},{"id":"10","employee_name":"Sonya Frost","employee_salary":"103600","employee_age":"23","profile_image":""},{"id":"11","employee_name":"Jena Gaines","employee_salary":"90560","employee_age":"30","profile_image":""},{"id":"12","employee_name":"Quinn Flynn","employee_salary":"342000","employee_age":"22","profile_image":""},{"id":"13","employee_name":"Charde Marshall","employee_salary":"470600","employee_age":"36","profile_image":""},{"id":"14","employee_name":"Haley Kennedy","employee_salary":"313500","employee_age":"43","profile_image":""},{"id":"15","employee_name":"Tatyana Fitzpatrick","employee_salary":"385750","employee_age":"19","profile_image":""},{"id":"16","employee_name":"Michael Silva","employee_salary":"198500","employee_age":"66","profile_image":""},{"id":"17","employee_name":"Paul Byrd","employee_salary":"725000","employee_age":"64","profile_image":""},{"id":"18","employee_name":"Gloria Little","employee_salary":"237500","employee_age":"59","profile_image":""},{"id":"19","employee_name":"Bradley Greer","employee_salary":"132000","employee_age":"41","profile_image":""},{"id":"20","employee_name":"Dai Rios","employee_salary":"217500","employee_age":"35","profile_image":""},{"id":"21","employee_name":"Jenette Caldwell","employee_salary":"345000","employee_age":"30","profile_image":""},{"id":"22","employee_name":"Yuri Berry","employee_salary":"675000","employee_age":"40","profile_image":""},{"id":"23","employee_name":"Caesar Vance","employee_salary":"106450","employee_age":"21","profile_image":""},{"id":"24","employee_name":"Doris Wilder","employee_salary":"85600","employee_age":"23","profile_image":""}]}|200 

First Pass
==========

Response is {"status":"success","data":[{"id":"1","employee_name":"Tiger Nixon","employee_salary":"320800","employee_age":"61","profile_image":""},{"id":"2","employee_name":"Garrett Winters","employee_salary":"170750","employee_age":"63","profile_image":""},{"id":"3","employee_name":"Ashton Cox","employee_salary":"86000","employee_age":"66","profile_image":""},{"id":"4","employee_name":"Cedric Kelly","employee_salary":"433060","employee_age":"22","profile_image":""},{"id":"5","employee_name":"Airi Satou","employee_salary":"162700","employee_age":"33","profile_image":""},{"id":"6","employee_name":"Brielle Williamson","employee_salary":"372000","employee_age":"61","profile_image":""},{"id":"7","employee_name":"Herrod Chandler","employee_salary":"137500","employee_age":"59","profile_image":""},{"id":"8","employee_name":"Rhona Davidson","employee_salary":"327900","employee_age":"55","profile_image":""},{"id":"9","employee_name":"Colleen Hurst","employee_salary":"205500","employee_age":"39","profile_image":""},{"id":"10","employee_name":"Sonya Frost","employee_salary":"103600","employee_age":"23","profile_image":""},{"id":"11","employee_name":"Jena Gaines","employee_salary":"90560","employee_age":"30","profile_image":""},{"id":"12","employee_name":"Quinn Flynn","employee_salary":"342000","employee_age":"22","profile_image":""},{"id":"13","employee_name":"Charde Marshall","employee_salary":"470600","employee_age":"36","profile_image":""},{"id":"14","employee_name":"Haley Kennedy","employee_salary":"313500","employee_age":"43","profile_image":""},{"id":"15","employee_name":"Tatyana Fitzpatrick","employee_salary":"385750","employee_age":"19","profile_image":""},{"id":"16","employee_name":"Michael Silva","employee_salary":"198500","employee_age":"66","profile_image":""},{"id":"17","employee_name":"Paul Byrd","employee_salary":"725000","employee_age":"64","profile_image":""},{"id":"18","employee_name":"Gloria Little","employee_salary":"237500","employee_age":"59","profile_image":""},{"id":"19","employee_name":"Bradley Greer","employee_salary":"132000","employee_age":"41","profile_image":""},{"id":"20","employee_name":"Dai Rios","employee_salary":"217500","employee_age":"35","profile_image":""},{"id":"21","employee_name":"Jenette Caldwell","employee_salary":"345000","employee_age":"30","profile_image":""},{"id":"22","employee_name":"Yuri Berry","employee_salary":"675000","employee_age":"40","profile_image":""},{"id":"23","employee_name":"Caesar Vance","employee_salary":"106450","employee_age":"21","profile_image":""},{"id":"24","employee_name":"Doris Wilder","employee_salary":"85600","employee_age":"23","profile_image":""}]} 

HTTP Code is 200 

Second Pass
===========

Response is {"status":"success","data":[{"id":"1","employee_name":"Tiger Nixon","employee_salary":"320800","employee_age":"61","profile_image":""},{"id":"2","employee_name":"Garrett Winters","employee_salary":"170750","employee_age":"63","profile_image":""},{"id":"3","employee_name":"Ashton Cox","employee_salary":"86000","employee_age":"66","profile_image":""},{"id":"4","employee_name":"Cedric Kelly","employee_salary":"433060","employee_age":"22","profile_image":""},{"id":"5","employee_name":"Airi Satou","employee_salary":"162700","employee_age":"33","profile_image":""},{"id":"6","employee_name":"Brielle Williamson","employee_salary":"372000","employee_age":"61","profile_image":""},{"id":"7","employee_name":"Herrod Chandler","employee_salary":"137500","employee_age":"59","profile_image":""},{"id":"8","employee_name":"Rhona Davidson","employee_salary":"327900","employee_age":"55","profile_image":""},{"id":"9","employee_name":"Colleen Hurst","employee_salary":"205500","employee_age":"39","profile_image":""},{"id":"10","employee_name":"Sonya Frost","employee_salary":"103600","employee_age":"23","profile_image":""},{"id":"11","employee_name":"Jena Gaines","employee_salary":"90560","employee_age":"30","profile_image":""},{"id":"12","employee_name":"Quinn Flynn","employee_salary":"342000","employee_age":"22","profile_image":""},{"id":"13","employee_name":"Charde Marshall","employee_salary":"470600","employee_age":"36","profile_image":""},{"id":"14","employee_name":"Haley Kennedy","employee_salary":"313500","employee_age":"43","profile_image":""},{"id":"15","employee_name":"Tatyana Fitzpatrick","employee_salary":"385750","employee_age":"19","profile_image":""},{"id":"16","employee_name":"Michael Silva","employee_salary":"198500","employee_age":"66","profile_image":""},{"id":"17","employee_name":"Paul Byrd","employee_salary":"725000","employee_age":"64","profile_image":""},{"id":"18","employee_name":"Gloria Little","employee_salary":"237500","employee_age":"59","profile_image":""},{"id":"19","employee_name":"Bradley Greer","employee_salary":"132000","employee_age":"41","profile_image":""},{"id":"20","employee_name":"Dai Rios","employee_salary":"217500","employee_age":"35","profile_image":""},{"id":"21","employee_name":"Jenette Caldwell","employee_salary":"345000","employee_age":"30","profile_image":""},{"id":"22","employee_name":"Yuri Berry","employee_salary":"675000","employee_age":"40","profile_image":""},{"id":"23","employee_name":"Caesar Vance","employee_salary":"106450","employee_age":"21","profile_image":""},{"id":"24","employee_name":"Doris Wilder","employee_salary":"85600","employee_age":"23","profile_image":""}]} 

HTTP Code is 200 

Yes, it works perfectly in both cases.

In other words, a real REST API that's returning JSON doesn't seem to break my script, regardless of whether I use read -d '' or not.

Which is nice.

However, I learned a lot whilst digging into this, both to fix the actual script AND to write this post.

And ....

EVERY DAY IS A SCHOOL DAY

Tuesday, 3 March 2020

PAM says "No"

I saw this yesterday: -

Mar  2 11:19:32 korath sudo: pam_tally2(sudo:auth): user bloggsj (12024) tally 51, deny 5
Mar  2 11:19:32 korath sudo: pam_unix(sudo:auth): auth could not identify password for [bloggsj]
Mar  2 11:19:32 korath sudo:    bloggsj : 1 incorrect password attempt ; TTY=pts/0 ; PWD=/var/bloggsj ; USER=root ; COMMAND=/bin/bash

after changing a user's password.

He was trying/failing to run sudo bash even though he was in the right group, and was using the right password ....

Assuming that Pluggable Authentication Module (PAM) was getting in the way, I checked the PAM Tally: -

pam_tally --user=bloggsj

and even reset it: -

pam_tally --user=bloggsj --reset

but to no avail.

Then I re-read the message: -

Mar  2 11:19:32 korath sudo: pam_tally2(sudo:auth): user bloggsj (12024) tally 51, deny 5

Yep, the offending module is pam_tally2 !
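( With hindsight, checking both counters up front would have saved some head-scratching - pam_tally and pam_tally2 keep separate tallies: )

pam_tally --user=bloggsj
pam_tally2 --user=bloggsj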

Once I did the needful: -

pam_tally2 --user=bloggsj --reset

all was good.

For the record: -

https://xkcd.com/149/

Monday, 2 March 2020

WebSphere User Group Spring Roadshow - 24 April 2020 @ IBM South Bank

From the department of my old team/job/love in IBM Cloud, we have: -

WebSphere User Group Spring Roadshow

We know that real-world applications are complicated, and that a "Hello World" example just doesn't cut it when compared to the enterprise-grade applications you develop and manage every day. That's why we've put together an in-depth experience to help you explore how you can transform your application from an on-prem monolith to a streamlined containerized cloud implementation.

Join us for a hands-on lab where you will take a fully-fledged application running in a traditional WebSphere ND environment all the way to a containerized solution running on OpenShift using Cloud Pak for Applications. 

In this lab, we will look at both operational modernization and application modernization.

  •  Operational modernization focuses on containerizing applications as-is and is the suggested approach for applications that are just too complex to change. We will do this using the traditional WebSphere base container image
  • Application modernization focuses on what changes can be made to applications to modernize aspects of them for optimal use on the cloud. For the application modernization portion we will use Open Liberty images to containerize those updated applications. 

Both types of containers will then be deployed to Red Hat OpenShift. We will explore some of the dashboards available in OpenShift to perform common application administration tasks. Finally, we will use Application Navigator to manage your whole portfolio of applications, whether running on-prem or in the cloud.

Friday 24 April 2020 ( 0900-1400 GMT )

9.00 Registration & Breakfast
9.30 Welcome & Introduction
10.00 Workshop
13.00 Networking lunch

14.00 Close

IBM South Bank
76/78 Upper Ground, South Bank, London SE1 9PZ


Wednesday, 26 February 2020

Just Announced - IBM Hyper Protect Virtual Servers

This is upon what my team and I have been working for the past few months, and I'm proud that we've announced it today: -

IBM Hyper Protect Virtual Servers is a software solution that is designed to protect your mission-critical workloads with sensitive data from both internal and external threats. This offering provides developers with security throughout the entire development lifecycle.
  • All images are signed and securely built with a trusted CI/CD (Continuous Integration, Continuous Delivery) flow
  • Infrastructure providers will not have access to your sensitive data, but can still manage images through APIs
  • Validate the source used to build images at any time – no backdoor can be introduced during the build process
This offering aligns with the IBM Cloud Hyper Protect Services portfolio for on-premises deployment to IBM Z® and IBM LinuxONE™ servers.

IBM Hyper Protect Virtual Servers

Securely build, deploy and manage mission-critical applications for hybrid multicloud environments on IBM® Z® and LinuxONE systems.

Solution Brief

Thursday, 23 January 2020

What's been eating my disk ?

I'm sure I've posted this before, but repetition is the most sincere form of .... something deep and meaningful.

Want to see what's eating your disk in a particular file-system ?

Try: -

du -hs * | sort -h

against, say, /home to see who is eating your disk, especially in terms of specific user's home directories etc.
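If you'd rather not cd into the directory first, the same idea works with GNU du's --max-depth option ( add sudo if you're poking around other users' home directories ): -

sudo du -h --max-depth=1 /home | sort -h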

Wednesday, 22 January 2020

Sigh, Jenkins, I was holding it wrong ...

I've created a Jenkins Pipeline that clones a GitHub repository and ... SHOULD ... execute a bunch o' instructions in a Jenkinsfile in the top-level of the repo ....

SHOULD being the operative word ...

The Pipeline runs, clones the repo, even showing up the most recent Commit message ... and then reports: -

Finished: SUCCESS

What was I doing wrong ?

Yeah, you guessed it ...

When I created the Pipeline, I neglected to choose Pipeline script from SCM, which allows me to specify the Script Path as Jenkinsfile



Which meant that there was NOTHING for the Pipeline to do, apart from clone the GitHub repo ....

Friday, 17 January 2020

Run Linux on IBM Z Docker Containers Inside z/OS

Whilst this isn't upon which I'm currently working, it's definitely of interest and relevance: -

 Everybody knows that Linux* runs on IBM Z*, but what if you could build a hybrid workload consisting of native z/OS software and Linux on Z software, both running in the same z/OS* image?

Starting from z/OS V2R4, with an exciting new feature named IBM z/OS Container Extensions (zCX), you have a new way to run Linux on IBM Z Docker containers in direct support of z/OS workloads on the same z/OS system. It builds much more flexibility into operations on IBM Z by modernizing and extending z/OS applications.

“With zCX, customers will be able to access the most recent development tools and processes available in Linux on the Z ecosystem, giving developers the flexibility to build new, cloud-native containerized apps and deploy them on z/OS without requiring Linux or a Linux partition,” says Ross Mauri, general manager for IBM (ibm.co/2W04VWW).

See the zCX website (ibm.co/2JaDzWe) if you are interested in more details. 

 Run Linux on IBM Z Docker Containers Inside z/OS 

Tuesday, 14 January 2020

docker create - or ... one learns something every day ....

I was looking for a simple way to "peer" inside a newly-built Docker image, without actually starting a container from that image ...

Specifically, I wanted to look at the content of a configuration file - /etc/ssh/sshd_config - to check some security settings.

Thankfully, the internet had the answer - as per usual

Extract file from docker image?

and this worked for me: -

Use the docker create command to create a container without actually starting (running) it


docker create debian:jessie

This returns the ID of the created container: -

7233e5c0df37bd460cc4d13b98f1f0b4d2d04677ea3356ad178af3a4af6484e5

Use the container ID to copy the required file to, say, /tmp

docker cp 7233e5c0df37bd460cc4d13b98f1f0b4d2d04677ea3356ad178af3a4af6484e5:/etc/ssh/sshd_config /tmp

Check out the copied file

cat /tmp/sshd_config

Delete the container

docker rm 7233e5c0df37bd460cc4d13b98f1f0b4d2d04677ea3356ad178af3a4af6484e5

Job done!

Obviously, I could've been even more elegant: -

export CONTAINER=`docker create debian:jessie`
docker cp $CONTAINER:/etc/ssh/sshd_config /tmp
cat /tmp/sshd_config
docker rm $CONTAINER

Nice !

Monday, 13 January 2020

Book Review - Penetration Testing - A guide for business and IT managers

Another book review on behalf of the British Computer Society, who kindly provided me with a hard-copy of this book: -

Penetration Testing - A guide for business and IT managers

This book is written as a series of standalone chapters, each authored by one of a series of experienced practitioners, and can be consumed in whole or in part. Each chapter can then be used as a source of reference for a particular aspect of a penetration testing activity.

As the title suggests, the book is intended to be a guide for the leadership team of any business and, as such, uses brevity and clarity to facilitate understanding. It's not intended to be a detailed reference guide for a penetration tester - other materials exist to meet this requirement - but it does provide a useful insight into the wider discipline of security and penetration testing.

It is logically organised, introducing the subject of penetration testing before digging into the rules and regulations surrounding a project, in terms of the regulatory framework and contractual obligations.

This latter topic is crucial, in terms of ensuring that the scope of the testing activity is well-defined and that the testers are commercially and legally covered for their planned activities.

In later chapters, more attention is paid to scoping testing activities, in terms of ensuring that the organisation is aligned with the expected outcomes, and that the test coverage is appropriately sized and scaled.

As a former software services professional, I also appreciated the compare/contrast between "best" and "good" practices, especially as perfection is often the enemy of the good, to misquote a common phrase. In other words, whilst "best" practice may be desirable, "good enough" is perhaps a more realistic and timely aiming point, especially as financial budgets and timescales are often tight.

As one would expect, there is focus upon the tooling that a tester would use, including Burpsuite, nmap, Nessus and Wireshark, whilst also focusing on community-driven offerings such as Open Web Application Security Project (OWASP). Again, these are covered at a reasonably high-level, and the authors would expect testers to be aware of individual tools, in terms of fit, coverage, support and licensing models.

Towards the end of the book, attention is paid to test reporting and, equally importantly, the action planning that needs to follow on from testing, as well as the requirement to schedule a follow-up testing activity to check the actual results against the planned remediations.

In conclusion, whilst the audience for this book is clearly intended to be project or organisation leaders, it's brief enough to serve as a useful introduction to the practice of penetration testing, and would serve as a grounding for anyone intending to develop their career into this subject domain.

Therefore, I'm comfortable in recommending this book, and would rate it 9/10 for context, brevity and completeness.

Friday, 10 January 2020

Red Hat OpenShift, IBM Cloud Paks and more facilitate digital transformation.

This is, in part, upon which I've been working this past year or so

Red Hat OpenShift, IBM Cloud Paks and more facilitate digital transformation

Definitely worth a read, especially to provide context about RH OCP, Cloud Paks and, close to my heart, Hyper Protect Services.

Monday, 6 January 2020

More from Julia Evans - Your Linux Toolbox

I've mentioned Julia Evans several times before, but she's again wowed me with another rather useful set of enablement materials: -



Your Linux Toolbox

which is available as hard and soft copy.

At time of writing, she's also offering a 20% discount !

Check out Julia on Twitter - @b0rk - and enjoy !
