Thursday, 18 July 2019

Jenkins and the Case of the Missing Body

I was repeatedly seeing this: -

java.lang.IllegalStateException: There is no body to invoke

with a Jenkins Pipeline that I was executing; this Pipeline executes whenever one commits new code to a GitHub Enterprise (GHE) repository via a Pull Request.

To debug this further, I created a dummy GHE repository with a corresponding Jenkinsfile, and a new Jenkins pipeline.

This allowed me to hack/iterate on the code in the GHE web UI, and immediately test the Pipeline within Jenkins itself.

Without wishing to give away the plot, I'll TL;DR and say that the problem was ME ( quelle surprise ).

Here's my initial Jenkinsfile: -

timestamps
{
    node('linuxboxen')
    checkout scm
    def givenName = "Dave"
    def familyName = "Hay"
    
    withEnv(["GIVEN_NAME=${givenName}", "FAMILY_NAME=${familyName}"])
    {
        stage('JFDI')
        {
            sh '''#!/bin/bash
            echo "Doing it"
            echo $GIVEN_NAME
            echo $FAMILY_NAME
            '''
        }
    }
}

Can you see the problem ?

It took me a while ....

The node directive is NOT followed by a set of braces, meaning that nothing actually gets done, hence the exception.

The code SHOULD look like this: -

timestamps
{
    node('linuxboxen')
    {
        checkout scm
        def givenName = "Dave"
        def familyName = "Hay"
    
        withEnv(["GIVEN_NAME=${givenName}", "FAMILY_NAME=${familyName}"])
        {
            stage('JFDI')
            {
                sh '''#!/bin/bash
                echo "Doing it"
                echo $GIVEN_NAME
                echo $FAMILY_NAME
                '''
            }
        }
    }
}

In other words, the node() directive needs something to do, hence the need for the braces, which can contain one or more stage() blocks, plus associated directives.

Nice :-)

Tuesday, 16 July 2019

Containers: A Complete Guide

I found this whilst looking for something completely different: -

Containers: A Complete Guide

This guide looks at the importance of containers in cloud computing, highlighting the benefits and showing how containers figure into such technologies as Docker, Kubernetes, Istio, VMs, and Knative.

Quite a nice little introduction ...

Monday, 15 July 2019

Shelling out - fun with Ubuntu shells

I saw this: -

-sh: 2: [: -gt: unexpected operator
-sh: 29: [: -gt: unexpected operator

when logging into an Ubuntu boxen.

I was pretty sure that this'd worked before, but wondered whether my shell was giving me (s)hell ....

I checked what I was currently running: -

echo $SHELL

/bin/sh

which, on Ubuntu, is not actually the Bourne Again SHell ( Bash ) at all; /bin/sh is a symlink to dash ( the Debian Almquist SHell ), which is rather stricter than Bash, hence the unexpected operator complaints.

I then checked the /etc/passwd file: -

cat /etc/passwd

hayd:x:12039:12039::/home/hayd:

and realised that I didn't have an explicit shell set.

I upped my authority ( super user do ): -

sudo bash

[sudo] password for hayd: 

and then updated my account: -

usermod --shell /bin/bash hayd

Now /etc/passwd looks OK: -

hayd:x:12039:12039::/home/hayd:/bin/bash

and I'm now all good to go: -

echo $SHELL

/bin/bash
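
For the record, chsh would've done the same job in one step, with no need for a root shell first; a minimal alternative, assuming the target shell is listed in /etc/shells: -

sudo chsh -s /bin/bash hayd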


Friday, 12 July 2019

Intro Guide to Dockerfile Best Practices

Not sure how I found this ( it MAY have been Twitter ), but this is rather useful: -

Intro Guide to Dockerfile Best Practices

especially whilst I've been automating the build of Docker images via Jenkins pipelines.

Definitely a few tips to try, such as: -

Tip #4: Remove unnecessary dependencies

Remove unnecessary dependencies and do not install debugging tools. If needed, debugging tools can always be installed later. Certain package managers, such as apt, automatically install packages that are recommended by the user-specified package, unnecessarily increasing the footprint. Apt has the --no-install-recommends flag which ensures that dependencies that were not actually needed are not installed. If they are needed, add them explicitly.
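
Here's that tip as a minimal Dockerfile fragment; the curl package is merely a placeholder for whatever one actually needs: -

RUN apt-get update \
 && apt-get install -y --no-install-recommends curl \
 && rm -rf /var/lib/apt/lists/*

Clearing the apt package lists afterwards keeps the resulting layer that little bit smaller, too.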

Go read !

Now Available - IBM Cloud Hyper Protect Virtual Servers

I'm pleased to see that one of the IBM Z offerings upon which my Squad are working is now available in the IBM Cloud Experimental Services section of the IBM Cloud Catalog: -

Hyper protect line of virtual servers service leveraging the unparalleled security and reliability of Secure Service Containers on IBM Z.

Features

Security

Ability to deploy a Virtual Server in a Secure Service Container ensuring confidentiality of data and code running within the VS

Z Capabilities on the cloud

Ability to deploy workload into the most secure, highly performant, Linux virtual server with extreme vertical scale

Easy to use, open, and flexible

User experience at parity with market leaders both when buying and using the VS; with the openness and flexibility of a public cloud

No Z skills required

Access Z technology without having to purchase, install, and maintain unique hardware

IBM Cloud Hyper Protect Virtual Servers

Yay us!

Friday, 5 July 2019

Book Review - Left To Our Own Devices, by Margaret E Morris

As mentioned previously, I've been writing a series of book reviews for the British Computer Society (BCS), including: -

Book Review - You'll See This Message When It Is Too Late - The Legal and Economic Aftermath of Cybersecurity Breaches

Rails, Angular, Postgres, and Bootstrap - A Book Review

Kubernetes Microservices with Docker - A Book Review

Book Review - Mastering Puppet Second Edition by Thomas Uphill

etc.

So here's the most recent review - as before, for full disclosure, I must mention that BCS kindly provided me with a free hardcopy of the book, albeit a review version: -

Left To Our Own Devices, by Margaret E Morris

https://mitpress.mit.edu/books/left-our-own-devices

If nothing else, the title of this book intrigued me, in part because it reminded me of a Pet Shop Boys track from my youth. More seriously, the subtitle of the book: -

Outsmarting smart technology to reclaim our relationships, health and focus

resonated with a lot of recent media coverage about the impacts, both real and perceived, both positive and negative, of information technology in the modern era.

Whilst I don't claim to have strong opinions about the topic, or be particularly well-informed, apart from as a consumer, I have given thought to my family's use of mobile devices, Internet of Things gadgets, so-called smart home technology etc.

I'd especially considered limits on screen time, impact on sleep patterns, exposure to sources of news, including social media, and my tendency to live in a bubble, self-selecting news and opinions that mirror my own.

Therefore, this book came at precisely the right time, and opened my eyes to a number of use cases of technology, including smart lighting, health tracking ( including the so-called Quantified Self ), social media and messaging, technology as an art-form, self-identity, including gender and sexuality, and technology as a therapist.

Ms Morris illustrates each chapter, of which there are eight, with a large number of individual user stories, taking inspiration and insight from real people, who allow her to share how they use technology, mainly for the positive, but with thought and insight.

Despite the title, and the subtitle, I found the book to be a very positive read; whilst there are definitely shortcomings to an over-use and over-reliance upon technology, the book shows how humans do manage to mostly outsmart their smart technology, and get from it what they need, whether or not that's what the original inventor intended.

I didn't come away with a list of Do's and Don'ts, but a better understanding of how, and why, people choose to use certain technologies, and, therefore, how I can evaluate my own use, and be more qualitative in my choice of technologies.

In conclusion, I strongly recommend this book; it's a relatively short read, coming in at ~130 pages, and is pitched at a high enough level that one doesn't need to be a total geek to get the points raised, whether or not one is a total geek.

Out of 10, I'd give this book 10, mainly for completeness, brevity and for the all-important human touch.

Thursday, 4 July 2019

Docker Registries and Repositories - Is there a difference ? ( Hint, yes, there really is )

This came up in discussion today, and one of my colleagues pointed me here: -

Difference between Docker registry and repository

A Docker registry is a service that stores your Docker images.

A Docker registry can be hosted by a third party, as a public or private registry, like one of the following registries:

    Docker Hub,
    Quay,
    Google Container Registry,
    AWS Container Registry

or you can host the docker registry by yourself
(see https://docs.docker.com/docker-trusted-registry/ for more details).

A Docker repository is a collection of different Docker images with the same name, that have different tags. A tag is an alphanumeric identifier of the image within a repository.

For example see https://hub.docker.com/r/library/python/tags/. There are many different tags for the official python image, these tags are all members of the official python repository on the Docker Hub. Docker Hub is a Docker Registry hosted by Docker.

To find out more read:

    https://docs.docker.com/registry/
    https://github.com/docker/distribution

IBM Cloud also helped me here, in that I have an IBM Cloud Container Registry service, aka ICCR, within which I have access to several Repositories, and the ICCR UI helpfully tells me: -

A repository is a set of related images with the same name, but different tags.



which is, as they say, nice 😂
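
To make that concrete, the anatomy of a fully qualified image reference breaks down as registry, then repository, then tag; the namespace and image names below are purely hypothetical: -

# <registry hostname>/<repository>:<tag>
docker pull us.icr.io/my_namespace/my_image:1.0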

Monday, 1 July 2019

Bash and a sufficiency of input parameters

I hit an interesting quirk in Bash earlier today; I'm passing in a list of command-line parameters to a Bash script, using the $1, $2 etc. input parameter method.

However, I noticed that the TENTH parameter failed, and I ended up with a trailing zero on the end of a string that was actually the FIRST parameter.

It appeared that Bash was stopping at 9, and then simply adding the character '0' to the end of the string provided as the FIRST parameter.

Here's an example: -

#!/bin/bash

export A=$1
export B=$2
export C=$3
export D=$4
export E=$5
export F=$6
export G=$7
export H=$8
export I=$9
export J=$10

echo "The tenth parameter is" $J

When I execute the script: -

~/foo.sh 1 2 3 4 5 6 7 8 9 0

I'd expect to see this: -

The tenth parameter is 0

whereas I actually saw this: -

The tenth parameter is 10

As ever, the internet came to the rescue, with an answer which said, in part: -

...
Use curly braces to set them off:

echo "${10}"
...

I updated my script: -

#!/bin/bash

export A=${1}
export B=${2}
export C=${3}
export D=${4}
export E=${5}
export F=${6}
export G=${7}
export H=${8}
export I=${9}
export J=${10}

echo "The tenth parameter is" $J

and now it works as expected: -

The tenth parameter is 0

To be fair, the article also said: -

...
You can also iterate over the positional parameters like this:

for arg

or

for arg in "$@"

or

while (( $# > 0 ))    # or [ $# -gt 0 ]
do
    echo "$1"
    shift
done
...

which I should definitely try .........
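
For my own future reference, here's the last of those suggestions fleshed out into a complete script, saved as, say, ~/args.sh; it echoes every positional parameter, however many are supplied: -

#!/bin/bash

# Loop while there are still positional parameters left
while [ $# -gt 0 ]
do
    echo "$1"
    shift
done

Running ~/args.sh 1 2 3 4 5 6 7 8 9 0 then prints all ten values, with no curly-brace gymnastics required.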

IBM Cloud Vulnerability Advisor - Poking the Endpoint

I've been using the Vulnerability Advisor (VA) tool to automate the testing of my built Docker images, looking for code vulnerabilities, scanning against the IBM X-Force database and known Common Vulnerabilities and Exposures (CVE) issues.

This is a nifty feature of the IBM Cloud Container Registry and provides both web UI *AND* command-line interface (CLI) options, which is super-good.

However, I'd not really looked at the REST APIs that VA provides, as documented here: -

Vulnerability Advisor for IBM Cloud Container Registry

Talking with a colleague, I realised that one can leverage API calls such as Report, which returns a JSON payload comprising ALL of the images "owned" by that IBM Cloud account, with an indication of status e.g. OK, UNSUPPORTED, FAIL, plus details of vulnerabilities, configuration issues etc.

So it's the same information that's available via the web UI and the CLI, but available for programmatic consumption ....

To consume this, one needs to pass in HTTP headers such as Account ( which IBM account is being targeted ) and Authorization ( a Bearer token ).

The first is retrieved via the command: -

bx iam accounts

which returns a list of Account GUIDs, plus the Name, Status and Owner.

The second is retrieved by the command: -

bx iam oauth-tokens

which returns a nice long string of apparent gibberish which is actually one's auth token.

Armed with the account ( which should be 32 hex characters ) and the auth token ( which should be 1074 characters ), one can hit the API endpoint.

This is the cURL command - other REST clients are available: -

curl -X GET \
  https://us.icr.io/va/api/v3/report/account \
  -H 'Accept: */*' \
  -H 'Account: db52f980f8c07a05b50cb223fae0d849' \
  -H 'Authorization: Bearer xdx60o64BxVzavTaO8ayNRZCwE4zavFdMbHh0DHh6sB2JKqqTENlQmBDVoBJdOwgQq5eYgUQiikG1KSJs6MuUGyIy2tdQD7eZ0DzIhj6YBX70V8UY0rwLjX0CEfV3UMIOHaGT1yZE7MRLy8iCLn0BOo20rI7vBqgjfJfQlJpd74o5fAkw6oppqchNRFW99GOtAJgGWxzULhx1hsG99ejHZNA7631uJ693whc1gdhp2xw46xQ0g66vGYMLKlyOMUqbm3aM9QoBJulDAp2k8TKdzy589PUafXcIKY4m3nFREAMdr6s8e6bktzkJLtFvqBPGejcaYSLXmnIuP7clOaGvMwwUd2k6KQkpeLPqqmnWP11nNnoMDAiFTnHsPSd74dXtZ4jpfb6co6fXRXmL1Nd2hqfXpVGCbn4wfHWhxbTZhVJs729UJjoZtJGZqdhRLQgrSqzO4jDre0WFfGy0JKsgTRJd37W92d2lbYyiX4Gze9rkSawPhMuRsHw9W6JwAZtqglOIweqy2AcwD3f599k2oEXhZ7LaXPUSlyJQtmTtE4VMBualv3R3zE3Q2wQrPzGLqrFHo30TdNeXDA99o5WVGxSb1QAs9oSWia3sLIr0c6IAdxBPSVv7Xb7aSnlTyB4EII4RKi0GiHWn1f4tJzeBSC9FMNCFvla7utslQB3lMPpA0SZprIa2FuQ9gdy6FYwhe8nzKlBLlCdcqGj7UG2S0m4YIdvtrxT8WDe0JiNY2kQtyZYOlF6tqDdGZNZpYjOvoWZLercfT71bKd3zpAkXcY9XK9G32CpZewBvPvFETzQj7mkdJOQirKY0ZADrUasniu1KgrrcirJt0TqnDRpkxOk23cDoLfs6utxEx3nxNeKqL0sxQdOtLjUsNCh0heranPqbUfV5DaWt4oHLBULOfTBpdcKLeOvIYl1khH9UnzW82fk7FrINp40vwUti5euXHsX2bbkfk0i0JG1qm7ynJY4ZatX4j6B4wglK4KRXeYql7wffRAq10fvbgrgKgBqrUsTHjuprElrFHHkpAS6jKvDmCquHD1ePLFA0IVOdAO407Apbe'

and wait for a nice long list of images and their vulnerabilities ....
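
Pulling that together, here's a rough sketch of scripting the whole thing; note that the sed expression for extracting the token from the bx iam oauth-tokens output is an assumption, and may need tweaking for your CLI version: -

#!/bin/bash

# Hypothetical helper to pull the VA report for an account
ACCOUNT_ID="db52f980f8c07a05b50cb223fae0d849"   # from 'bx iam accounts'

# Strip everything up to and including 'Bearer ' to leave the raw token
IAM_TOKEN=$(bx iam oauth-tokens | sed -n 's/.*Bearer //p')

curl -s -X GET "https://us.icr.io/va/api/v3/report/account" \
  -H "Accept: */*" \
  -H "Account: ${ACCOUNT_ID}" \
  -H "Authorization: Bearer ${IAM_TOKEN}"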


Friday, 28 June 2019

Updating Ubuntu - remember to update AND upgrade

I'm putting this here to remind me ... as I'm old and often forget ...... wassat ?

So I've been doing lots of stuff with Ubuntu recently, including containers, Virtual Servers AND Virtual Machines ...

And I remember to run: -

sudo apt-get update

but then wonder why my packages don't get ... updated !

It's simple ... I've run the update which effectively refreshes the list of packages available .... but NOT remembered to run the corollary upgrade process: -

sudo apt-get upgrade -y

which actually performs the upgrade ( or, if you will, upgrades to the updated packages ).

I can chain the two together: -

sudo apt-get update && sudo apt-get upgrade -y

which does the job nicely.

If it helps, most shells will auto-complete the available sub-commands for you ....

Type sudo apt-get and then press the [TAB] key ...

sudo apt-get 

autoclean        build-dep        check            dist-upgrade     dselect-upgrade  purge            source           upgrade          
autoremove       changelog        clean            download         install          remove           update           


If I'm feeling really brave, I'll do this: -

sudo apt-get update && sudo apt-get upgrade && sudo apt-get dist-upgrade -y

which upgrades the underlying Ubuntu distribution ....

But that's for the brave ! YMMV
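
And, seeing as the tab-completion list above mentions it, there's also autoremove, which clears out packages that were pulled in as dependencies and are no longer needed: -

sudo apt-get autoremove -y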

Thursday, 27 June 2019

Building Docker images for cache

When building a Docker image, it's useful to know that one can bypass the layer cache, i.e. avoid messages such as this: -


 ---> 790dcbffd65f
Step 2/3 : RUN apt-get update
 ---> Using cache
 ---> a9dadec81fda
Step 3/3 : RUN apt-get upgrade
 ---> Running in b0261d077b68
Reading package lists...
Building dependency tree...
Reading state information...
Calculating upgrade...

This is especially useful when attempting to upgrade packages within an image, having previously built an image ...

I appreciate that I could've just deleted the existing images using docker rmi XXXXXXXX but this is an altogether easier option: -

docker build --no-cache -t ubuntu -f Dockerfile .

With thanks to this: -



Encrypted container images for container image security at rest

From IBM, we have this: -

Ensure the confidentiality of data and code in container images

This article addresses a remaining security concern for enterprises about the confidentiality of data and code in container images. The primary goal for container image security is to allow the building and distribution of encrypted container images for making them only available to a set of recipients. While others might be able to access these images, they cannot run them or see the confidential data inside them. Container encryption builds on existing cryptography such as Rivest–Shamir–Adleman (RSA), elliptic curve, and Advanced Encryption Standard (AES) encryption technologies.


kaniko - Build Images In Kubernetes

One of my IBM colleagues mentioned Kaniko today


From the site: -

kaniko is a tool to build container images from a Dockerfile, inside a container or Kubernetes cluster.

kaniko doesn't depend on a Docker daemon and executes each command within a Dockerfile completely in userspace. This enables building container images in environments that can't easily or securely run a Docker daemon, such as a standard Kubernetes cluster.

https://github.com/GoogleContainerTools/kaniko
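
For a flavour of how that looks in practice, here's a minimal sketch of running the kaniko executor under Docker, per the project's README at the time of writing; the --no-push flag means nothing leaves the machine: -

docker run -v "$PWD":/workspace gcr.io/kaniko-project/executor:latest \
  --dockerfile=/workspace/Dockerfile \
  --context=dir:///workspace/ \
  --no-push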

Monday, 17 June 2019

IBM Z defines the future of hybrid cloud

Some useful insights on the ever-moving world of IBM Z, including: -

Tailored Fit Pricing for IBM Z
...
The hallmark of this model is that pricing adjusts with usage, removing the need for complex and restrictive capping, and includes aggressive pricing for growth. The capacity solution, also part of Tailored Fit Pricing, enables clients to mix and match workloads to help maximize use of the full capacity of the platform. At the end of the day, Tailored Fit Pricing is designed to both unlock the full power of the platform and ensure optimal response times and service-level agreements, 24/7.
...

IBM z/OS Container Extensions

...
We’re giving our customers the ability to run Linux on IBM Z Docker containers in direct support of z/OS workloads on the same z/OS system.
...

IBM z/OS® Cloud Broker

...
IBM z/OS Cloud Broker is designed such that cloud application developers can provision and deprovision z/OS environments to support the app development cycle.
...

and then there's my very own product-set ( well, not JUST mine, I'm just ONE of the engineers !! ): -

IBM Cloud Hyper Protect

...
Hyper Protect offers a range of on-premises and off-premises deployment choices for extending IBM Z services and data—while balancing performance, availability or security.

Next month, for example, Hyper Protect Database as a Service (DBaaS) will launch. DBaaS will support cloud-native developers by providing both PostgreSQL and MongoDB Enterprise Advanced database choices. It also provides the highest level of commercial data confidentiality for sensitive data, FIPS 140-2 Level 4.
...


IBM Z defines the future of hybrid cloud

Friday, 14 June 2019

Practising Clean Code in Node.JS

One of my friends hosted a rather excellent Lunch and Learn session today, talking about the benefits of Clean Code, and referenced a book by Robert C. Martin, named: -

Clean Code: A Handbook of Agile Software Craftsmanship

We had a good debate about the advantages and disadvantages of comments in code, given that we can / should have self-describing variable and function names.

The debate continues apace; my personal view is that comments should serve to describe WHY I did something, rather than HOW and WHAT, which absolutely should be self-describing.

I'm thinking about this from the perspective of future support i.e. "Why did Dave do it that way ? Oh, because time was short, or Stack Overflow was down, or the code wasn't intended to live forever" 🤣

Meantime, my friend, Aiden, has written a much more well-informed piece here: -

Practising Clean Code in Node.JS

Go read, and let the debate continue .....

Wednesday, 12 June 2019

IBM Cloud Blog

Including content such as: -

IBM Cloud Virtual Private Cloud (VPC) Is Now Generally Available

We’re pleased to announce IBM Cloud VPC is GA in the Dallas, Frankfurt and Tokyo regions.

Tutorial: Virtual Private Cloud with Public and Private Subnets

A new solution tutorial covering virtual private cloud with public and private subnets.

Recap: KubeCon 2019 (Barcelona)

Looking back at the highlights of KubeCon + CloudNativeCon Europe 2019.

How to Choose a Database on IBM Cloud

Finding the right tool for the right job is an increasingly challenging decision.

Apple and HP - Not playing nicely - AirPrint and Bonjour and WiFi bands

Fun discovery after new ADSL modem/router/WAP acquisition ..... Draytek Vigor 2762ac .... AirPrint to HP deskjet via Bonjour gets borked ....

I couldn't print from my iOS devices, unless I brought them close to the printer ....

Long story short, new WAP has two WiFi bands ( 5 GHz and 2.4 GHz ) ... both bands sit on the same SSID, so there appears to be one WiFi....

It looks like the printer was connecting to one band, and the iOS devices were connecting to the other .... when they're NOT in the same room as the printer ( which is ~15 feet diagonally away from the WAP )

I checked this with the router's web UI, which shows devices connected to each band ... the printer always sits on the 2.4 GHz band .....

The iOS devices sit on the 5 GHz band, when they're closer to the router, but switch to the 2.4 GHz band when I move them upstairs to the study/computer/printer room

I'm assuming that Bonjour doesn't "like" crossing between the two bands i.e. if the printer is sitting on one, the iOS devices can no longer see it

I temporarily mitigated this by disabling the 5 GHz band ...

#EveryDayIsASchoolDay

Monday, 10 June 2019

Glide and Permissions - "Unable to update repository: exit status 255"

When using Go and Glide, one may see messages such as this: -

[WARN] Download failed.
[ERROR] Update failed for github.com/gorilla/securecookie: Unable to update repository: exit status 255
[WARN] Download failed.
[ERROR] Update failed for github.com/gin-gonic/gin: Unable to update repository: exit status 255
[WARN] Download failed.
[ERROR] Update failed for github.com/dgrijalva/jwt-go: Unable to update repository: exit status 255
[WARN] Download failed.
[ERROR] Update failed for gopkg.in/mgo.v2: Unable to update repository: exit status 255
[ERROR] Failed to do initial checkout of config: Unable to update repository: exit status 255
Unable to update repository: exit status 255
Unable to update repository: exit status 255
Unable to update repository: exit status 255

during the glide update process: -

glide update

Chances are it's permissions related ....

I saw this today - thinking that it might be a cache issue, I tried to clear the Glide Cache: -

glide cc

which failed with: -

[ERROR] Unable to clear the cache: unlinkat /home/hayd/.glide/cache/src/https-github.com-googleapis-gnostic/discovery/discovery.proto: permission denied

which reminded me that I'd been going back and forth between my non-root user and root ( via sudo ).

I fixed this as follows: -

sudo chown -R hayd:hayd /home/hayd/.glide

and then re-ran: -

glide cc

which worked as expected: -

[INFO] Glide cache has been cleared.

I was then able to update the Glide dependencies: -

glide update

Nice !
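
As an aside, next time I could spot the offending root-owned files before reaching for chown; a quick check, assuming a POSIX find: -

sudo find /home/hayd/.glide ! -user hayd -ls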

Thursday, 6 June 2019

Book Review - You'll See This Message When It Is Too Late - The Legal and Economic Aftermath of Cybersecurity Breaches

This is another of my irregular series of book reviews for the British Computer Society (BCS), who kindly provided me with a review hard-copy of this publication.

You'll See This Message When It Is Too Late
The Legal and Economic Aftermath of Cybersecurity Breaches

By Josephine Wolff

https://mitpress.mit.edu/books/youll-see-message-when-it-too-late

The title of this book gives away the core message, but in a very subtle way.

During the first few chapters, the author, Professor Josephine Wolff, walks through a number of high-profile security incidents, affecting public and private sector organisations as diverse as the US Office of Personnel Management, the certificate authority DigiNotar, and the dating website Ashley Madison.

In each case, she describes the technical details of the security breach, the political and organisational landscape of the affected organisation, the key stakeholders ( employees, customers, interested parties ) and, most importantly, how the incident was reported, mitigated and defended, the latter in the context of the personal, political and financial ramifications.

For me, as a technologist, whilst I initially thought that I was seeking a technical and deep-dive analysis of security breaches, this book made me appreciate the deeper impact of such a breach, especially in the way that organisations seek to spread the blame far and wide.

Additionally, Professor Wolff spends a fair amount of the book looking at the instigators of each breach, and explains how their motives vary from financial gain ( perhaps easier to understand ) to political and strategic aims ( espionage and geopolitics ).

This makes the book a very compelling read, and emphasises why this should be on the required reading list for anyone responsible for, or even just interested in, information security.

The book serves to provide a very credible alternative to the image of IT security portrayed by television and the cinema, and sits nicely alongside the reportage provided by the information security industry, and the journalists and analysts who report on its trials and tribulations.

I sincerely recommend this to anyone with more than a passing interest in information security, and give it 10 out of 10 for breadth, depth and detail.

Wednesday, 5 June 2019

MainframerZ meetup at Mediaocean in London - Thursday 20 June 2019

Just a reminder that we're only three weeks away from the next MainframerZ Meetup ....

Here's the deets: -

MainframerZ meetup at Mediaocean

and here's the current agenda: -

  • Dave Hay - The flexibility of the Cloud, the popularity of Linux, PLUS the security of the Mainframe - A brief exploration of Hyper Protect Services
  • Andrew Schofield /Kate Stanley - Unlocking messages from MQ on z/OS into Apache Kafka without freaking out the Sys Admin
  • Melvyn Maltz - Mainframe development, some Assembler required
  • Mark Wilson - Mainframe pentesting war stories
  • Stuart Ashby - TLA all the way

We'd love to see you there ....

Just go here: -

MainframerZ meetup at Mediaocean

and register.


Monday, 3 June 2019

It's been a while - C++ and the case of the missing headers

Whilst trying to compile some code on my Linux box which, of course, is an IBM mainframe running Ubuntu, I was seeing this: -

/usr/include/features.h:424:12: fatal error: sys/cdefs.h: No such file or directory
 #  include <sys/cdefs.h>
            ^~~~~~~~~~~~~
compilation terminated.

I'd started with a clean-but-minimised Ubuntu 18.04.2 LTS (GNU/Linux 4.15.0-36-generic s390x) environment, and had installed vim: -

sudo apt-get install vim

to create a test CPP source file: -

vi test.cpp

#include <stdio.h>
#include <stdlib.h>
int main()
{
        printf("TESTING");
        return 1;
}

but, when I attempted to compile it: -

g++ --verbose -o test test.cpp 

I saw the previously quoted exception.

I checked that I had libc installed: -

sudo apt-get install libc6-dev
sudo apt-get install libc6

I then checked for the supposedly missing file: -

sudo find / -name cdefs.h

which returned: -

/usr/include/sys/cdefs.h

ls -al /usr/include/sys/cdefs.h

lrwxrwxrwx 1 root root 30 Apr 16  2018 /usr/include/sys/cdefs.h -> ../s390x-linux-gnu/sys/cdefs.h

which gave me a clue ...

After some digging around, I found a post on AskUbuntu which referenced the apt-file command: -

sudo apt-get install apt-file
sudo apt-file update

Having installed it, I ran it: -

apt-file find cdefs.h|grep s390

which showed: -

libc6-dev: /usr/include/s390x-linux-gnu/sys/cdefs.h
libc6-dev-s390: /usr/include/sys/cdefs.h
libc6-dev-s390x-cross: /usr/s390x-linux-gnu/include/sys/cdefs.h

Taking a leap o' faith, I installed the s390 element of libc6-dev: -

sudo apt-get install libc6-dev-s390

but to no avail.

I then did the same for the s390x-linux-gnu element: -

sudo apt-get install libc6-dev-s390x-cross

which did the job.

I'm now able to compile my test module and, more importantly, I'm able to build the Docker image that led me down this particular rabbit hole ( as it uses LuaJIT )
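
As a footnote, for files belonging to packages that are already installed, dpkg -S offers a quicker way to map a file back to its owning package: -

dpkg -S /usr/include/s390x-linux-gnu/sys/cdefs.h

apt-file still wins here, though, as it can also search packages that are not yet installed, which was exactly the situation above.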

Saturday, 1 June 2019

IBM Cloud Private 3.2 is out ! With added Multicloud Manager....

From the announcement letter here: -

IBM Cloud Private V3.2 adds IBM Multicloud Manager, an integrated solution purpose-built to help modernize your applications to cloud native deployments

and the feature list includes: -

IBM Cloud Private V3.2 brings an integrated platform for developing, modernizing, and managing containerized applications:

  •     Provides an integrated cloud platform for enterprise workloads that need to be securely run behind your firewalls
  •     Enables development and production of cloud native applications in a private cloud
  •     Enables refactoring and modernization of monolithic or legacy enterprise applications
  •     Provides security-rich toolsets to integrate to public cloud services from within your data centers
  •     Features application analytics, integration, monitoring, and security tools that are ready for immediate use to consistently manage IBM and non-IBM based workloads, providing a consistent and resilient way to build, deploy, and manage applications
  •     Includes IBM Cloud Automation Manager, IBM Microclimate, IBM Transformation Advisor, and IBM Vulnerability Advisor
  •     Brings one cloud experience for clients with hybrid cloud integration
  •     Provides IBM Content for Red Hat OpenShift Container Platform (RH OCP) - Cloud Packs and Solution Packs
  •     Extends its use cases to edge computing, specifically edge servers and gateways
  •     Upgrades directly from previous versions of IBM Cloud Private V3.x.x to IBM Cloud Private V3.2

and: -

Multicloud Manager V3.2 is an enterprise-grade, multicloud, multicluster management solution, purpose-built to address the policy, compliance, and application management challenges of multiple clusters:
  •     Set and enforce polices for security, applications, and infrastructure (auto enforcement at cluster level)
  •     Streamline application management with Cross Cloud Security Dashboard, Management Console, and policy-based application movement
  •     Check compliance against deployment parameters, configuration, and policies
  •     Automatically remediate violations
  •     Deploy applications across clusters based on policy compliance, development versus test, and so on
  •     Automatically update monitoring dashboard based on deployment
  •     Understand failure dependencies and identify the affected system if a shared component fails

As a MainframerZ I'm aiming to run it on IBM Z as soon as I can ....

Watch this space .......

Saturday, 25 May 2019

Setting Authorization Headers in IBM API Connect Test and Monitor

I've recently started exploring the IBM API Connect Test and Monitor tool on IBM Cloud here: -

https://www.ibm.com/cloud/api-connect/api-test

and was having some fun n' games setting an HTTP Header, using either the basic HTTP Client or the more detailed Test Composer.

My API requires that one send an IAM token, rather than the more Basic Auth of user/password, which is nice :-)

This kept failing with an HTTP 403 Unauthorized.

Thankfully IBM have a nice active support community on Stack Overflow, so I posted a question here: -

Setting Authorization Headers in IBM API Connect Test and Monitor

and got some rapid feedback from the community.

In essence, I was doing it wrong ....

This is what I needed to do: -

In other words, for the Key/Value Pair (KVP) that comprises the HTTP Header field, I needed to specify Authorization as the key and Bearer XXXXXXXXXXXXXXX as the value ( with a space between the word Bearer and the actual IAM token itself ).

So I was holding it wrong .... #Doofus
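
For comparison, this is the same header expressed as a cURL call; the endpoint here is purely a placeholder: -

curl -H "Authorization: Bearer XXXXXXXXXXXXXXX" https://example.com/my/api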

Apart from the basic HTTP Client, there's also a very spiffing Test Composer, that has the same requirement wrt Authorisation headers: -

For the record, there's also a GitHub project for a native client here: -

https://ibm-apiconnect.github.io/test-and-monitor/

with which I've played a wee bit .....

Wednesday, 15 May 2019

LinuxONE for Dummies


As more companies transform their infrastructures with hybrid cloud services, they require environments that protect the safety of their intellectual property, such as data and business rules.

LinuxONE is a hardware system designed to support and exploit the Linux operating system based on the value of its unique underlying architecture. In this book, you’ll learn how to

• Secure your data and spark innovation

• Leverage a large partner ecosystem

• Dynamically scale to meet your business needs

Download now to discover an enterprise-grade Linux server with a unique architecture.

LinuxONE for Dummies

Thursday, 9 May 2019

Postman on macOS - Where are my Windows ?

Having used Postman for the past few months, mainly to poke REST APIs, I hit an interesting issue last night, whereby the main Postman window refused to appear even though I'd left it open when I suspended my Mac ( closed the lid ) before leaving the office.

Now, for clarity, at work I use an external Lenovo monitor, attached via USB-C / HDMI ( using the Apple dongle ), which is positioned to the right of the MacBook Pro itself.

When I got home, I plugged in a different AOC monitor, again via USB-C / HDMI ( using a different non-Apple dongle ), again with the MBP on the left of the monitor ( "On your left" ).

Despite stopping/starting Postman multiple times, and fiddling around with the various options available on the View and Window menus, I couldn't get the main window to appear ....

Thankfully, the internet brought me to this ....

Postman opens off screen(s) when display monitors are changed #2833

which said, in part: -

...
For the moment, if you guys see this, you can reset the app's window settings manually. To do this, you'll need to delete the requester.json file from the app's data directory, this is located:

on macOS: ~/Library/Application\ Support/Postman
on Windows: C:\Users\<username>\AppData\Roaming\Postman
on Linux: ~/.config/Postman
...

and: -

...
In fact, you also need to delete the window file in the same directory.
...

Having quit Postman, I dived into Terminal ( because, Terminal ) and did this: -

rm ~/Library/Application\ Support/Postman/storage/requester.json 
rm ~/Library/Application\ Support/Postman/window 

and started Postman again .... et voilà, here's a Postman window :-)

For the record, I'm using Postman 7.0.9, which is interesting 'cos that post is circa 2017 ....

Tuesday, 30 April 2019

Go and Glide - Problems with Update - Cannot detect VCS

I saw this today: -

glide update

which returns: -

[WARN] Unable to checkout crypto/tls
[ERROR] Error looking for crypto/tls: Cannot detect VCS

I'm using Glide 0.13.2, and Go versions 1.10.6 and 1.11.5.

Using the more detailed glide --debug update, I found a wee bit more detail: -

[DEBUG] ImportDir error on /Users/hayd/.glide/cache/src/https-crypto-tls: cannot find package "." in:
/Users/hayd/.glide/cache/src/https-crypto-tls

*BUT* this did lead me to look at my Git source folder -  $GOPATH/src - which, amongst other things had this: -

drwxr-xr-x   3 hayd  staff   96 29 Mar 19:31 crypto

which was a directory containing a single empty subdirectory: -

drwxr-xr-x  2 hayd  staff   64 29 Mar 19:33 tls

So this kinda tied up with the symptom shown in the --debug trace i.e. cannot find package "." even though it was looking in a completely different place.

Once I did rm -Rf $GOPATH/src/crypto, the glide update worked a treat.

I'd previously gone down a rabbit hole with glide clear-cache and glide mirror, both of which were poisson rouge.

Hope this helps others in the same situation :-)

Friday, 26 April 2019

Input validation of REST requests using GoLang

This is an ongoing W-I-P, as I look at a set of RESTful services that I'm co-developing using Go ( aka GoLang ).

Two things that have been of immense use are: -

Validating Data Structures And Variables In Golang

Regular Expressions 101

the latter of which has been very useful in terms of creating/testing/validating Regular Expressions ( aka RegExp ).

For me, the key thing is to be able to create validation rules within the JSON data structures, and then use the Go-Validator GoLang module/plugin/add-on: -

https://github.com/go-validator/validator

Specifically, I'm using the latest version 9 ( v9 ): -

"gopkg.in/go-playground/validator.v9"

Loving my job :-)

Tuesday, 23 April 2019

The Modern Mainframe Developer Hands On Drop-In Centre - Thursday, May 2, 2019

Come along at any time, grab a machine and away you go. Refreshments and snacks will be provided. This first meetup is funded by IBM.

The following labs will be provided:

> Lab: IBM Eclipse Mainframe Development for z Systems
> Lab: REST APIs for Mainframe Applications Using z/OS Connect
> Lab: Git for Mainframe Applications
> Lab: Mainframe Code Analyzer Tools

The Modern Mainframe Developer Hands On Drop-In Centre

Sunday, 21 April 2019

MainframerZ meetup at Mediaocean, London - Thursday 20 June 2019


Come along to meet other Z professionals, and grow your network. Listen to a series of lightning talks from a range of people currently working in the Z space, and help shape the future of this group.

The lightning talks will be 10 minutes each covering a broad range of topics.

Tentative Agenda

6:15 - 6:45 Arrival and registration
7:00 - 7:10 Introductions
7:10 - 8:10 Lightning talks
8:10 - 8:45 Pizza and networking
8:45 - 9:20 Discussion
9:20 - 9:30 Wrap up

I'm honoured to be one of the speakers for the Lightning talks, so it'd be awesome to see you there, and meet IRL

MainframerZ meetup at Mediaocean

Saturday, 20 April 2019

More tales from a GoLang newbie ... expected 'IDENT', found 'break'

I'm adopting, and loving, Microsoft Visual Studio Code ( VSCode ) for all my GoLang needs, but was somewhat confused by this: -

expected 'IDENT', found 'break'


This is my code: -

package break

import (
"fmt"
)

func BreakGoTest() {
snafu := []interface{}{"First", "Second", "Third"}
fmt.Println(snafu...)
}

Can you see what I did wrong ?

Yep, I've named my package .... break .... which is a reserved word.

Once I changed my package name: -

package breaker

import (
"fmt"
)

// BreakGoTest - This function does stuff
func BreakGoTest() {
snafu := []interface{}{"First", "Second", "Third"}
fmt.Println(snafu...)
}

all is well.

For the record, this code is merely to allow me to test go fmt .....

Wednesday, 10 April 2019

GoLang - weirdness with "panic: assignment to entry in nil map"

I kept seeing this: -

--- FAIL: TestClient (0.00s)
panic: assignment to entry in nil map [recovered]
panic: assignment to entry in nil map

goroutine 5 [running]:
testing.tRunner.func1(0xc42011e0f0)
/Users/hayd/Downloads/go/src/testing/testing.go:742 +0x29d
panic(0x128d080, 0x1314b50)
/Users/hayd/Downloads/go/src/runtime/panic.go:502 +0x229
net/textproto.MIMEHeader.Add(0x0, 0x12e2d99, 0xd, 0x12e1ac2, 0x8)
/Users/hayd/Downloads/go/src/net/textproto/header.go:15 +0xec
net/http.Header.Add(0x0, 0x12e2f05, 0xd, 0x12e1ac2, 0x8)
/Users/hayd/Downloads/go/src/net/http/header.go:24 +0x53
github.com/david-hay/GoStuff/cmd/sparkles.glob..func3(0x12e1aba, 0x8, 0x12e1ea3, 0x9, 0xc42006c800, 0x0, 0x0, 0x0, 0x0)
/Users/hayd/go/src/github.com/david-hay/GoStuff/cmd/sparkles/sparkles.go:38 +0xe1
github.com/david-hay/GoStuff/cmd/sparkles.TestClient(0xc42011e0f0)
/Users/hayd/go/src/github.com/david-hay/GoStuff/cmd/sparkles/sparkles_test.go:42 +0xc6
testing.tRunner(0xc42011e0f0, 0x12f4678)
/Users/hayd/Downloads/go/src/testing/testing.go:777 +0xd0
created by testing.(*T).Run
/Users/hayd/Downloads/go/src/testing/testing.go:824 +0x2e0
FAIL github.com/david-hay/GoStuff/cmd/sparkles 0.033s

when attempting to test my very basic HTTP code, using Visual Studio Code or just through the GoLang CLI: -

go test ./...

or: -

go test -v ./...

Looking at the stack trace, I saw reference to line 38 of my code: -

req.Header.Add("cache-control", "no-cache")

I commented that out, and the test just ran :-)

Not sure why at the time, given that I didn't think I was actually using a Map; with hindsight, though, Go's http.Header type is declared as map[string][]string, and the 0x0 first argument to MIMEHeader.Add in the stack trace shows that my request had a nil Header, so Header.Add was assigning into a nil map after all.

C'est la vie

Friday, 5 April 2019

Seeing the tabs in Vi

I was trying to resolve some tabs vs. spaces issues in my code, and realised that I was using a combination of both.

Thankfully vi has a recipe for that.

In my ~/.vimrc file, I added this: -

set list
set listchars=tab:>-

and now I can see this: -


Nice !

Thursday, 4 April 2019

GoLang and the unexpected exit

As with most of my career to-date, I'm on a learning curve ...

This time, it's GoLang, and I'm learning at a fast pace AND loving it !

I was trying to work out why my tests: -

go test ./...

or the more verbose: -

go test -v ./...

were failing with: -

exit status 1

rather than a more useful Panic message.

This helped: -

golang test exit status -1 and shows nothing

by helping me realise that my code had: -

log.Fatal

rather than: -

log.Panic

Once I fixed my code, life got a WHOLE lot better.

#LifeIsGood

#EveryDayIsASchoolDay

Friday, 29 March 2019

IBM Cloud Hyper Protect Crypto Services is Now Available on IBM Public Cloud

A new offering from my IBM Z organisation: -

...
IBM offers now two choices for key management. IBM Cloud Key Protect supports Bring Your Own Key (BYOK) for protecting data at rest. Today, IBM Cloud is announcing the general availability of IBM Cloud Hyper Protect Crypto Services, a dedicated Key Management and Cloud HSM Service designed especially for customers looking for greater control over their data encryption keys and the hardware security modules (HSMs) that protect these keys. The service is now available in US South region, based out of Dallas, Texas.

Hyper Protect Crypto Services supports Keep Your Own Key (KYOK), which allows data encryption keys to be protected by a dedicated, customer-controlled HSM that uses FIPS 140-2 Level 4 certified hardware. Built on IBM LinuxONE technology and being part of the IBM Cloud Hyper Protect portfolio of services, this service guarantees that privileged users—including IBM Cloud administrators—have no access to customer keys. This provides an ideal base to onboard sensitive apps to the cloud. Key Protect and IBM Cloud Hyper Protect Crypto Services use a common Key Provider API to provide a consistent approach for managing keys.
...
High availability and disaster recovery: IBM Cloud Hyper Protect Crypto Services, which now supports three availability zones in a selected region, is a highly available service with automatic features that help keep your applications secure and operational. You can create IBM Cloud Hyper Protect Crypto Services resources in the supported IBM Cloud regions, which represent the geographic area where your IBM Cloud Hyper Protect Crypto Services requests are handled and processed.

Scalability: The service instance can be scaled out to a maximum of six crypto units to meet your performance requirement. Each crypto unit can crypto-process 5,000 keys. In a production environment, it is recommended to select at least two crypto units to enable high availability. By selecting three or more crypto units, these crypto units are distributed among three availability zones in the selected region.
...

 IBM Cloud Hyper Protect Crypto Services is Now Available on IBM Public Cloud

Friday, 22 March 2019

Tinkering with Docker manifests ? You need Manifest Tool and MQuery

MQuery

A simple utility and backend for querying Docker v2 API-supporting registry images and reporting on "manifest list" multi-platform image support.

This project uses IBM Cloud Functions (built on OpenWhisk) as a backend, in concert with the manifest-tool inspect capability (packaged as a Docker function) to easily report on the status of whether an image is a manifest list entry in the registry, and if so, what architecture/os pairs are supported by the image.

https://github.com/estesp/mquery

docker run --rm mplatform/mquery mplatform/mquery

Image: mplatform/mquery
 * Manifest List: Yes
 * Supported platforms:
   - linux/amd64
   - linux/arm
   - linux/arm64
   - linux/ppc64le
   - linux/s390x
   - windows/amd64:10.0.14393.1593

manifest-tool

manifest-tool is a command line utility that implements a portion of the client side of the Docker registry v2.2 API for interacting with manifest objects in a registry conforming to that specification.

This tool was mainly created for the purpose of viewing, creating, and pushing the new manifests list object type in the Docker registry. Manifest lists are defined in the v2.2 image specification and exist mainly for the purpose of supporting multi-architecture and/or multi-platform images within a Docker registry.

https://github.com/estesp/manifest-tool

docker run --rm mplatform/mquery mplatform/manifest-tool:latest

Image: mplatform/manifest-tool:latest
 * Manifest List: Yes
 * Supported platforms:
   - linux/amd64
   - linux/arm
   - linux/arm64
   - linux/ppc64le
   - linux/s390x
   - windows/amd64:10.0.14393.2312

Tainting and "untainting" nodes in a Kubernetes cluster

Having been tinkering with the taint function on an x86 node in an IBM Kubernetes Service (IKS) cluster, to force my pods to deploy onto another node in the same cluster: -

kubectl taint node node1 node1=DoNotSchedulePods:NoExecute

I was looking for an easy way to reverse the taint ( "untaint" ), and found this: -

kubectl patch node node1 -p '{"spec":{"taints":[]}}'

with thanks to this: -
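
For the record, one can confirm a node's current taints before and after, via a spot of jsonpath: -

kubectl get node node1 -o jsonpath='{.spec.taints}'

An empty result means the patch did its job.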

Tuesday, 12 March 2019

IBM Notes 9 - How to munge the Shortcut Buttons

For too long, I've been trying to remember how I can add shortcuts to the IBM Notes client, alongside the existing two shortcuts: -

Guess what ?

It's easy !

This rather nice all-in-one IBM Notes 9 tutorial one-pager: -

https://www.quicksourcelearning.com/images/samplepdfs/978193551845.pdf

reminded me.

It's this :-)


So now I have mail, calendar AND contacts: -


Yay!

Wednesday, 27 February 2019

Kubernetes tooling - tinkering with versions

Having built a new Kubernetes cluster on the IBM Kubernetes Service (IKS), which reports as version 1.11.7_1543 within the IKS dashboard: -

https://cloud.ibm.com/containers-kubernetes/clusters/

I'd noticed that the kubectl tool was out-of-sync with the cluster itself: -

kubectl version

Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:38:32Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.7+IKS", GitCommit:"498bc5434e4bdc2dafddf57b2e8496f1cbd054bc", GitTreeState:"clean", BuildDate:"2019-02-01T08:10:15Z", GoVersion:"go1.10.7", Compiler:"gc", Platform:"linux/amd64"}

Initially, I assumed (!) that it was covered by the IBM Cloud Plugins: -

Setting up the CLI and API

and checked my plugins: -

ibmcloud plugin list

Listing installed plug-ins...

Plugin Name                            Version   Status   
cloud-functions/wsk/functions/fn       1.0.29       
container-registry                     0.1.368      
container-service/kubernetes-service   0.2.53    Update Available   
dev                                    2.1.15       
sdk-gen                                0.1.12       

This appeared to confirm my suspicion so I updated the IKS plugin: -

ibmcloud plugin update kubernetes-service

Plug-in 'container-service/kubernetes-service 0.2.53' was installed.
Checking upgrades for plug-in 'container-service/kubernetes-service' from repository 'IBM Cloud'...
Update 'container-service/kubernetes-service 0.2.53' to 'container-service/kubernetes-service 0.2.61'
Attempting to download the binary file...
 23.10 MiB / 23.10 MiB [=====================================================================================================================================================] 100.00% 9s
24224568 bytes downloaded
Updating binary...
OK
The plug-in was successfully upgraded.

ibmcloud plugin list

Listing installed plug-ins...

Plugin Name                            Version   Status   
sdk-gen                                0.1.12       
cloud-functions/wsk/functions/fn       1.0.29       
container-registry                     0.1.368      
container-service/kubernetes-service   0.2.61       
dev                                    2.1.15       

BUT kubectl continued to show as back-level: -

kubectl version

Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:38:32Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.7+IKS", GitCommit:"498bc5434e4bdc2dafddf57b2e8496f1cbd054bc", GitTreeState:"clean", BuildDate:"2019-02-01T08:10:15Z", GoVersion:"go1.10.7", Compiler:"gc", Platform:"linux/amd64"}

Therefore, I chose to reinstall kubectl ( specifically using Homebrew, as I'm running on macOS ): -

brew install kubernetes-cli

Updating Homebrew...
==> Auto-updated Homebrew!
Updated 1 tap (homebrew/core).
==> New Formulae
cafeobj                                       homeassistant-cli                             re-flex                                       riff
==> Updated Formulae
go ✔                cfengine            closure-compiler    couchdb             dartsim             dhex                fx                  node-build          pulumi
apache-arrow        cflow               cmark-gfm           cpprestsdk          davix               dialog              git-lfs             numpy               shadowsocks-libev
axel                cfr-decompiler      cointop             cproto              dcd                 diffoscope          godep               openssl@1.1         ship
azure-cli           chakra              collector-sidecar   crc32c              ddrescue            diffstat            grafana             pandoc-citeproc     siege
bzt                 check_postgres      conan               cryptominisat       deark               digdag              kube-ps1            passenger
calicoctl           checkstyle          configen            cscope              debianutils         elektra             kustomize           pgweb
cdk                 chkrootkit          consul-template     czmq                deja-gnu            fabio               libtensorflow       pre-commit
cdogs-sdl           cli53               coturn              darcs               deployer            flake8              nginx               protoc-gen-go

==> Downloading https://homebrew.bintray.com/bottles/kubernetes-cli-1.13.3.mojave.bottle.tar.gz
######################################################################## 100.0%
==> Pouring kubernetes-cli-1.13.3.mojave.bottle.tar.gz
Error: The `brew link` step did not complete successfully
The formula built, but is not symlinked into /usr/local
Could not symlink bin/kubectl
Target /usr/local/bin/kubectl
already exists. You may want to remove it:
  rm '/usr/local/bin/kubectl'

To force the link and overwrite all conflicting files:
  brew link --overwrite kubernetes-cli

To list all files that would be deleted:
  brew link --overwrite --dry-run kubernetes-cli

Possible conflicting files are:
/usr/local/bin/kubectl -> /Applications/Docker.app/Contents/Resources/bin/kubectl
==> Caveats
Bash completion has been installed to:
  /usr/local/etc/bash_completion.d

zsh completions have been installed to:
  /usr/local/share/zsh/site-functions
==> Summary
🍺  /usr/local/Cellar/kubernetes-cli/1.13.3: 207 files, 43.7MB

Notice that it did NOT replace kubectl as it was already there :-)

So I chose to remove the existing kubectl : -

rm `which kubectl`

and then re-link: -

brew link kubernetes-cli

I then checked the version: -

kubectl version

Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-04T04:48:03Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.7+IKS", GitCommit:"498bc5434e4bdc2dafddf57b2e8496f1cbd054bc", GitTreeState:"clean", BuildDate:"2019-02-01T08:10:15Z", GoVersion:"go1.10.7", Compiler:"gc", Platform:"linux/amd64"}

so now kubectl is at a later version than the cluster ....

Let's see how it goes ....

*UPDATE*

I then read this: -

If you use a kubectl CLI version that does not match at least the major.minor version of your clusters, you might experience unexpected results. Make sure to keep your Kubernetes cluster and CLI versions up-to-date.

here: -

Setting up the CLI and API

and realised that the page actually includes a download link for the right major/minor version ( v1.11.7 ) of kubectl for macOS.

I downloaded this and replaced the existing version: -

mv ~/Downloads/kubectl  /usr/local/bin/

and then validated the versions: -

kubectl version

Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.7", GitCommit:"65ecaf0671341311ce6aea0edab46ee69f65d59e", GitTreeState:"clean", BuildDate:"2019-01-24T19:32:00Z", GoVersion:"go1.10.7", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.7+IKS", GitCommit:"498bc5434e4bdc2dafddf57b2e8496f1cbd054bc", GitTreeState:"clean", BuildDate:"2019-02-01T08:10:15Z", GoVersion:"go1.10.7", Compiler:"gc", Platform:"linux/amd64"}

which now match ( major/minor ).
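
As a quick sanity check, there's also kubectl version --short, which trims the output down to just the client and server version strings, making the comparison rather easier on the eye: -

kubectl version --short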

Nice !

I was repeatedly seeing this: - java.lang.IllegalStateException: There is no body to invoke with a Jenkins Pipeline that I was executing...