Monday, 30 October 2017

IBM Cloud Private - Docker, Ubuntu and Volumes

So this week I'm tinkering ( I love that word ) with IBM Cloud Private (ICP), and am planning to install the Community Edition (CE) variant on an Ubuntu VM on my Mac.

This is what I have: -
  • macOS 10.13 High Sierra
  • VMware Fusion 10.0.1
  • Ubuntu 17.10
  • Docker 17.06.1-ce
  • IBM Cloud Private 2.1.0
and I'm following the ICP installation instructions from here: -


I pulled the image: -

sudo docker pull ibmcom/icp-inception:2.1.0

Having previously created a target installation directory: -

sudo mkdir /opt/ibm-cloud-private-ce-2.1.0

and changed to that directory: -

cd /opt/ibm-cloud-private-ce-2.1.0

I then tried to start the image: -

sudo docker run -e LICENSE=accept \
  -v "$(pwd)":/data ibmcom/icp-inception:2.1.0 cp -r cluster /data

However, this didn't appear to do anything :-( 

I then dug further in: -

sudo bash
cd /opt/ibm-cloud-private-ce-2.1.0
docker run -e LICENSE=accept \
  -v "$(pwd)":/data ibmcom/icp-inception:2.1.0 cp -r cluster /data

which returned: -

docker: Error response from daemon: error while creating mount source path '/opt/ibm-cloud-private-ce-2.1.0': mkdir /opt/ibm-cloud-private-ce-2.1.0: read-only file system.

This made no sense, given that I'm effectively running as root :-(

I experimented further: -

docker run -it -v /opt/ibm-cloud-private-ce-2.1.0:/data -e LICENSE=accept ibmcom/icp-inception:2.1.0 /bin/bash

which resulted in much the same: -

docker: Error response from daemon: error while creating mount source path '/opt/ibm-cloud-private-ce-2.1.0': mkdir /opt/ibm-cloud-private-ce-2.1.0: read-only file system.

So, for the record, the switch -v /opt/ibm-cloud-private-ce-2.1.0:/data means that the host path ( /opt/ibm-cloud-private-ce-2.1.0 ) is mapped to the path inside the container ( /data ).
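
As a quick illustration of that syntax in isolation ( a hypothetical example - the paths, and the use of a stock alpine image, are mine, and not part of the ICP install ): -

mkdir -p /tmp/host-dir && echo "hello" > /tmp/host-dir/test.txt
docker run --rm -v /tmp/host-dir:/data alpine cat /data/test.txt

The second command prints "hello", because the host's /tmp/host-dir is visible inside the container as /data.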

I Googled about a bit: -


which made me wonder whether the problem was with the location, rather than the permissions of the target directory.

I tested this theory: -

mkdir ~/ibm-cloud-private-ce-2.1.0
cd ~/ibm-cloud-private-ce-2.1.0
sudo docker run -e LICENSE=accept \
  -v "$(pwd)":/data ibmcom/icp-inception:2.1.0 cp -r cluster /data

This worked without error, and I was able to confirm that the last part of the command: -

cp -r cluster /data

( which copies data OUT of the container INTO the local filesystem, as mapped using the -v switch ) had done its job.

This is how I validated it: -

pwd

/home/dave/ibm-cloud-private-ce-2.1.0

ls ~/ibm-cloud-private-ce-2.1.0/ -R

/home/dave/ibm-cloud-private-ce-2.1.0/:
cluster

/home/dave/ibm-cloud-private-ce-2.1.0/cluster:
config.yaml  hosts  misc  ssh_key

/home/dave/ibm-cloud-private-ce-2.1.0/cluster/misc:
ldap  storage_class

/home/dave/ibm-cloud-private-ce-2.1.0/cluster/misc/ldap:
cacert  keystone.ldap.conf

/home/dave/ibm-cloud-private-ce-2.1.0/cluster/misc/ldap/cacert:

/home/dave/ibm-cloud-private-ce-2.1.0/cluster/misc/storage_class:

So, the moral of the story appears to be that, for Docker on Ubuntu, it's not possible to map a volume into a container from a host directory that's NOT under the user's home directory.
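
If I wanted to prove that theory more rigorously, a minimal reproduction ( hypothetical test directories, re-using the same ICP image ) would look something like this: -

sudo mkdir -p /opt/volume-test
mkdir -p ~/volume-test
sudo docker run --rm -e LICENSE=accept -v /opt/volume-test:/data ibmcom/icp-inception:2.1.0 ls /data
sudo docker run --rm -e LICENSE=accept -v ~/volume-test:/data ibmcom/icp-inception:2.1.0 ls /data

with the expectation, based upon the behaviour above, that the first run fails with the "read-only file system" error and the second completes quietly.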

I'll dig further …..

Meantime, I can now continue with my ICP implementation ...

For the record, here's how I checked the versions on the Ubuntu VM: -

lsb_release -a

No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 17.10
Release: 17.10
Codename: artful


docker images

REPOSITORY             TAG                 IMAGE ID            CREATED             SIZE
ibmcom/icp-inception   2.1.0               fa65473d72d8        7 days ago          445 MB

docker version

Client:
 Version:      1.13.1
 API version:  1.26
 Go version:   go1.8.3
 Git commit:   092cba3
 Built:        Thu Oct 12 22:34:44 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.06.1-ce
 API version:  1.30 (minimum version 1.12)
 Go version:   go1.8.3
 Git commit:   5ff8f9c
 Built:        Fri Aug 18 14:48:14 2017
 OS/Arch:      linux/amd64
 Experimental: false

Thursday, 26 October 2017

IBM API Connect and IBM DataPower Gateway - The Fun Continues

As per previous posts, I'm continuing to enjoy the voyage of discovery that is IBM API Connect (APIC) and IBM DataPower Gateway (IDG).

This time it's me trying to understand (a) why things aren't working properly and (b) how APIC drives IDG.

Thus far, I've discovered that APIC connects to the XML Management Interface on the IDG, creates a new Domain ( with a semi-random name prefixed by APIMgmt_, e.g. APIMgmt_BF8B3A8C34 ), and then creates a pair of Multiprotocol Gateways (MPGs), named webapi-internal and webapi.
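
For reference, the XML Management Interface is the SOMA ( SOAP ) endpoint that, by default, listens on port 5550. As a sketch ( hypothetical credentials, and assuming the default port ), the domains that APIC has created can be listed with something like: -

curl -k -u admin:password https://192.168.1.200:5550/service/mgmt/current -d '<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"><soapenv:Body><dp:request xmlns:dp="http://www.datapower.com/schemas/management" domain="default"><dp:get-status class="DomainStatus"/></dp:request></soapenv:Body></soapenv:Envelope>'

which should return a DomainStatus entry per domain, including the APIMgmt_BF8B3A8C34 domain mentioned above.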

All seems fine but ….

I'm seeing this in the IDG logs: -

1,20171026T152710.907Z,APIMgmt_BF8B3A8C34,network,error,xmlmgr,webapi-wcc,36511,,,0xb30009,,,"Host connection could not be established"
1,20171026T152710.907Z,default,network,error,,,1759,,,0x80e00173,,,"TCP connection attempt refused from 127.0.0.1 to 127.58.140.52 port 2444"
1,20171026T152710.907Z,APIMgmt_BF8B3A8C34,network,error,xmlmgr,webapi-wcc,36511,,,0x80e00049,,,"Host connection failed to establish: 127.58.140.52 : tcp port 2444"
1,20171026T152710.907Z,APIMgmt_BF8B3A8C34,network,error,,,36511,,,0x80e00627,,,"Error occurred (port error) when connecting to URL 'http://127.58.140.52:2444/ODCInfo/ODCInfo?c=analytics-lb'"
1,20171026T152710.907Z,APIMgmt_BF8B3A8C34,wcc,warn,wcc-service,webapi-wcc,36511,,,0x80e0053c,,,"Request for WebSphere Cell information failed: Empty result set"


which is weird because my IDG has a static IP address of 192.168.1.200.

( How I got that address is a whole other blog post - ask me about VMware, Bridged networking and DHCP ! )

So I dug further into the configuration, via the IDG UI: -


and then searched for that particular IP address - 127.58.140.52 - which occurred in three places: -





So I dug into the configuration further: -


and found this: -


I changed the address to the correct address of the IDG ( yes, and I've NO idea why APIC / IDG thinks that we have a WebSphere Application Server (WAS) cell in the mix )….

I'm not yet fully there … but I'm learning as I go.

For the record, I did some other tinkering to IDG: -

Add hostname aliases for the APIC Management and Developer Portal boxes

config; dns; static-host management.uk.ibm.com 192.168.1.150; static-host portal.uk.ibm.com 192.168.1.151; static-host datapower.uk.ibm.com 192.168.1.200; exit; write mem; exit;

config; dns; show; exit; exit

Global configuration mode
Modify DNS Settings configuration

 admin-state enabled 
 name-server 8.8.8.8 53 53 3 
 static-host datapower.uk.ibm.com 192.168.1.200 "" 
 static-host localhost 127.0.0.1 "" 
 static-host management.uk.ibm.com 192.168.1.150 "" 
 static-host portal.uk.ibm.com 192.168.1.151 "" 
 force-ip-preference off 
 load-balance round-robin 
 retries 2 
 timeout 5 Seconds


test tcp-connection management.uk.ibm.com 443

TCP connection successful

test tcp-connection portal.uk.ibm.com 443

TCP connection successful

test tcp-connection datapower.uk.ibm.com 8443

TCP connection successful

Check the internal IDG load balancers

config; show domains

 Domain             Needs save File capture Debug log Probe enabled Diagnostics Command Quiesce state Interface state Failsafe mode 
 ------------------ ---------- ------------ --------- ------------- ----------- ------- ------------- --------------- ------------- 
 APIMgmt_BF8B3A8C34 off        off          off       off           off                               ok              none          
 default            off        off          off       off           off                               ok              none          


switch domain APIMgmt_BF8B3A8C34

show loadbalancer-st

 Group        Host          Port Operational state Weight Administrative state 
 ------------ ------------- ---- ----------------- ------ -------------------- 
 analytics-lb 192.168.1.150 9443 up                20     enabled              
 mgmt-lb      192.168.1.150 0    up                20     enabled              




IBM API Connect - Debugging

This is another of those work-in-progress posts, but I'm hitting an issue testing an API that I've created using IBM API Connect 5.0.7.2.

I've developed the API using an existing Web Service running on IBM Bluemix ( it's actually the IBM ODM Rules Service ), and this is a long-used Hello World Rule that I created a few years ago ( I even have a post or two for that ).

During the debugging phase, I wanted to check the logs that the API Manager ( aka Cloud Manager Console - CMC ) was producing.

I'm logged into the CMC via SSH: -

ssh admin@management

and am watching the CMC logs go by: -

debug tail file /var/log/cmc.out

...
2017-10-26 13:43:02.666 SEVERE [T-2226] [com.ibm.apimgmt.exception.APIGenericException$Serializer.serialize] Error(59f1e666e4b07d97f084e8ff), Http-Code(401), Message(The HTTP request requires user authentication. Please repeat the request with a suitable Authorization header field.), User(), Path(get:/catalogs/59f1e293e4b07d97f084e7fd/webhooks)

I realised that, during the API Assembly phase, I'd NOT specified the credentials that the ODM Rule Service requires.
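
As a quick sanity check, those credentials can also be exercised directly against the Rule Service, bypassing APIC altogether - something along these lines, where the URL and credentials are just placeholders for the real Bluemix endpoint: -

curl -u odmUser:odmPassword -H "Content-Type: application/json" -d '{"name":"Dave"}' https://rules.example.mybluemix.net/DecisionService/rest/v1/HelloWorld/1.0

A 2xx response confirms the credentials are good; a 401 means they're missing or wrong.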

I fixed this by selecting the Proxy component in my assembly diagram: -


and entering the credentials: -


This has got me further forward ….

… it's still not working, but I suspect that's a problem between the API Manager and the DataPower Gateway ….

Wednesday, 25 October 2017

IBM API Connect and the SshClientException

This post represents a frustrating, but extremely enjoyable and interesting, voyage of discovery, digging into problems with SSH on Ubuntu Linux …..

I saw this whilst attempting to create a new Developer Portal for an IBM API Connect 5.0.7.2 implementation: -

For the record, here's the text of the exception: -

Error

Error while performing action add during communication to the Advanced Portal. Please report this error to your server administrator. Error details: com.ibm.apimgmt.api.util.SshClient$SshClientException: An exception occurred during SSH call: com.jcraft.jsch.JSchException: Algorithm negotiation fail.
Error ID: 59ef3f69e4b07d97f084e2ee


I checked the authorized_keys file on the Portal box: -

cat /home/admin/.ssh/authorized_keys 

command="/home/admin/bin/site_action" ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCyA1P0bv68VRylLHGwNF+aRYR5FCYAtTJQYRPfbAaE286gPseddNEME0vCxQkAGwqUJX7hZRKNdplw9/o67BHbEedkF6d4O8JPON2ZkFPQTv+cxAhFkDLt86ClRlstvxroqbHYwsRKOl/cOVW/88LEQ90UuQVunUQYanI4A6AJZZ8OvxN+/pgq/bHZULfKF5148IKwY9/90uuavwp6t4Jjm62d2UOHplRv6LiT+qPY2Iykncmqr85X0riUExqwkyyOoVextC450Ui10bMFeQYO4KS0cTHTKd0LuiLUopy4hYmDbyJXNa9t6H6mQVe+P+MjAmJNKx8j4xZqZvojiwUf apim_advanced_portal_ssh_key

which matched that specified within the API Manager Cloud Manager UI: -


I even tried upgrading from the older version of the Developer Portal ( 5.0.7.2 based upon Debian 7 ) to the latest fix pack ( 5.0.8.0 based upon Ubuntu 16.04.3 LTS ), but to no avail.

As this is a test environment, running on my own Beast box, I set the Portal to trust ALL certificates: -

set_apim_cert -i

WARNING: This should only be used for development and testing purposes as it is not secure and leaves the Developer Portal exposed to a man-in-the-middle attack.

and checked the status: -

status

Operating System: Ubuntu 16.04.3 LTS
System version: 7.x-5.0.8.0-20170908-0855
Distribution version: 7.x-5.0.8.0-20170907-2206

Free disk space: 22G
 RAM Free/Total: 1941 MB / 3951 MB (49% free)
   Set Hostname: OK
     DNS Server: Reachable (8.8.8.8)
   APIC SSH Key: OK

Configuration:
  APIC Hostname: management.uk.ibm.com
  APIC IP: 192.168.1.150
  Devportal Hostname: portal.uk.ibm.com
  Devportal IP: 192.168.1.151
  APIC Certificate Status (Insecure): WARNING - Only suitable for development and PoC purposes.

Node is standalone

Site web check: All sites OK

Site services:
         Webhooks: All sites Up
  Background sync: All sites Up

Services:
  Queue                      is Up
  Database   [Mysql]         is Up (Standalone)
  Web Server [Nginx]         is Up
  PHP Pool   [Php7.0-fpm]    is Up
  Inetd      [Openbsd-inetd] is Up
  REST       [Restservice]   is Up

SUCCESS: All services are Up.


It took me a while, but I worked out how to debug …

Having switched to the Ubuntu version of the Developer Portal, I was able to turn on debugging in the SSH Daemon ( SSHD ), by editing the sshd_config file: -

sudo vi /etc/ssh/sshd_config 

and changing the logging level from: -

# Logging
SyslogFacility AUTH
LogLevel INFO

to: -

# Logging
SyslogFacility AUTH
#LogLevel INFO
LogLevel DEBUG3


I then restarted the SSHD service: -

sudo /etc/init.d/ssh restart

[ ok ] Restarting ssh (via systemctl): ssh.service.

and watched the logs whilst I reproduced the problem: -

tail -f /var/log/auth.log 

which gave me: -

...
Oct 25 08:55:12 portal sshd[24777]: debug1: kex: algorithm: (no match) [preauth]
Oct 25 08:55:12 portal sshd[24777]: fatal: Unable to negotiate with 192.168.1.150 port 52443: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1,diffie-hellman-group-exchange-sha1 [preauth]
Oct 25 08:55:12 portal sshd[24777]: debug1: do_cleanup [preauth]

...

I then checked the sshd_config file again: -

sudo vi /etc/ssh/sshd_config 

and looked at the KexAlgorithms line: -

KexAlgorithms ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1

which included both of the algorithms that the API Manager was offering ( diffie-hellman-group1-sha1 and diffie-hellman-group-exchange-sha1 ), so everything appeared to tie up nicely …. which was even more confusing :-(
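
For reference, there are a couple of quick ways to see what's actually in play on the Portal box - sshd -T dumps the effective server configuration ( rather than just what's in sshd_config ), and ssh -Q kex lists the key exchange algorithms that the local OpenSSH client understands: -

sudo sshd -T | grep -i kexalgorithms

ssh -Q kex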

And then …. 

IT JUST STARTED WORKING !!!

To prove it, I deleted the Developer Portal VM, and built it again from the original .OVA file ….

AND IT WORKED !!

First time, out of the box …

So I'm at a complete and utter loss to know what broke …

However, I learned shedloads in diagnosing the problem, so that's all good then :-)

For the record, the Ubuntu version of the Developer Portal ( 5.0.8.0 ) uses SSH-2.0-OpenSSH_7.2p2 - in case it becomes relevant down the line ….
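
( A quick way to check that, should it become relevant, is simply to read the banner that the SSH daemon presents on connection: -

nc portal.uk.ibm.com 22

the first line returned should be that SSH-2.0-OpenSSH_7.2p2 identification string - Ctrl-C to quit. )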

I'm now going to tinker some more, before ditching the APIC Management Server, and rebuilding that with the new 5.0.8.0 OVA file …

In the context of 5.0.8.0, this is what I've downloaded …

APIConnect_Management_5.0.8.0_20170905-1133_a7fe4cd1d442_c04798a.ova (2.99 GB)

5.0.8.0-APIConnect-Portal-Ubuntu16-20170908-0855.ova (879.64 MB)

from IBM Fix Central, as per this: -

Please note: The Linux distribution for the Developer Portal OVA has moved from a Debian V7 base to an Ubuntu V16.04 base. Support for the Debian V7 OVA is being withdrawn in May 2018. You are strongly encouraged to migrate your Developer Portal to the Ubuntu V16.04 base now, as support for Debian V7 upgrades will be removed by May 2018.


Friday, 20 October 2017

More on Elasticsearch, Logstash and Kibana (ELK)

Following earlier posts: -




I've had a brief play with a new ( to me ) Docker image, ELK: -


Collect, search and visualise log data with Elasticsearch, Logstash, and Kibana.

using this documentation: -


This time around, I built it using Docker Compose ( on my Mac ) : -

Create a Docker Compose YAML

vi docker-compose.yml 

elk:
  image: sebp/elk
  ports:
    - "5601:5601"
    - "9200:9200"
    - "5044:5044"


Spin up the Container

docker-compose up elk

Creating elk_elk_1 ... 
Creating elk_elk_1 ... done
Attaching to elk_elk_1
elk_1  |  * Starting periodic command scheduler cron
elk_1  |    ...done.
elk_1  |  * Starting Elasticsearch Server
elk_1  |    ...done.
elk_1  | waiting for Elasticsearch to be up (1/30)
elk_1  | waiting for Elasticsearch to be up (2/30)
elk_1  | waiting for Elasticsearch to be up (3/30)
elk_1  | waiting for Elasticsearch to be up (4/30)
elk_1  | waiting for Elasticsearch to be up (5/30)
elk_1  | waiting for Elasticsearch to be up (6/30)
elk_1  | waiting for Elasticsearch to be up (7/30)
elk_1  | Waiting for Elasticsearch cluster to respond (1/30)
elk_1  | logstash started.
elk_1  |  * Starting Kibana5
elk_1  |    ...done.
elk_1  | ==> /var/log/elasticsearch/elasticsearch.log <==
elk_1  | [2017-10-20T09:58:07,375][INFO ][o.e.p.PluginsService     ] [Q6xLn7b] no plugins loaded
elk_1  | [2017-10-20T09:58:09,062][INFO ][o.e.d.DiscoveryModule    ] [Q6xLn7b] using discovery type [zen]
elk_1  | [2017-10-20T09:58:09,753][INFO ][o.e.n.Node               ] initialized
elk_1  | [2017-10-20T09:58:09,753][INFO ][o.e.n.Node               ] [Q6xLn7b] starting ...
elk_1  | [2017-10-20T09:58:09,960][INFO ][o.e.t.TransportService   ] [Q6xLn7b] publish_address {172.17.0.2:9300}, bound_addresses {0.0.0.0:9300}
elk_1  | [2017-10-20T09:58:09,974][INFO ][o.e.b.BootstrapChecks    ] [Q6xLn7b] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
elk_1  | [2017-10-20T09:58:13,044][INFO ][o.e.c.s.ClusterService   ] [Q6xLn7b] new_master {Q6xLn7b}{Q6xLn7bNR66inZlv5JcUaQ}{HPqd_E_QSJ2eHModlSUT6A}{172.17.0.2}{172.17.0.2:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
elk_1  | [2017-10-20T09:58:13,080][INFO ][o.e.h.n.Netty4HttpServerTransport] [Q6xLn7b] publish_address {172.17.0.2:9200}, bound_addresses {0.0.0.0:9200}
elk_1  | [2017-10-20T09:58:13,080][INFO ][o.e.n.Node               ] [Q6xLn7b] started
elk_1  | [2017-10-20T09:58:13,143][INFO ][o.e.g.GatewayService     ] [Q6xLn7b] recovered [0] indices into cluster_state
elk_1  | 
elk_1  | ==> /var/log/logstash/logstash-plain.log <==
elk_1  | 
elk_1  | ==> /var/log/kibana/kibana5.log <==


See what's running

docker ps -a

CONTAINER ID        IMAGE                      COMMAND                  CREATED             STATUS                      PORTS                                                                              NAMES
be3d5ee65642        sebp/elk                   "/usr/local/bin/st..."   2 minutes ago       Up 2 minutes                0.0.0.0:5044->5044/tcp, 0.0.0.0:5601->5601/tcp, 0.0.0.0:9200->9200/tcp, 9300/tcp   elk_elk_1
4f54bc00b67d        websphere-liberty:wlp101   "/opt/ibm/docker/d..."   8 days ago          Exited (143) 45 hours ago                                                                                      dazzling_mestorf


Start a shell on the container

docker exec -it be3d5ee65642 /bin/bash

Pull the logs to the foreground

Note the subtle use of the single quotes ( ' ) around the pipeline definition

/opt/logstash/bin/logstash --path.data /tmp/logstash/data -e 'input { stdin { } } output { elasticsearch { hosts => ["localhost"] } }'

Sending Logstash's logs to /opt/logstash/logs which is now configured via log4j2.properties
[2017-10-20T10:24:30,729][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/opt/logstash/modules/fb_apache/configuration"}
[2017-10-20T10:24:30,741][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/opt/logstash/modules/netflow/configuration"}
[2017-10-20T10:24:31,388][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2017-10-20T10:24:31,390][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2017-10-20T10:24:31,516][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2017-10-20T10:24:31,607][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2017-10-20T10:24:31,614][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2017-10-20T10:24:31,620][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost"]}
[2017-10-20T10:24:31,623][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
[2017-10-20T10:24:31,792][INFO ][logstash.pipeline        ] Pipeline main started
The stdin plugin is now waiting for input:
[2017-10-20T10:24:31,976][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9601}


Send a test message

The Quick Brown Fox Jumped Over The Lazy Dog!

Check the log
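
I've not recorded the exact command here, but a catch-all query along these lines ( using the 9200 port published by the Compose file ) pulls back the indexed documents, giving something like: -

curl -s 'http://localhost:9200/_search?pretty'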


{
  "took" : 2,
  "timed_out" : false,
  "_shards" : {
    "total" : 6,
    "successful" : 6,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : 8,
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : ".kibana",
        "_type" : "config",
        "_id" : "5.6.3",
        "_score" : 1.0,
        "_source" : {
          "buildNum" : 15554
        }
      },
      {
        "_index" : "logstash-2017.10.20",
        "_type" : "logs",
        "_id" : "AV85PK0Ji95TIyQOdvFj",
        "_score" : 1.0,
        "_source" : {
          "@version" : "1",
          "host" : "be3d5ee65642",
          "@timestamp" : "2017-10-20T10:03:18.652Z",
          "message" : "this is a dummy entry"
        }
      },
      {
        "_index" : "logstash-2017.10.20",
        "_type" : "logs",
        "_id" : "AV85PT5gi95TIyQOdvFm",
        "_score" : 1.0,
        "_source" : {
          "@version" : "1",
          "host" : "be3d5ee65642",
          "@timestamp" : "2017-10-20T10:03:55.857Z",
          "message" : "I love it !"
        }
      },
      {
        "_index" : "logstash-2017.10.20",
        "_type" : "logs",
        "_id" : "AV85UBqvi95TIyQOdvFp",
        "_score" : 1.0,
        "_source" : {
          "@version" : "1",
          "host" : "be3d5ee65642",
          "@timestamp" : "2017-10-20T10:24:31.867Z",
          "message" : "Hello Fluffy"
        }
      },
      {
        "_index" : "logstash-2017.10.20",
        "_type" : "logs",
        "_id" : "AV85UWpEi95TIyQOdvFr",
        "_score" : 1.0,
        "_source" : {
          "@version" : "1",
          "host" : "be3d5ee65642",
          "@timestamp" : "2017-10-20T10:25:57.808Z",
          "message" : "The Quick Brown Fox Jumped Over The Lazy Dog!"
        }
      },
      {
        "_index" : "logstash-2017.10.20",
        "_type" : "logs",
        "_id" : "AV85PKzri95TIyQOdvFi",
        "_score" : 1.0,
        "_source" : {
          "@version" : "1",
          "host" : "be3d5ee65642",
          "@timestamp" : "2017-10-20T10:03:17.729Z",
          "message" : "this is a dummy entry"
        }
      },
      {
        "_index" : "logstash-2017.10.20",
        "_type" : "logs",
        "_id" : "AV85PK9Si95TIyQOdvFk",
        "_score" : 1.0,
        "_source" : {
          "@version" : "1",
          "host" : "be3d5ee65642",
          "@timestamp" : "2017-10-20T10:03:19.238Z",
          "message" : "this is a dummy entry"
        }
      },
      {
        "_index" : "logstash-2017.10.20",
        "_type" : "logs",
        "_id" : "AV85UCuXi95TIyQOdvFq",
        "_score" : 1.0,
        "_source" : {
          "@version" : "1",
          "host" : "be3d5ee65642",
          "@timestamp" : "2017-10-20T10:24:36.234Z",
          "message" : "Hello Fluffy"
        }
      }
    ]
  }
}

So we have Kibana running: -


and Elasticsearch: -
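
Both can be quickly checked from the Mac, using the ports published by the Compose file: -

curl http://localhost:9200

open http://localhost:5601

the former returns the Elasticsearch name / version / tagline JSON, and the latter opens the Kibana UI in the default browser.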


Next job is to wire my BPM Event Emitter up to this - but that's the easy part :-)

*UPDATE*

And, as expected, it just worked. I completed one of my running BPM processes, and immediately saw messages in Elasticsearch, including: -

elk_1  | [2017-10-20T13:14:49,377][INFO ][o.e.c.m.MetaDataCreateIndexService] [Q6xLn7b] [bpm-events] creating index, cause [api], templates [], shards [5]/[1], mappings []
elk_1  | [2017-10-20T13:14:50,641][INFO ][o.e.c.m.MetaDataMappingService] [Q6xLn7b] [bpm-events/cWG124C1SOqS4UR_6QQboA] create_mapping [ProcessEvent]
elk_1  | [2017-10-20T13:14:50,738][INFO ][o.e.c.m.MetaDataMappingService] [Q6xLn7b] [bpm-events/cWG124C1SOqS4UR_6QQboA] create_mapping [ActivityEvent]
elk_1  | [2017-10-20T13:14:50,828][INFO ][o.e.c.m.MetaDataCreateIndexService] [Q6xLn7b] [restore_task_index] creating index, cause [auto(bulk api)], templates [], shards [5]/[1], mappings []
elk_1  | [2017-10-20T13:14:52,022][INFO ][o.e.c.m.MetaDataMappingService] [Q6xLn7b] [restore_task_index/HMBr8hw4RAmDJrNzZCX-ag] create_mapping [configuration_type]
elk_1  | [2017-10-20T13:18:30,329][INFO ][o.e.c.m.MetaDataMappingService] [Q6xLn7b] [bpm-events/cWG124C1SOqS4UR_6QQboA] update_mapping [ActivityEvent]
elk_1  | [2017-10-20T13:18:38,529][INFO ][o.e.c.m.MetaDataMappingService] [Q6xLn7b] [bpm-events/cWG124C1SOqS4UR_6QQboA] update_mapping [ActivityEvent]
elk_1  | [2017-10-20T13:18:38,718][INFO ][o.e.c.m.MetaDataMappingService] [Q6xLn7b] [bpm-events/cWG124C1SOqS4UR_6QQboA] update_mapping [ProcessEvent]
elk_1  | [2017-10-20T13:18:38,810][INFO ][o.e.c.m.MetaDataMappingService] [Q6xLn7b] [bpm-events/cWG124C1SOqS4UR_6QQboA] update_mapping [ActivityEvent]

elk_1  | [2017-10-20T13:18:38,836][INFO ][o.e.c.m.MetaDataMappingService] [Q6xLn7b] [bpm-events/cWG124C1SOqS4UR_6QQboA] update_mapping [ActivityEvent]

which is nice.

Thursday, 19 October 2017

Zipping and Tarring on macOS - with added funkiness

So I had a specific requirement yesterday - I wanted to extract three specific files from a ZIP file.

This is what I had: -

unzip -l certificate-bundle.zip

Archive:  certificate-bundle.zip
  Length      Date    Time    Name
---------  ---------- -----   ----
        0  10-19-2017 16:58   ca/
     1310  10-19-2017 16:58   ca/ca.crt
     1679  10-19-2017 16:58   ca/ca.key
        0  10-19-2017 16:58   node1/
     1379  10-19-2017 16:58   node1/node1.crt
     1679  10-19-2017 16:58   node1/node1.key

---------                     -------
     6047                     6 files


So I wanted to extract the certificates and one of the keys …. and place them into specific locations

BUT…..

I didn't want the paths, just the files.

Whilst zip supports this: -

-j  junk paths (do not make directories) 

alas, I couldn't get unzip to do quite what I wanted.

Thankfully, the internet had the answer: -

How do I exclude absolute paths for Tar?

I knew that I could use tar on a ZIP file, but this was a nuance.

So here're the commands that I used: -

tar xvzf ~/certificate-bundle.zip --strip-components=1 -C ~/Desktop/elasticsearch-config/x-pack ca/ca.crt
tar xvzf ~/certificate-bundle.zip --strip-components=1 -C ~/Desktop/elasticsearch-config/x-pack node1/node1.crt
tar xvzf ~/certificate-bundle.zip --strip-components=1 -C ~/Desktop/elasticsearch-config/x-pack node1/node1.key


so we use --strip-components to remove the path and -C to place the files into specific locations.
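
As an aside, the same bsdtar that ships with macOS will also happily list the ZIP's contents, which is a handy way of working out which paths to extract: -

tar tvf ~/certificate-bundle.zip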

So that's all good then :-)



IBM BPM and Elasticsearch - with added TLS

Following this: -



I've been tinkering further with Elasticsearch on Docker, establishing a TLS connection between it and IBM BPM.

Here's my notes: -

Pull Image
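
( the pull itself presumably being along the lines of: -

docker pull docker.elastic.co/elasticsearch/elasticsearch:5.6.3

matching the image used in the run command below )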


Start container

es=`docker run -d -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:5.6.3`

Check logs

docker logs $es -f

Upload YAML for Certgen

docker cp ~/instances.yml $es:/usr/share/elasticsearch/config
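
I've not included my actual instances.yml above, but certgen expects a simple list of instance names plus their DNS names and/or IP addresses; a minimal sketch, re-using the node1 name and hostname that appear later in this post, would be: -

instances:
  - name: node1
    dns:
      - node1.uk.ibm.com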

Generate Self-Signed Certificate, plus Keys

docker exec -i -t $es /bin/bash -c "/usr/share/elasticsearch/bin/x-pack/certgen -in /usr/share/elasticsearch/config/instances.yml -out /usr/share/elasticsearch/certificate-bundle.zip"

Download Certificates

docker cp $es:/usr/share/elasticsearch/certificate-bundle.zip ~

Stop Container

docker stop $es

Remove Container

docker rm $es

Extract and place certificates and key

tar xvzf ~/certificate-bundle.zip --strip-components=1 -C ~/Desktop/elasticsearch-config/x-pack ca/ca.crt

tar xvzf ~/certificate-bundle.zip --strip-components=1 -C ~/Desktop/elasticsearch-config/x-pack node1/node1.crt

tar xvzf ~/certificate-bundle.zip --strip-components=1 -C ~/Desktop/elasticsearch-config/x-pack node1/node1.key

Re-start container

Note; we're mapping ~/Desktop/elasticsearch-config as the ES config root

es=`docker run -d -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" -v /Users/davidhay/Desktop/elasticsearch-config:/usr/share/elasticsearch/config docker.elastic.co/elasticsearch/elasticsearch:5.6.3`

Check logs

docker logs $es -f

Test using Curl - on host

curl --insecure https://localhost:9200 -u elastic:changeme

Should return: -

{
  "name" : "-2S40f4",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "zV8P1a4FR26Q_J_h1E0QKA",
  "version" : {
    "number" : "5.6.3",
    "build_hash" : "1a2f265",
    "build_date" : "2017-10-06T20:33:39.012Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.1"
  },
  "tagline" : "You Know, for Search"
}

or similar

Test using browser

Default credentials are elastic/changeme


Should return same JSON

Test on BPM box

Hostname node1.uk.ibm.com aliased to IP address of host Mac
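
The alias itself is just an /etc/hosts entry on the BPM box, along these lines ( the address shown here is a placeholder for the Mac's real IP ): -

192.168.1.64    node1.uk.ibm.com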

curl --insecure https://node1.uk.ibm.com:9200 -u elastic:changeme

{
  "name" : "-2S40f4",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "zV8P1a4FR26Q_J_h1E0QKA",
  "version" : {
    "number" : "5.6.3",
    "build_hash" : "1a2f265",
    "build_date" : "2017-10-06T20:33:39.012Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.1"
  },
  "tagline" : "You Know, for Search"
}

or similar

Place CA certificate on BPM box

scp ~/Desktop/elasticsearch-config/x-pack/ca.crt wasadmin@bpm86:~

Update BPM Event Emitter YAML files

vi /opt/ibm/WebSphereProfiles/Dmgr01/config/cells/PCCell1/nodes/Node1/servers/SupClusterMember1/analytics/config/BPMEventEmitter.yml

vi /opt/ibm/WebSphereProfiles/Dmgr01/config/cells/PCCell1/clusters/SupCluster/analytics/config/BPMEventEmitter.yml

ES configuration as follows: -

...
esConfiguration:
    enabled: true
    # The Elasticsearch index name
    index: bpm-events
    # Enable the following properties when Elasticsearch security is on.
    username: elastic
    password: changeme
    httpsTrustType: CRT
    trustFileLocation: /home/wasadmin/ca.crt
    hostnameVerifier: false
    esTaskIndex: restore_task_index
...

Synchronise Node

/opt/ibm/WebSphereProfiles/Dmgr01/bin/wsadmin.sh -lang jython -f fullSync.jy
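
I've not shown fullSync.jy itself; a minimal sketch of such a script - saving any outstanding configuration changes and then driving the NodeSync MBean - would be something like: -

# fullSync.jy - a sketch, not necessarily the actual script used above
AdminConfig.save()
nodeSync = AdminControl.completeObjectName('type=NodeSync,node=Node1,*')
AdminControl.invoke(nodeSync, 'sync')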

Validate Sync

ls -al `find /opt/ibm/WebSphereProfiles -name BPMEventEmitter.yml`

-rw-r--r-- 1 wasadmin wasadmins 2793 Oct 19 16:54 /opt/ibm/WebSphereProfiles/AppSrv01/config/cells/PCCell1/clusters/SupCluster/analytics/config/BPMEventEmitter.yml
-rw-r--r-- 1 wasadmin wasadmins 2793 Oct 19 16:54 /opt/ibm/WebSphereProfiles/AppSrv01/config/cells/PCCell1/nodes/Node1/servers/SupClusterMember1/analytics/config/BPMEventEmitter.yml
-rw-r--r-- 1 wasadmin wasadmins 2762 Sep 18 08:51 /opt/ibm/WebSphereProfiles/AppSrv01/installedApps/PCCell1/BPMEventEmitter_war_De1.ear/BPMEventEmitter.war/WEB-INF/classes/BPMEventEmitter.yml
-rw-r--r-- 1 wasadmin wasadmins 2797 Oct 19 17:19 /opt/ibm/WebSphereProfiles/Dmgr01/config/cells/PCCell1/clusters/SupCluster/analytics/config/BPMEventEmitter.yml
-rw-r--r-- 1 wasadmin wasadmins 2797 Oct 19 17:19 /opt/ibm/WebSphereProfiles/Dmgr01/config/cells/PCCell1/nodes/Node1/servers/SupClusterMember1/analytics/config/BPMEventEmitter.yml

All but BPMEventEmitter_war_De1.ear version of file should be the same size/date/time

Start App

/opt/ibm/WebSphereProfiles/Dmgr01/bin/wsadmin.sh -lang jython

AdminControl.invoke('WebSphere:name=ApplicationManager,process=SupClusterMember1,platform=proxy,node=Node1,version=8.5.5.12,type=ApplicationManager,mbeanIdentifier=ApplicationManager,cell=PCCell1,spec=1.0', 'startApplication', '[BPMEventEmitter_war_De1]')

quit

Check Logs

tail -f /opt/ibm/WebSphereProfiles/AppSrv01/logs/SupClusterMember1/SystemOut.log

Note

If you see this: -

Caused by: javax.net.ssl.SSLPeerUnverifiedException: Host name '9.174.27.153' does not match the certificate subject provided by the peer (CN=node1, DC=uk, DC=ibm, DC=com)

use: -

hostnameVerifier: false

in BPMEventEmitter.yml

Backup





Monday, 16 October 2017

Apple Watch - go, no go, go

So I had a weird experience last evening, and not in a good way.

For no apparent reason, this was my Apple Watch: -


and this: -


I have no earthly idea what happened.

So, being a true nerd, and a big fan of The IT Crowd, I decided to ( all together now ) TURN IT OFF AND ON AGAIN ….

Obviously I couldn't read the display, what with it being all garbled n' all, so I just hit the big button on the right-hand side, below the digital crown and chose the appropriate gibberish - it was the one in red, so it must've been the right one ? Right ?

WRONG !!

The next thing I know, my Apple Watch has called 999 ( the UK's emergency services number, similar to 911 in the USA ), and I'm talking to an operator, who's asking how he can help.

When I don't immediately respond ( panic has set in at this point ), he's saying "If you're unable to speak, please press a digit on your phone's dial" etc. assuming, for good reason, that I am injured and cannot respond :-(

I manage to find my voice, and tell him that all is well, and apologise profusely for wasting his time and our public resources ….

Then the house phone rings … and my beloved gets a recorded message telling her that Dave Hay has called the emergency services.

And then I get SMS messages on all my Apple devices …..

And then the home phone rings again, with yet another recorded message with my location ( thanks to Apple Maps ).

In short, the Apple ecosystem has kicked in to save me … even though there's nothing wrong with me, apart from my obvious inability to use Apple hardware.

Finally, I manage to power the watch off, set it on its charging stand, so it can reboot - and all seems well.

For the record, this is what I should've done: -


i.e. hit the FIRST rather than the THIRD control.

An update - the landline rang again today, 12 hours later, to tell my beloved that my location had changed - I wonder how much longer it's going to do that ……

This follows on from: - Lest I forget - how to install pip on Ubuntu I had reason to install podman  and skopeo  on an Ubuntu box: - lsb_rel...