Thursday, 29 March 2018

What's new in IBM Business Process Manager V8.6.0 cumulative fix 2018.03

IBM® Business Process Manager V8.6.0 Cumulative Fix 2018.03 is now available to download and upgrade to, and introduces a fair few changes: -

  • IBM BPM Platform Configuration
  • Modeling Enhancements in the web Process Designer
  • Developer Usability in the web Process Designer
  • Business UI
  • Process Analytics
  • ECM Integration
  • Documentation
  • IBM Process Federation Server

IBM X-Force Exchange

Definitely worth adding this to your frequent reading list: -

IBM X-Force Exchange

Frequently Asked Questions

What are the latest features released for X-Force Exchange?

Wednesday, 28 March 2018

Ooops, IBM BPM, JDBC and too many databases

Having started up a BPM 8.6 environment after a few weeks away from that particular VM, I'd ensured that I'd started the DB2 server BEFORE starting the Deployment Manager and Node Agent.

I then sought to validate the WAS -> DB2 connectivity using the Test Connection button against each of the BPM datasources.

This failed, and I saw this in the Node Agent's SystemOut.log file: -

...
[28/03/18 11:43:20:596 BST] 00000088 FfdcProvider  W com.ibm.ws.ffdc.impl.FfdcProvider logIncident FFDC1003I: FFDC Incident emitted on /opt/ibm/BPM/v8.5/profiles/Node01/logs/ffdc/nodeagent_c8275539_18.03.28_11.43.20.5927573313506751525980.txt com.ibm.ws.rsadapter.DSConfigHelper.getPooledConnection 568
[28/03/18 11:43:20:605 BST] 00000088 FfdcProvider  W com.ibm.ws.ffdc.impl.FfdcProvider logIncident FFDC1003I: FFDC Incident emitted on /opt/ibm/BPM/v8.5/profiles/Node01/logs/ffdc/nodeagent_c8275539_18.03.28_11.43.20.6032512775687274080769.txt com.ibm.ws.rsadapter.DSConfigurationHelper.testConnectionToDataSource 1486
[28/03/18 11:43:20:605 BST] 00000088 DSConfigurati W   DSRA8201W: DataSource Configuration: DSRA8040I: Failed to connect to the DataSource jdbc/mashupDS.  Encountered java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection DSRA0010E: SQL State = 08006, Error Code = 17,002.
java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection DSRA0010E: SQL State = 08006, Error Code = 17,002
Caused by: oracle.net.ns.NetException: The Network Adapter could not establish the connection
Caused by: java.net.ConnectException: Connection refused
[28/03/18 11:43:20:634 BST] 00000088 DataSourceCon E   DSRA8040I: Failed to connect to the DataSource "".  Encountered java.sql.SQLException: IO Error: The Network Adapter could not establish the connection DSRA0010E: SQL State = 08006, Error Code = 17,002
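With two database products on the same VM, it's worth checking which listeners are actually up before blaming the datasource configuration. A minimal sketch, assuming the default ports ( DB2: 50000, Oracle: 1521 ) and a bash with /dev/tcp support: -

```shell
# Probe a TCP port; exit 0 if something is listening, non-zero otherwise.
# Uses bash's built-in /dev/tcp, so neither telnet nor nc is required.
check_port() {
  timeout 2 bash -c "echo > /dev/tcp/$1/$2" 2>/dev/null
}

check_port localhost 50000 && echo "DB2 listener up"    || echo "DB2 listener DOWN"
check_port localhost 1521  && echo "Oracle listener up" || echo "Oracle listener DOWN"
```

Had I run this first, the "Connection refused" in the FFDC log would have pointed straight at the Oracle listener.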

I checked the datasource configuration: -



In other words, whilst this VM has DB2 installed, the BPM datasources actually point at Oracle :-)

I switched to the oracle user: -

su - oracle
Password: 
Last login: Mon Feb 26 15:06:36 GMT 2018


and started the listener: -

lsnrctl start LISTENER

LSNRCTL for Linux: Version 12.2.0.1.0 - Production on 28-MAR-2018 11:45:12

Copyright (c) 1991, 2016, Oracle.  All rights reserved.

Starting /home/oracle/app/oracle/product/12.2.0/dbhome_1/bin/tnslsnr: please wait...

TNSLSNR for Linux: Version 12.2.0.1.0 - Production
System parameter file is /home/oracle/app/oracle/product/12.2.0/dbhome_1/network/admin/listener.ora
Log messages written to /home/oracle/app/oracle/diag/tnslsnr/bpm86/listener/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=bpm86.uk.ibm.com)(PORT=1521)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=bpm856.uk.ibm.com)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 12.2.0.1.0 - Production
Start Date                28-MAR-2018 11:45:13
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /home/oracle/app/oracle/product/12.2.0/dbhome_1/network/admin/listener.ora
Listener Log File         /home/oracle/app/oracle/diag/tnslsnr/bpm86/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=bpm86.uk.ibm.com)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
The listener supports no services
The command completed successfully


and then started the database: -

sqlplus / as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Wed Mar 28 11:45:20 2018

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Connected to an idle instance.

SQL>
startup

ORACLE instance started.

Total System Global Area 1593835520 bytes
Fixed Size     8621184 bytes
Variable Size   973079424 bytes
Database Buffers   603979776 bytes
Redo Buffers     8155136 bytes
Database mounted.
Database opened.


and then validated the connection ( using Telnet ): -

telnet `hostname` 1521

Trying 192.168.153.131...
Connected to bpm86.uk.ibm.com.
Escape character is '^]'.
   
^]quit

telnet>
quit

Connection closed.

Finally, I re-tested the WAS to DB ( Oracle, NOT DB2 ) connection, which correctly returned: -


Moral of the story - too many databases !

Tuesday, 27 March 2018

Rational License Key Server - Some tinkering

In order to license my UrbanCode Deploy (UCD) box, I needed to: -

(a) generate a license key for UCD, using the Rational License Key Centre: -


(b) build/configure a Rational License Key Server (RLKS)

Without this, UCD rightly complains about being unlicensed: -


I built RLKS on a spare RHEL 7 box, as follows: -

Download

CIPQ0ML Rational License Key Server 8.1.4 for Linux x86 Multilingual

Validate

ls -al /tmp/RLKS_8.1.4_FOR_LINUX_X86_ML.zip 

-rw-r--r--. 1 root root 271406147 Mar 27 14:06 /tmp/RLKS_8.1.4_FOR_LINUX_X86_ML.zip

Extract

unzip /tmp/RLKS_8.1.4_FOR_LINUX_X86_ML.zip -d /tmp

Install IIM

/tmp/RLKSSERVER_SETUP_LINUX_X86/disk1/InstallerImage_linux_gtk_x86_64/installc -acceptLicense

Installed com.ibm.cic.agent_1.6.2000.20130301_2248 to the /opt/IBM/InstallationManager/eclipse directory.

See what's available to install

/opt/IBM/InstallationManager/eclipse/tools/imcl listAvailablePackages -repositories /tmp/RLKSSERVER_SETUP_LINUX_X86/disk1/

com.ibm.rational.license.key.server.linux.x86_8.1.4000.20130823_0513

Install RLKS

/opt/IBM/InstallationManager/eclipse/tools/imcl install com.ibm.rational.license.key.server.linux.x86_8.1.4000.20130823_0513 -repositories /tmp/RLKSSERVER_SETUP_LINUX_X86/disk1/ -acceptLicense

Installed com.ibm.rational.license.key.server.linux.x86_8.1.4000.20130823_0513 to the /opt/IBM/RationalRLKS directory.

Attempt to Start

/opt/IBM/RationalRLKS/config/start_lmgrd 

Starting IBM Rational License Key Server.

Check Logs

cat /opt/IBM/RationalRLKS/logs/lmgrd.log 

/opt/IBM/RationalRLKS/config/start_lmgrd: /opt/IBM/RationalRLKS/bin/lmgrd: /lib/ld-lsb.so.3: bad ELF interpreter: No such file or directory

Check Missing Dependency

yum provides /lib/ld-lsb.so.3

Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager
...
redhat-lsb-core-4.1-27.el7.i686 : LSB Core module support
Repo        : server
Matched from:
Filename    : /lib/ld-lsb.so.3


Install Missing Dependency

yum install -y redhat-lsb-core-4.1-27.el7.i686

Installed:
  redhat-lsb-core.i686 0:4.1-27.el7                                                                                                                                                                                                                                           

Dependency Installed:
  cups-client.x86_64 1:1.6.3-29.el7     cups-libs.x86_64 1:1.6.3-29.el7     m4.x86_64 0:1.4.16-10.el7      ncurses-libs.i686 0:5.9-14.20130511.el7_4     nspr.i686 0:4.13.1-1.0.el7_3     nss.i686 0:3.28.4-15.el7_4                       nss-pem.i686 0:1.0.3-4.el7    
  nss-softokn.i686 0:3.28.3-8.el7_4     nss-util.i686 0:3.28.4-3.el7        patch.x86_64 0:2.7.1-8.el7     psmisc.x86_64 0:22.20-15.el7                  readline.i686 0:6.2-10.el7       redhat-lsb-submod-security.i686 0:4.1-27.el7     spax.x86_64 0:1.5.2-13.el7    
  sqlite.i686 0:3.7.17-8.el7           

Complete!

Removed temporary configuration


Attempt to Start

/opt/IBM/RationalRLKS/config/start_lmgrd 

Starting IBM Rational License Key Server.

Check Logs

cat /opt/IBM/RationalRLKS/logs/lmgrd.log 

14:49:58 (lmgrd) -----------------------------------------------
14:49:58 (lmgrd)   Please Note:
14:49:58 (lmgrd) 
14:49:58 (lmgrd)   This log is intended for debug purposes only.
14:49:58 (lmgrd)   In order to capture accurate license
14:49:58 (lmgrd)   usage data into an organized repository,
14:49:58 (lmgrd)   please enable report logging. Use Flexera Software, Inc.'s
14:49:58 (lmgrd)   software license administration  solution,
14:49:58 (lmgrd)   FLEXnet Manager, to  readily gain visibility
14:49:58 (lmgrd)   into license usage data and to create
14:49:58 (lmgrd)   insightful reports on critical information like
14:49:58 (lmgrd)   license availability and usage. FLEXnet Manager
14:49:58 (lmgrd)   can be fully automated to run these reports on
14:49:58 (lmgrd)   schedule and can be used to track license
14:49:58 (lmgrd)   servers and usage across a heterogeneous
14:49:58 (lmgrd)   network of servers including Windows NT, Linux
14:49:58 (lmgrd)   and UNIX. Contact Flexera Software, Inc. at
14:49:58 (lmgrd)   www.flexerasoftware.com for more details on how to
14:49:58 (lmgrd)   obtain an evaluation copy of FLEXnet Manager
14:49:58 (lmgrd)   for your enterprise.
14:49:58 (lmgrd) 
14:49:58 (lmgrd) -----------------------------------------------
14:49:58 (lmgrd) 
14:49:58 (lmgrd) 
14:49:58 (lmgrd) The license server manager (lmgrd) running as root:
14:49:58 (lmgrd)  This is a potential security problem
14:49:58 (lmgrd)  and is not recommended.
14:49:58 (lmgrd) license manager: can't initialize:No SERVER lines in license file.
14:49:58 (lmgrd) License Path: "/opt/IBM/RationalRLKS/config/server_license.lic"
14:49:58 (lmgrd) FLEXnet Licensing error:-13,66
14:49:58 (lmgrd) For further information, refer to the FLEXnet Licensing documentation,available at "www.flexerasoftware.com".

Update License

vi /opt/IBM/RationalRLKS/config/server_license.lic

SERVER rlks.uk.ibm.com INTERNET=192.168.153.132 27000
VENDOR ibmratl
VENDOR telelogic
VENDOR rational
INCREMENT IBMUCD_PVU ibmratl 6.01 30-jan-2019 1 ISSUED=27-Mar-2018 \
NOTICE="Sales Order Number:ARGLE BARGLE GLOOP" \
SIGN="1056 0BA6 C70C A6E3 19BA 3876 575B EC21 D41E 0895 1F53 \
CAF4 8D72 4B75 24E7"

( note that this is NOT my license ! )
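Before restarting lmgrd, a quick sanity check on the key itself doesn't hurt. A small sketch ( plain awk, not an official RLKS tool ) that pulls the feature name and expiry date out of each INCREMENT line, so an expired key is obvious before the daemons are bounced: -

```shell
# FLEXnet INCREMENT line layout:
#   INCREMENT <feature> <vendor> <version> <expiry> <count> ...
# Print "feature / expires" for each INCREMENT line in the license file.
lic=${LIC:-/opt/IBM/RationalRLKS/config/server_license.lic}
if [ -f "$lic" ]; then
  awk '/^INCREMENT/ { print "feature:", $2, "expires:", $5 }' "$lic"
fi
```

For the license above, this would report feature IBMUCD_PVU expiring 30-jan-2019.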

Attempt to Start

/opt/IBM/RationalRLKS/config/start_lmgrd 

Starting IBM Rational License Key Server.

Check Logs

cat /opt/IBM/RationalRLKS/logs/lmgrd.log 

14:57:52 (lmgrd) -----------------------------------------------
14:57:52 (lmgrd)   Please Note:
14:57:52 (lmgrd) 
14:57:52 (lmgrd)   This log is intended for debug purposes only.
14:57:52 (lmgrd)   In order to capture accurate license
14:57:52 (lmgrd)   usage data into an organized repository,
14:57:52 (lmgrd)   please enable report logging. Use Flexera Software, Inc.'s
14:57:52 (lmgrd)   software license administration  solution,
14:57:52 (lmgrd)   FLEXnet Manager, to  readily gain visibility
14:57:52 (lmgrd)   into license usage data and to create
14:57:52 (lmgrd)   insightful reports on critical information like
14:57:52 (lmgrd)   license availability and usage. FLEXnet Manager
14:57:52 (lmgrd)   can be fully automated to run these reports on
14:57:52 (lmgrd)   schedule and can be used to track license
14:57:52 (lmgrd)   servers and usage across a heterogeneous
14:57:52 (lmgrd)   network of servers including Windows NT, Linux
14:57:52 (lmgrd)   and UNIX. Contact Flexera Software, Inc. at
14:57:52 (lmgrd)   www.flexerasoftware.com for more details on how to
14:57:52 (lmgrd)   obtain an evaluation copy of FLEXnet Manager
14:57:52 (lmgrd)   for your enterprise.
14:57:52 (lmgrd) 
14:57:52 (lmgrd) -----------------------------------------------
14:57:52 (lmgrd) 
14:57:52 (lmgrd) 
14:57:52 (lmgrd) The license server manager (lmgrd) running as root:
14:57:52 (lmgrd)  This is a potential security problem
14:57:52 (lmgrd)  and is not recommended.
14:57:52 (lmgrd) FLEXnet Licensing (v11.10.0.3 build 96543 i86_lsb) started on rlks.uk.ibm.com (linux) (3/27/2018)
14:57:52 (lmgrd) Copyright (c) 1988-2011 Flexera Software, Inc. All Rights Reserved.
14:57:52 (lmgrd) US Patents 5,390,297 and 5,671,412.
14:57:52 (lmgrd) World Wide Web:  http://www.flexerasoftware.com
14:57:52 (lmgrd) License file(s): /opt/IBM/RationalRLKS/config/server_license.lic /opt/IBM/RationalRLKS/config/rational_server_temp.dat /opt/IBM/RationalRLKS/config/rational_server_perm.dat
14:57:52 (lmgrd) lmgrd tcp-port 27000
14:57:52 (lmgrd) Starting vendor daemons ... 
14:57:52 (lmgrd) Started ibmratl (internet tcp_port 41793 pid 29310)
14:57:52 (lmgrd) Started telelogic (internet tcp_port 43290 pid 29311)
14:57:52 (lmgrd) Started rational (internet tcp_port 40974 pid 29312)
14:57:52 (lmgrd) license daemon: execute process failed: (/opt/IBM/RationalRLKS/bin/telelogic) -T rlks.uk.ibm.com 11.10 3 -c /opt/IBM/RationalRLKS/config/server_license.lic:/opt/IBM/RationalRLKS/config/rational_server_temp.dat:/opt/IBM/RationalRLKS/config/rational_server_perm.dat
14:57:52 (lmgrd) license daemon: system error code: No such file or directory
14:57:52 (lmgrd) telelogic exited with status 45 (Child cannot exec requested server)
14:57:52 (lmgrd) Please correct problem and restart daemons
14:57:52 (lmgrd) license daemon: execute process failed: (/opt/IBM/RationalRLKS/bin/rational) -T rlks.uk.ibm.com 11.10 3 -c /opt/IBM/RationalRLKS/config/server_license.lic:/opt/IBM/RationalRLKS/config/rational_server_temp.dat:/opt/IBM/RationalRLKS/config/rational_server_perm.dat
14:57:52 (lmgrd) license daemon: system error code: No such file or directory
14:57:52 (lmgrd) rational exited with status 45 (Child cannot exec requested server)
14:57:52 (lmgrd) Please correct problem and restart daemons
14:57:52 (ibmratl) FLEXnet Licensing version v11.10.0.3 build 96543 i86_lsb
14:57:52 (ibmratl) Server started on rlks.uk.ibm.com for: IBMUCD_PVU
14:57:52 (ibmratl) EXTERNAL FILTERS are OFF
14:57:52 (lmgrd) ibmratl using TCP-port 41793
14:57:52 (ibmratl) Serving features for the following vendor names:
 ibmratl  rational  tlog_state  telelogic  tlog_rhaps  

Validate Listener

netstat -aon|grep 27000

tcp        0      0 192.168.153.132:56492   192.168.153.132:27000   TIME_WAIT   timewait (17.99/0/0)
tcp6       0      0 :::27000                :::*                    LISTEN      off (0.00/0/0)
tcp6       0      0 ::1:27000               ::1:43888               ESTABLISHED keepalive (7166.21/0/0)
tcp6       0      0 ::1:43888               ::1:27000               ESTABLISHED off (0.00/0/0)


Test connectivity from UCD to RLKS

telnet 192.168.153.132 27000

Trying 192.168.153.132...
telnet: connect to address 192.168.153.132: No route to host

Disable firewall on RLKS box

service firewalld stop

Redirecting to /bin/systemctl stop firewalld.service
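Stopping firewalld wholesale is fine for a lab box, but the narrower fix is to open just the FLEXnet ports. A sketch, with the caveat that lmgrd's port ( 27000 here ) isn't the whole story - the vendor daemons pick ephemeral ports unless you pin them with a PORT= clause on the VENDOR lines: -

```shell
# Open the lmgrd port rather than disabling the firewall entirely
# ( requires root and a running firewalld ).
firewall-cmd --permanent --add-port=27000/tcp
firewall-cmd --reload
```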

Test connectivity from UCD to RLKS

telnet 192.168.153.132 27000

Trying 192.168.153.132...
Connected to 192.168.153.132.
Escape character is '^]'.

Connection closed by foreign host.


Add RLKS information to UCD




And that's the deal :-)


IBM UrbanCode Deploy and WebSphere Liberty Profile - Now playing nicely

A few months back, I'd experienced an issue using UrbanCode Deploy (UCD) to download/deploy WebSphere Liberty Profile (WLP).

The TL;DR of the issue was that the UCD plugin provided by the WLP team to handle the installation failed to preserve the executable permissions of the WLP binaries.

In other words, the plugin would download the requisite WLP JAR/ZIP file to the target server ( upon which the UCD Agent is running ) and expand it into a directory such as /opt/ibm/WebSphere/Liberty/wlp, BUT not preserve the executable permissions.

This meant that one wasn't then able to actually use WLP without first changing the permissions ( via chmod +x ).
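Until the fix arrived, the workaround was exactly that chmod. A hedged sketch, assuming the install directory mentioned above: -

```shell
# Restore execute permission on the Liberty scripts after the plugin has
# expanded the ZIP; WLP_HOME is the directory the plugin unpacked into.
WLP_HOME=${WLP_HOME:-/opt/ibm/WebSphere/Liberty/wlp}
if [ -d "$WLP_HOME/bin" ]; then
  chmod -R +x "$WLP_HOME/bin"
fi
```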

Thankfully, this has now been fixed with the latest version ( 18 ) of the WLP Plugin: -


I wrote about this more fully here: -


Again, YAY!

For the record, this is the version that I had been using: -

17.943072

WebSphereLiberty-17.943072.zip

and this is what I have now: -

18.974811

Friday, 2 March 2018

IBM Cloud Private - Helm not happy again

This follows on from yesterday's thread: -


Today, after booting up my ICP VMs ( Master/Boot, Worker and Proxy ), I saw a series of errors in the GUI, all of which made me think that the Helm API wasn't happy again.

Digging into the Helm API Pod: -






So I dived onto the command-line: -

docker ps -a|grep -i helm-api

f9ee8a0e5d20        b72c1d4155b8                      "npm start"              2 minutes ago       Exited (0) About a minute ago                       k8s_helmapi_helm-api-5874f9d746-9qcjg_kube-system_7122aa29-1bcb-11e8-ab0b-000c290f4d7f_60
de1a7b2d1bf2        ibmcom/pause:3.0                  "/pause"                 2 hours ago         Up 2 hours                                          k8s_POD_helm-api-5874f9d746-9qcjg_kube-system_7122aa29-1bcb-11e8-ab0b-000c290f4d7f_8

which makes me think that the first container is having a bad day.

I bounced it: -

docker restart f9ee8a0e5d20

and watched the logs: -

docker logs f9ee8a0e5d20 -f

> helmApi@0.0.0 start /usr/src/app
> node ./bin/www

2018-03-02T11:15:31.070Z 'FINE' 'HELM_REPOS  helm_repos'
2018-03-02T11:15:31.074Z 'FINE' 'process.env.DBHOST https://cloudantdb:6984'
2018-03-02T11:15:31.075Z 'FINE' 'db_host https://cloudantdb:6984'
2018-03-02T11:15:31.318Z 'INFO' 'dbutils Authenticating with the database host \'https://cloudantdb:6984\'...'
D0302 11:15:31.459818780      15 env_linux.c:66]             Warning: insecure environment read function 'getenv' used
2018-03-02T11:15:31.503Z 'FINE' 'Tiller url  tiller-deploy.kube-system:44134'
2018-03-02T11:15:31.503Z 'FINE' 'eval ISICP true'
2018-03-02T11:15:31.507Z 'INFO' 'dbutils createMultipleDb(helm_repos)'
2018-03-02T11:15:31.509Z 'FINE' 'ABOUT TO START SYNCH'
2018-03-02T11:15:31.509Z 'FINE' 'startInitialSynch'
2018-03-02T11:15:33.509Z 'INFO' 'dbutils  getDbConnection(\'helm_repos\') Initializing connection to database \'helm_repos\''
2018-03-02T11:15:36.549Z 'WARN' 'dbutils Using db with no auth'
2018-03-02T11:15:36.550Z 'INFO' 'dbutilsPOST Headers using db with no auth'
2018-03-02T11:15:36.551Z 'INFO' 'dbutils Initial DB connections tested.'
2018-03-02T11:15:37.513Z 'INFO' 'dbutils createMultipleDb(dbNameArr, arrIndex)'
2018-03-02T11:15:37.513Z 'INFO' 'dbutils createDb helm_repos'
2018-03-02T11:15:42.536Z 'INFO' 'dbutils addViewsToDb'
2018-03-02T11:15:52.579Z 'INFO' 'dbutils createMultipleDb(dbNameArr, arrIndex)'

> helmApi@0.0.0 start /usr/src/app
> node ./bin/www

2018-03-02T11:18:54.589Z 'FINE' 'HELM_REPOS  helm_repos'
2018-03-02T11:18:54.593Z 'FINE' 'process.env.DBHOST https://cloudantdb:6984'
2018-03-02T11:18:54.595Z 'FINE' 'db_host https://cloudantdb:6984'
2018-03-02T11:18:55.378Z 'INFO' 'dbutils Authenticating with the database host \'https://cloudantdb:6984\'...'
D0302 11:18:55.643869950      15 env_linux.c:66]             Warning: insecure environment read function 'getenv' used
2018-03-02T11:18:55.730Z 'FINE' 'Tiller url  tiller-deploy.kube-system:44134'
2018-03-02T11:18:55.730Z 'FINE' 'eval ISICP true'
2018-03-02T11:18:55.744Z 'INFO' 'dbutils createMultipleDb(helm_repos)'
2018-03-02T11:18:55.745Z 'FINE' 'ABOUT TO START SYNCH'
2018-03-02T11:18:55.745Z 'FINE' 'startInitialSynch'
2018-03-02T11:18:57.745Z 'INFO' 'dbutils  getDbConnection(\'helm_repos\') Initializing connection to database \'helm_repos\''
2018-03-02T11:19:00.809Z 'WARN' 'dbutils Using db with no auth'
2018-03-02T11:19:00.814Z 'INFO' 'dbutilsPOST Headers using db with no auth'
2018-03-02T11:19:00.815Z 'INFO' 'dbutils Initial DB connections tested.'
2018-03-02T11:19:01.745Z 'INFO' 'dbutils createMultipleDb(dbNameArr, arrIndex)'
2018-03-02T11:19:01.746Z 'INFO' 'dbutils createDb helm_repos'
2018-03-02T11:19:06.761Z 'INFO' 'dbutils addViewsToDb'
2018-03-02T11:19:16.781Z 'INFO' 'dbutils createMultipleDb(dbNameArr, arrIndex)'

but things don't appear to be any better: -

docker ps -a|grep -i helm-api

a6616b26719e        b72c1d4155b8                      "npm start"              2 minutes ago       Exited (0) About a minute ago                       k8s_helmapi_helm-api-5874f9d746-9qcjg_kube-system_7122aa29-1bcb-11e8-ab0b-000c290f4d7f_61
de1a7b2d1bf2        ibmcom/pause:3.0                  "/pause"                 2 hours ago         Up 2 hours                                          k8s_POD_helm-api-5874f9d746-9qcjg_kube-system_7122aa29-1bcb-11e8-ab0b-000c290f4d7f_8

( Note that the container ID has changed )

I checked the logs for this new container: -

docker logs a6616b26719e -f

> helmApi@0.0.0 start /usr/src/app
> node ./bin/www

2018-03-02T11:19:24.518Z 'FINE' 'HELM_REPOS  helm_repos'
2018-03-02T11:19:24.529Z 'FINE' 'process.env.DBHOST https://cloudantdb:6984'
2018-03-02T11:19:24.530Z 'FINE' 'db_host https://cloudantdb:6984'
2018-03-02T11:19:24.981Z 'INFO' 'dbutils Authenticating with the database host \'https://cloudantdb:6984\'...'
D0302 11:19:25.206140919      15 env_linux.c:66]             Warning: insecure environment read function 'getenv' used
2018-03-02T11:19:25.287Z 'FINE' 'Tiller url  tiller-deploy.kube-system:44134'
2018-03-02T11:19:25.287Z 'FINE' 'eval ISICP true'
2018-03-02T11:19:25.291Z 'INFO' 'dbutils createMultipleDb(helm_repos)'
2018-03-02T11:19:25.292Z 'FINE' 'ABOUT TO START SYNCH'
2018-03-02T11:19:25.293Z 'FINE' 'startInitialSynch'
2018-03-02T11:19:27.294Z 'INFO' 'dbutils  getDbConnection(\'helm_repos\') Initializing connection to database \'helm_repos\''
2018-03-02T11:19:30.322Z 'WARN' 'dbutils Using db with no auth'
2018-03-02T11:19:30.322Z 'INFO' 'dbutilsPOST Headers using db with no auth'
2018-03-02T11:19:30.323Z 'INFO' 'dbutils Initial DB connections tested.'
2018-03-02T11:19:31.296Z 'INFO' 'dbutils createMultipleDb(dbNameArr, arrIndex)'
2018-03-02T11:19:31.296Z 'INFO' 'dbutils createDb helm_repos'
2018-03-02T11:19:36.314Z 'INFO' 'dbutils addViewsToDb'
2018-03-02T11:19:46.335Z 'INFO' 'dbutils createMultipleDb(dbNameArr, arrIndex)'

so no obvious exception ...

And yet ...

I dug further, looking specifically at the Kubernetes pods in the kube-system namespace ( rather than the default namespace, where my "user" workloads reside ): -

kubectl get pods --namespace kube-system

NAME                                                      READY     STATUS             RESTARTS   AGE
auth-apikeys-fxlxs                                        1/1       Running            5          2d
auth-idp-wwhlw                                            3/3       Running            16         2d
auth-pap-2x8fs                                            1/1       Running            5          2d
auth-pdp-bpk7h                                            1/1       Running            5          2d
calico-node-amd64-ls5pg                                   2/2       Running            18         2d
calico-node-amd64-pk49d                                   2/2       Running            16         2d
calico-node-amd64-qwt42                                   2/2       Running            17         2d
calico-policy-controller-5997c6c956-m9nmd                 1/1       Running            6          2d
catalog-catalog-apiserver-6mgbq                           1/1       Running            17         2d
catalog-catalog-controller-manager-bd9f49c8c-f625b        1/1       Running            23         2d
catalog-ui-dgcq2                                          1/1       Running            6          2d
default-http-backend-8448fbc655-fnqv2                     1/1       Running            1          1d
elasticsearch-client-6c9fc8b5b6-h87wr                     2/2       Running            12         2d
elasticsearch-data-0                                      1/1       Running            0          2h
elasticsearch-master-667485dfc5-bppms                     1/1       Running            6          2d
filebeat-ds-amd64-2h55z                                   1/1       Running            7          2d
filebeat-ds-amd64-dz2dz                                   1/1       Running            7          2d
filebeat-ds-amd64-pnjzf                                   1/1       Running            7          2d
heapster-5fd94775d5-8tjb9                                 2/2       Running            13         2d
helm-api-5874f9d746-9qcjg                                 0/1       CrashLoopBackOff   66         2d
helmrepo-77dccffb66-9xwgd                                 0/1       Running            32         2d
icp-ds-0                                                  1/1       Running            6          2d
icp-router-86p4z                                          1/1       Running            25         2d
image-manager-0                                           2/2       Running            13         2d
k8s-etcd-192.168.1.100                                    1/1       Running            7          2d
k8s-mariadb-192.168.1.100                                 1/1       Running            7          2d
k8s-master-192.168.1.100                                  3/3       Running            22         2d
k8s-proxy-192.168.1.100                                   1/1       Running            7          2d
k8s-proxy-192.168.1.101                                   1/1       Running            8          2d
k8s-proxy-192.168.1.102                                   1/1       Running            7          2d
kube-dns-9494dc977-8tkmv                                  3/3       Running            20         2d
logstash-5ccb9849d6-z9ntw                                 1/1       Running            7          2d
metering-dm-8587b865b4-ng6rc                              1/1       Running            8          2d
metering-reader-amd64-wtk6s                               1/1       Running            9          2d
metering-reader-amd64-xp5h2                               1/1       Running            10         2d
metering-reader-amd64-z2s9j                               1/1       Running            11         2d
metering-server-748d8f8f5b-x57fs                          1/1       Running            7          2d
metering-ui-6c56c5778f-xnnfr                              1/1       Running            11         2d
monitoring-exporter-76b94fdd94-djr96                      1/1       Running            8          2d
monitoring-grafana-5c49f54dd-w498k                        2/2       Running            15         2d
monitoring-prometheus-77d4df9dd6-zqtt5                    3/3       Running            22         2d
monitoring-prometheus-alertmanager-564496655f-np9hn       3/3       Running            22         2d
monitoring-prometheus-kubestatemetrics-776b5dcb86-r6jmg   1/1       Running            8          2d
monitoring-prometheus-nodeexporter-amd64-9vhn6            1/1       Running            9          2d
monitoring-prometheus-nodeexporter-amd64-lr8x5            1/1       Running            7          2d
monitoring-prometheus-nodeexporter-amd64-pbzk8            1/1       Running            8          2d
nginx-ingress-lb-amd64-8jjtz                              1/1       Running            15         2d
platform-api-bkgs8                                        1/1       Running            6          2d
platform-ui-qmtg7                                         1/1       Running            12         2d
rescheduler-xn7jf                                         1/1       Running            6          2d
tiller-deploy-55fb4d8dcc-2b75v                            1/1       Running            7          2d
unified-router-np97k                                      1/1       Running            13         2d
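With a pod list this long, it's easier to filter for the unhealthy ones than to eyeball the whole table. A small sketch ( standard kubectl plus awk, nothing ICP-specific ): -

```shell
# Show only pods that are not fully Ready or not in the Running state.
# The READY column is "n/m"; split it and compare the two halves.
# Guarded so it is a no-op on a box without kubectl.
if command -v kubectl >/dev/null 2>&1; then
  kubectl get pods --namespace kube-system --no-headers |
    awk '{ split($2, r, "/"); if (r[1] != r[2] || $3 != "Running") print }'
fi
```

Against the table above, this would surface just helm-api ( CrashLoopBackOff ) and helmrepo ( 0/1 Ready ).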


and then looked at the logs for that particular offending pod: -

kubectl logs helm-api-5874f9d746-9qcjg -f --namespace kube-system

2018-03-02T11:37:29.661Z 'FINE' 'GET /healthcheck'
2018-03-02T11:37:29.662Z 'FINE' 'dbHealthcheck \nrepoName: ibm-charts\n'
2018-03-02T11:37:29.908Z 'FINE' 'getMessage ["statusCode",200] en '
2018-03-02T11:37:29.914Z 'FINE' 'loadMessages en'
GET /healthcheck 200 252.498 ms - 16
2018-03-02T11:37:36.088Z 'FINE' 'GET /healthcheck'
2018-03-02T11:37:36.089Z 'FINE' 'getMessage ["statusCode",200] en '
2018-03-02T11:37:36.089Z 'FINE' 'loadMessages en'
GET /healthcheck 200 1.314 ms - 16
2018-03-02T11:37:46.088Z 'FINE' 'GET /healthcheck'
2018-03-02T11:37:46.089Z 'FINE' 'getMessage ["statusCode",200] en '
2018-03-02T11:37:46.089Z 'FINE' 'loadMessages en'
GET /healthcheck 200 0.439 ms - 16
2018-03-02T11:37:56.088Z 'FINE' 'GET /healthcheck'
2018-03-02T11:37:56.089Z 'FINE' 'getMessage ["statusCode",200] en '
2018-03-02T11:37:56.090Z 'FINE' 'loadMessages en'
GET /healthcheck 200 3.081 ms - 16
2018-03-02T11:37:58.656Z 'FINE' 'GET /healthcheck'
2018-03-02T11:37:58.656Z 'FINE' 'dbHealthcheck \nrepoName: ibm-charts\n'
2018-03-02T11:37:58.669Z 'FINE' 'getMessage ["statusCode",200] en '
2018-03-02T11:37:58.670Z 'FINE' 'loadMessages en'
GET /healthcheck 200 14.629 ms - 16
2018-03-02T11:38:06.107Z 'FINE' 'GET /healthcheck'
2018-03-02T11:38:06.107Z 'FINE' 'getMessage ["statusCode",200] en '
2018-03-02T11:38:06.108Z 'FINE' 'loadMessages en'
GET /healthcheck 200 0.393 ms - 16


However, after a bit of time ( and patience ), we have this: -

kubectl get pods --namespace kube-system
 
NAME                                                      READY     STATUS    RESTARTS   AGE
auth-apikeys-fxlxs                                        1/1       Running   5          2d
auth-idp-wwhlw                                            3/3       Running   16         2d
auth-pap-2x8fs                                            1/1       Running   5          2d
auth-pdp-bpk7h                                            1/1       Running   5          2d
calico-node-amd64-ls5pg                                   2/2       Running   18         2d
calico-node-amd64-pk49d                                   2/2       Running   16         2d
calico-node-amd64-qwt42                                   2/2       Running   17         2d
calico-policy-controller-5997c6c956-m9nmd                 1/1       Running   6          2d
catalog-catalog-apiserver-6mgbq                           1/1       Running   17         2d
catalog-catalog-controller-manager-bd9f49c8c-f625b        1/1       Running   23         2d
catalog-ui-dgcq2                                          1/1       Running   6          2d
default-http-backend-8448fbc655-fnqv2                     1/1       Running   1          1d
elasticsearch-client-6c9fc8b5b6-h87wr                     2/2       Running   12         2d
elasticsearch-data-0                                      1/1       Running   0          2h
elasticsearch-master-667485dfc5-bppms                     1/1       Running   6          2d
filebeat-ds-amd64-2h55z                                   1/1       Running   7          2d
filebeat-ds-amd64-dz2dz                                   1/1       Running   7          2d
filebeat-ds-amd64-pnjzf                                   1/1       Running   7          2d
heapster-5fd94775d5-8tjb9                                 2/2       Running   13         2d
helm-api-5874f9d746-9qcjg                                 1/1       Running   67         2d
helmrepo-77dccffb66-9xwgd                                 1/1       Running   33         2d
icp-ds-0                                                  1/1       Running   6          2d
icp-router-86p4z                                          1/1       Running   25         2d
image-manager-0                                           2/2       Running   13         2d
k8s-etcd-192.168.1.100                                    1/1       Running   7          2d
k8s-mariadb-192.168.1.100                                 1/1       Running   7          2d
k8s-master-192.168.1.100                                  3/3       Running   22         2d
k8s-proxy-192.168.1.100                                   1/1       Running   7          2d
k8s-proxy-192.168.1.101                                   1/1       Running   8          2d
k8s-proxy-192.168.1.102                                   1/1       Running   7          2d
kube-dns-9494dc977-8tkmv                                  3/3       Running   20         2d
logstash-5ccb9849d6-z9ntw                                 1/1       Running   7          2d
metering-dm-8587b865b4-ng6rc                              1/1       Running   8          2d
metering-reader-amd64-wtk6s                               1/1       Running   9          2d
metering-reader-amd64-xp5h2                               1/1       Running   10         2d
metering-reader-amd64-z2s9j                               1/1       Running   11         2d
metering-server-748d8f8f5b-x57fs                          1/1       Running   7          2d
metering-ui-6c56c5778f-xnnfr                              1/1       Running   11         2d
monitoring-exporter-76b94fdd94-djr96                      1/1       Running   8          2d
monitoring-grafana-5c49f54dd-w498k                        2/2       Running   15         2d
monitoring-prometheus-77d4df9dd6-zqtt5                    3/3       Running   22         2d
monitoring-prometheus-alertmanager-564496655f-np9hn       3/3       Running   22         2d
monitoring-prometheus-kubestatemetrics-776b5dcb86-r6jmg   1/1       Running   8          2d
monitoring-prometheus-nodeexporter-amd64-9vhn6            1/1       Running   9          2d
monitoring-prometheus-nodeexporter-amd64-lr8x5            1/1       Running   7          2d
monitoring-prometheus-nodeexporter-amd64-pbzk8            1/1       Running   8          2d
nginx-ingress-lb-amd64-8jjtz                              1/1       Running   15         2d
platform-api-bkgs8                                        1/1       Running   6          2d
platform-ui-qmtg7                                         1/1       Running   12         2d
rescheduler-xn7jf                                         1/1       Running   6          2d
tiller-deploy-55fb4d8dcc-2b75v                            1/1       Running   7          2d
unified-router-np97k                                      1/1       Running   13         2d




I still have one unhealthy deployment: -


but that's my DataPower pod: -

kubectl get pods --namespace default

NAME                                                READY     STATUS             RESTARTS   AGE
davesdatapower-ibm-datapower-dev-57f6cf4c95-bl7sk   0/1       CrashLoopBackOff   39         20h
virtuous-joey-ibm-open-l-6db4566d6d-zbgf9           1/1       Running            1          17h
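Rather than eyeballing the STATUS column, the unhealthy pod can be picked out programmatically. A minimal sketch, run here over two captured lines from the listing above (against a live cluster you would pipe real `kubectl get pods --no-headers` output instead):

```shell
# Print the name and status of any pod that is not Running.
# Live equivalent (needs a cluster):
#   kubectl get pods --no-headers | awk '$3 != "Running" {print $1, $3}'
sample='davesdatapower-ibm-datapower-dev-57f6cf4c95-bl7sk   0/1   CrashLoopBackOff   39   20h
virtuous-joey-ibm-open-l-6db4566d6d-zbgf9           1/1   Running            1    17h'
printf '%s\n' "$sample" | awk '$3 != "Running" {print $1, $3}'
```

which prints just the CrashLoopBackOff pod, leaving the healthy ones out of the way.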

kubectl logs davesdatapower-ibm-datapower-dev-57f6cf4c95-bl7sk -f

20180302T113711.926Z [0x8040006b][system][notice] logging target(default-log): Logging started.
20180302T113712.167Z [0x804000fb][system][error] : Incorrect number of CPUs. Expected minimum is 2, but have 1.
20180302T113712.167Z [0x804000fe][system][notice] : Container instance UUID: 807dada3-37e8-4dea-938d-443e351cd96e, Cores: 1, vCPUs: 1, CPU model: Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz, Memory: 7954.3MB, Platform: docker, OS: dpos, Edition: developers-limited, Up time: 0 minutes
20180302T113712.171Z [0x8040001c][system][notice] : DataPower IDG is on-line.
20180302T113712.172Z [0x8100006f][system][notice] : Executing default startup configuration.
20180302T113712.464Z [0x8100006d][system][notice] : Executing system configuration.
20180302T113712.465Z [0x8100006b][mgmt][notice] domain(default): tid(8175): Domain operational state is up.
davesdatapower-ibm-datapower-dev-57f6cf4c95-bl7sk
Unauthorized access prohibited.
20180302T113715.531Z [0x806000dd][system][notice] cert-monitor(Certificate Monitor): tid(399): Enabling Certificate Monitor to scan once every 1 days for soon to expire certificates
20180302T113716.388Z [0x8100006e][system][notice] : Executing startup configuration.
20180302T113716.409Z [0x8040009f][system][notice] throttle(Throttler): tid(1391): Disabling throttle.
20180302T113716.447Z [0x00350015][mgmt][notice] b2b-persistence(B2BPersistence): tid(111): Operational state down
20180302T113716.663Z [0x0034000d][mgmt][warn] ssh(SSH Service): tid(111): Object is disabled
20180302T113717.447Z [0x00350015][mgmt][notice] smtp-server-connection(default): tid(7055): Operational state down
login: 20180302T113717.447Z [0x00350014][mgmt][notice] smtp-server-connection(default): tid(7055): Operational state up
20180302T113717.546Z [0x0035008f][mgmt][notice] quota-enforcement-server(QuotaEnforcementServer): tid(687): Operational state down pending
20180302T113717.605Z [0x00350014][mgmt][notice] web-mgmt(WebGUI-Settings): tid(303): Operational state up
20180302T113717.671Z [0x8100006b][mgmt][notice] domain(webApplicationProxy): tid(29615): Domain operational state is up.
20180302T113717.683Z [0x8100001f][mgmt][notice] domain(webApplicationProxy): tid(29615): Created domain folder 'local:'
20180302T113717.683Z [0x8100001f][mgmt][notice] domain(webApplicationProxy): tid(29615): Created domain folder 'logtemp:'
20180302T113717.683Z [0x8100001f][mgmt][notice] domain(webApplicationProxy): tid(29615): Created domain folder 'logstore:'
20180302T113717.683Z [0x8100001f][mgmt][notice] domain(webApplicationProxy): tid(29615): Created domain folder 'temporary:'
20180302T113717.684Z [0x8100001f][mgmt][notice] domain(webApplicationProxy): tid(29615): Created domain folder 'export:'
20180302T113717.684Z [0x8100001f][mgmt][notice] domain(webApplicationProxy): tid(29615): Created domain folder 'chkpoints:'
20180302T113717.684Z [0x8100001f][mgmt][notice] domain(webApplicationProxy): tid(29615): Created domain folder 'policyframework:'
20180302T113717.684Z [0x8100001f][mgmt][notice] domain(webApplicationProxy): tid(29615): Created domain folder 'dpnfsstatic:'
20180302T113717.684Z [0x8100001f][mgmt][notice] domain(webApplicationProxy): tid(29615): Created domain folder 'dpnfsauto:'
20180302T113717.684Z [0x8100001f][mgmt][notice] domain(webApplicationProxy): tid(29615): Created domain folder 'ftp-response:'
20180302T113717.684Z [0x8100001f][mgmt][notice] domain(webApplicationProxy): tid(29615): Created domain folder 'xm70store:'
20180302T113717.712Z [0x8100003b][mgmt][notice] domain(default): Domain configured successfully.
20180302T113718.795Z [webApplicationProxy][0x8040006b][system][notice] logging target(default-log): tid(111): Logging started.
20180302T113721.374Z [webApplicationProxy][0x00330019][mgmt][error] source-https(webApplicationProxy_Web_HTTPS): tid(111): Operation state transition to up failed
20180302T113721.387Z [webApplicationProxy][0x00350015][mgmt][notice] smtp-server-connection(default): tid(35151): Operational state down
20180302T113721.387Z [webApplicationProxy][0x00350014][mgmt][notice] smtp-server-connection(default): tid(35151): Operational state up
20180302T113721.476Z [webApplicationProxy][0x00350016][mgmt][notice] source-https(webApplicationProxy_Web_HTTPS): tid(111): Service installed on port
20180302T113721.476Z [webApplicationProxy][0x00350014][mgmt][notice] source-https(webApplicationProxy_Web_HTTPS): tid(111): Operational state up
20180302T113721.476Z [webApplicationProxy][0x00350014][mgmt][notice] mpgw(webApplicationProxy): tid(111): Operational state up
20180302T113721.531Z [0x8100003b][mgmt][notice] domain(webApplicationProxy): Domain configured successfully.
20180302T113800.626Z [0x80e0047a][system][error] : tid(176): DataPower QuotaEnforcement task is not responding, restart in progress
20180302T113804.626Z [0x00350014][mgmt][notice] quota-enforcement-server(QuotaEnforcementServer): tid(687): Operational state up
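The [error] entries are the lines worth reading in that wall of output, notably the complaint that the Developer edition expects a minimum of 2 CPUs but found only 1. In a busy log they can be filtered out directly; a sketch, run here over two captured lines from the output above:

```shell
# Keep only error-level entries from a DataPower log.
# Live equivalent (needs the pod to exist):
#   kubectl logs davesdatapower-ibm-datapower-dev-57f6cf4c95-bl7sk | grep '\[error\]'
log='20180302T113712.167Z [0x804000fb][system][error] : Incorrect number of CPUs. Expected minimum is 2, but have 1.
20180302T113712.171Z [0x8040001c][system][notice] : DataPower IDG is on-line.'
printf '%s\n' "$log" | grep '\[error\]'
```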


This pod was deployed using a Helm chart: -

helm list

NAME           REVISION UPDATED                  STATUS   CHART                   NAMESPACE
davesdatapower 1        Thu Mar  1 14:54:21 2018 DEPLOYED ibm-datapower-dev-1.0.4 default  
virtuous-joey  1        Thu Mar  1 18:03:30 2018 DEPLOYED ibm-open-liberty-1.0.0  default  

so I removed the release: -

helm delete --purge davesdatapower

release "davesdatapower" deleted

helm list

NAME          REVISION UPDATED                  STATUS   CHART                  NAMESPACE
virtuous-joey 1        Thu Mar  1 18:03:30 2018 DEPLOYED ibm-open-liberty-1.0.0 default 

and now all looks good: -


kubectl get pods --namespace default

NAME                                        READY     STATUS    RESTARTS   AGE
virtuous-joey-ibm-open-l-6db4566d6d-zbgf9   1/1       Running   1          17h


I've since redeployed my DataPower pod: -

helm install --name davesdatapower -f dp.yaml ibm-charts/ibm-datapower-dev

NAME:   davesdatapower
LAST DEPLOYED: Fri Mar  2 12:07:41 2018
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Service
NAME                              CLUSTER-IP  EXTERNAL-IP  PORT(S)         AGE
davesdatapower-ibm-datapower-dev  10.0.0.96   <nodes>      8443:31954/TCP  1s

==> v1beta1/Deployment
NAME                              DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
davesdatapower-ibm-datapower-dev  1        1        1           0          1s

==> v1/Secret
NAME                                     TYPE    DATA  AGE
davesdatapower-ibm-datapower-dev-secret  Opaque  2     1s

==> v1/ConfigMap
NAME                                     DATA  AGE
davesdatapower-ibm-datapower-dev-config  3     1s


NOTES:

1. Get the application URL by running these commands:
      export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services davesdatapower-ibm-datapower-dev)
      export NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")
      echo https://$NODE_IP:$NODE_PORT

and, as suggested, grabbed the endpoint URL: -

export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services davesdatapower-ibm-datapower-dev)

export NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")
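Those two variables combine into the endpoint URL from the chart's NOTES. A minimal sketch with illustrative values standing in for the live kubectl lookups (the NodePort is the one shown in the service listing above, and the IP is one of this cluster's nodes):

```shell
# Compose and print the DataPower endpoint URL; in practice NODE_IP
# and NODE_PORT come from the two kubectl queries above.
NODE_IP=192.168.1.100
NODE_PORT=31954
URL="https://${NODE_IP}:${NODE_PORT}"
echo "$URL"
# Against the live endpoint:
#   curl -k -s -o /dev/null -w '%{http_code}\n' "$URL"
```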

and checked that the pod containing the DataPower instance was up and running: -

kubectl get pods

NAME                                                READY     STATUS    RESTARTS   AGE
davesdatapower-ibm-datapower-dev-57f6cf4c95-9dqwp   1/1       Running   0          2m
virtuous-joey-ibm-open-l-6db4566d6d-zbgf9           1/1       Running   1          18h

and, finally, hit the DP endpoint: -



For the record, DP is acting as a Web Application Proxy (WAP) against IBM.COM :-)

I'm following this recipe: -


on IBM developerWorks.


Thursday, 1 March 2018

IBM Cloud Private - Helm via the GUI - Not playing nicely

I had an interesting glitch with Helm on IBM Cloud Private earlier.

For some reason, the Helm UI, accessible via the ICP Console: -


seemed to get out of sync with the actual state of the Helm repositories.

Whilst I could see multiple repositories via the CLI: -

helm repo list

NAME       URL                                                             
stable     https://kubernetes-charts.storage.googleapis.com                
local      http://127.0.0.1:8879/charts                                    
ibm-charts https://raw.githubusercontent.com/IBM/charts/master/repo/stable/

when I went into the Catalog ( Catalog -> Helm Charts ) there was nowt there: -


and the repositories didn't show up via Manage -> Helm Repositories : -


I also saw this: -


when I tried to add a new repository.

However, I was able to add a repo using the Helm command line: -
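For reference, re-adding a repository from the CLI looks something like this; the ibm-charts URL is the one listed by `helm repo list` above, and the commands are guarded so the snippet is a no-op on a box without a configured helm client:

```shell
# Re-register the IBM charts repository with the local helm client.
REPO_URL='https://raw.githubusercontent.com/IBM/charts/master/repo/stable/'
if command -v helm >/dev/null 2>&1; then
  helm repo add ibm-charts "$REPO_URL"
  helm repo update
fi
echo "ibm-charts -> $REPO_URL"
```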

Happily, I found a solution.

Assuming that Helm was running as one of a small number of Docker containers, I checked ( on the Master/Boot node ): -

docker ps -a|grep -i helm|grep Up

d7ba0075a9a3        2cb2b0c0ca02                      "npm start"              5 hours ago         Up 5 hours                                      k8s_helmrepo_helmrepo-77dccffb66-9xwgd_kube-system_71bdb2d6-1bcb-11e8-ab0b-000c290f4d7f_26
25a17810ee7b        b72c1d4155b8                      "npm start"              5 hours ago         Up 19 minutes                                   k8s_helmapi_helm-api-5874f9d746-9qcjg_kube-system_7122aa29-1bcb-11e8-ab0b-000c290f4d7f_25
10647a5b7925        ibmcom/pause:3.0                  "/pause"                 5 hours ago         Up 5 hours                                      k8s_POD_helmrepo-77dccffb66-9xwgd_kube-system_71bdb2d6-1bcb-11e8-ab0b-000c290f4d7f_6
e47f5153f715        ibmcom/pause:3.0                  "/pause"                 5 hours ago         Up 5 hours                                      k8s_POD_helm-api-5874f9d746-9qcjg_kube-system_7122aa29-1bcb-11e8-ab0b-000c290f4d7f_6


and chose to restart the helm-api container that was actually doing the work ( the one running "npm start" rather than /pause ): -

docker restart 25a17810ee7b

and monitored the logs: -

docker logs 25a17810ee7b -f

until I started seeing messages such as this: -

2018-03-01T17:11:01.426Z 'FINE' 'GET /healthcheck'
2018-03-01T17:11:01.428Z 'FINE' 'dbHealthcheck \nrepoName: ibm-charts\n'
2018-03-01T17:11:01.440Z 'FINE' 'getMessage ["statusCode",200] en '
2018-03-01T17:11:01.440Z 'FINE' 'loadMessages en'
GET /healthcheck 200 15.368 ms - 16
2018-03-01T17:11:10.117Z 'FINE' 'GET /healthcheck'
2018-03-01T17:11:10.118Z 'FINE' 'getMessage ["statusCode",200] en '
2018-03-01T17:11:10.118Z 'FINE' 'loadMessages en'
GET /healthcheck 200 0.902 ms - 16
2018-03-01T17:11:20.117Z 'FINE' 'GET /healthcheck'
2018-03-01T17:11:20.120Z 'FINE' 'getMessage ["statusCode",200] en '
2018-03-01T17:11:20.122Z 'FINE' 'loadMessages en'
GET /healthcheck 200 5.209 ms - 16
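The manual sequence above (find the container, restart it, watch the log) can be scripted so the container ID is extracted rather than copied by eye. A hedged sketch, run here over a captured line from the `docker ps` output above:

```shell
# Extract the helm-api container ID from `docker ps` text; on the
# live master node you would pipe real `docker ps` output instead.
sample='25a17810ee7b        b72c1d4155b8                      "npm start"              5 hours ago         Up 19 minutes                                   k8s_helmapi_helm-api-5874f9d746-9qcjg_kube-system_7122aa29-1bcb-11e8-ab0b-000c290f4d7f_25'
id=$(printf '%s\n' "$sample" | awk '/k8s_helmapi/ {print $1}')
echo "$id"
# On the master node:
#   docker restart "$id" && docker logs "$id" -f
```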


At that point, the repositories synced back up: -


and the Catalog caught up: -


So that's all good then.

The moral of the story?

IBM Cloud Private is a container orchestration/management solution BUILT UPON CONTAINERS!