Monday, 31 December 2018

Docker Secrets - And there's more ....

Hot on the heels of my last post: -

Shush, it's a secret .... 

having created my Docker secret: -

   kubectl create secret docker-registry <secret-name> --docker-username=<username> --docker-password=<password> --docker-email=<email> -n services

as per this: -

Creating Docker Store secret

I made a slight mistake ....

For the --docker-username parameter, I used my email address - with which I can log into the Docker Hub.

*BUT* this proved to be a bad idea.

Whilst building out my IBM Cloud Automation Manager (CAM) environment, I saw this: -

kubectl get -n services pods

NAME                                        READY     STATUS              RESTARTS   AGE
cam-bpd-cds-5f57588776-jnw8c                0/1       Init:ErrImagePull   0          58s
cam-bpd-mariadb-6b98577f65-g75mj            0/1       Pending             0          58s
cam-bpd-mds-69f5d6988c-tfdrh                0/1       Init:ErrImagePull   0          58s
cam-bpd-ui-6c86d7d6f7-wm22d                 0/1       Pending             0          57s
cam-broker-7b86c6cff5-wcwd9                 0/1       Init:ErrImagePull   0          56s
cam-iaas-7884798b9-ztvmg                    0/1       Init:ErrImagePull   0          56s
cam-mongo-55c5976cf5-p5xtb                  0/1       ErrImagePull        0          55s
cam-orchestration-664f9647d8-sgxv2          0/1       Init:ErrImagePull   0          54s
cam-portal-ui-858b7dfcbd-qxd4k              0/1       Init:ErrImagePull   0          53s
cam-provider-helm-84b6fd45c6-zt8mt          0/1       Init:ErrImagePull   0          52s
cam-provider-terraform-79794ff875-82k4b     0/1       Init:ErrImagePull   0          52s
cam-proxy-5d64b478d6-68whm                  0/1       Init:ErrImagePull   0          51s
cam-service-composer-api-64ff5c747c-ctq5g   0/1       Init:ErrImagePull   0          49s
cam-service-composer-ui-54799748fb-dkpfn    0/1       Init:ErrImagePull   0          48s
cam-tenant-api-84f9bc79d-lsgvp              0/1       Pending             0          48s
cam-ui-basic-7d9ffc5858-b7qkr               0/1       Pending             0          47s
cam-ui-connections-6cdb6cf45b-wcw69         0/1       Pending             0          46s
cam-ui-instances-54f85dfbd4-xx2zd           0/1       Pending             0          45s
cam-ui-templates-57b467554f-l6nmh           0/1       Pending             0          44s
redis-74b9dc6d48-l4g5d                      0/1       ErrImagePull        0          50s

and, when I drilled into one of the failing pods via the IBM Cloud Private (ICP) web UI, I saw this: -

Failed to pull image "store/ibmcorp/icam-busybox:3.1.0.0-x86_64": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/store/ibmcorp/icam-busybox/manifests/3.1.0.0-x86_64: unauthorized: incorrect username or password
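
For what it's worth, kubectl describe surfaces the same failure - e.g. for one of the failing pods above, the pull error shows up in the Events section at the foot of the output: -

kubectl describe pod -n services cam-mongo-55c5976cf5-p5xtb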

I validated the exception via the Docker CLI: -

docker login

Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: david_hay@uk.ibm.com
Password: 
Error response from daemon: Get https://registry-1.docker.io/v2/: unauthorized: incorrect username or password

I then re-read the documentation AND the prompt from docker login, and realised that I was using my email address RATHER than my username.

Once I used my username: -

docker login

Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: davidhay1969
Password: 
Login Succeeded
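
( For scripting, docker login can also read the password from stdin, which avoids both the prompt and any shell-quoting grief - a sketch, assuming the password is held in an environment variable: - )

echo "$DOCKER_PASSWORD" | docker login --username davidhay1969 --password-stdin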

Back in the ICP world, I deleted my secret: -

kubectl delete secret david-hay -n services

and recreated it using the username rather than the email address.
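
In other words, something like this ( password elided ): -

kubectl create secret docker-registry david-hay \
  --docker-username=davidhay1969 \
  --docker-password="<my-docker-hub-password>" \
  --docker-email=david_hay@uk.ibm.com \
  -n services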

I then killed the failing pod deployments: -

kubectl delete pod cam-broker-7b86c6cff5-wcwd9 -n services
kubectl delete pod cam-bpd-cds-5f57588776-jnw8c -n services
kubectl delete pod cam-bpd-mariadb-6b98577f65-g75mj -n services
kubectl delete pod cam-bpd-mds-69f5d6988c-tfdrh -n services
kubectl delete pod cam-bpd-ui-6c86d7d6f7-wm22d -n services
kubectl delete pod cam-iaas-7884798b9-ztvmg -n services
kubectl delete pod cam-log-rotation-1546259700-zgx5v -n services
kubectl delete pod cam-mongo-55c5976cf5-p5xtb -n services
kubectl delete pod cam-orchestration-664f9647d8-sgxv2 -n services
kubectl delete pod cam-portal-ui-858b7dfcbd-qxd4k -n services
kubectl delete pod cam-provider-helm-84b6fd45c6-zt8mt -n services
kubectl delete pod cam-provider-terraform-79794ff875-82k4b -n services
kubectl delete pod cam-proxy-5d64b478d6-68whm -n services
kubectl delete pod cam-service-composer-api-64ff5c747c-ctq5g -n services
kubectl delete pod cam-service-composer-ui-54799748fb-dkpfn -n services
kubectl delete pod cam-tenant-api-84f9bc79d-lsgvp -n services
kubectl delete pod cam-ui-basic-7d9ffc5858-b7qkr -n services
kubectl delete pod cam-ui-connections-6cdb6cf45b-wcw69 -n services
kubectl delete pod cam-ui-instances-54f85dfbd4-xx2zd -n services
kubectl delete pod cam-ui-templates-57b467554f-l6nmh -n services

which then "forced" the ICP / Kubernetes ReplicaSets to recreate the pods.
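
( With hindsight, I could've saved some typing; this one-liner - a sketch, rather than what I actually ran - zaps every pod in the namespace in one go: - )

kubectl delete pods -n services --all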

Now I see this: -

kubectl get -n services pods

NAME                                        READY     STATUS    RESTARTS   AGE
cam-bpd-cds-5f57588776-fzq92                1/1       Running   0          28m
cam-bpd-mariadb-6b98577f65-qj7kf            0/1       Pending   0          28m
cam-bpd-mds-69f5d6988c-z9m6q                1/1       Running   0          28m
cam-bpd-ui-6c86d7d6f7-kxdrc                 0/1       Pending   0          28m
cam-broker-7b86c6cff5-kxz9r                 0/1       Pending   0          31m
cam-iaas-7884798b9-bcnhd                    1/1       Running   0          28m
cam-log-rotation-1546263600-w62hj           0/1       Pending   0          16m
cam-mongo-55c5976cf5-q5b9s                  1/1       Running   0          28m
cam-orchestration-664f9647d8-r7qxv          1/1       Running   0          28m
cam-portal-ui-858b7dfcbd-wwshz              1/1       Running   0          27m
cam-provider-helm-84b6fd45c6-rb5cl          1/1       Running   0          27m
cam-provider-terraform-79794ff875-6hh64     0/1       Pending   0          27m
cam-proxy-5d64b478d6-kwpg5                  1/1       Running   0          27m
cam-service-composer-api-64ff5c747c-2d6v9   1/1       Running   0          27m
cam-service-composer-ui-54799748fb-424lg    1/1       Running   0          27m
cam-tenant-api-84f9bc79d-bpnf5              1/1       Running   0          26m
cam-ui-basic-7d9ffc5858-drrh8               0/1       Pending   0          26m
cam-ui-connections-6cdb6cf45b-465tk         0/1       Pending   0          26m
cam-ui-instances-54f85dfbd4-r7kfv           1/1       Running   0          25m
cam-ui-templates-57b467554f-wfmpc           0/1       Pending   0          25m
redis-74b9dc6d48-l4g5d                      1/1       Running   0          1h

which is much much better.

Every day, it's a school day !

Shush, it's a secret ....

Fiddling about with IBM Cloud Private (ICP) and IBM Cloud Automation Manager (CAM), one of the pre-requisites required me to "cache" my Docker Store credentials in a Kubernetes (K8S) secret: -

Creating Docker Store secret

The syntax is thus: -

   kubectl create secret docker-registry <secret-name> --docker-username=<username> --docker-password=<password> --docker-email=<email> -n services

So off I went ....

The first hurdle was that my Docker password has special characters, including an ampersand ( & ), which broke the kubectl command; shells tend NOT to like ampersands in commands :-)

That was easily resolved - I just wrapped my password in double quotes ( " ), which resolved THAT particular issue.
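
In other words, something like this ( all values hypothetical ): -

kubectl create secret docker-registry <secret-name> --docker-username=<username> --docker-password="p455&w0rd" --docker-email=<email> -n services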

I was using a randomly generated secret name, for no particular reason: -

DitYPtiansUP

I then hit this: -

The Secret "DitYPtiansUP" is invalid: metadata.name: Invalid value: "DitYPtiansUP": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')

which didn't really help ....

I dug into the K8S documentation, and then looked at the existing secrets on my ICP cluster: -

kubectl get secrets

NAME                  TYPE                                  DATA      AGE
default-token-rvscx   kubernetes.io/service-account-token   3         2d
infra-registry-key    kubernetes.io/dockerconfigjson        1         2d

which gave me a clue ...

It looks like the secret name needs to be formatted thusly: -

  • lower-case alphanumeric characters
  • separated with a hyphen ( - ) or full stop / period ( . )
  • starting and ending with an alphanumeric character

Therefore, I went for the path of least resistance, and used my name as my secret: -

david-hay

which did the job.
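
By way of illustration ( the middle example is invented ): -

DitYPtiansUP   # invalid - upper-case characters
david_hay      # invalid - underscores aren't permitted
david-hay      # valid - lower-case, hyphen-separated, starts/ends alphanumeric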

One other thing ....

This: -

kubectl get secrets

NAME                  TYPE                                  DATA      AGE
default-token-rvscx   kubernetes.io/service-account-token   3         2d
infra-registry-key    kubernetes.io/dockerconfigjson        1         2d

didn't show up my newly created secret, even though I knew it was there; I tried to create it again, and saw this: -

Error from server (AlreadyExists): secrets "david-hay" already exists

Thankfully, I realised where I was going wrong - it's all in the namespace ....

My newly created secret was placed in the services namespace, so I needed to look specifically at that: -

kubectl get secrets -n services

NAME                  TYPE                                  DATA      AGE
david-hay             kubernetes.io/dockerconfigjson        1         11m
default-token-nnz4v   kubernetes.io/service-account-token   3         2d
oauth-client-secret   Opaque                                2         2d
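
As a further sanity check, one can pull the secret back out and decode it - a sketch: -

kubectl get secret david-hay -n services -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d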

For the record, here's how to find the namespaces: -

kubectl get namespaces

NAME           STATUS    AGE
cert-manager   Active    2d
default        Active    2d
ibmcom         Active    2d
istio-system   Active    2d
kube-public    Active    2d
kube-system    Active    2d
platform       Active    2d
services       Active    2d

I could've done this: -

kubectl get secrets --all-namespaces=true

...
NAMESPACE      NAME                                                        TYPE                                  DATA      AGE
cert-manager   default-token-rvscx                                         kubernetes.io/service-account-token   3         2d
cert-manager   infra-registry-key                                          kubernetes.io/dockerconfigjson        1         2d
default        default-token-kj5xp                                         kubernetes.io/service-account-token   3         2d
ibmcom         default-token-5vhkl                                         kubernetes.io/service-account-token   3         2d
ibmcom         infra-registry-key                                          kubernetes.io/dockerconfigjson        1         2d
ibmcom         sa-ibmcom                                                   kubernetes.io/dockerconfigjson        1         2d
...
services       david-hay                                                   kubernetes.io/dockerconfigjson        1         16m
services       default-token-nnz4v                                         kubernetes.io/service-account-token   3         2d
services       oauth-client-secret                                         Opaque                                2         2d
...

Friday, 28 December 2018

CWOAU0062E and IBM Cloud Private authentication

Hmmm, I saw this: -

CWOAU0062E: The OAuth service provider could not redirect the request because the redirect URI was not valid. Contact your system administrator to resolve the problem.

whilst trying to log into my IBM Cloud Private (ICP) 3.1.1 cluster, using its host/service name: -

https://dmhayicp-boot.fyre.ibm.com:8443

whereas it works OK using the IP address: -

https://9.20.194.53:8443

Using this: -

Accessing your IBM® Cloud Private cluster by using the management console

as reference, the suggestion is that one always use the IP address ( of the boot/management node ), but I wondered whether I could use the host/service name instead.

Reading up, I *wondered* whether I had neglected to set this: -

cluster_access_ip

in the config.yaml when I first built the cluster.

However, that appears to have "gone away" with 3.1.1, in that it's not mentioned in the KC here: -

Customizing the cluster with the config.yaml file

although it may have been replaced by: -

cluster_lb_address

...
In an environment that has multiple network interfaces (NICs), use cluster_lb_address to set a public or external IP address for the management services in your cluster. You can specify a fully-qualified domain name instead of the IP address.

This public address is assigned to the master node, used to access the console, and also used to configure kubectl.

In an HA environment, cluster_lb_address masks the cluster_vip as the leading master IP.
...

Ah well, some more digging .....


*UPDATE*

Darn, I'm good ....

I updated my config.yaml file, adding: -

cluster_lb_address: dmhayicp-boot.fyre.ibm.com

and dropped the cluster: -

docker run --net=host -t -e LICENSE=accept -v "$(pwd)":/installer/cluster ibmcom/icp-inception-amd64:3.1.1-ee uninstall

and rebuilt the same: -

docker run --net=host -t -e LICENSE=accept -v "$(pwd)":/installer/cluster ibmcom/icp-inception-amd64:3.1.1-ee install

and now we are in like the veritable Flynn.

I can access my cluster via this URL: -

https://dmhayicp-boot.fyre.ibm.com:8443/console/

as well as via the IP address: -

https://9.20.194.53:8443/console/

Obviously this is somewhat disruptive AND destructive.

Therefore, please please please use caution before trying this in YOUR environment.

Remember, folks, YMMV and Caveat Emptor :-)

Tuesday, 18 December 2018

DB2 on Windows - More fun with uninstallation

Following my earlier post: -

Fun and Games with IBM DB2 on Windows

using a different Windows 2012 R2 box, which is NOT and has NEVER been an Active Directory domain controller, I was again trying and failing to uninstall DB2.

No matter what I tried, including: -

msiexec.exe /i {5F3AC8C5-2EB8-4443-AC5D-D4AA4BD5BC21} /qb /norestart REMOVE=ALL 

where I used the previous suggestion: -

wmic product get /format:csv > c:\temp\software.csv

to get the registry ID of the DB2 installation, or this: -

db2unins.bat -f

I couldn't remove DB2 - instead, the server just logged me out, leaving DB2 intact.

I did some digging into the syntax of the latter command, and found the -t tracing flag.

This time around, I did this: -

db2unins.bat -f -t c:\temp\foobar.trc

which resulted in this: -

Start to uninstall DB2 products......
Successfully killed all running DB2 processes.
Successfully removed DB2 environment variables.
Successfully cleaned the registry.
Finished manually uninstalling all the DB2 products.  Restart your computer.

and it worked !

One reboot later, back in the game ....

Easy when you know how - although rm -Rf is SO much easier !

Monday, 17 December 2018

Fun and Games with IBM DB2 on Windows

Long story, very short, I've been having some fun n' games with a DB2 11.1 installation on Windows Server 2012 R2.

I was struggling to install DB2 afresh even though I thought I'd uninstalled it earlier.

As ever, being a doofus, I'd also fiddled around with AD: -

  1. Installed DB2 11.1, plus fix pack
  2. Turned my Windows server into an Active Directory Domain Controller
  3. Tried/failed to uninstall DB2
  4. Turned my Windows server back into a standalone server
  5. Tried/failed to uninstall DB2

In one particular case, I managed to cause the server to crash n' burn simply by running the command: -

db2unins -f

which was singularly impressive.

However, I was still unable to install a fresh copy of DB2, even though I thought I'd uninstalled it .....

I fiddled and faffed around, and eventually got some more debug: -


C:\temp\UNIVERSAL\db2\windows>wininst.exe -e ese -t c:\db2log.trc -l c:\db2log.txt

The resulting log file was of interest: -

type c:\db2log.txt

=== Verbose logging started: 17/12/2018  16:05:36  Build type: SHIP UNICODE 5.00.9600.00  Calling process: C:\Windows\system32\MSIEXEC.EXE ===
MSI (c) (EC:20) [16:05:36:996]: Font created.  Charset: Req=0, Ret=0, Font: Req=MS Shell Dlg, Ret=MS Shell Dlg

MSI (c) (EC:20) [16:05:36:996]: Font created.  Charset: Req=0, Ret=0, Font: Req=MS Shell Dlg, Ret=MS Shell Dlg

MSI (c) (EC:D0) [16:05:36:996]: Resetting cached policy values
MSI (c) (EC:D0) [16:05:36:996]: Machine policy value 'Debug' is 0
MSI (c) (EC:D0) [16:05:36:996]: ******* RunEngine:
           ******* Product: C:\temp\UNIVER~1\db2\windows\DB2 Server.msi
           ******* Action:
           ******* CommandLine: **********
MSI (c) (EC:D0) [16:05:36:996]: Machine policy value 'TransformsSecure' is 1
MSI (c) (EC:D0) [16:05:36:996]: Machine policy value 'DisableUserInstalls' is 0
MSI (c) (EC:D0) [16:05:36:996]: Specified instance {0C2D546C-8629-44E4-9202-0D5FEA15FECF} via transform :ESE1033.mst;C:\temp\UNIVER~1\db2\windows\SERVER\1033.MST is already installed. MSINEWINSTANCE requires a new instance that is not installed.
MSI (c) (EC:D0) [16:05:36:996]: MainEngineThread is returning 1639
=== Verbose logging stopped: 17/12/2018  16:05:36 ===

This blog post: -


introduced me to the Windows Management Instrumentation Command-line ( WMIC ): -

wmic product get /format:csv > c:\temp\software.csv

which provided me with this: -

Node,AssignmentType,Caption,Description,HelpLink,HelpTelephone,IdentifyingNumber,InstallDate,InstallDate2,InstallLocation,InstallSource,InstallState,Language,LocalPackage,Name,PackageCache,PackageCode,PackageName,ProductID,RegCompany,RegOwner,SKUNumber,Transforms,URLInfoAbout,URLUpdateInfo,Vendor,Version,WordCount
WIN2012,1,Microsoft Visual C++ 2017 x86 Additional Runtime - 14.12.25810,Microsoft Visual C++ 2017 x86 Additional Runtime - 14.12.25810,http://go.microsoft.com/fwlink/?LinkId=133405,,{7FED75A1-600C-394B-8376-712E2A8861F2},20181213,,,C:\ProgramData\Package Cache\{7FED75A1-600C-394B-8376-712E2A8861F2}v14.12.25810\packages\vcRuntimeAdditional_x86\,5,1033,C:\Windows\Installer\a2987.msi,Microsoft Visual C++ 2017 x86 Additional Runtime - 14.12.25810,C:\Windows\Installer\a2987.msi,{93D97CAA-B398-4F6A-8898-9AAA305C718C},vc_runtimeAdditional_x86.msi,,,,,,,,Microsoft Corporation,14.12.25810,2
WIN2012,1,VMware Tools,VMware Tools,,,{748D3A12-9B82-4B08-A0FF-CFDE83612E87},20181213,,C:\Program Files\VMware\VMware Tools\,C:\Program Files\Common Files\VMware\InstallerCache\,5,1033,C:\Windows\Installer\a2992.msi,VMware Tools,C:\Windows\Installer\a2992.msi,{F8A2F64E-4E92-4740-A305-687ECF5B7653},{748D3A12-9B82-4B08-A0FF-CFDE83612E87}.msi,,,,,,,,VMware, Inc.,10.3.2.9925305,2
WIN2012,1,Microsoft Visual C++ 2017 x64 Additional Runtime - 14.12.25810,Microsoft Visual C++ 2017 x64 Additional Runtime - 14.12.25810,http://go.microsoft.com/fwlink/?LinkId=133405,,{2CD849A7-86A1-34A6-B8F9-D72F5B21A9AE},20181213,,,C:\ProgramData\Package Cache\{2CD849A7-86A1-34A6-B8F9-D72F5B21A9AE}v14.12.25810\packages\vcRuntimeAdditional_amd64\,5,1033,C:\Windows\Installer\a298f.msi,Microsoft Visual C++ 2017 x64 Additional Runtime - 14.12.25810,C:\Windows\Installer\a298f.msi,{6D0A1ACD-F1C9-464F-8C70-F10295482CBE},vc_runtimeAdditional_x64.msi,,,,,,,,Microsoft Corporation,14.12.25810,2
WIN2012,1,Google Update Helper,Google Update Helper,,,{60EC980A-BDA2-4CB6-A427-B07A5498B4CA},20181217,,,C:\Program Files (x86)\Google\Update\1.3.33.17\,5,1033,C:\Windows\Installer\7761e6.msi,Google Update Helper,C:\Windows\Installer\7761e6.msi,{D91BA6B5-F113-423F-B5A8-A4E6EA34919E},GoogleUpdateHelper.msi,,,,,,,,Google Inc.,1.3.33.17,0
WIN2012,1,Microsoft Visual C++ 2017 x86 Minimum Runtime - 14.12.25810,Microsoft Visual C++ 2017 x86 Minimum Runtime - 14.12.25810,http://go.microsoft.com/fwlink/?LinkId=133405,,{828952EB-5572-3666-8CA9-000B6CE79350},20181213,,,C:\ProgramData\Package Cache\{828952EB-5572-3666-8CA9-000B6CE79350}v14.12.25810\packages\vcRuntimeMinimum_x86\,5,1033,C:\Windows\Installer\a2983.msi,Microsoft Visual C++ 2017 x86 Minimum Runtime - 14.12.25810,C:\Windows\Installer\a2983.msi,{F194F15D-77FE-4813-9B85-C2FA80E6E984},vc_runtimeMinimum_x86.msi,,,,,,,,Microsoft Corporation,14.12.25810,2
WIN2012,1,DB2 Server Edition - DBSKLMV30,DB2 Server Edition - DBSKLMV30,http://www.ibm.com/support/docview.wss?rs=71&uid=swg27009474,,{0C2D546C-8629-44E4-9202-0D5FEA15FECF},20181217,,C:\IBM\DB2SKLMV30\,C:\IBM\DB2SKL~1\awse\image\db2\Windows\,5,0,C:\Windows\Installer\66fa6e.msi,DB2 Server Edition - DBSKLMV30,C:\Windows\Installer\66fa6e.msi,{F463DB06-86CB-44C9-A09F-C75B3020B3AF},DB2 Server.msi,,,,,|:ESEinst0.mst;:ESE1033.mst;C:\IBM\DB2SKL~1\awse\image\db2\Windows\SERVER\1033.MST,http://www.software.ibm.com/db2,http://www.ibm.com/db2,IBM,11.1.2020.1393,0
WIN2012,1,Microsoft Visual C++ 2017 x64 Minimum Runtime - 14.12.25810,Microsoft Visual C++ 2017 x64 Minimum Runtime - 14.12.25810,http://go.microsoft.com/fwlink/?LinkId=133405,,{C99E2ADC-0347-336E-A603-F1992B09D582},20181213,,,C:\ProgramData\Package Cache\{C99E2ADC-0347-336E-A603-F1992B09D582}v14.12.25810\packages\vcRuntimeMinimum_amd64\,5,1033,C:\Windows\Installer\a298b.msi,Microsoft Visual C++ 2017 x64 Minimum Runtime - 14.12.25810,C:\Windows\Installer\a298b.msi,{1C423F21-E891-44F3-8FE9-E37D44470EF1},vc_runtimeMinimum_x64.msi,,,,,,,,Microsoft Corporation,14.12.25810,2

which proved that DB2 was still installed.

I dug around further, and then decided to retry my previous command: -

db2unins -f

which just worked, and I was then able to validate that DB2 was suitably removed, via the aforementioned wmic command.
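
( A more targeted wmic query saves wading through the whole CSV - again, a sketch, rather than what I actually ran: - )

wmic product where "Name like '%DB2%'" get Name,IdentifyingNumber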

So, the moral of the story .....

(a) Don't fiddle with things
(b) Products installed BEFORE promoting a Windows box to a domain controller MAY need to be removed AFTER reverting the box BACK to a standalone server

But I've learned more .....

.... which is nice :-) 

CTGKM9063E The Application Server Administrator Password field is empty

During a deployment of IBM Security Key Lifecycle Manager (SKLM), I'm going through a manual silent installation of the stack, which includes: -


  • IBM Installation Manager 1.8.5
  • IBM DB2 11.1
  • IBM Java 8
  • IBM WebSphere Application Server 9.0.0.5
  • IBM SKLM 3.0.0.2

Specifically, here's what I had available to install: -


C:\temp\disk1\im\tools\imcl.exe -version

Installation Manager (install kit)
Version: 1.8.5
Internal Version: 1.8.5000.20160506_1125
Architecture: 64-bit

No installed Installation Manager was detected.



C:\temp\disk1\im\tools\imcl.exe listAvailablePackages -repositories c:\temp\disk1\diskTag.inf,c:\temp\sklmFP\sklm\repository.config

com.ibm.sklm30.win_3.0.0.2
com.ibm.java.jdk.v8_8.0.5000.20170906_1611
com.ibm.sklm30.db2.win.ofng_11.1.2.2
com.ibm.sklm30.win_3.0.0.0
com.ibm.websphere.BASE.v90_9.0.5.20170918_1844


C:\temp\disk1\im\tools\imcl.exe listAvailablePackages -repositories c:\temp\disk1\diskTag.inf,c:\temp\sklmFP\sklm\repository.config -features

com.ibm.sklm30.win_3.0.0.2 : main.feature
com.ibm.java.jdk.v8_8.0.5000.20170906_1611 : com.ibm.sdk.8
com.ibm.sklm30.db2.win.ofng_11.1.2.2 : main.feature
com.ibm.sklm30.win_3.0.0.0 : main.feature
com.ibm.websphere.BASE.v90_9.0.5.20170918_1844 : core.feature,ejbdeploy,thinclient,embeddablecontainer,samples

Having populated the response file: -

\temp\disk1\SKLM_Silent_Win_Resp.xml

I started the installation: -

C:\temp\disk1\im\tools\imcl.exe -input \temp\disk1\SKLM_Silent_Win_Resp.xml -acceptLicense

which fairly quickly failed with: -

ERROR: The following errors were generated while installing.
  ERROR: CTGKM9063E The Application Server Administrator Password field is empty

It took me a while to work out what was going wrong - eventually I read this: -

...
  • To add the encrypted passwords to the relevant elements of the response file, use the IBM Installation Manager utility to create encrypted passwords.
    For information about how to encrypt the password, see Encrypted password for response file elements.
  • ...


    and then this: -


    which led me to encode ( encrypt !! ) the password: -

    C:\temp\disk1\im\tools\imcl.exe encryptString Qp455w0rd!

    9AvB2f9EhO8MtRhZbk/dXQ==

    and updated the response file accordingly: -

    ...
    ...
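
    ( The actual elements were eaten by the blog engine; purely for illustration, an Installation Manager response-file password element takes this general shape - the key name here is my guess, NOT lifted from the SKLM docs: - )

    <!-- Hypothetical key name; the value is the encrypted string from imcl encryptString above -->
    <data key='user.WAS_ADMIN_PASSWORD,com.ibm.sklm30.win' value='9AvB2f9EhO8MtRhZbk/dXQ=='/>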

    Once I did this, I was able to move forward with the installation .......

    Friday, 14 December 2018

    Consume and provide APIs with API Connect and Node.js

    From a friend: -


    When organizations need to expose functionality to the outside world, they can do so at a business-to-business level through collaboration or through a front-end service for customers. In the past, they would have used Service Oriented Architecture (SOA) practices to create web services which could be reused for each new business wishing to use the same functionality.

    Recently, organizations have looked to take advantage of the API economy which exposes the services through APIs. There are many key differences between services and APIs.

    IBM API Connect allows organizations to manage, run, and secure the APIs they provide. Its Developer Portal exposes the APIs for discovery, so consumers can subscribe to the APIs and call them with their own applications.

    This tutorial demonstrates both the API provider and the API consumer journey, using IBM API Connect as an API hub. We cover basic concepts, and show API providers how to use Node.js for an internal service, as well as how an API consumer can use Node.js in an application to call the API.

    Thursday, 13 December 2018

    SQL1022C There is not enough memory available to process the command.

    So this occurred again: -

    2018-12-13-20.21.11.332000+000 I64925F462           LEVEL: Event
    PID     : 1476                 TID : 1488           PROC : db2syscs.exe
    INSTANCE: DB2                  NODE : 000
    HOSTNAME: WIN2012
    EDUID   : 1488
    FUNCTION: DB2 UDB, base sys utilities, DB2StartMain, probe:5792
    MESSAGE : ZRC=0xFFFFFC02=-1022
              SQL1022C  There is not enough memory available to process the
              command.


    following a similar experience a few years back: -

    As with last time, this occurred after I switched a Windows server ( this time Windows 2012 R2 ) from a standalone server to an Active Directory domain controller.

    This time around, I think I fixed it more quickly ….

    It's all to do with Extended Security …

    Even though the actual hostname didn't change, I wondered whether that broke it …

    This is relevant: -

    Adding extended security after installation (db2extsec command)

    If the Db2 database system was installed without extended security enabled, you can enable it by executing the command db2extsec. To execute the db2extsec command you must be a member of the local Administrators group so that you have the authority to modify the ACL of the protected objects.

    You can run the db2extsec command multiple times, if necessary, however, if this is done, you cannot disable extended security unless you issue the db2extsec -r command immediately after each execution of db2extsec.

    Removing extended security

    CAUTION
    Do not remove extended security after it has been enabled unless absolutely necessary.


    You can remove extended security by running the command db2extsec -r, however, this will only succeed if no other database operations (such as creating a database, creating a new instance, adding table spaces, and so on) have been performed after enabling extended security. The safest way to remove the extended security option is to uninstall the Db2 database system, delete all the relevant Db2 directories (including the database directories) and then reinstall the Db2 database system without extended security enabled.


    Note the caution - for me, this is ONLY a demo/test box so it's less of a concern.
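
    For reference, the command's basic shape is thus - enable with the admin / user group names ( DB2ADMNS and DB2USERS being the Windows defaults ), or remove with -r: -

    db2extsec -a DB2ADMNS -u DB2USERS
    db2extsec -r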

    This also helped: -


    I did check the db2nodes.cfg as that's caused me no end of fun on Unix in the past: -

    but both were OK: -

    type "C:\ProgramData\IBM\DB2\DB2COPY1\DB2\db2nodes.cfg"

    0 WIN2012 WIN2012 0

    type "C:\Users\All Users\IBM\DB2\DB2COPY1\DB2\db2nodes.cfg"

    0 WIN2012 WIN2012 0

    which matched the hostname: -

    hostname

    WIN2012

    In the end, I removed Extended Security, and things worked OK even after a reboot: -

    db2extsec /r

    Obviously I do NOT NOT NOT recommend disabling security - if this happens to you, call IBM Support.

    Remember, YMMV and Caveat Emptor

    IBM Cloud Private 3.1.1 - Debugging an installation

    I’ve been tinkering with an ICP 3.1.1 deployment, and kept seeing the same exception at the end of the installation: -

    fatal: [localhost]: FAILED! => changed=true 
     attempts: 100
     cmd: kubectl -n kube-system get daemonset auth-pdp -o=custom-columns=A:.status.numberAvailable,B:.status.desiredNumberScheduled --no-headers=true | tr -s " " | awk '$1 == $2 {print "READY"}'
     delta: '0:00:01.308879'
     end: '2018-11-22 17:00:56.092611'
     rc: 0
     start: '2018-11-22 17:00:54.783732'
     stderr: ''
     stderr_lines:
     stdout: ''
     stdout_lines:

    along with loads of these: -

    FAILED - RETRYING: Waiting for auth-pdp to start (100 retries left).
    FAILED - RETRYING: Waiting for auth-pdp to start (99 retries left).
    FAILED - RETRYING: Waiting for auth-pdp to start (98 retries left).
    FAILED - RETRYING: Waiting for auth-pdp to start (97 retries left).
    FAILED - RETRYING: Waiting for auth-pdp to start (96 retries left).
    FAILED - RETRYING: Waiting for auth-pdp to start (95 retries left).
    FAILED - RETRYING: Waiting for auth-pdp to start (94 retries left).
    FAILED - RETRYING: Waiting for auth-pdp to start (93 retries left).
    FAILED - RETRYING: Waiting for auth-pdp to start (92 retries left).
    FAILED - RETRYING: Waiting for auth-pdp to start (91 retries left).
    FAILED - RETRYING: Waiting for auth-pdp to start (90 retries left).
    FAILED - RETRYING: Waiting for auth-pdp to start (89 retries left).
    FAILED - RETRYING: Waiting for auth-pdp to start (88 retries left).

    FAILED - RETRYING: Waiting for auth-pdp to start (9 retries left).
    FAILED - RETRYING: Waiting for auth-pdp to start (8 retries left).
    FAILED - RETRYING: Waiting for auth-pdp to start (7 retries left).
    FAILED - RETRYING: Waiting for auth-pdp to start (6 retries left).
    FAILED - RETRYING: Waiting for auth-pdp to start (5 retries left).
    FAILED - RETRYING: Waiting for auth-pdp to start (4 retries left).
    FAILED - RETRYING: Waiting for auth-pdp to start (3 retries left).
    FAILED - RETRYING: Waiting for auth-pdp to start (2 retries left).
    FAILED - RETRYING: Waiting for auth-pdp to start (1 retries left).

    ...
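
    For what it's worth, the check that the installer keeps retrying can be run by hand from the boot node; it only prints READY once the daemonset's available count matches its desired count: -

    kubectl -n kube-system get daemonset auth-pdp -o=custom-columns=A:.status.numberAvailable,B:.status.desiredNumberScheduled --no-headers=true | tr -s " " | awk '$1 == $2 {print "READY"}'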

    I suspected that I was hitting a resource constraint, in terms of CPU or RAM.

    Looking here: -



    My wonder team suggested a number of debugging commands, all of which I ran from the boot/master node: -

    docker ps -a | grep pdp

    - This showed NOTHING running

    kubectl get ds -n kube-system

    kubectl describe node 9.20.194.53
    kubectl describe ds auth-pdp -n kube-system

    the last of which threw this up: -

    FailedPlacement - Failed to place pod on 9.20.194.53: Node didn't have enough resource

    which did confirm that it WAS a resource constraint.
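
    ( The arithmetic behind that message is visible in the node's Allocated resources section - e.g.: - )

    kubectl describe node 9.20.194.53 | grep -A 6 "Allocated resources"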

    My ICP cluster has four nodes: -
    • Boot/Master
    • Management
    • Worker
    • Proxy
    as it’s just a test environment.

    The Boot/Master node ONLY had 2 CPU cores and 16 GB RAM.

    I dynamically increased the CPU cores from 2 to 8, which is the recommended minimum number, as per this: -


    and uninstalled: -

    cd /opt/ibm-cloud-private-3.1.1/cluster
    docker run --net=host -t -e LICENSE=accept -v "$(pwd)":/installer/cluster ibmcom/icp-inception-amd64:3.1.1-ee uninstall

    and then reinstalled: -

    docker run --net=host -t -e LICENSE=accept -v "$(pwd)":/installer/cluster ibmcom/icp-inception-amd64:3.1.1-ee install

    After an hour or so, this finished A-OK, with: -

    PLAY [Uploading images and charts of archive addons] ***************************

    TASK [archive-addon : include_tasks] *******************************************

    PLAY RECAP *********************************************************************
    9.20.194.53                : ok=157  changed=95   unreachable=0    failed=0   
    9.20.194.58                : ok=102  changed=57   unreachable=0    failed=0   
    9.20.194.61                : ok=167  changed=107  unreachable=0    failed=0   
    9.20.194.95                : ok=101  changed=56   unreachable=0    failed=0   
    localhost                  : ok=248  changed=155  unreachable=0    failed=0   


    POST DEPLOY MESSAGE ************************************************************

    The Dashboard URL: https://9.20.194.53:8443, default username/password is admin/admin

    Playbook run took 0 days, 0 hours, 54 minutes, 15 seconds

    For reference, the logs are located here: -

    ls -altrc /opt/ibm-cloud-private-3.1.1/cluster/logs

    total 368
    -rw-r--r-- 1 root root 180108 Nov 23 16:58 install.log.20181123141759
    -rw-r--r-- 1 root root  18130 Nov 26 14:34 uninstall.log.20181126143338
    drwxr-xr-x 3 root root    125 Nov 26 14:45 .
    drwxr-xr-x 2 root root     80 Nov 26 14:46 .detail
    drwxr-xr-x 9 root root    184 Nov 26 15:35 ..
    -rw-r--r-- 1 root root 175228 Nov 26 15:40 install.log.20181126144552

    Now that things are working, the debug commands are also looking good: -

    docker ps -a | grep pdp
    5474a1ae3020        5a7e7a8abb4b                          "bash -c ./startiam.…"   About an hour ago   Up About an hour                                     k8s_auth-pdp_auth-pdp-jlb6f_kube-system_fc825fe7-f18d-11e8-b6fe-00000914c235_0
    85dcec4493eb        769824455743                          "audit-entrypoint.sh"    About an hour ago   Up About an hour                                     k8s_icp-audit-service_auth-pdp-jlb6f_kube-system_fc825fe7-f18d-11e8-b6fe-00000914c235_0
    d79e794efbf1        493b365fcc13                          "sh -c 'until curl -…"   About an hour ago   Exited (0) About an hour ago                         k8s_init-pap_auth-pdp-jlb6f_kube-system_fc825fe7-f18d-11e8-b6fe-00000914c235_0
    00e86d7fcc18        493b365fcc13                          "sh -c 'until curl -…"   About an hour ago   Exited (0) About an hour ago                         k8s_init-token-service_auth-pdp-jlb6f_kube-system_fc825fe7-f18d-11e8-b6fe-00000914c235_0
    1295d90a2fca        493b365fcc13                          "sh -c 'until curl -…"   About an hour ago   Exited (0) About an hour ago                         k8s_init-identity-manager_auth-pdp-jlb6f_kube-system_fc825fe7-f18d-11e8-b6fe-00000914c235_0
    7609629cd307        493b365fcc13                          "sh -c 'until curl -…"   About an hour ago   Exited (0) About an hour ago                         k8s_init-identity-provider_auth-pdp-jlb6f_kube-system_fc825fe7-f18d-11e8-b6fe-00000914c235_0
    44523d1ef769        493b365fcc13                          "sh -c 'until curl -…"   About an hour ago   Exited (0) About an hour ago                         k8s_init-auth-service_auth-pdp-jlb6f_kube-system_fc825fe7-f18d-11e8-b6fe-00000914c235_0
    e45b13e73bed        mycluster.icp:8500/ibmcom/pause:3.1   "/pause"                 About an hour ago   Up About an hour                                     k8s_POD_auth-pdp-jlb6f_kube-system_fc825fe7-f18d-11e8-b6fe-00000914c235_0

    kubectl get ds -n kube-system

    NAME                                 DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
    audit-logging-fluentd-ds             4         4         4         4            4           <none>          40m
    auth-apikeys                         1         1         1         1            1           master=true     51m
    auth-idp                             1         1         1         1            1           master=true     51m
    auth-pap                             1         1         1         1            1           master=true     51m
    auth-pdp                             1         1         1         1            1           master=true     51m
    calico-node                          4         4         4         4            4           <none>          54m
    catalog-ui                           1         1         1         1            1           master=true     41m
    icp-management-ingress               1         1         1         1            1           master=true     51m
    kube-dns                             1         1         1         1            1           master=true     54m
    logging-elk-filebeat-ds              4         4         4         4            4           <none>          46m
    metering-reader                      4         4         4         4            4           <none>          40m
    monitoring-prometheus-nodeexporter   4         4         4         4            4           <none>          40m
    nginx-ingress-controller             1         1         1         1            1           proxy=true      53m
    nvidia-device-plugin                 4         4         4         4            4           <none>          53m
    platform-ui                          1         1         1         1            1           master=true     40m
    service-catalog-apiserver            1         1         1         1            1           master=true     53m
    unified-router                       1         1         1         1            1           master=true     40m

    kubectl describe ds auth-pdp -n kube-system

    Name:           auth-pdp
    Selector:       component=auth-pdp,k8s-app=auth-pdp,release=auth-pdp
    Node-Selector:  master=true
    Labels:         app=auth-pdp
                    chart=auth-pdp-3.1.1
                    component=auth-pdp
                    heritage=Tiller
                    release=auth-pdp
    Annotations:    
    Desired Number of Nodes Scheduled: 1
    Current Number of Nodes Scheduled: 1
    Number of Nodes Scheduled with Up-to-date Pods: 1
    Number of Nodes Scheduled with Available Pods: 1
    Number of Nodes Misscheduled: 0
    Pods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed
    Pod Template:
      Labels:       component=auth-pdp
                    k8s-app=auth-pdp
                    release=auth-pdp
      Annotations:  scheduler.alpha.kubernetes.io/critical-pod=
      Init Containers:
       init-auth-service:
        Image:      mycluster.icp:8500/ibmcom/icp-platform-auth:3.1.1
        Port:       
        Host Port:  
        Command:
          sh
          -c
          until curl -k -i -fsS https://platform-auth-service:9443/oidc/endpoint/OP/.well-known/openid-configuration | grep "200 OK"; do sleep 3; done;
        Environment:  
        Mounts:       
       init-identity-provider:
        Image:      mycluster.icp:8500/ibmcom/icp-platform-auth:3.1.1
        Port:       
        Host Port:  
        Command:
          sh
          -c
          until curl --cacert /certs/ca.crt -i -fsS https://platform-identity-provider:4300 | grep "200 OK"; do sleep 3; done;
        Environment:  
        Mounts:
          /certs from cluster-ca (rw)
       init-identity-manager:
        Image:      mycluster.icp:8500/ibmcom/icp-platform-auth:3.1.1
        Port:       
        Host Port:  
        Command:
          sh
          -c
          until curl --cacert /certs/ca.crt -i -fsS https://platform-identity-management:4500 | grep "200 OK"; do sleep 3; done;
        Environment:  
        Mounts:
          /certs from cluster-ca (rw)
       init-token-service:
        Image:      mycluster.icp:8500/ibmcom/icp-platform-auth:3.1.1
        Port:       
        Host Port:  
        Command:
          sh
          -c
          until curl -k -i -fsS https://iam-token-service:10443/oidc/keys | grep "200 OK"; do sleep 3; done;
        Environment:  
        Mounts:       
       init-pap:
        Image:      mycluster.icp:8500/ibmcom/icp-platform-auth:3.1.1
        Port:       
        Host Port:  
        Command:
          sh
          -c
          until curl --cacert /certs/ca.crt -i -fsS https://iam-pap:39001/v1/health | grep "200 OK"; do sleep 3; done;
        Environment:  
        Mounts:
          /certs from cluster-ca (rw)
      Containers:
       icp-audit-service:
        Image:      mycluster.icp:8500/ibmcom/icp-audit-service:3.1.1
        Port:       
        Host Port:  
        Limits:
          cpu:     200m
          memory:  512Mi
        Requests:
          cpu:     100m
          memory:  256Mi
        Environment:
          AUDIT_DIR:  /app/logs/audit
        Mounts:
          /app/logs/audit from shared (rw)
          /etc/logrotate.conf from logrotate-conf (rw)
          /etc/logrotate.d/audit from logrotate (rw)
          /run/systemd/journal from journal (rw)
       auth-pdp:
        Image:      mycluster.icp:8500/ibmcom/iam-policy-decision:3.1.1
        Port:       
        Host Port:  
        Requests:
          cpu:      500m
          memory:   512Mi
        Readiness:  http-get http://:7998/v1/health delay=0s timeout=1s period=10s #success=1 #failure=3
        Environment:
          DEFAULT_ADMIN_USER:        Optional: false
          AUDIT_ENABLED:                               Optional: false
          DEFAULT_ADMIN_PASSWORD:    Optional: false
          POD_NAME:                 (v1:metadata.name)
          POD_NAMESPACE:            (v1:metadata.namespace)
          CLUSTER_NAME:              Optional: false
          MONGO_DB:                platform-db
          MONGO_COLLECTION:        iam
          MONGO_USERNAME:                Optional: false
          MONGO_PASSWORD:            Optional: false
          MONGO_HOST:              mongodb
          MONGO_PORT:              27017
          MONGO_AUTHSOURCE:        admin
          CF_DB_NAME:              security-data
          DB_NAME:                 platform-db
          CAMS_PDP_URL:            http://iam-pdp:7998
          IAM_TOKEN_SERVICE_URL:   https://iam-token-service:10443
          IDENTITY_PROVIDER_URL:   https://platform-identity-provider:4300
          IAM_PAP_URL:             https://iam-pap:39001
          DEFAULT_TTL:               Optional: false
        Mounts:
          /app/logs/audit from shared (rw)
          /certs from cluster-ca (rw)
          /certs/mongodb-ca from mongodb-ca-cert (rw)
          /certs/mongodb-client from mongodb-client-cert (rw)
      Volumes:
       mongodb-ca-cert:
        Type:        Secret (a volume populated by a Secret)
        SecretName:  cluster-ca-cert
        Optional:    false
       cluster-ca:
        Type:        Secret (a volume populated by a Secret)
        SecretName:  cluster-ca-cert
        Optional:    false
       journal:
        Type:          HostPath (bare host directory volume)
        Path:          /run/systemd/journal
        HostPathType:  
       shared:
        Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
        Medium:  
       logrotate:
        Type:      ConfigMap (a volume populated by a ConfigMap)
        Name:      auth-pdp
        Optional:  false
       logrotate-conf:
        Type:      ConfigMap (a volume populated by a ConfigMap)
        Name:      auth-pdp
        Optional:  false
       mongodb-client-cert:
        Type:        Secret (a volume populated by a Secret)
        SecretName:  icp-mongodb-client-cert
        Optional:    false
    Events:
      Type    Reason            Age   From                  Message
      ----    ------            ----  ----                  -------
      Normal  SuccessfulCreate  52m   daemonset-controller  Created pod: auth-pdp-jlb6f

    kubectl describe node 9.20.194.53

    Name:               9.20.194.53
    Roles:              etcd,master
    Labels:             beta.kubernetes.io/arch=amd64
                        beta.kubernetes.io/os=linux
                        etcd=true
                        kubernetes.io/hostname=9.20.194.53
                        master=true
                        node-role.kubernetes.io/etcd=true
                        node-role.kubernetes.io/master=true
                        role=master
    Annotations:        node.alpha.kubernetes.io/ttl=0
                        volumes.kubernetes.io/controller-managed-attach-detach=true
    CreationTimestamp:  Mon, 26 Nov 2018 14:48:44 +0000
    Taints:             dedicated=infra:NoSchedule
    Unschedulable:      false
    Conditions:
      Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
      ----             ------  -----------------                 ------------------                ------                       -------
      OutOfDisk        False   Mon, 26 Nov 2018 16:07:34 +0000   Mon, 26 Nov 2018 14:48:44 +0000   KubeletHasSufficientDisk     kubelet has sufficient disk space available
      MemoryPressure   False   Mon, 26 Nov 2018 16:07:34 +0000   Mon, 26 Nov 2018 14:48:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
      DiskPressure     False   Mon, 26 Nov 2018 16:07:34 +0000   Mon, 26 Nov 2018 14:48:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
      PIDPressure      False   Mon, 26 Nov 2018 16:07:34 +0000   Mon, 26 Nov 2018 14:48:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
      Ready            True    Mon, 26 Nov 2018 16:07:34 +0000   Mon, 26 Nov 2018 15:11:48 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled
    Addresses:
      InternalIP:  9.20.194.53
      Hostname:    9.20.194.53
    Capacity:
     cpu:                8
     ephemeral-storage:  249436164Ki
     hugepages-1Gi:      0
     hugepages-2Mi:      0
     memory:             16424812Ki
     pods:               80
    Allocatable:
     cpu:                8
     ephemeral-storage:  249333764Ki
     hugepages-1Gi:      0
     hugepages-2Mi:      0
     memory:             16322412Ki
     pods:               80
    System Info:
     Machine ID:                 428e44fb1ec74efba5d4e3ca11fa2ac9
     System UUID:                9E82ABA4-CABA-4645-B285-409E35FDF986
     Boot ID:                    321104d1-77ca-48ee-af9d-8f8311a749a5
     Kernel Version:             4.15.0-38-generic
     OS Image:                   Ubuntu 18.04.1 LTS
     Operating System:           linux
     Architecture:               amd64
     Container Runtime Version:  docker://18.3.1
     Kubelet Version:            v1.11.3+icp-ee
     Kube-Proxy Version:         v1.11.3+icp-ee
    Non-terminated Pods:         (35 in total)
      Namespace                  Name                                                  CPU Requests  CPU Limits   Memory Requests  Memory Limits
      ---------                  ----                                                  ------------  ----------   ---------------  -------------
      cert-manager               ibm-cert-manager-cert-manager-7d656f5dd5-c7lqt        0 (0%)        0 (0%)       0 (0%)           0 (0%)
      kube-system                audit-logging-fluentd-ds-lsjs2                        0 (0%)        0 (0%)       0 (0%)           0 (0%)
      kube-system                auth-apikeys-qbg4k                                    200m (2%)     1 (12%)      300Mi (1%)       1Gi (6%)
      kube-system                auth-idp-rl8dj                                        300m (3%)     3200m (40%)  768Mi (4%)       3584Mi (22%)
      kube-system                auth-pap-s76p7                                        150m (1%)     1200m (15%)  456Mi (2%)       1536Mi (9%)
      kube-system                auth-pdp-jlb6f                                        600m (7%)     200m (2%)    768Mi (4%)       512Mi (3%)
      kube-system                calico-kube-controllers-d775694f-pzph9                250m (3%)     0 (0%)       100Mi (0%)       0 (0%)
      kube-system                calico-node-5xwb6                                     300m (3%)     0 (0%)       150Mi (0%)       0 (0%)
      kube-system                catalog-ui-vjj45                                      300m (3%)     300m (3%)    300Mi (1%)       300Mi (1%)
      kube-system                heapster-569fdfd65-ndvxh                              20m (0%)      0 (0%)       64Mi (0%)        0 (0%)
      kube-system                helm-api-6c9756484f-ql4vl                             350m (4%)     550m (6%)    556Mi (3%)       656Mi (4%)
      kube-system                helm-repo-5c8fcc8899-kd87g                            150m (1%)     200m (2%)    640Mi (4%)       640Mi (4%)
      kube-system                ibmcloud-image-enforcement-c558c6c95-xxfbx            128m (1%)     256m (3%)    128Mi (0%)       256Mi (1%)
      kube-system                icp-management-ingress-bdgt7                          200m (2%)     0 (0%)       256Mi (1%)       0 (0%)
      kube-system                icp-mongodb-0                                         0 (0%)        0 (0%)       0 (0%)           0 (0%)
      kube-system                image-manager-0                                       110m (1%)     0 (0%)       192Mi (1%)       0 (0%)
      kube-system                k8s-etcd-9.20.194.53                                  0 (0%)        0 (0%)       0 (0%)           0 (0%)
      kube-system                k8s-master-9.20.194.53                                5m (0%)       0 (0%)       10Mi (0%)        0 (0%)
      kube-system                k8s-proxy-9.20.194.53                                 0 (0%)        0 (0%)       0 (0%)           0 (0%)
      kube-system                kube-dns-n5qfk                                        100m (1%)     0 (0%)       70Mi (0%)        0 (0%)
      kube-system                logging-elk-filebeat-ds-z6xpm                         0 (0%)        0 (0%)       0 (0%)           0 (0%)
      kube-system                mariadb-0                                             500m (6%)     1 (12%)      128Mi (0%)       512Mi (3%)
      kube-system                metering-reader-vslkf                                 250m (3%)     0 (0%)       512Mi (3%)       0 (0%)
      kube-system                mgmt-repo-5cb9f9dc7b-thc28                            150m (1%)     200m (2%)    640Mi (4%)       640Mi (4%)
      kube-system                monitoring-prometheus-nodeexporter-xh787              0 (0%)        0 (0%)       0 (0%)           0 (0%)
      kube-system                nvidia-device-plugin-vm9m8                            150m (1%)     0 (0%)       0 (0%)           0 (0%)
      kube-system                platform-api-86dff555db-llbz2                         100m (1%)     100m (1%)    128Mi (0%)       512Mi (3%)
      kube-system                platform-deploy-749fc56fb7-cmjql                      100m (1%)     100m (1%)    128Mi (0%)       512Mi (3%)
      kube-system                platform-ui-qtvwm                                     300m (3%)     300m (3%)    256Mi (1%)       256Mi (1%)
      kube-system                secret-watcher-7994f75f9b-l4ffh                       0 (0%)        0 (0%)       0 (0%)           0 (0%)
      kube-system                service-catalog-apiserver-79nmn                       100m (1%)     100m (1%)    20Mi (0%)        200Mi (1%)
      kube-system                service-catalog-controller-manager-9c7bcf586-6kp2c    100m (1%)     100m (1%)    20Mi (0%)        200Mi (1%)
      kube-system                tiller-deploy-5677cc5dfb-m5k9h                        100m (1%)     0 (0%)       128Mi (0%)       0 (0%)
      kube-system                unified-router-zf4fs                                  20m (0%)      0 (0%)       64Mi (0%)        0 (0%)
      kube-system                web-terminal-55c549d48d-bn98q                         10m (0%)      100m (1%)    64Mi (0%)        512Mi (3%)
    Allocated resources:
      (Total limits may be over 100 percent, i.e., overcommitted.)
      Resource  Requests      Limits
      --------  --------      ------
      cpu       5043m (63%)   8906m (111%)
      memory    6846Mi (42%)  11852Mi (74%)
    Events:
      Type    Reason     Age   From                  Message
      ----    ------     ----  ----                  -------
      Normal  NodeReady  55m   kubelet, 9.20.194.53  Node 9.20.194.53 status is now: NodeReady
