Tuesday 29 January 2019

Helm and "Error: trying to send message larger than max (23014173 vs. 20971520)"

I've been seeing this: -

Error: trying to send message larger than max (23014173 vs. 20971520)

when running: -

helm list --tls

against my IBM Cloud Private 3.1.1 environment.

This had been working until I rebuilt my IBM Cloud Automation Manager (CAM) 3.1.0 environment last week - which MAY be coincidence :-)

I'm using Helm / Tiller v2.9.1, as per this: -

helm version --tls

Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}

This appears to be the subject of a huge number of issues on GitHub, but with no obvious fix.

The best ( ! ) solution is to simply limit the number of results that the command returns, using the --max XXX switch, as per this example: -

helm list --tls  --max 100

Through trial and error, I realised that a number of my Helm releases were FAILED, as per this: -

NAME                   REVISION UPDATED                  STATUS   CHART                            NAMESPACE
agile-tiger            1        Fri Jan 25 14:25:24 2019 FAILED   ibm-cam-3.1.0                    cert-manager
audit-logging          1        Tue Jan  1 17:53:11 2019 DEPLOYED audit-logging-3.1.1              kube-system
auth-apikeys           1        Tue Jan  1 17:44:00 2019 DEPLOYED auth-apikeys-3.1.1               kube-system
auth-idp               1        Tue Jan  1 17:43:52 2019 DEPLOYED auth-idp-3.1.1                   kube-system
auth-pap               1        Tue Jan  1 17:44:09 2019 DEPLOYED auth-pap-3.1.1                   kube-system
auth-pdp               1        Tue Jan  1 17:44:17 2019 DEPLOYED auth-pdp-3.1.1                   kube-system
broken-narwhal         1        Fri Jan 25 14:12:45 2019 FAILED   ibm-cam-3.1.0                    cert-manager
calico                 1        Tue Jan  1 17:39:56 2019 DEPLOYED calico-3.1.1                     kube-system
catalog-ui             1        Tue Jan  1 17:50:55 2019 DEPLOYED icp-catalog-chart-3.1.1          kube-system
cert-manager           1        Tue Jan  1 17:41:17 2019 DEPLOYED ibm-cert-manager-3.1.1           cert-manager
custom-metrics-adapter 1        Tue Jan  1 17:52:12 2019 DEPLOYED ibm-custom-metrics-adapter-3.1.1 kube-system
handy-hummingbird      1        Fri Jan 25 13:46:22 2019 FAILED   ibm-cam-3.1.0                    cert-manager
harping-abalone        1        Fri Jan 25 14:08:24 2019 FAILED   ibm-cam-3.1.0                    cert-manager
heapster               1        Tue Jan  1 17:50:18 2019 DEPLOYED heapster-3.1.1                   kube-system
helm-api               1        Tue Jan  1 17:51:06 2019 DEPLOYED helm-api-3.1.1                   kube-system

Working on the assumption (!) that this was at least part of the problem - i.e. that helm list was simply choking on the sheer number of releases - I did a little purging, ending up with a command like this: -

helm delete --purge `helm list --tls  --max 100|grep FAILED|awk '{print $1}'` --tls
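
( If you're cautious, it's worth previewing the hit-list before letting the back-ticks loose; this just prints the names of the FAILED releases that the delete above would purge: -

helm list --tls --max 100 | grep FAILED | awk '{print $1}' )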

Again, through trial and error, I ended up with NO failed releases and, even better, a working helm list --tls command :-)

I no longer seem to need to specify the --max switch ....

... which is nice

One other thing - the problem also seemed to affect the Workloads -> Helm Releases element of the ICP UI. Having got rid of the FAILED releases, that also now works.......

Monday 28 January 2019

App Connect Enterprise v11 for IBM Cloud Private on Red Hat OpenShift or natively on OpenShift

As recommended by a colleague today, in the context of IBM App Connect Enterprise in a containerised world: -

Docker, Kubernetes, and Helm work together to provide a platform for managing, packaging, and orchestrating containerized workloads. For IBM App Connect Enterprise this enables the packaging of an integration server into a standardized unit for deployment that can be promoted through a development pipeline then deployed, managed, and scaled. This blog will discuss how to run IBM App Connect Enterprise (ACE) on OpenShift with IBM Cloud Private (ICP) as well as running ACE natively on OpenShift.

App Connect Enterprise v11 for IBM Cloud Private on Red Hat OpenShift or natively on OpenShift

Friday 25 January 2019

IBM DB2 - Databases, Users, Schemas and Tables

I knocked this up for an IBM colleague, as a basic illustration of the difference between users and schemas.

Hope it's (a) right and (b) of some use: -

Connect to the DB as the DB administrator

db2 connect to sample

Create a table

- The schema is foobar

db2 create table foobar.snafu(name varchar(20))

DB20000I  The SQL command completed successfully.

Grant access to the table for a user

- The user is sklmdb31

db2 grant all on table foobar.snafu to user sklmdb31

DB20000I  The SQL command completed successfully.

Connect to the DB

db2 connect to sample user sklmdb31 using Qp455w0rd@

   Database Connection Information

 Database server        = DB2/NT64 11.1.4.4
 SQL authorization ID   = SKLMDB31
 Local database alias   = SAMPLE

Query the table

db2 select * from foobar.snafu

NAME
--------------------

  0 record(s) selected.

Insert a new row

db2 insert into foobar.snafu(name) values('Dave')

DB20000I  The SQL command completed successfully.

Query the table

db2 select * from foobar.snafu

NAME
--------------------
Dave

  1 record(s) selected.

Set the current schema

db2 set current schema foobar
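
With the current schema set, unqualified table names now resolve to the FOOBAR schema, so ( as a quick illustration, not part of the original session ) this should return the same row as before: -

db2 select * from snafu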

List the table

db2 list tables for schema foobar

Table/View                      Schema          Type  Creation time
------------------------------- --------------- ----- --------------------------
SNAFU                           FOOBAR          T     2019-01-25-11.42.03.697001

  1 record(s) selected.

Validate connection

db2 connect

   Database Connection Information

 Database server        = DB2/NT64 11.1.4.4
 SQL authorization ID   = SKLMDB31
 Local database alias   = SAMPLE

Terminate the connection

db2 terminate

Thursday 24 January 2019

IBM (Lotus) Notes and the borked bookmarks

Wow, this dusted off some braincells which I've not used in a while ....

Whilst setting up a new MacBook Pro, I realised that I'd forgotten to copy across the Notes database that contains my bookmarks - bookmark.nsf.

I copied this across from my SuperDuper backup: -

cp /Volumes/Untitled/Users/davidhay/Library/Application\ Support/IBM\ Notes\ Data/bookmark.nsf ~/Library/Application\ Support/IBM\ Notes\ Data

and started Notes.

Which immediately failed with a less-than-useful message.

So I force-quit Notes, and brought it up from the command-line ( via Terminal ) : -

/Applications/IBM\ Notes.app/Contents/MacOS/Notes -basic

which threw up the same error in the UI, but also gave me some more diags in the terminal: -

[27DE:0002-10BBE75C] **** DbMarkCorruptAgain(RRVReadPage: RRV container header was invalid), DB=/Users/hayd/Library/Application Support/IBM Notes Data/bookmark.nsf  TID=[27DE:0002-10BBE75C] File=dbrrv.c Line=2850 ***

I checked the permissions etc. of the file: -

ls -al ~/Library/Application\ Support/IBM\ Notes\ Data/bookmark.nsf 

which all looked as expected: -

-rw-r--r--@ 1 hayd  staff  30670848 24 Jan 16:38 /Users/hayd/Library/Application Support/IBM Notes Data/bookmark.nsf

so I tried compacting the file: -

export DYLD_LIBRARY_PATH="/Applications/IBM Notes.app/Contents/MacOS"
cd ~/Library/Application\ Support/IBM\ Notes\ Data
/Applications/IBM\ Notes.app/Contents/MacOS/Support/NotesCompact -c bookmark.nsf 

which borked with this: -

[2882:0002-10A78A5C] **** DbMarkCorruptAgain(RRVReadPage: RRV container header was invalid), DB=/Users/hayd/Library/Application Support/IBM Notes Data/bookmark.nsf  TID=[2882:0002-10A78A5C] File=dbrrv.c Line=2850 ***
[2882:0002-10A78A5C] 24/01/2019 16:50:09   Performing consistency check on bookmark.nsf... 
[2882:0002-10A78A5C] 24/01/2019 16:50:09   Completed consistency check on bookmark.nsf 
[2882:0002-10A78A5C] 24/01/2019 16:50:11   Database compactor error: RRV bucket is corrupt.

RRV bucket is corrupt.

so I tried fixing up the file: -

/Applications/IBM\ Notes.app/Contents/MacOS/Support/NotesFixup bookmark.nsf 

which borked again: -

[2883:0002-11564D5C] 24/01/2019 16:50:32   Database Fixup: Started:  bookmark.nsf
[2883:0002-11564D5C] **** DbMarkCorruptAgain(RRVReadPage: RRV container header was invalid), DB=/Users/hayd/Library/Application Support/IBM Notes Data/bookmark.nsf  TID=[2883:0002-11564D5C] File=dbrrv.c Line=2850 ***
[2883:0002-11564D5C] 24/01/2019 16:50:32   Performing consistency check on bookmark.nsf... 
[2883:0002-11564D5C] 24/01/2019 16:50:32   Completed consistency check on bookmark.nsf 
[2883:0002-11564D5C] 24/01/2019 16:50:32   Database Fixup: Unable to fixup database /hayd/Library/Application Support/IBM Notes Data/bookmark.nsf: RRV bucket is corrupt.
[2883:0002-11564D5C] 24/01/2019 16:50:32   Database Fixup: Shutdown

so I gave up and copied the file again: -

cp /Volumes/Untitled/Users/davidhay/Library/Application\ Support/IBM\ Notes\ Data/bookmark.nsf ~/Library/Application\ Support/IBM\ Notes\ Data

and Notes is now happy ...

The moral of the story ? If at first you don't succeed, try try again ( thanks Mum ! )

PS For the record, once I re-copied the file, the fixup and compact processes just worked: -

 /Applications/IBM\ Notes.app/Contents/MacOS/Support/NotesFixup bookmark.nsf 

[28CA:0002-11171A5C] 24/01/2019 17:02:31   Database Fixup: Started:  bookmark.nsf
[28CA:0002-11171A5C] 24/01/2019 17:02:31   Performing consistency check on bookmark.nsf... 
[28CA:0002-11171A5C] 24/01/2019 17:02:31   Completed consistency check on bookmark.nsf 
[28CA:0002-11171A5C] 24/01/2019 17:02:31   Performing consistency check on views in database bookmark.nsf 
[28CA:0002-11171A5C] 24/01/2019 17:02:31   Informational, rebuilding view - user specified REBUILD (reading /Users/hayd/Library/Application Support/IBM Notes Data/bookmark.nsf default design note Title:'')
[28CA:0002-11171A5C] 24/01/2019 17:02:32   View selection or column formula changed
[28CA:0002-11171A5C] 24/01/2019 17:02:32   Informational, rebuilding view - notes have been purged since last update (reading /Users/hayd/Library/Application Support/IBM Notes Data/bookmark.nsf view note Title:'(Collaborations) Collaborations')
[28CA:0002-11171A5C] 24/01/2019 17:02:32   Informational, rebuilding view - notes have been purged since last update (reading /Users/hayd/Library/Application Support/IBM Notes Data/bookmark.nsf view note Title:'(Downloads)')
[28CA:0002-11171A5C] 24/01/2019 17:02:32   Informational, rebuilding view - notes have been purged since last update (reading /Users/hayd/Library/Application Support/IBM Notes Data/bookmark.nsf view note Title:'(Home Pages and Links) PersPage')
[28CA:0002-11171A5C] 24/01/2019 17:02:32   Informational, rebuilding view - notes have been purged since last update (reading /Users/hayd/Library/Application Support/IBM Notes Data/bookmark.nsf view note Title:'(Layouts) (Layouts)')
[28CA:0002-11171A5C] 24/01/2019 17:02:32   Informational, rebuilding view - notes have been purged since last update (reading /Users/hayd/Library/Application Support/IBM Notes Data/bookmark.nsf view note Title:'(URLs)')
[28CA:0002-11171A5C] 24/01/2019 17:02:32   Completed consistency check on views in database bookmark.nsf 
[28CA:0002-11171A5C] 24/01/2019 17:02:32   Database Fixup: Shutdown

/Applications/IBM\ Notes.app/Contents/MacOS/Support/NotesCompact -c bookmark.nsf

[28CB:0002-10BBC85C] The ID file being used is: /Users/hayd/Library/Application Support/IBM Notes Data/gb006734.id
[28CB:0002-10BBC85C] Enter password (press the Esc key to abort): 

[28CB:0005-719000] 24/01/2019 17:02:56   Compacting bookmark.nsf (Bookmarks (8.5)),  -c bookmark.nsf
[28CB:0005-719000] 24/01/2019 17:02:57   Compacted  bookmark.nsf, 14848K bytes recovered (50%),  -c bookmark.nsf
[28CB:0002-10BBC85C] 24/01/2019 17:02:58   Database compactor process shutdown 

Wednesday 23 January 2019

IBM Microclimate on IBM Cloud Private - From Soup to Nuts

As per my previous post: -

W00t, IBM Microclimate running on IBM Cloud Private ...

here's a very quick run-through of my build process, having just REDONE FROM START.

It's worth reiterating that the official documentation here: -

https://github.com/IBM/charts/blob/master/stable/ibm-microclimate/README.md

is absolutely the way to go.

My notes are MY notes; YMMV

And, with that caveat, here we go: -

Create Non-Default Namespace

kubectl create namespace microclimate

Export HELM_HOME variable

export HELM_HOME=~/.helm

Configure kubectl and Helm clients to use the new namespace

cloudctl login -a https://mycluster.icp:8443 -n microclimate --skip-ssl-validation -u admin -p admin

Create a namespace for the Microclimate pipeline

kubectl create namespace microclimate-pipeline-deployments

Create Cluster Image Policy

vi mycip.yaml

apiVersion: securityenforcement.admission.cloud.ibm.com/v1beta1
kind: ClusterImagePolicy
metadata:
  name: microclimate-cluster-image-policy
spec:
  repositories:
  - name: mycluster.icp:8500/*
  - name: docker.io/maven:*
  - name: docker.io/jenkins/*
  - name: docker.io/docker:*

kubectl apply -f mycip.yaml
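
To confirm that the policy landed ( assuming the CRD registers the obvious plural name for ClusterImagePolicy ): -

kubectl get clusterimagepolicies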

Create Docker Registry Secret

- From Microclimate to Docker
- Used to push newly created applications to the internal Docker registry

kubectl create secret docker-registry microclimate-registry-secret \
  --docker-server=mycluster.icp:8500 \
  --docker-username=admin \
  --docker-password=admin

Create Generic Secret

- From Microclimate to Helm

kubectl create secret generic microclimate-helm-secret --from-file=cert.pem=$HELM_HOME/cert.pem --from-file=ca.pem=$HELM_HOME/ca.pem --from-file=key.pem=$HELM_HOME/key.pem

Create Docker Registry Secret

- From Microclimate to Pipeline

kubectl create secret docker-registry microclimate-pipeline-secret \
  --docker-server=mycluster.icp:8500 \
  --docker-username=admin \
  --docker-password=admin \
  --namespace=microclimate-pipeline-deployments

Validate default Service Account

kubectl describe serviceaccount default --namespace microclimate-pipeline-deployments

Add microclimate-pipeline-secret to default Service Account

kubectl patch serviceaccount default --namespace microclimate-pipeline-deployments -p "{\"imagePullSecrets\": [{\"name\": \"microclimate-pipeline-secret\"}]}"
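
To confirm that the patch took, re-run the describe from above - the "Image pull secrets" line should now include microclimate-pipeline-secret: -

kubectl describe serviceaccount default --namespace microclimate-pipeline-deployments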

Retrieve Cluster Proxy Address

kubectl get configmaps ibmcloud-cluster-info -n kube-public -o jsonpath='{.data.proxy_address}'

10.51.4.87

kubectl get nodes -l proxy=true

NAME         STATUS    ROLES     AGE       VERSION
10.51.4.87   Ready     proxy     13d       v1.11.3+icp-ee

Note that my Proxy node has a private 10.X.X.X IP address, and thus I cannot use this for the Microclimate Ingress; instead, I'll use the ICP dashboard ( Management/Master node ) address, which is public ( to me ).

This is further explained in the README.md: -

If the name of this node is an IP address, you can test that this IP is usable as an ingress domain by navigating to https://<proxy node IP>. If you receive a 'default backend - 404' error, then this IP is externally accessible and should be used as the global.ingressDomain value. If you cannot reach this address, copy the IP address that you use to access the IBM Cloud Private dashboard. Use the copied address to set the global.ingressDomain value.

Create Persistent Volumes / Persistent Volume Claims

- Note that I'm using YAML to create the Persistent Volumes and the corresponding Claims
- In my case, the PVs are actually "pointing" to NFS volumes, exported from my Boot node

kubectl apply -f createMC_PV1.yaml
kubectl apply -f createMC_PV2.yaml
kubectl apply -f createMC_PVC1.yaml
kubectl apply -f createMC_PVC2.yaml
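
For illustration, here's a minimal sketch of what one such PV / PVC pair might contain, assuming an NFS export of /export/MC_jenkins from the Boot node - the server address and size below are placeholders rather than my actual values, while the claim name matches the one passed to the Helm chart later on: -

apiVersion: v1
kind: PersistentVolume
metadata:
  name: microclimate-jenkins
spec:
  capacity:
    storage: 8Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.51.4.99
    path: /export/MC_jenkins
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: microclimate-jenkins
  namespace: microclimate
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 8Gi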

Add IBM Helm charts repo

helm repo add ibm-charts https://raw.githubusercontent.com/IBM/charts/master/repo/stable/

Install Microclimate Helm chart

helm install --name microclimate --namespace microclimate --set global.rbac.serviceAccountName=micro-sa,jenkins.rbac.serviceAccountName=pipeline-sa,global.ingressDomain=9.20.193.177.nip.io,persistence.useDynamicProvisioning=false,persistence.size=8Gi,jenkins.Persistence.ExistingClaim=microclimate-jenkins,persistence.existingClaimName=microclimate-ibm-microclimate ibm-charts/ibm-microclimate --tls

...
1. Access the Microclimate portal at the following URL: https://microclimate.9.20.193.177.nip.io

Target namespace set to: microclimate-pipeline-deployments, please verify this exists before creating pipelines
...
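
As an aside, the same --set values could equally live in a values file - an untested, hand-translated sketch of the command above: -

global:
  rbac:
    serviceAccountName: micro-sa
  ingressDomain: 9.20.193.177.nip.io
jenkins:
  rbac:
    serviceAccountName: pipeline-sa
  Persistence:
    ExistingClaim: microclimate-jenkins
persistence:
  useDynamicProvisioning: false
  size: 8Gi
  existingClaimName: microclimate-ibm-microclimate

helm install --name microclimate --namespace microclimate -f values.yaml ibm-charts/ibm-microclimate --tls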

Validate Microclimate pods

kubectl get pods -n microclimate

...
NAME                                                    READY     STATUS    RESTARTS   AGE
microclimate-ibm-microclimate-65f559cf48-ml587          1/1       Running   0          2m
microclimate-ibm-microclimate-atrium-5c7dc4d4f9-7hnv7   1/1       Running   0          2m
microclimate-ibm-microclimate-devops-7b7dd69655-g8pjv   0/1       Running   0          2m
microclimate-jenkins-64c7446647-glrpr                   1/1       Running   0          2m
...

Validate Ingress Points

kubectl get ing

...
NAME                            HOSTS                              ADDRESS      PORTS     AGE
microclimate-ibm-microclimate   microclimate.9.20.193.177.nip.io   10.51.4.87   80, 443   3m
microclimate-jenkins            jenkins.9.20.193.177.nip.io        10.51.4.87   80, 443   3m
...

Validate Helm chart

helm list --tls --namespace microclimate

...
NAME         REVISION UPDATED                 STATUS   CHART                   NAMESPACE
microclimate 1       Wed Jan 23 14:14:45 2019 DEPLOYED ibm-microclimate-1.10.0 microclimate
...

helm status microclimate --tls

...
LAST DEPLOYED: Wed Jan 23 14:14:45 2019
NAMESPACE: microclimate
STATUS: DEPLOYED
...

Access MC UI

- Note that this uses the NIP.IO service

...
 NIP.IO maps <anything>.<IP address>.nip.io to the corresponding <IP address>, even 127.0.0.1.nip.io maps to 127.0.0.1 
...

https://microclimate.9.20.193.177.nip.io

Login as admin/admin

Attempt to create a new project - I chose Java / Lagom, as per this: -

Create and deploy Lagom Reactive applications with Microclimate

Finally, if it helps, the File Watcher pod can be monitored via a command such as this: -

kubectl logs -f `kubectl get pods -n microclimate | grep -i watcher | awk '{print $1}'` -n microclimate

( watch out for the so-called back-tick character, which doesn't always paste well from a browser )
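
A copy/paste-safer equivalent uses the $( ) form of command substitution instead of back-ticks: -

kubectl logs -f $(kubectl get pods -n microclimate | grep -i watcher | awk '{print $1}') -n microclimate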


W00t, IBM Microclimate running on IBM Cloud Private ...

So another "Voyage of Discovery" post .....

I'm tinkering with IBM Microclimate : -

Microclimate provides an end-to-end, cloud-native solution for creating, building, testing and deploying applications. The solution offers services and tools to help you create and modernize applications in one seamless experience. It covers each step of the process from writing and testing code to building and deployment. The solution enables containerized development, rapid iteration with real-time performance insights, intelligent feedback, diagnostic services, an integrated DevOps pipeline and deployment to the cloud.

also well documented here: -

Microclimate is an end to end development environment that lets you rapidly create, edit, and deploy applications. Applications are run in containers from day one and can be delivered into production on Kubernetes through an automated DevOps pipeline using Jenkins. Microclimate can be installed locally or on IBM Cloud Private, and currently supports Java, Node.js, and Swift.

https://microclimate-dev2ops.github.io/

I've played with this before: -

Playing with Microclimate on IBM Cloud Private

Microclimate on IBM Cloud Private - Permission to write

and will be posting my own build notes, but I'm still following the official documentation here: -

https://github.com/IBM/charts/blob/master/stable/ibm-microclimate/README.md

Having followed all of the pre-requisite steps ( which mainly involve creating lots of artefacts using kubectl ), and having installed the Helm chart, I was following this tutorial: -

Create and deploy Lagom Reactive applications with Microclimate

but found that the resulting Docker container would never start.

I dug around within IBM Cloud Private (ICP) or, to be more accurate, within Kubernetes, upon which ICP is built.

Microclimate comprises a number of Pods, sitting within a dedicated namespace - I'm using micro-climate : -

kubectl get pods -n micro-climate

NAME                                                              READY     STATUS    RESTARTS   AGE
mc-adamjava-381f2a40-1ef8-11e9-bb42-adamjava-5c9697c464-dbqdk     1/1       Running   0          27m
mc-yoda-22f5f600-1efb-11e9-964a-yoda-bbb88b5d4-lqzwh              1/1       Running   0          52m
microclimate-ibm-microclimate-67cfd99c7b-bj7p2                    1/1       Running   0          55m
microclimate-ibm-microclimate-admin-editor-77ddbdd86-xzlbj        2/2       Running   0          53m
microclimate-ibm-microclimate-admin-filewatcher-6cc6c785cf6lsjx   1/1       Running   0          53m
microclimate-ibm-microclimate-admin-loadrunner-856b4b48b6-jqlmc   1/1       Running   0          53m
microclimate-ibm-microclimate-atrium-7f75d754fd-fp244             1/1       Running   0          1h
microclimate-ibm-microclimate-devops-568c4c5989-kjcqs             1/1       Running   0          1h
microclimate-jenkins-678584959-64jlm                              1/1       Running   0          1h

Given that all were running happily, I chose to dive into the logs of the File Watcher pod: -

kubectl logs -f microclimate-ibm-microclimate-admin-filewatcher-6cc6c785cf6lsjx -n micro-climate

and spotted this: -

[ERROR Tue Jan 22 13:58:43 UTC 2019 | Project: foobar | File Name: null | Function Name: null | Line Number: null] _tickCallback : 189 | unauthorized: authentication required

This took me down the rabbit hole of testing that I could push Docker images to the local registry that's part of the ICP cluster: -

Pushing Docker images to IBM Cloud Private

but Microclimate still refused to play ball.

It did, however, confirm my suspicion that the problem was with the credentials between Microclimate and the Docker registry.

I looked back at my build notes, and saw that I'd wrongly read this: -

Create the Microclimate registry secret

This secret is used by both Microclimate and Microclimate's pipelines. It allows images to be pushed and pulled from the private registry on your Kubernetes cluster.

Use the following code to create a Docker registry secret:

kubectl create secret docker-registry microclimate-registry-secret \
  --docker-server=<cluster_address>:8500 \
  --docker-username=<username> \
  --docker-password=<password> \
  --docker-email=<email>

Verify that the secret was created successfully and exists in the target namespace for Microclimate before you continue. This secret does not need to be patched to a service account as the Microclimate installation will manage this step.

as meaning that the secret needed to point at Docker Hub ( i.e. the official Docker registry ) rather than at the ICP registry.

After much faffing around, including a full nuke of the Helm chart, I was able to resolve this ... mainly thanks to some awesome support from the IBM Microclimate developer team in Hursley :-)

At one point, I stupidly retraced the very same steps ( using creds for Docker Hub ) because my notes were out-of-sync with reality.

However, once I deleted the microclimate-registry-secret  : -

kubectl delete secret microclimate-registry-secret -n micro-climate

and recreated it: -

kubectl create secret docker-registry microclimate-registry-secret   --docker-server=mycluster.icp:8500   --docker-username=admin   --docker-password=admin

pointing at the ICP server ( mycluster.icp:8500 ) and using the ICP credentials, things started to behave.

Just in case, I nuked the main Microclimate pod: -

kubectl delete pod microclimate-ibm-microclimate-67cfd99c7b-zqs8m 

which forced the Replica Set to spawn a new pod ( remember, cattle, not pets ).

At that point, I was able to create a new Java / Lagom project, and watch it happily start and work :-)

I also noticed that my other failing-to-start project magically started working.

So, the long story very short ( TL;DR; ), get the darn credentials right.

For the record, I also nuked/recreated the other secret - microclimate-pipeline-secret - as follows: -

kubectl create secret docker-registry microclimate-pipeline-secret \
  --docker-server=mycluster.icp:8500 \
  --docker-username=admin \
  --docker-password=admin \
  --namespace=microclimate-pipeline-deployments

So things are now working - now I'm going to rip it all down and REDO FROM START, to see what I've learned :-)

PS For what it's worth, a quick cheat to get the logs of the File Watcher pod, regardless of its name, is: -

kubectl logs -f `kubectl get pods -n micro-climate | grep -i watcher | awk '{print $1}'` -n micro-climate

Tuesday 22 January 2019

Pushing Docker images to IBM Cloud Private

As part of an ongoing journey of discovery, I was trying - and failing - to push a Docker image to the Docker registry that's part of the IBM Cloud Private cluster.

Having logged in ( this on the ICP Master/Management node, using the default - and very insecure - credentials ) : -

docker login mycluster.icp:8500 -u admin -p admin
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Login Succeeded

I then attempted to push an image: -

docker push mycluster.icp:8500/daveh/docker-whale

The push refers to repository [mycluster.icp:8500/daveh/docker-whale]
b978ff7f6d77: Preparing 
75cb9f6a9a68: Preparing 
5393f225a933: Preparing 
7bff100f35cb: Preparing 
unauthorized: authentication required

After much fiddling - and reading up on the subject - I realised where I was going wrong.

From the docs: -

...
Users are assigned to organizational units called namespaces.

Namespaces are also known as tenants or accounts. In IBM® Cloud Private, users are assigned to teams. You can assign multiple namespaces to a team. Users of a team are members of the team's namespaces.

An IBM Cloud Private namespace corresponds to a single namespace in Kubernetes. All deployments, pods, and volumes that are created in a single namespace belong to the same Kubernetes namespace.

The default namespace is available when you install IBM Cloud Private. The default namespace is assigned to all the super administrators in a cluster. The default namespace must not be deleted.
....

Source: Namespaces


Therefore, given that I'm logging in as admin, I need to target the default namespace.

This I did: -

docker tag docker-whale:latest mycluster.icp:8500/default/docker-whale:latest
docker push mycluster.icp:8500/default/docker-whale:latest

which returned: -

...
The push refers to repository [mycluster.icp:8500/default/docker-whale]
6761c00f4fed: Pushed 
eb264472db3a: Pushed 
9deed7976b27: Pushed 
7bff100f35cb: Pushed 
latest: digest: sha256:30f760b9716fdd5bc61ad7006a520862bf0ef2d653b8cb63d00e1c149ce8091b size: 1154
...

For the record, Whalesay was the first Docker image that I ever used - probably the same for many of us - and I grabbed the Dockerfile etc. from here: -

git clone https://github.com/docker/whalesay.git

and built it: -

cd whalesay/
docker build -t docker-whale .

This was also of great use: -



Using SSH without passwords OR pass phrases

For some reason, I did NOT know about this ... until I found out about it, and now I know :-)

I've been happily SSHing into the VMs that comprise my IBM Cloud Private (ICP) environment, and less-than-happily needing to find/copy/paste the passphrase of my SSH key pair ....

So I'd run the command: -

ssh root@dmhicp-mgmtmaster.fyre.ibm.com

and then have to go off to my password vault to find the passphrase for my private key: -

~/.ssh/id_rsa

Well, no longer, thanks to a combination of ssh-add and the native macOS Keychain.

I used this command: -

ssh-add -K ~/.ssh/id_rsa

and this command: -

ssh-add -l

to validate that it'd been added ( this returns a fingerprint ) and this command: -

ssh-add -L

to show the public key in its entirety.

The ssh-add command is documented here: -

https://www.ssh.com/ssh/add

I also ended up with a config file in my ~/.ssh directory: -

cat ~/.ssh/config 

Host *
 AddKeysToAgent yes
 UseKeychain yes
 IdentityFile ~/.ssh/id_rsa

probably because I've also been tinkering with GitHub: -

Connecting to GitHub with SSH

but the TL;DR; is that I can now access my Ubuntu VMs without a darn password OR passphrase.

Which is nice!

Firefox on macOS - Tabs broken, now fixed

Using Firefox 64.0.2 on a new MacBook Pro, I was annoyed to find that the tab switching behaviour - I use [Ctrl][TAB] to switch between browser tabs - wasn't working as it had on my old Mac.

Instead, I was seeing a preview of each tab, which really really slows down my workflow.

Thankfully, it's fixable via an about:config tweak, changing: -

browser.ctrlTab.recentlyUsedOrder;true

to: -

browser.ctrlTab.recentlyUsedOrder;false
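
( For anyone managing Firefox preferences via a user.js file in the profile directory, the equivalent one-liner - a sketch, untested here - would be: -

user_pref("browser.ctrlTab.recentlyUsedOrder", false); )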

Apparently it's a "feature" in the latest versions of Firefox: -

https://support.mozilla.org/en-US/questions/1232460

Thankfully there's always a better way .....

Monday 21 January 2019

Microclimate on IBM Cloud Private - Permission to write

Following on from my earlier post: -

 Playing with Microclimate on IBM Cloud Private 

it took me a while, and a lot of PD, but I was able to successfully resolve my issue with this: -

kubectl get pods -n micro-climate

NAME                                                    READY     STATUS                  RESTARTS   AGE
microclimate-ibm-microclimate-67cfd99c7b-zspb2          1/1       Running                 0          18m
microclimate-ibm-microclimate-atrium-7f75d754fd-t97m7   1/1       Running                 0          19m
microclimate-ibm-microclimate-devops-5b88cf9bcc-fr7jg   1/1       Running                 0          18m
microclimate-jenkins-755d769675-jp9fj                   0/1       Init:CrashLoopBackOff   8          19m

kubectl describe pod microclimate-jenkins-755d769675-jp9fj -n micro-climate

  Type     Reason     Age                From                 Message
  ----     ------     ----               ----                 -------
  Normal   Scheduled  20m                default-scheduler    Successfully assigned micro-climate/microclimate-jenkins-755d769675-jp9fj to 10.51.4.37
  Normal   Started    18m (x4 over 20m)  kubelet, 10.51.4.37  Started container
  Normal   Pulling    17m (x5 over 20m)  kubelet, 10.51.4.37  pulling image "ibmcom/microclimate-jenkins:1812"
  Normal   Pulled     17m (x5 over 20m)  kubelet, 10.51.4.37  Successfully pulled image "ibmcom/microclimate-jenkins:1812"
  Normal   Created    17m (x5 over 20m)  kubelet, 10.51.4.37  Created container
  Warning  BackOff    3s (x81 over 19m)  kubelet, 10.51.4.37  Back-off restarting failed container

I dug into the Docker logs on one of my ICP worker nodes, using the command: -

docker logs 7ed80b2f7174 -f

...
cp: cannot create regular file '/var/jenkins_home/config.xml': Permission denied
cp: cannot create regular file '/var/jenkins_home/org.jenkinsci.plugins.workflow.libs.GlobalLibraries.xml': Permission denied
/var/jenkins_config/apply_config.sh: 12: /var/jenkins_config/apply_config.sh: cannot create /var/jenkins_home/config1.xml: Permission denied
cp: cannot stat '/var/jenkins_home/config1.xml': No such file or directory
cp: cannot create regular file '/var/jenkins_home/jenkins.CLI.xml': Permission denied
cp: cannot create regular file '/var/jenkins_home/jenkins.model.JenkinsLocationConfiguration.xml': Permission denied
/var/jenkins_config/apply_config.sh: 18: /var/jenkins_config/apply_config.sh: cannot create /var/jenkins_home/secret.yaml: Permission denied
Error from server (NotFound): secrets "microclimate-ibm-microclimate" not found
Error from server (NotFound): secrets "microclimate-ibm-microclimate" not found
error: the path "/var/jenkins_home/secret.yaml" does not exist
mkdir: cannot create directory ‘/var/jenkins_home/users’: Permission denied
cp: cannot create regular file '/var/jenkins_home/users/admin/config.xml': No such file or directory
cp: cannot create regular file '/var/jenkins_home/plugins.txt': Permission denied
cat: /var/jenkins_home/plugins.txt: No such file or directory
Creating initial locks...
Analyzing war...
Registering preinstalled plugins...
Downloading plugins...

WAR bundled plugins:


Installed plugins:
*:
Cleaning up locks
rm: cannot remove '/usr/share/jenkins/ref/plugins/*.lock': No such file or directory
cp: cannot stat '/usr/share/jenkins/ref/plugins/*': No such file or directory
...

which was strange, given that I was using an NFS v4 service ( on the Boot node of my ICP cluster ) to host the required file-systems ( as per the previous post, using Persistent Volumes and Persistent Volume Claims ).

Thankfully, a colleague had hit the same problem, albeit NOT using NFS, and the solution was, as one might imagine, permissions :-)

Yes, even though I'm running everything as root (!), and had exported the file-systems via /etc/exports : -

/export/CAM_logs *(rw,nohide,insecure,no_subtree_check,async,no_root_squash)
/export/CAM_db *(rw,nohide,insecure,no_subtree_check,async,no_root_squash)
/export/CAM_terraform *(rw,nohide,insecure,no_subtree_check,async,no_root_squash)
/export/CAM_BPD_appdata *(rw,nohide,insecure,no_subtree_check,async,no_root_squash)
/export/MC_jenkins *(rw,nohide,insecure,no_subtree_check,async,no_root_squash)
/export/MC_microclimate *(rw,nohide,insecure,no_subtree_check,async,no_root_squash)

I needed to ensure that the underlying file-system permissions were also correct.

So this was what I had: -

ls -altrc /export

total 12
drwxr-xr-x 23 root root 4096 Jan  2 11:34 ..
drwxr-xr-x  3 root root   36 Jan  2 16:48 CAM_terraform
drwxr-xr-x 19 root root 4096 Jan  2 16:48 CAM_logs
drwxr-xr-x  5 root root   56 Jan  2 16:48 CAM_BPD_appdata
drwxr-xr-x  2 root root    6 Jan 15 16:19 MC_jenkins
drwxr-xr-x  8 root root  121 Jan 15 16:20 .
drwxr-xr-x  3 root root   25 Jan 15 16:28 MC_microclimate
drwxr-xr-x  4  999 root 4096 Jan 21 10:30 CAM_db

and this is what I needed to do: -

chmod -R 777 /export/MC_jenkins/

giving me this: -

drwxr-xr-x  3 root root   36 Jan  2 16:48 CAM_terraform
drwxr-xr-x 19 root root 4096 Jan  2 16:48 CAM_logs
drwxr-xr-x  5 root root   56 Jan  2 16:48 CAM_BPD_appdata
drwxr-xr-x  3 root root   25 Jan 15 16:28 MC_microclimate
drwxr-xr-x  4  999 root 4096 Jan 21 10:30 CAM_db
drwxrwxrwx  2 root root    6 Jan 21 15:34 MC_jenkins

In other words, I needed to grant write permission to the group and world, as well as to the owning user, thus changing the mode FROM 755 TO 777.
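
( With hindsight, a more surgical alternative - untested here, and assuming one knows the UID under which the Jenkins container actually runs; 1000 below is a hypothetical value - would be to chown the export to that user, rather than opening it up to the world: -

chown -R 1000:1000 /export/MC_jenkins )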

As soon as I did this, the ICP ( Kubernetes ) Replica Set automatically spun up container instances on the Worker nodes, which happily grabbed the storage, and I immediately saw stuff being written by Jenkins: -

ls -al /export/MC_jenkins/

total 84
drwxrwxrwx 17 root root     4096 Jan 21 15:36 .
drwxr-xr-x  8 root root      121 Jan 15 16:20 ..
drwxr-xr-x  3 fyre lpadmin    24 Jan 21 15:35 .cache
-rw-r--r--  1 fyre lpadmin  6997 Jan 21 15:35 config1.xml
-rw-r--r--  1 fyre lpadmin  7645 Jan 21 15:36 config.xml
-rw-r--r--  1 fyre lpadmin  2640 Jan 21 15:35 copy_reference_file.log
drwxr-xr-x  3 fyre lpadmin    20 Jan 21 15:36 .groovy
-rw-r--r--  1 fyre lpadmin   156 Jan 21 15:36 hudson.model.UpdateCenter.xml
-rw-r--r--  1 fyre lpadmin   370 Jan 21 15:36 hudson.plugins.git.GitTool.xml
-rw-------  1 fyre lpadmin  1712 Jan 21 15:36 identity.key.enc
drwxr-xr-x  2 fyre lpadmin    41 Jan 21 15:35 init.groovy.d
drwxr-xr-x  3 fyre lpadmin    19 Jan 21 15:35 .java
-rw-r--r--  1 fyre lpadmin    94 Jan 21 15:35 jenkins.CLI.xml
-rw-r--r--  1 fyre lpadmin     5 Jan 21 15:36 jenkins.install.InstallUtil.lastExecVersion
-rw-r--r--  1 fyre lpadmin   274 Jan 21 15:35 jenkins.model.JenkinsLocationConfiguration.xml
drwxr-xr-x  2 fyre lpadmin     6 Jan 21 15:36 jobs
drwxr-xr-x  4 fyre lpadmin    37 Jan 21 15:35 .kube
drwxr-xr-x  3 fyre lpadmin    19 Jan 21 15:36 logs
-rw-r--r--  1 fyre lpadmin   907 Jan 21 15:36 nodeMonitors.xml
drwxr-xr-x  2 fyre lpadmin     6 Jan 21 15:36 nodes
-rw-r--r--  1 fyre lpadmin  1034 Jan 21 15:35 org.jenkinsci.plugins.workflow.libs.GlobalLibraries.xml
drwxr-xr-x 56 fyre lpadmin 12288 Jan 21 15:36 plugins
-rw-r--r--  1 fyre lpadmin    24 Jan 21 15:35 plugins.txt
-rw-r--r--  1 fyre lpadmin    64 Jan 21 15:35 secret.key
-rw-r--r--  1 fyre lpadmin     0 Jan 21 15:35 secret.key.not-so-secret
drwxr-xr-x  4 fyre lpadmin   263 Jan 21 15:36 secrets
-rw-r--r--  1 fyre lpadmin     0 Jan 21 15:35 secret.yaml
drwxr-xr-x  2 fyre lpadmin    67 Jan 21 15:36 updates
drwxr-xr-x  2 fyre lpadmin    24 Jan 21 15:36 userContent
drwxr-xr-x  3 fyre lpadmin    19 Jan 21 15:35 users
drwxr-xr-x 11 fyre lpadmin  4096 Jan 21 15:35 war
drwxr-xr-x  2 fyre lpadmin     6 Jan 21 15:36 workflow-libs

with happy containers: -

docker logs 7f347b0a46e2 -f

Error from server (NotFound): secrets "microclimate-ibm-microclimate" not found
Error from server (NotFound): secrets "microclimate-ibm-microclimate" not found
error: no objects passed to create
Creating initial locks...
Analyzing war...
Registering preinstalled plugins...
Downloading plugins...
Downloading plugin: credentials-binding from https://updates.jenkins.io/download/plugins/credentials-binding/1.16/credentials-binding.hpi
 > credentials-binding depends on workflow-step-api:2.10,credentials:2.1.7,plain-credentials:1.3,ssh-credentials:1.11,structs:1.7
Downloading plugin: workflow-step-api from https://updates.jenkins.io/download/plugins/workflow-step-api/latest/workflow-step-api.hpi
Downloading plugin: credentials from https://updates.jenkins.io/download/plugins/credentials/latest/credentials.hpi
Downloading plugin: plain-credentials from https://updates.jenkins.io/download/plugins/plain-credentials/latest/plain-credentials.hpi
Downloading plugin: ssh-credentials from https://updates.jenkins.io/download/plugins/ssh-credentials/latest/ssh-credentials.hpi
Downloading plugin: structs from https://updates.jenkins.io/download/plugins/structs/latest/structs.hpi
 > credentials depends on structs:1.7
 > workflow-step-api depends on structs:1.5
 > plain-credentials depends on credentials:2.1.16
 > ssh-credentials depends on credentials:2.1.17

WAR bundled plugins:


Installed plugins:
credentials-binding:1.16
credentials:2.1.18
plain-credentials:1.5
ssh-credentials:1.14
structs:1.17
workflow-step-api:2.18
Cleaning up locks

and all of the Microclimate pods started playing nicely: -

kubectl get pods -n micro-climate

NAME                                                    READY     STATUS    RESTARTS   AGE
microclimate-ibm-microclimate-67cfd99c7b-66jj4          1/1       Running   0          29m
microclimate-ibm-microclimate-atrium-7f75d754fd-txn5p   1/1       Running   0          29m
microclimate-ibm-microclimate-devops-568c4c5989-prcbr   1/1       Running   0          29m
microclimate-jenkins-678584959-9hqld                    1/1       Running   0          29m

and a working Microclimate environment .....

Note to self - use kubectl to query images in a pod or deployment

In both cases, we use JSONPath. For a deployment, we can do this: -

kubectl get deployment foobar --namespace snafu --output jsonpath="{..image}"
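
and, for a pod, the same JSONPath applies - a sketch along the same lines, with a hypothetical pod name: -

kubectl get pod foobar-5c9697c464-dbqdk --namespace snafu --output jsonpath="{..image}"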