Wednesday, 23 January 2019

IBM Microclimate on IBM Cloud Private - From Soup to Nuts

As per my previous post: -

W00t, IBM Microclimate running on IBM Cloud Private ...

here's a very quick run-through of my build process, having just REDONE FROM START.

It's worth reiterating that the official documentation here: -

https://github.com/IBM/charts/blob/master/stable/ibm-microclimate/README.md

is absolutely the way to go.

My notes are MY notes; YMMV

And, with that caveat, here we go: -

Create Non-Default Namespace

kubectl create namespace microclimate

Export HELM_HOME variable

export HELM_HOME=~/.helm

Configure kubectl and Helm clients to use the new namespace

cloudctl login -a https://mycluster.icp:8443 -n microclimate --skip-ssl-validation -u admin -p admin

Create a namespace for the Microclimate pipeline

kubectl create namespace microclimate-pipeline-deployments

Create Cluster Image Policy

vi mycip.yaml

apiVersion: securityenforcement.admission.cloud.ibm.com/v1beta1
kind: ClusterImagePolicy
metadata:
  name: microclimate-cluster-image-policy
spec:
  repositories:
  - name: mycluster.icp:8500/*
  - name: docker.io/maven:*
  - name: docker.io/jenkins/*
  - name: docker.io/docker:*

kubectl apply -f mycip.yaml
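As a quick sanity check - and assuming that the security enforcement CRD registers the usual plural resource name - the new policy should then be listable: -

kubectl get clusterimagepolicies

with microclimate-cluster-image-policy appearing in the output.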

Create Docker Registry Secret

- From Microclimate to Docker
- Used to push newly created applications to the internal Docker registry

kubectl create secret docker-registry microclimate-registry-secret \
  --docker-server=mycluster.icp:8500 \
  --docker-username=admin \
  --docker-password=admin

Create Generic Secret

- From Microclimate to Helm

kubectl create secret generic microclimate-helm-secret --from-file=cert.pem=$HELM_HOME/cert.pem --from-file=ca.pem=$HELM_HOME/ca.pem --from-file=key.pem=$HELM_HOME/key.pem
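It's worth a quick check that all three PEM files actually exist under $HELM_HOME, as the command above will otherwise error out; a trivial test: -

ls -l $HELM_HOME/ca.pem $HELM_HOME/cert.pem $HELM_HOME/key.pem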

Create Docker Registry Secret

- From Microclimate to Pipeline

kubectl create secret docker-registry microclimate-pipeline-secret \
  --docker-server=mycluster.icp:8500 \
  --docker-username=admin \
  --docker-password=admin \
  --namespace=microclimate-pipeline-deployments

Validate default Service Account

kubectl describe serviceaccount default --namespace microclimate-pipeline-deployments

Add microclimate-pipeline-secret to default Service Account

kubectl patch serviceaccount default --namespace microclimate-pipeline-deployments -p "{\"imagePullSecrets\": [{\"name\": \"microclimate-pipeline-secret\"}]}"
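To confirm that the patch took effect, a jsonpath query such as this should echo the secret name straight back: -

kubectl get serviceaccount default --namespace microclimate-pipeline-deployments -o jsonpath='{.imagePullSecrets[*].name}'

returning microclimate-pipeline-secret.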

Retrieve Cluster Proxy Address

kubectl get configmaps ibmcloud-cluster-info -n kube-public -o jsonpath='{.data.proxy_address}'

10.51.4.87

kubectl get nodes -l proxy=true

NAME         STATUS    ROLES     AGE       VERSION
10.51.4.87   Ready     proxy     13d       v1.11.3+icp-ee

Note that my Proxy node has a private 10.X.X.X IP address, and thus I cannot use this for the Microclimate Ingress; instead, I'll use the ICP dashboard ( Management/Master node ) address, which is public ( to me ).

This is further explained in the README.md: -

If the name of this node is an IP address, you can test that this IP is usable as an ingress domain by navigating to https://<IP>. If you receive a default backend - 404 error, then this IP is externally accessible and should be used as the global.ingressDomain value. If you cannot reach this address, copy the IP address that you use to access the IBM Cloud Private dashboard. Use the copied address to set the global.ingressDomain value.
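For what it's worth, that test can also be driven from a terminal; a minimal sketch, using my proxy address, with -k to skip certificate validation: -

curl -k https://10.51.4.87

If the README is right, a usable ingress IP returns that default backend - 404 response.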

Create Persistent Volumes / Persistent Volume Claims

- Note that I'm using YAML to create the Persistent Volumes and the corresponding Claims
- In my case, the PVs are actually "pointing" to NFS volumes, exported from my Boot node

kubectl apply -f createMC_PV1.yaml
kubectl apply -f createMC_PV2.yaml
kubectl apply -f createMC_PVC1.yaml
kubectl apply -f createMC_PVC2.yaml
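For reference, here's a minimal sketch of what one of these PV / PVC pairs might look like; the NFS server address is a placeholder, and the export path is borrowed from my Boot node ( adjust names, sizes and access modes to taste ): -

apiVersion: v1
kind: PersistentVolume
metadata:
  name: microclimate-jenkins
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: <Boot node IP>
    path: /export/MC_jenkins
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: microclimate-jenkins
  namespace: microclimate
spec:
  # an empty storageClassName keeps the claim away from dynamic provisioning
  storageClassName: ""
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi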

Add IBM Helm charts repo

helm repo add ibm-charts https://raw.githubusercontent.com/IBM/charts/master/repo/stable/

Install Microclimate Helm chart

helm install --name microclimate --namespace microclimate --set global.rbac.serviceAccountName=micro-sa,jenkins.rbac.serviceAccountName=pipeline-sa,global.ingressDomain=9.20.193.177.nip.io,persistence.useDynamicProvisioning=false,persistence.size=8Gi,jenkins.Persistence.ExistingClaim=microclimate-jenkins,persistence.existingClaimName=microclimate-ibm-microclimate ibm-charts/ibm-microclimate --tls

...
1. Access the Microclimate portal at the following URL: https://microclimate.9.20.193.177.nip.io

Target namespace set to: microclimate-pipeline-deployments, please verify this exists before creating pipelines
...

Validate Microclimate pods

kubectl get pods -n microclimate

...
NAME                                                    READY     STATUS    RESTARTS   AGE
microclimate-ibm-microclimate-65f559cf48-ml587          1/1       Running   0          2m
microclimate-ibm-microclimate-atrium-5c7dc4d4f9-7hnv7   1/1       Running   0          2m
microclimate-ibm-microclimate-devops-7b7dd69655-g8pjv   0/1       Running   0          2m
microclimate-jenkins-64c7446647-glrpr                   1/1       Running   0          2m
...

Validate Ingress Points

kubectl get ing

...
NAME                            HOSTS                              ADDRESS      PORTS     AGE
microclimate-ibm-microclimate   microclimate.9.20.193.177.nip.io   10.51.4.87   80, 443   3m
microclimate-jenkins            jenkins.9.20.193.177.nip.io        10.51.4.87   80, 443   3m
...

Validate Helm chart

helm list --tls --namespace microclimate

...
NAME         REVISION UPDATED                 STATUS   CHART                   NAMESPACE
microclimate 1       Wed Jan 23 14:14:45 2019 DEPLOYED ibm-microclimate-1.10.0 microclimate
...

helm status microclimate --tls

...
LAST DEPLOYED: Wed Jan 23 14:14:45 2019
NAMESPACE: microclimate
STATUS: DEPLOYED
...

Access MC UI

- Note that this uses the NIP.IO service

...
 NIP.IO maps <anything>.<IP Address>.nip.io to the corresponding <IP Address>, even 127.0.0.1.nip.io maps to 127.0.0.1 
...

https://microclimate.9.20.193.177.nip.io

Login as admin/admin

Attempt to create a new project - I chose Java / Lagom, as per this: -

Create and deploy Lagom Reactive applications with Microclimate

Finally, if it helps, the File Watcher pod can be monitored, via a command such as this: -

kubectl logs -f `kubectl get pods -n microclimate | grep -i watcher | awk '{print $1}'` -n microclimate

( watch out for the so-called back-tick character, which doesn't always paste well from a browser )
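An equivalent that side-steps the back-tick problem altogether is $( ) command substitution: -

kubectl logs -f $(kubectl get pods -n microclimate | grep -i watcher | awk '{print $1}') -n microclimate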


W00t, IBM Microclimate running on IBM Cloud Private ...

So another "Voyage of Discovery" post .....

I'm tinkering with IBM Microclimate : -

Microclimate provides an end-to-end, cloud-native solution for creating, building, testing and deploying applications. The solution offers services and tools to help you create and modernize applications in one seamless experience. It covers each step of the process from writing and testing code to building and deployment. The solution enables containerized development, rapid iteration with real-time performance insights, intelligent feedback, diagnostic services, an integrated DevOps pipeline and deployment to the cloud.

also well documented here: -

Microclimate is an end to end development environment that lets you rapidly create, edit, and deploy applications. Applications are run in containers from day one and can be delivered into production on Kubernetes through an automated DevOps pipeline using Jenkins. Microclimate can be installed locally or on IBM Cloud Private, and currently supports Java, Node.js, and Swift.

https://microclimate-dev2ops.github.io/

I've played with this before: -

Playing with Microclimate on IBM Cloud Private

Microclimate on IBM Cloud Private - Permission to write

and will be posting my own build notes, but I'm still following the official documentation here: -

https://github.com/IBM/charts/blob/master/stable/ibm-microclimate/README.md

Having followed all of the pre-requisite steps ( which mainly involve creating lots of artefacts using kubectl ), and having installed the Helm chart, I was following this tutorial: -

Create and deploy Lagom Reactive applications with Microclimate

but found that the resulting Docker container would never start.

I dug around within IBM Cloud Private (ICP) or, to be more accurate, within Kubernetes, upon which ICP is built.

Microclimate comprises a number of Pods, sitting within a dedicated namespace - I'm using micro-climate : -

kubectl get pods -n micro-climate

NAME                                                              READY     STATUS    RESTARTS   AGE
mc-adamjava-381f2a40-1ef8-11e9-bb42-adamjava-5c9697c464-dbqdk     1/1       Running   0          27m
mc-yoda-22f5f600-1efb-11e9-964a-yoda-bbb88b5d4-lqzwh              1/1       Running   0          52m
microclimate-ibm-microclimate-67cfd99c7b-bj7p2                    1/1       Running   0          55m
microclimate-ibm-microclimate-admin-editor-77ddbdd86-xzlbj        2/2       Running   0          53m
microclimate-ibm-microclimate-admin-filewatcher-6cc6c785cf6lsjx   1/1       Running   0          53m
microclimate-ibm-microclimate-admin-loadrunner-856b4b48b6-jqlmc   1/1       Running   0          53m
microclimate-ibm-microclimate-atrium-7f75d754fd-fp244             1/1       Running   0          1h
microclimate-ibm-microclimate-devops-568c4c5989-kjcqs             1/1       Running   0          1h
microclimate-jenkins-678584959-64jlm                              1/1       Running   0          1h

Given that all were running happily, I chose to dive into the logs of the File Watcher pod: -

kubectl logs -f microclimate-ibm-microclimate-admin-filewatcher-6cc6c785cf6lsjx -n micro-climate

and spotted this: -

[ERROR Tue Jan 22 13:58:43 UTC 2019 | Project: foobar | File Name: null | Function Name: null | Line Number: null] _tickCallback : 189 | unauthorized: authentication required

This took me down the rabbit hole of testing that I could push Docker images to the local registry that's part of the ICP cluster: -

Pushing Docker images to IBM Cloud Private

but Microclimate still refused to play ball.

It did, however, confirm my suspicion that the problem was with the credentials between Microclimate and the Docker registry.

I looked back at my build notes, and saw that I'd wrongly read this: -

Create the Microclimate registry secret

This secret is used by both Microclimate and Microclimate's pipelines. It allows images to be pushed and pulled from the private registry on your Kubernetes cluster.

Use the following code to create a Docker registry secret:

kubectl create secret docker-registry microclimate-registry-secret \
  --docker-server=<cluster_CA_domain>:8500 \
  --docker-username=<username> \
  --docker-password=<password> \
  --docker-email=<docker_email>

Verify that the secret was created successfully and exists in the target namespace for Microclimate before you continue. This secret does not need to be patched to a service account as the Microclimate installation will manage this step.

as meaning that the secret needed to point at DockerHub, i.e. the official Docker registry, rather than at the ICP registry.

After much faffing around, including a full nuke of the Helm chart, I was able to resolve this ... mainly thanks to some awesome support from the IBM Microclimate developer team in Hursley :-)

At one point, I stupidly retraced the very same steps ( using creds for DockerHub ) because my notes were out-of-sync with reality.

However, once I deleted the microclimate-registry-secret: -

kubectl delete secret microclimate-registry-secret -n micro-climate

and recreated it: -

kubectl create secret docker-registry microclimate-registry-secret \
  --docker-server=mycluster.icp:8500 \
  --docker-username=admin \
  --docker-password=admin

pointing at the ICP server ( mycluster.icp:8500 ) and using the ICP credentials, things started to behave.

Just in case, I nuked the main Microclimate pod: -

kubectl delete pod microclimate-ibm-microclimate-67cfd99c7b-zqs8m 

which forced the Replica Set to spawn a new pod ( remember, cattle, not pets ).

At that point, I was able to create a new Java / Lagom project, and watch it happily start and work :-)

I also noticed that my other failing-to-start project magically started working.

So, the long story very short ( TL;DR; ), get the darn credentials right.

For the record, I also nuked/recreated the other secret - microclimate-pipeline-secret - as follows: -

kubectl create secret docker-registry microclimate-pipeline-secret \
  --docker-server=mycluster.icp:8500 \
  --docker-username=admin \
  --docker-password=admin \
  --namespace=microclimate-pipeline-deployments
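Incidentally, if you want to check which registry and credentials a docker-registry secret is actually carrying before nuking it, the payload can be decoded; a sketch ( the back-slash in the jsonpath escapes the leading dot; use base64 -D on macOS ): -

kubectl get secret microclimate-registry-secret -n micro-climate -o jsonpath='{.data.\.dockerconfigjson}' | base64 --decode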

So things are now working - now I'm going to rip it all down and REDO FROM START, to see what I've learned :-)

PS For what it's worth, a quick cheat to get the logs of the File Watcher pod, regardless of its name, is: -

kubectl logs -f `kubectl get pods -n micro-climate | grep -i watcher | awk '{print $1}'` -n micro-climate

Tuesday, 22 January 2019

Pushing Docker images to IBM Cloud Private

As part of an ongoing journey of discovery, I was trying - and failing - to push a Docker image to the Docker registry that's part of the IBM Cloud Private cluster.

Having logged in ( this on the ICP Master/Management node, using the default - and very insecure - credentials ) : -

docker login mycluster.icp:8500 -u admin -p admin
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Login Succeeded

I then attempted to push an image: -

docker push mycluster.icp:8500/daveh/docker-whale

The push refers to repository [mycluster.icp:8500/daveh/docker-whale]
b978ff7f6d77: Preparing 
75cb9f6a9a68: Preparing 
5393f225a933: Preparing 
7bff100f35cb: Preparing 
unauthorized: authentication required

After much fiddling - and reading up on the subject - I realised where I was going wrong.

From the docs: -

...
Users are assigned to organizational units called namespaces.

Namespaces are also known as tenants or accounts. In IBM® Cloud Private, users are assigned to teams. You can assign multiple namespaces to a team. Users of a team are members of the team's namespaces.

An IBM Cloud Private namespace corresponds to a single namespace in Kubernetes. All deployments, pods, and volumes that are created in a single namespace belong to the same Kubernetes namespace.

The default namespace is available when you install IBM Cloud Private. The default namespace is assigned to all the super administrators in a cluster. The default namespace must not be deleted.
....

Source: Namespaces


Therefore, given that I'm logging in as admin, I need to target the default namespace.

This I did: -

docker tag docker-whale:latest mycluster.icp:8500/default/docker-whale:latest
docker push mycluster.icp:8500/default/docker-whale:latest

which returned: -

...
The push refers to repository [mycluster.icp:8500/default/docker-whale]
6761c00f4fed: Pushed 
eb264472db3a: Pushed 
9deed7976b27: Pushed 
7bff100f35cb: Pushed 
latest: digest: sha256:30f760b9716fdd5bc61ad7006a520862bf0ef2d653b8cb63d00e1c149ce8091b size: 1154
...
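As a quick sanity check, the freshly pushed image should be pullable straight back from the ICP registry: -

docker pull mycluster.icp:8500/default/docker-whale:latest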

For the record, Whalesay is the first Docker image that I ever used - same for everyone - and I grabbed the Dockerfile etc from here: -

git clone https://github.com/docker/whalesay.git

and built it: -

cd whalesay/
docker build -t docker-whale .
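and, for completeness, exercised it in the time-honoured fashion ( the image wraps the cowsay command ): -

docker run docker-whale cowsay boo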

This was also of great use: -



Using SSH without passwords OR pass phrases

For some reason, I did NOT know about this ... until I found out about it, and now I know :-)

I've been happily SSHing into the VMs that comprise my IBM Cloud Private (ICP) environment, and less-than-happily needing to find/copy/paste the passphrase of my SSH key pair ....

So I'd run the command: -

ssh root@dmhicp-mgmtmaster.fyre.ibm.com

and then have to go off to my password vault to find the pass phrase for my private key: -

~/.ssh/id_rsa

Well, no longer, thanks to a combination of ssh-add and the native macOS Keychain.

I used this command: -

ssh-add -k ~/.ssh/id_rsa

( note that, on macOS, it's the capital -K flag that actually stores the passphrase in the Keychain; lower-case -k merely skips certificates when loading keys, so the Keychain behaviour here owes much to the config file shown below )

and this command: -

ssh-add -l

to validate that it'd been added ( this returns a fingerprint ) and this command: -

ssh-add -L

to show the public key in its entirety.

The ssh-add command is documented here: -

https://www.ssh.com/ssh/add

I also ended up with a config file in my ~/.ssh directory: -

cat ~/.ssh/config 

Host *
 AddKeysToAgent yes
 UseKeychain yes
 IdentityFile ~/.ssh/id_rsa
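As an aside, if you'd rather not apply this to every host, the same stanza can be scoped to a single Host entry; a sketch, using one of my ICP boxes as the example: -

Host dmhicp-mgmtmaster.fyre.ibm.com
 AddKeysToAgent yes
 UseKeychain yes
 IdentityFile ~/.ssh/id_rsa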

This file probably ended up there because I've also been tinkering with GitHub: -

Connecting to GitHub with SSH

but the TL;DR; is that I can now access my Ubuntu VMs without a darn password OR passphrase.

Which is nice!

Firefox on macOS - Tabs broken, now fixed

Using Firefox 64.0.2 on a new MacBook Pro, I was annoyed to find that the tab switching behaviour - I use [Ctrl][TAB] to switch between browser tabs - wasn't working as it had on my old Mac.

Instead, I was seeing a preview of each tab, which really really slows down my workflow.

Thankfully, it's fixable by changing this about:config preference: -

browser.ctrlTab.recentlyUsedOrder;true

to: -

browser.ctrlTab.recentlyUsedOrder;false

Apparently it's a "feature" in the latest versions of Firefox: -

https://support.mozilla.org/en-US/questions/1232460

Thankfully there's always a better way .....

Monday, 21 January 2019

Microclimate on IBM Cloud Private - Permission to write

Following on from my earlier post: -

 Playing with Microclimate on IBM Cloud Private 

it took me a while, and a lot of PD, but I was able to successfully resolve my issue with this: -

kubectl get pods -n micro-climate

NAME                                                    READY     STATUS                  RESTARTS   AGE
microclimate-ibm-microclimate-67cfd99c7b-zspb2          1/1       Running                 0          18m
microclimate-ibm-microclimate-atrium-7f75d754fd-t97m7   1/1       Running                 0          19m
microclimate-ibm-microclimate-devops-5b88cf9bcc-fr7jg   1/1       Running                 0          18m
microclimate-jenkins-755d769675-jp9fj                   0/1       Init:CrashLoopBackOff   8          19m

kubectl describe pod microclimate-jenkins-755d769675-jp9fj -n micro-climate

  Type     Reason     Age                From                 Message
  ----     ------     ----               ----                 -------
  Normal   Scheduled  20m                default-scheduler    Successfully assigned micro-climate/microclimate-jenkins-755d769675-jp9fj to 10.51.4.37
  Normal   Started    18m (x4 over 20m)  kubelet, 10.51.4.37  Started container
  Normal   Pulling    17m (x5 over 20m)  kubelet, 10.51.4.37  pulling image "ibmcom/microclimate-jenkins:1812"
  Normal   Pulled     17m (x5 over 20m)  kubelet, 10.51.4.37  Successfully pulled image "ibmcom/microclimate-jenkins:1812"
  Normal   Created    17m (x5 over 20m)  kubelet, 10.51.4.37  Created container
  Warning  BackOff    3s (x81 over 19m)  kubelet, 10.51.4.37  Back-off restarting failed container

I dug into the Docker logs on one of my ICP worker nodes, using the command: -

docker logs 7ed80b2f7174 -f

...
cp: cannot create regular file '/var/jenkins_home/config.xml': Permission denied
cp: cannot create regular file '/var/jenkins_home/org.jenkinsci.plugins.workflow.libs.GlobalLibraries.xml': Permission denied
/var/jenkins_config/apply_config.sh: 12: /var/jenkins_config/apply_config.sh: cannot create /var/jenkins_home/config1.xml: Permission denied
cp: cannot stat '/var/jenkins_home/config1.xml': No such file or directory
cp: cannot create regular file '/var/jenkins_home/jenkins.CLI.xml': Permission denied
cp: cannot create regular file '/var/jenkins_home/jenkins.model.JenkinsLocationConfiguration.xml': Permission denied
/var/jenkins_config/apply_config.sh: 18: /var/jenkins_config/apply_config.sh: cannot create /var/jenkins_home/secret.yaml: Permission denied
Error from server (NotFound): secrets "microclimate-ibm-microclimate" not found
Error from server (NotFound): secrets "microclimate-ibm-microclimate" not found
error: the path "/var/jenkins_home/secret.yaml" does not exist
mkdir: cannot create directory ‘/var/jenkins_home/users’: Permission denied
cp: cannot create regular file '/var/jenkins_home/users/admin/config.xml': No such file or directory
cp: cannot create regular file '/var/jenkins_home/plugins.txt': Permission denied
cat: /var/jenkins_home/plugins.txt: No such file or directory
Creating initial locks...
Analyzing war...
Registering preinstalled plugins...
Downloading plugins...

WAR bundled plugins:


Installed plugins:
*:
Cleaning up locks
rm: cannot remove '/usr/share/jenkins/ref/plugins/*.lock': No such file or directory
cp: cannot stat '/usr/share/jenkins/ref/plugins/*': No such file or directory
...

which was strange, given that I was using an NFS v4 service ( on the Boot node of my ICP cluster ) to host the required file-systems ( as per the previous post, using Persistent Volumes and Persistent Volume Claims ).

Thankfully, a colleague had hit the same problem, albeit NOT using NFS, and the solution was, as one might imagine, permissions :-)

Yes, even though I'm running everything as root (!), and had exported the file-systems via /etc/exports : -

/export/CAM_logs *(rw,nohide,insecure,no_subtree_check,async,no_root_squash)
/export/CAM_db *(rw,nohide,insecure,no_subtree_check,async,no_root_squash)
/export/CAM_terraform *(rw,nohide,insecure,no_subtree_check,async,no_root_squash)
/export/CAM_BPD_appdata *(rw,nohide,insecure,no_subtree_check,async,no_root_squash)
/export/MC_jenkins *(rw,nohide,insecure,no_subtree_check,async,no_root_squash)
/export/MC_microclimate *(rw,nohide,insecure,no_subtree_check,async,no_root_squash)
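As an aside, whenever /etc/exports changes, the NFS server needs telling; on my Ubuntu Boot node, something like: -

exportfs -ra

re-exports everything, and: -

showmount -e localhost

shows what's actually being served.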

I needed to ensure that the underlying file-system permissions were also correct.

So this was what I had: -

ls -altrc /export

total 12
drwxr-xr-x 23 root root 4096 Jan  2 11:34 ..
drwxr-xr-x  3 root root   36 Jan  2 16:48 CAM_terraform
drwxr-xr-x 19 root root 4096 Jan  2 16:48 CAM_logs
drwxr-xr-x  5 root root   56 Jan  2 16:48 CAM_BPD_appdata
drwxr-xr-x  2 root root    6 Jan 15 16:19 MC_jenkins
drwxr-xr-x  8 root root  121 Jan 15 16:20 .
drwxr-xr-x  3 root root   25 Jan 15 16:28 MC_microclimate
drwxr-xr-x  4  999 root 4096 Jan 21 10:30 CAM_db

and this is what I needed to do: -

chmod -R 777 /export/MC_jenkins/

giving me this: -

drwxr-xr-x  3 root root   36 Jan  2 16:48 CAM_terraform
drwxr-xr-x 19 root root 4096 Jan  2 16:48 CAM_logs
drwxr-xr-x  5 root root   56 Jan  2 16:48 CAM_BPD_appdata
drwxr-xr-x  3 root root   25 Jan 15 16:28 MC_microclimate
drwxr-xr-x  4  999 root 4096 Jan 21 10:30 CAM_db
drwxrwxrwx  2 root root    6 Jan 21 15:34 MC_jenkins

In other words, I needed to grant write permission to group and world, as well as to the owning user, thus changing the mode FROM 755 TO 777.

As soon as I did this, the ICP ( Kubernetes ) Replica Set automatically spun up container instances on the Worker nodes which happily grabbed the storage, and I immediately saw stuff being written by Jenkins: -

ls -al /export/MC_jenkins/

total 84
drwxrwxrwx 17 root root     4096 Jan 21 15:36 .
drwxr-xr-x  8 root root      121 Jan 15 16:20 ..
drwxr-xr-x  3 fyre lpadmin    24 Jan 21 15:35 .cache
-rw-r--r--  1 fyre lpadmin  6997 Jan 21 15:35 config1.xml
-rw-r--r--  1 fyre lpadmin  7645 Jan 21 15:36 config.xml
-rw-r--r--  1 fyre lpadmin  2640 Jan 21 15:35 copy_reference_file.log
drwxr-xr-x  3 fyre lpadmin    20 Jan 21 15:36 .groovy
-rw-r--r--  1 fyre lpadmin   156 Jan 21 15:36 hudson.model.UpdateCenter.xml
-rw-r--r--  1 fyre lpadmin   370 Jan 21 15:36 hudson.plugins.git.GitTool.xml
-rw-------  1 fyre lpadmin  1712 Jan 21 15:36 identity.key.enc
drwxr-xr-x  2 fyre lpadmin    41 Jan 21 15:35 init.groovy.d
drwxr-xr-x  3 fyre lpadmin    19 Jan 21 15:35 .java
-rw-r--r--  1 fyre lpadmin    94 Jan 21 15:35 jenkins.CLI.xml
-rw-r--r--  1 fyre lpadmin     5 Jan 21 15:36 jenkins.install.InstallUtil.lastExecVersion
-rw-r--r--  1 fyre lpadmin   274 Jan 21 15:35 jenkins.model.JenkinsLocationConfiguration.xml
drwxr-xr-x  2 fyre lpadmin     6 Jan 21 15:36 jobs
drwxr-xr-x  4 fyre lpadmin    37 Jan 21 15:35 .kube
drwxr-xr-x  3 fyre lpadmin    19 Jan 21 15:36 logs
-rw-r--r--  1 fyre lpadmin   907 Jan 21 15:36 nodeMonitors.xml
drwxr-xr-x  2 fyre lpadmin     6 Jan 21 15:36 nodes
-rw-r--r--  1 fyre lpadmin  1034 Jan 21 15:35 org.jenkinsci.plugins.workflow.libs.GlobalLibraries.xml
drwxr-xr-x 56 fyre lpadmin 12288 Jan 21 15:36 plugins
-rw-r--r--  1 fyre lpadmin    24 Jan 21 15:35 plugins.txt
-rw-r--r--  1 fyre lpadmin    64 Jan 21 15:35 secret.key
-rw-r--r--  1 fyre lpadmin     0 Jan 21 15:35 secret.key.not-so-secret
drwxr-xr-x  4 fyre lpadmin   263 Jan 21 15:36 secrets
-rw-r--r--  1 fyre lpadmin     0 Jan 21 15:35 secret.yaml
drwxr-xr-x  2 fyre lpadmin    67 Jan 21 15:36 updates
drwxr-xr-x  2 fyre lpadmin    24 Jan 21 15:36 userContent
drwxr-xr-x  3 fyre lpadmin    19 Jan 21 15:35 users
drwxr-xr-x 11 fyre lpadmin  4096 Jan 21 15:35 war
drwxr-xr-x  2 fyre lpadmin     6 Jan 21 15:36 workflow-libs

with happy containers: -

docker logs 7f347b0a46e2 -f

Error from server (NotFound): secrets "microclimate-ibm-microclimate" not found
Error from server (NotFound): secrets "microclimate-ibm-microclimate" not found
error: no objects passed to create
Creating initial locks...
Analyzing war...
Registering preinstalled plugins...
Downloading plugins...
Downloading plugin: credentials-binding from https://updates.jenkins.io/download/plugins/credentials-binding/1.16/credentials-binding.hpi
 > credentials-binding depends on workflow-step-api:2.10,credentials:2.1.7,plain-credentials:1.3,ssh-credentials:1.11,structs:1.7
Downloading plugin: workflow-step-api from https://updates.jenkins.io/download/plugins/workflow-step-api/latest/workflow-step-api.hpi
Downloading plugin: credentials from https://updates.jenkins.io/download/plugins/credentials/latest/credentials.hpi
Downloading plugin: plain-credentials from https://updates.jenkins.io/download/plugins/plain-credentials/latest/plain-credentials.hpi
Downloading plugin: ssh-credentials from https://updates.jenkins.io/download/plugins/ssh-credentials/latest/ssh-credentials.hpi
Downloading plugin: structs from https://updates.jenkins.io/download/plugins/structs/latest/structs.hpi
 > credentials depends on structs:1.7
 > workflow-step-api depends on structs:1.5
 > plain-credentials depends on credentials:2.1.16
 > ssh-credentials depends on credentials:2.1.17

WAR bundled plugins:


Installed plugins:
credentials-binding:1.16
credentials:2.1.18
plain-credentials:1.5
ssh-credentials:1.14
structs:1.17
workflow-step-api:2.18
Cleaning up locks

and all of the Microclimate pods started playing nicely: -

kubectl get pods -n micro-climate

NAME                                                    READY     STATUS    RESTARTS   AGE
microclimate-ibm-microclimate-67cfd99c7b-66jj4          1/1       Running   0          29m
microclimate-ibm-microclimate-atrium-7f75d754fd-txn5p   1/1       Running   0          29m
microclimate-ibm-microclimate-devops-568c4c5989-prcbr   1/1       Running   0          29m
microclimate-jenkins-678584959-9hqld                    1/1       Running   0          29m

and a working Microclimate environment .....

Playing with Microclimate on IBM Cloud Private

Now you know that I love to tinker with technology .....

Today, it is Microclimate deployed on IBM Cloud Private 3.1.1.

The Helm chart etc. are here: -

https://github.com/IBM/charts/tree/master/stable/ibm-microclimate

Whilst I'm NOT yet there, I did resolve an issue - one of my own making ...

Having gone through the pre-requisite steps, I chose to use the Helm CLI to create the deployment: -

helm install --name microclimate --namespace micro-climate --set global.rbac.serviceAccountName=micro-sa,jenkins.rbac.serviceAccountName=pipeline-sa,global.ingressDomain=dmhicp-mgmtmaster.fyre.ibm.com,persistence.useDynamicProvisioning=false,jenkins.Persistence.ExistingClaim=micro-climate/microclimate-jenkins,persistence.existingClaimName=micro-climate/microclimate-ibm-microclimate ibm-charts/ibm-microclimate --tls

specifying Persistent Volumes / Persistent Volume Claims that I'd previously created ( using NFS v4 as the underlying storage sharing mechanism ).

Having deployed the chart, I used kubectl to check the environment: -

kubectl get pods -n micro-climate

NAME                                                    READY     STATUS    RESTARTS   AGE
microclimate-ibm-microclimate-55b8b7d46c-bhmw6          0/1       Pending   0          3d
microclimate-ibm-microclimate-atrium-7f75d754fd-r9h8f   1/1       Running   0          3d
microclimate-ibm-microclimate-devops-5b88cf9bcc-xb784   1/1       Running   0          3d
microclimate-jenkins-6d8fcb9ff8-7zw5v                   0/1       Pending   0          3d

When I dug into one of the Pending pods: -

kubectl describe pod microclimate-jenkins-6d8fcb9ff8-7zw5v -n micro-climate

Type     Reason            Age                    From               Message
----     ------            ----                   ----               -------
Warning  FailedScheduling  45s (x187390 over 3d)  default-scheduler  persistentvolumeclaim "micro-climate/microclimate-jenkins" not found

even though the PVC and its associated PV were both visible: -

kubectl get pv

..
microclimate-ibm-microclimate   8Gi        RWX            Retain           Bound     micro-climate/microclimate-ibm-microclimate                                        35m
microclimate-jenkins            8Gi        RWO            Retain           Bound     micro-climate/microclimate-jenkins                                                 3h
...

kubectl get pvc -n micro-climate

NAME                            STATUS    VOLUME                          CAPACITY   ACCESS MODES   STORAGECLASS   AGE
microclimate-ibm-microclimate   Bound     microclimate-ibm-microclimate   8Gi        RWX                           35m
microclimate-jenkins            Bound     microclimate-jenkins            8Gi        RWO                           3h

It is worth noting that, as I learned, it's necessary to create the Persistent Volume Claim before the Persistent Volume.

Once I did things the right way around, I could see that both the PVs AND the PVCs were in the Bound state.

However, that didn't solve the problem with the Microclimate Helm chart ....

Further tinkering did ....

So, for the record, it appears that one can use namespaces to segregate PVCs, hence the afore-mentioned command: -

kubectl get pvc -n micro-climate

where the namespace is specified using the -n switch.

However, PVs are NOT similarly segregated ....

Back to the Helm chart ...

When I deployed the chart, I chose to specify the PVCs: -

helm install --name microclimate --namespace micro-climate --set global.rbac.serviceAccountName=micro-sa,jenkins.rbac.serviceAccountName=pipeline-sa,global.ingressDomain=dmhicp-mgmtmaster.fyre.ibm.com,persistence.useDynamicProvisioning=false,jenkins.Persistence.ExistingClaim=micro-climate/microclimate-jenkins,persistence.existingClaimName=micro-climate/microclimate-ibm-microclimate ibm-charts/ibm-microclimate --tls

and referenced the namespace in each claim name.

Well, that was a bad idea ....

Once I updated my command: -

helm install --name microclimate --namespace micro-climate --set global.rbac.serviceAccountName=micro-sa,jenkins.rbac.serviceAccountName=pipeline-sa,global.ingressDomain=dmhicp-mgmtmaster.fyre.ibm.com,persistence.useDynamicProvisioning=false,persistence.size=8Gi,jenkins.Persistence.ExistingClaim=microclimate-jenkins,persistence.existingClaimName=microclimate-ibm-microclimate ibm-charts/ibm-microclimate --tls

to no longer specify the namespace for each PVC, things were slightly better ...

kubectl get pods -n micro-climate

NAME                                                    READY     STATUS                  RESTARTS   AGE
microclimate-ibm-microclimate-67cfd99c7b-zspb2          1/1       Running                 0          18m
microclimate-ibm-microclimate-atrium-7f75d754fd-t97m7   1/1       Running                 0          19m
microclimate-ibm-microclimate-devops-5b88cf9bcc-fr7jg   1/1       Running                 0          18m
microclimate-jenkins-755d769675-jp9fj                   0/1       Init:CrashLoopBackOff   8          19m

I'm now digging into the reason why the Jenkins pod is failing: -

kubectl describe pod microclimate-jenkins-755d769675-jp9fj -n micro-climate

  Type     Reason     Age                From                 Message
  ----     ------     ----               ----                 -------
  Normal   Scheduled  20m                default-scheduler    Successfully assigned micro-climate/microclimate-jenkins-755d769675-jp9fj to 10.51.4.37
  Normal   Started    18m (x4 over 20m)  kubelet, 10.51.4.37  Started container
  Normal   Pulling    17m (x5 over 20m)  kubelet, 10.51.4.37  pulling image "ibmcom/microclimate-jenkins:1812"
  Normal   Pulled     17m (x5 over 20m)  kubelet, 10.51.4.37  Successfully pulled image "ibmcom/microclimate-jenkins:1812"
  Normal   Created    17m (x5 over 20m)  kubelet, 10.51.4.37  Created container
  Warning  BackOff    3s (x81 over 19m)  kubelet, 10.51.4.37  Back-off restarting failed container

Watch this space ....

Friday, 18 January 2019

WebSphere Application Server as a Windows service - the detail

As an update, here's the full detail on adding/updating WAS as a Windows Service: -

Add Service

c:\IBM\WebSphere\AppServer\bin\wasservice.exe -add "SKLM301Server" -serverName "server1" -profilePath "C:\ibm\WebSphere\AppServer\profiles\KLMProfile" -encodeParams -stopArgs "-username wasadmin -password Qg4ggl3@" -startType manual -restart true

...
IBM WebSphere Application Server V9.0 - SKLM301Server service successfully added.
...

Start Server

c:\IBM\WebSphere\AppServer\bin\wasservice.exe -start "SKLM301Server" -serverName "server1"

...
Starting Service: SKLM301Server
Successfully started service.
...

Get Status

c:\IBM\WebSphere\AppServer\bin\wasservice.exe -status "SKLM301Server" -serverName "server1"

...
The service is running.
...

Stop Server

c:\IBM\WebSphere\AppServer\bin\wasservice.exe -stop "SKLM301Server" -serverName "server1"

...
Successfully stopped service.
...

Get Status

c:\IBM\WebSphere\AppServer\bin\wasservice.exe -status "SKLM301Server" -serverName "server1"

...
The service is stopped.
...

Update Service

- When WAS admin password changes

c:\IBM\WebSphere\AppServer\bin\wasservice.exe -add "SKLM301Server" -serverName "server1" -profilePath "C:\ibm\WebSphere\AppServer\profiles\KLMProfile" -encodeParams -stopArgs "-username wasadmin -password Qp455w0rd@" -startType manual -restart true

...
Service already exists, updating parameters...
...

Remove Service

- If needed

c:\IBM\WebSphere\AppServer\bin\wasservice.exe -remove "SKLM301Server" -serverName "server1"

...
Successfully removed service
...


Sources: -

WASService command


Using the WASServiceHelper utility to create Windows services for application servers


DB2 and Windows and the missing database

I was guiding a client through a process of setting up a new database within DB2 11.1 on a Windows Server 2012 R2 box ....

I'd had them run this command: -

db2cmd.exe

and then: -

set DB2INSTANCE=SKLMDB30

and then: -

db2 list db directory

BUT no databases were displayed, even though we KNEW that they existed, and that WebSphere Application Server could "see" the database, via the JDBC data source's Test Connection button.

Some fiddling about ensued ....

... and then inspiration struck me.

This is Windows, right ?

So, let's try running the db2cmd.exe command as Administrator


Yep, that did it.

If in doubt ......

Wednesday, 9 January 2019

IBM Cloud Automation Manager and the Case of the Missing Software

As per previous posts: -

Docker Secrets - And there's more ....



I am on another voyage of discovery, this time using IBM Cloud Automation Manager (CAM) to build out IBM middleware such as DB2, WebSphere Application Server and WebSphere Liberty Profile on VMs, hosted on the IBM Cloud ( aka SoftLayer ).

So, having got CAM up-and-running, and having learned a few lessons about building VMs ( the subject of the next few posts ), I started by attempting to build out a DB2 VM ....

This went smoothly, until I hit this exception: -

...
Error: Error applying plan:
1 error(s) occurred:
* camc_softwaredeploy.DB2Node01_db2_v111_install: 1 error(s) occurred:
* camc_softwaredeploy.DB2Node01_db2_v111_install: 
**********
Error: Response from pattern manager:
StatusCode:500
Message:
{
  "failures_found": [
    "10.143.162.16     Error executing action `run` on resource 'ruby_block[fixpack_packages_validation]'", 
    "10.143.162.16     RuntimeError", 
    "10.143.162.16     116:       rescue OpenSSL::SSL::SSLError", 
    "10.143.162.16 Chef Client failed. 0 resources updated in 02 seconds", 
    "10.143.162.16 [2019-01-09T00:11:54-06:00] FATAL: Stacktrace dumped to /var/chef/cache/chef-stacktrace.out", 
    "10.143.162.16 [2019-01-09T00:11:54-06:00] FATAL: Please provide the contents of the stacktrace.out file if you file a bug report", 
    "10.143.162.16 [2019-01-09T00:11:54-06:00] FATAL: RuntimeError: ruby_block[fixpack_packages_validation] (db2::prereq_check line 99) had an error: RuntimeError: 404 Please make sure v11.1.2fp2_linuxx64_server_t.tar.gz is available in your binary repository"
  ], 
...

It took me but a little while to work out what I'd missed ...

Having built CAM, and the underlying IBM Cloud Private (ICP) platform, I somehow assumed that the requisite IBM software packages would magically download themselves from Passport Advantage, Fix Central etc.

Well, guess what ....

They did not :-)

Using this as inspiration: -

Managing a software repository


I downloaded the required DB2 binary, fix pack and activation kit: -

ls -al /tmp/v11.1.2fp2_linuxx64_server_t.tar.gz

-rw-r--r-- 1 root root 2092136927 Jun 15  2017 /tmp/v11.1.2fp2_linuxx64_server_t.tar.gz

ls -al /tmp/DB2_Svr_11.1_Linux_x86-64.tar.gz 

-rw-r--r-- 1 root root 1945576016 Jun 11  2016 /tmp/DB2_Svr_11.1_Linux_x86-64.tar.gz

ls -al /tmp/DB2_ESE_AUSI_Activation_11.1.zip 

-rw-r--r-- 1 root root 3993254 Jun  3  2016 /tmp/DB2_ESE_AUSI_Activation_11.1.zip

and placed the files in the requisite locations: -

cp /tmp/DB2_Svr_11.1_Linux_x86-64.tar.gz /opt/ibm/docker/software-repo/var/swRepo/private/db2/v111/base

cp /tmp/v11.1.2fp2_linuxx64_server_t.tar.gz /opt/ibm/docker/software-repo/var/swRepo/private/db2/v111/maint

unzip -j -p DB2_ESE_AUSI_Activation_11.1.zip \*db2ese_u.lic > /opt/ibm/docker/software-repo/var/swRepo/private/db2/v111/license

and validated the same: -

ls -R -al /opt/ibm/docker/software-repo/var/swRepo/private/db2/v111/

/opt/ibm/docker/software-repo/var/swRepo/private/db2/v111/:
total 20
drwxr-xr-x 4 root root 4096 Jan  9 20:55 .
drwxr-xr-x 4 root root 4096 Jan  8 19:30 ..
drwxr-xr-x 2 root root 4096 Jan  9 11:45 base
-rw-r--r-- 1 root root  913 Jan  9 20:54 license
drwxr-xr-x 2 root root 4096 Jan  9 14:51 maint

/opt/ibm/docker/software-repo/var/swRepo/private/db2/v111/base:
total 1899992
drwxr-xr-x 2 root root       4096 Jan  9 11:45 .
drwxr-xr-x 4 root root       4096 Jan  9 20:55 ..
-rw-r--r-- 1 root root 1945576016 Jan  9 11:46 DB2_Svr_11.1_Linux_x86-64.tar.gz

/opt/ibm/docker/software-repo/var/swRepo/private/db2/v111/maint:
total 2043116
drwxr-xr-x 2 root root       4096 Jan  9 14:51 .
drwxr-xr-x 4 root root       4096 Jan  9 20:55 ..
-rw-r--r-- 1 root root 2092136927 Jan  9 14:51 v11.1.2fp2_linuxx64_server_t.tar.gz

With the right files in the right place, guess what, the CAM build of a DB2 VM just flippin' worked :-)