Monday, 21 January 2019

Playing with Microclimate on IBM Cloud Private

Now you know that I love to tinker with technology .....

Today, it is Microclimate deployed on IBM Cloud Private 3.1.1.

The Helm chart etc. are here: -

https://github.com/IBM/charts/tree/master/stable/ibm-microclimate

Whilst I'm NOT yet there, I did resolve an issue - one of my own making ...

Having gone through the pre-requisite steps, I chose to use the Helm CLI to create the deployment: -

helm install --name microclimate --namespace micro-climate --set global.rbac.serviceAccountName=micro-sa,jenkins.rbac.serviceAccountName=pipeline-sa,global.ingressDomain=dmhicp-mgmtmaster.fyre.ibm.com,persistence.useDynamicProvisioning=false,jenkins.Persistence.ExistingClaim=micro-climate/microclimate-jenkins,persistence.existingClaimName=micro-climate/microclimate-ibm-microclimate ibm-charts/ibm-microclimate --tls

specifying Persistent Volumes (PVs) / Persistent Volume Claims (PVCs) that I'd previously created (using NFS v4 as the underlying storage-sharing mechanism).
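For illustration, the Jenkins PV / PVC pair would look something along these lines - a minimal sketch, in which the NFS server address, export path and file name are placeholders rather than my actual values: -

apiVersion: v1
kind: PersistentVolume
metadata:
  name: microclimate-jenkins
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: nfs.example.com    # placeholder - your NFS server
    path: /export/jenkins      # placeholder - your NFS export
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: microclimate-jenkins
  namespace: micro-climate
spec:
  # Empty storage class, so we bind to a pre-created, classless PV
  # rather than triggering dynamic provisioning
  storageClassName: ""
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi

One would apply this via kubectl apply -f jenkins-pv.yaml; the microclimate-ibm-microclimate pair is analogous, albeit ReadWriteMany rather than ReadWriteOnce.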

Having deployed the chart, I used kubectl to check the environment: -

kubectl get pods -n micro-climate

NAME                                                    READY     STATUS    RESTARTS   AGE
microclimate-ibm-microclimate-55b8b7d46c-bhmw6          0/1       Pending   0          3d
microclimate-ibm-microclimate-atrium-7f75d754fd-r9h8f   1/1       Running   0          3d
microclimate-ibm-microclimate-devops-5b88cf9bcc-xb784   1/1       Running   0          3d
microclimate-jenkins-6d8fcb9ff8-7zw5v                   0/1       Pending   0          3d

When I dug into one of the Pending pods: -

kubectl describe pod microclimate-jenkins-6d8fcb9ff8-7zw5v -n micro-climate

Type     Reason            Age                    From               Message
----     ------            ----                   ----               -------
Warning  FailedScheduling  45s (x187390 over 3d)  default-scheduler  persistentvolumeclaim "micro-climate/microclimate-jenkins" not found

even though the PVC and its associated PV were both shown: -

kubectl get pv

..
microclimate-ibm-microclimate   8Gi        RWX            Retain           Bound     micro-climate/microclimate-ibm-microclimate                                        35m
microclimate-jenkins            8Gi        RWO            Retain           Bound     micro-climate/microclimate-jenkins                                                 3h
...

kubectl get pvc -n micro-climate

NAME                            STATUS    VOLUME                          CAPACITY   ACCESS MODES   STORAGECLASS   AGE
microclimate-ibm-microclimate   Bound     microclimate-ibm-microclimate   8Gi        RWX                           35m
microclimate-jenkins            Bound     microclimate-jenkins            8Gi        RWO                           3h

It is worth noting that, as I learned, it's necessary to create the Persistent Volume Claim before the Persistent Volume.

Once I did things the right way around, I could see that both the PVs AND the PVCs were in the Bound state.
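As an aside, if one wants a PVC to bind to a specific PV regardless of creation order, Kubernetes allows the claim to be pinned explicitly via spec.volumeName; a minimal sketch, reusing the Jenkins PV name from above: -

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: microclimate-jenkins
  namespace: micro-climate
spec:
  # Bind explicitly to the PV of this name, bypassing the usual matching
  volumeName: microclimate-jenkins
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi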

However, that didn't solve the problem with the Microclimate Helm chart ....

Further tinkering did ....

So, for the record, it appears that one can use namespaces to segregate PVCs, hence the aforementioned command: -

kubectl get pvc -n micro-climate

where the namespace is specified using the -n switch.

However, PVs are NOT similarly segregated - a PV is a cluster-scoped resource, whereas a PVC is namespace-scoped ....
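One can see this from the API itself; on a recent-enough kubectl (1.11 onwards), api-resources will confirm which resources are namespaced, and PVCs can be listed across all namespaces: -

kubectl api-resources --namespaced=false | grep persistentvolume

kubectl get pvc --all-namespaces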

Back to the Helm chart ...

When I deployed the chart, I chose to specify the PVCs: -

helm install --name microclimate --namespace micro-climate --set global.rbac.serviceAccountName=micro-sa,jenkins.rbac.serviceAccountName=pipeline-sa,global.ingressDomain=dmhicp-mgmtmaster.fyre.ibm.com,persistence.useDynamicProvisioning=false,jenkins.Persistence.ExistingClaim=micro-climate/microclimate-jenkins,persistence.existingClaimName=micro-climate/microclimate-ibm-microclimate ibm-charts/ibm-microclimate --tls

and referenced the namespace.

Well, that was a bad idea ....
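In hindsight, asking Helm what values the release actually received would have shown the mistake straight away: -

helm get values microclimate --tls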

Once I updated my command: -

helm install --name microclimate --namespace micro-climate --set global.rbac.serviceAccountName=micro-sa,jenkins.rbac.serviceAccountName=pipeline-sa,global.ingressDomain=dmhicp-mgmtmaster.fyre.ibm.com,persistence.useDynamicProvisioning=false,persistence.size=8Gi,jenkins.Persistence.ExistingClaim=microclimate-jenkins,persistence.existingClaimName=microclimate-ibm-microclimate ibm-charts/ibm-microclimate --tls

to no longer specify the namespace for each PVC, things were slightly better ...

kubectl get pods -n micro-climate

NAME                                                    READY     STATUS                  RESTARTS   AGE
microclimate-ibm-microclimate-67cfd99c7b-zspb2          1/1       Running                 0          18m
microclimate-ibm-microclimate-atrium-7f75d754fd-t97m7   1/1       Running                 0          19m
microclimate-ibm-microclimate-devops-5b88cf9bcc-fr7jg   1/1       Running                 0          18m
microclimate-jenkins-755d769675-jp9fj                   0/1       Init:CrashLoopBackOff   8          19m

I'm now digging into the reason why the Jenkins pod is failing: -

kubectl describe pod microclimate-jenkins-755d769675-jp9fj -n micro-climate

  Type     Reason     Age                From                 Message
  ----     ------     ----               ----                 -------
  Normal   Scheduled  20m                default-scheduler    Successfully assigned micro-climate/microclimate-jenkins-755d769675-jp9fj to 10.51.4.37
  Normal   Started    18m (x4 over 20m)  kubelet, 10.51.4.37  Started container
  Normal   Pulling    17m (x5 over 20m)  kubelet, 10.51.4.37  pulling image "ibmcom/microclimate-jenkins:1812"
  Normal   Pulled     17m (x5 over 20m)  kubelet, 10.51.4.37  Successfully pulled image "ibmcom/microclimate-jenkins:1812"
  Normal   Created    17m (x5 over 20m)  kubelet, 10.51.4.37  Created container
  Warning  BackOff    3s (x81 over 19m)  kubelet, 10.51.4.37  Back-off restarting failed container
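Since it's the init container that's crash-looping, the next step is to pull its logs - something along these lines, where the init container name is a placeholder I've yet to discover: -

kubectl get pod microclimate-jenkins-755d769675-jp9fj -n micro-climate --output jsonpath="{.spec.initContainers[*].name}"

kubectl logs microclimate-jenkins-755d769675-jp9fj -n micro-climate -c <init-container-name> --previous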

Watch this space ....
