Wednesday 6 February 2019

Fun and games with pushing Docker images to Kubernetes registry

I hit a wee problem today, whilst pushing a Docker image ( WebSphere Liberty Profile ) from my local Docker image store to the registry associated with my IBM Kubernetes Service (IKS) cluster.

For reference, I wrote about Liberty on Docker earlier: -

 WebSphere Liberty Profile on Docker - An update 

I'd tagged the image: -

docker tag websphere-liberty:latest registry.ng.bluemix.net/davehay42/wlp

and pushed it to the newly created IKS cluster registry: -

docker push registry.ng.bluemix.net/davehay42/wlp:latest
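
As an aside, this assumes that the IBM Cloud CLI and the local Docker client are already logged in to the IBM Cloud Container Registry; if not, the login steps would look something like this ( a sketch, rather than part of my original run ): -

# Assumed prerequisite, not shown above: authenticate the IBM Cloud CLI
# and then log the local Docker client in to the Container Registry
ibmcloud login
ibmcloud cr login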

I validated that the image was there: -

ibmcloud cr image-list

...
Listing images...

REPOSITORY                                        TAG          DIGEST         NAMESPACE     CREATED       SIZE     SECURITY STATUS
registry.ng.bluemix.net/davehay42/davehay         helloworld   92c7f9c92844   davehay42     1 month ago   977 B    No Issues
registry.ng.bluemix.net/davehay42/wlp             latest       6631eaf721ad   davehay42     4 days ago    335 MB   No Issues
registry.ng.bluemix.net/dmh_k8s_poc/dmh_k8s_poc   hello        92c7f9c92844   dmh_k8s_poc   1 month ago   977 B    No Issues
...

and then created a K8S deployment: -

kubectl create deployment wlp --image=latest

Alas, when I checked the pod that the deployment had created: -

kubectl describe pod `kubectl get pods | grep wlp | awk '{print $1}'`

I saw this: -

...
  Type     Reason                 Age                From                   Message
  ----     ------                 ----               ----                   -------
  Normal   Scheduled              36s                default-scheduler      Successfully assigned wlp-598d758678-w4tj7 to 10.76.195.65
  Normal   SuccessfulMountVolume  36s                kubelet, 10.76.195.65  MountVolume.SetUp succeeded for volume "default-token-8znfb"
  Normal   Pulling                19s (x2 over 35s)  kubelet, 10.76.195.65  pulling image "a83fa38506a5"
  Warning  Failed                 18s (x2 over 34s)  kubelet, 10.76.195.65  Failed to pull image "a83fa38506a5": rpc error: code = Unknown desc = Error response from daemon: pull access denied for a83fa38506a5, repository does not exist or may require 'docker login'
  Warning  Failed                 18s (x2 over 34s)  kubelet, 10.76.195.65  Error: ErrImagePull
  Normal   BackOff                4s (x2 over 33s)   kubelet, 10.76.195.65  Back-off pulling image "a83fa38506a5"
  Warning  Failed                 4s (x2 over 33s)   kubelet, 10.76.195.65  Error: ImagePullBackOff
...
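
With hindsight, a quick way to see exactly which image the deployment was asking the kubelet to pull ( a sketch, rather than something I ran at the time ) is to query the deployment spec: -

# Sketch: show the image reference baked into the wlp deployment
kubectl get deployment wlp -o jsonpath='{.spec.template.spec.containers[0].image}'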

Thankfully, after reading the Kubernetes "Hello Minikube" tutorial: -

This tutorial shows you how to run a simple Hello World Node.js app on Kubernetes using Minikube and Katacoda. Katacoda provides a free, in-browser Kubernetes environment.

I realised where I was going wrong ....

The image specified in the kubectl create deployment command was WAY too vague.

I deleted my deployment: -

kubectl delete deployment wlp

and then recreated it using the fully qualified image name ( registry/namespace/repository ): -

kubectl create deployment wlp --image=registry.ng.bluemix.net/davehay42/wlp
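
As a belt-and-braces check ( again, a sketch rather than part of my original run ), the generated Deployment can be previewed before anything is created, to confirm that the image reference is fully qualified: -

# Sketch: preview the Deployment manifest that kubectl will generate,
# without actually creating anything ( newer kubectl versions want --dry-run=client )
kubectl create deployment wlp --image=registry.ng.bluemix.net/davehay42/wlp --dry-run -o yaml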

I was then able to validate the deployment: -

kubectl get deployments

NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
wlp       1         1         1            1           16m

and validated the deployed pod: -

kubectl describe pod `kubectl get pods | grep wlp | awk '{print $1}'`

...
Events:
  Type    Reason                 Age   From                   Message
  ----    ------                 ----  ----                   -------
  Normal  Scheduled              17m   default-scheduler      Successfully assigned wlp-7865f8b77c-xlnf2 to 10.76.195.65
  Normal  SuccessfulMountVolume  17m   kubelet, 10.76.195.65  MountVolume.SetUp succeeded for volume "default-token-8znfb"
  Normal  Pulling                17m   kubelet, 10.76.195.65  pulling image "registry.ng.bluemix.net/davehay42/wlp"
  Normal  Pulled                 16m   kubelet, 10.76.195.65  Successfully pulled image "registry.ng.bluemix.net/davehay42/wlp"
  Normal  Created                16m   kubelet, 10.76.195.65  Created container
  Normal  Started                16m   kubelet, 10.76.195.65  Started container
...

I was then able to validate that Liberty was up-and-running: -

kubectl logs `kubectl get pods | grep wlp | awk '{print $1}'`

Launching defaultServer (WebSphere Application Server 19.0.0.1/wlp-1.0.24.cl190120190124-2339) on IBM J9 VM, version 8.0.5.27 - pxa6480sr5fp27-20190104_01(SR5 FP27) (en_US)
[AUDIT   ] CWWKE0001I: The server defaultServer has been launched.
[AUDIT   ] CWWKE0100I: This product is licensed for development, and limited production use. The full license terms can be viewed here: https://public.dhe.ibm.com/ibmdl/export/pub/software/websphere/wasdev/license/base_ilan/ilan/19.0.0.1/lafiles/en.html
[AUDIT   ] CWWKG0093A: Processing configuration drop-ins resource: /opt/ibm/wlp/usr/servers/defaultServer/configDropins/defaults/keystore.xml
[WARNING ] CWWKS3103W: There are no users defined for the BasicRegistry configuration of ID com.ibm.ws.security.registry.basic.config[basic].
[AUDIT   ] CWWKZ0058I: Monitoring dropins for applications.
[AUDIT   ] CWWKS4104A: LTPA keys created in 1.268 seconds. LTPA key file: /opt/ibm/wlp/output/defaultServer/resources/security/ltpa.keys
[AUDIT   ] CWPKI0803A: SSL certificate created in 2.690 seconds. SSL key file: /opt/ibm/wlp/output/defaultServer/resources/security/key.jks
[AUDIT   ] CWWKI0001I: The CORBA name server is now available at corbaloc:iiop:localhost:2809/NameService.
[AUDIT   ] CWWKF0012I: The server installed the following features: [beanValidation-2.0, servlet-4.0, ssl-1.0, jndi-1.0, jca-1.7, cdi-2.0, jdbc-4.2, jms-2.0, ejbPersistentTimer-3.2, appSecurity-3.0, appSecurity-2.0, j2eeManagement-1.1, wasJmsServer-1.0, javaMail-1.6, jaxrs-2.1, webProfile-8.0, jpa-2.2, jcaInboundSecurity-1.0, jsp-2.3, jsonb-1.0, ejbLite-3.2, managedBeans-1.0, jsf-2.3, ejbHome-3.2, jaxws-2.2, jsonp-1.1, jaxrsClient-2.1, el-3.0, concurrent-1.0, appClientSupport-1.0, ejbRemote-3.2, jaxb-2.2, mdb-3.2, jacc-1.5, javaee-8.0, batch-1.0, ejb-3.2, jpaContainer-2.2, jaspic-1.1, distributedMap-1.0, websocket-1.1, wasJmsSecurity-1.0, wasJmsClient-2.0].
[AUDIT   ] CWWKF0011I: The server defaultServer is ready to run a smarter planet.

which is a good sign.

I could then open a command prompt ( shell ) on the WLP container within the pod: -

kubectl exec -i -t `kubectl get pods | grep wlp | awk '{print $1}'` /bin/bash

default@wlp-7865f8b77c-xlnf2:/$ 

and run a WLP command: -

/opt/ibm/wlp/bin/server version

WebSphere Application Server 19.0.0.1 (1.0.24.cl190120190124-2339) on IBM J9 VM, version 8.0.5.27 - pxa6480sr5fp27-20190104_01(SR5 FP27) (en_US)
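
As an aside, the same check can be run non-interactively from outside the pod, along these lines ( a sketch ): -

# Sketch: run the Liberty "server version" command via kubectl exec,
# without opening an interactive shell
kubectl exec `kubectl get pods | grep wlp | awk '{print $1}'` -- /opt/ibm/wlp/bin/server version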

and examine the WLP server.xml file: -

cat /opt/ibm/wlp/usr/servers/defaultServer/server.xml

<server description="new server">

    <featureManager>
        <feature>javaee-8.0</feature>
    </featureManager>

    <httpEndpoint id="defaultHttpEndpoint"
                  host="*"
                  httpPort="9080"
                  httpsPort="9443" />

</server>

This means that we have a pod deployed, hosting the WebSphere Liberty Profile container, and that WLP is looking clean-and-green.

I then used kubectl cp to copy a JEE web application ( Ferret ) into the WLP container in the pod: -

kubectl cp /tmp/ferret-1.2.war `kubectl get pods | grep wlp | awk '{print $1}'`:/opt/ibm/wlp/usr/servers/defaultServer/dropins/

and validated that it started: -

kubectl logs `kubectl get pods | grep wlp | awk '{print $1}'`

....
[AUDIT   ] CWWKF0011I: The server defaultServer is ready to run a smarter planet.
[AUDIT   ] CWWKT0016I: Web application available (default_host): http://wlp-7865f8b77c-xlnf2:9080/ferret/
[AUDIT   ] CWWKZ0001I: Application ferret-1.2 started in 1.414 seconds.
...
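
Another way to confirm that the WAR actually landed in the dropins directory, independent of the logs ( a sketch, not part of my original run ), is to list it via kubectl exec: -

# Sketch: confirm the copied WAR is sitting in Liberty's dropins directory
kubectl exec `kubectl get pods | grep wlp | awk '{print $1}'` -- ls -l /opt/ibm/wlp/usr/servers/defaultServer/dropins/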

I then created a nodeport service: -

kubectl create service nodeport wlp --tcp=80:9080

and retrieved the node details: -

kubectl describe node `kubectl get nodes | grep -i iks|awk '{print $1}'`
...
Addresses:
  InternalIP:  10.76.195.65
  ExternalIP:  173.193.82.117
  Hostname:    10.76.195.65
...

and the newly created nodeport service: -

kubectl get services

...
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   172.21.0.1      <none>        443/TCP        1h
wlp          NodePort    172.21.67.96    <none>        80:31135/TCP   23m
...
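
For the record, the service generated by kubectl create service nodeport should carry a selector of app=wlp, matching the label that kubectl create deployment applied to the pod; a quick ( sketched ) sanity check: -

# Sketch: confirm that the service's selector lines up with the pod's label
kubectl get service wlp -o jsonpath='{.spec.selector}'
kubectl get pods -l app=wlp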

Using the combination of the external IP ( 173.193.82.117 ) and the generated node port ( 31135 ), I was then able to access Liberty: -

http://173.193.82.117:31135/


AND the Ferret application: -

http://173.193.82.117:31135/ferret/
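
Rather than reading the IP and port off the describe / get output by hand, the same URL can be derived with a couple of jsonpath queries ( a sketch, assuming a single worker node and the wlp service created above ): -

# Sketch: derive the external URL from the cluster itself
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="ExternalIP")].address}')
NODE_PORT=$(kubectl get service wlp -o jsonpath='{.spec.ports[0].nodePort}')
curl -s "http://${NODE_IP}:${NODE_PORT}/ferret/" | head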


So, to summarise, we've taken a Docker image from the official repo ( https://hub.docker.com/_/websphere-liberty ), tagged it to make it unique to us, pushed it to the registry associated with a newly created IBM Kubernetes Service (IKS) cluster, created a deployment ( deploying the container to a pod on a node ), created a service to expose the Liberty server's port 9080, and accessed Liberty via the web UI.

We also showed how one can use kubectl cp and kubectl exec to access the internals of the running container, similar to the way that docker cp and docker exec work.

Nice.

