Wednesday, 13 February 2019

Following up ... defining K8S Services using YAML

As a follow-up to this: -

Playing with Kubernetes deployments and NodePort services

life is SO much easier if I choose to define the service using YAML ( YAML Ain't Markup Language ).

So this is what I ended up with: -

cat ~/Desktop/nginx.yaml
 
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  namespace: default
  labels:
    app: nginx
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: nginx

remember that YAML is very positional and, apparently, tabs are abhorrent :-)
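
A quick way to catch indentation mistakes before they bite - assuming the client-side --dry-run flag that kubectl offers - is to ask kubectl to parse and validate the file without actually creating anything: -

kubectl apply -f ~/Desktop/nginx.yaml --dry-run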

Having created the YAML - and validated it, using various linting plugins for Atom - I was then able to apply it: -

kubectl apply -f ~/Desktop/nginx.yaml

service "my-nginx" created

and then validate: -

kubectl get services

NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   172.21.0.1      <none>        443/TCP        14d
my-nginx     NodePort    172.21.24.197   <none>        80:31665/TCP   7s

and then test: -

curl http://192.168.132.131:31665

Welcome to nginx!

If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.

For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.

Thank you for using nginx.

For reference, one can get YAML or JSON out of most, but not all, of the K8S commands: -

kubectl get services -o yaml

apiVersion: v1
items:
- apiVersion: v1
  kind: Service
  metadata:
    creationTimestamp: 2019-01-29T16:18:07Z
    labels:
      component: apiserver
      provider: kubernetes
    name: kubernetes
    namespace: default
    resourceVersion: "33"
    selfLink: /api/v1/namespaces/default/services/kubernetes
    uid: 76a02dea-23e1-11e9-b35e-2a02ed9d765d
  spec:
    clusterIP: 172.21.0.1
    ports:
    - name: https
      port: 443
      protocol: TCP
      targetPort: 2040
    sessionAffinity: None
    type: ClusterIP
  status:
    loadBalancer: {}
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"nginx"},"name":"my-nginx","namespace":"default"},"spec":{"ports":[{"name":"http","port":80,"protocol":"TCP","targetPort":80}],"selector":{"app":"nginx"},"type":"NodePort"}}
    creationTimestamp: 2019-02-13T11:40:09Z
    labels:
      app: nginx
    name: my-nginx
    namespace: default
    resourceVersion: "2072491"
    selfLink: /api/v1/namespaces/default/services/my-nginx
    uid: 1da46471-2f84-11e9-9f99-1201bf98c5fb
  spec:
    clusterIP: 172.21.24.197
    externalTrafficPolicy: Cluster
    ports:
    - name: http
      nodePort: 31665
      port: 80
      protocol: TCP
      targetPort: 80
    selector:
      app: nginx
    sessionAffinity: None
    type: NodePort
  status:
    loadBalancer: {}
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

kubectl get services -o json

{
    "apiVersion": "v1",
    "items": [
        {
            "apiVersion": "v1",
            "kind": "Service",
            "metadata": {
                "creationTimestamp": "2019-01-29T16:18:07Z",
                "labels": {
                    "component": "apiserver",
                    "provider": "kubernetes"
                },
                "name": "kubernetes",
                "namespace": "default",
                "resourceVersion": "33",
                "selfLink": "/api/v1/namespaces/default/services/kubernetes",
                "uid": "76a02dea-23e1-11e9-b35e-2a02ed9d765d"
            },
            "spec": {
                "clusterIP": "172.21.0.1",
                "ports": [
                    {
                        "name": "https",
                        "port": 443,
                        "protocol": "TCP",
                        "targetPort": 2040
                    }
                ],
                "sessionAffinity": "None",
                "type": "ClusterIP"
            },
            "status": {
                "loadBalancer": {}
            }
        },
        {
            "apiVersion": "v1",
            "kind": "Service",
            "metadata": {
                "annotations": {
                    "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"nginx\"},\"name\":\"my-nginx\",\"namespace\":\"default\"},\"spec\":{\"ports\":[{\"name\":\"http\",\"port\":80,\"protocol\":\"TCP\",\"targetPort\":80}],\"selector\":{\"app\":\"nginx\"},\"type\":\"NodePort\"}}\n"
                },
                "creationTimestamp": "2019-02-13T11:40:09Z",
                "labels": {
                    "app": "nginx"
                },
                "name": "my-nginx",
                "namespace": "default",
                "resourceVersion": "2072491",
                "selfLink": "/api/v1/namespaces/default/services/my-nginx",
                "uid": "1da46471-2f84-11e9-9f99-1201bf98c5fb"
            },
            "spec": {
                "clusterIP": "172.21.24.197",
                "externalTrafficPolicy": "Cluster",
                "ports": [
                    {
                        "name": "http",
                        "nodePort": 31665,
                        "port": 80,
                        "protocol": "TCP",
                        "targetPort": 80
                    }
                ],
                "selector": {
                    "app": "nginx"
                },
                "sessionAffinity": "None",
                "type": "NodePort"
            },
            "status": {
                "loadBalancer": {}
            }
        }
    ],
    "kind": "List",
    "metadata": {
        "resourceVersion": "",
        "selfLink": ""
    }
}

which is nice.
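
Even nicer, one can pull out a single field, rather than the whole document, using the jsonpath output format - a quick sketch, grabbing the NodePort from earlier: -

kubectl get service my-nginx -o jsonpath='{.spec.ports[0].nodePort}'

31665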

Playing with Kubernetes deployments and NodePort services

Today I'm fiddling with nginx as a workload on my IBM Kubernetes Service (IKS) cluster.

My default process was this: -

kubectl create deployment nginx --image=nginx

...
deployment.extensions "nginx" created
...

kubectl get deployments

...
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx     1         1         1            1           20s
...

kubectl describe pod `kubectl get pods | grep nginx | awk '{print $1}'`

...
Events:
  Type     Reason             Age                From                     Message
  ----     ------             ----               ----                     -------
  Normal   Scheduled          39m                default-scheduler        Successfully assigned default/nginx-78f5d695bd-jxfvm to 192.168.132.123
...

kubectl create service nodeport nginx --tcp=80:80

...
service "nginx" created
...

kubectl get services

...
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   172.21.0.1       <none>        443/TCP        14d
nginx        NodePort    172.21.113.167   <none>        80:30923/TCP   20s
...

kubectl get nodes

...
NAME              STATUS    ROLES     AGE       VERSION
192.168.132.123   Ready     <none>    13d       v1.11.5
...

and then combine the public IP address of the node ( 192.168.132.123 ) and the generated NodePort ( 30923 ) to allow me to access nginx: -

curl http://192.168.132.123:30923

...
Welcome to nginx!

If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.

For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.

Thank you for using nginx.
...

I also realised that I could "debug" the pod hosting the nginx service: -

docker ps -a

...
CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS               NAMES
fda90a370446        nginx                  "nginx -g 'daemon of…"   19 seconds ago      Up 18 seconds                           k8s_nginx_nginx-78f5d695bd-8vrgl_default_d39dad7e-2f79-11e9-9f99-1201bf98c5fb_0
...

docker logs fda90a370446

...
10.23.2.59 - - [13/Feb/2019:10:30:42 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.54.0" "-"
...
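
Worth noting that the docker commands only work if one is logged on to the worker node that's actually hosting the container. The more portable route - this being standard kubectl, run from anywhere with access to the cluster - is to ask K8S for the same logs: -

kubectl logs `kubectl get pods | grep nginx | awk '{print $1}'`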

However, I also "discovered" that there seems to be a correlation between the NAME of the NodePort service and the NAME of the nginx deployment.

If I create a NodePort service with a different name: -

kubectl create service nodeport foobar --tcp=80:80

and then get the newly created service: -

kubectl get services

...
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
foobar       NodePort    172.21.0.34   <none>        80:30952/TCP   1m
kubernetes   ClusterIP   172.21.0.1    <none>        443/TCP        14d
...

I'm no longer able to hit nginx: -

curl http://192.168.132.123:30952

returns: -

...
curl: (7) Failed to connect to 192.168.132.123 port 30952: Connection refused
...

If I delete the service: -

kubectl delete service foobar

...
service "foobar" deleted
...

and recreate it with the SAME name as the deployment: -

kubectl create service nodeport nginx --tcp=80:80

...
service "nginx" created
...

I'm back in the game: -

kubectl get services

...
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   172.21.0.1     <none>        443/TCP        14d
nginx        NodePort    172.21.31.44   <none>        80:30281/TCP   33s
...

curl http://192.168.132.123:30281

...
Welcome to nginx!

If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.

For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.

Thank you for using nginx.
...

So there does APPEAR to be a correlation between the deployment name and the service name.

Obviously, K8S provides labelling for this, but I didn't believe that was applicable to a NodePort service.

However, there is a different way ....

It is possible to expose an existing deployment and create a NodePort service "on the fly", as per the following: -

kubectl expose deployment nginx --type=NodePort --name=my-nginx --port 80

This creates a NodePort service: -

kubectl get services

...
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   172.21.0.1     <none>        443/TCP        14d
my-nginx     NodePort    172.21.8.192   <none>        80:30628/TCP   39s
...

NOTE that the name of the service - my-nginx - does NOT tie up with the name of the deployment per se, and yet ....

curl http://192.168.132.123:30281

...
Welcome to nginx!

If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.

For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.

Thank you for using nginx.
...

If I dig into the newly created service: -

kubectl describe service my-nginx

Name:                     my-nginx
Namespace:                default
Labels:                   app=nginx
Annotations:              <none>
Selector:                 app=nginx
Type:                     NodePort
IP:                       172.21.8.192
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  30628/TCP
Endpoints:                172.30.148.204:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

I can see that it's tagged against the deployment - nginx - using the Label and Selector; I suspect that it's the latter that made the difference.
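
One way to prove that it's the Selector doing the work is to look at the Endpoints object that K8S maintains for each service - if the selector matches a running pod, that pod's IP shows up. A quick check ( the AGE column here is illustrative; the endpoint IP is the one from the describe output above ): -

kubectl get endpoints my-nginx

NAME       ENDPOINTS           AGE
my-nginx   172.30.148.204:80   1m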

If I revert to my previous service: -

kubectl create service nodeport nginx --tcp=80:80

...
service "nginx" created
...

and dig into it: -

kubectl describe service nginx

Name:                     nginx
Namespace:                default
Labels:                   app=nginx
Annotations:              <none>
Selector:                 app=nginx
Type:                     NodePort
IP:                       172.21.68.125
Port:                     80-80  80/TCP
TargetPort:               80/TCP
NodePort:                 80-80  31411/TCP
Endpoints:                172.30.148.204:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

So the name of the service IS used to generate the label and selector ( app=nginx ) that tie the service to the target deployment.

If I misname my service: -

kubectl create service nodeport foobar --tcp=80:80

service "foobar" created

kubectl describe service foobar

Name:                     foobar
Namespace:                default
Labels:                   app=foobar
Annotations:              <none>
Selector:                 app=foobar
Type:                     NodePort
IP:                       172.21.232.250
Port:                     80-80  80/TCP
TargetPort:               80/TCP
NodePort:                 80-80  32146/TCP
Endpoints:                <none>
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

which explains why this does not work: -

curl http://192.168.132.123:32146

In other words, the name of the service DOES MATTER, but only where one explicitly creates the service, as opposed to letting kubectl expose deployment derive the selector for one.
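
As a footnote, one needn't delete and recreate a mis-named service; patching its selector to match the deployment's label should do the trick. This is a sketch, using the strategic-merge syntax that kubectl patch supports: -

kubectl patch service foobar -p '{"spec":{"selector":{"app":"nginx"}}}'

after which the foobar service gains an Endpoint, and the earlier curl should start working.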


Tuesday, 12 February 2019

Bringing it together ... Docker and AWK

Building on two earlier posts: -

AWK - It's my "new" best friend forever ....

Aide Memoire - Docker Tinkerings
here's a quick spot of tidying up completed/unwanted Docker containers, using the power of AWK: -

docker rm `docker ps -a|grep Exit|awk '{print $1}'`

So we're executing the result of: -

docker ps -a|grep Exit

e717fa412d6a        hello-world         "/hello"                 19 seconds ago      Exited (0) 18 seconds ago                          romantic_chatterjee
761e4ec73bb5        docker/whalesay     "cowsay 'Awwwww, tha…"   About an hour ago   Exited (0) About an hour ago                       jolly_sanderson
950cd9684ee6        docker/whalesay     "cowsay 'Aww Shucks …"   About an hour ago   Exited (0) About an hour ago                       clever_kare
d0d444d6f4f1        docker/whalesay     "cowsay Aww"             About an hour ago   Exited (0) About an hour ago                       pedantic_dirac
5c20a293793f        docker/whalesay     "cowsay Awwww Shucks…"   About an hour ago   Exited (0) About an hour ago                       dreamy_bell
d67c6d05cc24        docker/whalesay     "cowsay Awwww Shucks"    About an hour ago   Exited (0) About an hour ago                       quizzical_curran
7507f0df1db9        docker/whalesay     "cowsay Awwww"           About an hour ago   Exited (0) About an hour ago                       lucid_wright
77997983c07b        docker/whalesay     "cowsay"                 About an hour ago   Exited (0) About an hour ago                       festive_nobel
1b34cc7f227d        docker/whalesay     "cowsay boo"             About an hour ago   Exited (0) About an hour ago                       eager_khorana
517237d924bf        docker/whalesay     "cowsay boo"             27 hours ago        Exited (0) 27 hours ago                            naughty_ramanujan
6b6bb56464a9        docker/whalesay     "/bin/bash"              27 hours ago        Exited (0) 27 hours ago                            dreamy_gauss
3d0a99dfd9a4        hello-world         "/hello"                 27 hours ago        Exited (0) 27 hours ago                            frosty_hypatia

In other words, a list of all the containers in the Exited status .....

... and then parsing the returned list to ONLY give us the container ID: -

docker ps -a|grep Exit|awk '{print $1}'

e717fa412d6a
761e4ec73bb5
950cd9684ee6
d0d444d6f4f1
5c20a293793f
d67c6d05cc24
7507f0df1db9
77997983c07b
1b34cc7f227d
517237d924bf
6b6bb56464a9
3d0a99dfd9a4

Note that the grep pulls out every line containing the word "Exit", thus ignoring the header line: -

CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                         PORTS               NAMES

We can then feed that into AWK to give us the container IDs.

We then use the power of the back tick ( ` ) to allow us to run the docker rm command over the output from the previous command: -

docker rm `docker ps -a|grep Exit|awk '{print $1}'`

e717fa412d6a
761e4ec73bb5
950cd9684ee6
d0d444d6f4f1
5c20a293793f
d67c6d05cc24
7507f0df1db9
77997983c07b
1b34cc7f227d
517237d924bf
6b6bb56464a9
3d0a99dfd9a4

which clears the field: -

docker ps -a

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

Nice !
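
As an aside, docker can do the filtering itself, which saves the grep and the awk - the --filter and -q ( quiet ) flags do the same job: -

docker rm `docker ps -aq --filter status=exited`

or, more brutally ( it prompts for confirmation ): -

docker container prune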

Ooops, broke my Red Hat

I had a brief issue with a Red Hat Enterprise Linux (RHEL) 7.6 VM this AM.

For no obvious reason (!), the VM kept booting into so-called Emergency Mode: -

Emergency mode provides the minimal bootable environment and allows you to repair your system even in situations when rescue mode is unavailable. In emergency mode, the system mounts only the root file system, and it is mounted as read-only. Also, the system does not activate any network interfaces and only a minimum of the essential services are set up. The system does not load any init scripts, therefore you can still mount file systems to recover data that would be lost during a re-installation if init is corrupted or not working.

33.3. Emergency Mode

Thankfully, I was prompted to look at the logs: -

journalctl -xb

which, after a few pages, showed this: -

[ screenshot of the journalctl output, showing the failed mount ]

which reminded me of yesterday's post, where I had been "playing" with file-systems, mount points and permissions.

I checked the mount points: -

cat /etc/fstab

[ screenshot of the /etc/fstab contents ]

Yep, I'd attempted to mount, as /snafu, a since-deleted directory - /foobar - and wondered why things didn't work :-)
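
For illustration - this is a reconstruction rather than the actual line, and the bind mount is an assumption on my part - the offending entry would have looked something like this: -

# bind-mount the ( since-deleted ) /foobar directory onto /snafu
/foobar    /snafu    none    bind    0 0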

Once I removed the entry from /etc/fstab and rebooted, all was well.
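
For next time, a gentler way to validate /etc/fstab changes - rather than letting the next reboot find them - is to ask mount to process every entry in the file: -

mount -a

which complains there and then about a broken entry, instead of dropping the system into Emergency Mode at boot.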

Two morals of the story: -
  1. Don't fiddle
  2. Watch the logs
😅😅

Monday, 11 February 2019

Bash - Conditions and loops

Again, an aide memoire

A Bash script that tests for a single argument, and moans if none is found: -

cat plob.sh 

#!/bin/bash

if [ -z "$1" ]
  then
    echo "For what product are you creating this ?"
    exit 1
  else
    echo "$1"
fi

With no argument ....

./plob.sh 

For what product are you creating this ?

With one argument ...

./plob.sh BPM

BPM

With multiple arguments ....

./plob.sh BPM ODM

BPM

In other words, only the first argument is used ...
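
If one did want to handle every argument, a loop over "$@" - quoted, so that arguments containing spaces survive intact - does the trick; a minimal sketch: -

#!/bin/bash
# echo each argument on its own line
for arg in "$@"
do
  echo "$arg"
done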

A Bash script that tests for TWO arguments: -

cat plab.sh 

#!/bin/bash
if [ $# -eq 2 ]
then
  echo "Nice arguments"
else
  echo "More arguments, please"
  exit 1
fi

With no argument ...

./plab.sh 

More arguments, please

With one argument ...

./plab.sh BPM

More arguments, please

With two arguments ...

./plab.sh BPM ODM

Nice arguments

A Bash script that tests that BOTH arguments are present: -

cat plib.sh 

#!/bin/bash
echo $#
if [ -z "$1" ] || [ -z "$2" ]
then
        echo "Usage: Two arguments please"
        exit 1
else
        echo "$1" "$2"
fi

( note the || - OR - such that the usage message appears if EITHER argument is missing )

With no argument ...

./plib.sh 

0
Usage: Two arguments please

./plib.sh BPM

1
Usage: Two arguments please

./plib.sh BPM ODM

2
BPM ODM

So, for reference, we're using the following: -

$# - the number of arguments
$1 - the first argument
$2 - the second argument
-z - tests whether a string is empty
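
Pulling those together - a minimal sketch, with the ${1:-...} default-value expansion being standard Bash rather than anything exotic: -

#!/bin/bash
# report how many arguments arrived, and what they were
echo "Count: $#"
echo "First: ${1:-not supplied}"
echo "Second: ${2:-not supplied}"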

From a looping perspective, I've used this one many times before ....

A Bash script that unpacks some TAR files: -

for i in /tmp/*.tar.gz; do tar xvzf "$i" -C /tmp/snafu; done

This is what we started with: -

ls -al *.tar.gz

-rw-r--r--  1 hayd  wheel  135 11 Feb 16:22 billing.tar.gz
-rw-r--r--  1 hayd  wheel  154 11 Feb 16:22 docs.tar.gz
-rw-r--r--  1 hayd  wheel  138 11 Feb 16:22 preso.tar.gz

and this is where we ended up: -

ls -al /tmp/snafu/

total 0
drwxr-xr-x   6 hayd  wheel  192 11 Feb 16:23 .
drwxrwxrwt  21 root  wheel  672 11 Feb 16:22 ..
-rw-r--r--   1 hayd  wheel    0 11 Feb 16:22 Expenses.xls
-rw-r--r--   1 hayd  wheel    0 11 Feb 16:21 Journal.doc
-rw-r--r--   1 hayd  wheel    0 11 Feb 16:22 Presentation.ppt
-rw-r--r--   1 hayd  wheel    0 11 Feb 16:21 Readme.doc

Final thing: with tar, we use -C to specify the target directory; with unzip, we use -d to do the same.
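
For completeness, the unzip equivalent of the tar loop above - a sketch, assuming the ZIP files also live in /tmp - would be: -

for i in /tmp/*.zip; do unzip "$i" -d /tmp/snafu; done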

Aide Memoire - Docker Tinkerings

Writing it down here so that I don't forget .....

A quick run-down of some of my most useful Docker commands: -

See what's running

docker ps -a

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

See what images I have

docker images

REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE

Run - and pull, if needed - a Docker image

docker run hello-world

Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
1b930d010525: Pull complete 
Digest: sha256:2557e3c07ed1e38f26e389462d03ed943586f744621577a99efb77324b0fe535
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

See what images I have

docker images

REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
hello-world         latest              fce289e99eb9        5 weeks ago         1.84kB

See what images I have - without truncation

docker images --no-trunc

REPOSITORY          TAG                 IMAGE ID                                                                  CREATED             SIZE

hello-world         latest              sha256:fce289e99eb9bca977dae136fbe2a82b6b7d4c372474c9235adc1741675f587e   5 weeks ago         1.84kB

See what's running

docker ps -a

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                      PORTS               NAMES
3d0a99dfd9a4        hello-world         "/hello"            16 seconds ago      Exited (0) 15 seconds ago                       frosty_hypatia

See what's running - without truncation

docker ps -a --no-trunc

CONTAINER ID                                                       IMAGE               COMMAND             CREATED             STATUS                      PORTS               NAMES
3d0a99dfd9a47cd9e336d56ab2fc83bd78ca80978d971dc276946041fdb40995   hello-world         "/hello"            21 seconds ago      Exited (0) 19 seconds ago                       frosty_hypatia

See what's under the hood - in terms of layers

docker history fce289e99eb9

IMAGE               CREATED             CREATED BY                                      SIZE                COMMENT
fce289e99eb9        5 weeks ago         /bin/sh -c #(nop)  CMD ["/hello"]               0B                  
<missing>           5 weeks ago         /bin/sh -c #(nop) COPY file:f77490f70ce51da2…   1.84kB

And with a slightly (!) more advanced Docker image: -

docker run docker/whalesay cowsay boo

Unable to find image 'docker/whalesay:latest' locally
latest: Pulling from docker/whalesay
e190868d63f8: Pull complete 
909cd34c6fd7: Pull complete 
0b9bfabab7c1: Pull complete 
a3ed95caeb02: Pull complete 
00bf65475aba: Pull complete 
c57b6bcc83e3: Pull complete 
8978f6879e2f: Pull complete 
8eed3712d2cf: Pull complete 
Digest: sha256:178598e51a26abbc958b8a2e48825c90bc22e641de3d31e18aaf55f3258ba93b
Status: Downloaded newer image for docker/whalesay:latest

 _____ 
< boo >
 ----- 
    \
     \
      \     
                    ##        .            
              ## ## ##       ==            
           ## ## ## ##      ===            
       /""""""""""""""""___/ ===        
  ~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ /  ===- ~~~   
       \______ o          __/            
        \    \        __/             
          \____\______/   


docker images

REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
hello-world         latest              fce289e99eb9        5 weeks ago         1.84kB
docker/whalesay     latest              6b362a9f73eb        3 years ago         247MB

docker history 6b362a9f73eb

IMAGE               CREATED             CREATED BY                                      SIZE                COMMENT
6b362a9f73eb        3 years ago         /bin/sh -c #(nop) ENV PATH=/usr/local/bin:/u…   0B                  
<missing>           3 years ago         /bin/sh -c sh install.sh                        30.4kB
<missing>           3 years ago         /bin/sh -c git reset --hard origin/master       43.3kB
<missing>           3 years ago         /bin/sh -c #(nop) WORKDIR /cowsay               0B
<missing>           3 years ago         /bin/sh -c git clone https://github.com/moxi…   89.9kB
<missing>           3 years ago         /bin/sh -c apt-get -y update && apt-get inst…   58.6MB
<missing>           3 years ago         /bin/sh -c #(nop) CMD ["/bin/bash"]             0B
<missing>           3 years ago         /bin/sh -c sed -i 's/^#\s*\(deb.*universe\)$…   1.9kB
<missing>           3 years ago         /bin/sh -c echo '#!/bin/sh' > /usr/sbin/poli…   195kB
<missing>           3 years ago         /bin/sh -c #(nop) ADD file:f4d7b4b3402b5c53f…   188MB

docker inspect 6b362a9f73eb

[
    {
        "Id": "sha256:6b362a9f73eb8c33b48c95f4fcce1b6637fc25646728cf7fb0679b2da273c3f4",
        "RepoTags": [
            "docker/whalesay:latest"
        ],
        "RepoDigests": [
            "docker/whalesay@sha256:178598e51a26abbc958b8a2e48825c90bc22e641de3d31e18aaf55f3258ba93b"
        ],
        "Parent": "",
        "Comment": "",
        "Created": "2015-05-25T22:04:23.303454458Z",
        "Container": "5460b2353ce4e2b3e3e81b4a523a61c5adc238ae21d3ec3a5774674652e6317f",
        "ContainerConfig": {
            "Hostname": "9ec8c01a6a48",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
            ],
            "Cmd": [
                "/bin/sh",
                "-c",
                "#(nop) ENV PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
            ],
            "Image": "5d5bd9951e26ca0301423625b19764bda914ae39c3f2bfd6f1824bf5354d10ee",
            "Volumes": null,
            "WorkingDir": "/cowsay",
            "Entrypoint": null,
            "OnBuild": [],
            "Labels": {}
        },
        "DockerVersion": "1.6.0",
        "Author": "",
        "Config": {
            "Hostname": "9ec8c01a6a48",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
            ],
            "Cmd": [
                "/bin/bash"
            ],
            "Image": "5d5bd9951e26ca0301423625b19764bda914ae39c3f2bfd6f1824bf5354d10ee",
            "Volumes": null,
            "WorkingDir": "/cowsay",
            "Entrypoint": null,
            "OnBuild": [],
            "Labels": {}
        },
        "Architecture": "amd64",
        "Os": "linux",
        "Size": 247049019,
        "VirtualSize": 247049019,
        "GraphDriver": {
            "Data": {
                "LowerDir": "/var/lib/docker/overlay2/793768003cdb0f4c4b76627e2fdaef447af26a570f293c9cd0efd4e93c980cea/diff:/var/lib/docker/overlay2/2b675a0417a328024a4d12480cbc8d59655c3fba039c870057cd37537e5bd05b/diff:/var/lib/docker/overlay2/2e2deecb066dd3582e77ffe750975cde90e45c77cce842be439e0ca5c4f0d0f6/diff:/var/lib/docker/overlay2/6f5991826733cb88f5e356deb6807ffb58160072cef0db687a5d9820bc7e3ba0/diff:/var/lib/docker/overlay2/84adea855c8ed41a82c005e381d5a3d31f9b5a84579fb0bc644d707c90f54dd3/diff:/var/lib/docker/overlay2/95b856d151d6253bc8311c4ab6908893274e7162517ac265e739a544da9f9190/diff:/var/lib/docker/overlay2/d3bfc7095c61ef75f4943fe3f9926c275b30a434e5af0dd276b2b635834b4467/diff:/var/lib/docker/overlay2/d0aa0bcc22e6a4bd5511d9e7933869cbcc4794087fa1520f77199597328a4f5c/diff:/var/lib/docker/overlay2/2ba2e03e60d5e8e0c84fcb59941417607ee78265f2545fe2cb4a489dd1dbd186/diff",
                "MergedDir": "/var/lib/docker/overlay2/db32366024b39eb798f94baa6a84ca982ecbf7c575bed43aecc32ec137ebb180/merged",
                "UpperDir": "/var/lib/docker/overlay2/db32366024b39eb798f94baa6a84ca982ecbf7c575bed43aecc32ec137ebb180/diff",
                "WorkDir": "/var/lib/docker/overlay2/db32366024b39eb798f94baa6a84ca982ecbf7c575bed43aecc32ec137ebb180/work"
            },
            "Name": "overlay2"
        },
        "RootFS": {
            "Type": "layers",
            "Layers": [
                "sha256:1154ba695078d29ea6c4e1adb55c463959cd77509adf09710e2315827d66271a",
                "sha256:528c8710fd95f61d40b8bb8a549fa8dfa737d9b9c7c7b2ae55f745c972dddacd",
                "sha256:37ee47034d9b78f10f0c5ce3a25e6b6e58997fcadaf5f896c603a10c5f35fb31",
                "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
                "sha256:b26122d57afa5c4a2dc8db3f986410805bc8792af3a4fa73cfde5eed0a8e5b6d",
                "sha256:091abc5148e4d32cecb5522067509d7ffc1e8ac272ff75d2775138639a6c50ca",
                "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
                "sha256:d511ed9e12e17ab4bfc3e80ed7ce86d4aac82769b42f42b753a338ed9b8a566d",
                "sha256:d061ee1340ecc8d03ca25e6ca7f7502275f558764c1ab46bd1f37854c74c5b3f",
                "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef"
            ]
        },
        "Metadata": {
            "LastTagTime": "0001-01-01T00:00:00Z"
        }
    }
]
