Wednesday 27 February 2019

Kubernetes tooling - tinkering with versions

Having built a new Kubernetes cluster on the IBM Kubernetes Service (IKS), which reports as version 1.11.7_1543 within the IKS dashboard: -

https://cloud.ibm.com/containers-kubernetes/clusters/

I'd noticed that the kubectl tool was out-of-sync with the cluster itself: -

kubectl version

Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:38:32Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.7+IKS", GitCommit:"498bc5434e4bdc2dafddf57b2e8496f1cbd054bc", GitTreeState:"clean", BuildDate:"2019-02-01T08:10:15Z", GoVersion:"go1.10.7", Compiler:"gc", Platform:"linux/amd64"}
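
( As an aside, a quicker way to compare the two - assuming a reasonably recent kubectl - is the --short flag: -

kubectl version --short

which reports just the client and server versions, one per line )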

Initially, I assumed (!) that kubectl was managed by the IBM Cloud CLI plug-ins: -

Setting up the CLI and API

and checked my plugins: -

ibmcloud plugin list

Listing installed plug-ins...

Plugin Name                            Version   Status   
cloud-functions/wsk/functions/fn       1.0.29       
container-registry                     0.1.368      
container-service/kubernetes-service   0.2.53    Update Available   
dev                                    2.1.15       
sdk-gen                                0.1.12       

This appeared to confirm my suspicion, so I updated the IKS plugin: -

ibmcloud plugin update kubernetes-service

Plug-in 'container-service/kubernetes-service 0.2.53' was installed.
Checking upgrades for plug-in 'container-service/kubernetes-service' from repository 'IBM Cloud'...
Update 'container-service/kubernetes-service 0.2.53' to 'container-service/kubernetes-service 0.2.61'
Attempting to download the binary file...
 23.10 MiB / 23.10 MiB [=====================================================================================================================================================] 100.00% 9s
24224568 bytes downloaded
Updating binary...
OK
The plug-in was successfully upgraded.

ibmcloud plugin list

Listing installed plug-ins...

Plugin Name                            Version   Status   
sdk-gen                                0.1.12       
cloud-functions/wsk/functions/fn       1.0.29       
container-registry                     0.1.368      
container-service/kubernetes-service   0.2.61       
dev                                    2.1.15       

BUT kubectl continued to show as back-level: -

kubectl version

Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:38:32Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.7+IKS", GitCommit:"498bc5434e4bdc2dafddf57b2e8496f1cbd054bc", GitTreeState:"clean", BuildDate:"2019-02-01T08:10:15Z", GoVersion:"go1.10.7", Compiler:"gc", Platform:"linux/amd64"}

Therefore, I chose to reinstall kubectl ( specifically using Homebrew, as I'm running on macOS ): -

brew install kubernetes-cli

Updating Homebrew...
==> Auto-updated Homebrew!
Updated 1 tap (homebrew/core).
==> New Formulae
cafeobj                                       homeassistant-cli                             re-flex                                       riff
==> Updated Formulae
go ✔                cfengine            closure-compiler    couchdb             dartsim             dhex                fx                  node-build          pulumi
apache-arrow        cflow               cmark-gfm           cpprestsdk          davix               dialog              git-lfs             numpy               shadowsocks-libev
axel                cfr-decompiler      cointop             cproto              dcd                 diffoscope          godep               openssl@1.1         ship
azure-cli           chakra              collector-sidecar   crc32c              ddrescue            diffstat            grafana             pandoc-citeproc     siege
bzt                 check_postgres      conan               cryptominisat       deark               digdag              kube-ps1            passenger
calicoctl           checkstyle          configen            cscope              debianutils         elektra             kustomize           pgweb
cdk                 chkrootkit          consul-template     czmq                deja-gnu            fabio               libtensorflow       pre-commit
cdogs-sdl           cli53               coturn              darcs               deployer            flake8              nginx               protoc-gen-go

==> Downloading https://homebrew.bintray.com/bottles/kubernetes-cli-1.13.3.mojave.bottle.tar.gz
######################################################################## 100.0%
==> Pouring kubernetes-cli-1.13.3.mojave.bottle.tar.gz
Error: The `brew link` step did not complete successfully
The formula built, but is not symlinked into /usr/local
Could not symlink bin/kubectl
Target /usr/local/bin/kubectl
already exists. You may want to remove it:
  rm '/usr/local/bin/kubectl'

To force the link and overwrite all conflicting files:
  brew link --overwrite kubernetes-cli

To list all files that would be deleted:
  brew link --overwrite --dry-run kubernetes-cli

Possible conflicting files are:
/usr/local/bin/kubectl -> /Applications/Docker.app/Contents/Resources/bin/kubectl
==> Caveats
Bash completion has been installed to:
  /usr/local/etc/bash_completion.d

zsh completions have been installed to:
  /usr/local/share/zsh/site-functions
==> Summary
🍺  /usr/local/Cellar/kubernetes-cli/1.13.3: 207 files, 43.7MB

Notice that it did NOT link the newly installed kubectl, as one already existed at /usr/local/bin/kubectl ( courtesy of Docker for Mac ) :-)

So I chose to remove the existing kubectl: -

rm `which kubectl`

and then re-link: -

brew link kubernetes-cli

I then checked the version: -

kubectl version

Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-04T04:48:03Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.7+IKS", GitCommit:"498bc5434e4bdc2dafddf57b2e8496f1cbd054bc", GitTreeState:"clean", BuildDate:"2019-02-01T08:10:15Z", GoVersion:"go1.10.7", Compiler:"gc", Platform:"linux/amd64"}

so now kubectl is at a later version than the cluster ....

Let's see how it goes ....

*UPDATE*

I then read this: -

If you use a kubectl CLI version that does not match at least the major.minor version of your clusters, you might experience unexpected results. Make sure to keep your Kubernetes cluster and CLI versions up-to-date.

here: -

Setting up the CLI and API

and realised that the page actually includes a download link for the right major/minor version ( 1.11.7 ) of kubectl for macOS.

I downloaded this and replaced the existing version: -

mv ~/Downloads/kubectl  /usr/local/bin/
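
( Depending upon how the binary was downloaded, it may need the execute bit setting first - a quick: -

chmod +x /usr/local/bin/kubectl

does the trick )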

and then validated the versions: -

kubectl version

Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.7", GitCommit:"65ecaf0671341311ce6aea0edab46ee69f65d59e", GitTreeState:"clean", BuildDate:"2019-01-24T19:32:00Z", GoVersion:"go1.10.7", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.7+IKS", GitCommit:"498bc5434e4bdc2dafddf57b2e8496f1cbd054bc", GitTreeState:"clean", BuildDate:"2019-02-01T08:10:15Z", GoVersion:"go1.10.7", Compiler:"gc", Platform:"linux/amd64"}

which now match ( major/minor ).

Nice !

Friday 22 February 2019

End-to-End Hyper Protection of Data on the IBM Cloud

Finally ( for now ), there's this one: -

Learn how the IBM Cloud security portfolio is helping clients achieve continuous security for cloud applications while protecting data in every form—at rest, in transit and in use. You will see hands-on demos on: 1) IBM Hyper Protect Services - Key management backed by a dedicated cloud Hardware Security Module (HSM) for customers looking for complete control over data encryption keys and HSMs; 2) IBM Hyper Protect DBaaS - Industry-leading data confidentiality that allows data owners to have complete control over their data by preventing cloud operator access, backed by unparalleled vertical scale and performance; and 3) IBM Data Shield - Data in use protection for sensitive workloads.

https://myibm.ibm.com/events/think/all-sessions/session/7994A

Security and Flexibility in the IBM Cloud: A Deep-Dive into IBM Hyper Protect Services

And there's more: -

In the last few years, there have been a lot of large-scale data leaks by major companies revealed in the news. You wouldn't want this to be you. In this session, we’ll take a deep look into how IBM Hyper Protect Services can protect sensitive personal information and prevent against the attack vectors used to compromise these systems. We will take you through Hyper Protect DBaaS, offering secure databases on-demand, as well as Hyper Protect Crypto Services, providing secure cryptographic operations. Including demos and discussion, we'll see how new cloud services acting as always-encrypted platforms can help.

https://myibm.ibm.com/events/think/all-sessions/session/7629A

Protect Your Data and Code in the Cloud with IBM Hyper Protect Services

And from one of our Senior Technical Staff Members, Angel Nunez Mencias: -

Keeping your data and code protected while deployed in the cloud is not an easy task since the hardware and system administrators are not under your control. With IBM Hyper Protect Services, code and data is protected from any access by technology, so there is no need to trust external system admins. During this session, the chief architect of Hyper Protect will present the underlying technology and how the existing set of Hyper Protect Services leverage it today. He will also show how these services are used today to protect end-to-end solutions, and what additional services are being considered for the future.

https://myibm.ibm.com/events/think/all-sessions/session/3140A

Tech Talk: Leveraging IBM Z Security in the Cloud with IBM Cloud Hyper Protect Services

My IBM colleague, Dr Chris Poole, presented on this: -

Tech Talk: Leveraging IBM Z Security in the Cloud with IBM Cloud Hyper Protect Services 

at IBM Think last week.

His most excellent session is here: -

Thinking about how to make your cloud-based applications compliant with GDPR and other regulations? Need data-at-rest encryption but don’t want to refactor? Try IBM Hyper Protect Services! In this talk, you’ll learn about Hyper Protect Crypto Services for secure key storage, and Hyper Protect DBaaS for an encrypted MongoDB or PostgreSQL data layer.

https://myibm.ibm.com/events/think/all-sessions/session/8110A

Enjoy !

Wednesday 20 February 2019

MainframerZ meetup at Lloyds - Tuesday 19 March 2019 - See you there


Having recently moved into the IBM Z development organisation, as mentioned before: -

New day, new job - more of the same, but in a VERY good way

I'm now working in a new area, with a new ( to me ) technology, bringing what I know - Linux, Containers, Kubernetes etc.

And .... thanks to the MainframerZ community, I have the perfect opportunity to talk about what I do.

Lloyds Banking Group (LBG) have kindly invited us to bring the Community to them, hosting a Meet-up at their offices in London on Tuesday 19 March 2019.

I'm totally looking forward to meeting my new community, as I've got a HUGE amount to learn about the IBM Z and LinuxONE platforms, and the workloads that our clients are looking to host.

The details are on Meetup here: -

MainframerZ meetup at Lloyds


See you there 😀

Monday 18 February 2019

Following my previous post: -

Security Bulletin: IBM Cloud Kubernetes Service is affected by a privilege escalation vulnerability in runc 

I also needed to update Docker on the Mac, to mitigate the effect of: -

CVE-2019-5736

runc through 1.0-rc6, as used in Docker before 18.09.2 and other products, allows attackers to overwrite the host runc binary (and consequently obtain host root access) by leveraging the ability to execute a command as root within one of these types of containers: (1) a new container with an attacker-controlled image, or (2) an existing container, to which the attacker previously had write access, that can be attached with docker exec. This occurs because of file-descriptor mishandling, related to /proc/self/exe. 

It's a fairly large update, but it's definitely worth doing.
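
( For the record, a quick way to check whether the local engine is at a fixed level - 18.09.2 or later: -

docker version --format '{{.Server.Version}}'

and, on 18.09.x, the full docker version output also lists the bundled runc component )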

Make it so ....

Security Bulletin: IBM Cloud Kubernetes Service is affected by a privilege escalation vulnerability in runc

Following on from this: -

CVE-2019-5736

runc through 1.0-rc6, as used in Docker before 18.09.2 and other products, allows attackers to overwrite the host runc binary (and consequently obtain host root access) by leveraging the ability to execute a command as root within one of these types of containers: (1) a new container with an attacker-controlled image, or (2) an existing container, to which the attacker previously had write access, that can be attached with docker exec. This occurs because of file-descriptor mishandling, related to /proc/self/exe. 

IBM issued this: -

Security Bulletin: IBM Cloud Kubernetes Service is affected by a privilege escalation vulnerability in runc

...
IBM Cloud Kubernetes Service is affected by a security vulnerability in runc which could allow an attacker that is authorized to run a process as root inside a container to execute arbitrary commands with root privileges on the container’s host system.
...
Updates for IBM Cloud Kubernetes Service cluster worker nodes at versions 1.10 and later will be available shortly that fix this vulnerability.  Customers must update their worker nodes to address the vulnerability.  See Updating worker nodes for details on updating worker nodes.  To verify your cluster worker nodes have been updated, use the following IBM Cloud CLI command to confirm the currently running version:
...

I've got an IKS cluster running: -

https://cloud.ibm.com/containers-kubernetes/overview

so wanted to ensure that my worker node was suitably patched.

So, having logged into IBM Cloud: -

ibmcloud login ....

I checked my cluster: -

ibmcloud ks workers --cluster dmhIKSCluster

OK
ID                                                 Public IP       Private IP      Machine Type        State    Status   Zone    Version   
kube-dal10-crbd60afb0c7ff4a98a4017fb784ee4e96-w1   192.168.153.123   10.94.221.198   u2c.2x4.encrypted   normal   Ready    dal10   1.11.6_1541*   

* To update to 1.11.7_1544 version, run 'ibmcloud ks worker-update'. Review and make any required version changes before you update: https://console.bluemix.net/docs/containers/cs_cluster_update.html#worker_node

and then updated the worker: -

ibmcloud ks worker-update --cluster dmhIKSCluster --workers kube-dal10-crbd60afb0c7ff4a98a4017fb784ee4e96-w1

Updating the worker node version can cause downtime for your apps and services. During the update, all pods might be rescheduled onto other worker nodes and data is deleted if not stored outside the pod. To avoid downtime, ensure that you have enough worker nodes to handle your workload while the selected worker nodes are updating.
You might need to change your YAML files for deployments before updating. Review the docs for details: https://console.bluemix.net/docs/containers/cs_cluster_update.html#worker_node
Are you sure you want to update your worker node [kube-dal10-crbd60afb0c7ff4a98a4017fb784ee4e96-w1] to 1.11.7_1544? [y/N]> y

Updating worker kube-dal10-crbd60afb0c7ff4a98a4017fb784ee4e96-w1...
OK

ibmcloud ks workers --cluster dmhIKSCluster

OK
ID                                                 Public IP       Private IP      Machine Type        State    Status   Zone    Version   
kube-dal10-crbd60afb0c7ff4a98a4017fb784ee4e96-w1   192.168.153.123   10.94.221.198   u2c.2x4.encrypted   normal   Ready    dal10   1.11.6_1541 --> 1.11.7_1544 (pending)   

ibmcloud ks workers --cluster dmhIKSCluster

OK
ID                                                 Public IP       Private IP      Machine Type        State    Status   Zone    Version   
kube-dal10-crbd60afb0c7ff4a98a4017fb784ee4e96-w1   192.168.153.123   10.94.221.198   u2c.2x4.encrypted   normal   Ready    dal10   1.11.6_1541 --> 1.11.7_1544 (pending)   

ibmcloud ks workers --cluster dmhIKSCluster

OK
ID                                                 Public IP       Private IP      Machine Type        State       Status                                                                Zone    Version   
kube-dal10-crbd60afb0c7ff4a98a4017fb784ee4e96-w1   192.168.153.123   10.94.221.198   u2c.2x4.encrypted   reloading   Waiting for IBM Cloud infrastructure: Setup provision configuration   dal10   1.11.6_1541 --> 1.11.7_1544 (pending)   

ibmcloud ks workers --cluster dmhIKSCluster

OK
ID                                                 Public IP       Private IP      Machine Type        State    Status   Zone    Version   
kube-dal10-crbd60afb0c7ff4a98a4017fb784ee4e96-w1   192.168.153.123   10.94.221.198   u2c.2x4.encrypted   normal   Ready    dal10   1.11.7_1544   

So, after a small amount of time, I'm all updated.
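
( Rather than re-running ibmcloud ks workers by hand, a simple shell loop can do the polling - a rough sketch: -

while true; do
  ibmcloud ks workers --cluster dmhIKSCluster
  sleep 60
done

with [ctrl] [c] to stop it once the Version column shows 1.11.7_1544 )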

Shortly afterwards, I received an email from IBM Cloud: -

...
The operating system reload is complete for computing instance kube-dal10-crbd60afb0c7ff4a98a4017fb784ee4e96-w1.cloud.ibm [192.168.153.123].
...

and my cluster is clean n' green.

Now to finish updating Docker elsewhere ... including on the Mac

Wednesday 13 February 2019

Following up ... defining K8S Services using YAML

As a follow-up to this: -

Playing with Kubernetes deployments and NodePort services

life is SO much easier if I choose to define the service using YAML ( YAML Ain't Markup Language ).

So this is what I ended up with: -

cat ~/Desktop/nginx.yaml
 
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  namespace: default
  labels:
    app: nginx
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: nginx

Remember that YAML is indentation-sensitive and, apparently, tabs are abhorrent :-)
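
( A quick sanity check before applying anything, at least with this vintage of kubectl, is a client-side dry run: -

kubectl apply -f ~/Desktop/nginx.yaml --dry-run

which parses and validates the document without actually creating the service )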

Having created - and validated, using various linting plugins for Atom - the YAML, I was then able to apply it: -

kubectl apply -f ~/Desktop/nginx.yaml

service "my-nginx" created

and then validate: -

kubectl get services

NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   172.21.0.1      <none>        443/TCP        14d
my-nginx     NodePort    172.21.24.197   <none>        80:31665/TCP   7s

and then test: -

curl http://192.168.132.131:31665

Welcome to nginx!

If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.

For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.

Thank you for using nginx.
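
( When all one wants is a quick smoke-test, asking curl for just the HTTP status code keeps things terse: -

curl -s -o /dev/null -w '%{http_code}\n' http://192.168.132.131:31665

which should return 200 )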

For reference, one can get YAML or JSON out of most, but not all, of the K8S commands: -

kubectl get services -o yaml

apiVersion: v1
items:
- apiVersion: v1
  kind: Service
  metadata:
    creationTimestamp: 2019-01-29T16:18:07Z
    labels:
      component: apiserver
      provider: kubernetes
    name: kubernetes
    namespace: default
    resourceVersion: "33"
    selfLink: /api/v1/namespaces/default/services/kubernetes
    uid: 76a02dea-23e1-11e9-b35e-2a02ed9d765d
  spec:
    clusterIP: 172.21.0.1
    ports:
    - name: https
      port: 443
      protocol: TCP
      targetPort: 2040
    sessionAffinity: None
    type: ClusterIP
  status:
    loadBalancer: {}
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"nginx"},"name":"my-nginx","namespace":"default"},"spec":{"ports":[{"name":"http","port":80,"protocol":"TCP","targetPort":80}],"selector":{"app":"nginx"},"type":"NodePort"}}
    creationTimestamp: 2019-02-13T11:40:09Z
    labels:
      app: nginx
    name: my-nginx
    namespace: default
    resourceVersion: "2072491"
    selfLink: /api/v1/namespaces/default/services/my-nginx
    uid: 1da46471-2f84-11e9-9f99-1201bf98c5fb
  spec:
    clusterIP: 172.21.24.197
    externalTrafficPolicy: Cluster
    ports:
    - name: http
      nodePort: 31665
      port: 80
      protocol: TCP
      targetPort: 80
    selector:
      app: nginx
    sessionAffinity: None
    type: NodePort
  status:
    loadBalancer: {}
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

kubectl get services -o json

{
    "apiVersion": "v1",
    "items": [
        {
            "apiVersion": "v1",
            "kind": "Service",
            "metadata": {
                "creationTimestamp": "2019-01-29T16:18:07Z",
                "labels": {
                    "component": "apiserver",
                    "provider": "kubernetes"
                },
                "name": "kubernetes",
                "namespace": "default",
                "resourceVersion": "33",
                "selfLink": "/api/v1/namespaces/default/services/kubernetes",
                "uid": "76a02dea-23e1-11e9-b35e-2a02ed9d765d"
            },
            "spec": {
                "clusterIP": "172.21.0.1",
                "ports": [
                    {
                        "name": "https",
                        "port": 443,
                        "protocol": "TCP",
                        "targetPort": 2040
                    }
                ],
                "sessionAffinity": "None",
                "type": "ClusterIP"
            },
            "status": {
                "loadBalancer": {}
            }
        },
        {
            "apiVersion": "v1",
            "kind": "Service",
            "metadata": {
                "annotations": {
                    "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"nginx\"},\"name\":\"my-nginx\",\"namespace\":\"default\"},\"spec\":{\"ports\":[{\"name\":\"http\",\"port\":80,\"protocol\":\"TCP\",\"targetPort\":80}],\"selector\":{\"app\":\"nginx\"},\"type\":\"NodePort\"}}\n"
                },
                "creationTimestamp": "2019-02-13T11:40:09Z",
                "labels": {
                    "app": "nginx"
                },
                "name": "my-nginx",
                "namespace": "default",
                "resourceVersion": "2072491",
                "selfLink": "/api/v1/namespaces/default/services/my-nginx",
                "uid": "1da46471-2f84-11e9-9f99-1201bf98c5fb"
            },
            "spec": {
                "clusterIP": "172.21.24.197",
                "externalTrafficPolicy": "Cluster",
                "ports": [
                    {
                        "name": "http",
                        "nodePort": 31665,
                        "port": 80,
                        "protocol": "TCP",
                        "targetPort": 80
                    }
                ],
                "selector": {
                    "app": "nginx"
                },
                "sessionAffinity": "None",
                "type": "NodePort"
            },
            "status": {
                "loadBalancer": {}
            }
        }
    ],
    "kind": "List",
    "metadata": {
        "resourceVersion": "",
        "selfLink": ""
    }
}
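
Better still, -o jsonpath can pull out a single field rather than the whole document - for example, to grab just the NodePort of the service created above: -

kubectl get service my-nginx -o jsonpath='{.spec.ports[0].nodePort}'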

which is nice.

Playing with Kubernetes deployments and NodePort services

Today I'm fiddling with nginx as a workload on my IBM Kubernetes Service (IKS) cluster.

My default process was this: -

kubectl create deployment nginx --image=nginx

...
deployment.extensions "nginx" created
...

kubectl get deployments

...
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx     1         1         1            1           20s
...

kubectl describe pod `kubectl get pods | grep nginx | awk '{print $1}'`

...
Events:
  Type     Reason             Age                From                     Message
  ----     ------             ----               ----                     -------
  Normal   Scheduled          39m                default-scheduler        Successfully assigned default/nginx-78f5d695bd-jxfvm to 192.168.132.123
...
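
( As an aside, the grep/awk dance can be avoided with a label selector - assuming the deployment's pods carry the app=nginx label, which kubectl create deployment applies by default: -

kubectl describe pod -l app=nginx )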

kubectl create service nodeport nginx --tcp=80:80

...
service "nginx" created
...

kubectl get services

...
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   172.21.0.1       <none>        443/TCP        14d
nginx        NodePort    172.21.113.167   <none>        80:30923/TCP   20s
...

kubectl get nodes

...
NAME              STATUS    ROLES     AGE       VERSION
192.168.132.123   Ready     <none>    13d       v1.11.5
...

and then combine the public IP address of the node ( 192.168.132.123 ) and the generated NodePort ( 30923 ) to allow me to access nginx: -

curl http://192.168.132.123:30923

...
Welcome to nginx!

If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.

For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.

Thank you for using nginx.
...
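
For what it's worth, the same test can be scripted, rather than eyeballing the IP address and NodePort - a rough sketch, which assumes a single-node cluster and relies on the fact that, on this cluster, the node name is its IP address: -

NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].metadata.name}')
NODE_PORT=$(kubectl get service nginx -o jsonpath='{.spec.ports[0].nodePort}')
curl http://${NODE_IP}:${NODE_PORT}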

I also realised that I could "debug" the pod hosting the nginx service: -

docker ps -a

...
CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS               NAMES
fda90a370446        nginx                  "nginx -g 'daemon of…"   19 seconds ago      Up 18 seconds                           k8s_nginx_nginx-78f5d695bd-8vrgl_default_d39dad7e-2f79-11e9-9f99-1201bf98c5fb_0
...

docker logs fda90a370446

...
10.23.2.59 - - [13/Feb/2019:10:30:42 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.54.0" "-"
...

However, I also "discovered" that there seems to be a correlation between the NAME of the NodePort service and the NAME of the nginx deployment.

If I create a NodePort service with a different name: -

kubectl create service nodeport foobar --tcp=80:80

and then get the newly created service: -

kubectl get services

...
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
foobar       NodePort    172.21.0.34   <none>        80:30952/TCP   1m
kubernetes   ClusterIP   172.21.0.1    <none>        443/TCP        14d
...

I'm no longer able to hit nginx: -

curl http://192.168.132.123:30952

returns: -

...
curl: (7) Failed to connect to 192.168.132.123 port 30952: Connection refused
...

If I delete the service: -

kubectl delete service foobar

...
service "foobar" deleted
...

and recreate it with the SAME name as the deployment: -

kubectl create service nodeport nginx --tcp=80:80

...
service "nginx" created
...

I'm back in the game: -

kubectl get services

...
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   172.21.0.1     <none>        443/TCP        14d
nginx        NodePort    172.21.31.44   <none>        80:30281/TCP   33s
...

curl http://192.168.132.123:30281

...
Welcome to nginx!

If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.

For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.

Thank you for using nginx.
...

So there does APPEAR to be a correlation between the deployment name and the service name.

K8S uses labels and selectors for this kind of wiring; at this point, I hadn't appreciated that they also drive a NodePort service.

However, there is a different way ....

It is possible to expose an existing deployment and create a NodePort service "on the fly", as per the following: -

kubectl expose deployment nginx --type=NodePort --name=my-nginx --port 80

This creates a NodePort service: -

kubectl get services

...
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   172.21.0.1     <none>        443/TCP        14d
my-nginx     NodePort    172.21.8.192   <none>        80:30628/TCP   39s
...

NOTE that the name of the service - my-nginx - does NOT tie up with the deployment per se and yet ....

curl http://192.168.132.123:30628

...
Welcome to nginx!

If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.

For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.

Thank you for using nginx.
...
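
( Incidentally, one can preview the service that kubectl expose would generate, without actually creating it, by adding --dry-run -o yaml: -

kubectl expose deployment nginx --type=NodePort --name=my-nginx --port 80 --dry-run -o yaml )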

If I dig into the newly created service: -

kubectl describe service my-nginx

Name:                     my-nginx
Namespace:                default
Labels:                   app=nginx
Annotations:              <none>
Selector:                 app=nginx
Type:                     NodePort
IP:                       172.21.8.192
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  30628/TCP
Endpoints:                172.30.148.204:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

I can see that it's tagged against the deployment - nginx - using the Label and Selector; I suspect that it's the latter that made the difference.
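
One way to confirm this is to compare the service's selector with the labels on the running pods, and then check the resulting Endpoints object - a quick sketch: -

kubectl get service my-nginx -o jsonpath='{.spec.selector}'

kubectl get pods -l app=nginx --show-labels

kubectl get endpoints my-nginx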

If I revert to my previous service: -

kubectl create service nodeport nginx --tcp=80:80

...
service "nginx" created
...

and dig into it: -

kubectl describe service nginx

Name:                     nginx
Namespace:                default
Labels:                   app=nginx
Annotations:              <none>
Selector:                 app=nginx
Type:                     NodePort
IP:                       172.21.68.125
Port:                     80-80  80/TCP
TargetPort:               80/TCP
NodePort:                 80-80  31411/TCP
Endpoints:                172.30.148.204:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

So the name of the service IS used to generate the Selector ( app=nginx ), which is what ties the service to the deployment's pods.

If I misname my service: -

kubectl create service nodeport foobar --tcp=80:80

service "foobar" created

kubectl describe service foobar

Name:                     foobar
Namespace:                default
Labels:                   app=foobar
Annotations:              <none>
Selector:                 app=foobar
Type:                     NodePort
IP:                       172.21.232.250
Port:                     80-80  80/TCP
TargetPort:               80/TCP
NodePort:                 80-80  32146/TCP
Endpoints:                <none>
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

which - given the empty Endpoints - explains why this does not work: -

curl http://192.168.132.123:32146

In other words, the name of the service DOES MATTER, but only where one explicitly creates the service via kubectl create service ( which derives the selector from the service name ), as opposed to letting kubectl expose deployment do it for one.


Tuesday 12 February 2019

Bringing it together ... Docker and AWK

Building on two earlier posts: -

AWK - It's my "new" best friend forever ....


here's a quick spot of tidying up completed/unwanted Docker containers, using the power of AWK: -

docker rm `docker ps -a|grep Exit|awk '{print $1}'`

So we're executing the result of: -

docker ps -a|grep Exit

e717fa412d6a        hello-world         "/hello"                 19 seconds ago      Exited (0) 18 seconds ago                          romantic_chatterjee
761e4ec73bb5        docker/whalesay     "cowsay 'Awwwww, tha…"   About an hour ago   Exited (0) About an hour ago                       jolly_sanderson
950cd9684ee6        docker/whalesay     "cowsay 'Aww Shucks …"   About an hour ago   Exited (0) About an hour ago                       clever_kare
d0d444d6f4f1        docker/whalesay     "cowsay Aww"             About an hour ago   Exited (0) About an hour ago                       pedantic_dirac
5c20a293793f        docker/whalesay     "cowsay Awwww Shucks…"   About an hour ago   Exited (0) About an hour ago                       dreamy_bell
d67c6d05cc24        docker/whalesay     "cowsay Awwww Shucks"    About an hour ago   Exited (0) About an hour ago                       quizzical_curran
7507f0df1db9        docker/whalesay     "cowsay Awwww"           About an hour ago   Exited (0) About an hour ago                       lucid_wright
77997983c07b        docker/whalesay     "cowsay"                 About an hour ago   Exited (0) About an hour ago                       festive_nobel
1b34cc7f227d        docker/whalesay     "cowsay boo"             About an hour ago   Exited (0) About an hour ago                       eager_khorana
517237d924bf        docker/whalesay     "cowsay boo"             27 hours ago        Exited (0) 27 hours ago                            naughty_ramanujan
6b6bb56464a9        docker/whalesay     "/bin/bash"              27 hours ago        Exited (0) 27 hours ago                            dreamy_gauss
3d0a99dfd9a4        hello-world         "/hello"                 27 hours ago        Exited (0) 27 hours ago                            frosty_hypatia

In other words, a list of all the containers in the Exited status .....

... and then parsing the returned list to ONLY give us the container ID: -

docker ps -a|grep Exit|awk '{print $1}'

e717fa412d6a
761e4ec73bb5
950cd9684ee6
d0d444d6f4f1
5c20a293793f
d67c6d05cc24
7507f0df1db9
77997983c07b
1b34cc7f227d
517237d924bf
6b6bb56464a9
3d0a99dfd9a4

Note that the grep pulls out everything with the word "Exit", thus ignoring the title line: -

CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                         PORTS               NAMES

We can then feed that into AWK to give us the container IDs.

We then use the power of the back tick ( ` ) to allow us to run the docker rm command over the output from the previous command: -

docker rm `docker ps -a|grep Exit|awk '{print $1}'`

e717fa412d6a
761e4ec73bb5
950cd9684ee6
d0d444d6f4f1
5c20a293793f
d67c6d05cc24
7507f0df1db9
77997983c07b
1b34cc7f227d
517237d924bf
6b6bb56464a9
3d0a99dfd9a4

which clears the field: -

docker ps -a

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

Nice !
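
As an aside, Docker can do much of this natively, which avoids the grep altogether - for example: -

docker rm $(docker ps -aq --filter status=exited)

or, on reasonably recent releases, the one-liner: -

docker container prune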

Ooops, broke my Red Hat

I had a brief issue with a Red Hat Enterprise Linux (RHEL) 7.6 VM this AM.

For no obvious reason (!), the VM kept booting into so-called Emergency Mode: -

Emergency mode provides the minimal bootable environment and allows you to repair your system even in situations when rescue mode is unavailable. In emergency mode, the system mounts only the root file system, and it is mounted as read-only. Also, the system does not activate any network interfaces and only a minimum of the essential services are set up. The system does not load any init scripts, therefore you can still mount file systems to recover data that would be lost during a re-installation if init is corrupted or not working.

33.3. Emergency Mode

Thankfully, I was prompted to look at the logs: -

journalctl -xb

which, after a few pages, showed a failure mounting one of the file-systems listed in /etc/fstab - which reminded me of yesterday's post, where I had been "playing" with file-systems, mount points and permissions.

I checked the mount points: -

cat /etc/fstab


Yep, I'd attempted to mount, as /snafu, a since-deleted directory - /foobar - and wondered why things didn't work :-)

Once I removed the entry from /etc/fstab and rebooted, all was well.
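
For the record - and to avoid another unplanned trip into Emergency Mode - it's worth test-mounting everything after editing /etc/fstab and before rebooting. The entry below is purely illustrative ( the real one is long gone ), but the check is the same: -

# illustrative fstab entry - a bind mount pointing at a directory that no longer exists
# /foobar   /snafu   none   bind   0 0

mount -a

If mount -a comes back clean, the next reboot should too.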

Two morals of the story: -
  1. Don't fiddle
  2. Watch the logs
😅😅

Visual Studio Code - Wow 🙀

Why did I not know that I can merely hit [cmd] [p] to bring up a search box allowing me to search my project, e.g. a repo cloned from GitHub...