Thursday, 17 June 2021

Networking notworking on Ubuntu ?

 Whilst helping a colleague dig into some issues he was seeing pulling an IBM WebSphere Liberty image from Docker Hub, I realised that my Ubuntu box was missing a couple of useful utilities, including nslookup and traceroute.

In the latter case, I did have traceroute6 but not the IPv4 equivalent.

Easily fixed ...

apt-get update && apt-get install -y dnsutils traceroute

and now we're good to go: -

which nslookup

/usr/bin/nslookup

nslookup -version

nslookup 9.16.1-Ubuntu

which traceroute

/usr/sbin/traceroute

traceroute --version

Modern traceroute for Linux, version 2.1.0
Copyright (c) 2016  Dmitry Butskoy,   License: GPL v2 or any later
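As a belt-and-braces check, a short POSIX sketch ( the tool-to-package pairs are my own assumption, with dig thrown in for good measure ) reports anything still missing: -

```shell
#!/bin/sh
# For each tool:package pair, report the tool if it's not on the PATH,
# along with the Ubuntu package that provides it
# ( dnsutils supplies both nslookup and dig )
for pair in nslookup:dnsutils dig:dnsutils traceroute:traceroute; do
  tool=${pair%%:*}
  if ! command -v "$tool" >/dev/null 2>&1; then
    echo "$tool missing - install with: apt-get install -y ${pair##*:}"
  fi
done
```
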

Monday, 14 June 2021

Design, build, and deploy universal application images

 From IBM ( for whom I've worked for ~30 years ) : -

Building high-quality container images and their corresponding pod specifications is the foundation for Kubernetes to effectively run and manage an application in production. There are numerous ways to build images, so knowing where to start can be confusing.
This learning path introduces you to the universal application image (UAI). A UAI is an image that uses Red Hat’s Universal Base Image (UBI) as its foundation, includes the application being deployed, and also adds extra elements that make it more secure and scalable in Kubernetes and Red Hat OpenShift.
Specifically, a universal application image:
    Is built from a Red Hat UBI
    Can run on Kubernetes and OpenShift
    Does not require any Red Hat licensing, so it’s freely distributable
    Includes qualities that make it run more efficiently
    Is supported by Red Hat when run in OpenShift
The articles in this learning path describe best practices for packaging an application, highlighting elements that are critical to include when designing the image, performing the build, and deploying the application.

Design, build, and deploy universal application images

Thursday, 10 June 2021

Tinkering with containerd and the ctr tool

 Some notes from a recent tinkering with containerd and ctr ...

which ctr

/usr/bin/ctr

ctr version

Client:
  Version:  1.4.4-0ubuntu1~20.04.2
  Revision:
  Go version: go1.13.8
Server:
  Version:  1.4.4-0ubuntu1~20.04.2
  Revision:
  UUID: 47a84416-93a1-4934-b850-fecb8dddf519

Pull an image

ctr image pull docker.io/library/nginx:latest -u davidhay1969

docker.io/library/nginx:latest:                                                   resolved       |++++++++++++++++++++++++++++++++++++++|
index-sha256:6d75c99af15565a301e48297fa2d121e15d80ad526f8369c526324f0f7ccb750:    exists         |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:61191087790c31e43eb37caa10de1135b002f10c09fdda7fa8a5989db74033aa: exists         |++++++++++++++++++++++++++++++++++++++|
layer-sha256:351ad75a6cfabc7f2e103963945ff803d818f0bdcf604fd2072a0eefd6674bde:    exists         |++++++++++++++++++++++++++++++++++++++|
layer-sha256:596b1d696923618bec6ff5376cc9aed03a3724bc75b6c03221fd877b62046d05:    exists         |++++++++++++++++++++++++++++++++++++++|
layer-sha256:30afc0b18f67ae8441c2d26e356693009bb8927ab7e3bce05d5ed99531c9c1d4:    exists         |++++++++++++++++++++++++++++++++++++++|
layer-sha256:febe5bd23e98102ed5ff64b8f5987f516a945745c08bbcf2c61a50fb6e7b2257:    exists         |++++++++++++++++++++++++++++++++++++++|
layer-sha256:8283eee92e2f756bd57f96ea295e332ab9031724267d4f939de1f7d19fe9611a:    exists         |++++++++++++++++++++++++++++++++++++++|
config-sha256:d1a364dc548d5357f0da3268c888e1971bbdb957ee3f028fe7194f1d61c6fdee:   exists         |++++++++++++++++++++++++++++++++++++++|
layer-sha256:69692152171afee1fd341febc390747cfca2ff302f2881d8b394e786af605696:    exists         |++++++++++++++++++++++++++++++++++++++|
elapsed: 1.3 s                                                                    total:   0.0 B (0.0 B/s)                                         
unpacking linux/amd64 sha256:6d75c99af15565a301e48297fa2d121e15d80ad526f8369c526324f0f7ccb750...
done

List images

ctr image list

docker.io/library/nginx:latest              application/vnd.docker.distribution.manifest.list.v2+json sha256:6d75c99af15565a301e48297fa2d121e15d80ad526f8369c526324f0f7ccb750 51.3 MiB  linux/386,linux/amd64,linux/arm/v5,linux/arm/v7,linux/arm64/v8,linux/mips64le,linux/ppc64le,linux/s390x              -      

Create a container ( in background mode via -d )

ctr run --net-host -d --rm -t docker.io/library/nginx:latest nginx

Nothing returned

List running containers

ctr container list

CONTAINER    IMAGE                                                RUNTIME
nginx        docker.io/library/nginx:latest    io.containerd.runc.v2    

List tasks

ctr task list

TASK                 PID     STATUS    
nginx    1287661    RUNNING

List Linux processes

ps aux | grep containerd | grep -v grep

root       39604  0.8  1.6 1287024 67348 ?       Ssl  Jun08  18:11 /usr/bin/containerd
root     1287636  0.0  0.1 111852  7952 ?        Sl   01:44   0:00 /usr/bin/containerd-shim-runc-v2 -namespace default -id nginx -address /run/containerd/containerd.sock

Inspect task

ctr task ps nginx

PID        INFO
1287661    -
1287712    -
1287713    -

Attempt to remove task

ctr task delete nginx

ERRO[0000] unable to delete nginx                        error="task must be stopped before deletion: running: failed precondition"
ctr: task must be stopped before deletion: running: failed precondition

Attempt to remove container

ctr container delete nginx

ERRO[0000] failed to delete container "nginx"            error="cannot delete a non stopped container: {running 0 0001-01-01 00:00:00 +0000 UTC}"
ctr: cannot delete a non stopped container: {running 0 0001-01-01 00:00:00 +0000 UTC}

Kill the task

ctr task kill nginx

Nothing returned

Attempt to remove task

ctr task delete nginx

Nothing returned

Attempt to remove container

ctr container delete nginx

Nothing returned
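That stop-then-delete ordering can be captured in a small helper function - a sketch only, where cleanup_ctr is my own name and the argument is whatever ID was passed to ctr run: -

```shell
#!/bin/sh
# Tear down a ctr-managed container in the order containerd insists upon:
# kill the task, delete the ( now stopped ) task, then delete the container
cleanup_ctr() {
  name=$1
  ctr task kill "$name" || true    # SIGTERM by default; add --signal SIGKILL for stubborn tasks
  ctr task delete "$name"          # fails while the task is still running
  ctr container delete "$name"     # fails while the task still exists
}

# Usage: cleanup_ctr nginx
```

Note that ctr task kill is asynchronous, so a short sleep ( or a retry loop ) between the kill and the delete may be needed in practice.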

Create a container ( in foreground mode via -t with sh )

- note that the container automatically terminates, and is removed, upon exit, via the --rm remove switch

ctr run --net-host --rm -t docker.io/library/nginx:latest nginx sh

#

Inspect Nginx configuration ( inside container )

cat /etc/nginx/nginx.conf

user  nginx;
worker_processes  auto;
error_log  /var/log/nginx/error.log notice;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    access_log  /var/log/nginx/access.log  main;
    sendfile        on;
    #tcp_nopush     on;
    keepalive_timeout  65;
    #gzip  on;
    include /etc/nginx/conf.d/*.conf;
}

Exit container

exit

Create a container ( in foreground mode via -t with sh, mounting a local /k8s directory into the container as /k8s )

mkdir /k8s

echo "Hello World!" >> /k8s/greeting.txt

ctr run --net-host --mount type=bind,src=/k8s,dst=/k8s,options=rbind --rm -t docker.io/library/nginx:latest nginx sh

#

Display greeting from inside container

cat /k8s/greeting.txt

Hello World!

Exit container

exit

Wednesday, 9 June 2021

Tinkering with OpenLDAP on Docker on Ubuntu

 Following a discussion with a colleague on Slack, I thought I'd remind myself how OpenLDAP works as a service running inside a container, via the Docker container runtime.

Using this for inspiration: -

Docker image for OpenLDAP support

I pulled the requisite image from Docker Hub: -

docker login -u davidhay1969 -p <DOCKER TOKEN>

docker pull osixia/openldap:1.5.0

and created a container: -

docker run --detach -p 3389:389 osixia/openldap:1.5.0 

Note that I'm using port mapping via -p 3389:389 to map the external ( host ) port of 3389 to the internal ( container ) port of 389

This allows me to run the container without needing to run it in privileged mode ( as Unix typically blocks non-root processes from listening on ports lower than 1,024 ).

Once the container was running happily: -

docker ps -a

CONTAINER ID   IMAGE                   COMMAND                 CREATED          STATUS          PORTS                            NAMES
23a39685da58   osixia/openldap:1.5.0   "/container/tool/run"   20 minutes ago   Up 20 minutes   636/tcp, 0.0.0.0:3389->389/tcp   agitated_mendel
55de9ae1b94a   busybox                 "sh"                    2 days ago       Created                                          nostalgic_mclean
da6a3136a33e   busybox                 "sh"                    13 days ago      Created                                          happy_swirles

I installed ldap-utils to give me the ldapsearch command: -

apt-get install -y ldap-utils

and then ran ldapsearch against the container via the mapped port: -

ldapsearch -H ldap://localhost:3389 -D cn=admin,dc=example,dc=org -w admin -b dc=example,dc=org

Note that I'm using the default credentials of admin / admin and would, of course, be changing this if this was a real-world environment .....
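To prove that writes work as well as reads, a hypothetical LDIF fragment ( the entry and file names here are made up purely for illustration ) can be loaded with ldapadd, using the same default admin / admin credentials: -

```ldif
dn: cn=testuser,dc=example,dc=org
objectClass: inetOrgPerson
cn: testuser
sn: User
userPassword: changeme
```

Saved as, say, testuser.ldif, this loads via: ldapadd -H ldap://localhost:3389 -D cn=admin,dc=example,dc=org -w admin -f testuser.ldif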

Bash and fun with escape characters

Whilst writing the world's simplest Bash script to: -

  • delete a pod
  • deploy a new pod
  • list the running pods
  • describe the newly deployed pod

I wanted to add newline characters into my echo statements, to make things more readable.

I've written before about echo -e so was just double-checking my understanding via the command-line ...

I entered: -

echo -e "Hello World!\n"

but, instead of a friendly greeting, I saw: -

-bash: !\n: event not found

Wait, what now ?

Yeah, of course, I've entered Bash's magical history expansion sequence of: -

pling slash n

which, obviously, isn't what I meant to do ....

Easy solution - stick in a space character ...

echo -e "Hello World! \n"

Hello World! 
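For completeness, a few other ways to keep history expansion away from the pling - noting that this only bites in an interactive shell, since scripts don't perform history expansion at all: -

```shell
#!/bin/bash
# Single quotes stop the shell from seeing the pling at all
echo -e 'Hello World!\n'

# printf interprets \n itself, needs no -e, and is equally safe in single quotes
printf 'Hello World!\n'

# Or switch history expansion off for the session ( Bash only, hence the guard )
[ -n "$BASH_VERSION" ] && set +H
echo -e "Hello World!\n"
```
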

So here's the finished script: -

#!/bin/bash

clear

echo -e "Deleting existing busybox-kata pod\n"

kubectl delete pod busybox-kata

echo -e "\nDeploying new busybox-kata pod\n"

kubectl apply -f busybox-kata.yaml 

echo -e "\nSleeping ...\n"

sleep 10

echo -e "\nChecking pods ...\n"

kubectl get pods

echo -e "\nSleeping ...\n"

sleep 5

echo -e "\nDescribing busybox-kata pod to see all is good ...\n"

kubectl describe pod busybox-kata | tail -8

Sunday, 6 June 2021

Doh, Kubernetes fails to run due to a lack of ...

Having started the build of a new K8s 1.21 cluster: -

kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=$private_ip --apiserver-cert-extra-sans=$public_ip --kubernetes-version ${KUBE_VERSION}

I saw: -
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

I did a spot of digging in the system log ( /var/log/syslog ) and found this: -

Jun  6 12:17:53 hurlinux2 containerd[39485]: time="2021-06-06T12:17:53.210883120-07:00" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-telicity1.fyre.ibm.com,Uid:599ab88dc99dd5fbdb7c6a92e4e965ba,Namespace:kube-system,Attempt:0,} failed, error" error="failed to create containerd task: failed to create shim: OCI runtime create failed: unable to retrieve OCI runtime error (open /run/containerd/io.containerd.runtime.v2.task/k8s.io/177b079efbe1b71b65586a467313d1a15e802a2b2d323a7ed974d5d2e99e33f5/log.json: no such file or directory): exec: \"runc\": executable file not found in $PATH: unknown"
Jun  6 12:17:53 hurlinux2 kubelet[41466]: E0606 12:17:53.211759   41466 remote_runtime.go:116] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim: OCI runtime create failed: unable to retrieve OCI runtime error (open /run/containerd/io.containerd.runtime.v2.task/k8s.io/177b079efbe1b71b65586a467313d1a15e802a2b2d323a7ed974d5d2e99e33f5/log.json: no such file or directory): exec: \"runc\": executable file not found in $PATH: unknown"

which made me think "Hmmmm, wonder what I forgot ..."

A quick trip to Aptitude ...

apt-get install runc

Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following NEW packages will be installed:
  runc
0 upgraded, 1 newly installed, 0 to remove and 8 not upgraded.
Need to get 4,018 kB of archives.
After this operation, 15.7 MB of additional disk space will be used.
Get:1 http://us.archive.ubuntu.com/ubuntu focal-updates/main amd64 runc amd64 1.0.0~rc93-0ubuntu1~20.04.2 [4,018 kB]
Fetched 4,018 kB in 1s (4,037 kB/s)
Selecting previously unselected package runc.
(Reading database ... 107797 files and directories currently installed.)
Preparing to unpack .../runc_1.0.0~rc93-0ubuntu1~20.04.2_amd64.deb ...
Unpacking runc (1.0.0~rc93-0ubuntu1~20.04.2) ...
Setting up runc (1.0.0~rc93-0ubuntu1~20.04.2) ...
Processing triggers for man-db (2.9.1-1) ...

and a kubeadm reset ( to clear down the borked cluster creation ) and I was then able to re-run kubeadm init as before, and we're good to go .....
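With hindsight, a quick pre-flight sketch ( the have helper is my own, not kubeadm's ) would have caught this before the four-minute control-plane timeout: -

```shell
#!/bin/sh
# have() reports whether a binary is on the PATH - containerd's shim execs "runc",
# so check for it before kubeadm init rather than after the control plane times out
have() { command -v "$1" >/dev/null 2>&1; }

if have runc; then
  runc --version
else
  echo "runc not found - fix with: apt-get update && apt-get install -y runc"
fi
```
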

