Monday, 26 April 2021

Wow, SSHD on Synology - fun and games

Having created a new user on my Synology DS414+, I was trying, and failing, to SSH in as that non-admin user, via the command: -

ssh hayd@diskstation

I kept getting asked for a password, even though I was expecting to authenticate using my private key, having added my public key to the ~/.ssh/authorized_keys file on the NAS.

After lots of digging using ssh -vvv etc., I saw this: -

...

debug3: send packet: type 50

debug2: we sent a publickey packet, wait for reply

debug3: receive packet: type 51

debug1: Authentications that can continue: publickey,password

debug2: we did not send a packet, disable method

debug3: authmethod_lookup password

debug3: remaining preferred: ,password

debug3: authmethod_is_enabled password

debug1: Next authentication method: password

hayd@diskstation's password: 

...

which led me down a path of checking permissions on the user's home directory.

Firstly, I changed the permission of the .ssh subdirectory: -

chmod 700 /var/services/homes/hayd/.ssh/

but no dice.

Secondly, I changed the permission of the authorized_keys file: -

chmod 600 /var/services/homes/hayd/.ssh/authorized_keys 

Still nada.

Thirdly, I changed the permission of the home directory itself: -

chmod g-w /var/services/homes/hayd/

Et voilà.

Sigh!
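For the record, all three fixes can be applied in one go for a fresh user; a minimal sketch, assuming the same Synology home layout and user name: -

chmod g-w /var/services/homes/hayd/
chmod 700 /var/services/homes/hayd/.ssh/
chmod 600 /var/services/homes/hayd/.ssh/authorized_keys

( it's sshd's StrictModes check that rejects key authentication when the home directory or ~/.ssh is group-writable, hence the fallback to a password prompt )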

Wednesday, 14 April 2021

New day, new Docker, new capability - image scanning

 Whilst I was upgrading some of my Ubuntu boxes the other day, I noticed a new plugin - docker-scan-plugin - in the list of things being upgraded.

A quick Google brought me this: -

Vulnerability scanning for Docker local images

Having since upgraded Docker on my Mac: -

docker version

Client: Docker Engine - Community
 Cloud integration: 1.0.12
 Version:           20.10.5
 API version:       1.41
 Go version:        go1.13.15
 Git commit:        55c4c88
 Built:             Tue Mar  2 20:13:00 2021
 OS/Arch:           darwin/amd64
 Context:           default
 Experimental:      true
Server: Docker Engine - Community
 Engine:
  Version:          20.10.5
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       363e9a8
  Built:            Tue Mar  2 20:15:47 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.4
  GitCommit:        05f951a3781f4f2c1911b05e61c160e9c30eaa8e
 runc:
  Version:          1.0.0-rc93
  GitCommit:        12644e614e25b05da6fd08a38ffa0cfe1903fdec
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

I checked my available images: -

docker images

REPOSITORY    TAG       IMAGE ID       CREATED       SIZE
busybox       latest    388056c9a683   6 days ago    1.23MB
tekton-lint   latest    b79680846c0c   10 days ago   93.1MB

and then scanned one of them: -

docker scan busybox

Docker Scan relies upon access to Snyk, a third party provider, do you consent to proceed using Snyk? (y/N)
y

Testing busybox...

Organization:      undefined
Package manager:   linux
Project name:      docker-image|busybox
Docker image:      busybox
Platform:          linux/amd64

✓ Tested busybox for known vulnerabilities, no vulnerable paths found.

Note that we do not currently have vulnerability data for your image.

For more free scans that keep your images secure, sign up to Snyk at https://dockr.ly/3ePqVcp
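The plugin also takes a --severity flag ( low, medium or high ) and a --file flag pointing at the Dockerfile used to build the image, which lets Snyk map findings back to individual instructions; for example ( assuming a locally-built image with its Dockerfile in the current directory ): -

docker scan --file Dockerfile --severity high tekton-lint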

Definitely a nice capability to have in the kitbag - we're also using IBM Container Registry's built-in Vulnerability Advisor tool, but more insights are better than fewer ....

Friday, 9 April 2021

Penny dropped - why is my job immutable ?

I'm tinkering with kube-bench at present, and wanted to deploy it as a Kubernetes Job using the job.yaml that's included with the repo.

However, my custom-built kube-bench image is stored in a private registry ( via IBM Container Registry ) so I needed to reference a Kubernetes Secret which I'd previously defined within my K8s cluster.

So far, so good ...

Having defined the Secret, I needed to reference it in the Job description ( job.yaml ) via an imagePullSecrets entry, which I added to the YAML: -

---
apiVersion: batch/v1
kind: Job
metadata:
  name: kube-bench
spec:
  template:
    metadata:
      labels:
        app: kube-bench
    spec:
      hostPID: true
      imagePullSecrets:
        - name: mysecret
      containers:
        - name: kube-bench
          image: aquasec/kube-bench:latest
          command: ["kube-bench"]

However, when I tried to apply the YAML: -

kubectl apply -f job.yaml 

The Job "kube-bench" is invalid: spec.template: Invalid value: core.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kube-bench", "controller-uid":"849af2dc-fd58-48af-bdbc-529fb8b65a56", "job-name":"kube-bench"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.PodSpec{Volumes:[]core.Volume{core.Volume{Name:"var-lib-etcd", VolumeSource:core.VolumeSource{HostPath:(*core.HostPathVolumeSource)(0xc007c0b100), EmptyDir:(*core.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*core.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*core.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*core.GitRepoVolumeSource)(nil), Secret:(*core.SecretVolumeSource)(nil), NFS:(*core.NFSVolumeSource)(nil), ISCSI:(*core.ISCSIVolumeSource)(nil), Glusterfs:(*core.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*core.PersistentVolumeClaimVolumeSource)(nil), RBD:(*core.RBDVolumeSource)(nil), Quobyte:(*core.QuobyteVolumeSource)(nil), FlexVolume:(*core.FlexVolumeSource)(nil), Cinder:(*core.CinderVolumeSource)(nil), CephFS:(*core.CephFSVolumeSource)(nil), Flocker:(*core.FlockerVolumeSource)(nil), DownwardAPI:(*core.DownwardAPIVolumeSource)(nil), FC:(*core.FCVolumeSource)(nil), AzureFile:(*core.AzureFileVolumeSource)(nil), ConfigMap:(*core.ConfigMapVolumeSource)(nil), VsphereVolume:(*core.VsphereVirtualDiskVolumeSource)(nil), AzureDisk:(*core.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*core.PhotonPersistentDiskVolumeSource)(nil), Projected:(*core.ProjectedVolumeSource)(nil), PortworxVolume:(*core.PortworxVolumeSource)(nil), ScaleIO:(*core.ScaleIOVolumeSource)(nil), StorageOS:(*core.StorageOSVolumeSource)(nil), CSI:(*core.CSIVolumeSource)(nil), Ephemeral:(*core.EphemeralVolumeSource)(nil)}}, core.Volume{Name:"var-lib-kubelet", VolumeSource:core.VolumeSource{HostPath:(*core.HostPathVolumeSource)(0xc007c0b160), EmptyDir:(*core.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*core.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*core.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*core.GitRepoVolumeSource)(nil), Secret:(*core.SecretVolumeSource)(nil), NFS:(*core.NFSVolumeSource)(nil), ISCSI:(*core.ISCSIVolumeSource)(nil), Glusterfs:(*core.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*core.PersistentVolumeClaimVolumeSource)(nil), RBD:(*core.RBDVolumeSource)(nil), Quobyte:(*core.QuobyteVolumeSource)(nil), FlexVolume:(*core.FlexVolumeSource)(nil), Cinder:(*core.CinderVolumeSource)(nil), CephFS:(*core.CephFSVolumeSource)(nil), Flocker:(*core.FlockerVolumeSource)(nil), DownwardAPI:(*core.DownwardAPIVolumeSource)(nil), FC:(*core.FCVolumeSource)(nil), AzureFile:(*core.AzureFileVolumeSource)(nil), ConfigMap:(*core.ConfigMapVolumeSource)(nil), VsphereVolume:(*core.VsphereVirtualDiskVolumeSource)(nil), AzureDisk:(*core.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*core.PhotonPersistentDiskVolumeSource)(nil), Projected:(*core.ProjectedVolumeSource)(nil), PortworxVolume:(*core.PortworxVolumeSource)(nil), ScaleIO:(*core.ScaleIOVolumeSource)(nil), StorageOS:(*core.StorageOSVolumeSource)(nil), CSI:(*core.CSIVolumeSource)(nil), Ephemeral:(*core.EphemeralVolumeSource)(nil)}}, 
core.Volume{Name:"var-lib-kube-scheduler", VolumeSource:core.VolumeSource{HostPath:(*core.HostPathVolumeSource)(0xc007c0b180), EmptyDir:(*core.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*core.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*core.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*core.GitRepoVolumeSource)(nil), Secret:(*core.SecretVolumeSource)(nil), NFS:(*core.NFSVolumeSource)(nil), ISCSI:(*core.ISCSIVolumeSource)(nil), Glusterfs:(*core.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*core.PersistentVolumeClaimVolumeSource)(nil), RBD:(*core.RBDVolumeSource)(nil), Quobyte:(*core.QuobyteVolumeSource)(nil), FlexVolume:(*core.FlexVolumeSource)(nil), Cinder:(*core.CinderVolumeSource)(nil), CephFS:(*core.CephFSVolumeSource)(nil), Flocker:(*core.FlockerVolumeSource)(nil), DownwardAPI:(*core.DownwardAPIVolumeSource)(nil), FC:(*core.FCVolumeSource)(nil), AzureFile:(*core.AzureFileVolumeSource)(nil), ConfigMap:(*core.ConfigMapVolumeSource)(nil), VsphereVolume:(*core.VsphereVirtualDiskVolumeSource)(nil), AzureDisk:(*core.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*core.PhotonPersistentDiskVolumeSource)(nil), Projected:(*core.ProjectedVolumeSource)(nil), PortworxVolume:(*core.PortworxVolumeSource)(nil), ScaleIO:(*core.ScaleIOVolumeSource)(nil), StorageOS:(*core.StorageOSVolumeSource)(nil), CSI:(*core.CSIVolumeSource)(nil), Ephemeral:(*core.EphemeralVolumeSource)(nil)}}, core.Volume{Name:"var-lib-kube-controller-manager", VolumeSource:core.VolumeSource{HostPath:(*core.HostPathVolumeSource)(0xc007c0b1a0), EmptyDir:(*core.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*core.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*core.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*core.GitRepoVolumeSource)(nil), Secret:(*core.SecretVolumeSource)(nil), NFS:(*core.NFSVolumeSource)(nil), ISCSI:(*core.ISCSIVolumeSource)(nil), Glusterfs:(*core.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*core.PersistentVolumeClaimVolumeSource)(nil), RBD:(*core.RBDVolumeSource)(nil), Quobyte:(*core.QuobyteVolumeSource)(nil), FlexVolume:(*core.FlexVolumeSource)(nil), Cinder:(*core.CinderVolumeSource)(nil), CephFS:(*core.CephFSVolumeSource)(nil), Flocker:(*core.FlockerVolumeSource)(nil), DownwardAPI:(*core.DownwardAPIVolumeSource)(nil), FC:(*core.FCVolumeSource)(nil), AzureFile:(*core.AzureFileVolumeSource)(nil), ConfigMap:(*core.ConfigMapVolumeSource)(nil), VsphereVolume:(*core.VsphereVirtualDiskVolumeSource)(nil), AzureDisk:(*core.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*core.PhotonPersistentDiskVolumeSource)(nil), Projected:(*core.ProjectedVolumeSource)(nil), PortworxVolume:(*core.PortworxVolumeSource)(nil), ScaleIO:(*core.ScaleIOVolumeSource)(nil), StorageOS:(*core.StorageOSVolumeSource)(nil), CSI:(*core.CSIVolumeSource)(nil), Ephemeral:(*core.EphemeralVolumeSource)(nil)}}, core.Volume{Name:"etc-systemd", VolumeSource:core.VolumeSource{HostPath:(*core.HostPathVolumeSource)(0xc007c0b1e0), EmptyDir:(*core.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*core.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*core.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*core.GitRepoVolumeSource)(nil), Secret:(*core.SecretVolumeSource)(nil), NFS:(*core.NFSVolumeSource)(nil), ISCSI:(*core.ISCSIVolumeSource)(nil), Glusterfs:(*core.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*core.PersistentVolumeClaimVolumeSource)(nil), RBD:(*core.RBDVolumeSource)(nil), Quobyte:(*core.QuobyteVolumeSource)(nil), FlexVolume:(*core.FlexVolumeSource)(nil), 
Cinder:(*core.CinderVolumeSource)(nil), CephFS:(*core.CephFSVolumeSource)(nil), Flocker:(*core.FlockerVolumeSource)(nil), DownwardAPI:(*core.DownwardAPIVolumeSource)(nil), FC:(*core.FCVolumeSource)(nil), AzureFile:(*core.AzureFileVolumeSource)(nil), ConfigMap:(*core.ConfigMapVolumeSource)(nil), VsphereVolume:(*core.VsphereVirtualDiskVolumeSource)(nil), AzureDisk:(*core.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*core.PhotonPersistentDiskVolumeSource)(nil), Projected:(*core.ProjectedVolumeSource)(nil), PortworxVolume:(*core.PortworxVolumeSource)(nil), ScaleIO:(*core.ScaleIOVolumeSource)(nil), StorageOS:(*core.StorageOSVolumeSource)(nil), CSI:(*core.CSIVolumeSource)(nil), Ephemeral:(*core.EphemeralVolumeSource)(nil)}}, core.Volume{Name:"lib-systemd", VolumeSource:core.VolumeSource{HostPath:(*core.HostPathVolumeSource)(0xc007c0b200), EmptyDir:(*core.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*core.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*core.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*core.GitRepoVolumeSource)(nil), Secret:(*core.SecretVolumeSource)(nil), NFS:(*core.NFSVolumeSource)(nil), ISCSI:(*core.ISCSIVolumeSource)(nil), Glusterfs:(*core.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*core.PersistentVolumeClaimVolumeSource)(nil), RBD:(*core.RBDVolumeSource)(nil), Quobyte:(*core.QuobyteVolumeSource)(nil), FlexVolume:(*core.FlexVolumeSource)(nil), Cinder:(*core.CinderVolumeSource)(nil), CephFS:(*core.CephFSVolumeSource)(nil), Flocker:(*core.FlockerVolumeSource)(nil), DownwardAPI:(*core.DownwardAPIVolumeSource)(nil), FC:(*core.FCVolumeSource)(nil), AzureFile:(*core.AzureFileVolumeSource)(nil), ConfigMap:(*core.ConfigMapVolumeSource)(nil), VsphereVolume:(*core.VsphereVirtualDiskVolumeSource)(nil), AzureDisk:(*core.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*core.PhotonPersistentDiskVolumeSource)(nil), Projected:(*core.ProjectedVolumeSource)(nil), PortworxVolume:(*core.PortworxVolumeSource)(nil), ScaleIO:(*core.ScaleIOVolumeSource)(nil), StorageOS:(*core.StorageOSVolumeSource)(nil), CSI:(*core.CSIVolumeSource)(nil), Ephemeral:(*core.EphemeralVolumeSource)(nil)}}, core.Volume{Name:"srv-kubernetes", VolumeSource:core.VolumeSource{HostPath:(*core.HostPathVolumeSource)(0xc007c0b240), EmptyDir:(*core.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*core.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*core.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*core.GitRepoVolumeSource)(nil), Secret:(*core.SecretVolumeSource)(nil), NFS:(*core.NFSVolumeSource)(nil), ISCSI:(*core.ISCSIVolumeSource)(nil), Glusterfs:(*core.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*core.PersistentVolumeClaimVolumeSource)(nil), RBD:(*core.RBDVolumeSource)(nil), Quobyte:(*core.QuobyteVolumeSource)(nil), FlexVolume:(*core.FlexVolumeSource)(nil), Cinder:(*core.CinderVolumeSource)(nil), CephFS:(*core.CephFSVolumeSource)(nil), Flocker:(*core.FlockerVolumeSource)(nil), DownwardAPI:(*core.DownwardAPIVolumeSource)(nil), FC:(*core.FCVolumeSource)(nil), AzureFile:(*core.AzureFileVolumeSource)(nil), ConfigMap:(*core.ConfigMapVolumeSource)(nil), VsphereVolume:(*core.VsphereVirtualDiskVolumeSource)(nil), AzureDisk:(*core.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*core.PhotonPersistentDiskVolumeSource)(nil), Projected:(*core.ProjectedVolumeSource)(nil), PortworxVolume:(*core.PortworxVolumeSource)(nil), ScaleIO:(*core.ScaleIOVolumeSource)(nil), StorageOS:(*core.StorageOSVolumeSource)(nil), CSI:(*core.CSIVolumeSource)(nil), 
Ephemeral:(*core.EphemeralVolumeSource)(nil)}}, core.Volume{Name:"etc-kubernetes", VolumeSource:core.VolumeSource{HostPath:(*core.HostPathVolumeSource)(0xc007c0b260), EmptyDir:(*core.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*core.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*core.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*core.GitRepoVolumeSource)(nil), Secret:(*core.SecretVolumeSource)(nil), NFS:(*core.NFSVolumeSource)(nil), ISCSI:(*core.ISCSIVolumeSource)(nil), Glusterfs:(*core.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*core.PersistentVolumeClaimVolumeSource)(nil), RBD:(*core.RBDVolumeSource)(nil), Quobyte:(*core.QuobyteVolumeSource)(nil), FlexVolume:(*core.FlexVolumeSource)(nil), Cinder:(*core.CinderVolumeSource)(nil), CephFS:(*core.CephFSVolumeSource)(nil), Flocker:(*core.FlockerVolumeSource)(nil), DownwardAPI:(*core.DownwardAPIVolumeSource)(nil), FC:(*core.FCVolumeSource)(nil), AzureFile:(*core.AzureFileVolumeSource)(nil), ConfigMap:(*core.ConfigMapVolumeSource)(nil), VsphereVolume:(*core.VsphereVirtualDiskVolumeSource)(nil), AzureDisk:(*core.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*core.PhotonPersistentDiskVolumeSource)(nil), Projected:(*core.ProjectedVolumeSource)(nil), PortworxVolume:(*core.PortworxVolumeSource)(nil), ScaleIO:(*core.ScaleIOVolumeSource)(nil), StorageOS:(*core.StorageOSVolumeSource)(nil), CSI:(*core.CSIVolumeSource)(nil), Ephemeral:(*core.EphemeralVolumeSource)(nil)}}, core.Volume{Name:"usr-bin", VolumeSource:core.VolumeSource{HostPath:(*core.HostPathVolumeSource)(0xc007c0b4e0), EmptyDir:(*core.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*core.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*core.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*core.GitRepoVolumeSource)(nil), Secret:(*core.SecretVolumeSource)(nil), NFS:(*core.NFSVolumeSource)(nil), ISCSI:(*core.ISCSIVolumeSource)(nil), Glusterfs:(*core.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*core.PersistentVolumeClaimVolumeSource)(nil), RBD:(*core.RBDVolumeSource)(nil), Quobyte:(*core.QuobyteVolumeSource)(nil), FlexVolume:(*core.FlexVolumeSource)(nil), Cinder:(*core.CinderVolumeSource)(nil), CephFS:(*core.CephFSVolumeSource)(nil), Flocker:(*core.FlockerVolumeSource)(nil), DownwardAPI:(*core.DownwardAPIVolumeSource)(nil), FC:(*core.FCVolumeSource)(nil), AzureFile:(*core.AzureFileVolumeSource)(nil), ConfigMap:(*core.ConfigMapVolumeSource)(nil), VsphereVolume:(*core.VsphereVirtualDiskVolumeSource)(nil), AzureDisk:(*core.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*core.PhotonPersistentDiskVolumeSource)(nil), Projected:(*core.ProjectedVolumeSource)(nil), PortworxVolume:(*core.PortworxVolumeSource)(nil), ScaleIO:(*core.ScaleIOVolumeSource)(nil), StorageOS:(*core.StorageOSVolumeSource)(nil), CSI:(*core.CSIVolumeSource)(nil), Ephemeral:(*core.EphemeralVolumeSource)(nil)}}, core.Volume{Name:"etc-cni-netd", VolumeSource:core.VolumeSource{HostPath:(*core.HostPathVolumeSource)(0xc007c0b500), EmptyDir:(*core.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*core.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*core.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*core.GitRepoVolumeSource)(nil), Secret:(*core.SecretVolumeSource)(nil), NFS:(*core.NFSVolumeSource)(nil), ISCSI:(*core.ISCSIVolumeSource)(nil), Glusterfs:(*core.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*core.PersistentVolumeClaimVolumeSource)(nil), RBD:(*core.RBDVolumeSource)(nil), Quobyte:(*core.QuobyteVolumeSource)(nil), FlexVolume:(*core.FlexVolumeSource)(nil), 
Cinder:(*core.CinderVolumeSource)(nil), CephFS:(*core.CephFSVolumeSource)(nil), Flocker:(*core.FlockerVolumeSource)(nil), DownwardAPI:(*core.DownwardAPIVolumeSource)(nil), FC:(*core.FCVolumeSource)(nil), AzureFile:(*core.AzureFileVolumeSource)(nil), ConfigMap:(*core.ConfigMapVolumeSource)(nil), VsphereVolume:(*core.VsphereVirtualDiskVolumeSource)(nil), AzureDisk:(*core.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*core.PhotonPersistentDiskVolumeSource)(nil), Projected:(*core.ProjectedVolumeSource)(nil), PortworxVolume:(*core.PortworxVolumeSource)(nil), ScaleIO:(*core.ScaleIOVolumeSource)(nil), StorageOS:(*core.StorageOSVolumeSource)(nil), CSI:(*core.CSIVolumeSource)(nil), Ephemeral:(*core.EphemeralVolumeSource)(nil)}}, core.Volume{Name:"opt-cni-bin", VolumeSource:core.VolumeSource{HostPath:(*core.HostPathVolumeSource)(0xc007c0b520), EmptyDir:(*core.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*core.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*core.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*core.GitRepoVolumeSource)(nil), Secret:(*core.SecretVolumeSource)(nil), NFS:(*core.NFSVolumeSource)(nil), ISCSI:(*core.ISCSIVolumeSource)(nil), Glusterfs:(*core.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*core.PersistentVolumeClaimVolumeSource)(nil), RBD:(*core.RBDVolumeSource)(nil), Quobyte:(*core.QuobyteVolumeSource)(nil), FlexVolume:(*core.FlexVolumeSource)(nil), Cinder:(*core.CinderVolumeSource)(nil), CephFS:(*core.CephFSVolumeSource)(nil), Flocker:(*core.FlockerVolumeSource)(nil), DownwardAPI:(*core.DownwardAPIVolumeSource)(nil), FC:(*core.FCVolumeSource)(nil), AzureFile:(*core.AzureFileVolumeSource)(nil), ConfigMap:(*core.ConfigMapVolumeSource)(nil), VsphereVolume:(*core.VsphereVirtualDiskVolumeSource)(nil), AzureDisk:(*core.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*core.PhotonPersistentDiskVolumeSource)(nil), Projected:(*core.ProjectedVolumeSource)(nil), PortworxVolume:(*core.PortworxVolumeSource)(nil), ScaleIO:(*core.ScaleIOVolumeSource)(nil), StorageOS:(*core.StorageOSVolumeSource)(nil), CSI:(*core.CSIVolumeSource)(nil), Ephemeral:(*core.EphemeralVolumeSource)(nil)}}}, InitContainers:[]core.Container(nil), Containers:[]core.Container{core.Container{Name:"kube-bench", Image:"aquasec/kube-bench:latest", Command:[]string{"kube-bench"}, Args:[]string(nil), WorkingDir:"", Ports:[]core.ContainerPort(nil), EnvFrom:[]core.EnvFromSource(nil), Env:[]core.EnvVar(nil), Resources:core.ResourceRequirements{Limits:core.ResourceList(nil), Requests:core.ResourceList(nil)}, VolumeMounts:[]core.VolumeMount{core.VolumeMount{Name:"var-lib-etcd", ReadOnly:true, MountPath:"/var/lib/etcd", SubPath:"", MountPropagation:(*core.MountPropagationMode)(nil), SubPathExpr:""}, core.VolumeMount{Name:"var-lib-kubelet", ReadOnly:true, MountPath:"/var/lib/kubelet", SubPath:"", MountPropagation:(*core.MountPropagationMode)(nil), SubPathExpr:""}, core.VolumeMount{Name:"var-lib-kube-scheduler", ReadOnly:true, MountPath:"/var/lib/kube-scheduler", SubPath:"", MountPropagation:(*core.MountPropagationMode)(nil), SubPathExpr:""}, core.VolumeMount{Name:"var-lib-kube-controller-manager", ReadOnly:true, MountPath:"/var/lib/kube-controller-manager", SubPath:"", MountPropagation:(*core.MountPropagationMode)(nil), SubPathExpr:""}, core.VolumeMount{Name:"etc-systemd", ReadOnly:true, MountPath:"/etc/systemd", SubPath:"", MountPropagation:(*core.MountPropagationMode)(nil), SubPathExpr:""}, core.VolumeMount{Name:"lib-systemd", ReadOnly:true, MountPath:"/lib/systemd/", SubPath:"", 
MountPropagation:(*core.MountPropagationMode)(nil), SubPathExpr:""}, core.VolumeMount{Name:"srv-kubernetes", ReadOnly:true, MountPath:"/srv/kubernetes/", SubPath:"", MountPropagation:(*core.MountPropagationMode)(nil), SubPathExpr:""}, core.VolumeMount{Name:"etc-kubernetes", ReadOnly:true, MountPath:"/etc/kubernetes", SubPath:"", MountPropagation:(*core.MountPropagationMode)(nil), SubPathExpr:""}, core.VolumeMount{Name:"usr-bin", ReadOnly:true, MountPath:"/usr/local/mount-from-host/bin", SubPath:"", MountPropagation:(*core.MountPropagationMode)(nil), SubPathExpr:""}, core.VolumeMount{Name:"etc-cni-netd", ReadOnly:true, MountPath:"/etc/cni/net.d/", SubPath:"", MountPropagation:(*core.MountPropagationMode)(nil), SubPathExpr:""}, core.VolumeMount{Name:"opt-cni-bin", ReadOnly:true, MountPath:"/opt/cni/bin/", SubPath:"", MountPropagation:(*core.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]core.VolumeDevice(nil), LivenessProbe:(*core.Probe)(nil), ReadinessProbe:(*core.Probe)(nil), StartupProbe:(*core.Probe)(nil), Lifecycle:(*core.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*core.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]core.EphemeralContainer(nil), RestartPolicy:"Never", TerminationGracePeriodSeconds:(*int64)(0xc01ba2e338), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", SecurityContext:(*core.PodSecurityContext)(0xc01905b480), ImagePullSecrets:[]core.LocalObjectReference{core.LocalObjectReference{Name:"zaasking"}}, Hostname:"", Subdomain:"", SetHostnameAsFQDN:(*bool)(nil), Affinity:(*core.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]core.Toleration(nil), HostAliases:[]core.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), PreemptionPolicy:(*core.PreemptionPolicy)(nil), DNSConfig:(*core.PodDNSConfig)(nil), ReadinessGates:[]core.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), Overhead:core.ResourceList(nil), EnableServiceLinks:(*bool)(nil), TopologySpreadConstraints:[]core.TopologySpreadConstraint(nil)}}: field is immutable

What the heck ?

Then I read this: -


which says, in part: -

This is not a bug, you can't change the template after the job has been created.

Ah!!!

I listed my jobs: -

kubectl get jobs -A

NAMESPACE          NAME                                                  COMPLETIONS   DURATION   AGE
default            kube-bench                                            0/1           117m       117m
tekton-pipelines   pw-3282a77a-680d-4731-a3f0-6c73f67ddb42-cleaner-job   0/1           6d20h      6d20h
tekton-pipelines   pw-54d9eabf-ade2-47ae-bfa1-d1b7b3be2240-cleaner-job   0/1           6d20h      6d20h
tekton-pipelines   pw-b12cc256-3ff6-445a-86f1-1b0e1ef6ed08-cleaner-job   1/1           3s         5d21h

So the Job already exists - how can I change the job description for an in-flight Job ?

Oh, yeah, I can just delete it ...

kubectl delete job kube-bench

job.batch "kube-bench" deleted

kubectl get jobs -A

NAMESPACE          NAME                                                  COMPLETIONS   DURATION   AGE
tekton-pipelines   pw-3282a77a-680d-4731-a3f0-6c73f67ddb42-cleaner-job   0/1           6d20h      6d20h
tekton-pipelines   pw-54d9eabf-ade2-47ae-bfa1-d1b7b3be2240-cleaner-job   0/1           6d20h      6d20h
tekton-pipelines   pw-b12cc256-3ff6-445a-86f1-1b0e1ef6ed08-cleaner-job   1/1           3s         5d21h

kubectl apply -f job.yaml

job.batch/kube-bench created

kubectl get jobs

NAME         COMPLETIONS   DURATION   AGE
kube-bench   0/1           115s       115s

kubectl get pods

NAME                           READY   STATUS             RESTARTS   AGE
el-listener-867d86b9f7-gllf6   0/1     CrashLoopBackOff   23         28h
kube-bench-qhg82               1/1     Running            0          2m1s

Nice!
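For subsequent runs, the delete-then-apply dance can be collapsed into a one-liner; a sketch, assuming the Job lives in the default namespace: -

kubectl delete job kube-bench --ignore-not-found && kubectl apply -f job.yaml

and the benchmark report itself can then be read with: -

kubectl logs job/kube-bench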

Wednesday, 17 March 2021

Not in Kansas anymore - apparently I don't exist

Whilst trying to upgrade the Tekton CLI tool tkn using Homebrew: -

brew upgrade tektoncd/tools/tektoncd-cli

==> Upgrading 1 outdated package:

tektoncd/tools/tektoncd-cli 0.8.0 -> 0.15.0

==> Upgrading tektoncd/tools/tektoncd-cli 0.8.0 -> 0.15.0 

==> Downloading https://github.com/tektoncd/cli/releases/download/v0.15.0/tkn_0.15.0_Darwin_x86_64.tar.gz

==> Downloading from https://github-releases.githubusercontent.com/181939372/9d001b00-4060-11eb-9efa-57717f8e92f7?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJY

######################################################################## 100.0%

Error: An exception occurred within a child process:

  ArgumentError: user david_hay@uk.ibm.com doesn't exist

To narrow things down, I ran: -

brew upgrade

but that failed with much the same error: -

==> Upgrading tektoncd/tools/tektoncd-cli 0.8.0 -> 0.15.0 
==> Downloading https://github.com/tektoncd/cli/releases/download/v0.15.0/tkn_0.15.0_Darwin_x86_64.tar.gz
Already downloaded: /Users/hayd/Library/Caches/Homebrew/downloads/d911addd12ba79ea06e5d8d0002a0523ec0b53d6aa5407a94f6e9836f3ff6fa5--tkn_0.15.0_Darwin_x86_64.tar.gz
Error: An exception occurred within a child process:
  ArgumentError: user david_hay@uk.ibm.com doesn't exist

Following this: -


I checked my environment: -

printenv | grep david_hay@uk.ibm.com


which returned: -

USER=david_hay@uk.ibm.com

Given that my macOS user is actually hayd, I suspect that this is the issue.
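A quick way to compare the two is to ask the OS and the environment separately: -

whoami

printenv USER

the former reports the real login name ( hayd, in my case ), whilst the latter just echoes whatever $USER happens to be set to.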

I reset that environment variable: -

export USER=hayd

and tried again: -

brew upgrade tektoncd/tools/tektoncd-cli

Updating Homebrew...
==> Auto-updated Homebrew!
Updated 1 tap (homebrew/core).
==> Updated Formulae
Updated 1 formula.

==> Upgrading 1 outdated package:
tektoncd/tools/tektoncd-cli 0.8.0 -> 0.15.0
==> Upgrading tektoncd/tools/tektoncd-cli 0.8.0 -> 0.15.0 
==> Downloading https://github.com/tektoncd/cli/releases/download/v0.15.0/tkn_0.15.0_Darwin_x86_64.tar.gz
Already downloaded: /Users/hayd/Library/Caches/Homebrew/downloads/d911addd12ba79ea06e5d8d0002a0523ec0b53d6aa5407a94f6e9836f3ff6fa5--tkn_0.15.0_Darwin_x86_64.tar.gz
==> Caveats
Bash completion has been installed to:
  /usr/local/etc/bash_completion.d
==> Summary
🍺  /usr/local/Cellar/tektoncd-cli/0.15.0: 8 files, 43.7MB, built in 7 seconds
Removing: /usr/local/Cellar/tektoncd-cli/0.8.0... (8 files, 32.4MB)

Job done!

For the record, tkn was back-level: -

tkn version

Client version: 0.8.0
Pipeline version: unknown

and is now up-to-date: -

tkn version

Client version: 0.15.0
Pipeline version: v0.22.0
Triggers version: v0.12.1

( which, usefully, also reports the versions of Tekton Pipelines and Triggers deployed to my cluster )




Tuesday, 16 March 2021

Two of my favourite things - Kubernetes and jq

 As per recent posts, I've been falling in love with jq and using it for more and more and more ....

Today, it's Kubernetes meets jq

Specifically, working with secrets ... and following on from an earlier post: -

Gah, again with the ImagePullBackOff

So I'm creating a dummy secret, wrapped around some credentials for an IBM Container Registry (ICR) instance: -

kubectl create secret docker-registry foobar --docker-server='https://us.icr.io' --docker-username='iamapikey' --docker-password='THIS_IS_NOT_A_VALID_APIKEY'

secret/foobar created

Now the secret of type docker-registry essentially wraps the credentials ( whether it be for Docker Hub or an ICR instance or similar ) up in a Base64-encoded "blob", as evidenced by the query: -

kubectl get secret foobar --output json

{
    "apiVersion": "v1",
    "data": {
        ".dockerconfigjson": "eyJhdXRocyI6eyJodHRwczovL3VzLmljci5pbyI6eyJ1c2VybmFtZSI6ImlhbWFwaWtleSIsInBhc3N3b3JkIjoiVEhJU19JU19OT1RfQV9WQUxJRF9BUElLRVkiLCJhdXRoIjoiYVdGdFlYQnBhMlY1T2xSSVNWTmZTVk5mVGs5VVgwRmZWa0ZNU1VSZlFWQkpTMFZaIn19fQ=="
    },
    "kind": "Secret",
    "metadata": {
        "creationTimestamp": "2021-03-16T16:57:52Z",
        "managedFields": [
            {
                "apiVersion": "v1",
                "fieldsType": "FieldsV1",
                "fieldsV1": {
                    "f:data": {
                        ".": {},
                        "f:.dockerconfigjson": {}
                    },
                    "f:type": {}
                },
                "manager": "kubectl-create",
                "operation": "Update",
                "time": "2021-03-16T16:57:52Z"
            }
        ],
        "name": "foobar",
        "namespace": "default",
        "resourceVersion": "19983",
        "selfLink": "/api/v1/namespaces/default/secrets/foobar",
        "uid": "26bdd49b-49c6-4133-a331-3e9cb6150a26"
    },
    "type": "kubernetes.io/dockerconfigjson"
}

so we can quickly inspect ( decode ) the secret, using jq to parse the output: -

kubectl get secret foobar --output json | jq -r .data[] | base64 -d

{"auths":{"https://us.icr.io":{"username":"iamapikey","password":"THIS_IS_NOT_A_VALID_APIKEY","auth":"aWFtYXBpa2V5OlRISVNfSVNfTk9UX0FfVkFMSURfQVBJS0VZ"}}}

This is a useful way to check input against output, and thus avoid GIGO.
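As an alternative to jq, kubectl's own JSONPath support can pull out the same field - note the escaped dot in the key name: -

kubectl get secret foobar --output jsonpath='{.data.\.dockerconfigjson}' | base64 -d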

The other jq-related tip is in the context of Calico Node, where I was looking to inspect a DaemonSet to grab one specific data element - the environment variable known as IP_AUTODETECTION_METHOD - which, in short, can be used to ensure that the Calico Node pods use a specific network adapter inside each of the K8s nodes.

So I'd take a generic kubectl command such as: -

kubectl get daemonset/calico-node -n kube-system --output json

and then parse the output to retrieve only the IP_AUTODETECTION_METHOD element ( the filter itself is sketched after the output below ): -

{
  "name": "IP_AUTODETECTION_METHOD",
  "value": "interface=eth.*"
}
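A jq filter along these lines produces that output ( a sketch - the env list hangs off the calico-node container within the DaemonSet's pod template ): -

kubectl get daemonset/calico-node -n kube-system --output json | jq '.spec.template.spec.containers[].env[]? | select(.name == "IP_AUTODETECTION_METHOD")'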

I could, if I so chose, override that by editing the daemonset : -

kubectl set env daemonset/calico-node -n kube-system IP_AUTODETECTION_METHOD=interface=eth0

and then re-run the kubectl get command to check that the value had changed ....

Nice !

Thursday, 4 March 2021

Gah, again with the ImagePullBackOff

 So, following on from this: -

Gah, ImagePullBackOff with Calico CNI running on Kubernetes

I was again seeing this: -

kube-system   calico-node-lxmk4                          0/1     Init:ImagePullBackOff   0          5m26s

and, upon further digging: -

kubectl describe pod calico-node-lxmk4 --namespace kube-system

Type     Reason     Age                    From                   Message
----     ------     ----                   ----                   -------
Normal   Scheduled  5m47s                  default-scheduler      Successfully assigned kube-system/calico-node-lxmk4 to 667ceb40fc75
Normal   Pulling    4m24s (x4 over 5m46s)  kubelet, 667ceb40fc75  Pulling image "us.icr.io/mynamespace/calico/cni:v3.16.5"
Warning  Failed     4m23s (x4 over 5m45s)  kubelet, 667ceb40fc75  Failed to pull image "us.icr.io/mynamespace/calico/cni:v3.16.5": rpc error: code = Unknown desc = Error response from daemon: Get https://us.icr.io/v2/mynamespace/calico/cni/manifests/v3.16.5: unauthorized: The login credentials are not valid, or your IBM Cloud account is not active.
Warning  Failed     4m23s (x4 over 5m45s)  kubelet, 667ceb40fc75  Error: ErrImagePull
Warning  Failed     3m57s (x7 over 5m45s)  kubelet, 667ceb40fc75  Error: ImagePullBackOff
Normal   BackOff    46s (x21 over 5m45s)   kubelet, 667ceb40fc75  Back-off pulling image "us.icr.io/mynamespace/calico/cni:v3.16.5"

Note that my images are coming from IBM Container Registry, rather than Docker Hub, and that's the key .....

I was following this: -


which describes how one can generate a K8s secret from an existing docker login by grabbing the content of ~/.docker/config.json

Therefore, I was doing this: -

kubectl create secret generic regcred --from-file=.dockerconfigjson=/root/.docker/config.json --type=kubernetes.io/dockerconfigjson

having previously logged in: -

echo "<MY API KEY>" | docker login -u iamapikey --password-stdin us.icr.io

which creates/updates /root/.docker/config.json

And that's where I was failing .....

Finally, after a few hours of head-banging, I looked back through my notes and realised that, for previous activities, including Tekton Pipelines / Triggers, I used a different approach to generate the secret: -

kubectl create secret docker-registry regcred --namespace kube-system --docker-server='https://us.icr.io' --docker-username='iamapikey' --docker-password='<MY API KEY>'

And, of course, it worked .....
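If in doubt, the resulting Secret can be decoded to confirm that it really does contain an auth entry for us.icr.io, using much the same jq trick as earlier: -

kubectl get secret regcred --namespace kube-system --output json | jq -r '.data[".dockerconfigjson"]' | base64 -d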

Every day is ......

Tuesday, 2 March 2021

Fun with IBM Container Registry, Vulnerability Advisor and Nginx

 So I'm tinkering with IBM Container Registry (ICR) at present, and am testing the Vulnerability Advisor (VA) feature, by building/tagging/pushing a basic Nginx image.

Having configured my Nginx server for HTTPS ( HTTP over TLS ) - or so I thought - I was baffled that VA kept throwing up configuration errors: -

The scan results show that 5 ISSUES were found for the image.
Configuration Issues Found
==========================
Configuration Issue ID                                Policy Status   Security Practice                                  How to Resolve   
application_configuration:nginx.ssl_certificate_key   Active          Specifies the private key file for server cert.    ssl_certificate_key is not present in   
                                                                                                                         /etc/nginx/nginx.conf or   
                                                                                                                         /etc/nginx/sites-enabled/default.   
application_configuration:nginx.ssl_ciphers           Active          Specifies ciphers used in TLS.                     ssl_ciphers is not present in   
                                                                                                                         /etc/nginx/nginx.conf or   
                                                                                                                         /etc/nginx/sites-enabled/default. Defaults may not   
                                                                                                                         be secure.   
application_configuration:nginx.server_tokens         Active          Enables or disables emitting nginx version in      server_tokens is present but value is off. nginx   
                                                                      error messages and in the Server response header   will sends its version in HTTP responses which can   
                                                                      field.                                             be used by attackers for version-specific attacks   
                                                                                                                         against this nginx server.   
                                                                                                                         File: /etc/nginx/nginx.conf   
application_configuration:nginx.ssl_protocols         Active          Enables the specified protocols.                   ssl_protocols is not present in   
                                                                                                                         /etc/nginx/nginx.conf or   
                                                                                                                         /etc/nginx/sites-enabled/default.   
application_configuration:nginx.ssl_certificate       Active          Specifies a file with the certificate in the PEM   ssl_certificate is not present in   
                                                                      format for the given virtual server.               /etc/nginx/nginx.conf or   
                                                                                                                         /etc/nginx/sites-enabled/default.   
OK

even though I thought I'd configured Nginx to support the required configuration items e.g. server_tokens and ssl_protocols etc.

Well, I kinda had ....

I'd added these items: -

ssl_certificate     /etc/nginx/nginx.crt;
ssl_certificate_key /etc/nginx/nginx.key;
ssl_ciphers         EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH;
ssl_protocols       TLSv1.2;
ssl_prefer_server_ciphers   on;
server_tokens       on;
 
into nginx.conf BUT in the wrong place.

I had them in the http{} section rather than in the server{} section.

After some further digging, I realised that all but server_tokens should go in the server{} block, so we end up with this: -

user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;

    server_tokens   off;
 
    server {
        listen                      443 ssl default_server;
        listen                      [::]:443 ssl default_server ;
        server_name                 example.com www.example.com;
        root                        /usr/share/nginx/html;
        ssl_certificate             /etc/nginx/nginx.crt;
        ssl_certificate_key         /etc/nginx/nginx.key;
        ssl_ciphers                 EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH;
        ssl_protocols               TLSv1.2;
        ssl_prefer_server_ciphers   on;
    }
}

and, more importantly, this: -

The scan results show that NO ISSUES were found for the image.

OK
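As an aside, it's worth syntax-checking the edited configuration before building and pushing the image; a minimal sketch using the stock nginx image, assuming nginx.conf plus the referenced certificate and key sit in the current directory: -

docker run --rm -v $(pwd)/nginx.conf:/etc/nginx/nginx.conf:ro -v $(pwd)/nginx.crt:/etc/nginx/nginx.crt:ro -v $(pwd)/nginx.key:/etc/nginx/nginx.key:ro nginx nginx -t

( nginx -t only validates the syntax and file references, of course - it won't tell you whether VA is happy with where the directives live )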

For further reading, there's a useful tutorial covering ICR and VA here: -

Tuesday, 23 February 2021

Munging Dockerfiles using sed

 So I had a requirement to update a Dockerfile, which I'd pulled from a GitHub repository, without actually adding my changes ( via git add and git commit ) to that repo ...

Specifically, I wanted to add a command to update the to-be-built Alpine image.

Here's how I solved it ...

I'd cloned the target repo, which included the Dockerfile ( example below ): -

FROM alpine:3.7
RUN apk add --no-cache mysql-client
ENTRYPOINT ["mysql"]

At present, this is what happens when I build the image: -

docker build -f Dockerfile .

Sending build context to Docker daemon  2.048kB
Step 1/3 : FROM alpine:3.7
3.7: Pulling from library/alpine
5d20c808ce19: Pull complete 
Digest: sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10
Status: Downloaded newer image for alpine:3.7
 ---> 6d1ef012b567
Step 2/3 : RUN apk add --no-cache mysql-client
 ---> Running in 07bc9ca0e14a
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz
(1/6) Installing mariadb-common (10.1.41-r0)
(2/6) Installing ncurses-terminfo-base (6.0_p20171125-r1)
(3/6) Installing ncurses-terminfo (6.0_p20171125-r1)
(4/6) Installing ncurses-libs (6.0_p20171125-r1)
(5/6) Installing mariadb-client (10.1.41-r0)
(6/6) Installing mysql-client (10.1.41-r0)
Executing busybox-1.27.2-r11.trigger
OK: 41 MiB in 19 packages
Removing intermediate container 07bc9ca0e14a
 ---> 43862371f8a4
Step 3/3 : ENTRYPOINT ["mysql"]
 ---> Running in d8b08c967cc1
Removing intermediate container d8b08c967cc1
 ---> 1ee30800ffbd
Successfully built 1ee30800ffbd

I wanted to add a command to run apk upgrade before the ENTRYPOINT entry ( so, in essence, inserting it between lines 2 and 3 )

This is what I did: -

sed -i '' -e "2s/^//p; 2s/^.*/RUN apk --no-cache upgrade/" Dockerfile

which results in: -

FROM alpine:3.7
RUN apk add --no-cache mysql-client
RUN apk --no-cache upgrade
ENTRYPOINT ["mysql"]

Now the build looks like this: -

docker build -f Dockerfile .

Sending build context to Docker daemon  2.048kB
Step 1/4 : FROM alpine:3.7
3.7: Pulling from library/alpine
5d20c808ce19: Pull complete 
Digest: sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10
Status: Downloaded newer image for alpine:3.7
 ---> 6d1ef012b567
Step 2/4 : RUN apk add --no-cache mysql-client
 ---> Running in b29668e70377
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz
(1/6) Installing mariadb-common (10.1.41-r0)
(2/6) Installing ncurses-terminfo-base (6.0_p20171125-r1)
(3/6) Installing ncurses-terminfo (6.0_p20171125-r1)
(4/6) Installing ncurses-libs (6.0_p20171125-r1)
(5/6) Installing mariadb-client (10.1.41-r0)
(6/6) Installing mysql-client (10.1.41-r0)
Executing busybox-1.27.2-r11.trigger
OK: 41 MiB in 19 packages
Removing intermediate container b29668e70377
 ---> 971a3d538edf
Step 3/4 : RUN apk --no-cache upgrade
 ---> Running in 8dfa10b481ad
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz
(1/2) Upgrading musl (1.1.18-r3 -> 1.1.18-r4)
(2/2) Upgrading musl-utils (1.1.18-r3 -> 1.1.18-r4)
Executing busybox-1.27.2-r11.trigger
OK: 41 MiB in 19 packages
Removing intermediate container 8dfa10b481ad
 ---> 35d7cbec77c0
Step 4/4 : ENTRYPOINT ["mysql"]
 ---> Running in c0d7d310d396
Removing intermediate container c0d7d310d396
 ---> 4ad02b88f4d4
Successfully built 4ad02b88f4d4

In essence, the sed command duplicates line 2: -

RUN apk add --no-cache mysql-client

as line 3, and then replaces the newly duplicated text with the required replacement: -

RUN apk --no-cache upgrade

Neat, eh ?
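One caveat: the empty '' after -i is the BSD / macOS flavour of sed; on a GNU / Linux box the equivalent ( a sketch - same expression, just no empty suffix argument ) would be: -

sed -i -e "2s/^//p; 2s/^.*/RUN apk --no-cache upgrade/" Dockerfile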

I've got this in a script, and it works a treat .....

Wednesday, 17 February 2021

Tinkering with YQ, it's like JQ but for YAML rather than JSON

 A colleague was tinkering with Mike Farah's excellent yq tool, and had asked about updating it on his Mac.

I've got yq installed via Homebrew, so to update it I first ran: -

brew update

which threw up: -

Error: 

  homebrew-core is a shallow clone.

To `brew update`, first run:

  git -C /usr/local/Homebrew/Library/Taps/homebrew/homebrew-core fetch --unshallow

so I did this: -

git -C /usr/local/Homebrew/Library/Taps/homebrew/homebrew-core fetch --unshallow

and then: -

brew upgrade yq

which resulted in the latest version: -

yq --version

yq version 4.5.1

Subsequently, my friend was looking to use the evaluate feature of yq to run queries against YAML documents, with the query string being defined in an environment variable.

Therefore, I had to try this ...

Firstly, I created sample.yaml 

name:
 - Ben
 - Dave
 - Tom
 - Jerry

which yq correctly evaluates: -

yq eval sample.yaml

name:
  - Ben
  - Dave
  - Tom
  - Jerry

and: -

yq eval .name sample.yaml

- Ben
- Dave
- Tom
- Jerry

I then set the environment variable to define my search query: -

export FRIEND="Ben"

and updated my yq query: -

yq eval ".name = \"$FRIEND\"" sample.yaml

name: Ben

and then changed my variable: -

export FRIEND="Tom"

and re-ran my query: -

yq eval ".name = \"$FRIEND\"" sample.yaml

name: Tom
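Newer yq 4 releases also offer an env() operator, which avoids the shell-escaping dance altogether; a sketch, assuming the FRIEND variable is still exported: -

yq eval '.name = env(FRIEND)' sample.yaml

which should give the same name: Tom result.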

Thanks to this: -



https://mikefarah.gitbook.io/yq/commands/evaluate


Monday, 15 February 2021

JQ - more filtering, more fun

 Building upon earlier posts, I wanted to streamline the output from a REST API that returns a list of running containers: -

curl -s -k -X GET https://my.endpoint.com/containers -H 'Accept: application/json' -H 'Content-Type: application/json' -H 'Authorization: Bearer '"$ACCESS_TOKEN" | jq '.data[] | select (.Names[] | contains("dave"))' | jq .Names

[
  "/davehay_k8s_worker_1"
]
[
  "/davehay_k8s_master"
]
[
  "/davehay_k8s_worker_2"
]

Specifically, I wanted to remove the extraneous square brackets and double quotes ...

Here we go ...

curl -s -k -X GET https://my.endpoint.com/containers -H 'Accept: application/json' -H 'Content-Type: application/json' -H 'Authorization: Bearer '"$ACCESS_TOKEN"  | jq '.data[] | select (.Names[] | contains("dave"))' | jq -r .Names[]

/davehay_k8s_worker_1
/davehay_k8s_master
/davehay_k8s_worker_2

As with everything, there's a better way ....
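For example, the two jq invocations can be collapsed into one: -

curl -s -k -X GET https://my.endpoint.com/containers -H 'Accept: application/json' -H 'Content-Type: application/json' -H 'Authorization: Bearer '"$ACCESS_TOKEN" | jq -r '.data[] | select (.Names[] | contains("dave")) | .Names[]'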

But this works for me :-)

I learn something new each and every day - finding big files on macOS

 This came up in the context of a colleague trying to work out what's eating his Mac's disk.

Whilst I'm familiar with using the built-in Storage Management app, which includes a Recommendations tab,
I'd not realised that there's an easy way to look for files, based upon size, using the mdfind command: -

mdfind "kMDItemFSSize >$[1024*1024*1024]"

/System/Library/dyld/dyld_shared_cache_x86_64
/System/Library/dyld/dyld_shared_cache_x86_64h
/System/Library/dyld/dyld_shared_cache_arm64e
/System/Library/dyld/aot_shared_cache
/Applications/HCL Notes.app
/Applications/Docker.app
/Users/hayd/Virtual Machines.localized/Ubuntu.vmwarevm
/Users/hayd/Library/Containers/com.docker.docker/Data/vms/0/data/Docker.raw
/Applications/Microsoft Excel.app
/Applications/iMovie.app
/Applications/Microsoft Word.app
/Applications/Microsoft PowerPoint.app
/Applications/VMware Fusion.app
/Users/hayd/Virtual Machines.localized/Windows10.vmwarevm

Now, my example is looking for really large files - greater than 1024^3 bytes - or, to be more specific, files exceeding 1 GB in size - but it's good to know .....
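For completeness, the old-school equivalent ( which also works on Linux, albeit more slowly, since it walks the filesystem rather than querying Spotlight's index ): -

find ~ -type f -size +1G -exec ls -lh {} \; 2>/dev/null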

Thursday, 11 February 2021

Gah, ImagePullBackOff with Calico CNI running on Kubernetes

Whilst deploying Calico Node etc. to my cluster, via: -

kubectl apply -f calico.yaml

and whilst checking the running Pods, via: -

kubectl get pods -A

I was seeing: -

...

kube-system   calico-node-9srv6                         0/1     Init:ErrImagePull   0          8s

...

I dug into that failing Pod with: -

kubectl describe pod calico-node-9srv6 --namespace kube-system

which showed: -

...
  Type     Reason   Age                    From                   Message
  ----     ------   ----                   ----                   -------
  Normal   BackOff  17m (x303 over 88m)    kubelet, 50c933ad26be  Back-off pulling image "us.icr.io/davehay/calico/cni:latest-s390x"
  Warning  Failed   2m56s (x368 over 88m)  kubelet, 50c933ad26be  Error: ImagePullBackOff
...


Now I knew that it wasn't an authentication issue, as my YAML was also defining a Secret, as per my previous post: -


and had defined that Secret within the YAML, using: -

...
imagePullSecrets:
- name: my_secret
...

So why wasn't it working .... ?

And then it struck me .... DOH!

My Pod is running inside the kube-system Namespace ....

You know where I'm going with this, am I right ?

Yes, my Secret was NOT inside the same kube-system Namespace, but was in default.
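An easy way to confirm where a Secret actually lives is: -

kubectl get secrets --all-namespaces | grep my_secret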

Once I updated my YAML to redefine my Secret: -

...
apiVersion: v1
kind: Secret
data:
  .dockerconfigjson:
    <HERE'S THE BASE64 ENCODED STUFF>
metadata:
  name: my_secret
  namespace: kube-system
type: kubernetes.io/dockerconfigjson
...

re-applied the YAML, and deleted the failing Pod - and all was well.

Wednesday, 10 February 2021

Argh, Kubernetes and YAML hell

I was trying to create a Kubernetes (K8s) Secret, containing existing Docker credentials, as per this: -

Create a Secret based on existing Docker credentials

and kept hitting syntax errors with the YAML.

For reference, in this scenario, we've already logged into a container registry, such as IBM Container Registry or Docker Hub, and want to grab the credentials that Docker itself "caches" in ~/.docker/config.json

Wait, what ? You didn't know that Docker helpfully does that ? Another good reason to NOT leave yourself logged into a container registry when you step away from your box ....

Anyhow, as per the above linked documentation, the trick is to encapsulate the content of that file, encoded using Base64, into a YAML file that looks something like this: -

---
apiVersion: v1
kind: Secret
data:
  .dockerconfigjson:
    <HERE'S THE BASE64 ENCODED STUFF>
metadata:
  name: my_secret
type: kubernetes.io/dockerconfigjson

The trick is to get the Base64 encoded stuff just right ....

I was doing this: -

cat ~/.docker/config.json | base64 

which resulted in: -

ewoJImF1dGhzIjoge30sCgkiSHR0cEhlYWRlcnMiOiB7CgkJIlVzZXItQWdlbnQiOiAiRG9ja2Vy
LUNsaWVudC8xOS4wMy42IChsaW51eCkiCgl9Cn0=

I kept seeing exceptions such as: -

error: error parsing secret.yaml: error converting YAML to JSON: yaml: line 7: could not find expected ':'

and: -

Error from server (BadRequest): error when creating "secret.yaml": Secret in version "v1" cannot be handled as a Secret: v1.Secret.ObjectMeta: v1.ObjectMeta.TypeMeta: Kind: Data: decode base64: illegal base64 data at input byte 76, error found in #10 byte of ...|BLAHBLAH=="},"kind":"|..., bigger context ...|BLAHBLAHBLAHBLAHBLAHBLAHBLAHBLAHBLAHBLAHBLAHBLAHBLAHBLAH=="},"kind":"Secret","metadata":{"annotations":{"kube|...

when I tried to apply the YAML: -

kubectl apply -f secret.yaml

And then I re-read the documentation, for the 11th time, and saw: -

base64 encode the docker file and paste that string, unbroken as the value for field data[".dockerconfigjson"]

Can you see what I was doing wrong ?

Yep, I wasn't "telling" the Base64 encoder to produce an unbroken ( and, more importantly, unwrapped ) string.

This time I did it right: -

cat ~/.docker/config.json | base64 --wrap=0

resulting in this: -

ewoJImF1dGhzIjoge30sCgkiSHR0cEhlYWRlcnMiOiB7CgkJIlVzZXItQWdlbnQiOiAiRG9ja2VyLUNsaWVudC8xOS4wMy42IChsaW51eCkiCgl9Cn0=root@379cd9170839:~# 

Having discarded the user@hostname stuff, I was left with this: -

ewoJImF1dGhzIjoge30sCgkiSHR0cEhlYWRlcnMiOiB7CgkJIlVzZXItQWdlbnQiOiAiRG9ja2VyLUNsaWVudC8xOS4wMy42IChsaW51eCkiCgl9Cn0=

I updated my YAML: -

---
apiVersion: v1
kind: Secret
data:
  .dockerconfigjson: ewoJImF1dGhzIjoge30sCgkiSHR0cEhlYWRlcnMiOiB7CgkJIlVzZXItQWdlbnQiOiAiRG9ja2VyLUNsaWVudC8xOS4wMy42IChsaW51eCkiCgl9Cn0=
metadata:
  name: my_secret
type: kubernetes.io/dockerconfigjson

and applied it: -

kubectl apply -f secret.yaml 

secret/armadamultiarch created

and we're off to the races!
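Incidentally, the same Kubernetes documentation also describes a way to sidestep the manual Base64 wrangling altogether, by letting kubectl read the Docker config file directly; a sketch, assuming the same ~/.docker/config.json and the same ( placeholder ) Secret name: -

kubectl create secret generic my_secret --from-file=.dockerconfigjson=$HOME/.docker/config.json --type=kubernetes.io/dockerconfigjson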

Wednesday, 27 January 2021

More about jq - this time it's searching for stuff

 Having written a lot about jq recently, I'm continuing to have fun.

Today it's about searching for stuff, as I was seeking to parse a huge amount of output ( a list of running containers ) for a snippet of the container's name ....

Here's an example of how I solved it ...

Take an example JSON document: -

cat family.json 

{
    "friends": [
        {
            "givenName": "Dave",
            "familyName": "Hay"
        },
        {
            "givenName": "Homer",
            "familyName": "Simpson"
        },
        {
            "givenName": "Marge",
            "familyName": "Simpson"
        },
        {
            "givenName": "Lisa",
            "familyName": "Simpson"
        },
        {
            "givenName": "Bart",
            "familyName": "Simpson"
        }
    ]
}

I can then use jq to dump out the entire document: -

cat family.json | jq

{
  "friends": [
    {
      "givenName": "Dave",
      "familyName": "Hay"
    },
    {
      "givenName": "Homer",
      "familyName": "Simpson"
    },
    {
      "givenName": "Marge",
      "familyName": "Simpson"
    },
    {
      "givenName": "Lisa",
      "familyName": "Simpson"
    },
    {
      "givenName": "Bart",
      "familyName": "Simpson"
    }
  ]
}

but what if, say, I want to find all the records where the familyName is Simpson ?

cat family.json | jq -c '.friends[] | select(.familyName | contains("Simpson"))'

{"givenName":"Homer","familyName":"Simpson"}
{"givenName":"Marge","familyName":"Simpson"}
{"givenName":"Lisa","familyName":"Simpson"}
{"givenName":"Bart","familyName":"Simpson"}

or all the records where the givenName contains the letter a ?

cat family.json | jq -c '.friends[] | select(.givenName | contains("a"))'

{"givenName":"Dave","familyName":"Hay"}
{"givenName":"Marge","familyName":"Simpson"}
{"givenName":"Lisa","familyName":"Simpson"}
{"givenName":"Bart","familyName":"Simpson"}

or, as an edge case, where the givenName contains the letter A or the letter a, i.e. ignoring the case ?

cat family.json | jq -c '.friends[] | select(.givenName | match("A";"i"))'

{"givenName":"Dave","familyName":"Hay"}
{"givenName":"Marge","familyName":"Simpson"}
{"givenName":"Lisa","familyName":"Simpson"}
{"givenName":"Bart","familyName":"Simpson"}

TL;DR: jq rules!

JQ - Syntax on macOS vs. Linux

 I keep forgetting that the syntax of commands on macOS often varies from Linux platforms, such as Ubuntu.

JQ ( jq ) is a good example.

So here's an example using json_pp ( JSON Print Pretty )

echo '{"givenName":"Dave","familyName":"Hay"}' | json_pp

{
   "givenName" : "Dave",
   "familyName" : "Hay"
}

and here's the same example using jq 

echo '{"givenName":"Dave","familyName":"Hay"}' | jq

{
  "givenName": "Dave",
  "familyName": "Hay"
}

both on macOS.

Having spun up an Ubuntu container: -

docker run -it ubuntu:latest bash

and installed json_pp and jq: -

apt-get update && apt-get install -y libjson-pp-perl

and: -

apt-get update && apt-get install -y jq

here's the same pair of examples: -

echo '{"givenName":"Dave","familyName":"Hay"}' | json_pp

{
   "familyName" : "Hay",
   "givenName" : "Dave"
}

echo '{"givenName":"Dave","familyName":"Hay"}' | jq


{
  "givenName": "Dave",
  "familyName": "Hay"
}

So far, so good.

To be sure, on both macOS and Ubuntu, I double-checked the version of jq : -

jq --version

jq-1.6

Again, all is fine.

And then I hit an issue ....

I was building a Jenkins Job that runs from a GitHub repo, with a Jenkinsfile that invokes a Bash script.

At one point, I saw: -

jq - commandline JSON processor [version 1.5-1-a5b5cbe]
Usage: jq [options] <jq filter> [file...]
jq is a tool for processing JSON inputs, applying the
given filter to its JSON text inputs and producing the
filter's results as JSON on standard output.
The simplest filter is ., which is the identity filter,
copying jq's input to its output unmodified (except for
formatting).
For more advanced filters see the jq(1) manpage ("man jq")
and/or https://stedolan.github.io/jq
Some of the options include:
-c compact instead of pretty-printed output;
-n use `null` as the single input value;
-e set the exit status code based on the output;
-s read (slurp) all inputs into an array; apply filter to it;
-r output raw strings, not JSON texts;
-R read raw strings, not JSON texts;
-C colorize JSON;
-M monochrome (don't colorize JSON);
-S sort keys of objects on output;
--tab use tabs for indentation;
--arg a v set variable $a to value <v>;
--argjson a v set variable $a to JSON value <v>;
--slurpfile a f set variable $a to an array of JSON texts read from <f>;
See the manpage for more options.

Note the version of jq being reported - by default, it is: -

1.5-1-a5b5cbe

To validate this, I created a basic Jenkinsfile: -

timestamps {
    node('cf_slave') {
      stage('Testing jq') {
        sh '''#!/bin/bash
              which jq
              ls -al `which jq`
              jq --version
              echo '{"givenName":"Dave","familyName":"Hay"}' | jq
            '''
            }
    }
}

which: -

(a) shows which jq is being used

(b) shows the file-path of that jq

(c) shows the version of that jq

(d) attempts to render the same bit of JSON

which returned: -

09:06:22  /usr/bin/jq
09:06:22  -rwxr-xr-x 1 root root 280720 Sep  7  2018 /usr/bin/jq
09:06:22  jq-1.5-1-a5b5cbe
09:06:22  jq - commandline JSON processor [version 1.5-1-a5b5cbe]
09:06:22  Usage: jq [options] <jq filter> [file...]
09:06:22  
09:06:22   jq is a tool for processing JSON inputs, applying the
09:06:22   given filter to its JSON text inputs and producing the
09:06:22   filter's results as JSON on standard output.
09:06:22   The simplest filter is ., which is the identity filter,
09:06:22   copying jq's input to its output unmodified (except for
09:06:22   formatting).
09:06:22   For more advanced filters see the jq(1) manpage ("man jq")
09:06:22   and/or https://stedolan.github.io/jq
09:06:22  
09:06:22   Some of the options include:
09:06:22   -c compact instead of pretty-printed output;
09:06:22   -n use `null` as the single input value;
09:06:22   -e set the exit status code based on the output;
09:06:22   -s read (slurp) all inputs into an array; apply filter to it;
09:06:22   -r output raw strings, not JSON texts;
09:06:22   -R read raw strings, not JSON texts;
09:06:22   -C colorize JSON;
09:06:22   -M monochrome (don't colorize JSON);
09:06:22   -S sort keys of objects on output;
09:06:22   --tab use tabs for indentation;
09:06:22   --arg a v set variable $a to value <v>;
09:06:22   --argjson a v set variable $a to JSON value <v>;
09:06:22   --slurpfile a f set variable $a to an array of JSON texts read from <f>;
09:06:22   See the manpage for more options.

So, there's the issue - the default version of jq that's included within my cf_slave container is out-of-date.

There are two resolutions here: -

(a) Install an up-to-date version of jq

(b) Add a trailing period to the jq command

echo '{"givenName":"Dave","familyName":"Hay"}' | jq .

{
  "givenName": "Dave",
  "familyName": "Hay"
}

I'm still working on the former: -

sudo apt-get update && sudo apt-get install -y jq

which results in: -

09:24:14  jq is already the newest version (1.5+dfsg-1ubuntu0.1).

so I need to dig into my cf_slave container a bit more ...
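One option, if the distro package stays on 1.5, is to pull the 1.6 binary straight from the jq releases on GitHub; a sketch, assuming a 64-bit Linux slave and write access to /usr/local/bin: -

curl -sL -o /usr/local/bin/jq https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64 && chmod +x /usr/local/bin/jq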

In the meantime, the latter resolution ( adding the trailing period ) does the trick: -

09:24:14  /usr/bin/jq
09:24:14  -rwxr-xr-x 1 root root 280720 Sep  7  2018 /usr/bin/jq
09:24:14  jq-1.5-1-a5b5cbe
09:24:14  {
09:24:14    "givenName": "Dave",
09:24:14    "familyName": "Hay"
09:24:14  }
