Wednesday, 14 April 2021

New day, new Docker, new capability - image scanning

Whilst I was upgrading some of my Ubuntu boxes the other day, I noticed a new plugin - docker-scan-plugin - in the list of things being upgraded.

A quick Google brought me this: -

Vulnerability scanning for Docker local images

Having since upgraded Docker on my Mac: -

docker version

Client: Docker Engine - Community
 Cloud integration: 1.0.12
 Version:           20.10.5
 API version:       1.41
 Go version:        go1.13.15
 Git commit:        55c4c88
 Built:             Tue Mar  2 20:13:00 2021
 OS/Arch:           darwin/amd64
 Context:           default
 Experimental:      true
Server: Docker Engine - Community
 Engine:
  Version:          20.10.5
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       363e9a8
  Built:            Tue Mar  2 20:15:47 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.4
  GitCommit:        05f951a3781f4f2c1911b05e61c160e9c30eaa8e
 runc:
  Version:          1.0.0-rc93
  GitCommit:        12644e614e25b05da6fd08a38ffa0cfe1903fdec
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

I checked my available images: -

docker images

REPOSITORY    TAG       IMAGE ID       CREATED       SIZE
busybox       latest    388056c9a683   6 days ago    1.23MB
tekton-lint   latest    b79680846c0c   10 days ago   93.1MB

and then scanned one of them: -

docker scan busybox

Docker Scan relies upon access to Snyk, a third party provider, do you consent to proceed using Snyk? (y/N)
y

Testing busybox...

Organization:      undefined
Package manager:   linux
Project name:      docker-image|busybox
Docker image:      busybox
Platform:          linux/amd64

✓ Tested busybox for known vulnerabilities, no vulnerable paths found.

Note that we do not currently have vulnerability data for your image.

For more free scans that keep your images secure, sign up to Snyk at https://dockr.ly/3ePqVcp
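
As an aside, docker scan --help reveals a few more options that look handy - a sketch from memory, with myimage as a placeholder image name: -

# machine-readable output, handy for piping into jq
docker scan --json busybox

# scan with the Dockerfile for context, excluding vulnerabilities that come from the base image
docker scan --file Dockerfile --exclude-base myimage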

Definitely a nice capability to have in the kitbag - we're also using IBM Container Registry's built-in Vulnerability Advisor tool, but more insights are better than fewer ....

Friday, 9 April 2021

Penny dropped - why is my job immutable ?

I'm tinkering with kube-bench at present, and wanted to deploy it as a Kubernetes Job using the job.yaml that's included with the repo.

However, my custom-built kube-bench image is stored in a private registry ( via IBM Container Registry ), so I needed to reference a Kubernetes Secret which I'd previously defined within my K8s cluster.

So far, so good ...

Having defined the Secret, I needed to reference it in the Job definition ( job.yaml ) via an imagePullSecrets stanza, which I'd added to the YAML: -

---
apiVersion: batch/v1
kind: Job
metadata:
  name: kube-bench
spec:
  template:
    metadata:
      labels:
        app: kube-bench
    spec:
      hostPID: true
      imagePullSecrets:
        - name: mysecret
      containers:
        - name: kube-bench
          image: aquasec/kube-bench:latest
          command: ["kube-bench"]
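
For completeness, the Secret referenced above would have been created along these lines - a sketch, with placeholder values ( more on that pattern in the ImagePullBackOff posts further down ): -

kubectl create secret docker-registry mysecret --docker-server='https://us.icr.io' --docker-username='iamapikey' --docker-password='<MY API KEY>'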

However, when I tried to apply the YAML: -

kubectl apply -f job.yaml 

The Job "kube-bench" is invalid: spec.template: Invalid value: core.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kube-bench", "controller-uid":"849af2dc-fd58-48af-bdbc-529fb8b65a56", "job-name":"kube-bench"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.PodSpec{Volumes:[]core.Volume{core.Volume{Name:"var-lib-etcd", VolumeSource:core.VolumeSource{HostPath:(*core.HostPathVolumeSource)(0xc007c0b100), EmptyDir:(*core.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*core.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*core.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*core.GitRepoVolumeSource)(nil), Secret:(*core.SecretVolumeSource)(nil), NFS:(*core.NFSVolumeSource)(nil), ISCSI:(*core.ISCSIVolumeSource)(nil), Glusterfs:(*core.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*core.PersistentVolumeClaimVolumeSource)(nil), RBD:(*core.RBDVolumeSource)(nil), Quobyte:(*core.QuobyteVolumeSource)(nil), FlexVolume:(*core.FlexVolumeSource)(nil), Cinder:(*core.CinderVolumeSource)(nil), CephFS:(*core.CephFSVolumeSource)(nil), Flocker:(*core.FlockerVolumeSource)(nil), DownwardAPI:(*core.DownwardAPIVolumeSource)(nil), FC:(*core.FCVolumeSource)(nil), AzureFile:(*core.AzureFileVolumeSource)(nil), ConfigMap:(*core.ConfigMapVolumeSource)(nil), VsphereVolume:(*core.VsphereVirtualDiskVolumeSource)(nil), AzureDisk:(*core.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*core.PhotonPersistentDiskVolumeSource)(nil), Projected:(*core.ProjectedVolumeSource)(nil), PortworxVolume:(*core.PortworxVolumeSource)(nil), ScaleIO:(*core.ScaleIOVolumeSource)(nil), StorageOS:(*core.StorageOSVolumeSource)(nil), CSI:(*core.CSIVolumeSource)(nil), Ephemeral:(*core.EphemeralVolumeSource)(nil)}}, core.Volume{Name:"var-lib-kubelet", VolumeSource:core.VolumeSource{HostPath:(*core.HostPathVolumeSource)(0xc007c0b160), EmptyDir:(*core.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*core.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*core.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*core.GitRepoVolumeSource)(nil), Secret:(*core.SecretVolumeSource)(nil), NFS:(*core.NFSVolumeSource)(nil), ISCSI:(*core.ISCSIVolumeSource)(nil), Glusterfs:(*core.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*core.PersistentVolumeClaimVolumeSource)(nil), RBD:(*core.RBDVolumeSource)(nil), Quobyte:(*core.QuobyteVolumeSource)(nil), FlexVolume:(*core.FlexVolumeSource)(nil), Cinder:(*core.CinderVolumeSource)(nil), CephFS:(*core.CephFSVolumeSource)(nil), Flocker:(*core.FlockerVolumeSource)(nil), DownwardAPI:(*core.DownwardAPIVolumeSource)(nil), FC:(*core.FCVolumeSource)(nil), AzureFile:(*core.AzureFileVolumeSource)(nil), ConfigMap:(*core.ConfigMapVolumeSource)(nil), VsphereVolume:(*core.VsphereVirtualDiskVolumeSource)(nil), AzureDisk:(*core.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*core.PhotonPersistentDiskVolumeSource)(nil), Projected:(*core.ProjectedVolumeSource)(nil), PortworxVolume:(*core.PortworxVolumeSource)(nil), ScaleIO:(*core.ScaleIOVolumeSource)(nil), StorageOS:(*core.StorageOSVolumeSource)(nil), CSI:(*core.CSIVolumeSource)(nil), Ephemeral:(*core.EphemeralVolumeSource)(nil)}}, 
core.Volume{Name:"var-lib-kube-scheduler", VolumeSource:core.VolumeSource{HostPath:(*core.HostPathVolumeSource)(0xc007c0b180), EmptyDir:(*core.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*core.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*core.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*core.GitRepoVolumeSource)(nil), Secret:(*core.SecretVolumeSource)(nil), NFS:(*core.NFSVolumeSource)(nil), ISCSI:(*core.ISCSIVolumeSource)(nil), Glusterfs:(*core.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*core.PersistentVolumeClaimVolumeSource)(nil), RBD:(*core.RBDVolumeSource)(nil), Quobyte:(*core.QuobyteVolumeSource)(nil), FlexVolume:(*core.FlexVolumeSource)(nil), Cinder:(*core.CinderVolumeSource)(nil), CephFS:(*core.CephFSVolumeSource)(nil), Flocker:(*core.FlockerVolumeSource)(nil), DownwardAPI:(*core.DownwardAPIVolumeSource)(nil), FC:(*core.FCVolumeSource)(nil), AzureFile:(*core.AzureFileVolumeSource)(nil), ConfigMap:(*core.ConfigMapVolumeSource)(nil), VsphereVolume:(*core.VsphereVirtualDiskVolumeSource)(nil), AzureDisk:(*core.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*core.PhotonPersistentDiskVolumeSource)(nil), Projected:(*core.ProjectedVolumeSource)(nil), PortworxVolume:(*core.PortworxVolumeSource)(nil), ScaleIO:(*core.ScaleIOVolumeSource)(nil), StorageOS:(*core.StorageOSVolumeSource)(nil), CSI:(*core.CSIVolumeSource)(nil), Ephemeral:(*core.EphemeralVolumeSource)(nil)}}, core.Volume{Name:"var-lib-kube-controller-manager", VolumeSource:core.VolumeSource{HostPath:(*core.HostPathVolumeSource)(0xc007c0b1a0), EmptyDir:(*core.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*core.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*core.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*core.GitRepoVolumeSource)(nil), Secret:(*core.SecretVolumeSource)(nil), NFS:(*core.NFSVolumeSource)(nil), ISCSI:(*core.ISCSIVolumeSource)(nil), Glusterfs:(*core.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*core.PersistentVolumeClaimVolumeSource)(nil), RBD:(*core.RBDVolumeSource)(nil), Quobyte:(*core.QuobyteVolumeSource)(nil), FlexVolume:(*core.FlexVolumeSource)(nil), Cinder:(*core.CinderVolumeSource)(nil), CephFS:(*core.CephFSVolumeSource)(nil), Flocker:(*core.FlockerVolumeSource)(nil), DownwardAPI:(*core.DownwardAPIVolumeSource)(nil), FC:(*core.FCVolumeSource)(nil), AzureFile:(*core.AzureFileVolumeSource)(nil), ConfigMap:(*core.ConfigMapVolumeSource)(nil), VsphereVolume:(*core.VsphereVirtualDiskVolumeSource)(nil), AzureDisk:(*core.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*core.PhotonPersistentDiskVolumeSource)(nil), Projected:(*core.ProjectedVolumeSource)(nil), PortworxVolume:(*core.PortworxVolumeSource)(nil), ScaleIO:(*core.ScaleIOVolumeSource)(nil), StorageOS:(*core.StorageOSVolumeSource)(nil), CSI:(*core.CSIVolumeSource)(nil), Ephemeral:(*core.EphemeralVolumeSource)(nil)}}, core.Volume{Name:"etc-systemd", VolumeSource:core.VolumeSource{HostPath:(*core.HostPathVolumeSource)(0xc007c0b1e0), EmptyDir:(*core.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*core.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*core.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*core.GitRepoVolumeSource)(nil), Secret:(*core.SecretVolumeSource)(nil), NFS:(*core.NFSVolumeSource)(nil), ISCSI:(*core.ISCSIVolumeSource)(nil), Glusterfs:(*core.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*core.PersistentVolumeClaimVolumeSource)(nil), RBD:(*core.RBDVolumeSource)(nil), Quobyte:(*core.QuobyteVolumeSource)(nil), FlexVolume:(*core.FlexVolumeSource)(nil), 
Cinder:(*core.CinderVolumeSource)(nil), CephFS:(*core.CephFSVolumeSource)(nil), Flocker:(*core.FlockerVolumeSource)(nil), DownwardAPI:(*core.DownwardAPIVolumeSource)(nil), FC:(*core.FCVolumeSource)(nil), AzureFile:(*core.AzureFileVolumeSource)(nil), ConfigMap:(*core.ConfigMapVolumeSource)(nil), VsphereVolume:(*core.VsphereVirtualDiskVolumeSource)(nil), AzureDisk:(*core.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*core.PhotonPersistentDiskVolumeSource)(nil), Projected:(*core.ProjectedVolumeSource)(nil), PortworxVolume:(*core.PortworxVolumeSource)(nil), ScaleIO:(*core.ScaleIOVolumeSource)(nil), StorageOS:(*core.StorageOSVolumeSource)(nil), CSI:(*core.CSIVolumeSource)(nil), Ephemeral:(*core.EphemeralVolumeSource)(nil)}}, core.Volume{Name:"lib-systemd", VolumeSource:core.VolumeSource{HostPath:(*core.HostPathVolumeSource)(0xc007c0b200), EmptyDir:(*core.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*core.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*core.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*core.GitRepoVolumeSource)(nil), Secret:(*core.SecretVolumeSource)(nil), NFS:(*core.NFSVolumeSource)(nil), ISCSI:(*core.ISCSIVolumeSource)(nil), Glusterfs:(*core.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*core.PersistentVolumeClaimVolumeSource)(nil), RBD:(*core.RBDVolumeSource)(nil), Quobyte:(*core.QuobyteVolumeSource)(nil), FlexVolume:(*core.FlexVolumeSource)(nil), Cinder:(*core.CinderVolumeSource)(nil), CephFS:(*core.CephFSVolumeSource)(nil), Flocker:(*core.FlockerVolumeSource)(nil), DownwardAPI:(*core.DownwardAPIVolumeSource)(nil), FC:(*core.FCVolumeSource)(nil), AzureFile:(*core.AzureFileVolumeSource)(nil), ConfigMap:(*core.ConfigMapVolumeSource)(nil), VsphereVolume:(*core.VsphereVirtualDiskVolumeSource)(nil), AzureDisk:(*core.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*core.PhotonPersistentDiskVolumeSource)(nil), Projected:(*core.ProjectedVolumeSource)(nil), PortworxVolume:(*core.PortworxVolumeSource)(nil), ScaleIO:(*core.ScaleIOVolumeSource)(nil), StorageOS:(*core.StorageOSVolumeSource)(nil), CSI:(*core.CSIVolumeSource)(nil), Ephemeral:(*core.EphemeralVolumeSource)(nil)}}, core.Volume{Name:"srv-kubernetes", VolumeSource:core.VolumeSource{HostPath:(*core.HostPathVolumeSource)(0xc007c0b240), EmptyDir:(*core.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*core.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*core.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*core.GitRepoVolumeSource)(nil), Secret:(*core.SecretVolumeSource)(nil), NFS:(*core.NFSVolumeSource)(nil), ISCSI:(*core.ISCSIVolumeSource)(nil), Glusterfs:(*core.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*core.PersistentVolumeClaimVolumeSource)(nil), RBD:(*core.RBDVolumeSource)(nil), Quobyte:(*core.QuobyteVolumeSource)(nil), FlexVolume:(*core.FlexVolumeSource)(nil), Cinder:(*core.CinderVolumeSource)(nil), CephFS:(*core.CephFSVolumeSource)(nil), Flocker:(*core.FlockerVolumeSource)(nil), DownwardAPI:(*core.DownwardAPIVolumeSource)(nil), FC:(*core.FCVolumeSource)(nil), AzureFile:(*core.AzureFileVolumeSource)(nil), ConfigMap:(*core.ConfigMapVolumeSource)(nil), VsphereVolume:(*core.VsphereVirtualDiskVolumeSource)(nil), AzureDisk:(*core.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*core.PhotonPersistentDiskVolumeSource)(nil), Projected:(*core.ProjectedVolumeSource)(nil), PortworxVolume:(*core.PortworxVolumeSource)(nil), ScaleIO:(*core.ScaleIOVolumeSource)(nil), StorageOS:(*core.StorageOSVolumeSource)(nil), CSI:(*core.CSIVolumeSource)(nil), 
Ephemeral:(*core.EphemeralVolumeSource)(nil)}}, core.Volume{Name:"etc-kubernetes", VolumeSource:core.VolumeSource{HostPath:(*core.HostPathVolumeSource)(0xc007c0b260), EmptyDir:(*core.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*core.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*core.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*core.GitRepoVolumeSource)(nil), Secret:(*core.SecretVolumeSource)(nil), NFS:(*core.NFSVolumeSource)(nil), ISCSI:(*core.ISCSIVolumeSource)(nil), Glusterfs:(*core.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*core.PersistentVolumeClaimVolumeSource)(nil), RBD:(*core.RBDVolumeSource)(nil), Quobyte:(*core.QuobyteVolumeSource)(nil), FlexVolume:(*core.FlexVolumeSource)(nil), Cinder:(*core.CinderVolumeSource)(nil), CephFS:(*core.CephFSVolumeSource)(nil), Flocker:(*core.FlockerVolumeSource)(nil), DownwardAPI:(*core.DownwardAPIVolumeSource)(nil), FC:(*core.FCVolumeSource)(nil), AzureFile:(*core.AzureFileVolumeSource)(nil), ConfigMap:(*core.ConfigMapVolumeSource)(nil), VsphereVolume:(*core.VsphereVirtualDiskVolumeSource)(nil), AzureDisk:(*core.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*core.PhotonPersistentDiskVolumeSource)(nil), Projected:(*core.ProjectedVolumeSource)(nil), PortworxVolume:(*core.PortworxVolumeSource)(nil), ScaleIO:(*core.ScaleIOVolumeSource)(nil), StorageOS:(*core.StorageOSVolumeSource)(nil), CSI:(*core.CSIVolumeSource)(nil), Ephemeral:(*core.EphemeralVolumeSource)(nil)}}, core.Volume{Name:"usr-bin", VolumeSource:core.VolumeSource{HostPath:(*core.HostPathVolumeSource)(0xc007c0b4e0), EmptyDir:(*core.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*core.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*core.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*core.GitRepoVolumeSource)(nil), Secret:(*core.SecretVolumeSource)(nil), NFS:(*core.NFSVolumeSource)(nil), ISCSI:(*core.ISCSIVolumeSource)(nil), Glusterfs:(*core.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*core.PersistentVolumeClaimVolumeSource)(nil), RBD:(*core.RBDVolumeSource)(nil), Quobyte:(*core.QuobyteVolumeSource)(nil), FlexVolume:(*core.FlexVolumeSource)(nil), Cinder:(*core.CinderVolumeSource)(nil), CephFS:(*core.CephFSVolumeSource)(nil), Flocker:(*core.FlockerVolumeSource)(nil), DownwardAPI:(*core.DownwardAPIVolumeSource)(nil), FC:(*core.FCVolumeSource)(nil), AzureFile:(*core.AzureFileVolumeSource)(nil), ConfigMap:(*core.ConfigMapVolumeSource)(nil), VsphereVolume:(*core.VsphereVirtualDiskVolumeSource)(nil), AzureDisk:(*core.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*core.PhotonPersistentDiskVolumeSource)(nil), Projected:(*core.ProjectedVolumeSource)(nil), PortworxVolume:(*core.PortworxVolumeSource)(nil), ScaleIO:(*core.ScaleIOVolumeSource)(nil), StorageOS:(*core.StorageOSVolumeSource)(nil), CSI:(*core.CSIVolumeSource)(nil), Ephemeral:(*core.EphemeralVolumeSource)(nil)}}, core.Volume{Name:"etc-cni-netd", VolumeSource:core.VolumeSource{HostPath:(*core.HostPathVolumeSource)(0xc007c0b500), EmptyDir:(*core.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*core.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*core.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*core.GitRepoVolumeSource)(nil), Secret:(*core.SecretVolumeSource)(nil), NFS:(*core.NFSVolumeSource)(nil), ISCSI:(*core.ISCSIVolumeSource)(nil), Glusterfs:(*core.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*core.PersistentVolumeClaimVolumeSource)(nil), RBD:(*core.RBDVolumeSource)(nil), Quobyte:(*core.QuobyteVolumeSource)(nil), FlexVolume:(*core.FlexVolumeSource)(nil), 
Cinder:(*core.CinderVolumeSource)(nil), CephFS:(*core.CephFSVolumeSource)(nil), Flocker:(*core.FlockerVolumeSource)(nil), DownwardAPI:(*core.DownwardAPIVolumeSource)(nil), FC:(*core.FCVolumeSource)(nil), AzureFile:(*core.AzureFileVolumeSource)(nil), ConfigMap:(*core.ConfigMapVolumeSource)(nil), VsphereVolume:(*core.VsphereVirtualDiskVolumeSource)(nil), AzureDisk:(*core.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*core.PhotonPersistentDiskVolumeSource)(nil), Projected:(*core.ProjectedVolumeSource)(nil), PortworxVolume:(*core.PortworxVolumeSource)(nil), ScaleIO:(*core.ScaleIOVolumeSource)(nil), StorageOS:(*core.StorageOSVolumeSource)(nil), CSI:(*core.CSIVolumeSource)(nil), Ephemeral:(*core.EphemeralVolumeSource)(nil)}}, core.Volume{Name:"opt-cni-bin", VolumeSource:core.VolumeSource{HostPath:(*core.HostPathVolumeSource)(0xc007c0b520), EmptyDir:(*core.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*core.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*core.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*core.GitRepoVolumeSource)(nil), Secret:(*core.SecretVolumeSource)(nil), NFS:(*core.NFSVolumeSource)(nil), ISCSI:(*core.ISCSIVolumeSource)(nil), Glusterfs:(*core.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*core.PersistentVolumeClaimVolumeSource)(nil), RBD:(*core.RBDVolumeSource)(nil), Quobyte:(*core.QuobyteVolumeSource)(nil), FlexVolume:(*core.FlexVolumeSource)(nil), Cinder:(*core.CinderVolumeSource)(nil), CephFS:(*core.CephFSVolumeSource)(nil), Flocker:(*core.FlockerVolumeSource)(nil), DownwardAPI:(*core.DownwardAPIVolumeSource)(nil), FC:(*core.FCVolumeSource)(nil), AzureFile:(*core.AzureFileVolumeSource)(nil), ConfigMap:(*core.ConfigMapVolumeSource)(nil), VsphereVolume:(*core.VsphereVirtualDiskVolumeSource)(nil), AzureDisk:(*core.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*core.PhotonPersistentDiskVolumeSource)(nil), Projected:(*core.ProjectedVolumeSource)(nil), PortworxVolume:(*core.PortworxVolumeSource)(nil), ScaleIO:(*core.ScaleIOVolumeSource)(nil), StorageOS:(*core.StorageOSVolumeSource)(nil), CSI:(*core.CSIVolumeSource)(nil), Ephemeral:(*core.EphemeralVolumeSource)(nil)}}}, InitContainers:[]core.Container(nil), Containers:[]core.Container{core.Container{Name:"kube-bench", Image:"aquasec/kube-bench:latest", Command:[]string{"kube-bench"}, Args:[]string(nil), WorkingDir:"", Ports:[]core.ContainerPort(nil), EnvFrom:[]core.EnvFromSource(nil), Env:[]core.EnvVar(nil), Resources:core.ResourceRequirements{Limits:core.ResourceList(nil), Requests:core.ResourceList(nil)}, VolumeMounts:[]core.VolumeMount{core.VolumeMount{Name:"var-lib-etcd", ReadOnly:true, MountPath:"/var/lib/etcd", SubPath:"", MountPropagation:(*core.MountPropagationMode)(nil), SubPathExpr:""}, core.VolumeMount{Name:"var-lib-kubelet", ReadOnly:true, MountPath:"/var/lib/kubelet", SubPath:"", MountPropagation:(*core.MountPropagationMode)(nil), SubPathExpr:""}, core.VolumeMount{Name:"var-lib-kube-scheduler", ReadOnly:true, MountPath:"/var/lib/kube-scheduler", SubPath:"", MountPropagation:(*core.MountPropagationMode)(nil), SubPathExpr:""}, core.VolumeMount{Name:"var-lib-kube-controller-manager", ReadOnly:true, MountPath:"/var/lib/kube-controller-manager", SubPath:"", MountPropagation:(*core.MountPropagationMode)(nil), SubPathExpr:""}, core.VolumeMount{Name:"etc-systemd", ReadOnly:true, MountPath:"/etc/systemd", SubPath:"", MountPropagation:(*core.MountPropagationMode)(nil), SubPathExpr:""}, core.VolumeMount{Name:"lib-systemd", ReadOnly:true, MountPath:"/lib/systemd/", SubPath:"", 
MountPropagation:(*core.MountPropagationMode)(nil), SubPathExpr:""}, core.VolumeMount{Name:"srv-kubernetes", ReadOnly:true, MountPath:"/srv/kubernetes/", SubPath:"", MountPropagation:(*core.MountPropagationMode)(nil), SubPathExpr:""}, core.VolumeMount{Name:"etc-kubernetes", ReadOnly:true, MountPath:"/etc/kubernetes", SubPath:"", MountPropagation:(*core.MountPropagationMode)(nil), SubPathExpr:""}, core.VolumeMount{Name:"usr-bin", ReadOnly:true, MountPath:"/usr/local/mount-from-host/bin", SubPath:"", MountPropagation:(*core.MountPropagationMode)(nil), SubPathExpr:""}, core.VolumeMount{Name:"etc-cni-netd", ReadOnly:true, MountPath:"/etc/cni/net.d/", SubPath:"", MountPropagation:(*core.MountPropagationMode)(nil), SubPathExpr:""}, core.VolumeMount{Name:"opt-cni-bin", ReadOnly:true, MountPath:"/opt/cni/bin/", SubPath:"", MountPropagation:(*core.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]core.VolumeDevice(nil), LivenessProbe:(*core.Probe)(nil), ReadinessProbe:(*core.Probe)(nil), StartupProbe:(*core.Probe)(nil), Lifecycle:(*core.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*core.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]core.EphemeralContainer(nil), RestartPolicy:"Never", TerminationGracePeriodSeconds:(*int64)(0xc01ba2e338), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", SecurityContext:(*core.PodSecurityContext)(0xc01905b480), ImagePullSecrets:[]core.LocalObjectReference{core.LocalObjectReference{Name:"zaasking"}}, Hostname:"", Subdomain:"", SetHostnameAsFQDN:(*bool)(nil), Affinity:(*core.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]core.Toleration(nil), HostAliases:[]core.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), PreemptionPolicy:(*core.PreemptionPolicy)(nil), DNSConfig:(*core.PodDNSConfig)(nil), ReadinessGates:[]core.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), Overhead:core.ResourceList(nil), EnableServiceLinks:(*bool)(nil), TopologySpreadConstraints:[]core.TopologySpreadConstraint(nil)}}: field is immutable

What the heck ?

Then I read this: -


which says, in part: -

This is not a bug, you can't change the template after the job has been created.

Ah!!!

I listed my jobs: -

kubectl get jobs -A

NAMESPACE          NAME                                                  COMPLETIONS   DURATION   AGE
default            kube-bench                                            0/1           117m       117m
tekton-pipelines   pw-3282a77a-680d-4731-a3f0-6c73f67ddb42-cleaner-job   0/1           6d20h      6d20h
tekton-pipelines   pw-54d9eabf-ade2-47ae-bfa1-d1b7b3be2240-cleaner-job   0/1           6d20h      6d20h
tekton-pipelines   pw-b12cc256-3ff6-445a-86f1-1b0e1ef6ed08-cleaner-job   1/1           3s         5d21h

So the Job already existed - but how could I change the spec of an in-flight Job ?

Oh, yeah, I can just delete it ...

kubectl delete job kube-bench

job.batch "kube-bench" deleted

kubectl get jobs -A

NAMESPACE          NAME                                                  COMPLETIONS   DURATION   AGE
tekton-pipelines   pw-3282a77a-680d-4731-a3f0-6c73f67ddb42-cleaner-job   0/1           6d20h      6d20h
tekton-pipelines   pw-54d9eabf-ade2-47ae-bfa1-d1b7b3be2240-cleaner-job   0/1           6d20h      6d20h
tekton-pipelines   pw-b12cc256-3ff6-445a-86f1-1b0e1ef6ed08-cleaner-job   1/1           3s         5d21h

kubectl apply -f job.yaml

job.batch/kube-bench created

kubectl get jobs

NAME         COMPLETIONS   DURATION   AGE
kube-bench   0/1           115s       115s

kubectl get pods

NAME                           READY   STATUS             RESTARTS   AGE
el-listener-867d86b9f7-gllf6   0/1     CrashLoopBackOff   23         28h
kube-bench-qhg82               1/1     Running            0          2m1s

Nice!
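
As an aside, I've since realised that kubectl can do the delete-then-recreate dance in one go - a sketch: -

kubectl replace --force -f job.yaml

The --force flag tells kubectl to delete the existing Job and create a new one from the YAML, rather than trying to patch the immutable template in place.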

Wednesday, 17 March 2021

Not in Kansas anymore - apparently I don't exist

Whilst trying to upgrade the Tekton CLI tool tkn using Homebrew: -

brew upgrade tektoncd/tools/tektoncd-cli

==> Upgrading 1 outdated package:
tektoncd/tools/tektoncd-cli 0.8.0 -> 0.15.0
==> Upgrading tektoncd/tools/tektoncd-cli 0.8.0 -> 0.15.0
==> Downloading https://github.com/tektoncd/cli/releases/download/v0.15.0/tkn_0.15.0_Darwin_x86_64.tar.gz
==> Downloading from https://github-releases.githubusercontent.com/181939372/9d001b00-4060-11eb-9efa-57717f8e92f7?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJY
######################################################################## 100.0%
Error: An exception occurred within a child process:
  ArgumentError: user david_hay@uk.ibm.com doesn't exist

To narrow things down, I ran: -

brew upgrade

but that failed with much the same: -

==> Upgrading tektoncd/tools/tektoncd-cli 0.8.0 -> 0.15.0 
==> Downloading https://github.com/tektoncd/cli/releases/download/v0.15.0/tkn_0.15.0_Darwin_x86_64.tar.gz
Already downloaded: /Users/hayd/Library/Caches/Homebrew/downloads/d911addd12ba79ea06e5d8d0002a0523ec0b53d6aa5407a94f6e9836f3ff6fa5--tkn_0.15.0_Darwin_x86_64.tar.gz
Error: An exception occurred within a child process:
  ArgumentError: user david_hay@uk.ibm.com doesn't exist

Following this: -


I checked my environment: -

printenv | grep david_hay@uk.ibm.com

which returned: -

USER=david_hay@uk.ibm.com

Given that my macOS user is actually hayd, I suspected that this was the issue.

I reset that environment variable: -

export USER=hayd

and tried again: -

brew upgrade tektoncd/tools/tektoncd-cli

Updating Homebrew...
==> Auto-updated Homebrew!
Updated 1 tap (homebrew/core).
==> Updated Formulae
Updated 1 formula.

==> Upgrading 1 outdated package:
tektoncd/tools/tektoncd-cli 0.8.0 -> 0.15.0
==> Upgrading tektoncd/tools/tektoncd-cli 0.8.0 -> 0.15.0 
==> Downloading https://github.com/tektoncd/cli/releases/download/v0.15.0/tkn_0.15.0_Darwin_x86_64.tar.gz
Already downloaded: /Users/hayd/Library/Caches/Homebrew/downloads/d911addd12ba79ea06e5d8d0002a0523ec0b53d6aa5407a94f6e9836f3ff6fa5--tkn_0.15.0_Darwin_x86_64.tar.gz
==> Caveats
Bash completion has been installed to:
  /usr/local/etc/bash_completion.d
==> Summary
🍺  /usr/local/Cellar/tektoncd-cli/0.15.0: 8 files, 43.7MB, built in 7 seconds
Removing: /usr/local/Cellar/tektoncd-cli/0.8.0... (8 files, 32.4MB)

Job done!
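
One caveat - export only fixes the current shell session; to make the fix stick, something like this in the shell profile should do the trick - a sketch, assuming zsh, the macOS default: -

echo 'export USER="$(id -un)"' >> ~/.zshrc

( id -un returns the real login name, which avoids hard-coding hayd )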

For the record, tkn was back-level: -

tkn version

Client version: 0.8.0
Pipeline version: unknown

and is now up-to-date: -

tkn version

Client version: 0.15.0
Pipeline version: v0.22.0
Triggers version: v0.12.1

( which, usefully, also reports the versions of Tekton Pipelines and Triggers deployed to my cluster )

Tuesday, 16 March 2021

Two of my favourite things - Kubernetes and jq

As per recent posts, I've been falling in love with jq and using it for more and more and more ....

Today, it's Kubernetes meets jq

Specifically, working with secrets ... and following on from an earlier post: -

Gah, again with the ImagePullBackOff

So I'm creating a dummy secret, wrapped around some credentials for an IBM Container Registry (ICR) instance: -

kubectl create secret docker-registry foobar --docker-server='https://us.icr.io' --docker-username='iamapikey' --docker-password='THIS_IS_NOT_A_VALID_APIKEY'

secret/foobar created

Now a secret of type docker-registry essentially wraps the credentials ( whether for Docker Hub, an ICR instance, or similar ) up in a Base64-encoded "blob", as evidenced by the query: -

kubectl get secret foobar --output json

{
    "apiVersion": "v1",
    "data": {
        ".dockerconfigjson": "eyJhdXRocyI6eyJodHRwczovL3VzLmljci5pbyI6eyJ1c2VybmFtZSI6ImlhbWFwaWtleSIsInBhc3N3b3JkIjoiVEhJU19JU19OT1RfQV9WQUxJRF9BUElLRVkiLCJhdXRoIjoiYVdGdFlYQnBhMlY1T2xSSVNWTmZTVk5mVGs5VVgwRmZWa0ZNU1VSZlFWQkpTMFZaIn19fQ=="
    },
    "kind": "Secret",
    "metadata": {
        "creationTimestamp": "2021-03-16T16:57:52Z",
        "managedFields": [
            {
                "apiVersion": "v1",
                "fieldsType": "FieldsV1",
                "fieldsV1": {
                    "f:data": {
                        ".": {},
                        "f:.dockerconfigjson": {}
                    },
                    "f:type": {}
                },
                "manager": "kubectl-create",
                "operation": "Update",
                "time": "2021-03-16T16:57:52Z"
            }
        ],
        "name": "foobar",
        "namespace": "default",
        "resourceVersion": "19983",
        "selfLink": "/api/v1/namespaces/default/secrets/foobar",
        "uid": "26bdd49b-49c6-4133-a331-3e9cb6150a26"
    },
    "type": "kubernetes.io/dockerconfigjson"
}

so we can quickly inspect ( decode ) the secret, using jq to parse the output: -

kubectl get secret foobar --output json | jq -r .data[] | base64 -d

{"auths":{"https://us.icr.io":{"username":"iamapikey","password":"THIS_IS_NOT_A_VALID_APIKEY","auth":"aWFtYXBpa2V5OlRISVNfSVNfTk9UX0FfVkFMSURfQVBJS0VZ"}}}

This is a useful way to check input against output, and thus avoid GIGO ( garbage in, garbage out ).

The other jq-related tip is in the context of Calico Node, where I was looking to inspect a DaemonSet and grab one specific data element - known as IP_AUTODETECTION_METHOD - which, in short, can be used to ensure that the Calico Node pods use a specific network adapter on each of the K8s nodes.

So I'd take a generic kubectl command such as: -

kubectl get daemonset/calico-node -n kube-system --output json

and then parse the output to retrieve only the value of IP_AUTODETECTION_METHOD : -

{
  "name": "IP_AUTODETECTION_METHOD",
  "value": "interface=eth.*"
}
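
A jq filter along these lines does the trick - a sketch, assuming the variable lives in the calico-node container's env list, which is where Calico's manifests put it: -

kubectl get daemonset/calico-node -n kube-system --output json | jq '.spec.template.spec.containers[].env[] | select(.name == "IP_AUTODETECTION_METHOD")'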

I could, if I so chose, override that by updating the DaemonSet: -

kubectl set env daemonset/calico-node -n kube-system IP_AUTODETECTION_METHOD=interface=eth0

and then re-run the kubectl get command to check that the value had changed ....

Nice !

Thursday, 4 March 2021

Gah, again with the ImagePullBackOff

So, following on from this: -

Gah, ImagePullBackOff with Calico CNI running on Kubernetes

I was again seeing this: -

kube-system   calico-node-lxmk4                          0/1     Init:ImagePullBackOff   0          5m26s

and, upon further digging: -

kubectl describe pod calico-node-lxmk4 --namespace kube-system

Type     Reason     Age                    From                   Message
----     ------     ----                   ----                   -------
Normal   Scheduled  5m47s                  default-scheduler      Successfully assigned kube-system/calico-node-lxmk4 to 667ceb40fc75
Normal   Pulling    4m24s (x4 over 5m46s)  kubelet, 667ceb40fc75  Pulling image "us.icr.io/mynamespace/calico/cni:v3.16.5"
Warning  Failed     4m23s (x4 over 5m45s)  kubelet, 667ceb40fc75  Failed to pull image "us.icr.io/mynamespace/calico/cni:v3.16.5": rpc error: code = Unknown desc = Error response from daemon: Get https://us.icr.io/v2/mynamespace/calico/cni/manifests/v3.16.5: unauthorized: The login credentials are not valid, or your IBM Cloud account is not active.
Warning  Failed     4m23s (x4 over 5m45s)  kubelet, 667ceb40fc75  Error: ErrImagePull
Warning  Failed     3m57s (x7 over 5m45s)  kubelet, 667ceb40fc75  Error: ImagePullBackOff
Normal   BackOff    46s (x21 over 5m45s)   kubelet, 667ceb40fc75  Back-off pulling image "us.icr.io/mynamespace/calico/cni:v3.16.5"

Note that my images are coming from IBM Container Registry, rather than Docker Hub, and that's the key .....

I was following this: -


which describes how one can generate a K8s secret from an existing docker login by grabbing the content of ~/.docker/config.json

Therefore, I was doing this: -

kubectl create secret generic regcred --from-file=.dockerconfigjson=/root/.docker/config.json --type=kubernetes.io/dockerconfigjson

having previously logged in: -

echo "<MY API KEY>" | docker login -u iamapikey --password-stdin us.icr.io

which creates/updates /root/.docker/config.json
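
At this point, it would have been worth eyeballing what actually got written - a sketch: -

jq . /root/.docker/config.json

If the auths block holds a real auth token for us.icr.io, the --from-file approach ought to work; if, instead, there's a credsStore entry, the token lives elsewhere ...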

And that's where I was failing .....

Finally, after a few hours of head-banging, I looked back through my notes and realised that, for previous activities, including Tekton Pipelines / Triggers, I'd used a different approach to generate the secret: -

kubectl create secret docker-registry regcred --namespace kube-system --docker-server='https://us.icr.io' --docker-username='iamapikey' --docker-password='<MY API KEY>'
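
To sanity-check what landed in the secret, the jq/base64 trick from the Kubernetes-meets-jq post ( above ) works nicely - a sketch: -

kubectl get secret regcred --namespace kube-system --output json | jq -r '.data[".dockerconfigjson"]' | base64 -d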

And, of course, it worked .....

Every day is ......

Tuesday, 2 March 2021

Fun with IBM Container Registry, Vulnerability Advisor and Nginx

So I'm tinkering with IBM Container Registry (ICR) at present, and am testing the Vulnerability Advisor (VA) feature, by building/tagging/pushing a basic Nginx image.

Having configured my Nginx server for HTTPS ( HTTP over TLS ) - or so I thought - I was baffled that VA kept throwing up configuration errors: -

The scan results show that 5 ISSUES were found for the image.
Configuration Issues Found
==========================
Configuration Issue ID                                Policy Status   Security Practice                                  How to Resolve   
application_configuration:nginx.ssl_certificate_key   Active          Specifies the private key file for server cert.    ssl_certificate_key is not present in   
                                                                                                                         /etc/nginx/nginx.conf or   
                                                                                                                         /etc/nginx/sites-enabled/default.   
application_configuration:nginx.ssl_ciphers           Active          Specifies ciphers used in TLS.                     ssl_ciphers is not present in   
                                                                                                                         /etc/nginx/nginx.conf or   
                                                                                                                         /etc/nginx/sites-enabled/default. Defaults may not   
                                                                                                                         be secure.   
application_configuration:nginx.server_tokens         Active          Enables or disables emitting nginx version in      server_tokens is present but value is on. nginx   
                                                                      error messages and in the Server response header   will send its version in HTTP responses which can   
                                                                      field.                                             be used by attackers for version-specific attacks   
                                                                                                                         against this nginx server.   
                                                                                                                         File: /etc/nginx/nginx.conf   
application_configuration:nginx.ssl_protocols         Active          Enables the specified protocols.                   ssl_protocols is not present in   
                                                                                                                         /etc/nginx/nginx.conf or   
                                                                                                                         /etc/nginx/sites-enabled/default.   
application_configuration:nginx.ssl_certificate       Active          Specifies a file with the certificate in the PEM   ssl_certificate is not present in   
                                                                      format for the given virtual server.               /etc/nginx/nginx.conf or   
                                                                                                                         /etc/nginx/sites-enabled/default.   
OK

even though I thought I'd configured Nginx to support the required configuration items, e.g. server_tokens, ssl_protocols etc.

Well, I kinda had ....

I'd added these items: -

ssl_certificate     /etc/nginx/nginx.crt;
ssl_certificate_key /etc/nginx/nginx.key;
ssl_ciphers         EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH;
ssl_protocols       TLSv1.2;
ssl_prefer_server_ciphers   on;
server_tokens       on;
 
into nginx.conf BUT in the wrong place.

I had them in the http{} section rather than in the server{} section.

After some further digging, I realised that all but server_tokens should go in the server{} block, so we end up with this: -

user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;

    server_tokens   off;
 
    server {
        listen                      443 ssl default_server;
        listen                      [::]:443 ssl default_server ;
        server_name                 example.com www.example.com;
        root                        /usr/share/nginx/html;
        ssl_certificate             /etc/nginx/nginx.crt;
        ssl_certificate_key         /etc/nginx/nginx.key;
        ssl_ciphers                 EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH;
        ssl_protocols               TLSv1.2;
        ssl_prefer_server_ciphers   on;
    }
}

and, more importantly, this: -

The scan results show that NO ISSUES were found for the image.

OK
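
A time-saver worth noting - the config can be linted locally, before building and pushing, rather than waiting for VA to complain - a sketch, assuming nginx.conf and the certificate/key pair sit in the current directory: -

docker run --rm -v $PWD/nginx.conf:/etc/nginx/nginx.conf:ro -v $PWD/nginx.crt:/etc/nginx/nginx.crt:ro -v $PWD/nginx.key:/etc/nginx/nginx.key:ro nginx nginx -t

( nginx -t parses the configuration, and validates the certificate and key paths, without actually starting the server )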

For further reading, there's a useful tutorial covering ICR and VA here: -

Tuesday, 23 February 2021

Munging Dockerfiles using sed

So I had a requirement to update a Dockerfile, which I'd pulled from a GitHub repository, without actually adding my changes ( via git add and git commit ) to that repo ...

Specifically, I wanted to add a command to update the to-be-built Alpine image.

Here's how I solved it ...

I'd cloned the target repo, which included the Dockerfile ( example below ): -

FROM alpine:3.7
RUN apk add --no-cache mysql-client
ENTRYPOINT ["mysql"]

At present, this is what happens when I build the image: -

docker build -f Dockerfile .

Sending build context to Docker daemon  2.048kB
Step 1/3 : FROM alpine:3.7
3.7: Pulling from library/alpine
5d20c808ce19: Pull complete 
Digest: sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10
Status: Downloaded newer image for alpine:3.7
 ---> 6d1ef012b567
Step 2/3 : RUN apk add --no-cache mysql-client
 ---> Running in 07bc9ca0e14a
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz
(1/6) Installing mariadb-common (10.1.41-r0)
(2/6) Installing ncurses-terminfo-base (6.0_p20171125-r1)
(3/6) Installing ncurses-terminfo (6.0_p20171125-r1)
(4/6) Installing ncurses-libs (6.0_p20171125-r1)
(5/6) Installing mariadb-client (10.1.41-r0)
(6/6) Installing mysql-client (10.1.41-r0)
Executing busybox-1.27.2-r11.trigger
OK: 41 MiB in 19 packages
Removing intermediate container 07bc9ca0e14a
 ---> 43862371f8a4
Step 3/3 : ENTRYPOINT ["mysql"]
 ---> Running in d8b08c967cc1
Removing intermediate container d8b08c967cc1
 ---> 1ee30800ffbd
Successfully built 1ee30800ffbd

I wanted to add a command to run apk upgrade before the ENTRYPOINT entry ( so, in essence, inserting it between lines 2 and 3 )

This is what I did: -

sed -i '' -e "2s/^//p; 2s/^.*/RUN apk --no-cache upgrade/" Dockerfile

which results in: -

FROM alpine:3.7
RUN apk add --no-cache mysql-client
RUN apk --no-cache upgrade
ENTRYPOINT ["mysql"]

Now the build looks like this: -

docker build -f Dockerfile .

Sending build context to Docker daemon  2.048kB
Step 1/4 : FROM alpine:3.7
3.7: Pulling from library/alpine
5d20c808ce19: Pull complete 
Digest: sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10
Status: Downloaded newer image for alpine:3.7
 ---> 6d1ef012b567
Step 2/4 : RUN apk add --no-cache mysql-client
 ---> Running in b29668e70377
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz
(1/6) Installing mariadb-common (10.1.41-r0)
(2/6) Installing ncurses-terminfo-base (6.0_p20171125-r1)
(3/6) Installing ncurses-terminfo (6.0_p20171125-r1)
(4/6) Installing ncurses-libs (6.0_p20171125-r1)
(5/6) Installing mariadb-client (10.1.41-r0)
(6/6) Installing mysql-client (10.1.41-r0)
Executing busybox-1.27.2-r11.trigger
OK: 41 MiB in 19 packages
Removing intermediate container b29668e70377
 ---> 971a3d538edf
Step 3/4 : RUN apk --no-cache upgrade
 ---> Running in 8dfa10b481ad
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz
(1/2) Upgrading musl (1.1.18-r3 -> 1.1.18-r4)
(2/2) Upgrading musl-utils (1.1.18-r3 -> 1.1.18-r4)
Executing busybox-1.27.2-r11.trigger
OK: 41 MiB in 19 packages
Removing intermediate container 8dfa10b481ad
 ---> 35d7cbec77c0
Step 4/4 : ENTRYPOINT ["mysql"]
 ---> Running in c0d7d310d396
Removing intermediate container c0d7d310d396
 ---> 4ad02b88f4d4
Successfully built 4ad02b88f4d4

In essence, the sed command duplicates line 2: -

RUN apk add --no-cache mysql-client

as line 3, and then replaces the newly duplicated text with the required replacement: -

RUN apk --no-cache upgrade

Neat, eh ?

I've got this in a script, and it works a treat .....
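
One portability note: the -i '' incantation is the BSD ( macOS ) flavour of sed's in-place editing; GNU sed, as found on my Ubuntu boxes, wants the flag without the empty string - a sketch: -

sed -i -e "2s/^//p; 2s/^.*/RUN apk --no-cache upgrade/" Dockerfile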
