Wednesday, 13 October 2021

Kubernetes - tailing Pod logs - TIL

I saw a tip from Lili on Twitter a few days back - using kubectl logs with a label selector to grab logs by label rather than by pod name - but hadn't yet got around to playing with it.

I thought I'd test it with a brand new K8s 1.20 cluster running on IBM Kubernetes Service (IKS).

Firstly, I queried the pods that were running: -

kubectl get pods -A

NAMESPACE     NAME                                         READY   STATUS    RESTARTS   AGE

ibm-system    addon-catalog-source-2x7hj                   1/1     Running   0          41h
ibm-system    catalog-operator-578f7c8857-666wd            1/1     Running   0          41h
ibm-system    olm-operator-6c45d79d96-pjtmr                1/1     Running   0          41h
kube-system   calico-kube-controllers-78ccd56cd7-wqgtf     1/1     Running   0          41h
kube-system   calico-node-pg6vv                            1/1     Running   0          41h
kube-system   calico-typha-ddd44968b-86cgs                 1/1     Running   0          41h
kube-system   calico-typha-ddd44968b-ffxmt                 0/1     Pending   0          41h
kube-system   calico-typha-ddd44968b-mqjrb                 0/1     Pending   0          41h
kube-system   coredns-7fc9f85d9c-5rwwv                     1/1     Running   0          41h
kube-system   coredns-7fc9f85d9c-bxtts                     1/1     Running   0          41h
kube-system   coredns-7fc9f85d9c-qk6gv                     1/1     Running   0          41h
kube-system   coredns-autoscaler-9cccfb98d-mw9qj           1/1     Running   0          41h
kube-system   dashboard-metrics-scraper-7c75dcd466-d5b9f   1/1     Running   0          41h
kube-system   ibm-keepalived-watcher-856k6                 1/1     Running   0          41h
kube-system   ibm-master-proxy-static-10.144.213.225       2/2     Running   0          41h
kube-system   kubernetes-dashboard-659cd5b798-thd57        1/1     Running   0          41h
kube-system   metrics-server-b7bc76594-4fdg2               2/2     Running   0          41h
kube-system   vpn-546847fcbf-dzzml                         1/1     Running   0          41h

and chose the metrics-server-b7bc76594-4fdg2 pod, because it's running two containers: -

kubectl get pod metrics-server-b7bc76594-4fdg2 --namespace kube-system --output json | jq -r .spec.containers[].name

metrics-server
metrics-server-nanny
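
As an aside, I believe the same container names can be pulled out with kubectl's built-in jsonpath output, skipping jq altogether: -

kubectl get pod metrics-server-b7bc76594-4fdg2 --namespace kube-system --output jsonpath='{.spec.containers[*].name}'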

Grabbing the labels for this pod: -

kubectl get pod metrics-server-b7bc76594-4fdg2 --namespace kube-system --output json | jq .metadata.labels

{
  "cert-checksum": "6457ed123878693e37fbde8b4a0abf966ae050c3",
  "k8s-app": "metrics-server",
  "pod-template-hash": "b7bc76594",
  "version": "v0.4.4"
}
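
For a quicker look, --show-labels ought to surface the same labels alongside the pod, albeit squashed into a single column: -

kubectl get pod metrics-server-b7bc76594-4fdg2 --namespace kube-system --show-labels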

and choosing the k8s-app label, I followed Lili's advice, using --tail=-1 to ask for all available lines (when a selector is used, kubectl otherwise seems to cap output at the last few lines per container): -

kubectl logs --namespace kube-system --selector k8s-app=metrics-server --container metrics-server --tail=-1

I1011 17:10:21.606350       1 dynamic_serving_content.go:130] Starting serving-cert::/etc/metrics-server-certs/tls.crt::/etc/metrics-server-certs/tls.key
I1011 17:10:21.606421       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1011 17:10:21.626860       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1011 17:10:21.646520       1 secure_serving.go:197] Serving securely on [::]:4443
I1011 17:10:21.627139       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I1011 17:10:21.647378       1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
I1011 17:10:21.647922       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I1011 17:10:21.606461       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I1011 17:10:21.686383       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I1011 17:10:22.007270       1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController 
I1011 17:10:22.046837       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
I1011 17:10:22.126237       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 

and: -

kubectl logs --namespace kube-system --selector k8s-app=metrics-server --container metrics-server-nanny --tail=-1

ERROR: logging before flag.Parse: I1011 17:10:03.951537       1 pod_nanny.go:68] Invoked by [/pod_nanny --config-dir=/etc/config --cpu=40m --extra-cpu=0.5m --memory=40Mi --extra-memory=4Mi --threshold=5 --deployment=metrics-server --container=metrics-server --poll-period=300000 --estimator=exponential --use-metrics=true]
ERROR: logging before flag.Parse: I1011 17:10:03.951645       1 pod_nanny.go:69] Version: 1.8.12
ERROR: logging before flag.Parse: I1011 17:10:03.951673       1 pod_nanny.go:85] Watching namespace: kube-system, pod: metrics-server-b7bc76594-4fdg2, container: metrics-server.
ERROR: logging before flag.Parse: I1011 17:10:03.951684       1 pod_nanny.go:86] storage: MISSING, extra_storage: 0Gi
ERROR: logging before flag.Parse: I1011 17:10:03.968655       1 pod_nanny.go:116] cpu: 40m, extra_cpu: 0.5m, memory: 40Mi, extra_memory: 4Mi
ERROR: logging before flag.Parse: I1011 17:10:03.968697       1 pod_nanny.go:145] Resources: [{Base:{i:{value:40 scale:-3} d:{Dec:<nil>} s:40m Format:DecimalSI} ExtraPerNode:{i:{value:5 scale:-4} d:{Dec:<nil>} s: Format:DecimalSI} Name:cpu} {Base:{i:{value:41943040 scale:0} d:{Dec:<nil>} s: Format:BinarySI} ExtraPerNode:{i:{value:4194304 scale:0} d:{Dec:<nil>} s:4Mi Format:BinarySI} Name:memory}]

In other words, I used the --selector switch to grab logs for the two containers by their label, rather than knowing/caring about the pod name.
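
Had I wanted to keep watching rather than take a one-off dump, my understanding is that the same selector also plays nicely with --follow, and that --all-containers plus --prefix saves picking a container, tagging each line with its pod and container name - something like: -

kubectl logs --namespace kube-system --selector k8s-app=metrics-server --all-containers --prefix --follow --tail=10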

I could've done much the same BUT via a much more complicated route using grep and awk: -

kubectl logs --namespace kube-system $(kubectl get pods --namespace kube-system | grep metrics-server | awk '{print $1}')  -c metrics-server

I1011 17:10:21.606350       1 dynamic_serving_content.go:130] Starting serving-cert::/etc/metrics-server-certs/tls.crt::/etc/metrics-server-certs/tls.key
I1011 17:10:21.606421       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1011 17:10:21.626860       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1011 17:10:21.646520       1 secure_serving.go:197] Serving securely on [::]:4443
I1011 17:10:21.627139       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I1011 17:10:21.647378       1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
I1011 17:10:21.647922       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I1011 17:10:21.606461       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I1011 17:10:21.686383       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I1011 17:10:22.007270       1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController 
I1011 17:10:22.046837       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
I1011 17:10:22.126237       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 


so Lili's approach is WAY more streamlined.
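
There's also a middle ground which avoids grep and awk but still resolves the pod name first, by letting kubectl do the filtering via the same selector (assuming a single matching pod): -

kubectl logs --namespace kube-system $(kubectl get pods --namespace kube-system --selector k8s-app=metrics-server --output name) -c metrics-server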
