Tuesday, 23 February 2021

Munging Dockerfiles using sed

So I had a requirement to update a Dockerfile, which I'd pulled from a GitHub repository, without actually adding my changes ( via git add and git commit ) to that repo ...

Specifically, I wanted to add a command to update the to-be-built Alpine image.

Here's how I solved it ...

I'd cloned the target repo, which included the Dockerfile ( example below ): -

FROM alpine:3.7
RUN apk add --no-cache mysql-client
ENTRYPOINT ["mysql"]

At present, this is what happens when I build the image: -

docker build -f Dockerfile .

Sending build context to Docker daemon  2.048kB
Step 1/3 : FROM alpine:3.7
3.7: Pulling from library/alpine
5d20c808ce19: Pull complete 
Digest: sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10
Status: Downloaded newer image for alpine:3.7
 ---> 6d1ef012b567
Step 2/3 : RUN apk add --no-cache mysql-client
 ---> Running in 07bc9ca0e14a
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz
(1/6) Installing mariadb-common (10.1.41-r0)
(2/6) Installing ncurses-terminfo-base (6.0_p20171125-r1)
(3/6) Installing ncurses-terminfo (6.0_p20171125-r1)
(4/6) Installing ncurses-libs (6.0_p20171125-r1)
(5/6) Installing mariadb-client (10.1.41-r0)
(6/6) Installing mysql-client (10.1.41-r0)
Executing busybox-1.27.2-r11.trigger
OK: 41 MiB in 19 packages
Removing intermediate container 07bc9ca0e14a
 ---> 43862371f8a4
Step 3/3 : ENTRYPOINT ["mysql"]
 ---> Running in d8b08c967cc1
Removing intermediate container d8b08c967cc1
 ---> 1ee30800ffbd
Successfully built 1ee30800ffbd

I wanted to add a command to run apk upgrade before the ENTRYPOINT entry ( so, in essence, inserting it between lines 2 and 3 ).

This is what I did: -

sed -i '' -e "2s/^//p; 2s/^.*/RUN apk --no-cache upgrade/" Dockerfile

which results in: -

FROM alpine:3.7
RUN apk add --no-cache mysql-client
RUN apk --no-cache upgrade
ENTRYPOINT ["mysql"]

Now the build looks like this: -

docker build -f Dockerfile .

Sending build context to Docker daemon  2.048kB
Step 1/4 : FROM alpine:3.7
3.7: Pulling from library/alpine
5d20c808ce19: Pull complete 
Digest: sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10
Status: Downloaded newer image for alpine:3.7
 ---> 6d1ef012b567
Step 2/4 : RUN apk add --no-cache mysql-client
 ---> Running in b29668e70377
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz
(1/6) Installing mariadb-common (10.1.41-r0)
(2/6) Installing ncurses-terminfo-base (6.0_p20171125-r1)
(3/6) Installing ncurses-terminfo (6.0_p20171125-r1)
(4/6) Installing ncurses-libs (6.0_p20171125-r1)
(5/6) Installing mariadb-client (10.1.41-r0)
(6/6) Installing mysql-client (10.1.41-r0)
Executing busybox-1.27.2-r11.trigger
OK: 41 MiB in 19 packages
Removing intermediate container b29668e70377
 ---> 971a3d538edf
Step 3/4 : RUN apk --no-cache upgrade
 ---> Running in 8dfa10b481ad
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz
(1/2) Upgrading musl (1.1.18-r3 -> 1.1.18-r4)
(2/2) Upgrading musl-utils (1.1.18-r3 -> 1.1.18-r4)
Executing busybox-1.27.2-r11.trigger
OK: 41 MiB in 19 packages
Removing intermediate container 8dfa10b481ad
 ---> 35d7cbec77c0
Step 4/4 : ENTRYPOINT ["mysql"]
 ---> Running in c0d7d310d396
Removing intermediate container c0d7d310d396
 ---> 4ad02b88f4d4
Successfully built 4ad02b88f4d4

In essence, the sed command duplicates line 2: -

RUN apk add --no-cache mysql-client

as line 3, and then replaces the newly duplicated text with the required replacement: -

RUN apk --no-cache upgrade

Neat, eh ?

I've got this in a script, and it works a treat .....
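
For anyone on GNU sed ( i.e. most Linux boxes ), the same insertion can be done with the a ( append ) command - and note that GNU sed's -i does not take the empty '' argument that BSD/macOS sed requires. A quick sketch, using a hypothetical scratch copy of the Dockerfile: -

```shell
# Recreate the original three-line Dockerfile as a scratch file
printf 'FROM alpine:3.7\nRUN apk add --no-cache mysql-client\nENTRYPOINT ["mysql"]\n' > Dockerfile.test

# GNU sed: append a new line after line 2 ( the one-line "2a text" form is a GNU extension )
sed -i '2a RUN apk --no-cache upgrade' Dockerfile.test

cat Dockerfile.test
```

On macOS's BSD sed, the append command is fussier ( it wants a backslash and a literal newline after a ), which is presumably why the duplicate-then-replace trick above is handy there.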

Wednesday, 17 February 2021

Tinkering with YQ, it's like JQ but for YAML rather than JSON

A colleague was tinkering with Mike Farah's excellent yq tool, and had asked about updating it on his Mac.

I've got yq installed via Homebrew: -

brew update

which threw up: -

Error: 

  homebrew-core is a shallow clone.

To `brew update`, first run:

  git -C /usr/local/Homebrew/Library/Taps/homebrew/homebrew-core fetch --unshallow

so I did this: -

git -C /usr/local/Homebrew/Library/Taps/homebrew/homebrew-core fetch --unshallow

and then: -

brew upgrade yq

which resulted in the latest version: -

yq --version

yq version 4.5.1

Subsequently, my friend was looking to use the evaluate feature of yq to run queries against YAML documents, with the query string being defined in an environment variable.

Therefore, I had to try this ...

Firstly, I created sample.yaml 

name:
 - Ben
 - Dave
 - Tom
 - Jerry

which yq correctly evaluates: -

yq eval sample.yaml

name:
  - Ben
  - Dave
  - Tom
  - Jerry

and: -

yq eval .name sample.yaml

- Ben
- Dave
- Tom
- Jerry

I then set an environment variable to define my query: -

export FRIEND="Ben"

and updated my yq expression, which assigns that value to the .name node: -

yq eval ".name = \"$FRIEND\"" sample.yaml

name: Ben

and then changed my variable: -

export FRIEND="Tom"

and re-ran my query: -

yq eval ".name = \"$FRIEND\"" sample.yaml

name: Tom
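
The escaped double quotes are doing the heavy lifting here: the shell expands $FRIEND before yq ever runs, so yq just receives a literal expression. A quick way to see exactly what yq is handed ( no yq required ): -

```shell
export FRIEND="Tom"

# The outer double quotes let the shell expand the variable;
# the escaped inner quotes survive to become yq's string literal
echo ".name = \"$FRIEND\""
# prints: .name = "Tom"
```

If you'd rather not escape at all, yq v4 also provides env-variable operators - e.g. yq eval '.name = strenv(FRIEND)' sample.yaml - which read the exported variable directly.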

Thanks to this: -

https://mikefarah.gitbook.io/yq/commands/evaluate

Monday, 15 February 2021

JQ - more filtering, more fun

Building upon earlier posts, I wanted to streamline the output from a REST API that returns a list of running containers: -

curl -s -k -X GET https://my.endpoint.com/containers -H 'Accept: application/json' -H 'Content-Type: application/json' -H 'Authorization: Bearer '"$ACCESS_TOKEN" | jq '.data[] | select (.Names[] | contains("dave"))' | jq .Names

[
  "/davehay_k8s_worker_1"
]
[
  "/davehay_k8s_master"
]
[
  "/davehay_k8s_worker_2"
]

Specifically, I wanted to remove the extraneous square brackets and double quotes ...

Here we go ...

curl -s -k -X GET https://my.endpoint.com/containers -H 'Accept: application/json' -H 'Content-Type: application/json' -H 'Authorization: Bearer '"$ACCESS_TOKEN"  | jq '.data[] | select (.Names[] | contains("dave"))' | jq -r '.Names[]'

/davehay_k8s_worker_1
/davehay_k8s_master
/davehay_k8s_worker_2

As with everything, there's a better way ....

But this works for me :-)
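
One such better way: the two jq invocations can be collapsed into a single filter. A sketch, against a hypothetical sample that mimics the API's shape: -

```shell
# Hypothetical sample mimicking the API response shape
cat > containers.json <<'EOF'
{ "data": [ { "Names": ["/davehay_k8s_master"] }, { "Names": ["/other_container"] } ] }
EOF

# One jq invocation: select the matching objects, then flatten .Names, all in one pipeline
jq -r '.data[] | select(.Names[] | contains("dave")) | .Names[]' containers.json
# prints: /davehay_k8s_master
```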

I learn something new each and every day - finding big files on macOS

This came up in the context of a colleague trying to work out what's eating his Mac's disk.

Whilst I'm familiar with using the built-in Storage Management app, which includes a Recommendations tab, I'd not realised that there's an easy way to look for files, based upon size, using the mdfind command: -

mdfind "kMDItemFSSize >$[1024*1024*1024]"

/System/Library/dyld/dyld_shared_cache_x86_64
/System/Library/dyld/dyld_shared_cache_x86_64h
/System/Library/dyld/dyld_shared_cache_arm64e
/System/Library/dyld/aot_shared_cache
/Applications/HCL Notes.app
/Applications/Docker.app
/Users/hayd/Virtual Machines.localized/Ubuntu.vmwarevm
/Users/hayd/Library/Containers/com.docker.docker/Data/vms/0/data/Docker.raw
/Applications/Microsoft Excel.app
/Applications/iMovie.app
/Applications/Microsoft Word.app
/Applications/Microsoft PowerPoint.app
/Applications/VMware Fusion.app
/Users/hayd/Virtual Machines.localized/Windows10.vmwarevm

Now my example is looking for really large files - 1024^3 bytes, i.e. files exceeding 1 GB in size - but it's good to know .....
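
Worth noting that mdfind leans on the Spotlight index, so it's macOS-only; on Linux, the closest everyday equivalent is probably find with a size predicate. A sketch: -

```shell
# GNU find: regular files over 1 GiB under the home directory,
# with errors ( e.g. permission denied ) discarded
find "$HOME" -type f -size +1G 2>/dev/null
```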

Thursday, 11 February 2021

Gah, ImagePullBackOff with Calico CNI running on Kubernetes

Whilst deploying Calico Node etc. to my cluster, via: -

kubectl apply -f calico.yaml

and whilst checking the running Pods, via: -

kubectl get pods -A

I was seeing: -

...

kube-system   calico-node-9srv6                         0/1     Init:ErrImagePull   0          8s

...

I dug into that failing Pod with: -

kubectl describe pod calico-node-9srv6 --namespace kube-system

which showed: -

...
  Type     Reason   Age                    From                   Message
  ----     ------   ----                   ----                   -------
  Normal   BackOff  17m (x303 over 88m)    kubelet, 50c933ad26be  Back-off pulling image "us.icr.io/davehay/calico/cni:latest-s390x"
  Warning  Failed   2m56s (x368 over 88m)  kubelet, 50c933ad26be  Error: ImagePullBackOff
...


Now I knew that it wasn't an authentication issue, as my YAML was also defining a Secret, as per my previous post, and had defined that Secret within the YAML, using: -

...
imagePullSecrets:
- name: my-secret
...

So why wasn't it working .... ?

And then it struck me .... DOH!

My Pod is running inside the kube-system Namespace ....

You know where I'm going with this, am I right ?

Yes, my Secret was NOT inside the same kube-system Namespace, but was in default.
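
In hindsight, a quick check would have caught this straight away - assuming kubectl access, and using an illustrative Secret name: -

```shell
# Where does the Secret actually live ? List it across all Namespaces
# ( the Secret name here is illustrative )
kubectl get secrets --all-namespaces --field-selector metadata.name=my-secret
```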

Once I updated my YAML to redefine my Secret: -

...
apiVersion: v1
kind: Secret
data:
  .dockerconfigjson:
    <HERE'S THE BASE64 ENCODED STUFF>
metadata:
  name: my-secret
  namespace: kube-system
type: kubernetes.io/dockerconfigjson
...

re-applied the YAML and deleted the failing Pod, all was well.

Wednesday, 10 February 2021

Argh, Kubernetes and YAML hell

I was trying to create a Kubernetes (K8s) Secret, containing existing Docker credentials, as per this: -

Create a Secret based on existing Docker credentials

and kept hitting syntax errors with the YAML.

For reference, in this scenario, we've already logged into a container registry, such as IBM Container Registry or Docker Hub, and want to grab the credentials that Docker itself "caches" in ~/.docker/config.json

Wait, what ? You didn't know that Docker helpfully does that ? Another good reason to NOT leave yourself logged into a container registry when you step away from your box ....

Anyhow, as per the above linked documentation, the trick is to encapsulate the content of that file, encoded using Base64, into a YAML file that looks something like this: -

---
apiVersion: v1
kind: Secret
data:
  .dockerconfigjson:
    <HERE'S THE BASE64 ENCODED STUFF>
metadata:
  name: my-secret
type: kubernetes.io/dockerconfigjson

The trick is to get the Base64 encoded stuff just right ....

I was doing this: -

cat ~/.docker/config.json | base64 

which resulted in: -

ewoJImF1dGhzIjoge30sCgkiSHR0cEhlYWRlcnMiOiB7CgkJIlVzZXItQWdlbnQiOiAiRG9ja2Vy
LUNsaWVudC8xOS4wMy42IChsaW51eCkiCgl9Cn0=

I kept seeing exceptions such as: -

error: error parsing secret.yaml: error converting YAML to JSON: yaml: line 7: could not find expected ':'

and: -

Error from server (BadRequest): error when creating "secret.yaml": Secret in version "v1" cannot be handled as a Secret: v1.Secret.ObjectMeta: v1.ObjectMeta.TypeMeta: Kind: Data: decode base64: illegal base64 data at input byte 76, error found in #10 byte of ...|BLAHBLAH=="},"kind":"|..., bigger context ...|BLAHBLAHBLAHBLAHBLAHBLAHBLAHBLAHBLAHBLAHBLAHBLAHBLAHBLAH=="},"kind":"Secret","metadata":{"annotations":{"kube|...

when I tried to apply the YAML: -

kubectl apply -f secret.yaml

And then I re-read the documentation, for the 11th time, and saw: -

base64 encode the docker file and paste that string, unbroken as the value for field data[".dockerconfigjson"]

Can you see what I was doing wrong ?

Yep, I wasn't "telling" the Base64 encoder to produce an unbroken ( and, more importantly, unwrapped ) string.

This time I did it right: -

cat ~/.docker/config.json | base64 --wrap=0

resulting in this: -

ewoJImF1dGhzIjoge30sCgkiSHR0cEhlYWRlcnMiOiB7CgkJIlVzZXItQWdlbnQiOiAiRG9ja2VyLUNsaWVudC8xOS4wMy42IChsaW51eCkiCgl9Cn0=root@379cd9170839:~# 

Having discarded the user@hostname stuff, I was left with this: -

ewoJImF1dGhzIjoge30sCgkiSHR0cEhlYWRlcnMiOiB7CgkJIlVzZXItQWdlbnQiOiAiRG9ja2VyLUNsaWVudC8xOS4wMy42IChsaW51eCkiCgl9Cn0=

I updated my YAML: -

---
apiVersion: v1
kind: Secret
data:
  .dockerconfigjson: ewoJImF1dGhzIjoge30sCgkiSHR0cEhlYWRlcnMiOiB7CgkJIlVzZXItQWdlbnQiOiAiRG9ja2VyLUNsaWVudC8xOS4wMy42IChsaW51eCkiCgl9Cn0=
metadata:
  name: my-secret
type: kubernetes.io/dockerconfigjson

and applied it: -

kubectl apply -f secret.yaml 

secret/armadamultiarch created

and we're off to the races!
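
One portability wrinkle worth noting: --wrap=0 ( or -w0 ) is GNU coreutils syntax; macOS's base64 doesn't wrap its output in the first place. A quick sketch of the no-wrap behaviour, using a trivial stand-in for the real config.json: -

```shell
# Trivial stand-in for ~/.docker/config.json
printf '%s' '{"auths":{}}' > config.json

# GNU base64 wraps at 76 columns by default; -w0 disables that
base64 -w0 config.json
# prints: eyJhdXRocyI6e319
```

Alternatively, kubectl can build the whole Secret for you - kubectl create secret generic regcred --from-file=.dockerconfigjson=$HOME/.docker/config.json --type=kubernetes.io/dockerconfigjson - which sidesteps the Base64 wrangling entirely.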

Wednesday, 27 January 2021

More about jq - this time it's searching for stuff

Having written a lot about jq recently, I'm continuing to have fun.

Today it's about searching for stuff, as I was seeking to parse a huge amount of output ( a list of running containers ) for a snippet of the container's name ....

Here's an example of how I solved it ...

Take an example JSON document: -

cat family.json 

{
    "friends": [
        {
            "givenName": "Dave",
            "familyName": "Hay"
        },
        {
            "givenName": "Homer",
            "familyName": "Simpson"
        },
        {
            "givenName": "Marge",
            "familyName": "Simpson"
        },
        {
            "givenName": "Lisa",
            "familyName": "Simpson"
        },
        {
            "givenName": "Bart",
            "familyName": "Simpson"
        }
    ]
}

I can then use jq to dump out the entire document: -

cat family.json | jq

{
  "friends": [
    {
      "givenName": "Dave",
      "familyName": "Hay"
    },
    {
      "givenName": "Homer",
      "familyName": "Simpson"
    },
    {
      "givenName": "Marge",
      "familyName": "Simpson"
    },
    {
      "givenName": "Lisa",
      "familyName": "Simpson"
    },
    {
      "givenName": "Bart",
      "familyName": "Simpson"
    }
  ]
}

but, say, I want to find all the records where the familyName is Simpson ?

cat family.json | jq -c '.friends[] | select(.familyName | contains("Simpson"))'

{"givenName":"Homer","familyName":"Simpson"}
{"givenName":"Marge","familyName":"Simpson"}
{"givenName":"Lisa","familyName":"Simpson"}
{"givenName":"Bart","familyName":"Simpson"}

or all the records where the givenName contains the letter a ?

cat family.json | jq -c '.friends[] | select(.givenName | contains("a"))'

{"givenName":"Dave","familyName":"Hay"}
{"givenName":"Marge","familyName":"Simpson"}
{"givenName":"Lisa","familyName":"Simpson"}
{"givenName":"Bart","familyName":"Simpson"}

or, as an edge-case, where the givenName contains the letter A or the letter a, i.e. ignoring the case ?

cat family.json | jq -c '.friends[] | select(.givenName | match("A";"i"))'

{"givenName":"Dave","familyName":"Hay"}
{"givenName":"Marge","familyName":"Simpson"}
{"givenName":"Lisa","familyName":"Simpson"}
{"givenName":"Bart","familyName":"Simpson"}
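
An alternative to regex matching here is to normalise the case first with jq's ascii_downcase, then fall back to a plain contains. A sketch, using a one-record sample: -

```shell
# Fold the value to lower case before testing it
echo '{"friends":[{"givenName":"Dave","familyName":"Hay"}]}' \
  | jq -c '.friends[] | select(.givenName | ascii_downcase | contains("a"))'
# prints: {"givenName":"Dave","familyName":"Hay"}
```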

TL;DR; jq rules!

JQ - Syntax on macOS vs. Linux

I keep forgetting that the syntax of commands on macOS often varies from Linux platforms, such as Ubuntu.

JQ ( jq ) is a good example.

So here's an example using json_pp ( JSON Pretty Print )

echo '{"givenName":"Dave","familyName":"Hay"}' | json_pp

{
   "givenName" : "Dave",
   "familyName" : "Hay"
}

and here's the same example using jq 

echo '{"givenName":"Dave","familyName":"Hay"}' | jq

{
  "givenName": "Dave",
  "familyName": "Hay"
}

both on macOS.

Having spun up an Ubuntu container: -

docker run -it ubuntu:latest bash

and installed json_pp and jq: -

apt-get update && apt-get install -y libjson-pp-perl

and: -

apt-get update && apt-get install -y jq

here's the same pair of examples: -

echo '{"givenName":"Dave","familyName":"Hay"}' | json_pp

{
   "familyName" : "Hay",
   "givenName" : "Dave"
}

echo '{"givenName":"Dave","familyName":"Hay"}' | jq


{
  "givenName": "Dave",
  "familyName": "Hay"
}

So far, so good.

To be sure, on both macOS and Ubuntu, I double-checked the version of jq : -

jq --version

jq-1.6

Again, all is fine.

And then I hit an issue ....

I was building a Jenkins Job that runs from a GitHub repo, with a Jenkinsfile that invokes a Bash script.

At one point, I saw: -

jq - commandline JSON processor [version 1.5-1-a5b5cbe]
Usage: jq [options] <jq filter> [file...]
jq is a tool for processing JSON inputs, applying the
given filter to its JSON text inputs and producing the
filter's results as JSON on standard output.
The simplest filter is ., which is the identity filter,
copying jq's input to its output unmodified (except for
formatting).
For more advanced filters see the jq(1) manpage ("man jq")
and/or https://stedolan.github.io/jq
Some of the options include:
-c compact instead of pretty-printed output;
-n use `null` as the single input value;
-e set the exit status code based on the output;
-s read (slurp) all inputs into an array; apply filter to it;
-r output raw strings, not JSON texts;
-R read raw strings, not JSON texts;
-C colorize JSON;
-M monochrome (don't colorize JSON);
-S sort keys of objects on output;
--tab use tabs for indentation;
--arg a v set variable $a to value <v>;
--argjson a v set variable $a to JSON value <v>;
--slurpfile a f set variable $a to an array of JSON texts read from <f>;
See the manpage for more options.

Note the version of jq being reported - by default, it is: -

1.5-1-a5b5cbe

To validate this, I created a basic Jenkinsfile: -

timestamps {
    node('cf_slave') {
      stage('Testing jq') {
        sh '''#!/bin/bash
              which jq
              ls -al `which jq`
              jq --version
              echo '{"givenName":"Dave","familyName":"Hay"}' | jq
            '''
            }
    }
}

which: -

(a) shows which jq is being used

(b) shows the file-path of that jq

(c) shows the version of that jq

(d) attempts to render the same bit of JSON

which returned: -

09:06:22  /usr/bin/jq
09:06:22  -rwxr-xr-x 1 root root 280720 Sep  7  2018 /usr/bin/jq
09:06:22  jq-1.5-1-a5b5cbe
09:06:22  jq - commandline JSON processor [version 1.5-1-a5b5cbe]
09:06:22  Usage: jq [options] <jq filter> [file...]
09:06:22  
09:06:22   jq is a tool for processing JSON inputs, applying the
09:06:22   given filter to its JSON text inputs and producing the
09:06:22   filter's results as JSON on standard output.
09:06:22   The simplest filter is ., which is the identity filter,
09:06:22   copying jq's input to its output unmodified (except for
09:06:22   formatting).
09:06:22   For more advanced filters see the jq(1) manpage ("man jq")
09:06:22   and/or https://stedolan.github.io/jq
09:06:22  
09:06:22   Some of the options include:
09:06:22   -c compact instead of pretty-printed output;
09:06:22   -n use `null` as the single input value;
09:06:22   -e set the exit status code based on the output;
09:06:22   -s read (slurp) all inputs into an array; apply filter to it;
09:06:22   -r output raw strings, not JSON texts;
09:06:22   -R read raw strings, not JSON texts;
09:06:22   -C colorize JSON;
09:06:22   -M monochrome (don't colorize JSON);
09:06:22   -S sort keys of objects on output;
09:06:22   --tab use tabs for indentation;
09:06:22   --arg a v set variable $a to value <v>;
09:06:22   --argjson a v set variable $a to JSON value <v>;
09:06:22   --slurpfile a f set variable $a to an array of JSON texts read from <f>;
09:06:22   See the manpage for more options.

So, there's the issue - the default version of jq that's included within my cf_slave container is out-of-date.

There are two resolutions here: -

(a) Install an up-to-date version of jq

(b) Add a trailing period ( the explicit identity filter, . ) to the jq command

echo '{"givenName":"Dave","familyName":"Hay"}' | jq .

{
  "givenName": "Dave",
  "familyName": "Hay"
}

I'm still working on the former: -

sudo apt-get update && sudo apt-get install -y jq

which results in: -

09:24:14  jq is already the newest version (1.5+dfsg-1ubuntu0.1).

so I need to dig into my cf_slave container a bit more ...

In the meantime, the latter resolution ( adding the trailing period ) does the trick: -

09:24:14  /usr/bin/jq
09:24:14  -rwxr-xr-x 1 root root 280720 Sep  7  2018 /usr/bin/jq
09:24:14  jq-1.5-1-a5b5cbe
09:24:14  {
09:24:14    "givenName": "Dave",
09:24:14    "familyName": "Hay"
09:24:14  }

Saturday, 23 January 2021

More about K8s medium-strength ciphers and Kube Scheduler

Last year, I wrote about how I was able to mitigate a medium-strength ciphers warning against the kube-scheduler component of IBM Cloud Private ( a Kubernetes distribution ): -

Mitigating "SSL Medium Strength Cipher Suites Supported" warnings from Nessus scans

I did a little bit more with this yesterday on a Red Hat Linux box.

This was the warning that our Nessus box threw up: -

The remote host supports the use of SSL ciphers that offer medium strength encryption, which we currently regard as those with key lengths at least 56 bits and less than 112 bits.

against port 10259 of our host.

I used this command: -

netstat -aonp|grep 10259 

to work out which process was using that particular port: -

tcp6       0      0 :::10259                :::*                    LISTEN      23860/hyperkube      off (0.00/0/0)

and then used: -

ps auxw | grep 23860

to check the specific process, which verified that it was indeed kube-scheduler.
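
On newer distros, netstat is often missing; ss, from the iproute2 package, does the same port-to-process mapping in a single step. A sketch: -

```shell
# -t TCP, -l listening, -n numeric, -p show the owning process
# ( -p needs appropriate privileges to see other users' processes )
ss -tlnp 'sport = :10259'
```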

I then edited the relevant configuration file: -

/etc/cfc/pods/master.json 

( having backed it up first )

and changed from: -

--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256

to: -

--tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384

Once I killed the kube-scheduler process - kill -9 23860 - and waited for kube-scheduler to restart, Nessus was happy.

I also used: -

openssl s_client -connect 127.0.0.1:10259 </dev/null

to validate the cipher being used: -

...

New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES128-GCM-SHA256
...
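
If you're curious what OpenSSL itself groups as medium strength, it can enumerate its own MEDIUM cipher class ( note that this grouping doesn't map exactly onto Nessus's 56-112 bit definition, but it's a useful first look ): -

```shell
# List the ciphers OpenSSL classifies as MEDIUM strength, one per line
openssl ciphers 'MEDIUM' | tr ':' '\n'
```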

Job done!

Required reading: -

Modifying Cipher Suites used by Kubernetes in IBM Cloud Private


Scripts for a future me - grabbing a container's IP address

I'm writing this now, as I'll likely need it in the not-too-distant future.

I want to grab the IP address of a Linux box e.g. an Ubuntu container or VM.

The command ip address returns a slew of information: -

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
3: ip6tnl0@NONE: <NOARP> mtu 1452 qdisc noop state DOWN group default qlen 1000
    link/tunnel6 :: brd ::
8: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever

If I just want the address of one of the adapters, e.g. eth0, then this works for me: -

ip address show dev eth0 | grep inet | awk '{print $2}' | cut -d/ -f1

172.17.0.2

or, a way shorter version: -

hostname -I

172.17.0.2 

As ever, Linux gives us fifty-leven ways to do things !
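
One caveat with hostname -I: it prints every configured address, space-separated, so on a multi-homed box you may want just the first one: -

```shell
# hostname -I can return several addresses; take the first field
hostname -I | awk '{print $1}'
```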

For the record, if ip isn't installed e.g. within an Ubuntu container, here's how to get it: -

apt-get update && apt-get install -y iproute2

which ip

/usr/sbin/ip

ip help

Usage: ip [ OPTIONS ] OBJECT { COMMAND | help }
       ip [ -force ] -batch filename
where  OBJECT := { link | address | addrlabel | route | rule | neigh | ntable |
                   tunnel | tuntap | maddress | mroute | mrule | monitor | xfrm |
                   netns | l2tp | fou | macsec | tcp_metrics | token | netconf | ila |
                   vrf | sr | nexthop }
       OPTIONS := { -V[ersion] | -s[tatistics] | -d[etails] | -r[esolve] |
                    -h[uman-readable] | -iec | -j[son] | -p[retty] |
                    -f[amily] { inet | inet6 | mpls | bridge | link } |
                    -4 | -6 | -I | -D | -M | -B | -0 |
                    -l[oops] { maximum-addr-flush-attempts } | -br[ief] |
                    -o[neline] | -t[imestamp] | -ts[hort] | -b[atch] [filename] |
                    -rc[vbuf] [size] | -n[etns] name | -N[umeric] | -a[ll] |
                    -c[olor]}

Friday, 15 January 2021

JQ saves me time AND typing

Over the past few months, I've been getting to grips with jq, and have only just realised that I can actually use it to parse JSON way better than using grep and awk and sed.

TL;DR; I've got a Bash script that generates an Access Token by wrapping a cURL command e.g.

export ACCESS_TOKEN=$(curl -s -k -X POST https://myservice.com -H 'Content-Type: application/json' -d '{"kind": "request","parameters":{"user": "blockchain","password": "passw0rd"}}')

I was previously using a whole slew of commands to extract the required token from the response: -

{ "kind": "response", "parameters": { "token": "this_is_a_token", "isAdmin": true } }

including json_pp and grep and awk and sed e.g.

export ACCESS_TOKEN=$(curl -s -k -X POST https://myservice.com -H 'Content-Type: application/json' -d '{"kind": "request","parameters":{"user": "user","password": "passw0rd"}}' | json_pp | grep -i token | awk '{print $3}' | sed -e 's/"//g' | sed -e 's/,//g')

And then I realised ... this is JSON, right ?

So why am I using conventional Unix scripting tools to munge JSON ?

So I stripped 'em all out and ended up with this: -

export ACCESS_TOKEN=$(curl -s -k -X POST https://myservice.com -H 'Content-Type: application/json' -d '{"kind": "request","parameters":{"user": "user","password": "passw0rd"}}' | jq -r .parameters.token)

In other words, I replaced json_pp with jq and used that to parse the response for the .token element of the .parameters object ...

Note that I'm using jq -r ( aka raw ) to remove the double quotes ( " ) around the token element of the response.
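
While tidying, it's also worth adding -e, which makes jq's exit status reflect whether the filter actually produced anything - handy for failing fast if the service returns an error payload instead of a token. A sketch, against a canned copy of the response ( no live endpoint needed ): -

```shell
# Canned copy of the service's response
RESPONSE='{ "kind": "response", "parameters": { "token": "this_is_a_token", "isAdmin": true } }'

# -r strips the quotes; -e exits non-zero if the result is null or false
if ACCESS_TOKEN=$(echo "$RESPONSE" | jq -re '.parameters.token'); then
    echo "Token is ${ACCESS_TOKEN}"
else
    echo "No token in response" >&2
fi
# prints: Token is this_is_a_token
```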

Way simpler ! Now off to wrangle the rest of my scripts ...

#LifeIsGood


Tuesday, 5 January 2021

HTTP 403 - Unauthorized - REST API being hexed ...

A colleague had an interesting challenge this AM, with a REST API authorisation failure.

The API call, using the POST verb, should have just worked, but she was seeing: -

{
   "message" : "Unauthorized",
   "statusCode" : 403
}

From an authorisation perspective, the cURL command was using an environment variable, ACCESS_TOKEN, which had been previously set by a prior Bash script, via this: -

—H "authorization: Bearer ${ACCESS_TOKEN}"

Having copied the command that she was running, I saw the same exception even though similar POST verbs worked for me.

After much digging, I realised the problem: her cURL command should have included this: -

-H "authorization: Bearer ${ACCESS_TOKEN}"

Hmm, looks pretty identical, right ?

WRONG!

For some weird reason, the hyphen ( - ) in my colleague's command wasn't actually a hyphen.

I ended up using hexedit to dig into the failing command: -



where those strange-looking period ( . ) symbols were actually: -

20 E2 80 94 48 20

rather than: -

20 2D 48 20

So the failing command had hex E2 80 94 ( a UTF-8 em-dash ) whereas the working command had hex 2D ( a true hyphen ), before the letter H ( which is hex 48 ).
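
A quick way to expose this kind of look-alike character, without reaching for a full hexedit session, is od: -

```shell
# The em-dash version: three bytes ( e2 80 94 ) before the 'H' ( 48 )
printf '%s' '—H' | od -An -tx1

# The genuine hyphen: a single 2d before the 48
printf '%s' '-H' | od -An -tx1
```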

I'm guessing that this was a copy/paste-induced bug, but it was fun digging .....

