Wednesday, 27 January 2021

More about jq - this time it's searching for stuff

 Having written a lot about jq recently, I'm continuing to have fun.

Today it's about searching for stuff: I was trying to parse a huge amount of output ( a list of running containers ) for a snippet of a container's name ...

Here's an example of how I solved it ...

Take an example JSON document: -

cat family.json 

{
    "friends": [
        {
            "givenName": "Dave",
            "familyName": "Hay"
        },
        {
            "givenName": "Homer",
            "familyName": "Simpson"
        },
        {
            "givenName": "Marge",
            "familyName": "Simpson"
        },
        {
            "givenName": "Lisa",
            "familyName": "Simpson"
        },
        {
            "givenName": "Bart",
            "familyName": "Simpson"
        }
    ]
}

I can then use jq to dump out the entire document: -

cat family.json | jq

{
  "friends": [
    {
      "givenName": "Dave",
      "familyName": "Hay"
    },
    {
      "givenName": "Homer",
      "familyName": "Simpson"
    },
    {
      "givenName": "Marge",
      "familyName": "Simpson"
    },
    {
      "givenName": "Lisa",
      "familyName": "Simpson"
    },
    {
      "givenName": "Bart",
      "familyName": "Simpson"
    }
  ]
}

But, say, what if I want to find all the records where the familyName is Simpson ?

cat family.json | jq -c '.friends[] | select(.familyName | contains("Simpson"))'

{"givenName":"Homer","familyName":"Simpson"}
{"givenName":"Marge","familyName":"Simpson"}
{"givenName":"Lisa","familyName":"Simpson"}
{"givenName":"Bart","familyName":"Simpson"}

or all the records where the givenName contains the letter a ?

cat family.json | jq -c '.friends[] | select(.givenName | contains("a"))'

{"givenName":"Dave","familyName":"Hay"}
{"givenName":"Marge","familyName":"Simpson"}
{"givenName":"Lisa","familyName":"Simpson"}
{"givenName":"Bart","familyName":"Simpson"}

or, as an edge-case, all the records where the givenName contains the letter A or the letter a, i.e. ignoring the case ?

cat family.json | jq -c '.friends[] | select(.givenName | match("A";"i"))'

{"givenName":"Dave","familyName":"Hay"}
{"givenName":"Marge","familyName":"Simpson"}
{"givenName":"Lisa","familyName":"Simpson"}
{"givenName":"Bart","familyName":"Simpson"}

TL;DR; jq rules!

JQ - Syntax on macOS vs. Linux

I keep forgetting that the syntax of commands on macOS often varies from that on Linux platforms, such as Ubuntu.

JQ ( jq ) is a good example.

So here's an example using json_pp ( Perl's pure-Perl JSON pretty-printer ): -

echo '{"givenName":"Dave","familyName":"Hay"}' | json_pp

{
   "givenName" : "Dave",
   "familyName" : "Hay"
}

and here's the same example using jq 

echo '{"givenName":"Dave","familyName":"Hay"}' | jq

{
  "givenName": "Dave",
  "familyName": "Hay"
}

both on macOS.

Having spun up an Ubuntu container: -

docker run -it ubuntu:latest bash

and installed json_pp and jq: -

apt-get update && apt-get install -y libjson-pp-perl

and: -

apt-get update && apt-get install -y jq

here's the same pair of examples: -

echo '{"givenName":"Dave","familyName":"Hay"}' | json_pp

{
   "familyName" : "Hay",
   "givenName" : "Dave"
}

echo '{"givenName":"Dave","familyName":"Hay"}' | jq


{
  "givenName": "Dave",
  "familyName": "Hay"
}

So far, so good.

To be sure, on both macOS and Ubuntu, I double-checked the version of jq : -

jq --version

jq-1.6

Again, all is fine.

And then I hit an issue ....

I was building a Jenkins Job that runs from a GitHub repo, with a Jenkinsfile that invokes a Bash script.

At one point, I saw: -

jq - commandline JSON processor [version 1.5-1-a5b5cbe]
Usage: jq [options] <jq filter> [file...]
jq is a tool for processing JSON inputs, applying the
given filter to its JSON text inputs and producing the
filter's results as JSON on standard output.
The simplest filter is ., which is the identity filter,
copying jq's input to its output unmodified (except for
formatting).
For more advanced filters see the jq(1) manpage ("man jq")
and/or https://stedolan.github.io/jq
Some of the options include:
-c compact instead of pretty-printed output;
-n use `null` as the single input value;
-e set the exit status code based on the output;
-s read (slurp) all inputs into an array; apply filter to it;
-r output raw strings, not JSON texts;
-R read raw strings, not JSON texts;
-C colorize JSON;
-M monochrome (don't colorize JSON);
-S sort keys of objects on output;
--tab use tabs for indentation;
--arg a v set variable $a to value <v>;
--argjson a v set variable $a to JSON value <v>;
--slurpfile a f set variable $a to an array of JSON texts read from <f>;
See the manpage for more options.

Note the version of jq being reported - by default, it is: -

1.5-1-a5b5cbe

To validate this, I created a basic Jenkinsfile: -

timestamps {
    node('cf_slave') {
      stage('Testing jq') {
        sh '''#!/bin/bash
              which jq
              ls -al `which jq`
              jq --version
              echo '{"givenName":"Dave","familyName":"Hay"}' | jq
            '''
      }
    }
}

which: -

(a) shows which jq is being used

(b) shows the file-path of that jq

(c) shows the version of that jq

(d) attempts to render the same bit of JSON

which returned: -

09:06:22  /usr/bin/jq
09:06:22  -rwxr-xr-x 1 root root 280720 Sep  7  2018 /usr/bin/jq
09:06:22  jq-1.5-1-a5b5cbe
09:06:22  jq - commandline JSON processor [version 1.5-1-a5b5cbe]
09:06:22  Usage: jq [options] <jq filter> [file...]
09:06:22  
09:06:22   jq is a tool for processing JSON inputs, applying the
09:06:22   given filter to its JSON text inputs and producing the
09:06:22   filter's results as JSON on standard output.
09:06:22   The simplest filter is ., which is the identity filter,
09:06:22   copying jq's input to its output unmodified (except for
09:06:22   formatting).
09:06:22   For more advanced filters see the jq(1) manpage ("man jq")
09:06:22   and/or https://stedolan.github.io/jq
09:06:22  
09:06:22   Some of the options include:
09:06:22   -c compact instead of pretty-printed output;
09:06:22   -n use `null` as the single input value;
09:06:22   -e set the exit status code based on the output;
09:06:22   -s read (slurp) all inputs into an array; apply filter to it;
09:06:22   -r output raw strings, not JSON texts;
09:06:22   -R read raw strings, not JSON texts;
09:06:22   -C colorize JSON;
09:06:22   -M monochrome (don't colorize JSON);
09:06:22   -S sort keys of objects on output;
09:06:22   --tab use tabs for indentation;
09:06:22   --arg a v set variable $a to value <v>;
09:06:22   --argjson a v set variable $a to JSON value <v>;
09:06:22   --slurpfile a f set variable $a to an array of JSON texts read from <f>;
09:06:22   See the manpage for more options.

So, there's the issue - the default version of jq that's included within my cf_slave container is out-of-date. Crucially, jq 1.6 happily defaults to the identity filter ( . ) when JSON is piped to it with no filter, whereas jq 1.5 just prints its usage text - which is exactly what I was seeing.

There are two resolutions here: -

(a) Install an up-to-date version of jq

(b) Add a trailing period ( i.e. the explicit identity filter, . ) to the jq command

echo '{"givenName":"Dave","familyName":"Hay"}' | jq .

{
  "givenName": "Dave",
  "familyName": "Hay"
}

I'm still working on the former: -

sudo apt-get update && sudo apt-get install -y jq

which results in: -

09:24:14  jq is already the newest version (1.5+dfsg-1ubuntu0.1).

so I need to dig into my cf_slave container a bit more ...
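
One route I'm considering for the former - very much a sketch at this point, and assuming the container has outbound access to GitHub - is to pull the jq 1.6 binary straight from the project's releases page: -

# grab the jq 1.6 Linux binary and drop it ahead of /usr/bin/jq on the PATH
curl -sL https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64 -o /usr/local/bin/jq
chmod +x /usr/local/bin/jq
jq --version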

In the meantime, the latter resolution ( adding the trailing period ) does the trick: -

09:24:14  /usr/bin/jq
09:24:14  -rwxr-xr-x 1 root root 280720 Sep  7  2018 /usr/bin/jq
09:24:14  jq-1.5-1-a5b5cbe
09:24:14  {
09:24:14    "givenName": "Dave",
09:24:14    "familyName": "Hay"
09:24:14  }

Saturday, 23 January 2021

More about K8s medium-strength ciphers and Kube Scheduler

Last year, I wrote about how I was able to mitigate a medium-strength ciphers warning against the kube-scheduler component of IBM Cloud Private ( a Kubernetes distribution ): -

Mitigating "SSL Medium Strength Cipher Suites Supported" warnings from Nessus scans

I did a little bit more with this yesterday on a Red Hat Linux box.

This was the warning that our Nessus box threw up: -

The remote host supports the use of SSL ciphers that offer medium strength encryption, which we currently regard as those with key lengths at least 56 bits and less than 112 bits.

against port 10259 of our host.

I used this command: -

netstat -aonp | grep 10259

to work out which process was using that particular port: -

tcp6       0      0 :::10259                :::*                    LISTEN      23860/hyperkube      off (0.00/0/0)

and then used: -

ps auxw | grep 23860

to check the specific process, which verified that it was indeed kube-scheduler 
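
As an aside, on boxes where netstat isn't installed, ss ( from the iproute2 package ) does much the same job - a quick sketch: -

# listening TCP sockets, numeric ports, with the owning process
ss -tlnp | grep 10259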

I then edited the relevant configuration file: -

/etc/cfc/pods/master.json 

( having backed it up first )

and changed from: -

--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256

to: -

--tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384

Once I killed the kube-scheduler process - kill -9 23860 - and waited for kube-scheduler to restart, Nessus was happy.

I also used: -

openssl s_client -connect 127.0.0.1:10259 </dev/null

to validate the cipher being used: -

...

New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES128-GCM-SHA256
...
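
For a fuller check - listing every cipher the port will accept, rather than just the one that happened to be negotiated - nmap's ssl-enum-ciphers script is handy; a sketch, assuming nmap is installed on the box: -

# enumerate the TLS ciphers offered on the kube-scheduler port
nmap --script ssl-enum-ciphers -p 10259 127.0.0.1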

Job done!

Required reading: -

Modifying Cipher Suites used by Kubernetes in IBM Cloud Private


Scripts for a future me - grabbing a container's IP address

 I'm writing this now, as I'll likely need it in the not-too-distant future.

I want to grab the IP address of a Linux box e.g. an Ubuntu container or VM.

The command ip address returns a slew of information: -

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
3: ip6tnl0@NONE: <NOARP> mtu 1452 qdisc noop state DOWN group default qlen 1000
    link/tunnel6 :: brd ::
8: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever

If I just want the address of one of the adapters e.g. eth0 then this works for me: -

ip address show dev eth0 | grep inet | awk '{print $2}' | cut -d/ -f1

172.17.0.2

or, a way shorter version: -

hostname -I

172.17.0.2 
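
One caveat: hostname -I prints all of the host's addresses, separated by spaces, so on a box with more than one adapter I'd just grab the first: -

hostname -I | awk '{print $1}'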

As ever, Linux gives us fifty-leven ways to do things !

For the record, if ip isn't installed e.g. within an Ubuntu container, here's how to get it: -

apt-get update && apt-get install -y iproute2

which ip

/usr/sbin/ip

ip help

Usage: ip [ OPTIONS ] OBJECT { COMMAND | help }
       ip [ -force ] -batch filename
where  OBJECT := { link | address | addrlabel | route | rule | neigh | ntable |
                   tunnel | tuntap | maddress | mroute | mrule | monitor | xfrm |
                   netns | l2tp | fou | macsec | tcp_metrics | token | netconf | ila |
                   vrf | sr | nexthop }
       OPTIONS := { -V[ersion] | -s[tatistics] | -d[etails] | -r[esolve] |
                    -h[uman-readable] | -iec | -j[son] | -p[retty] |
                    -f[amily] { inet | inet6 | mpls | bridge | link } |
                    -4 | -6 | -I | -D | -M | -B | -0 |
                    -l[oops] { maximum-addr-flush-attempts } | -br[ief] |
                    -o[neline] | -t[imestamp] | -ts[hort] | -b[atch] [filename] |
                    -rc[vbuf] [size] | -n[etns] name | -N[umeric] | -a[ll] |
                    -c[olor]}
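
Note the -j[son] option in that usage text - given how much jq I've been writing lately, the two pair up rather nicely. A sketch: -

# ask ip for JSON, then pull out the IPv4 address on eth0 with jq
ip -json address show dev eth0 | jq -r '.[0].addr_info[] | select(.family == "inet") | .local'

which, on the container above, should give 172.17.0.2.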

Friday, 15 January 2021

JQ saves me time AND typing

Over the past few months, I've been getting to grips with jq more and more, and have only just realised that I can use it to parse JSON far better than grep and awk and sed.

TL;DR; I've got a Bash script that generates an Access Token by wrapping a cURL command e.g.

export ACCESS_TOKEN=$(curl -s -k -X POST https://myservice.com -H 'Content-Type: application/json' -d '{"kind": "request","parameters":{"user": "blockchain","password": "passw0rd"}}')

I was previously using a whole slew of commands to extract the required token from the response: -

{ "kind": "response", "parameters": { "token": "this_is_a_token", "isAdmin": true } }

including json_pp and grep and awk and sed e.g.

export ACCESS_TOKEN=$(curl -s -k -X POST https://myservice.com -H 'Content-Type: application/json' -d '{"kind": "request","parameters":{"user": "user","password": "passw0rd"}}' | json_pp | grep -i token | awk '{print $3}' | sed -e 's/"//g' | sed -e 's/,//g')

And then I realised ... this is JSON, right ?

So why am I using conventional Unix scripting tools to munge JSON ?

So I stripped 'em all out and ended up with this: -

export ACCESS_TOKEN=$(curl -s -k -X POST https://myservice.com -H 'Content-Type: application/json' -d '{"kind": "request","parameters":{"user": "user","password": "passw0rd"}}' | jq -r .parameters.token)

In other words, I replaced json_pp with jq and used that to parse the response for the .token element of the .parameters object ...

Note that I'm using jq -r ( raw output ) to remove the double quotes ( " ) around the token element of the response.
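
One refinement I'll probably add as I wrangle the rest of my scripts: jq's alternative operator ( // ) gives me a fallback when the token is missing, so the script can bail out cleanly. A sketch, reusing the made-up service from above: -

# empty string, rather than the literal "null", if .parameters.token is absent
export ACCESS_TOKEN=$(curl -s -k -X POST https://myservice.com -H 'Content-Type: application/json' -d '{"kind": "request","parameters":{"user": "user","password": "passw0rd"}}' | jq -r '.parameters.token // empty')

# and bail out if nothing came back
[ -z "${ACCESS_TOKEN}" ] && echo "No token returned" && exit 1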

Way simpler ! Now off to wrangle the rest of my scripts ...

#LifeIsGood


Tuesday, 5 January 2021

HTTP 403 - Unauthorized - REST API being hexed ...

 A colleague had an interesting challenge this AM, with a REST API authorisation failure.

The API call, using the POST verb, should have just worked, but she was seeing: -

{
   "message" : "Unauthorized",
   "statusCode" : 403
}

From an authorisation perspective, the cURL command was using an environment variable, ACCESS_TOKEN, which had previously been set by a prior Bash script, and was passed in via this header: -

—H "authorization: Bearer ${ACCESS_TOKEN}"

Having copied the command that she was running, I saw the same error, even though similar POST requests worked for me.

After much digging, I realised the problem; her cURL command should have included this: -

-H "authorization: Bearer ${ACCESS_TOKEN}"

Hmm, looks pretty identical, right ?

WRONG!

For some weird reason, the hyphen ( - ) in my colleague's command wasn't actually a hyphen.

I ended up using hexedit to dig into the failing command: -



where hexedit showed that those strange-looking period ( . ) symbols in the ASCII pane were actually: -

20 E2 80  94 48 20

rather than: -

20 2D 48  20

So the failing command had hex E2 80 94 ( the UTF-8 encoding of an em dash ) whereas the working command had hex 2D ( a plain hyphen ), before the letter H ( which is hex 48 ).

I'm guessing that this was a copy/paste-induced bug, but it was fun digging .....
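
For next time, a quick way to catch this sort of thing is to scan the offending script for anything outside the ASCII range - a sketch, assuming GNU grep ( the -P flag isn't in the BSD grep that ships with macOS ) and a hypothetical script name: -

# report any line containing a non-ASCII byte, such as a pasted em dash
grep -nP '[^\x00-\x7F]' myscript.sh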

