Thursday 26 November 2020

Back in the day - PuTTY and Windows and RDP

I had an interesting tinker this PM, harking back to a client engagement where we were using PuTTY on Windows to access a bunch of AIX boxen.

In this case, a colleague was running PuTTY on a Windows box, accessed via Microsoft's Remote Desktop client, and was trying to work out how to paste text from macOS into the target Unix box, via PuTTY.

I set up a Windows 10 VM on one of our hypervisors, accessed it via RDP, and downloaded/installed PuTTY.

Once I'd connected to an Ubuntu box from the PuTTY session, I set out to test the options for copy/paste.

Having proven that I could copy text from the Mac using [cmd][c] and paste it into Notepad.exe on the Windows box, using [ctrl][v], I tried, and failed, to do the same within the PuTTY session.

No matter what I did, paste failed to ... paste - at least when invoked via a keyboard shortcut.

Whilst the "right mouse button" action worked for me ( I've got the Apple Magic Mouse 2, so there are actually no buttons - the entire mouse is a button ! ), the keyboard failed ....

I dug around in PuTTY's settings for a while, and then found this: -



Once I changed Ctrl + Shift + [C,V] from No action to System Clipboard : -



it just worked.

In other words, I could, for example, go into Visual Studio Code (VSCode) on my Mac, use [cmd] [a] to select some text e.g. ps auxw and then [cmd] [c] to copy it to the clipboard.

I could then toggle back into the Remote Desktop session, using [cmd] [tab], use [option] [tab] within the RDP session to toggle into the PuTTY session, and hit [shift] [control] [v] to paste it into the PuTTY session: -



So a fair few keystrokes to remember ... but .. SUCCESS!

For the record, I'm running: -


and the remote Windows 10 box has: -



Monday 23 November 2020

Tinkering with Spotlight disk indexing in macOS 11 Big Sur

Having upgraded to Big Sur last week, I'd noticed that Spotlight still hadn't completed disk indexing after ~7 days.

I was digging in further using Terminal: -

sudo mdutil -E /

/:
Error: Index is already changing state.  Please try again in a moment.

sudo mdutil -i on /

Password:
/:
Indexing enabled. 


sudo mdutil -i off /

/:
Error: Index is already changing state.  Please try again in a moment.

sudo mdutil -s /

/:
Error: unexpected indexing state.  kMDConfigSearchLevelTransitioning

None of this looked particularly good ....

Thankfully, a colleague showed me how to turn indexing off: -

sudo mdutil -a -i off

/:
2020-11-23 11:12:08.988 mdutil[75847:1674629] mdutil disabling Spotlight: / -> kMDConfigSearchLevelFSSearchOnly
Indexing disabled.
/System/Volumes/Data:
2020-11-23 11:12:09.061 mdutil[75847:1674629] mdutil disabling Spotlight: /System/Volumes/Data -> kMDConfigSearchLevelFSSearchOnly
Indexing disabled.
/Volumes/Backups of Dave’s MacBook Pro:
2020-11-23 11:12:10.963 mdutil[75847:1674629] mdutil disabling Spotlight: /Volumes/Backups of Dave’s MacBook Pro -> kMDConfigSearchLevelFSSearchOnly
Indexing enabled. 

and on again: -

sudo mdutil -a -i on

/:
Indexing enabled. 
/System/Volumes/Data:
Indexing enabled. 
/Volumes/Backups of Dave’s MacBook Pro:
Indexing enabled. 

and, now, things are looking better ....

sudo mdutil -s /

Password:
/:
Indexing enabled. 

Spotlight is still eating battery : -



but .... we'll see ....
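For anyone wanting to keep an eye on the indexing from Terminal, this ( just a sketch, using plain old ps / grep ) shows the Spotlight daemons - mds, mds_stores and the mdworker processes - and their CPU usage: -

ps aux | grep -E 'mds|mdworker' | grep -v grep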


Saturday 21 November 2020

Synology NAS via Ethernet - more fun n' games

Following on from an earlier ( wow, two years ago ) post: -

Synology DS414 - From Megabits to Gigabits

I was talking with a colleague about the speed of the Ethernet between my Mac ( now a more modern 2018 MacBook Pro ) and my DS414.

I wanted to test, and demonstrate, the speed of the 1 Gb/s ( Gigabit ) Ethernet connection between the two devices: -

MacBook Pro


Synology DS414


Or, via the CLI: -

MacBook Pro

ifconfig en8

en8: flags=8963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
options=6467<RXCSUM,TXCSUM,VLAN_MTU,TSO4,TSO6,CHANNEL_IO,PARTIAL_CSUM,ZEROINVERT_CSUM>
ether 00:e0:4c:68:03:70 
inet6 fe80::14a7:3984:6d54:14f3%en8 prefixlen 64 secured scopeid 0xb 
inet 192.168.1.21 netmask 0xffffff00 broadcast 192.168.1.255
nd6 options=201<PERFORMNUD,DAD>
media: autoselect (1000baseT <full-duplex>)
status: active

Synology DS414

ifconfig eth0

eth0      Link encap:Ethernet  HWaddr 00:11:32:25:58:91  
          inet addr:192.168.1.100  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::211:32ff:fe25:5891/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:90968788 errors:0 dropped:0 overruns:0 frame:0
          TX packets:13099066 errors:0 dropped:19 overruns:0 carrier:0
          collisions:0 txqueuelen:532 
          RX bytes:2857752864 (2.6 GiB)  TX bytes:36719514 (35.0 MiB)
          Interrupt:8 


but, to "prove" the performance between the two, this is what I did: -

Create a 1 GB file

time dd if=/dev/zero of=tstfile bs=1024 count=1024000

1024000+0 records in
1024000+0 records out
1048576000 bytes transferred in 4.880373 secs (214855709 bytes/sec)

real 0m4.886s
user 0m0.470s
sys 0m4.381s

Validate the file

ls -alh tstfile 

-rw-r--r--  1 hayd  staff   1.0G 21 Nov 13:38 tstfile

Upload the file to the NAS

scp -P 8822 -c aes256-cbc tstfile admin@diskstation:~

tstfile                                                                                                                                                                                              100% 1000MB  22.9MB/s   00:43

which shows an upload speed of ~23 MB/s - which ain't too shabby.
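Of course, scp throughput is bounded by the cipher and disk I/O as well as the network itself, so, to measure the raw link speed, something like iperf3 is a better bet - a sketch, assuming iperf3 is installed at both ends ( e.g. via Homebrew on the Mac, and a community package on the Synology ): -

# On the Synology ( server side )
iperf3 -s

# On the Mac ( client side ), pointing at the NAS
iperf3 -c diskstation -t 10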

Friday 20 November 2020

macOS 11 Big Sur and Kernel Extensions - down the rabbit hole I go ....

I've been having a few discussions with colleagues as we get to grips with the new macOS 11 Big Sur release, especially with regard to the slow evolution away from Kernel Extensions ( aka KExts ).

One particular thread led me here: -

How to configure Kernel Extension settings for Mac

and, specifically this: -

sudo sqlite3 /var/db/SystemPolicyConfiguration/KextPolicy

Password:

SQLite version 3.32.3 2020-06-18 14:16:19
Enter ".help" for usage hints.
sqlite> SELECT * FROM kext_policy;
QED4VVPZWA|com.logitech.manager.kernel.driver|1|Logitech Inc.|5
6HB5Y2QTA3|com.hp.kext.io.enabler.compound|1|HP Inc.|0
Z2SG5H3HC8|net.tunnelblick.tun|1|Jonathan Bullard|5
Z2SG5H3HC8|net.tunnelblick.tap|1|Jonathan Bullard|5
sqlite> ^D

Why did I not know this before ?

There's a whole SQLite database infrastructure inside my Mac ? Wow, who knew ?
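For what it's worth, the same query can also be run non-interactively, which is handier for scripting - a quick sketch, using sqlite3's -header and -column output options: -

sudo sqlite3 -header -column /var/db/SystemPolicyConfiguration/KextPolicy 'SELECT * FROM kext_policy;'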

A colleague then pointed out that macOS also has kextstat which allows me to show which kernel extensions are loaded and, via this: -

kextstat | grep -v com.apple

Executing: /usr/bin/kmutil showloaded
No variant specified, falling back to release
Index Refs Address            Size       Wired      Name (Version) UUID <Linked Against>

the non-Apple extensions that are loaded or, in my case, NOT !

So, whilst the SQLite database has kexts from Logitech, HP and Tunnelblick listed, none appear to be loaded ...

Which is nice!

Friday 13 November 2020

Inspecting Kubernetes Worker nodes - a Work-in-Progress

I have a need to query a list of Kubernetes Worker Nodes, and ignore the Master Node.

This is definitely a W-I-P, but here's what I've got thus far ...

So we have a list of nodes: -

kubectl get nodes

NAME           STATUS   ROLES    AGE   VERSION
68bc83cf0d09   Ready    <none>   51d   v1.19.2
b23976de6423   Ready    master   51d   v1.19.2

of which I want the one that is NOT the Master.

So I do this: -

kubectl get nodes | awk 'NR>1' | grep -v master | awk '{print $1}'

which gives me this: -

68bc83cf0d09

so that I can do this: -

kubectl describe node 68bc83cf0d09 | grep -i internal

which gives me this: -

  InternalIP:  172.16.84.5

If I combine the two commands together: -

kubectl describe node `kubectl get nodes | awk 'NR>1' | grep -v master | awk '{print $1}'` | grep -i internal

I get what I need: -

  InternalIP:  172.16.84.5

Obviously, there are fifty-seven other ways to achieve the same, including using JSON and JQ: -

kubectl get node `kubectl get nodes | awk 'NR>1' | grep -v master | awk '{print $1}'` --output json | jq

so that I could then use JQ's select statement to find the internal IP .... but that's for another day.....
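For the curious, here's a rough, as-yet-untested sketch of where that might end up - using a label selector to exclude the master ( assuming the node carries the usual node-role.kubernetes.io/master label ), plus jq to pull out the InternalIP: -

kubectl get nodes --selector='!node-role.kubernetes.io/master' -o json \
  | jq -r '.items[].status.addresses[] | select(.type=="InternalIP") | .address'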

Yet more fun and goodness with Cloudant and couchimport

One of my friends was wondering why couchimport was apparently working BUT not actually working ....

Running a test such as: -

cat cartoon.csv | couchimport --url https://<<SECRET>>-bluemix.cloudantnosqldb.appdomain.cloud --database cartoon

couchimport
-----------
 url         : "https://<<SECRET>>-bluemix.cloudantnosqldb.appdomain.cloud"
 database    : "cartoon"
 delimiter   : "\t"
 buffer      : 500
 parallelism : 1
 type        : "text"
-----------
  couchimport {"documents":0,"failed":8,"total":0,"totalfailed":8,"statusCodes":{"401":1},"latency":475} +0ms
  couchimport Import complete +0ms

In other words, it does something but reports failed: 8 - with the 401 status code being the clue that the requests weren't being authenticated.

I had to dig back into my memory AND into the docs to work out what was going on....

Specifically this: -



So, it's a case of "If your name's not down, you're not coming in ..."

If Cloudant or CouchDB ( from whence Cloudant came ) was running elsewhere, we could specify user/password credentials but, given that it's running as SaaS on the IBM Cloud, we need a better way ....

Once I realised ( remembered ) that, we were golden....

In essence, the "key" ( do you see what I did there? ) thing is to set an environment variable with an IBM Cloud API key: -

export IAM_API_KEY="<<TOP SECRET>>"

Here's the end-to-end walkthrough : -

Create data to be imported

vi cartoon.csv

id,givenName,familyName
1,Maggie,Simpson
2,Lisa,Simpson
3,Bart,Simpson
4,Homer,Simpson
5,Fred,Flintstone
6,Wilma,Flintstone
7,Barney,Rubble
8,Betty,Rubble

Set environment variables

export COUCH_URL="https://<<TOP SECRET>>-bluemix.cloudantnosqldb.appdomain.cloud"
export IAM_API_KEY="<<TOP SECRET>>"
export COUCH_DATABASE="cartoon"
export COUCH_DELIMITER=","

Generate Access Token

- This is a script that generates an ACCESS_TOKEN variable for my IBM Cloud API key

source ~/genAccessToken.sh
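I've not included my script here but, for reference, a minimal equivalent ( assuming curl and jq are to hand, and using the documented IBM Cloud IAM token endpoint ) might look something like this: -

ACCESS_TOKEN=$(curl -s -X POST "https://iam.cloud.ibm.com/identity/token" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "grant_type=urn:ibm:params:oauth:grant-type:apikey&apikey=${IAM_API_KEY}" \
  | jq -r .access_token)
export ACCESS_TOKEN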

Create database

curl -s -k -X PUT -H 'Authorization: Bearer '"$ACCESS_TOKEN" $COUCH_URL/$COUCH_DATABASE | json_pp

{
   "ok" : true
}

Populate database

cat cartoon.csv | couchimport

couchimport
-----------
 url         : "https://<<SECRET>>-bluemix.cloudantnosqldb.appdomain.cloud"
 database    : "cartoon"
 delimiter   : ","
 buffer      : 500
 parallelism : 1
 type        : "text"
-----------
  couchimport {"documents":8,"failed":0,"total":8,"totalfailed":0,"statusCodes":{"201":1},"latency":381} +0ms
  couchimport Import complete +0ms

Create index

curl -s -k -X POST -H 'Authorization: Bearer '"$ACCESS_TOKEN" -H 'Content-type: application/json' $COUCH_URL/$COUCH_DATABASE/_index -d '{
   "index": {
      "fields": [
         "givenName"
      ]
   },
   "name": "givenName-json-index",
   "type": "json"
}'

Query database

curl -s -k -X POST -H 'Authorization: Bearer '"$ACCESS_TOKEN" -H 'Content-type: application/json' $COUCH_URL/$COUCH_DATABASE/_find -d '{
   "selector": {
      "$or": [
         {
            "givenName": "Maggie"
         },
         {
            "givenName": "Lisa"
         }
      ]
   },
   "fields": [
      "givenName",
      "familyName"
   ],
   "sort": [
      {
         "givenName": "asc"
      }
   ]
}'  | json_pp

{
   "bookmark" : "g2wAAAACaAJkAA5zdGFydGtleV9kb2NpZG0AAAAgNGI5YWZhMzZjNTBiNTg4ZTljMWFmMzUxZjQyNzViMGNoAmQACHN0YXJ0a2V5bAAAAAFtAAAABk1hZ2dpZWpq",
   "docs" : [
      {
         "givenName" : "Lisa",
         "familyName" : "Simpson"
      },
      {
         "givenName" : "Maggie",
         "familyName" : "Simpson"
      }
   ]
}

Can you say "Yay" ? I bet you can .....

Thursday 12 November 2020

Random weirdness with OpenSSL on Ubuntu 18.04.5

I hit an interesting problem today, whilst trying to create a self-signed certificate and private key: -

openssl req -subj '/C=GB/O=IBM/CN=david_hay.uk.ibm.com' -new -newkey rsa:4096 -x509 -sha256 -days 365 -nodes -out ~/nginx/nginx.crt -keyout ~/nginx/nginx.key

Can't load /root/.rnd into RNG
4396464178976:error:2406F079:random number generator:RAND_load_file:Cannot open file:../crypto/rand/randfile.c:88:Filename=/root/.rnd
Generating a RSA private key
........................++++
........................++++
writing new private key to '/root/nginx/nginx.key'
-----

on an Ubuntu box: -

lsb_release -a

No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.5 LTS
Release: 18.04
Codename: bionic

( actually it's an Ubuntu container running on an IBM Z box, via the Secure Service Container technology,  but that's not the point of the story here ! )

I'd not seen that before ... but I noticed that the missing file was .rnd in my user's home directory - /root.

Taking a punt, I tried creating that file: -

touch ~/.rnd

and re-ran the openssl command: -

openssl req -subj '/C=GB/O=IBM/CN=david_hay.uk.ibm.com' -new -newkey rsa:4096 -x509 -sha256 -days 365 -nodes -out ~/nginx/nginx.crt -keyout ~/nginx/nginx.key

Generating a RSA private key
....................................................................++++
..++++
writing new private key to '/root/nginx/nginx.key'
-----

I'd previously run the same command on a different Ubuntu container: -

lsb_release -a

No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04 LTS
Release: 18.04
Codename: bionic

without similar issues.

Both are running the same version of openssl, namely: -

openssl version

OpenSSL 1.1.1  11 Sep 2018

Using this as a source: -


I used openssl to generate the .rnd file: -

openssl rand -out /root/.rnd -hex 256

and validated that I could still generate the key pair: -

openssl req -subj '/C=GB/O=IBM/CN=david_hay.uk.ibm.com' -new -newkey rsa:4096 -x509 -sha256 -days 365 -nodes -out ~/nginx/nginx.crt -keyout ~/nginx/nginx.key

Generating a RSA private key
.....................................................................++++
..................++++
writing new private key to '/root/nginx/nginx.key'
-----

Weird !
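If I had to guess, the difference is down to the RANDFILE entry that some versions of Ubuntu's default openssl.cnf carry, which points at ~/.rnd - a quick ( and, as yet, unverified ) way to check on each box: -

openssl version -d    # shows OPENSSLDIR - typically /usr/lib/ssl on Ubuntu
grep -i randfile /usr/lib/ssl/openssl.cnf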

Wednesday 11 November 2020

More with Cloudant - Serverless web application and API

This popped into my inbox t'other day: -

In this tutorial, you will create a serverless web application using a bucket in Object Storage and implementing the application backend using IBM Cloud™ Functions.

As an event-driven platform, Cloud Functions supports a variety of use cases. Building web applications and APIs is one of them. With web apps, events are the interactions between the web browsers (or REST clients) and your web app, the HTTP requests. Instead of provisioning a virtual machine, a container or a Cloud Foundry runtime to deploy your backend, you can implement your backend API with a serverless platform. This can be a good solution to avoid paying for idle time and to let the platform scale as needed.

Any action (or function) in Cloud Functions can be turned into a HTTP endpoint ready to be consumed by web clients. When enabled for web, these actions are called web actions. Once you have web actions, you can assemble them into a full-featured API with API Gateway. API Gateway is a component of Cloud Functions to expose APIs. It comes with security, OAuth support, rate limiting, custom domain support.

Serverless web application and API

macOS Software Versioning and Updating from the command-line

Over and above the usual way to determine the Mac's software levels : -

Apple menu > About This Mac


and Software Update : -


we also have the command line: -

sw_vers 

ProductName: Mac OS X
ProductVersion: 10.15.7
BuildVersion: 19H15

and: -

softwareupdate -l

Software Update Tool
Finding available software
No new software available.
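As an aside, and staying on the Big Sur theme, softwareupdate can also ( on Catalina and later, if memory serves ) pull down a full macOS installer from the command line, e.g.: -

softwareupdate --fetch-full-installer --full-installer-version 11.0.1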

Getting ready for Big Sur, in case you couldn't tell 😂

Tuesday 10 November 2020

Want to know a little more about Red Hat OpenShift Container Platform ?

Red Hat have created a nice little set of self-paced tutorials / exercises for OpenShift Container Platform (OCP), providing GUI and CLI steps to run against a ( very quickly provisioned ) OCP cluster.

I ran through a few of these yesterday afternoon, and can definitely recommend them to anyone looking to learn a little more about OCP.

More details here: -

OpenShift Playground

with an even more complete set of courses here: -

Red Hat Developer - Interactive Courses


Friday 6 November 2020

Slash broke my Cloudant

Well, the headline somewhat overhypes things BUT following on from my earlier post: -

Cloudy databases - Back with Cloudant on IBM Cloud

I noted that one can get the URL of a Cloudant instance from the Manage -> Overview page: -


and copy the External Endpoint (preferred) URL.

When this URL is copied to the clipboard, it includes a trailing slash ( / ), which is totally fine.

Right ?

Right ?

Well, not necessarily.

I then go on to describe how that URL can be added to a $URL environment variable: -

export URL="https://d66573ef-c567-4efc-6e59-c8ab7eb33de6-bluemix.cloudantnosqldb.appdomain.cloud/"

and then use that for various cURL commands such as: -

curl -s -k -X GET -H 'Authorization: Bearer '"$ACCESS_TOKEN" $URL/music | json_pp

What I'd missed was.... it just does not work

That command fails with: -

{
   "reason" : "Database does not exist.",
   "error" : "not_found"
}

What was worse - I couldn't even create new databases: -

curl -s -k -X PUT -H 'Authorization: Bearer '"$ACCESS_TOKEN" $URL/chocolate | json_pp

{
   "reason" : "Database does not exist.",
   "error" : "not_found"
}

curl -v -k -X PUT -H 'Authorization: Bearer '"$ACCESS_TOKEN" $URL/cookies | json_pp

{
   "reason" : "Database does not exist.",
   "error" : "not_found"
}

which made absolutely no sense whatsoever.

It took me a while to work out what was going wrong ....

As mentioned, when I copied the URL from the Cloudant Overview page, the trailing slash was copied along with it.

This meant that the $URL variable included the slash ...

And then, when I specified the database name appended to the URL, I had: -

$URL/music

or: -

$URL/cookies

which, when the variable is expanded by the shell, turns into: -

https://d66573ef-c567-4efc-6e59-c8ab7eb33de6-bluemix.cloudantnosqldb.appdomain.cloud//music

or: -

https://d66573ef-c567-4efc-6e59-c8ab7eb33de6-bluemix.cloudantnosqldb.appdomain.cloud//cookies

In other words, we're trying to access / create a database called /music or /cookies.
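As an aside, rather than editing the URL by hand, the shell's parameter expansion can strip a trailing slash - a quick sketch: -

export URL="${URL%/}"

echo $URL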

Once I realised what was going wrong, I amended the $URL variable: -

export URL="https://d66573ef-c567-4efc-6e59-c8ab7eb33de6-bluemix.cloudantnosqldb.appdomain.cloud"

and all was well: -

curl -s -k -X PUT -H 'Authorization: Bearer '"$ACCESS_TOKEN" $URL/chocolate | json_pp

{
   "ok" : true
}

curl -s -k -X GET -H 'Authorization: Bearer '"$ACCESS_TOKEN" $URL/chocolate | json_pp

{
   "props" : {},
   "compact_running" : false,
   "update_seq" : "0-g1AAAAP3eJzLYWBgEMhgTmFQT0lKzi9KdUhJMjTQS8rVTU7WTS3VTUnVNTTWS87JL01JzCvRy0styQFqYMpjAZIMH4DUfyDISmQAmaAGN8GISAMeQAx4j2GAKZEGXIAYcB_DAAsiDTgAMeA81AAyAmEDxIT9ZAfCAogB68kOhAkQA-aTHQgNEAP6yQmEpAIgmVRPXipISgBpzsemmbDnkwJAmuOxaSbs8SQHkGZ_hGbSPG0A0m1PpqcVQJr1yfS0AEizPHmeTmRI4ofozAIACC1D2Q",
   "doc_del_count" : 0,
   "purge_seq" : 0,
   "instance_start_time" : "0",
   "doc_count" : 0,
   "sizes" : {
      "active" : 0,
      "external" : 0,
      "file" : 133940
   },
   "cluster" : {
      "w" : 2,
      "r" : 2,
      "n" : 3,
      "q" : 16
   },
   "disk_format_version" : 8,
   "db_name" : "chocolate"
}

Nice!

Thursday 5 November 2020

More on Cloudant - things to read

Custom Indexers for Cloudant

Glynn Bird - CouchImport

IBM Cloud Docs - Cloudant - Creating an IBM Cloudant Query

IBM Cloud Docs - Cloudant - Authentication

Export & Import a Database with CouchDB

How to import JSON Document into a Cloudant NoSQL DB

Glynn Bird - Medium - Blogs for everything including Cloudant


Cloudy databases - Back with Cloudant on IBM Cloud

A few years back, I was using Cloudant to store JSON data for a POC upon which my team and I were working, and wrote a few posts: -

IBM Integration Bus and Cloudant - Baby steps ...

Cloudant - Continuing to tinker

Doofus Alert - Using Cloudant queries via cURL

Fast forward to now ... I'm back in the game with Cloudant, and had to remind myself of some of the core concepts ...

So I created a Cloudant instance in my IBM Cloud account: -



Once my instance was created, I navigated to the Manage tab: -


( note that I've deliberately obscured the CRN and Endpoint details, for security )

However, I grabbed the External Endpoint (preferred) because that URL will be of use in a tick ...

Also note that Authentication methods defaults to IBM Cloud IAM which is important, in that it allows one to authenticate to the database using an API key / Access Token ...

Using the External Endpoint as an example, I've created a randomised sample here: -

https://d66573ef-c567-4efc-6e59-c8ab7eb33de6-bluemix.cloudantnosqldb.appdomain.cloud/

because ... SECURITY 😂

With that, and an Access Token, which I generated via my IBM Cloud API key, I'm able to access my Cloudant instance from the command-line: -

Generate Access Token

- This is a script that I use, via an alias, to parse the API key and generate an $ACCESS_TOKEN variable

source ~/genAccessToken.sh

Set the $URL variable

- Note: when copying/pasting the URL from the Manage -> Overview page, a trailing slash ( / ) character will be copied. This must NOT be kept on the end of the URL, or things go BOOM! ( I'll post about this shortly )

export URL="https://d66573ef-c567-4efc-6e59-c8ab7eb33de6-bluemix.cloudantnosqldb.appdomain.cloud"

Query the Cloudant instance

- Note that we don't need to explicitly authenticate to do this i.e. we don't present $ACCESS_TOKEN

curl -s -k -X GET $URL | json_pp

{
   "features" : [
      "geo",
      "access-ready",
      "iam",
      "partitioned",
      "pluggable-storage-engines",
      "scheduler"
   ],
   "version" : "2.1.1",
   "vendor" : {
      "version" : "8162",
      "name" : "IBM Cloudant",
      "variant" : "paas"
   },
   "features_flags" : [
      "partitioned"
   ],
   "couchdb" : "Welcome"
}

Create a database

- called movies

curl -s -k -X PUT $URL/movies | json_pp

{
   "error" : "unauthorized",
   "reason" : "one of _admin, server_admin is required for this request"
}

Oops, forgot to add the $ACCESS_TOKEN environment variable to the HTTP header

curl -s -k -X PUT -H 'Authorization: Bearer '"$ACCESS_TOKEN" $URL/movies | json_pp

{
   "ok" : true
}

Query available databases

curl -s -k -X GET -H 'Authorization: Bearer '"$ACCESS_TOKEN" $URL/_all_dbs | json_pp

[
   "movies"
]

Get the details on the movies database

curl -s -k -X GET -H 'Authorization: Bearer '"$ACCESS_TOKEN" $URL/movies | json_pp

{
   "instance_start_time" : "0",
   "doc_count" : 0,
   "cluster" : {
      "q" : 16,
      "r" : 2,
      "w" : 2,
      "n" : 3
   },
   "compact_running" : false,
   "props" : {},
   "db_name" : "movies",
   "sizes" : {
      "external" : 0,
      "file" : 133940,
      "active" : 0
   },
   "purge_seq" : 0,
   "update_seq" : "0-g1AAAAP3eJzLYWBgEMhgTmFQT0lKzi9KdUhJMjTQS8rVTU7WTS3VTUnVNTTWS87JL01JzCvRy0styQFqYMpjAZIMH4DUfyDISmQAmaAGN8GISAMeQAx4j2GAKZEGXIAYcB_DAAsiDTgAMeA81AAyAmEDxIT9ZAfCAogB68kOhAkQA-aTHQgNEAP6yQmEpAIgmVRPXipISgBpzsemmbDnkwJAmuOxaSbs8SQHkGZ_hGbSPG0A0m1PpqcVQJr1yfS0AEizPHmeTmRI4ofozAIACC1D2Q",
   "disk_format_version" : 8,
   "doc_del_count" : 0
}

Create a JSON payload to populate the movies database

vi movies.json

{
  "docs":[
    {
      "_id":"1",
      "name":"War Games",
      "format":"DVD"
    },
    {
      "_id":"2",
      "name":"Top Gun",
      "format":"BluRay"
    },
    {
      "_id":"3",
      "name":"Rogue One",
      "format":"MP4"
    },
    {
      "_id":"4",
      "name":"Airplane",
      "format":"MP4"
    },
    {
      "_id":"5",
      "name":"Avengers",
      "format":"BluRay"
    },
    {
      "_id":"6",
      "name":"Mission Impossible",
      "format":"MP4"
    }
  ]
}

Populate the movies database

curl -s -k -X POST -H 'Authorization: Bearer '"$ACCESS_TOKEN" -H "Content-type: application/json" -d @movies.json $URL/movies/_bulk_docs | json_pp

[
   {
      "rev" : "1-48aa26577c7e2a4380df147095a1f592",
      "ok" : true,
      "id" : "1"
   },
   {
      "rev" : "1-cd93fc31e45e9379eb2137601ffaef38",
      "ok" : true,
      "id" : "2"
   },
   {
      "id" : "3",
      "rev" : "1-7bd2eb120d332170527a584897702e60",
      "ok" : true
   },
   {
      "ok" : true,
      "rev" : "1-5e930ff076df50b8cb9b09d354dc184b",
      "id" : "4"
   },
   {
      "rev" : "1-007906c513fc0c6d8c13c99e3ca4265f",
      "ok" : true,
      "id" : "5"
   },
   {
      "ok" : true,
      "rev" : "1-e3af283ebc1e44960cbc90690dde197c",
      "id" : "6"
   }
]

Query the movies database

curl -s -k -X GET -H 'Authorization: Bearer '"$ACCESS_TOKEN" $URL/movies/_all_docs?include_docs=true | json_pp

{
   "rows" : [
      {
         "key" : "1",
         "value" : {
            "rev" : "1-48aa26577c7e2a4380df147095a1f592"
         },
         "id" : "1",
         "doc" : {
            "format" : "DVD",
            "_rev" : "1-48aa26577c7e2a4380df147095a1f592",
            "_id" : "1",
            "name" : "War Games"
         }
      },
      {
         "doc" : {
            "name" : "Top Gun",
            "format" : "BluRay",
            "_id" : "2",
            "_rev" : "1-cd93fc31e45e9379eb2137601ffaef38"
         },
         "id" : "2",
         "value" : {
            "rev" : "1-cd93fc31e45e9379eb2137601ffaef38"
         },
         "key" : "2"
      },
      {
         "doc" : {
            "_rev" : "1-7bd2eb120d332170527a584897702e60",
            "_id" : "3",
            "format" : "MP4",
            "name" : "Rogue One"
         },
         "key" : "3",
         "value" : {
            "rev" : "1-7bd2eb120d332170527a584897702e60"
         },
         "id" : "3"
      },
      {
         "id" : "4",
         "value" : {
            "rev" : "1-5e930ff076df50b8cb9b09d354dc184b"
         },
         "key" : "4",
         "doc" : {
            "name" : "Airplane",
            "_id" : "4",
            "_rev" : "1-5e930ff076df50b8cb9b09d354dc184b",
            "format" : "MP4"
         }
      },
      {
         "key" : "5",
         "value" : {
            "rev" : "1-007906c513fc0c6d8c13c99e3ca4265f"
         },
         "id" : "5",
         "doc" : {
            "_id" : "5",
            "_rev" : "1-007906c513fc0c6d8c13c99e3ca4265f",
            "format" : "BluRay",
            "name" : "Avengers"
         }
      },
      {
         "doc" : {
            "_rev" : "1-e3af283ebc1e44960cbc90690dde197c",
            "_id" : "6",
            "format" : "MP4",
            "name" : "Mission Impossible"
         },
         "key" : "6",
         "value" : {
            "rev" : "1-e3af283ebc1e44960cbc90690dde197c"
         },
         "id" : "6"
      }
   ],
   "offset" : 0,
   "total_rows" : 6
}

Query individual records

curl -s -k -X GET -H 'Authorization: Bearer '"$ACCESS_TOKEN" $URL/movies/1 | json_pp

{
   "name" : "War Games",
   "_rev" : "1-48aa26577c7e2a4380df147095a1f592",
   "_id" : "1",
   "format" : "DVD"
}

curl -s -k -X GET -H 'Authorization: Bearer '"$ACCESS_TOKEN" $URL/movies/2 | json_pp

{
   "name" : "Top Gun",
   "format" : "BluRay",
   "_id" : "2",
   "_rev" : "1-cd93fc31e45e9379eb2137601ffaef38"
}

Query the database using search criteria

curl -s -k -X POST -H 'Authorization: Bearer '"$ACCESS_TOKEN" -H "Content-type: application/json" $URL/movies/_find -d '{
   "selector": {
     "name":"Airplane"
   }
 }' | json_pp

{
   "docs" : [
      {
         "name" : "Airplane",
         "format" : "MP4",
         "_id" : "4",
         "_rev" : "1-5e930ff076df50b8cb9b09d354dc184b"
      }
   ],
   "warning" : "No matching index found, create an index to optimize query time.",
   "bookmark" : "g1AAAAAyeJzLYWBgYMpgSmHgKy5JLCrJTq2MT8lPzkzJBYozmoAkOGASEKEsAEr3DR8"
}

Create a JSON index on the name field

- This will allow us to mitigate the warning shown previously

curl -s -k -X POST -H 'Authorization: Bearer '"$ACCESS_TOKEN" -H "Content-type: application/json" $URL/movies/_index -d '{
   "index": {
      "fields": [
         "name"
      ]
   },
   "name": "name-json-index",
   "type": "json"
}
' | json_pp

{
   "id" : "_design/700ae9eb4c3ddffe7f46e8b3140ee324aed53c0c",
   "result" : "created",
   "name" : "name-json-index"
}

Create a JSON index on the format field

curl -s -k -X POST -H 'Authorization: Bearer '"$ACCESS_TOKEN" -H "Content-type: application/json" $URL/movies/_index -d '{
   "index": {
      "fields": [
         "format"
      ]
   },
   "name": "format-json-index",
   "type": "json"
}
' | json_pp

{
   "name" : "format-json-index",
   "result" : "created",
   "id" : "_design/22dce09672c8b999fabd11c2cb7caffc9ee3ee37"
}

Query the database using the name field

curl -s -k -X POST -H 'Authorization: Bearer '"$ACCESS_TOKEN" -H "Content-type: application/json" $URL/movies/_find -d '{
   "selector": {
     "name":"Airplane"
   }
 }' | json_pp

{
   "bookmark" : "g1AAAAA_eJzLYWBgYMpgSmHgKy5JLCrJTq2MT8lPzkzJBYozmoAkOGASOSAhkDiHY2ZRQU5iXmpWFgAI1RD1",
   "docs" : [
      {
         "name" : "Airplane",
         "_id" : "4",
         "_rev" : "1-5e930ff076df50b8cb9b09d354dc184b",
         "format" : "MP4"
      }
   ]
}

Query the database using the format field

curl -s -k -X POST -H 'Authorization: Bearer '"$ACCESS_TOKEN" -H "Content-type: application/json" $URL/movies/_find -d '{
   "selector": {
     "format":"MP4"
   }
 }' | json_pp

{
   "bookmark" : "g1AAAAA6eJzLYWBgYMpgSmHgKy5JLCrJTq2MT8lPzkzJBYozmoEkOGASOSAhkDizb4BJVhYAt0cOlw",
   "docs" : [
      {
         "_rev" : "1-7bd2eb120d332170527a584897702e60",
         "format" : "MP4",
         "name" : "Rogue One",
         "_id" : "3"
      },
      {
         "_rev" : "1-5e930ff076df50b8cb9b09d354dc184b",
         "format" : "MP4",
         "name" : "Airplane",
         "_id" : "4"
      },
      {
         "name" : "Mission Impossible",
         "format" : "MP4",
         "_rev" : "1-e3af283ebc1e44960cbc90690dde197c",
         "_id" : "6"
      }
   ]
}
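As a further check, Cloudant ( like CouchDB 2.x ) offers an _explain endpoint, which takes the same body as _find and reports which index the query planner would use - a sketch, reusing the selector from above: -

curl -s -k -X POST -H 'Authorization: Bearer '"$ACCESS_TOKEN" -H "Content-type: application/json" $URL/movies/_explain -d '{
   "selector": {
     "format":"MP4"
   }
 }' | json_pp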

Just to close the loop, this is what I now see in the Cloudant Dashboard: -


and: -


It's been great to get back into Cloudant ... so much more to learn 🤣🤣🤣🤣🤣

Tuesday 3 November 2020

UrbanCode Deploy - Using the query symbol to reference properties

I was looking at an existing UrbanCode Deploy (UCD) element, known as a Component, and noticed that it made reference to an existing Property, but also included a query symbol ( ? ) within the configuration.

So, as an example, rather than referencing an existing Property via the conventional ( for UCD ) syntax of: -

${p:userName}

it instead referenced it using a query: -

${p?:userName}

which confused me initially.

Given that I am using UCD 6.2.7, I checked the documentation: -






Use the ? syntax if you are not sure about a property name:

${p?:propertyName}

This syntax returns a blank if the property is not found, and avoids the undefined property error.

Which is useful to know ....

Monday 2 November 2020

IBM UrbanCode Deploy - it's been a while ...

For my day job, I'm using IBM (HCL) UrbanCode Deploy (UCD) to manage the lifecycle of some containers on an IBM Z box.

It's been a while since last I tinkered with UCD, as per my previous posts ....

So I wanted to quickly prototype what I was intending to build, without risking breaking our shared UCD platform.

And, of course, I work with Docker and Kubernetes on a daily basis ...

So ...  we have UCD on Docker ...

IBM UrbanCode Deploy Server trial

UrbanCode Deploy Agent

UrbanCode Deploy Relay Image

Having pulled the UCD Server image to one of my Virtual Servers: -

docker pull ibmcom/ucds

I followed the instructions to run a container: -

docker run -d -p 8443:8443 -p 7918:7918 -p 8080:8080 -t ibmcom/ucds:latest

and then ran docker ps -a to check the running container: -

CONTAINER ID        IMAGE                COMMAND                  CREATED             STATUS              PORTS                                                                    NAMES

e2d9fa725da7        ibmcom/ucds:latest   "/tmp/entrypoint-ibm…"   3 seconds ago       Up 3 seconds        0.0.0.0:7918->7918/tcp, 0.0.0.0:8080->8080/tcp, 0.0.0.0:8443->8443/tcp   crazy_euclid

However, when I checked the container logs: -

docker logs crazy_euclid

Installing IBM UrbanCode Deploy...
Started building ucds container image at Mon Nov  2 16:15:44 UTC 2020.
unzip -q /tmp/ibm-ucd-6.2.7.1.960481.zip -d /tmp
[Database] Configuring Derby
Completed preparing ucds install files at Mon Nov  2 16:15:53 UTC 2020.
Enter the directory of the server to upgrade(leave blank for installing to a clean directory).

which is a bit of a pain ....

Given that the container is running as a "daemon", how can one complete the obviously interactive installation ?

Thankfully, there's a GitHub repo: -


with an issue ( albeit one raised 2.5 years ago !! ) : -


which says, in part: -

Same problem.
I created the container with --interactive.
You can run it attached or attach to it later on and just give the required input.

Works for me that way :)

Therefore, I stopped: -

docker stop crazy_euclid

and removed: -

docker rm crazy_euclid

the previous container, and ran it in interactive mode: -

docker run -it -p 8443:8443 -p 7918:7918 -p 8080:8080 -t ibmcom/ucds:latest

Installing IBM UrbanCode Deploy...
Started building ucds container image at Mon Nov  2 16:20:18 UTC 2020.
unzip -q /tmp/ibm-ucd-6.2.7.1.960481.zip -d /tmp
[Database] Configuring Derby
Completed preparing ucds install files at Mon Nov  2 16:20:27 UTC 2020.
Enter the directory of the server to upgrade(leave blank for installing to a clean directory).

Enter the home directory for the JRE/JDK that the new server or already installed server uses. Default [/opt/ibm/java/jre]:

Buildfile: install.with.groovy.xml
    [mkdir] Created dir: /tmp/ibm-ucd-install/compiled

version-check:

compile:
  [groovyc] Compiling 4 source files to /tmp/ibm-ucd-install/compiled
     [copy] Copying 20 files to /tmp/ibm-ucd-install/compiled

install:
    [unzip] Expanding: /tmp/ibm-ucd-install/conf.zip into /tmp/install-1035477236729911138.tmp
     [echo] Found sub-installer UCDeployInstaller.groovy
...

and followed the prompts ...

Once I'd done this, I used [ctrl] [c] to terminate the running container, validated that the container had stopped: -

docker ps -a

CONTAINER ID        IMAGE                COMMAND                  CREATED             STATUS                       PORTS               NAMES
f11f5d93b304        ibmcom/ucds:latest   "/tmp/entrypoint-ibm…"   56 minutes ago      Exited (130) 2 minutes ago                       stoic_hopper

and then started the container, this time as a daemon: -

docker start stoic_hopper

and monitored the logs: -

docker logs stoic_hopper -f

...
2020-11-02 17:17:48,849 UTC INFO  main com.urbancode.ds.UDeployServer - MultiServer? : false
2020-11-02 17:17:48,857 UTC INFO  main com.urbancode.ds.UDeployServer - External URL: https://ucd-server:8443
2020-11-02 17:17:48,858 UTC INFO  main com.urbancode.ds.UDeployServer - External User URL: https://ucd-server:8443
2020-11-02 17:17:48,865 UTC INFO  main com.urbancode.ds.UDeployServer - Server Broker ID: server-4Gr1NyFx1VO84tSBD8At
2020-11-02 17:17:48,866 UTC INFO  main com.urbancode.ds.UDeployServer - Server Unique ID: 2c1c1469-7291-49c6-9b06-9b06a5c72e44
2020-11-02 17:17:48,867 UTC INFO  main com.urbancode.ds.UDeployServer - JDBC URL: jdbc:derby://localhost:11377/data
2020-11-02 17:17:48,946 UTC INFO  main com.urbancode.ds.UDeployServer - Database User name: ibm_ucd
2020-11-02 17:17:48,948 UTC INFO  main com.urbancode.ds.UDeployServer - DBMS name: Apache Derby
2020-11-02 17:17:48,950 UTC INFO  main com.urbancode.ds.UDeployServer - DBMS version: 10.8.3.1 - (1476465)
2020-11-02 17:17:48,954 UTC INFO  main com.urbancode.ds.UDeployServer - Database Driver name: Apache Derby Network Client JDBC Driver
2020-11-02 17:17:48,956 UTC INFO  main com.urbancode.ds.UDeployServer - Database Driver version: 10.8.3.1 - (1476465)
2020-11-02 17:17:48,958 UTC INFO  main com.urbancode.ds.UDeployServer - === End Diagnostic Info ===
2020-11-02 17:17:48,960 UTC INFO  main com.urbancode.ds.UDeployServer - IBM UrbanCode Deploy server version 6.2.7.1.960481 started.

and now I'm good to go ....
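Worth noting: if the container needs to survive a reboot of the Virtual Server, a restart policy can be added to the existing container after the fact - a sketch: -

docker update --restart unless-stopped stoic_hopper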

Visual Studio Code - Wow 🙀

Why did I not know that I can merely hit [cmd] [p] to bring up a search box, allowing me to quickly open any file in my project e.g. a repo cloned from GitHub ...