Friday, 20 October 2017

More on Elasticsearch, Logstash and Kibana (ELK)

Following earlier posts: -




I've had a brief play with a new ( to me ) Docker image, ELK: -


Collect, search and visualise log data with Elasticsearch, Logstash, and Kibana.

using this documentation: -


This time around, I built it using Docker Compose ( on my Mac ): -

Create a Docker Compose YAML

vi docker-compose.yml 

elk:
  image: sebp/elk
  ports:
    - "5601:5601"
    - "9200:9200"
    - "5044:5044"


Spin up the Container

docker-compose up elk

Creating elk_elk_1 ... 
Creating elk_elk_1 ... done
Attaching to elk_elk_1
elk_1  |  * Starting periodic command scheduler cron
elk_1  |    ...done.
elk_1  |  * Starting Elasticsearch Server
elk_1  |    ...done.
elk_1  | waiting for Elasticsearch to be up (1/30)
elk_1  | waiting for Elasticsearch to be up (2/30)
elk_1  | waiting for Elasticsearch to be up (3/30)
elk_1  | waiting for Elasticsearch to be up (4/30)
elk_1  | waiting for Elasticsearch to be up (5/30)
elk_1  | waiting for Elasticsearch to be up (6/30)
elk_1  | waiting for Elasticsearch to be up (7/30)
elk_1  | Waiting for Elasticsearch cluster to respond (1/30)
elk_1  | logstash started.
elk_1  |  * Starting Kibana5
elk_1  |    ...done.
elk_1  | ==> /var/log/elasticsearch/elasticsearch.log <==
elk_1  | [2017-10-20T09:58:07,375][INFO ][o.e.p.PluginsService     ] [Q6xLn7b] no plugins loaded
elk_1  | [2017-10-20T09:58:09,062][INFO ][o.e.d.DiscoveryModule    ] [Q6xLn7b] using discovery type [zen]
elk_1  | [2017-10-20T09:58:09,753][INFO ][o.e.n.Node               ] initialized
elk_1  | [2017-10-20T09:58:09,753][INFO ][o.e.n.Node               ] [Q6xLn7b] starting ...
elk_1  | [2017-10-20T09:58:09,960][INFO ][o.e.t.TransportService   ] [Q6xLn7b] publish_address {172.17.0.2:9300}, bound_addresses {0.0.0.0:9300}
elk_1  | [2017-10-20T09:58:09,974][INFO ][o.e.b.BootstrapChecks    ] [Q6xLn7b] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
elk_1  | [2017-10-20T09:58:13,044][INFO ][o.e.c.s.ClusterService   ] [Q6xLn7b] new_master {Q6xLn7b}{Q6xLn7bNR66inZlv5JcUaQ}{HPqd_E_QSJ2eHModlSUT6A}{172.17.0.2}{172.17.0.2:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
elk_1  | [2017-10-20T09:58:13,080][INFO ][o.e.h.n.Netty4HttpServerTransport] [Q6xLn7b] publish_address {172.17.0.2:9200}, bound_addresses {0.0.0.0:9200}
elk_1  | [2017-10-20T09:58:13,080][INFO ][o.e.n.Node               ] [Q6xLn7b] started
elk_1  | [2017-10-20T09:58:13,143][INFO ][o.e.g.GatewayService     ] [Q6xLn7b] recovered [0] indices into cluster_state
elk_1  | 
elk_1  | ==> /var/log/logstash/logstash-plain.log <==
elk_1  | 
elk_1  | ==> /var/log/kibana/kibana5.log <==
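
As an aside, the image's documentation notes that Elasticsearch 5 needs the kernel's vm.max_map_count setting raised to at least 262144; on a Linux host that would be: -

sudo sysctl -w vm.max_map_count=262144

( on Docker for Mac the setting lives inside Docker's own VM - the container came up fine here without my touching it )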


See what's running

docker ps -a

CONTAINER ID        IMAGE                      COMMAND                  CREATED             STATUS                      PORTS                                                                              NAMES
be3d5ee65642        sebp/elk                   "/usr/local/bin/st..."   2 minutes ago       Up 2 minutes                0.0.0.0:5044->5044/tcp, 0.0.0.0:5601->5601/tcp, 0.0.0.0:9200->9200/tcp, 9300/tcp   elk_elk_1
4f54bc00b67d        websphere-liberty:wlp101   "/opt/ibm/docker/d..."   8 days ago          Exited (143) 45 hours ago                                                                                      dazzling_mestorf


Start a shell on the container

docker exec -it be3d5ee65642 /bin/bash

Pull the logs to the foreground

Note the subtle use of single quotes ( ' ) around the inline pipeline, and of --path.data, which gives this second Logstash instance its own data directory so it doesn't clash with the one already running in the container

/opt/logstash/bin/logstash --path.data /tmp/logstash/data -e 'input { stdin { } } output { elasticsearch { hosts => ["localhost"] } }'

Sending Logstash's logs to /opt/logstash/logs which is now configured via log4j2.properties
[2017-10-20T10:24:30,729][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/opt/logstash/modules/fb_apache/configuration"}
[2017-10-20T10:24:30,741][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/opt/logstash/modules/netflow/configuration"}
[2017-10-20T10:24:31,388][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2017-10-20T10:24:31,390][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2017-10-20T10:24:31,516][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2017-10-20T10:24:31,607][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2017-10-20T10:24:31,614][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2017-10-20T10:24:31,620][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost"]}
[2017-10-20T10:24:31,623][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
[2017-10-20T10:24:31,792][INFO ][logstash.pipeline        ] Pipeline main started
The stdin plugin is now waiting for input:
[2017-10-20T10:24:31,976][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9601}


Send a test message

The Quick Brown Fox Jumped Over The Lazy Dog!

Check the log
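
Querying Elasticsearch from the host - something like this, given the 9200 port mapping - shows the new document, along with my earlier test entries: -

curl http://localhost:9200/_search?pretty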


{
  "took" : 2,
  "timed_out" : false,
  "_shards" : {
    "total" : 6,
    "successful" : 6,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : 8,
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : ".kibana",
        "_type" : "config",
        "_id" : "5.6.3",
        "_score" : 1.0,
        "_source" : {
          "buildNum" : 15554
        }
      },
      {
        "_index" : "logstash-2017.10.20",
        "_type" : "logs",
        "_id" : "AV85PK0Ji95TIyQOdvFj",
        "_score" : 1.0,
        "_source" : {
          "@version" : "1",
          "host" : "be3d5ee65642",
          "@timestamp" : "2017-10-20T10:03:18.652Z",
          "message" : "this is a dummy entry"
        }
      },
      {
        "_index" : "logstash-2017.10.20",
        "_type" : "logs",
        "_id" : "AV85PT5gi95TIyQOdvFm",
        "_score" : 1.0,
        "_source" : {
          "@version" : "1",
          "host" : "be3d5ee65642",
          "@timestamp" : "2017-10-20T10:03:55.857Z",
          "message" : "I love it !"
        }
      },
      {
        "_index" : "logstash-2017.10.20",
        "_type" : "logs",
        "_id" : "AV85UBqvi95TIyQOdvFp",
        "_score" : 1.0,
        "_source" : {
          "@version" : "1",
          "host" : "be3d5ee65642",
          "@timestamp" : "2017-10-20T10:24:31.867Z",
          "message" : "Hello Fluffy"
        }
      },
      {
        "_index" : "logstash-2017.10.20",
        "_type" : "logs",
        "_id" : "AV85UWpEi95TIyQOdvFr",
        "_score" : 1.0,
        "_source" : {
          "@version" : "1",
          "host" : "be3d5ee65642",
          "@timestamp" : "2017-10-20T10:25:57.808Z",
          "message" : "The Quick Brown Fox Jumped Over The Lazy Dog!"
        }
      },
      {
        "_index" : "logstash-2017.10.20",
        "_type" : "logs",
        "_id" : "AV85PKzri95TIyQOdvFi",
        "_score" : 1.0,
        "_source" : {
          "@version" : "1",
          "host" : "be3d5ee65642",
          "@timestamp" : "2017-10-20T10:03:17.729Z",
          "message" : "this is a dummy entry"
        }
      },
      {
        "_index" : "logstash-2017.10.20",
        "_type" : "logs",
        "_id" : "AV85PK9Si95TIyQOdvFk",
        "_score" : 1.0,
        "_source" : {
          "@version" : "1",
          "host" : "be3d5ee65642",
          "@timestamp" : "2017-10-20T10:03:19.238Z",
          "message" : "this is a dummy entry"
        }
      },
      {
        "_index" : "logstash-2017.10.20",
        "_type" : "logs",
        "_id" : "AV85UCuXi95TIyQOdvFq",
        "_score" : 1.0,
        "_source" : {
          "@version" : "1",
          "host" : "be3d5ee65642",
          "@timestamp" : "2017-10-20T10:24:36.234Z",
          "message" : "Hello Fluffy"
        }
      }
    ]
  }
}

So we have Kibana running ( on http://localhost:5601 ): -


and Elasticsearch ( on http://localhost:9200 ): -


Next job is to wire my BPM Event Emitter up to this - but that's the easy part :-)

*UPDATE*

And, as expected, it just worked. I completed one of my running BPM processes, and immediately saw messages in Elasticsearch, including: -

elk_1  | [2017-10-20T13:14:49,377][INFO ][o.e.c.m.MetaDataCreateIndexService] [Q6xLn7b] [bpm-events] creating index, cause [api], templates [], shards [5]/[1], mappings []
elk_1  | [2017-10-20T13:14:50,641][INFO ][o.e.c.m.MetaDataMappingService] [Q6xLn7b] [bpm-events/cWG124C1SOqS4UR_6QQboA] create_mapping [ProcessEvent]
elk_1  | [2017-10-20T13:14:50,738][INFO ][o.e.c.m.MetaDataMappingService] [Q6xLn7b] [bpm-events/cWG124C1SOqS4UR_6QQboA] create_mapping [ActivityEvent]
elk_1  | [2017-10-20T13:14:50,828][INFO ][o.e.c.m.MetaDataCreateIndexService] [Q6xLn7b] [restore_task_index] creating index, cause [auto(bulk api)], templates [], shards [5]/[1], mappings []
elk_1  | [2017-10-20T13:14:52,022][INFO ][o.e.c.m.MetaDataMappingService] [Q6xLn7b] [restore_task_index/HMBr8hw4RAmDJrNzZCX-ag] create_mapping [configuration_type]
elk_1  | [2017-10-20T13:18:30,329][INFO ][o.e.c.m.MetaDataMappingService] [Q6xLn7b] [bpm-events/cWG124C1SOqS4UR_6QQboA] update_mapping [ActivityEvent]
elk_1  | [2017-10-20T13:18:38,529][INFO ][o.e.c.m.MetaDataMappingService] [Q6xLn7b] [bpm-events/cWG124C1SOqS4UR_6QQboA] update_mapping [ActivityEvent]
elk_1  | [2017-10-20T13:18:38,718][INFO ][o.e.c.m.MetaDataMappingService] [Q6xLn7b] [bpm-events/cWG124C1SOqS4UR_6QQboA] update_mapping [ProcessEvent]
elk_1  | [2017-10-20T13:18:38,810][INFO ][o.e.c.m.MetaDataMappingService] [Q6xLn7b] [bpm-events/cWG124C1SOqS4UR_6QQboA] update_mapping [ActivityEvent]
elk_1  | [2017-10-20T13:18:38,836][INFO ][o.e.c.m.MetaDataMappingService] [Q6xLn7b] [bpm-events/cWG124C1SOqS4UR_6QQboA] update_mapping [ActivityEvent]

which is nice.

Thursday, 19 October 2017

Zipping and Tarring on macOS - with added funkiness

So I had a specific requirement yesterday - I wanted to extract three specific files from a ZIP file.

This is what I had: -

unzip -l certificate-bundle.zip

Archive:  certificate-bundle.zip
  Length      Date    Time    Name
---------  ---------- -----   ----
        0  10-19-2017 16:58   ca/
     1310  10-19-2017 16:58   ca/ca.crt
     1679  10-19-2017 16:58   ca/ca.key
        0  10-19-2017 16:58   node1/
     1379  10-19-2017 16:58   node1/node1.crt
     1679  10-19-2017 16:58   node1/node1.key
---------                     -------
     6047                     6 files


So I wanted to extract the certificates and one of the keys, and place them into specific locations.

BUT…

I didn't want the paths, just the files.

Whilst zip supports this: -

-j  junk paths (do not make directories) 

alas, I couldn't see how to do the same with unzip at the time.

Thankfully, the internet had the answer: -

How do I exclude absolute paths for Tar?

I knew that I could use tar on a ZIP file ( macOS's tar is bsdtar, which auto-detects ZIP archives ), but this was a nuance.

So here're the commands that I used: -

tar xvzf ~/certificate-bundle.zip --strip-components=1 -C ~/Desktop/elasticsearch-config/x-pack ca/ca.crt
tar xvzf ~/certificate-bundle.zip --strip-components=1 -C ~/Desktop/elasticsearch-config/x-pack node1/node1.crt
tar xvzf ~/certificate-bundle.zip --strip-components=1 -C ~/Desktop/elasticsearch-config/x-pack node1/node1.key


So we use --strip-components to remove the path and -C to place the files into specific locations.
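
( In fairness, unzip does have a -j option of its own, plus -d to pick the destination directory, so something like this should also have worked: -

unzip -j ~/certificate-bundle.zip ca/ca.crt node1/node1.crt node1/node1.key -d ~/Desktop/elasticsearch-config/x-pack

but the tar-on-a-ZIP trick is a handy one to have in the toolbox. )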

So that's all good then :-)



IBM BPM and Elasticsearch - with added TLS

Following this: -



I've been tinkering further with Elasticsearch on Docker, establishing a TLS connection between it and IBM BPM.

Here are my notes: -

Pull Image
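
docker pull docker.elastic.co/elasticsearch/elasticsearch:5.6.3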


Start container

es=`docker run -d -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:5.6.3`

Check logs

docker logs $es -f

Upload YAML for Certgen

docker cp ~/instances.yml $es:/usr/share/elasticsearch/config
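
For reference, certgen expects a list of instances, each with a name and any DNS names to bake into the certificate; mine would have looked something like this ( reconstructed from the node name and hostname used below ): -

instances:
  - name: node1
    dns:
      - node1.uk.ibm.com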

Generate Self-Signed Certificate, plus Keys

docker exec -i -t $es /bin/bash -c "/usr/share/elasticsearch/bin/x-pack/certgen -in /usr/share/elasticsearch/config/instances.yml -out /usr/share/elasticsearch/certificate-bundle.zip"

Download Certificates

docker cp $es:/usr/share/elasticsearch/certificate-bundle.zip ~

Stop Container

docker stop $es

Remove Container

docker rm $es

Extract and place certificates and key

tar xvzf ~/certificate-bundle.zip --strip-components=1 -C ~/Desktop/elasticsearch-config/x-pack ca/ca.crt

tar xvzf ~/certificate-bundle.zip --strip-components=1 -C ~/Desktop/elasticsearch-config/x-pack node1/node1.crt

tar xvzf ~/certificate-bundle.zip --strip-components=1 -C ~/Desktop/elasticsearch-config/x-pack node1/node1.key

Re-start container

Note: we're mapping ~/Desktop/elasticsearch-config as the ES config root

es=`docker run -d -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" -v /Users/davidhay/Desktop/elasticsearch-config:/usr/share/elasticsearch/config docker.elastic.co/elasticsearch/elasticsearch:5.6.3`
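
For the TLS side of this to work, the elasticsearch.yml in that config directory needs the X-Pack SSL settings pointing at the certificates placed above - something along these lines ( paths relative to the config directory ): -

xpack.ssl.key: x-pack/node1.key
xpack.ssl.certificate: x-pack/node1.crt
xpack.ssl.certificate_authorities: [ "x-pack/ca.crt" ]
xpack.security.http.ssl.enabled: true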

Check logs

docker logs $es -f

Test using Curl - on host

curl --insecure https://localhost:9200 -u elastic:changeme

( --insecure skips certificate verification - our self-signed CA isn't in curl's trust store, and the certificate won't match the localhost hostname anyway )

Should return: -

{
  "name" : "-2S40f4",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "zV8P1a4FR26Q_J_h1E0QKA",
  "version" : {
    "number" : "5.6.3",
    "build_hash" : "1a2f265",
    "build_date" : "2017-10-06T20:33:39.012Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.1"
  },
  "tagline" : "You Know, for Search"
}

or similar

Test using browser

Browse to https://localhost:9200 - default credentials are elastic/changeme

Should return same JSON

Test on BPM box

Hostname node1.uk.ibm.com aliased to IP address of host Mac

curl --insecure https://node1.uk.ibm.com:9200 -u elastic:changeme

{
  "name" : "-2S40f4",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "zV8P1a4FR26Q_J_h1E0QKA",
  "version" : {
    "number" : "5.6.3",
    "build_hash" : "1a2f265",
    "build_date" : "2017-10-06T20:33:39.012Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.1"
  },
  "tagline" : "You Know, for Search"
}

or similar
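
( Once the CA certificate has been copied across - see the next step - the connection can also be verified properly, rather than with --insecure, since the certificate was generated with node1.uk.ibm.com as a DNS name ( assuming the instances.yml sketched earlier ): -

curl --cacert ~/ca.crt https://node1.uk.ibm.com:9200 -u elastic:changeme )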

Place CA certificate on BPM box

scp ~/Desktop/elasticsearch-config/x-pack/ca.crt wasadmin@bpm86:~

Update BPM Event Emitter YAML files

vi /opt/ibm/WebSphereProfiles/Dmgr01/config/cells/PCCell1/nodes/Node1/servers/SupClusterMember1/analytics/config/BPMEventEmitter.yml

vi /opt/ibm/WebSphereProfiles/Dmgr01/config/cells/PCCell1/clusters/SupCluster/analytics/config/BPMEventEmitter.yml

ES configuration as follows: -

...
esConfiguration:
    enabled: true
    # The Elasticsearch index name
    index: bpm-events
    # Enable the following properties when Elasticsearch security is on.
    username: elastic
    password: changeme
    httpsTrustType: CRT
    trustFileLocation: /home/wasadmin/ca.crt
    hostnameVerifier: false
    esTaskIndex: restore_task_index
...

Synchronise Node

/opt/ibm/WebSphereProfiles/Dmgr01/bin/wsadmin.sh -lang jython -f fullSync.jy
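
fullSync.jy is a little helper script of my own; a minimal sketch of what it does, using the standard NodeSync MBean, might look like this: -

# Hypothetical fullSync.jy - save any outstanding configuration
# changes, then push them out to each managed node
AdminConfig.save()
for node in AdminTask.listManagedNodes().splitlines():
    sync = AdminControl.completeObjectName('type=NodeSync,node=%s,*' % node)
    print AdminControl.invoke(sync, 'sync')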

Validate Sync

ls -al `find /opt/ibm/WebSphereProfiles -name BPMEventEmitter.yml`

-rw-r--r-- 1 wasadmin wasadmins 2793 Oct 19 16:54 /opt/ibm/WebSphereProfiles/AppSrv01/config/cells/PCCell1/clusters/SupCluster/analytics/config/BPMEventEmitter.yml
-rw-r--r-- 1 wasadmin wasadmins 2793 Oct 19 16:54 /opt/ibm/WebSphereProfiles/AppSrv01/config/cells/PCCell1/nodes/Node1/servers/SupClusterMember1/analytics/config/BPMEventEmitter.yml
-rw-r--r-- 1 wasadmin wasadmins 2762 Sep 18 08:51 /opt/ibm/WebSphereProfiles/AppSrv01/installedApps/PCCell1/BPMEventEmitter_war_De1.ear/BPMEventEmitter.war/WEB-INF/classes/BPMEventEmitter.yml
-rw-r--r-- 1 wasadmin wasadmins 2797 Oct 19 17:19 /opt/ibm/WebSphereProfiles/Dmgr01/config/cells/PCCell1/clusters/SupCluster/analytics/config/BPMEventEmitter.yml
-rw-r--r-- 1 wasadmin wasadmins 2797 Oct 19 17:19 /opt/ibm/WebSphereProfiles/Dmgr01/config/cells/PCCell1/nodes/Node1/servers/SupClusterMember1/analytics/config/BPMEventEmitter.yml

All but the BPMEventEmitter_war_De1.ear copy of the file should have the same size/date/time.

Start App

/opt/ibm/WebSphereProfiles/Dmgr01/bin/wsadmin.sh -lang jython

AdminControl.invoke('WebSphere:name=ApplicationManager,process=SupClusterMember1,platform=proxy,node=Node1,version=8.5.5.12,type=ApplicationManager,mbeanIdentifier=ApplicationManager,cell=PCCell1,spec=1.0', 'startApplication', '[BPMEventEmitter_war_De1]')

quit
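
( If you need the exact ApplicationManager MBean name for your own cell, AdminControl.queryNames will find it, e.g.: -

AdminControl.queryNames('type=ApplicationManager,process=SupClusterMember1,*')

and the resulting string is what gets passed to AdminControl.invoke above. )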

Check Logs

tail -f /opt/ibm/WebSphereProfiles/AppSrv01/logs/SupClusterMember1/SystemOut.log

Note

If you see this: -

Caused by: javax.net.ssl.SSLPeerUnverifiedException: Host name '9.174.27.153' does not match the certificate subject provided by the peer (CN=node1, DC=uk, DC=ibm, DC=com)

use: -

hostnameVerifier: false

in BPMEventEmitter.yml - which is why that setting already appears in the configuration above.


Monday, 16 October 2017

Apple Watch - go, no go, go

So I had a weird experience last evening, and not in a good way.

For no apparent reason, this was my Apple Watch: -


and this: -


I have no earthly idea what happened.

So, being a true nerd, and a big fan of The IT Crowd, I decided to ( all together now ) TURN IT OFF AND ON AGAIN ….

Obviously I couldn't read the display, what with it being all garbled n' all, so I just hit the big button on the right-hand side, below the digital crown and chose the appropriate gibberish - it was the one in red, so it must've been the right one ? Right ?

WRONG !!

The next thing I know, my Apple Watch has called 999 ( the UK's emergency services number, similar to 911 in the USA ), and I'm talking to an operator, who's asking how he can help.

When I don't immediately respond ( panic has set in at this point ), he's saying "If you're unable to speak, please press a digit on your phone's dial" etc. assuming, for good reason, that I am injured and cannot respond :-(

I manage to find my voice, and tell him that all is well, and apologise profusely for wasting his time and our public resources ….

Then the house phone rings … and my beloved gets a recorded message telling her that Dave Hay has called the emergency services.

And then I get SMS messages on all my Apple devices …..

And then the home phone rings again, with yet another recorded message with my location ( thanks to Apple Maps ).

In short, the Apple ecosystem has kicked in to save me … even though there's nothing wrong with me, apart from my obvious inability to use Apple hardware.

Finally, I manage to power the watch off, set it on its charging stand, so it can reboot - and all seems well.

For the record, this is what I should've done: -

( screenshot of the Apple Watch power-off screen, showing the POWER OFF, Medical ID and Emergency SOS sliders )

i.e. hit the FIRST rather than the THIRD control.

An update - the landline rang again today, 12 hours later, to tell my beloved that my location had changed - I wonder how much longer it's going to do that ……

IBM Cloud Private - My first foray

So this week, along with many other things, I'm starting to get to grips with the newly announced IBM Cloud Private: -

IBM brings the power of cloud behind the enterprise firewall

I'm running on Ubuntu Linux: -

lsb_release -a

No LSB modules are available.
Distributor ID:    Ubuntu
Description:    Ubuntu 16.04.3 LTS
Release:    16.04
Codename:    xenial


so I started by installing the pre-requisites of VirtualBox and Vagrant: -

sudo apt-get install virtualbox
sudo apt-get install vagrant

and, having cloned the Git repository: -

https://github.com/IBM/deploy-ibm-cloud-private
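
( i.e. git clone https://github.com/IBM/deploy-ibm-cloud-private.git, followed by cd deploy-ibm-cloud-private )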

I followed the instructions to bring up the Vagrant environment: -

vagrant up

Bringing machine 'icp' up with 'virtualbox' provider...
==> icp: Clearing any previously set forwarded ports...
==> icp: Clearing any previously set network interfaces...
==> icp: Preparing network interfaces based on configuration...
    icp: Adapter 1: nat
    icp: Adapter 2: hostonly
==> icp: Forwarding ports...
    icp: 22 (guest) => 2222 (host) (adapter 1)
==> icp: Running 'pre-boot' VM customizations...
A customization command failed:

["modifyvm", :id, "--apic", "on"]

The following error was experienced:

#<Vagrant::Errors::VBoxManageError: There was an error while executing `VBoxManage`, a CLI used by Vagrant
for controlling VirtualBox. The command and stderr is shown below.

Command: ["modifyvm", "6386ef56-d015-4672-919d-40758eeab63c", "--apic", "on"]

Stderr: Oracle VM VirtualBox Command Line Management Interface Version 5.0.40_Ubuntu
(C) 2005-2017 Oracle Corporation
All rights reserved.

Usage:

VBoxManage modifyvm         <uuid|vmname>
                            [--name <name>]
                            [--groups <group>, ...]
                            [--description <desc>]
                            [--ostype <ostype>]
                            [--iconfile <filename>]
                            [--memory <memorysize in MB>]
                            [--pagefusion on|off]
                            [--vram <vramsize in MB>]
                            [--acpi on|off]
                            [--pciattach 03:04.0]
                            [--pciattach 03:04.0@02:01.0]
                            [--pcidetach 03:04.0]
                            [--ioapic on|off]
                            [--hpet on|off]
                            [--triplefaultreset on|off]
                            [--paravirtprovider none|default|legacy|minimal|
                                                hyperv|kvm]
                            [--hwvirtex on|off]
                            [--nestedpaging on|off]
                            [--largepages on|off]
                            [--vtxvpid on|off]
                            [--vtxux on|off]
                            [--pae on|off]
                            [--longmode on|off]
                            [--cpuid-portability-level <0..3>
                            [--cpuidset <leaf> <eax> <ebx> <ecx> <edx>]
                            [--cpuidremove <leaf>]
                            [--cpuidremoveall]
                            [--hardwareuuid <uuid>]
                            [--cpus <number>]
                            [--cpuhotplug on|off]
                            [--plugcpu <id>]
                            [--unplugcpu <id>]
                            [--cpuexecutioncap <1-100>]
                            [--rtcuseutc on|off]
                            [--graphicscontroller none|vboxvga|vmsvga]
                            [--monitorcount <number>]
                            [--accelerate3d on|off]
                            [--accelerate2dvideo on|off]
                            [--firmware bios|efi|efi32|efi64]
                            [--chipset ich9|piix3]
                            [--bioslogofadein on|off]
                            [--bioslogofadeout on|off]
                            [--bioslogodisplaytime <msec>]
                            [--bioslogoimagepath <imagepath>]
                            [--biosbootmenu disabled|menuonly|messageandmenu]
                            [--biossystemtimeoffset <msec>]
                            [--biospxedebug on|off]
                            [--boot<1-4> none|floppy|dvd|disk|net>]
                            [--nic<1-N> none|null|nat|bridged|intnet|hostonly|
                                        generic|natnetwork]
                            [--nictype<1-N> Am79C970A|Am79C973|
                                            82540EM|82543GC|82545EM|
                                            virtio]
                            [--cableconnected<1-N> on|off]
                            [--nictrace<1-N> on|off]
                            [--nictracefile<1-N> <filename>]
                            [--nicproperty<1-N> name=[value]]
                            [--nicspeed<1-N> <kbps>]
                            [--nicbootprio<1-N> <priority>]
                            [--nicpromisc<1-N> deny|allow-vms|allow-all]
                            [--nicbandwidthgroup<1-N> none|<name>]
                            [--bridgeadapter<1-N> none|<devicename>]
                            [--hostonlyadapter<1-N> none|<devicename>]
                            [--intnet<1-N> <network name>]
                            [--nat-network<1-N> <network name>]
                            [--nicgenericdrv<1-N> <driver>
                            [--natnet<1-N> <network>|default]
                            [--natsettings<1-N> [<mtu>],[<socksnd>],
                                                [<sockrcv>],[<tcpsnd>],
                                                [<tcprcv>]]
                            [--natpf<1-N> [<rulename>],tcp|udp,[<hostip>],
                                          <hostport>,[<guestip>],<guestport>]
                            [--natpf<1-N> delete <rulename>]
                            [--nattftpprefix<1-N> <prefix>]
                            [--nattftpfile<1-N> <file>]
                            [--nattftpserver<1-N> <ip>]
                            [--natbindip<1-N> <ip>
                            [--natdnspassdomain<1-N> on|off]
                            [--natdnsproxy<1-N> on|off]
                            [--natdnshostresolver<1-N> on|off]
                            [--nataliasmode<1-N> default|[log],[proxyonly],
                                                         [sameports]]
                            [--macaddress<1-N> auto|<mac>]
                            [--mouse ps2|usb|usbtablet|usbmultitouch]
                            [--keyboard ps2|usb
                            [--uart<1-N> off|<I/O base> <IRQ>]
                            [--uartmode<1-N> disconnected|
                                             server <pipe>|
                                             client <pipe>|
                                             tcpserver <port>|
                                             tcpclient <hostname:port>|
                                             file <file>|
                                             <devicename>]
                            [--lpt<1-N> off|<I/O base> <IRQ>]
                            [--lptmode<1-N> <devicename>]
                            [--guestmemoryballoon <balloonsize in MB>]
                            [--audio none|null|oss|alsa|pulse]
                            [--audiocontroller ac97|hda|sb16]
                            [--audiocodec stac9700|ad1980|stac9221|sb16]
                            [--clipboard disabled|hosttoguest|guesttohost|
                                         bidirectional]
                            [--draganddrop disabled|hosttoguest]
                            [--vrde on|off]
                            [--vrdeextpack default|<name>
                            [--vrdeproperty <name=[value]>]
                            [--vrdeport <hostport>]
                            [--vrdeaddress <hostip>]
                            [--vrdeauthtype null|external|guest]
                            [--vrdeauthlibrary default|<name>
                            [--vrdemulticon on|off]
                            [--vrdereusecon on|off]
                            [--vrdevideochannel on|off]
                            [--vrdevideochannelquality <percent>]
                            [--usb on|off]
                            [--usbehci on|off]
                            [--usbxhci on|off]
                            [--usbrename <oldname> <newname>]
                            [--snapshotfolder default|<path>]
                            [--teleporter on|off]
                            [--teleporterport <port>]
                            [--teleporteraddress <address|empty>
                            [--teleporterpassword <password>]
                            [--teleporterpasswordfile <file>|stdin]
                            [--tracing-enabled on|off]
                            [--tracing-config <config-string>]
                            [--tracing-allow-vm-access on|off]
                            [--usbcardreader on|off]
                            [--autostart-enabled on|off]
                            [--autostart-delay <seconds>]
                            [--videocap on|off]
                            [--videocapscreens all|<screen ID> [<screen ID> ...]]
                            [--videocapfile <filename>]
                            [--videocapres <width> <height>]
                            [--videocaprate <rate>]
                            [--videocapfps <fps>]
                            [--videocapmaxtime <ms>]
                            [--videocapmaxsize <MB>]
                            [--videocapopts <key=value> [<key=value> ...]]
                            [--defaultfrontend default|<name>]

VBoxManage: error: Unknown option: --apic

Please fix this customization and try again.


Suspecting that I'd got the wrong versions of the pre-requisites, I checked what I'd installed: -

vagrant -v

Vagrant 1.8.1

VBoxManage -version

5.0.40_Ubuntur115130

whereas the above Git page specifies: -

Vagrant 2.0.0

VirtualBox 5.1.28

I downloaded the latest versions of both: -

https://www.hashicorp.com/blog/hashicorp-vagrant-2-0/

https://www.virtualbox.org/wiki/Linux_Downloads

and started by installing the new version of Vagrant, and retrying the ICP installation: -

vagrant up

Bringing machine 'icp' up with 'virtualbox' provider...
==> icp: Clearing any previously set forwarded ports...
==> icp: Clearing any previously set network interfaces...
==> icp: Preparing network interfaces based on configuration...
    icp: Adapter 1: nat
    icp: Adapter 2: hostonly
==> icp: Forwarding ports...
    icp: 22 (guest) => 2222 (host) (adapter 1)
==> icp: Running 'pre-boot' VM customizations...
==> icp: Booting VM...
There was an error while executing `VBoxManage`, a CLI used by Vagrant
for controlling VirtualBox. The command and stderr is shown below.

Command: ["startvm", "6386ef56-d015-4672-919d-40758eeab63c", "--type", "headless"]

Stderr: VBoxManage: error: The virtual machine 'IBM-Cloud-Private-dev-edition' has terminated unexpectedly during startup with exit code 1 (0x1)
VBoxManage: error: Details: code NS_ERROR_FAILURE (0x80004005), component MachineWrap, interface IMachine

Assuming that the problem was more with VirtualBox than Vagrant, I installed the new version of that ( which took a bit of work with sudo dpkg --remove and sudo dpkg --purge ).

Having validated the versions: -

vagrant -v

Vagrant 2.0.0

VBoxManage -v

5.1.28r117968

This time around: -

vagrant up

Bringing machine 'icp' up with 'virtualbox' provider...
==> icp: Clearing any previously set forwarded ports...
==> icp: Clearing any previously set network interfaces...
==> icp: Preparing network interfaces based on configuration...
    icp: Adapter 1: nat
    icp: Adapter 2: hostonly
==> icp: Forwarding ports...
    icp: 22 (guest) => 2222 (host) (adapter 1)
==> icp: Running 'pre-boot' VM customizations...
==> icp: Booting VM...
==> icp: Waiting for machine to boot. This may take a few minutes...
    icp: SSH address: 127.0.0.1:2222
    icp: SSH username: vagrant
    icp: SSH auth method: private key
==> icp: Machine booted and ready!
==> icp: Checking for guest additions in VM...
==> icp: Setting hostname...
==> icp: Running provisioner: shell...
    icp: Running: script: configure_master_ssh_keys
==> icp: Running provisioner: shell...
    icp: Running: script: configure_swap_space
==> icp: Setting up swapspace version 1, size = 8 GiB (8589930496 bytes)
==> icp: no label, UUID=d5e47d79-2646-4bf8-b89d-45b60ca406ff
==> icp: vm.swappiness = 60
==> icp: vm.vfs_cache_pressure = 10
==> icp: Running provisioner: shell...
    icp: Running: script: configure_performance_settings
==> icp: vm.swappiness = 60
==> icp: vm.vfs_cache_pressure = 10
==> icp: net.ipv4.ip_forward = 1

...

==> icp: Starting cfc-worker2
==> icp: Running provisioner: shell...
    icp: Running: script: wait_for_worker_nodes_to_boot
==> icp:
==> icp: Preparing nodes for IBM Cloud Private community edition cluster installation.
==> icp: This process will take approximately 10-20 minutes depending on network speeds.
==> icp: Take a break and go grab a cup of coffee, we'll keep working on this while you're away ;-)
==> icp: .
==> icp: .
==> icp: .
==> icp: master.icp             ready
==> icp: cfc-worker1.icp         ready
==> icp: cfc-worker2.icp         ready
==> icp: cfc-manager1.icp         ready
==> icp: Running provisioner: shell...
    icp: Running: script: precache_images
==> icp:
==> icp: Seeding IBM Cloud Private installation by pre-caching required docker images.
==> icp: This may take a few minutes depending on your connection speed and reliability.
==> icp: Pre-caching docker images....
==> icp: Pulling ibmcom/icp-inception:2.1.0-beta-3...
==> icp: Pulling ibmcom/icp-datastore:2.1.0-beta-3...
==> icp: Pulling ibmcom/icp-platform-auth:2.1.0-beta-3...
==> icp: Pulling ibmcom/icp-auth:2.1.0-beta-3...

...

So it hasn't yet finished, but, in the words of Tom Cruise, "It's looking good so far"

:-)