Tuesday, 31 July 2018

WebSphere Application Server - Patching

As part of an integration between IBM Business Process Manager (BPM) Standard 8.6 and IBM Master Data Management (MDM) Advanced Edition 11.6.0.5, I needed to add a patch to the underlying WebSphere Application Server (WAS) Network Deployment 8.5.5.13.

This is the requisite fix: -


and this is how I applied it: -

See what's installed

/opt/ibm/InstallationManager/eclipse/tools/imcl listInstalledPackages

...
com.ibm.websphere.ND.v85_8.5.5013.20180112_1418


See the available fix

/opt/ibm/InstallationManager/eclipse/tools/imcl listAvailableFixes com.ibm.websphere.ND.v85_8.5.5013.20180112_1418 -repositories /mnt/WAS85/Fixes/8.5.5.12-ws-wasprod-ifpi94754.zip

...
8.5.5.12-WS-WASProd-IFPI94754_8.5.5012.20180307_1003
...


Install the fix

/opt/ibm/InstallationManager/eclipse/tools/imcl install 8.5.5.12-WS-WASProd-IFPI94754_8.5.5012.20180307_1003 -repositories /mnt/WAS85/Fixes/8.5.5.12-ws-wasprod-ifpi94754.zip -installationDirectory /opt/ibm/WebSphere/AppServer/

Installed 8.5.5.12-WS-WASProd-IFPI94754_8.5.5012.20180307_1003 to the /opt/ibm/WebSphere/AppServer directory.

See what's installed

/opt/ibm/InstallationManager/eclipse/tools/imcl listInstalledPackages

com.ibm.cic.agent_1.8.7000.20170706_2137
com.ibm.bpm.ADV.v85_8.6.201803.20180316_1456
com.ibm.websphere.ND.v85_8.5.5013.20180112_1418
8.5.5.12-WS-WASProd-IFPI94754_8.5.5012.20180307_1003
com.ibm.mdm.advanced.0.5_11.6.0.FP05IF000_20180326-1642
com.ibm.mdm.mdph.0.5_11.6.0.FP05IF000_20180326-1642
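
When doing this across several hosts, it can help to check the imcl output in a script rather than by eye. A minimal sketch, using the fix ID from above; the helper name and sample text are mine: -

```python
def fix_installed(imcl_output, fix_id):
    """Return True if fix_id appears in 'imcl listInstalledPackages' output."""
    return any(line.strip() == fix_id for line in imcl_output.splitlines())

# Trimmed sample of the listInstalledPackages output shown above
sample = """com.ibm.cic.agent_1.8.7000.20170706_2137
com.ibm.websphere.ND.v85_8.5.5013.20180112_1418
8.5.5.12-WS-WASProd-IFPI94754_8.5.5012.20180307_1003"""

print(fix_installed(sample, "8.5.5.12-WS-WASProd-IFPI94754_8.5.5012.20180307_1003"))  # True
```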


Job done !

IBM AppConnect Enterprise 11 - Fiddling with Containers and Logs

Some rough notes from today's tinkering, in the context of throwing logs from ACE in a container to the outside world.

Next is to add ELK ( Elasticsearch / Logstash / Kibana ) into the mix …

Create a directory to which to map the logs

mkdir ~/acelogs

Start the container

docker run --name myAce -e LICENSE=accept -v ~/acelogs:/home/aceuser/ace-server/log -p 7600:7600 -p 7800:7800 ibmcom/ace:11.0.0.0

Monitor the logs

tail -f ~/acelogs/integration_server.ace-server.events.txt

-or-

docker logs myAce -f
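
If neither tail nor docker logs is to hand where the logs land, the same follow-the-file behaviour is easy to approximate by polling for newly appended bytes. A rough sketch; the file name and event text are illustrative, not real ACE output: -

```python
import os
import tempfile

def read_new_lines(path, offset):
    """Read any lines appended to 'path' since byte 'offset'.
    Returns (lines, new_offset); polling this in a loop approximates 'tail -f'."""
    with open(path, "r") as f:
        f.seek(offset)
        data = f.read()
        return data.splitlines(), f.tell()

# Demonstration against a throwaway file
path = os.path.join(tempfile.mkdtemp(), "integration_server.events.txt")
with open(path, "w") as f:
    f.write("first event\n")
lines, offset = read_new_lines(path, 0)
with open(path, "a") as f:
    f.write("second event\n")
new_lines, offset = read_new_lines(path, offset)
print(new_lines)  # ['second event']
```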

Monitor SysLog inside the container

docker exec -i -t myAce tail -f /var/log/syslog

- Note that there's nothing going on here, because ACE doesn't appear to be using syslog - this is next on my list ….

Inspect the container

docker inspect myAce

[
    {
        "Id": "85662e2ff8ed14579d883b42ef4c73a379dc2bb6387e66813a3ab8a29fe0980d",
        "Created": "2018-07-30T16:17:16.385528759Z",
        "Path": "/bin/bash",
        "Args": [
            "-c",
            "/usr/local/bin/ace_license_check.sh && IntegrationServer -w /home/aceuser/ace-server --console-log"
        ],
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 5498,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2018-07-31T07:00:46.924337421Z",
            "FinishedAt": "2018-07-30T16:22:31.724425914Z"
        },
        "Image": "sha256:e2b6a1ec30896826a42c5c5effb4eee8e2ed25e694c41de73a2f3ba866e0717a",
        "ResolvConfPath": "/var/lib/docker/containers/85662e2ff8ed14579d883b42ef4c73a379dc2bb6387e66813a3ab8a29fe0980d/resolv.conf",
        "HostnamePath": "/var/lib/docker/containers/85662e2ff8ed14579d883b42ef4c73a379dc2bb6387e66813a3ab8a29fe0980d/hostname",
        "HostsPath": "/var/lib/docker/containers/85662e2ff8ed14579d883b42ef4c73a379dc2bb6387e66813a3ab8a29fe0980d/hosts",
        "LogPath": "/var/lib/docker/containers/85662e2ff8ed14579d883b42ef4c73a379dc2bb6387e66813a3ab8a29fe0980d/85662e2ff8ed14579d883b42ef4c73a379dc2bb6387e66813a3ab8a29fe0980d-json.log",
        "Name": "/myAce",
        "RestartCount": 0,
        "Driver": "aufs",
        "Platform": "linux",
        "MountLabel": "",
        "ProcessLabel": "",
        "AppArmorProfile": "",
        "ExecIDs": null,
        "HostConfig": {
            "Binds": [
                "/Users/davidhay/acelogs:/home/aceuser/ace-server/log"
            ],
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "json-file",
                "Config": {}
            },
            "NetworkMode": "default",
            "PortBindings": {
                "7600/tcp": [
                    {
                        "HostIp": "",
                        "HostPort": "7600"
                    }
                ],
                "7800/tcp": [
                    {
                        "HostIp": "",
                        "HostPort": "7800"
                    }
                ]
            },
            "RestartPolicy": {
                "Name": "no",
                "MaximumRetryCount": 0
            },
            "AutoRemove": false,
            "VolumeDriver": "",
            "VolumesFrom": null,
            "CapAdd": null,
            "CapDrop": null,
            "Dns": [],
            "DnsOptions": [],
            "DnsSearch": [],
            "ExtraHosts": null,
            "GroupAdd": null,
            "IpcMode": "shareable",
            "Cgroup": "",
            "Links": null,
            "OomScoreAdj": 0,
            "PidMode": "",
            "Privileged": false,
            "PublishAllPorts": false,
            "ReadonlyRootfs": false,
            "SecurityOpt": null,
            "UTSMode": "",
            "UsernsMode": "",
            "ShmSize": 67108864,
            "Runtime": "runc",
            "ConsoleSize": [
                0,
                0
            ],
            "Isolation": "",
            "CpuShares": 0,
            "Memory": 0,
            "NanoCpus": 0,
            "CgroupParent": "",
            "BlkioWeight": 0,
            "BlkioWeightDevice": [],
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 0,
            "CpuQuota": 0,
            "CpuRealtimePeriod": 0,
            "CpuRealtimeRuntime": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": [],
            "DeviceCgroupRules": null,
            "DiskQuota": 0,
            "KernelMemory": 0,
            "MemoryReservation": 0,
            "MemorySwap": 0,
            "MemorySwappiness": null,
            "OomKillDisable": false,
            "PidsLimit": 0,
            "Ulimits": null,
            "CpuCount": 0,
            "CpuPercent": 0,
            "IOMaximumIOps": 0,
            "IOMaximumBandwidth": 0,
            "MaskedPaths": [
                "/proc/acpi",
                "/proc/kcore",
                "/proc/keys",
                "/proc/latency_stats",
                "/proc/timer_list",
                "/proc/timer_stats",
                "/proc/sched_debug",
                "/proc/scsi",
                "/sys/firmware"
            ],
            "ReadonlyPaths": [
                "/proc/asound",
                "/proc/bus",
                "/proc/fs",
                "/proc/irq",
                "/proc/sys",
                "/proc/sysrq-trigger"
            ]
        },
        "GraphDriver": {
            "Data": null,
            "Name": "aufs"
        },
        "Mounts": [
            {
                "Type": "bind",
                "Source": "/Users/davidhay/acelogs",
                "Destination": "/home/aceuser/ace-server/log",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            }
        ],
        "Config": {
            "Hostname": "85662e2ff8ed",
            "Domainname": "",
            "User": "aceuser",
            "AttachStdin": false,
            "AttachStdout": true,
            "AttachStderr": true,
            "ExposedPorts": {
                "7600/tcp": {},
                "7800/tcp": {}
            },
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "LICENSE=accept",
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                "BASH_ENV=/usr/local/bin/ace_env.sh"
            ],
            "Cmd": [
                "/bin/bash",
                "-c",
                "/usr/local/bin/ace_license_check.sh && IntegrationServer -w /home/aceuser/ace-server --console-log"
            ],
            "ArgsEscaped": true,
            "Image": "ibmcom/ace:11.0.0.0",
            "Volumes": null,
            "WorkingDir": "/home/aceuser",
            "Entrypoint": null,
            "OnBuild": null,
            "Labels": {
                "maintainer": "Dan Robinson <dan.robinson@uk.ibm.com>",
                "product.id": "447aefb5fd1342d5b893f3934dfded73",
                "product.name": "IBM App Connect Enterprise",
                "product.version": "11.0.0.0"
            }
        },
        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "995cc719e422a83639ac1b56c5b058d416805086854b753cf3735906c676c329",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {
                "7600/tcp": [
                    {
                        "HostIp": "0.0.0.0",
                        "HostPort": "7600"
                    }
                ],
                "7800/tcp": [
                    {
                        "HostIp": "0.0.0.0",
                        "HostPort": "7800"
                    }
                ]
            },
            "SandboxKey": "/var/run/docker/netns/995cc719e422",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "a64b6f54d6bd4808f0a68e89991f88dd13b7a82fb588067ad14ebbe067b49ab0",
            "Gateway": "172.17.0.1",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "172.17.0.3",
            "IPPrefixLen": 16,
            "IPv6Gateway": "",
            "MacAddress": "02:42:ac:11:00:03",
            "Networks": {
                "bridge": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "0a85739e0f5285e6e3a0c006ec0e1547b18a13fe368865c0dc613c2ea933b2a7",
                    "EndpointID": "a64b6f54d6bd4808f0a68e89991f88dd13b7a82fb588067ad14ebbe067b49ab0",
                    "Gateway": "172.17.0.1",
                    "IPAddress": "172.17.0.3",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:ac:11:00:03",
                    "DriverOpts": null
                }
            }
        }
    }
]

Check Docker logging driver

docker info | grep 'Logging Driver'

Logging Driver: json-file

Check Logging Driver for container

docker inspect -f '{{.HostConfig.LogConfig.Type}}' myAce

json-file
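
The same value can be pulled out of the full docker inspect JSON if you'd rather not use a Go template. A small sketch, against a trimmed-down sample of the output shown above: -

```python
import json

def log_driver(inspect_json):
    """Extract HostConfig.LogConfig.Type from 'docker inspect' output,
    equivalent to: docker inspect -f '{{.HostConfig.LogConfig.Type}}'."""
    return json.loads(inspect_json)[0]["HostConfig"]["LogConfig"]["Type"]

# docker inspect returns a JSON array with one object per container
sample = '[{"HostConfig": {"LogConfig": {"Type": "json-file", "Config": {}}}}]'
print(log_driver(sample))  # json-file
```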

Stop the container, when required

docker stop myAce

Start the container

docker start myAce

Start a shell session

docker exec -i -t myAce /bin/bash

Monday, 23 July 2018

DB2 - Moving databases

This is definitely a Your Mileage May Vary (YMMV) post.

If in doubt, please check with IBM Support *BEFORE* following the steps outlined here …

So I had a requirement to rename some IBM BPM databases from their default names of BPMDB, CMNDB and PDWDB.

This is related to IBM BPM 8.6 on DB2 v11.1.2.2 although the same approach works for DB2 v10.5 as well.

Thankfully DB2 comes with a useful db2relocatedb tool, as described here: -


So, before doing this for real, I wanted to test it using the SAMPLE database.

This is what I did ….

Switch to the instance owner

su - db2inst1

Create the SAMPLE database

 db2sampl 

  Creating database "SAMPLE"...
  Connecting to database "SAMPLE"...
  Creating tables and data in schema "DB2INST1"...
  Creating tables with XML columns and XML data in schema "DB2INST1"...

  'db2sampl' processing complete.


Validate the current catalog

db2 list db directory

 System Database Directory

 Number of entries in the directory = 4

Database 4 entry:

 Database alias                       = SAMPLE
 Database name                        = SAMPLE
 Local database directory             = /home/db2inst1
 Database release level               = 14.00
 Comment                              =
 Directory entry type                 = Indirect
 Catalog database partition number    = 0
 Alternate server hostname            =
 Alternate server port number         =
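
If you need this catalog information in a script, each entry parses cleanly on the '=' separator. A quick sketch, against a trimmed sample of the output above; the helper name is mine: -

```python
def parse_db_entry(entry_text):
    """Turn one 'db2 list db directory' entry into a dict of its key/value pairs."""
    result = {}
    for line in entry_text.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            result[key.strip()] = value.strip()
    return result

sample = """ Database alias                       = SAMPLE
 Database name                        = SAMPLE
 Local database directory             = /home/db2inst1"""

print(parse_db_entry(sample)["Database alias"])  # SAMPLE
```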

Validate the current DB storage

ls -al /home/db2inst1/db2inst1/NODE0000/SAMPLE

total 4
drwx--x--x   8 db2inst1 db2iadm1  114 Jul 23 13:39 .
drwxrwxr-x. 11 db2inst1 db2iadm1 4096 Jul 23 13:36 ..
-rw-------   1 db2inst1 db2iadm1    0 Jul 23 13:36 .SQLCRT.FLG
drwx--x--x   2 db2inst1 db2iadm1   43 Jul 23 13:36 T0000000
drwx--x--x   3 db2inst1 db2iadm1   43 Jul 23 13:37 T0000001
drwx--x--x   2 db2inst1 db2iadm1   43 Jul 23 13:36 T0000002
drwx--x--x   2 db2inst1 db2iadm1   43 Jul 23 13:36 T0000003
drwx--x--x   2 db2inst1 db2iadm1   43 Jul 23 13:36 T0000004
drwx--x--x   2 db2inst1 db2iadm1   43 Jul 23 13:39 T0000005


Connect to SAMPLE

db2 connect to sample

   Database Connection Information

 Database server        = DB2/LINUXX8664 11.1.2.2
 SQL authorization ID   = DB2INST1
 Local database alias   = SAMPLE


Check that we can access data

db2 "select * from db2inst1.employee where empno = '000010'"

EMPNO  FIRSTNME     MIDINIT LASTNAME        WORKDEPT PHONENO HIREDATE   JOB      EDLEVEL SEX BIRTHDATE  SALARY      BONUS       COMM       
------ ------------ ------- --------------- -------- ------- ---------- -------- ------- --- ---------- ----------- ----------- -----------
000010 CHRISTINE    I       HAAS            A00      3978    01/01/1995 PRES          18 F   24/08/1963   152750.00     1000.00     4220.00

  1 record(s) selected.

Terminate the connection

db2 terminate

DB20000I  The TERMINATE command completed successfully.

Create a template configuration file

This defines the FROM and TO states

vi sample.cfg

DB_NAME=SAMPLE,SAMPLENE
DB_PATH=/home/db2inst1
INSTANCE=db2inst1
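
Since the config file is just three KEY=VALUE lines, generating it per database is trivial to script - handy when renaming BPMDB, CMNDB and PDWDB in turn. A sketch; the helper name is mine, and the values are the ones used above: -

```python
def relocate_config(old_name, new_name, db_path, instance):
    """Build a minimal db2relocatedb config for a same-path database rename."""
    return "DB_NAME={0},{1}\nDB_PATH={2}\nINSTANCE={3}\n".format(
        old_name, new_name, db_path, instance)

print(relocate_config("SAMPLE", "SAMPLENE", "/home/db2inst1", "db2inst1"))
```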


Move the database from the old container to the new container

Note that this works for me because my database has a single partition, and is located in the instance owner's home directory
- This is where YOUR mileage MAY/WILL vary

mv /home/db2inst1/db2inst1/NODE0000/SAMPLE /home/db2inst1/db2inst1/NODE0000/SAMPLENE

Validate the new DB storage layout

 ls -al /home/db2inst1/db2inst1/NODE0000/SAMPLENE/

total 4
drwx--x--x   8 db2inst1 db2iadm1  114 Jul 23 13:39 .
drwxrwxr-x. 11 db2inst1 db2iadm1 4096 Jul 23 14:28 ..
-rw-------   1 db2inst1 db2iadm1    0 Jul 23 13:36 .SQLCRT.FLG
drwx--x--x   2 db2inst1 db2iadm1   43 Jul 23 13:36 T0000000
drwx--x--x   3 db2inst1 db2iadm1   43 Jul 23 13:37 T0000001
drwx--x--x   2 db2inst1 db2iadm1   43 Jul 23 13:36 T0000002
drwx--x--x   2 db2inst1 db2iadm1   43 Jul 23 13:36 T0000003
drwx--x--x   2 db2inst1 db2iadm1   43 Jul 23 13:36 T0000004
drwx--x--x   2 db2inst1 db2iadm1   43 Jul 23 13:39 T0000005


Run the db2relocatedb command to update the catalog

db2relocatedb -f sample.cfg

Files and control structures were changed successfully.
Database was catalogued successfully.
DBT1000I  The tool completed successfully.

Validate the updated catalog

db2 list db directory

 System Database Directory

 Number of entries in the directory = 4


Database 4 entry:

 Database alias                       = SAMPLENE
 Database name                        = SAMPLENE
 Local database directory             = /home/db2inst1
 Database release level               = 14.00
 Comment                              =
 Directory entry type                 = Indirect
 Catalog database partition number    = 0
 Alternate server hostname            =
 Alternate server port number         =


Connect to SAMPLENE

db2 connect to samplene

   Database Connection Information

 Database server        = DB2/LINUXX8664 11.1.2.2
 SQL authorization ID   = DB2INST1
 Local database alias   = SAMPLENE


Check that we can access data

db2 "select * from db2inst1.employee where empno = '000010'"

EMPNO  FIRSTNME     MIDINIT LASTNAME        WORKDEPT PHONENO HIREDATE   JOB      EDLEVEL SEX BIRTHDATE  SALARY      BONUS       COMM       
------ ------------ ------- --------------- -------- ------- ---------- -------- ------- --- ---------- ----------- ----------- -----------
000010 CHRISTINE    I       HAAS            A00      3978    01/01/1995 PRES          18 F   24/08/1963   152750.00     1000.00     4220.00

  1 record(s) selected.


Terminate the connection

db2 terminate

DB20000I  The TERMINATE command completed successfully.

Again, this is definitely a Your Mileage May Vary (YMMV) post.

If in doubt, please check with IBM Support *BEFORE* following the steps outlined here …

WebSphere Application Server - Testing JDBC connections via Jython and the EJBTimer

As part of a recent engagement, I'd written a simple Jython script to test WAS -> database connections: -

cellID = AdminControl.getCell()
cell=AdminConfig.getid( '/Cell:'+cellID+'/')
for dataSource in AdminConfig.list('DataSource',cell).splitlines():
 print dataSource
 AdminControl.testConnection(dataSource)


However, when I ran this against an IBM Business Process Manager Standard 8.6 environment, I saw this: -

DefaultEJBTimerDataSource(cells/PCCell1/applications/commsvc.ear/deployments/commsvc|resources.xml#DataSource_1228749623069)
WASX7017E: Exception received while running file "/mnt/Scripts/testDataSource.jy"; exception information: com.ibm.websphere.management.exception.AdminException
javax.management.MBeanException
java.sql.SQLException: java.sql.SQLException: Database '/opt/ibm/WebSphereProfiles/AppSrv01/databases/EJBTimers/AppClusterMember1/EJBTimerDB' not found. DSRA0010E: SQL State = XJ004, Error Code = 40,000

which was an annoyance, as I'm not actively using the EJBTimer datasource.

As ever, the solution was simple: rather than testing ALL datasources within the cell, I changed the script to test only the datasources that are specifically part of the BPM Deployment Environment, i.e. those that are scoped at cluster level.
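
Another option, had I wanted to keep cell-wide coverage, would have been to guard each testConnection call so that one broken datasource doesn't abort the whole run. The wsadmin objects aren't available outside WAS, so here's the pattern as a self-contained sketch - fake_test stands in for AdminControl.testConnection, and the names and error text are illustrative: -

```python
def check_datasources(datasources, test_fn):
    """Run test_fn against each datasource, collecting failures
    rather than letting the first exception abort the whole run."""
    failures = {}
    for ds in datasources:
        try:
            test_fn(ds)
        except Exception as e:
            failures[ds] = str(e)
    return failures

# Fake tester standing in for AdminControl.testConnection
def fake_test(ds):
    if "EJBTimer" in ds:
        raise RuntimeError("DSRA0010E: SQL State = XJ004")

failures = check_datasources(["BPMDB", "DefaultEJBTimerDataSource"], fake_test)
print(failures)  # {'DefaultEJBTimerDataSource': 'DSRA0010E: SQL State = XJ004'}
```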

cellID = AdminControl.getCell()

cell=AdminConfig.getid( '/Cell:'+cellID+'/')
cluster=AdminConfig.getid("/ServerCluster:AppCluster/")
for dataSource in AdminConfig.list('DataSource',cluster).splitlines():
 print dataSource
 AdminControl.testConnection(dataSource)


For a BPM Standard environment, this is good enough …

Also, for the record, it's possible to see the EJBTimer datasources within the WAS Integrated Solutions Console: -




which is nice.

Wednesday, 18 July 2018

Learning something new every day .... Where's my WAS Admin Console stuff for BPM ?

So today I'm working with a client to deploy IBM BPM Standard 8.6 alongside IBM Master Data Management 11.6, to leverage the Data Stewardship capabilities that the BPM/MDM tie-up offers.

In brief, this means that I'm installing the BPM binaries into an existing installation of WebSphere Application Server Network Deployment 8.5.5.13, which is hosting the MDM environment ( Deployment Manager, Node Agent and Application Server/Cluster ).

However, when I created the BPM Deployment Environment using BPMConfig.sh, I specified DIFFERENT WAS profiles, leaving the MDM stuff well alone.

So, for no obvious reason, I hit a small glitch with my Deployment Manager, having built the BPM Deployment Environment.

Whilst I can start/stop things, run up the Process Center/Portal/Admin UIs etc., I'm unable to see the BPM-related capabilities within the Deployment Manager ( Integrated Solutions Console ), such as: -




etc., even though the ISC *DID* show BPM as being installed: -


( ignore the version in this example; I've got 8.5.6 on my VM, but 8.6.0 on the customer's environment )

I also checked that the profile was properly augmented: -

cat /opt/ibm/WebSphere/AppServer/properties/profileRegistry.xml
 
<?xml version="1.0" encoding="UTF-8"?><profiles>
    <profile isAReservationTicket="false" isDefault="true" name="Dmgr01" path="/opt/ibm/WebSphereProfiles/Dmgr01" template="/opt/ibm/WebSphere/AppServer/profileTemplates/management">
        <augmentor template="/opt/ibm/WebSphere/AppServer/profileTemplates/BPM/BpmDmgr"/>
    </profile>
    <profile isAReservationTicket="false" isDefault="false" name="AppSrv01" path="/opt/ibm/WebSphereProfiles/AppSrv01" template="/opt/ibm/WebSphere/AppServer/profileTemplates/managed">
        <augmentor template="/opt/ibm/WebSphere/AppServer/profileTemplates/BPM/BpmNode"/>
    </profile>
</profiles>


but to no avail.

More weirdly, I received something similar to this: -

ServletWrappe E com.ibm.ws.webcontainer.servlet.ServletWrapper service SRVE0014E: Uncaught service() exception root cause /com.ibm.ws.console.servermanagement/addPropLayout.jsp: com.ibm.websphere.servlet.error.ServletErrorReport: javax.servlet.jsp.JspException: Missing message for key "addprops.category.label.businessintegration"

when I attempted to navigate here: -


and the right-hand side of the page merely contained that exception.

Thankfully this post had the answer: -


specifically this bit: -

3) run the command to restore admin console application: [Dmgr profile]/bin/iscdeploy.sh -restore
...

So I did the needful: -

/opt/ibm/WebSphereProfiles/Dmgr01/bin/iscdeploy.sh 

having shut down ALL the JVMs, and magically it fixed the problem.

I'm not sure how I got here, but glad I found a fix.

Thanks, Internet, you rock !



Friday, 13 July 2018

IBM Cloudant - Another useful source

I've referenced Glynn Bird here before, as the author of the most excellent couchimport, couchbackup and couchrestore tools.

Here's Glynn's personal site: -


including even more lovely CouchDB ( Cloudant ) goodness.

Wednesday, 11 July 2018

Microsoft Excel - Generating test data

We had a requirement to generate some test data within a Microsoft Excel spreadsheet, similar to this: -


This was the magic invocation: -

="Subscriber Reference #"&ROW()

We merely needed to copy that to the clipboard, and paste it into a nice chunk 'o cells ….

So it's a piece of text concatenated with the row() index, using the ampersand ( & ) symbol.
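
The same series is easy to generate outside Excel too, e.g. when seeding a CSV. A sketch mirroring the ROW()-based formula, with 1-based indices to match spreadsheet rows; the function name is mine: -

```python
def subscriber_refs(first_row, last_row):
    """Mimic the ="Subscriber Reference #"&ROW() formula for a range of rows."""
    return ["Subscriber Reference #%d" % row
            for row in range(first_row, last_row + 1)]

print(subscriber_refs(1, 3))
# ['Subscriber Reference #1', 'Subscriber Reference #2', 'Subscriber Reference #3']
```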


IBM BPM and Oracle - Bootstrap challenges

So, whilst running the bootstrap command: -

/opt/ibm/WebSphereProfiles/Dmgr01/bin/bootstrapProcessServerData.sh -clusterName AppCluster

I saw this: -

java.lang.Exception: java.lang.reflect.InvocationTargetException
Caused by: java.lang.reflect.InvocationTargetException
Caused by: java.lang.IllegalStateException: Failed to initialize registry
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'handlersMap': Cannot create inner bean 'com.lombardisoftware.server.ejb.persistence.PSDefaultHandler#f4e9076' of type [com.lombardisoftware.server.ejb.persistence.PSDefaultHandler] while setting bean property 'sourceMap' with key [TypedStringValue: value [Branch], target type [null]]; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'com.lombardisoftware.server.ejb.persistence.PSDefaultHandler#f4e9076' defined in class path resource [registry.persistence.xml]: Cannot resolve reference to bean 'dao.branch' while setting constructor argument; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'dao.branch' defined in class path resource [registry.persistence.xml]: Instantiation of bean failed; nested exception is java.lang.ExceptionInInitializerError

etc.

So I checked the log: -

view /opt/ibm/WebSphereProfiles/Dmgr01/logs/bootstrapProcesServerData.AppCluster.log

and saw this: -

[11/07/18 12:20:27:801 BST] 00000001 ProviderTrack I com.ibm.ffdc.osgi.ProviderTracker AddingService FFDC1007I: FFDC Provider Installed: com.ibm.ffdc.util.provider.FfdcOnDirProvider@cfa421b1
[11/07/18 12:20:28:470 BST] 00000001 LocalCache    I   CWLLG2155I:  Cache settings read have been from file file:////opt/ibm/WebSphere/AppServer/BPM/Lombardi/process-server/twinit/lib/basic_resources.jar!/LombardiTeamWorksCache.xml.
[11/07/18 12:20:28:768 BST] 00000001 XmlBeanDefini I org.springframework.beans.factory.xml.XmlBeanDefinitionReader loadBeanDefinitions Loading XML bean definitions from class path resource [org/springframework/jdbc/support/sql-error-codes.xml]
[11/07/18 12:20:28:784 BST] 00000001 SQLErrorCodes I org.springframework.jdbc.support.SQLErrorCodesFactory <init> SQLErrorCodes loaded: [DB2, Derby, H2, HSQL, Informix, MS-SQL, MySQL, Oracle, PostgreSQL, Sybase]
[11/07/18 12:20:28:787 BST] 00000001 SQLErrorCodes W org.springframework.jdbc.support.SQLErrorCodesFactory getErrorCodes Error while extracting database product name - falling back to empty error codes
                                 org.springframework.jdbc.support.MetaDataAccessException: Error while extracting DatabaseMetaData; nested exception is org.apache.commons.dbcp.SQLNestedException: Cannot create PoolableConnectionFactory, cause: IO Error: The Network Adapter could not establish the connection


I checked that the Oracle 12c listener appeared to be running: -

netstat -aon|grep LISTEN

tcp        0      0 127.0.0.1:1521          0.0.0.0:*               LISTEN      off (0.00/0/0)
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      off (0.00/0/0)
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      off (0.00/0/0)
tcp6       0      0 :::36875                :::*                    LISTEN      off (0.00/0/0)
tcp6       0      0 :::22                   :::*                    LISTEN      off (0.00/0/0)
tcp6       0      0 ::1:25                  :::*                    LISTEN      off (0.00/0/0)
tcp6       0      0 :::5500                 :::*                    LISTEN      off (0.00/0/0)


lsnrctl status listener

LSNRCTL for Linux: Version 12.2.0.1.0 - Production on 11-JUL-2018 12:42:50

Copyright (c) 1991, 2016, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 12.2.0.1.0 - Production
Start Date                10-JUL-2018 20:22:19
Uptime                    0 days 16 hr. 20 min. 30 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /home/oracle/app/oracle/product/12.2.0/dbhome_1/network/admin/listener.ora
Listener Log File         /home/oracle/app/oracle/diag/tnslsnr/bpm/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=bpm.uk.ibm.com)(PORT=5500))(Security=(my_wallet_directory=/home/oracle/app/oracle/admin/orcl/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "orcl.uk.ibm.com" has 1 instance(s).
  Instance "orcl", status READY, has 1 handler(s) for this service...
Service "orclXDB.uk.ibm.com" has 1 instance(s).
  Instance "orcl", status READY, has 1 handler(s) for this service...
The command completed successfully


Notice the problem ?

Both the netstat and the lsnrctl commands show that the listener is "bound" to 127.0.0.1 rather than to the server's hostname.
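
Spotting this kind of mis-binding can itself be scripted, by parsing the Local Address column of the netstat output. A rough sketch, fed with the netstat line from above; the helper name is mine: -

```python
def bind_address(netstat_output, port):
    """Find the local address a TCP port is bound to in 'netstat -aon' output."""
    for line in netstat_output.splitlines():
        fields = line.split()
        # Column 4 of a netstat line is the local address, e.g. 127.0.0.1:1521
        if len(fields) > 3 and fields[3].endswith(":%d" % port):
            return fields[3].rsplit(":", 1)[0]
    return None

sample = "tcp        0      0 127.0.0.1:1521          0.0.0.0:*               LISTEN      off (0.00/0/0)"
print(bind_address(sample, 1521))  # 127.0.0.1
```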

So I updated listener.ora to reference the server's host name instead of 127.0.0.1, and then bounced the listener: -

lsnrctl stop LISTENER

LSNRCTL for Linux: Version 12.2.0.1.0 - Production on 11-JUL-2018 12:44:58

Copyright (c) 1991, 2016, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=bpm.uk.ibm.com)(PORT=1521)))
TNS-12541: TNS:no listener
 TNS-12560: TNS:protocol adapter error
  TNS-00511: No listener
   Linux Error: 111: Connection refused
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1521)))
The command completed successfully


 lsnrctl start LISTENER

LSNRCTL for Linux: Version 12.2.0.1.0 - Production on 11-JUL-2018 12:45:09

Copyright (c) 1991, 2016, Oracle.  All rights reserved.

Starting /home/oracle/app/oracle/product/12.2.0/dbhome_1/bin/tnslsnr: please wait...

TNSLSNR for Linux: Version 12.2.0.1.0 - Production
System parameter file is /home/oracle/app/oracle/product/12.2.0/dbhome_1/network/admin/listener.ora
Log messages written to /home/oracle/app/oracle/diag/tnslsnr/bpm/listener/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=bpm.uk.ibm.com)(PORT=1521)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=bpm.uk.ibm.com)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 12.2.0.1.0 - Production
Start Date                11-JUL-2018 12:45:09
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /home/oracle/app/oracle/product/12.2.0/dbhome_1/network/admin/listener.ora
Listener Log File         /home/oracle/app/oracle/diag/tnslsnr/bpm/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=bpm.uk.ibm.com)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
The listener supports no services
The command completed successfully


I then saw this: -

ORA-12514, TNS:listener does not currently know of service requested in connect descriptor

which I solved via a previous blog post: -

So, the conclusion is that the Listener needs to know / care about the host name of the box: -

  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=bpm.uk.ibm.com)(PORT=1521)))

whereas the BPM -> Oracle connectivity needs to know / care about the Oracle Service Name: -

sqlplus / as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Wed Jul 11 15:06:23 2018

Copyright (c) 1982, 2016, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> select value from v$parameter where name='service_names';

VALUE
--------------------------------------------------------------------------------
orcl.uk.ibm.com

SQL>
exit

so the JDBC data sources need to look like this: -


This needs to be reflected in the BPM Deployment Environment properties file: -

bpm.de.db.1.databaseName=orcl.uk.ibm.com
bpm.de.db.2.databaseName=orcl.uk.ibm.com
bpm.de.db.3.databaseName=orcl.uk.ibm.com
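
For reference, the service-name form of the Oracle thin JDBC URL puts a slash before the service name, whereas the older SID form uses a colon. A sketch of building it, using the host/port/service values from this environment; the helper name is mine: -

```python
def oracle_service_url(host, port, service_name):
    """Build an Oracle thin JDBC URL in the service-name form:
    jdbc:oracle:thin:@//host:port/service (versus @host:port:SID for a SID)."""
    return "jdbc:oracle:thin:@//%s:%d/%s" % (host, port, service_name)

print(oracle_service_url("bpm.uk.ibm.com", 1521, "orcl.uk.ibm.com"))
# jdbc:oracle:thin:@//bpm.uk.ibm.com:1521/orcl.uk.ibm.com
```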

So now we have this: -

/opt/ibm/WebSphereProfiles/Dmgr01/bin/bootstrapProcessServerData.sh -clusterName AppCluster

Bootstraping data into cluster AppCluster and logging into /opt/ibm/WebSphereProfiles/Dmgr01/logs/bootstrapProcesServerData.AppCluster.log

WASX7357I: By request, this scripting client is not connected to any server process. Certain configuration and application operations will be available in local mode.
'BootstrapProcessServerData admin command completed successfully.....'

Saturday, 7 July 2018

IBM AppConnect and DB2 and SQL0530N

I'm fiddling about with IBM AppConnect Professional ( formerly known as CastIron ), looking at the integration between a flow running on AppConnect, hosted on the IBM Cloud ( née Bluemix ), and a DB2 database running on a VM on my Mac.

I'll be writing another blog post about the actual integration, including the Secure Gateway later.

Meantime, I wanted to test my flow, which should be monitoring a table for changes.

I did this by inserting a new row into the EMPLOYEE table of the SAMPLE database ( which has been around since I worked on DB2/400 in the mid-90s ).

This is what that table looks like: -

db2 describe "select * from DB2INST1.EMPLOYEE"

 Column Information

 Number of columns: 14

 SQL type              Type length  Column name                     Name length
 --------------------  -----------  ------------------------------  -----------
 452   CHARACTER                 6  EMPNO                                     5
 448   VARCHAR                  12  FIRSTNME                                  8
 453   CHARACTER                 1  MIDINIT                                   7
 448   VARCHAR                  15  LASTNAME                                  8
 453   CHARACTER                 3  WORKDEPT                                  8
 453   CHARACTER                 4  PHONENO                                   7
 385   DATE                     10  HIREDATE                                  8
 453   CHARACTER                 8  JOB                                       3
 500   SMALLINT                  2  EDLEVEL                                   7
 453   CHARACTER                 1  SEX                                       3
 385   DATE                     10  BIRTHDATE                                 9
 485   DECIMAL                9, 2  SALARY                                    6
 485   DECIMAL                9, 2  BONUS                                     5
 485   DECIMAL                9, 2  COMM                                      4


This is what I ran: -

db2 "INSERT INTO DB2INST1.EMPLOYEE   VALUES('000001','Dave','M','Hay','ABC','2122','30/10/1999','Guru',18,'F','30/10/1973',1234.89,1234.89,1221.89)"

which returned: -

DB21034E  The command was processed as an SQL statement because it was not a 
valid Command Line Processor command.  During SQL processing it returned:
SQL0530N  The insert or update value of the FOREIGN KEY 
"DB2INST1.EMPLOYEE.RED" is not equal to any value of the parent key of the 
parent table.  SQLSTATE=23503

which baffled me somewhat.

I dug into the database further: -

db2look -d sample -e -t db2inst1.employee

CREATE TABLE "DB2INST1"."EMPLOYEE"  (
  "EMPNO" CHAR(6 OCTETS) NOT NULL , 
  "FIRSTNME" VARCHAR(12 OCTETS) NOT NULL , 
  "MIDINIT" CHAR(1 OCTETS) , 
  "LASTNAME" VARCHAR(15 OCTETS) NOT NULL , 
  "WORKDEPT" CHAR(3 OCTETS) , 
  "PHONENO" CHAR(4 OCTETS) , 
  "HIREDATE" DATE , 
  "JOB" CHAR(8 OCTETS) , 
  "EDLEVEL" SMALLINT NOT NULL , 
  "SEX" CHAR(1 OCTETS) , 
  "BIRTHDATE" DATE , 
  "SALARY" DECIMAL(9,2) , 
  "BONUS" DECIMAL(9,2) , 
  "COMM" DECIMAL(9,2) )   
 IN "USERSPACE1"  
 ORGANIZE BY ROW; 


-- DDL Statements for Primary Key on Table "DB2INST1"."EMPLOYEE"

ALTER TABLE "DB2INST1"."EMPLOYEE" 
ADD CONSTRAINT "PK_EMPLOYEE" PRIMARY KEY
("EMPNO");



-- DDL Statements for Indexes on Table "DB2INST1"."EMPLOYEE"

SET SYSIBM.NLS_STRING_UNITS = 'SYSTEM';

CREATE INDEX "DB2INST1"."XEMP2" ON "DB2INST1"."EMPLOYEE" 
("WORKDEPT" ASC)

COMPRESS NO 
INCLUDE NULL KEYS ALLOW REVERSE SCANS;
-- DDL Statements for Aliases based on Table "DB2INST1"."EMPLOYEE"

CREATE ALIAS "DB2INST1"."EMP" FOR TABLE "DB2INST1"."EMPLOYEE";


-- DDL Statements for Foreign Keys on Table "DB2INST1"."EMPLOYEE"

ALTER TABLE "DB2INST1"."EMPLOYEE" 
ADD CONSTRAINT "RED" FOREIGN KEY
("WORKDEPT")
REFERENCES "DB2INST1"."DEPARTMENT"
("DEPTNO")
ON DELETE SET NULL
ON UPDATE NO ACTION
ENFORCED
ENABLE QUERY OPTIMIZATION;
...

which showed me the error of my ways.

In essence, the WORKDEPT column is actually keyed against a different table: -

db2 describe "select * from DB2INST1.DEPARTMENT"

 Column Information

 Number of columns: 5

 SQL type              Type length  Column name                     Name length
 --------------------  -----------  ------------------------------  -----------
 452   CHARACTER                 3  DEPTNO                                    6
 448   VARCHAR                  36  DEPTNAME                                  8
 453   CHARACTER                 6  MGRNO                                     5
 452   CHARACTER                 3  ADMRDEPT                                  8
 453   CHARACTER                16  LOCATION                                  8

db2 "select * from DB2INST1.DEPARTMENT"

DEPTNO DEPTNAME                             MGRNO  ADMRDEPT LOCATION        
------ ------------------------------------ ------ -------- ----------------
A00    SPIFFY COMPUTER SERVICE DIV.         000010 A00      -               
B01    PLANNING                             000020 A00      -               
C01    INFORMATION CENTER                   000030 A00      -               
D01    DEVELOPMENT CENTER                   -      A00      -               
D11    MANUFACTURING SYSTEMS                000060 D01      -               
D21    ADMINISTRATION SYSTEMS               000070 D01      -               
E01    SUPPORT SERVICES                     000050 A00      -               
E11    OPERATIONS                           000090 E01      -               
E21    SOFTWARE SUPPORT                     000100 E01      -               
F22    BRANCH OFFICE F2                     -      E01      -               
G22    BRANCH OFFICE G2                     -      E01      -               
H22    BRANCH OFFICE H2                     -      E01      -               
I22    BRANCH OFFICE I2                     -      E01      -               
J22    BRANCH OFFICE J2                     -      E01      -               

  14 record(s) selected.


My insert: -

db2 "INSERT INTO DB2INST1.EMPLOYEE   VALUES('000001','Dave','M','Hay','ABC','2122','30/10/1999','Guru',18,'F','30/10/1973',1234.89,1234.89,1221.89)"

is using a DIFFERENT and NON-EXISTENT code ( ABC ) for WORKDEPT.
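What DB2 is enforcing here can be sketched in a few lines of Python - purely an illustration of the rule, using the DEPTNO values listed above, not anything DB2 actually runs: -

```python
# Illustration only: the rule that the "RED" constraint enforces.
# A child row's WORKDEPT must either be NULL or match an existing
# DEPTNO in DB2INST1.DEPARTMENT.
valid_deptnos = {
    "A00", "B01", "C01", "D01", "D11", "D21", "E01",
    "E11", "E21", "F22", "G22", "H22", "I22", "J22",
}

def fk_check(workdept):
    """Return True if an insert with this WORKDEPT would satisfy the constraint."""
    return workdept is None or workdept in valid_deptnos

print(fk_check("ABC"))  # False - the failing insert ( SQL0530N )
print(fk_check("A00"))  # True  - the corrected insert
```

Note that the constraint is defined ON DELETE SET NULL, which is why a NULL WORKDEPT is also acceptable.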

I changed my insert to: -

db2 "INSERT INTO DB2INST1.EMPLOYEE   VALUES('000001','Dave','M','Hay','A00','2122','30/10/1999','Guru',18,'F','30/10/1973',1234.89,1234.89,1221.89)"

and it all worked: -

DB20000I  The SQL command completed successfully.

Yay !

IBM API Connect V2018.3.1 is available

This just in: -

IBM API Connect V2018.3.1 is now available. This update includes important internal development fixes and support for the API Designer as part of the toolkit.

Content

IBM API Connect 2018.x delivers enhanced capabilities for the market-leading IBM API management solution. In addition to the ability to deploy in complex, multi-cloud topologies, this version provides enhanced experiences for developers and cloud administrators at organizations.

The API Connect 2018.3.1 update includes important internal development fixes. In addition, this release includes the API Designer within the toolkit. API developers use the API management functions in the API Designer or the CLI to create draft API definitions for REST and SOAP APIs, or for OAuth provider endpoints that are used for OAuth 2.0 authentication. The API definitions can be configured to add the API to a Product, add a policy assembly flow (to manipulate requests/responses), and to define security options and other settings. APIs can then be tested locally prior to publishing, to ensure they are defined and implemented correctly.

Upgrading to 2018.3.1 makes changes to the underlying data structure of API Connect.  It is highly recommended to have automatic backups configured in your environment and at least one successful backup complete prior to performing this upgrade.

We advise all users of IBM API Connect 2018.1.x and earlier versions of IBM API Connect 2018.2.x to install this update to take advantage of the fixes.

Friday, 6 July 2018

Doofus Alert - Using Cloudant queries via cURL

I'm continuing to tinker with Cloudant, and am looking at how I can use indexes ( indices ? ) via the command-line using cURL.

This is what I'm sending: -

curl -X POST -H 'Content-type: application/json' -g $COUCH_URL/$COUCH_DATABASE/_find -d query.json

and this is what I'm seeing: -

{"error":"bad_request","reason":"invalid UTF-8 JSON"}

I checked my query: -

cat query.json 

{
   "selector": {
      "$or": [
         {
            "givenName": "Maggie"
         },
         {
            "givenName": "Lisa"
         }
      ]
   },
   "fields": [
      "givenName",
      "familyName"
   ],
   "sort": [
      {
         "givenName": "asc"
      }
   ]
}

and it looks OK.

Thankfully, this chap had already hit the same: -



Yep, that's exactly where I went wrong ….
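The failure mode is easy to reproduce without Cloudant at all: with -d query.json, curl sends the literal string "query.json" as the request body, whereas -d @query.json sends the contents of the file. A quick Python illustration ( the query string below is a stand-in for the real file ): -

```python
import json

# What Cloudant received with `-d query.json`: the literal file NAME.
try:
    json.loads("query.json")
    body_was_valid = True
except json.JSONDecodeError:
    body_was_valid = False
print(body_was_valid)  # False - hence "invalid UTF-8 JSON"

# What it receives with `-d @query.json`: the file CONTENTS.
file_contents = '{"selector": {"givenName": "Maggie"}}'  # stand-in
print(json.loads(file_contents)["selector"])  # {'givenName': 'Maggie'}
```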

I changed my query: -

curl -X POST -H 'Content-type: application/json' -g $COUCH_URL/$COUCH_DATABASE/_find -d @query.json

and … guess what ….

IT WORKED

{"docs":[
{"givenName":"Lisa","familyName":"Simpson"},
{"givenName":"Maggie","familyName":"Simpson"}
],
"bookmark": "g1AAAAA9eJzLYWBgYMpgSmHgKy5JLCrJTq2MT8lPzkzJBYozGoIkOGASOSAhkDibb2J6emZqVhYA5ooQDg"}

Can you say "Doofus" ? I bet you can ….

Thursday, 5 July 2018

Note to self - setting AND keeping alternate boot device on macOS

Having added a shiny new 256 GB SSD drive to my Mac Mini ( this is a USB3 device as I didn't fancy opening up the Mini and replacing the in-built Fusion drive ), I needed a way to make the drive bootable.

I'd already used SuperDuper to clone the old drive to the new drive.

I just needed to work out how to (a) boot from it and (b) make the new drive the main drive.

This gave me the answers: -


Specifically this: -


Note this subtlety: -


Yes, it's all very well booting from the SSD, but no good if it then reverts back to the "spinning rust" that is the Fusion drive ( yes, I know it's a mix of disk and SSD ).

However, the other issue that I faced was that my Bluetooth keyboard ( connected via an external USB Bluetooth dongle ) did NOT allow me to press [Option] during the boot process.

This came to the rescue: -


however, if you use Apple's Bluetooth keyboard, you could find that the system may ignore these inputs and boot normally. While you might assume that these options require a USB keyboard or other physical connection …

Thankfully I had a wired USB keyboard, so I used that ….

The article does offer some other guidance: -

If any inputs are being sent via the Bluetooth keyboard before the controllers are active, then they will not be recognized by the system. However, if these inputs are performed after the controllers are activated, then they will be properly read. Therefore, for Bluetooth keyboards, be sure to press the desired key sequences after you hear the boot chimes and not before.

which is nice.

So I'm now booting from USB/SSD and the 2014 Mac Mini is suddenly WAY faster !

Cloudant - Fun with Indexing and Querying

So I was trying to resolve an issue for a colleague, who needed to use an $or operator.

He found that his query would take a very long time ( minutes ) and fail to return any results, searching through ~500K documents.

I tested the problem and, eventually, the solution, using my own data set: -

id,givenName,familyName
1,Maggie,Simpson
2,Lisa,Simpson
3,Bart,Simpson
4,Homer,Simpson
5,Fred,Flintstone
6,Wilma,Flintstone
7,Barney,Rubble
8,Betty,Rubble


In Cloudant, each document looks like this: -

{
  "_id": "1",
  "_rev": "1-0152a3e6867ad34da6e882a80f0fbeff",
  "id": "1",
  "givenName": "Maggie",
  "familyName": "Simpson"
}

{
  "_id": "2",
  "_rev": "1-6bbb94301323a3c3f6ff54f1c3c765e5",
  "id": "2",
  "givenName": "Lisa",
  "familyName": "Simpson"
}

etc.

So this was the query I was using: -

{
  "selector": {
     "familyName": "Simpson",
     "givenName": {
        "$or": [
           {
              "givenName": "Maggie"
           },
           {
              "givenName": "Lisa"
           }
        ]
     }
  },
  "fields": [
     "givenName",
     "familyName"
  ],
  "sort": [
     {
        "givenName": "asc"
     }
  ]
}

In my simple brain, this would return documents for Maggie and Lisa, out of the eight in my database.

I'd previously created this index: -

{
   "index": {
      "fields": [
         "givenName"
      ]
   },
   "name": "givenName-json-index",
   "type": "json"
}
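For completeness, that index had been created by POSTing its definition to the database's _index endpoint. A sketch of building the payload - the curl line in the comment uses the same environment variables as elsewhere in this post: -

```python
import json

# The index definition, exactly as shown above.
index_def = {
    "index": {"fields": ["givenName"]},
    "name": "givenName-json-index",
    "type": "json",
}

payload = json.dumps(index_def)
print(payload)

# This payload would then be POSTed to the _index endpoint, e.g.: -
#   curl -X POST -H 'Content-type: application/json' \
#        $COUCH_URL/$COUCH_DATABASE/_index -d "$payload"
```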

When I ran my query, I got nothing back: -


apart from this statistic: 


Thankfully I found a smart person on our Cloudant Slack channel, who told me where I was going wrong: -


So I changed my query: -

{
   "selector": {
      "$or": [
         {
            "givenName": "Maggie"
         },
         {
            "givenName": "Lisa"
         }
      ]
   },
   "fields": [
      "givenName",
      "familyName"
   ],
   "sort": [
      {
         "givenName": "asc"
      }
   ]
}

and now I see data: -


Yay!
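As an aside, the semantics of the working selector - $or at the TOP level, each clause naming the field it matches on - can be mimicked in-memory ( an illustration only; the real matching happens server-side in Cloudant ): -

```python
# The same eight test documents, reduced to the projected fields.
docs = [
    {"givenName": "Maggie", "familyName": "Simpson"},
    {"givenName": "Lisa", "familyName": "Simpson"},
    {"givenName": "Bart", "familyName": "Simpson"},
    {"givenName": "Homer", "familyName": "Simpson"},
    {"givenName": "Fred", "familyName": "Flintstone"},
    {"givenName": "Wilma", "familyName": "Flintstone"},
    {"givenName": "Barney", "familyName": "Rubble"},
    {"givenName": "Betty", "familyName": "Rubble"},
]

# $or is a LIST of clauses, each naming the field it matches on;
# it sits at the top of the selector, not nested inside a field.
or_clauses = [{"givenName": "Maggie"}, {"givenName": "Lisa"}]

matches = [d for d in docs
           if any(all(d.get(k) == v for k, v in clause.items())
                  for clause in or_clauses)]
matches.sort(key=lambda d: d["givenName"])  # "sort": [{"givenName": "asc"}]
print(matches)  # Lisa, then Maggie
```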



This was also very useful: -


Following up ... defining K8S Services using YAML

As a follow-up to this: - Playing with Kubernetes deployments and NodePort services, life is SO much easier if I choose to define the service...
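The linked post isn't reproduced here, but a minimal NodePort Service definition of the kind it describes might look like this ( all names and port numbers are illustrative ): -

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service          # illustrative name
spec:
  type: NodePort
  selector:
    app: my-app             # must match the Deployment's pod labels
  ports:
    - port: 80              # cluster-internal port
      targetPort: 8080      # container port
      nodePort: 30080       # externally reachable port on each node
```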