Thursday 13 April 2017

JMSMQ1112: The operation for a domain specific object was not valid. The operation 'createProducer()' is not valid for type 'com.ibm.mq.jms.MQQueue'

We saw this exception today: -

Caused by: com.ibm.msg.client.jms.DetailedJMSException: JMSMQ1112: The operation for a domain specific object was not valid. The operation 'createProducer()' is not valid for type 'com.ibm.mq.jms.MQQueue'. A JMS application attempted to perform an operation on domain specific object, but the operation is valid only for the other messaging domain. Make sure that the JMS objects and operations used by your application are relevant for the required messaging domain. If your application uses both messaging domains, consider using domain independent objects throughout the application.

whilst testing an application that had been migrated from WebSphere Application Server (WAS) Network Deployment (ND) 7.0.0.27 to WAS ND 8.5.5.10.

The application or, more specifically, the Message Driven Bean (MDB) is activated ( woken up ) by a Message being placed onto a Queue on a WebSphere MQ Queue Manager, using a JMS Activation Specification and a JMS Queue.

The JMS artefacts act as a buffer ( more accurately, an abstraction ) between the actual Java code and the WebSphere MQ configuration.

This means that the developer merely needs to write their code against a JMS queue alias, e.g. jms/OutputQ, rather than writing WebSphere MQ client code directly within their Java code.

The actual awakening of the MDB occurs via the Activation Specification, rather than in the Java code itself.
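By way of illustration, a minimal sketch of this kind of MDB might look like the following. The jms/OutputQ alias matches the one above; the jms/OutputCF connection factory name and the forwarding logic are purely made up for the example: -

import javax.annotation.Resource;
import javax.ejb.MessageDriven;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

@MessageDriven
public class OutputForwarderMDB implements MessageListener {

    // Domain-independent connection factory, administered in WAS ( illustrative name )
    @Resource(name = "jms/OutputCF")
    private ConnectionFactory connectionFactory;

    // The JMS Queue alias; WAS maps this onto the real WebSphere MQ queue
    @Resource(name = "jms/OutputQ")
    private Queue outputQueue;

    @Override
    public void onMessage(Message message) {
        try {
            Connection connection = connectionFactory.createConnection();
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                // Session.createProducer(Destination) is the domain-independent JMS 1.1
                // call that the JMSMQ1112 exception above is complaining about
                MessageProducer producer = session.createProducer(outputQueue);
                producer.send(message);
            } finally {
                connection.close();
            }
        } catch (JMSException e) {
            throw new RuntimeException("Failed to forward the incoming message", e);
        }
    }
}

Note that the MDB is woken up by the Activation Specification; the code only ever talks to the JMS abstractions, never to WebSphere MQ directly.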

Anyway, the incoming Message was awakening the MDB, proving that the WAS to MQ configuration was in order i.e. that there wasn't a problem with the "plumbing" - Queue Manager, Channel, Authentication etc.

And yet ….

I pinged the question to one of my fellow gurus, and he suggested checking the Class Loader.

In parallel, the application developer dug through the code and a trace ( captured thanks to this MustGather: MQ Java Message Service (JMS) problems with WebSphere Application Server ), and came to the same conclusion.

He suspected that the underlying WAS code was actually checking whether the target JMS Queue ( jms/OutputQ ) was really an MQ Queue ( as configured in WAS using the WebSphere MQ Resource Adapter ).

We checked the application ( the Enterprise Archive (EAR) that included the MDB ), and realised that the Class Loader Order differed between the old environment ( WAS 7 ) and the new environment ( WAS 8.5.5 ): the EAR was set to Parent First on WAS 7, but Parent Last on WAS 8.5.5.

Once we changed the EAR from Parent Last to Parent First, which automatically caused the application to restart, the problem disappeared.

The developer checked and realised that they were packaging JMS 1.1 classes within the EAR file.

This meant that, in the WAS 8.5.5 environment, the JMS classes were being loaded from the EAR rather than from the WAS JVM itself.
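If you want to prove where the JMS classes are actually coming from, a quick diagnostic along these lines ( dropped temporarily into the MDB or a servlet, as it only makes sense when run inside the server ) will tell you - again, just a sketch: -

public class WhereIsJms {

    // Report which class loader, and which code source ( JAR / location ), supplied a class
    public static void report(Class<?> clazz) {
        // getCodeSource() can legitimately be null for classes supplied by the JVM itself
        java.security.CodeSource source = clazz.getProtectionDomain().getCodeSource();
        System.out.println(clazz.getName()
                + " loaded by " + clazz.getClassLoader()
                + " from " + (source == null ? "the JVM / container" : source.getLocation()));
    }

    public static void main(String[] args) {
        report(javax.jms.Queue.class);
        report(javax.jms.Session.class);
    }
}

With Parent Last in effect, we'd expect the reported location to point inside the EAR; with Parent First, it should point at the JARs shipped with WAS.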

Whilst Parent Last is NOT necessarily a problem, it was an issue here because the JMS classes were being loaded TWICE - once by the WAS runtime itself, and once from within the EAR.
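To illustrate why loading the same classes twice matters, here's a standalone sketch ( plain Java, nothing to do with WAS or MQ, and the JAR path and class name are invented ) showing that the "same" class loaded by two different class loaders is, as far as the JVM is concerned, two different types - which is exactly why instanceof and cast checks start failing: -

import java.net.URL;
import java.net.URLClassLoader;

public class TwoLoaders {

    public static void main(String[] args) throws Exception {
        // Hypothetical JAR containing com.example.Widget - any class will do
        URL[] jar = { new URL("file:/tmp/widget.jar") };

        // parent = null, so neither loader delegates upwards - two independent loaders
        ClassLoader first = new URLClassLoader(jar, null);
        ClassLoader second = new URLClassLoader(jar, null);

        Class<?> a = first.loadClass("com.example.Widget");
        Class<?> b = second.loadClass("com.example.Widget");

        System.out.println(a == b);               // false: same name, different class loaders

        Object widget = a.getDeclaredConstructor().newInstance();
        System.out.println(b.isInstance(widget)); // false: instanceof fails across loaders
    }
}

That, presumably, is why the check described above ( is jms/OutputQ really an MQ Queue ? ) stopped behaving once the JMS interfaces were coming from the EAR rather than from WAS itself.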

Nice :-)

