OSB MQ Transport Tuning – Proxy Service threads

The MQ Transport is a polling transport. At the user-defined interval, a polling thread, fired by a timer, checks a designated queue for new messages to process. If messages are found, a number of worker-thread requests are scheduled for execution via the WebLogic work scheduler. Each of these worker threads gets messages from the queue and initiates the processing of a proxy service.
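This timer-driven polling behavior can be pictured roughly as follows. This is a simplified, hypothetical sketch, not the actual OSB transport internals; the class, the fixed pool standing in for the work manager, and the depth field are all illustrative.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch only: a timer-fired poller that checks the queue depth
// at each tick and schedules one worker task per pending message, the way the
// MQ transport's polling thread hands work requests to the work scheduler.
public class MqPollerSketch {
    final ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
    final ExecutorService workers = Executors.newFixedThreadPool(16); // stand-in for the work manager
    final AtomicInteger processed = new AtomicInteger();
    volatile int queueDepth; // stand-in for querying the real MQ queue depth

    public void start(long pollIntervalMillis) {
        timer.scheduleAtFixedRate(this::poll, 0, pollIntervalMillis, TimeUnit.MILLISECONDS);
    }

    void poll() {
        int depth = queueDepth; // hypothetical: ask MQ how many messages are waiting
        queueDepth = 0;
        for (int i = 0; i < depth; i++) {
            // In the real transport each worker gets a message and runs the proxy;
            // here we just count completions.
            workers.submit(processed::incrementAndGet);
        }
    }

    public void shutdown() {
        timer.shutdown();
        workers.shutdown();
        try {
            workers.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```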

The number of worker threads to schedule is based on the following factors:

The queue depth
The number of managed servers
The number of worker threads currently executing for this proxy service
The max number of threads defined for the work manager associated with the proxy service
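The exact sizing formula is not published, but a hypothetical calculation consistent with the four factors above might bound the batch like this. The method name, the per-server share heuristic, and the parameter names are assumptions for illustration only.

```java
// Hypothetical sketch: sizing the batch of worker-thread requests from the
// factors listed above. This is NOT the documented OSB formula; it only
// illustrates how the work manager's max threads constraint caps scheduling.
public class WorkerCount {
    /**
     * @param queueDepth     messages currently on the queue
     * @param managedServers managed servers sharing the polling load
     * @param executing      worker threads already running for this proxy service
     * @param maxThreads     max threads constraint of the work manager (16 by default)
     * @return number of new worker requests to schedule on this server
     */
    public static int toSchedule(int queueDepth, int managedServers, int executing, int maxThreads) {
        int share = (int) Math.ceil((double) queueDepth / managedServers); // this server's share
        int headroom = Math.max(0, maxThreads - executing);                // respect the constraint
        return Math.min(share, headroom);
    }
}
```

With the untuned default of 16, a deep queue on a single server schedules at most 16 workers no matter how many messages are waiting.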

If a work manager is not assigned to the proxy service, it uses the WLS Default work manager; if no Max Threads Constraint has been defined for the Default work manager, a default value of 16 is used. Therefore, without any tuning, at most 16 threads will be concurrently processing for a given MQ proxy service. To change this, define a new work manager with the desired Max Threads Constraint and assign it to the proxy service via its Dispatch Policy setting.
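One way to define such a work manager is in the domain's config.xml (it can equally be done through the Administration Console). The names, target, and count below are examples, not required values:

```xml
<!-- Sketch of a config.xml fragment; names and the count are illustrative. -->
<self-tuning>
  <max-threads-constraint>
    <name>MQMaxThreads</name>
    <target>osb_cluster</target>
    <count>32</count>
  </max-threads-constraint>
  <work-manager>
    <name>MQProxyWorkManager</name>
    <target>osb_cluster</target>
    <max-threads-constraint>MQMaxThreads</max-threads-constraint>
  </work-manager>
</self-tuning>
```

The proxy service's dispatch policy would then reference the work manager by name (here, MQProxyWorkManager).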


  1. Abhi Porwal says:

    Hi Mike,

    I have a query. On WebLogic 10.3.2 we had 4 MDBs connected to WebSphere MQ, with 8 managed servers in a cluster and the 4 MDBs deployed to the cluster. By that arithmetic it should create 16 * 8 * 4 = 512 listeners (connections) in WMQ. But it did not, since MaxChannels in qm.ini was only 300.

    We have now upgraded to WebLogic 10.3.6 with the same setup. When we deploy the MDBs, they connect on some servers but not on others, and the reason I found is that MaxChannels is 300 while 512 channels are now needed.

    I understand the 10.3.6 behavior is correct per the WebLogic documentation, but my question is: how was it working fine in 10.3.2, with all MDBs connecting to all the managed servers?
