Enabling Debug Logging for Managed Servers in Oracle Service Bus 12c

In 12c, changing the logging settings for Service Bus is done via the EM console. However, this only works for the Admin server. If you’re using a clustered domain and need to enable debug on one or more managed servers, you must update the log configuration by hand.  This is true for Service Bus whether […]

Configuring SOACS for BPEL invocations of OAuth2 protected services

OAuth2 has become increasingly popular for authorizing access to web services.  Invoking these services is possible by applying Oracle Web Services Manager (OWSM) policies to the component in the composite.  OWSM actually acquires the OAuth Access Token and includes it in the HTTP request to the resource server for you automatically.  This blog describes the […]

Poller Transport Based Service Management Scripts

The polling transports (Email, File, FTP) in Service Bus only poll on one managed server for a given service, which is defined in the service configuration.  However, if the polling managed server is not running, polling is not resumed automatically on another managed server in the cluster. The zip file below contains a collection of […]

OSB MQ Transport Tuning – Proxy Service threads

The MQ Transport is a polling transport.  At the user defined interval, a polling thread, fired by a timer, checks for new messages to process on a designated queue.  If messages are found, a number of worker thread requests are scheduled to execute via the WebLogic work scheduler.  Each of these worker threads will get […]

OSB Http Transport Client Certificate Authentication Common Pitfall

I recently worked with a customer to help them resolve some issues they were having with configuring client certificate authentication (2-way SSL) for an Http Business Service in Oracle Service Bus (OSB).  This blog discusses a common issue encountered and how to fix it. The customer’s use case was to invoke a service […]

OSB Threading and the HTTP Transport White Paper

I have created a white paper explaining the OSB threading model with a focus on the HTTP transport.  I have heard from several customers who have experienced difficulty with tuning HTTP services in relation to the use of work managers.  This paper’s goal is to explain the threads involved in servicing a proxy and how […]

Oracle Service Bus JMS Deployments Utility

For proxy services utilizing the JMS transport, OSB receives messages from destinations by using an MDB.  These MDBs get generated and deployed during activation of the service configuration.  OSB creates a random, unique name for the J2EE application that gets deployed to WLS.  The name starts with “_ALSB_” and ends in a unique series of […]

OSB Performance Tuning – RouterRuntimeCache

Many customers start out with smaller projects for an initial release.  Typically, these applications require 20-30 proxy services.  But as time goes on and later phases of the project roll out, the number of proxy services can increase drastically.  The RouterRuntimeCache is a cache implemented by OSB to improve performance by eliminating or reducing the time spent compiling proxy pipelines.

By default, OSB does not compile a pipeline until a request message for a given service is received.  Once it has been compiled, the pipeline is cached in memory for re-use.  You have probably noticed in testing that the first request to a service takes longer to respond than subsequent requests, and this is a big part of the reason.  Since free heap space is often at a premium, this cache cannot grow without bound, so it has a built-in size limit.  When the cache is full, the least recently used entry is released and the pipeline currently being requested takes its place.  The next time a request comes in for the service whose pipeline was released, that pipeline has to be re-compiled and placed in the cache, again forcing out the least recently used pipeline.  Once a pipeline is placed in the cache, it is never removed unless forced out by a full cache as above, or unless the service is updated, forcing it to be recompiled.
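The eviction behavior described above is a classic least-recently-used (LRU) cache.  The sketch below is purely illustrative, not OSB source code: it models a fixed-size, access-ordered cache that drops the least recently used "pipeline" when a new one is added to a full cache.  The class and entry names are ours.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch only -- not OSB internals. Models the behavior
// described above: an access-ordered cache with a fixed entry limit
// that evicts the least recently used pipeline when full.
public class PipelineCacheSketch {

    static <K, V> Map<K, V> lruCache(int maxEntries) {
        // accessOrder=true makes iteration order = least- to most-recently used
        return new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxEntries;  // evict LRU entry once over the limit
            }
        };
    }

    public static void main(String[] args) {
        Map<String, String> cache = lruCache(2);     // tiny limit for the demo
        cache.put("ProxyA", "compiled-pipeline-A");
        cache.put("ProxyB", "compiled-pipeline-B");
        cache.get("ProxyA");                         // touch A; B is now LRU
        cache.put("ProxyC", "compiled-pipeline-C");  // full cache forces B out
        System.out.println(cache.keySet());          // [ProxyA, ProxyC]
    }
}
```

A request for ProxyB at this point would pay the compilation cost again, and its re-cached pipeline would in turn force out the current LRU entry.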

The default size limit of the RouterRuntimeCache is 100 entries (or pipelines).  The limit is expressed as a number of services, not as an amount of memory, so the memory consumed by a full cache will vary greatly based on the complexity of the services, the extent and complexity of inline XQuery, etc.  If your project grows beyond 100 proxy services, system performance can degrade significantly if the cache size is not increased to hold all frequently used services.

The way to tune this cache is not exposed through the OSB console.  As of 11g PS5, the only way to set this parameter is via a system property specified on the Java command line.  The property name is com.bea.wli.sb.pipeline.RouterRuntimeCache.size.  For example:

java … -Dcom.bea.wli.sb.pipeline.RouterRuntimeCache.size=500 … weblogic.Server …

In this example, OSB will cache 500 proxies instead of the default 100.  Because increasing the RouterRuntimeCache.size value requires more heap space to hold the additional proxies, be aware that you may need to reevaluate your JVM memory settings to allow OSB to continue to perform optimally.
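A quick way to confirm what limit a JVM would see is to read the property back the way any Java code can.  The property name below is the real OSB one; the surrounding class is our own sketch, and the assumption that OSB falls back to 100 when the property is unset reflects the default documented above.

```java
// Hypothetical check, not OSB code: read the cache-size system property.
// Integer.getInteger returns the -D value if one was supplied on the
// command line, otherwise the supplied default (100, per the text above).
public class CacheSizeCheck {
    static final String PROP = "com.bea.wli.sb.pipeline.RouterRuntimeCache.size";

    public static void main(String[] args) {
        int limit = Integer.getInteger(PROP, 100);
        System.out.println("Effective RouterRuntimeCache limit: " + limit);
    }
}
```

In practice, rather than hand-editing a server start command, this kind of -D flag is usually added to the managed servers' start arguments, for example via the EXTRA_JAVA_PROPERTIES variable in the domain's setDomainEnv script, so that it survives restarts.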