In one of our recent engagements, we were challenged to improve the overall throughput of an Oracle AIA integration. The deployment was the ‘Order-to-Bill’ Process Integration Pack, a solution for the communications industry that enables the order flow from a Siebel CRM deployment to Oracle’s billing system, Billing and Revenue Management (BRM).
From a very high level, every time a new order (say, a customer signs up for a package including fixed-line, voice, and internet services) is submitted from the Siebel user application, the following flow is executed by the Order-to-Bill solution:
So the question here is: what is the best possible throughput, in orders per minute, that we can achieve in this scenario? It quickly becomes obvious that the overall performance is not only dictated by how poorly or how well the middleware operates, but also by the impact of the applications’ latency on the overall numbers.
To figure this out, we stubbed out both Siebel CRM and Oracle BRM so that we could achieve two things:
So we actually replaced both Siebel CRM and Oracle BRM with mock services that we implemented in soapUI. On a side note: we didn’t use the AIA tool for creating mock services, CAVS, because this integration involves certain scenarios that CAVS doesn’t support. However, we did leverage the underlying CAVS framework; we will see later how this can be done.
So Siebel and BRM were now out of the picture, and we were able to run performance tests that are repeatable and have no dependency on external system behavior. With that we could significantly improve the overall throughput by changing various performance-related configurations in the middleware, including JVM sizing and parameters, the BPEL and ESB threading models, etc.
We were also able to run what-if scenarios by introducing delays into the soapUI mock services: essentially sleep statements that make the mock services wait for a certain time before responding to AIA. With that we could simulate situations where BRM responds in (almost) 0s, 1s, 2s, etc. on average and see how this impacts the overall throughput. This was particularly useful to emphasize once more that poor application performance will always degrade the overall throughput, and even the best-tuned middleware will never be able to compensate for that.
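In soapUI, such a delay boils down to a one-line Groovy statement in the mock operation’s dispatch script; a minimal sketch (the 2000 ms value and the response name “Response 1” are illustrative choices, not taken from our actual project):

```groovy
// Simulate roughly 2s of BRM latency before the mock response is returned.
// This runs inside soapUI's mock service, so no imports are needed here.
sleep(2000)

// A SCRIPT dispatch returns the name of the mock response to send back;
// "Response 1" is soapUI's default name for the first response.
return "Response 1"
```

Varying the sleep value across test runs is what let us chart throughput against simulated application latency.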
The good news is that implementing the routing to mock services doesn’t require any invasive change to your AIA services. This is because AIA services are already ‘CAVS-enabled’, which means one can change endpoints (to point to either the applications or the mock services) in a very dynamic fashion. So we basically only had to make these soapUI mock services available (we were not using CAVS simulators) and tell AIA in its configuration file (AIAConfigurationProperties.xml) to invoke these mock services rather than the actual application:
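The per-service properties in AIAConfigurationProperties.xml follow the standard CAVS-enablement pattern sketched below; the partner link name, host, and port are placeholders, not the real values from this engagement:

```xml
<!-- Fragment inside the <ServiceConfiguration> element of the target service;
     partner link name, host, and port are illustrative placeholders. -->
<Property name="Routing.QueryCustomerSiebelService.RouteToCAVS">true</Property>
<Property name="Routing.QueryCustomerSiebelService.CAVS.EndpointURI">http://mockhost:8088/mockSiebelCustomerService</Property>
```

Flipping the RouteToCAVS flag back to false restores the original application endpoint, which is what makes this routing non-invasive.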
The other part is, of course, to implement the mock services that simulate the Siebel and BRM APIs. soapUI provides a convenient interface for creating such mock services once all WSDLs and XSDs have been copied from the AIA server. The initial WSDL can easily be identified from the AIA ABCS code, but in some cases it might take a little while to collect all dependent schema files so that everything can be imported successfully into soapUI.
The Siebel web services for querying customer details and for updating orders are rather static and were therefore relatively easy to simulate. The only crucial point was to ensure that the query customer details API responds with unique identifiers, to avoid clashes in the AIA cross-referencing (X-Ref) database. This was done by using an API in soapUI to generate a GUID that was dynamically embedded into the right places of the response payload.
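One way to do this in soapUI is an inline Groovy property expansion directly in the mock response payload; `java.util.UUID` is a natural GUID source here, though the element names in this fragment are made up for illustration:

```xml
<!-- Mock response fragment (element names are illustrative);
     soapUI evaluates the ${= ...} Groovy expansion on every response,
     so each reply carries a fresh, unique identifier. -->
<ListOfContact>
  <Contact>
    <Id>${= java.util.UUID.randomUUID().toString()}</Id>
  </Contact>
</ListOfContact>
```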
The BRM side gave us a few more challenges. In particular, the BRM subscription API requires a high degree of flexibility in generating a response payload that the AIA code considers valid. To achieve this, we responded conditionally with different dynamic payload structures, depending on the request payload sent by AIA. On a side note, this was the reason for not using CAVS simulators to build the mock services, as their possibilities for dynamic responses are limited.
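In soapUI, this kind of conditional behavior can be implemented with a SCRIPT dispatch on the mock operation: the Groovy script inspects the request and returns the name of the mock response to use. The element name and response names below are purely illustrative, not the real BRM API:

```groovy
// soapUI SCRIPT dispatch: 'mockRequest' is provided by the soapUI runtime.
def content = mockRequest.requestContent

// The element checked here is a made-up example, not an actual BRM opcode.
if (content.contains("CreateSubscription")) {
    return "CreateSubscriptionResponse"
}
return "DefaultResponse"
```

Each named response can in turn contain its own dynamic property expansions, which together gave us the flexibility CAVS simulators lacked.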
Finally, the last piece needed to complete the above scenario is a way to generate the initial payloads (that is, orders) to initiate the flow. This was accomplished with a simple BPEL process that takes a template payload stored in the database, makes it unique through simple text substitutions, and then pushes the order onto the very same queue that Siebel usually uses to submit orders. AIA simply picks the orders up from there and starts the usual order orchestration.
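The ‘make it unique’ substitution step can be sketched as follows (here in Groovy for brevity, although the article implements it inside a BPEL process; the template and placeholder token are assumptions, not the real order payload):

```groovy
// Replace a placeholder token in a template order with a fresh unique id.
// ORDER_ID_PLACEHOLDER and the template structure are illustrative only.
def template = '<Order><Id>ORDER_ID_PLACEHOLDER</Id></Order>'
def order = template.replace('ORDER_ID_PLACEHOLDER',
                             java.util.UUID.randomUUID().toString())
// 'order' is now a unique payload ready to be enqueued for AIA to pick up.
```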