Note: This article provides an approach to implementing compression in OSB 11g. OSB 12c supports compression out of the box (OOTB). For details, please refer to the following documentation:
Compressed HTTP Request and Response Payload Support
HTTP compression is a mechanism that enables a client and server to exchange compressed data over HTTP, improving performance by making better use of bandwidth. Oracle Service Bus (OSB) doesn't support gzip compression by default, which can create a performance issue when large payloads are exchanged. It may become even more serious if data exchange takes place over a long-distance WAN or a high-latency network. This blog will illustrate several workarounds for handling HTTP compression with OSB.
Before moving on, let's recall how HTTP compression works.
An HTTP client sends a request to an HTTP server with Accept-Encoding:gzip in the HTTP header to tell the server that it accepts a gzip-compressed response payload. If the server supports gzip compression, it compresses the response payload and inserts Content-Encoding:gzip into the response's HTTP header to confirm the compression.
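The negotiation above can be sketched in a few lines of Python. This is an illustration of the header exchange only, not any OSB API; the helper names (`serve`, `read_response`) are hypothetical.

```python
import gzip

def build_request_headers():
    """Headers a client sends to advertise gzip support."""
    return {"Accept-Encoding": "gzip"}

def serve(payload: bytes, request_headers: dict):
    """Server side: compress the response only if the client accepts gzip,
    and confirm the compression via Content-Encoding."""
    if "gzip" in request_headers.get("Accept-Encoding", ""):
        return {"Content-Encoding": "gzip"}, gzip.compress(payload)
    return {}, payload

def read_response(response_headers: dict, body: bytes) -> bytes:
    """Client side: decompress only when the server confirmed gzip."""
    if response_headers.get("Content-Encoding") == "gzip":
        return gzip.decompress(body)
    return body

payload = b"<soap:Envelope>...</soap:Envelope>" * 100
headers, body = serve(payload, build_request_headers())
assert headers == {"Content-Encoding": "gzip"}
assert read_response(headers, body) == payload
```

Note that the client must always check Content-Encoding rather than assume compression, since a server that doesn't support gzip simply returns the payload unmodified.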
Compression isn't free: higher CPU usage is expected for compression and decompression. It is a tradeoff, and it doesn't necessarily bring better performance for client and server on the same low-latency network with a small response payload.
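A quick way to see the tradeoff is to compare what gzip does to a large, repetitive XML-like payload versus a tiny one. This is an illustration only; the payloads are made up, and real ratios depend on the data.

```python
import gzip
import time

# A large, repetitive payload (typical of verbose SOAP/XML) compresses well.
large = b"<item><name>widget</name><qty>1</qty></item>" * 10000
# A tiny payload gains nothing: gzip's own header/trailer overhead dominates.
small = b"<ok/>"

start = time.perf_counter()
large_gz = gzip.compress(large)
elapsed = time.perf_counter() - start

print(f"large: {len(large)} -> {len(large_gz)} bytes "
      f"({len(large_gz) / len(large):.1%}) in {elapsed * 1000:.2f} ms")
print(f"small: {len(small)} -> {len(gzip.compress(small))} bytes (grew)")
```

The small payload actually grows after compression, which is why blanket compression of every response is not automatically a win.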
However, compression is preferable for:
OSB will support HTTP compression in a future release, so ease of upgrade is an important factor to consider when working out the solutions. In this blog, the following workarounds will be elaborated in the context of Web Services over HTTP:
Other approaches were also evaluated and tested. Due to the extra complexity and the need to change OSB artifacts, they were abandoned:
OSB allows both HTTP headers and content to be passed through a proxy service's pipeline intact. By taking advantage of this feature, the Pass Through workaround provides a simple solution with high performance.
OSB, when invoked by a client, routes the received request to the server along with the client's original HTTP headers, including Accept-Encoding:gzip.
The server examines the received HTTP headers and finds Accept-Encoding:gzip present. It compresses the response payload, inserts Content-Encoding:gzip into the HTTP header, and sends it back to OSB.
OSB simply passes the compressed response payload and the server's headers back to the client. The client decompresses and processes the response.
This workaround can be implemented as follows:
When defining the proxy service, “Get All Headers” must be ticked. This allows OSB to keep the received HTTP headers in the proxy service's pipeline.
If no data processing within the proxy service is required, you may consider this workaround, as it offers the best performance.
In a typical use-case of OSB, it is expected that:
VOTE (Validation, Orchestration, Transformation, Enrichment) operations are needed in the proxy service's pipeline
Security enforcement, logging, tracing or reporting features are commonly used
The outbound response (response from server application) and inbound response (response to client) are different
The inbound and outbound protocols might be different too, for instance, JMS for inbound and HTTP for outbound.
In order to support this broader range of scenarios, the uncompressed payload must be available for processing within OSB. To make this workable, the solution is to externalize compression outside of OSB artifacts. No further constraints are imposed on OSB development, as this makes compression totally transparent to OSB artifacts.
A typical deployment architecture is very likely to have a load balancer or proxy server front-ending the OSB cluster to load balance and fail over inbound invocations. Most load balancers and proxy servers support compression, so we can simply delegate compression to the front-end proxy server for inbound invocations. (1)
Now, let's focus on outbound invocation. The following two workarounds handle compression out of OSB.
External Proxy Server
The workaround is to put an external compression-enabled proxy server (for instance, an Apache proxy server or even a hardware proxy server) between OSB and the backend Web Service server to handle compression. The idea is to reduce the amount of data transferred between the proxy server and the backend server. The data exchange between OSB and the proxy server remains uncompressed.
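For an Apache httpd proxy, this pattern can be sketched with mod_proxy, mod_headers, and mod_deflate: the proxy requests a gzip-compressed response from the backend and inflates it before handing it back to OSB uncompressed. The hostname and path below are placeholders; adapt them to your environment.

```apache
# Sketch of an Apache httpd proxy between OSB and the backend (assumed setup).
# Requires mod_proxy, mod_proxy_http, mod_headers, and mod_deflate.
<Location "/backend">
    # Forward requests from OSB to the backend Web Service server.
    ProxyPass http://backend.example.com/service

    # Ask the backend for a gzip-compressed response...
    RequestHeader set Accept-Encoding gzip

    # ...and decompress it here, so OSB receives plain, uncompressed data.
    SetOutputFilter INFLATE
</Location>
```

With this in place, the compressed transfer happens only on the proxy-to-backend leg, which is where the high-latency or long-distance link typically sits.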
First, we need to define the proxy server in OSB. Multiple proxy servers should be considered for load balancing and failover.
The Oracle Service Bus documentation describes how multiple proxy servers work: Adding multiple proxy servers to a resource enables Oracle Service Bus to perform load balancing and offer fault tolerance among the configured proxy servers. If a particular proxy server is not reachable, Oracle Service Bus attempts to use the next proxy server in the configuration. If all proxy servers are unreachable, Oracle Service Bus tries to connect to the back end service directly. If that too fails, a fault is raised and sent back to the caller.
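The failover order described above can be sketched as a small Python function. This is a simplified model of the documented behavior, not OSB code; `call_via` is a hypothetical function that raises `ConnectionError` when an endpoint is unreachable.

```python
def invoke_with_failover(call_via, proxies, backend):
    """Try each configured proxy in order, then the backend directly;
    raise a fault only if everything is unreachable."""
    for endpoint in list(proxies) + [backend]:
        try:
            return call_via(endpoint)
        except ConnectionError:
            continue  # endpoint unreachable: fall through to the next one
    raise RuntimeError("fault: all proxy servers and the backend are unreachable")

# Simulated usage: proxy1 is down, proxy2 answers.
calls = []
def call_via(endpoint):
    calls.append(endpoint)
    if endpoint != "proxy2":
        raise ConnectionError(endpoint)
    return "response"

print(invoke_with_failover(call_via, ["proxy1", "proxy2"], "backend"))
# prints "response" after trying proxy1 first
```

The key point for capacity planning is the last step: if every proxy is down, OSB bypasses compression entirely and calls the backend directly, so the backend must still be able to handle uncompressed traffic.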
OSB allows fine-grained configuration: you can specify that some business services use a proxy server while others do not. On the business service configuration page, select the configured proxy server for any business service that relies on the proxy server to handle compression.