
Best Practices from Oracle Development's A‑Team

Handling HTTP Compression with OSB

Note: This article provides an approach to implementing compression in OSB 11g. OSB 12c supports compression out of the box. For details, please refer to the following documentation:
Compressed HTTP Request and Response Payload Support

 

HTTP compression is a mechanism that enables a client and a server to exchange compressed data over HTTP, improving performance by making better use of bandwidth. Oracle Service Bus (OSB) doesn't support gzip compression by default, which can create a performance issue when large payloads are exchanged. It may become even more serious if data exchange takes place over a long-distance WAN or a high-latency network. This blog illustrates several workarounds for handling HTTP compression with OSB.

HTTP Compression

Before moving on, let's recall how HTTP compression works.

An HTTP client sends a request to an HTTP server with Accept-Encoding:gzip in the HTTP headers to tell the server that it accepts a gzip-compressed response payload. If the server supports gzip compression, it compresses the response payload and inserts Content-Encoding:gzip into the response's HTTP headers to confirm the compression.
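
For illustration, this is what the exchange looks like from a plain Java client (a minimal sketch; the endpoint URL is a placeholder and error handling is omitted):

    import java.io.BufferedReader;
    import java.io.InputStream;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.zip.GZIPInputStream;

    public class GzipHttpClient {
        public static void main(String[] args) throws Exception {
            // Placeholder endpoint - replace with a real service URL.
            URL url = new URL("http://localhost:7001/someService");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();

            // Tell the server we accept a gzip-compressed response payload.
            conn.setRequestProperty("Accept-Encoding", "gzip");

            // The server confirms compression via the Content-Encoding response header.
            InputStream in = conn.getInputStream();
            if ("gzip".equalsIgnoreCase(conn.getContentEncoding())) {
                in = new GZIPInputStream(in);
            }

            try (BufferedReader reader = new BufferedReader(new InputStreamReader(in, "UTF-8"))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    System.out.println(line);
                }
            }
        }
    }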

Compression isn't free: higher CPU usage is expected for compression and decompression. It is a tradeoff - it doesn't necessarily bring better performance when client and server sit on the same low-latency network and the response payload is small.

However, compression is preferable for:

  • large payloads, especially large XML payloads whose size can be significantly reduced by compression (a small sketch after this list illustrates the savings)
  • data exchange over a long-distance or high-latency network, for example when OSB invokes a partner's service residing in a different geographical location.
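
To get a feel for the potential savings, the following standalone Java sketch compresses a synthetic, repetitive XML payload with java.util.zip and prints both sizes. The payload and class name are made up for illustration; the exact ratio depends on the data, but repetitive XML typically shrinks dramatically.

    import java.io.ByteArrayOutputStream;
    import java.util.zip.GZIPOutputStream;

    public class CompressionRatioDemo {
        public static void main(String[] args) throws Exception {
            // Build a repetitive XML payload - the kind of data gzip handles very well.
            StringBuilder sb = new StringBuilder("<orders>");
            for (int i = 0; i < 10000; i++) {
                sb.append("<order><id>").append(i).append("</id><status>OPEN</status></order>");
            }
            sb.append("</orders>");
            byte[] xml = sb.toString().getBytes("UTF-8");

            // Compress the payload in memory.
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (GZIPOutputStream gzip = new GZIPOutputStream(bos)) {
                gzip.write(xml);
            }

            System.out.println("uncompressed: " + xml.length + " bytes");
            System.out.println("gzip:         " + bos.size() + " bytes");
        }
    }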

 

Workarounds

OSB will support HTTP compression in a future release, so ease of upgrade is an important factor to consider when working out the solutions. In this blog, the following workarounds are elaborated in the context of Web Services over HTTP:

  • Pass Through
  • External Proxy Server
  • Compression Servlet

Other approaches were also evaluated and tested. Due to the extra level of complexity and the need to change OSB artifacts, they were discarded:

  • Java Callout to perform compression and decompression in a Java POJO (a sketch of such a helper follows this list)
  • Java Callout to invoke Apache HttpClient
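
For completeness, the discarded Java Callout approach would boil down to a small POJO with static compress/decompress methods along the following lines (a hypothetical sketch; class and method names are made up, and OSB Java Callout actions invoke public static methods from a registered JAR):

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.util.zip.GZIPInputStream;
    import java.util.zip.GZIPOutputStream;

    public class GzipCalloutHelper {

        // Compress a payload before sending it to a gzip-aware backend.
        public static byte[] compress(byte[] payload) throws Exception {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (GZIPOutputStream gzip = new GZIPOutputStream(bos)) {
                gzip.write(payload);
            }
            return bos.toByteArray();
        }

        // Decompress a gzip response so the pipeline can work on plain XML again.
        public static byte[] decompress(byte[] compressed) throws Exception {
            try (GZIPInputStream gzip = new GZIPInputStream(new ByteArrayInputStream(compressed))) {
                ByteArrayOutputStream bos = new ByteArrayOutputStream();
                byte[] buf = new byte[8192];
                int n;
                while ((n = gzip.read(buf)) != -1) {
                    bos.write(buf, 0, n);
                }
                return bos.toByteArray();
            }
        }
    }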

 

Pass Through

OSB allows both HTTP headers and content to be passed through proxy service's pipeline intact. By taking advantage of this feature, the Pass Through workaround provides a simple solution with high performance.

OSB, when invoked by a client, routes the received request to the server along with the client's original HTTP headers, including Accept-Encoding:gzip.

The server examines the received HTTP headers and finds Accept-Encoding:gzip present. It compresses the response payload, inserts Content-Encoding:gzip into the HTTP headers, and sends the response back to OSB.

OSB simply passes the compressed response payload and the server's headers back to the client. The client decompresses and processes the response.

This workaround can be implemented as follows:

  • When defining the proxy service, “Get All Headers” must be ticked. This allows OSB to keep the received HTTP headers in the proxy service's pipeline.

  • In the Route action, Transport Headers actions for the request and the response must be added. This ensures the received HTTP headers are passed through with the outbound request and the inbound response respectively.

Pros:

  • This workaround provides the best performance among the three workarounds discussed in this blog since OSB doesn't touch the content and therefore doesn't need to deal with compression and decompression. The time spent in OSB is remarkably short - a test on my laptop shows only a couple dozen milliseconds were taken by OSB for a response payload of 2.4 MB.

Cons:

  • The interfaces of the proxy service and the backend service must be identical
  • The response payload is not accessible or “touchable” in the pipeline: logging, tracing, validation and transformation are not possible

If no data processing within the proxy service is required, you may consider this workaround to benefit from its superior performance.

 

In a typical use-case of OSB, it is expected that:

  • VOTE (Validation, Orchestration, Transformation, Enrichment) operations are needed in the proxy service's pipeline

  • Security enforcement, logging, tracing or reporting features are commonly used

  • The outbound response (response from server application) and inbound response (response to client) are different

  • The inbound and outbound protocols might be different too, for instance, JMS for inbound and HTTP for outbound.

In order to support this broader range of scenarios, the uncompressed payload must be processed within OSB. To make this workable, the solution is to externalize compression outside of OSB. This imposes no further constraints on OSB development, as it makes compression totally transparent to OSB artifacts.

A typical deployment architecture is very likely to have a load balancer or proxy server front-ending the OSB cluster to load balance and fail over inbound invocations. Since most load balancers and proxy servers support compression, we can simply delegate compression to the front-end proxy server for inbound invocations. (1)

Now let's focus on outbound invocations. The following two workarounds handle compression outside of OSB.

External Proxy Server

The workaround is to put an external compression-enabled proxy server (for instance, an Apache proxy server or even a hardware proxy) between OSB and the backend Web Service server to handle compression. The idea is to reduce the amount of data transferred between the proxy server and the backend server; the data exchange between OSB and the proxy server remains uncompressed.

First, we need to define a proxy server resource in OSB. Multiple proxy servers should be configured when load balancing and failover are a concern.

The Oracle Service Bus documentation describes how multiple proxy servers work: “Adding multiple proxy servers to a resource enables Oracle Service Bus to perform load balancing and offer fault tolerance among the configured proxy servers. If a particular proxy server is not reachable, Oracle Service Bus attempts to use the next proxy server in the configuration. If all proxy servers are unreachable, Oracle Service Bus tries to connect to the back end service directly. If that too fails, a fault is raised and sent back to the caller.”

OSB allows fine-grained configuration: you can specify that some business services use a proxy server while others do not. On the business service configuration page, select the configured proxy server for each business service that relies on the proxy server to handle compression.