Best Practices from Oracle Development's A‑Team

High availability considerations for WebCenter Portal and ADF

Introduction

High availability is very important in large WebCenter installations. It allows users to gracefully switch from one node in the cluster to another, keeping their session intact when a node goes down.
In order for this to work, some considerations need to be made when implementing a WebCenter Portal application.

In this article we explain some guidelines that should help you implement with High Availability in mind. Even if you are not implementing with HA in mind, these guidelines will help make your application perform better and be more stable.

Main Article

In order to know how we can implement a solution that supports High Availability (HA), we first need to understand how HA works and how WebLogic handles a fail-over.

Whenever you have configured a cluster of managed servers for your application, WebLogic will replicate the HTTP session to a secondary machine. When the primary machine where the session lives fails, WebLogic will point the user to the secondary machine, and the user will be able to continue working without losing data.

WebLogic duplicates those sessions by serializing the objects in the session and then transferring them to the secondary machine. Only the objects that are stored in the session will be replicated. From an ADF perspective this means that managed beans with pageFlowScope and above will be replicated.
In order for this to work, all objects in the session need to be serializable. If not, the session cannot be completely replicated, which means the user will lose data upon fail-over.

WebLogic also needs to be notified when the state of an object changes so it can propagate these changes to the secondary machine.
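For plain HTTP session attributes, WebLogic detects changes through calls to setAttribute, so after mutating an object that lives in the session you should set it again. A minimal sketch, assuming a hypothetical "cart" attribute (the attribute name and helper class are made up for illustration):

import java.util.ArrayList;
import java.util.List;
import javax.servlet.http.HttpSession;

public class CartUpdater {
    public void addItem(HttpSession session, String item) {
        // Hypothetical "cart" attribute: a serializable list stored in the session.
        @SuppressWarnings("unchecked")
        List<String> cart = (List<String>) session.getAttribute("cart");
        if (cart == null) {
            cart = new ArrayList<String>(); // ArrayList is Serializable
        }
        cart.add(item);                     // mutating the list alone is not detected
        session.setAttribute("cart", cart); // re-setting it notifies WebLogic to replicate
    }
}

For the ADF memory scopes there is a dedicated API for the same purpose, described in the "Mutating managed beans" section below.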

Some implementation guidelines need to be followed in order to fully support an HA environment.
Below is a list of guidelines that can help when implementing for a highly available environment.

Serialization of Managed Beans

One of the key requirements for a successful HA environment is that all objects in the session are serializable. If objects are not serializable, then WebLogic will not be able to replicate them to the fail-over server.
ADF UI components are not serializable by design. This means that if UI components are bound to managed beans with pageFlowScope or sessionScope, replication will fail.

If there is a need to reference those UI components in managed beans, then we recommend following the ComponentReference pattern, which is described in the following post: http://www.ateam-oracle.com/rules-and-best-practices-for-jsf-component-binding-in-adf/
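A minimal sketch of that pattern, assuming a pageFlowScope bean with a component binding to an af:inputText (the bean and property names are illustrative):

import java.io.Serializable;
import oracle.adf.view.rich.component.rich.input.RichInputText;
import org.apache.myfaces.trinidad.util.ComponentReference;

public class MyPageFlowBean implements Serializable {
    // Serializable reference instead of the (non-serializable) component itself
    private ComponentReference<RichInputText> inputRef;

    // Called by the component binding, e.g. binding="#{pageFlowScope.myBean.input}"
    public void setInput(RichInputText input) {
        this.inputRef = ComponentReference.newUIComponentReference(input);
    }

    public RichInputText getInput() {
        return inputRef == null ? null : inputRef.getComponent();
    }
}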

In addition to this, if custom objects are referenced in a managed bean, you always have to make sure those objects implement the Serializable interface. If they don't, replication will fail and the user will lose data upon fail-over.

Session footprint

The session footprint is an important factor when replicating the session. The bigger the session object is, the more traffic the replication will require to propagate the session to the secondary server.

Therefore it is important to keep track of your session footprint. The session footprint can be tracked with tools like JRockit Mission Control (for the JRockit JVM) or VisualVM (for the HotSpot JVM).

A session footprint of 3 MB is considered very high in an ADF application.
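Besides profiling tools, you can also get a rough programmatic estimate by serializing the session attributes and measuring the byte count. A minimal sketch (approximate only, and it assumes all attributes are in fact serializable):

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.util.Enumeration;
import javax.servlet.http.HttpSession;

public final class SessionFootprint {
    private SessionFootprint() {}

    // Returns the approximate serialized size of all session attributes in bytes.
    public static long estimate(HttpSession session) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bytes);
        Enumeration<?> names = session.getAttributeNames();
        while (names.hasMoreElements()) {
            out.writeObject(session.getAttribute((String) names.nextElement()));
        }
        out.close();
        return bytes.size();
    }
}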

Reasons for a large footprint can be any of the following:

  • Referencing UI components in managed beans with pageFlowScope or above
  • Referencing large data sets (View Objects) in managed beans with pageFlowScope or above

In order to minimize the footprint, some design decisions can be made up front. When putting large objects in managed beans, you always need to ask yourself the following questions: Does the object really need to be stored in pageFlowScope? Is viewScope or requestScope not enough? In many cases viewScope will be sufficient.
Even if you need to pass information from one view to another, you can still use techniques like contextual events or passing parameters instead of storing an entire object in the session, as in the sketch below.
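For example, instead of keeping a whole row object in the session, you can store only its key in the pageFlowScope map and re-query the data in the target view. A minimal sketch (the attribute name and id are made up):

import oracle.adf.view.rich.context.AdfFacesContext;

public class NavigationHelper {
    // Store only the lightweight key, not the entire (possibly large) object.
    public void rememberSelection(Integer employeeId) {
        AdfFacesContext.getCurrentInstance()
                       .getPageFlowScope()
                       .put("employeeId", employeeId);
    }
}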

Lowering the session footprint will not only help in an HA environment; it will also reduce pressure on the JVM heap, which allows more users per node than a non-optimized implementation.

Application Configuration

In order for the application to work in a fail-over environment, we need to tell the application that it has to behave like an HA application.

This needs to be done in two files: weblogic.xml and adf-config.xml.

weblogic.xml

In order to enable support for session replication, you need to tell WebLogic to use a replicated persistent store for the session in a clustered environment. This can be done with the following configuration:

<weblogic-web-app>
  <session-descriptor>
    <persistent-store-type>replicated_if_clustered</persistent-store-type>
  </session-descriptor>
</weblogic-web-app>

adf-config.xml

In order for HA to work in an ADF environment, we need to tell the controller to replicate the beans in pageFlowScope and viewScope. This can be done by setting adf-scope-ha-support to true:

<adf-controller-config xmlns="http://xmlns.oracle.com/adf/controller/config">
  <adf-scope-ha-support>true</adf-scope-ha-support>
</adf-controller-config>

Business Components considerations

Business Components also need some additional configuration in order to support HA. The following properties have to be set on the Application Module configuration (see the sketch after this list):

  • jbo.dofailover='true'
  • jbo.ampool.doampooling='true' (application module pooling must stay enabled)
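These properties are typically set through the Application Module configuration editor in JDeveloper, which writes them into the project's bc4j.xcfg. A rough sketch of the resulting fragment (the names are illustrative, and the exact attribute placement can differ between versions):

<AppModuleConfig name="AppModuleLocal" ApplicationName="model.AppModule">
  <!-- jbo.dofailover passivates pending application module state so that
       another node can activate it after a fail-over -->
  <AM-Pooling jbo.dofailover="true" jbo.ampool.doampooling="true"/>
</AppModuleConfig>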

Mutating managed beans

Managed beans with a scope higher than request (viewScope, pageFlowScope) need to be propagated to the secondary node. The controller won't automatically know when those managed beans have changed.
Whenever you make changes in a managed bean and those changes need to be replicated, you need to notify the controller so it can make sure the updated value of the bean is passed on to the secondary node.

This can be done by executing the following code:

import java.util.Map;

import oracle.adf.controller.ControllerContext;
import oracle.adf.view.rich.context.AdfFacesContext;

// Mark the viewScope as dirty so the controller replicates the updated bean state.
Map<String, Object> viewScope = AdfFacesContext.getCurrentInstance().getViewScope();
ControllerContext ctx = ControllerContext.getInstance();
ctx.markScopeDirty(viewScope);

Replication groups

Within an HA environment, each session is replicated to one secondary node; not every node in the cluster holds every session. This saves network overhead and improves performance.
To get the best possible outcome, WebLogic will automatically replicate the session to a machine other than the one hosting the primary session. This is safe because if that machine fails, the session is still available on a different machine.

If you have a big cluster with many machines, it is possible that those machines are set up in different data centers. Therefore WebLogic allows you to configure replication groups, which tell WebLogic which secondary machine to use for each machine.

In order to configure replication groups, you need to adjust the cluster settings in the WebLogic console:

[Image: Replication Groups settings in the WebLogic console]

Conclusion

To support a truly highly available environment, it is important to know that you not only need to configure a cluster, but also have to take some implementation considerations into account in order for fail-over to work.

Additional information about High Availability can be found in the High Availability guide for ADF and WebCenter: http://docs.oracle.com/cd/E25054_01/core.1111/e10106/adf.htm
