by Robert Patrick and Sabha Parameswaran
WebLogic Server clusters form a loosely federated group of managed servers that provides a model applications can leverage to achieve scalability, load balancing, and failover. Each managed server maintains its own view of which servers are in the cluster and its own cluster-wide view of the JNDI tree. The cluster uses a messaging model to let members exchange the information required to keep the cluster in sync. WebLogic Server supports two different cluster messaging protocols, known as unicast and multicast. This blog entry describes the two protocols and makes recommendations for selecting which protocol to use, for all versions of WebLogic Server up to and including WLS 12c (12.1.x).
Most features of WebLogic Server clustering are targeted at providing scalability, high availability, and failover capabilities to specific application component types (e.g., web applications, EJBs, JMS). To support these capabilities, WebLogic Server clustering provides some infrastructure services upon which all of the other features rely. The two most important services are:
- Cluster Membership Service – Cluster members exchange messages so that each server can independently track the current cluster membership list.
- JNDI Replication Service – Cluster members exchange messages about local changes to their JNDI tree so that each server can independently maintain the current cluster-wide view of the JNDI tree.
These two infrastructure services are the primary users of the WLS cluster messaging protocol. Most of the other WLS clustering features (e.g., HTTP Session Replication, EJB clustering, and JMS clustering) do not use the cluster messaging protocol and simply rely on these two infrastructure services.
Cluster members maintain their own view of which servers are currently in the cluster. To accomplish this, each server periodically sends heartbeat messages to the cluster to let the other members know that it is alive. Each server also receives these messages from other cluster members and maintains its cluster list based on these incoming heartbeats. If a cluster member misses a number of heartbeat messages in a row from another server, it removes that server from its cluster membership list until it receives another heartbeat message from that server. The number of consecutive missed heartbeats required to remove a server from the cluster list varies by cluster messaging protocol; the specifics are discussed later in this article.
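A minimal sketch of this heartbeat-based membership tracking might look like the following. The class and method names are invented for illustration; this is a simplified model, not WebLogic's actual implementation.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative model only: tracks when each peer was last heard from and
// drops a peer after 'missedLimit' consecutive heartbeat intervals pass
// with no message from it.
public class MembershipTracker {
    private final int missedLimit;       // e.g., 3 for multicast, 1 for unicast
    private final long heartbeatMillis;  // default heartbeat period is 10 seconds
    private final Map<String, Long> lastSeen = new HashMap<>();

    public MembershipTracker(int missedLimit, long heartbeatMillis) {
        this.missedLimit = missedLimit;
        this.heartbeatMillis = heartbeatMillis;
    }

    // Record an incoming heartbeat from another server.
    public void onHeartbeat(String server, long now) {
        lastSeen.put(server, now);
    }

    // Servers considered alive at time 'now'. A server that has missed
    // 'missedLimit' heartbeats in a row is excluded until it is heard again.
    public Set<String> members(long now) {
        Set<String> alive = new HashSet<>();
        for (Map.Entry<String, Long> e : lastSeen.entrySet()) {
            if (now - e.getValue() <= (long) missedLimit * heartbeatMillis) {
                alive.add(e.getKey());
            }
        }
        return alive;
    }
}
```

With the multicast defaults (3 missed heartbeats, 10-second period), a server disappears from the list roughly 30 seconds after its last heartbeat, which matches the detection time discussed below.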
For reasons unrelated to cluster membership, cluster members may hold persistent RMI/T3 socket connections to other cluster members. For example, HTTP session replication uses an RMI/T3 socket connection to replicate a server's HTTP sessions to another server in the cluster. The cluster membership service also detects the death of an RMI/T3 socket to another cluster member and uses that information to remove the server at the other end of the socket from its cluster membership list. This allows the cluster membership service to detect cluster membership changes more quickly.
For more information on failure detection, please see the WebLogic Server documentation at http://docs.oracle.com/cd/E24329_01/web.1211/e24425/failover.htm#i1024590.
JNDI replication provides each server with a complete, cluster-wide view of the JNDI tree. This gives applications deployed to WebLogic Server cluster transparency, so that they do not need to worry about which services are available on which members of the cluster. To support this, cluster members send JNDI update messages to the cluster when objects are bound to or removed from their local JNDI tree. When a member leaves the cluster, the other members remove the JNDI bindings associated with that server from their JNDI trees. When a server (re)joins the cluster, it asks another server in the cluster to provide the current view of its JNDI tree (known as a JNDI state dump) to initialize its view, and then relies on JNDI replication messages to maintain it. This JNDI state dump does not use the cluster messaging protocol; it relies on a point-to-point connection with the other server.
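The bookkeeping described above can be sketched as a map from binding names to the set of servers offering them. This is purely illustrative; the class name and API are invented and do not reflect WebLogic's internal JNDI implementation.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative model of a cluster-wide JNDI view maintained from
// bind/unbind replication messages and membership changes.
public class ClusterJndiView {
    // binding name -> servers that currently offer that binding
    private final Map<String, Set<String>> view = new HashMap<>();

    // Apply a replicated "bind" message from another server.
    public void onBind(String server, String name) {
        view.computeIfAbsent(name, k -> new HashSet<>()).add(server);
    }

    // Apply a replicated "unbind" message from another server.
    public void onUnbind(String server, String name) {
        Set<String> owners = view.get(name);
        if (owners != null) {
            owners.remove(server);
            if (owners.isEmpty()) view.remove(name);
        }
    }

    // When a member leaves the cluster, drop every binding it contributed.
    public void onMemberLeft(String server) {
        view.values().forEach(owners -> owners.remove(server));
        view.values().removeIf(Set::isEmpty);
    }

    public boolean has(String name) {
        return view.containsKey(name);
    }
}
```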
WebLogic Server supports two cluster messaging protocols:
- Multicast – This protocol, which relies on UDP Multicast, has been around since WebLogic Server introduced clustering back in WebLogic Server version 4.0.
- Unicast – This protocol, which relies on point-to-point TCP/IP sockets, was added in WebLogic Server 10.0.
The underlying cluster messages that WebLogic Server sends are essentially independent of the clustering protocol in use. The actual number of network messages is determined by the protocol implementation (e.g., where a message must be retransmitted to get it to all of the cluster members) and the network configuration (e.g., the network packet size restriction may cause a message to be split into multiple packets).
It is important to note that the WebLogic Server clustering protocols and the specific implementation details discussed in this article only apply to the WebLogic Server clustering implementation. Other Oracle products (e.g., Coherence) use similar terminology to describe their clustering protocols, but the specifics of their implementations are different. It would be a mistake to try to apply the descriptions and guidelines in this article to other products' clustering protocols!
The WebLogic Server multicast implementation uses standard UDP multicast to broadcast the cluster messages to a group that is explicitly listening on the multicast address and port over which the message is sent. Multicast addresses can range from 224.0.0.0 to 239.255.255.255, though certain multicast addresses are reserved for specific purposes and should be avoided (see https://en.wikipedia.org/wiki/Multicast_address). Multicast ports have the normal UDP port range (i.e., 0 to 65535); again, certain UDP ports are reserved for specific purposes and should generally be avoided (see https://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers).
Since UDP is not a reliable protocol, WebLogic Server builds its own reliable messaging protocol into the messages it sends to detect and retransmit lost messages. On modern, properly-configured, local-area networks, packets are rarely lost so this should not be a factor in deciding on which cluster messaging protocol to use.
Most modern operating systems and switches support UDP multicast by default between machines in the same subnet. However, most routers do not support the propagation of UDP multicast messages between subnets by default. In environments that do support UDP multicast message propagation, UDP multicast has a time-to-live (TTL) mechanism built into the protocol. Each time the message reaches a router, the TTL is decremented by 1 before it routes the message. When the TTL reaches 0, the message will no longer be propagated between networks, making it an effective control for the range of a UDP multicast message. By default, WebLogic Server sets the TTL for its multicast cluster messages to 1, which restricts the message to the current subnet.
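For illustration, here is how a Java program might send a UDP multicast datagram with a TTL of 1, analogous to WebLogic's default setting. The multicast address and port are arbitrary examples, and this sketch is not how WebLogic itself is configured (the TTL is set via the cluster's Multicast TTL attribute in the console).

```java
import java.io.IOException;
import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;
import java.nio.charset.StandardCharsets;

public class MulticastTtlExample {
    public static void main(String[] args) throws IOException {
        // Example administratively-scoped multicast address; not a WLS default.
        InetAddress group = InetAddress.getByName("239.192.0.11");
        try (MulticastSocket socket = new MulticastSocket()) {
            // TTL 1 keeps the datagram on the local subnet: the first router
            // decrements it to 0 and stops propagating it.
            socket.setTimeToLive(1);
            byte[] payload = "heartbeat".getBytes(StandardCharsets.UTF_8);
            socket.send(new DatagramPacket(payload, payload.length, group, 7001));
        }
    }
}
```

A cluster spanning two subnets would need a TTL of at least 2 (plus routers willing to forward multicast) for heartbeats to reach the other side.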
When using multicast, the cluster heartbeat mechanism will remove a server from the cluster if it misses three heartbeat messages in a row to account for the fact that UDP is not considered a reliable protocol. Since the default heartbeat frequency is one heartbeat every 10 seconds, this means it can take up to 30 seconds to detect that a server has left the cluster. Of course, socket death detection or failed connection attempts can also accelerate this detection.
So what does this all mean? It means that the WLS multicast cluster messaging protocol:
- Uses a very efficient and scalable peer-to-peer model where a server sends each message directly to the network once and the network makes sure that each cluster member receives the message directly from the network.
- Works out of the box in most modern environments where the cluster members are in a single subnet.
- Requires additional configuration in the router(s) and WebLogic Server (i.e., Multicast TTL) if the cluster members span more than one subnet.
- Uses three consecutive missed heartbeats to remove a server from another server’s cluster membership list.
It is important to note that, although parts of the WebLogic Server documentation suggest that multicast is only supported for backward compatibility, this is not correct. The multicast cluster messaging protocol is fully supported by Oracle. The A-Team is working with WebLogic Server product management to address these documentation issues in the WebLogic Server 12c documentation.
To test an environment for its ability to support the WebLogic Server multicast messaging protocol, WLS provides a Java command-line utility known as MulticastTest (see http://docs.oracle.com/cd/E24329_01/web.1211/e24487/utils.htm#i1119755). To verify an environment, simply run the tool on each machine that will host the cluster members and make sure that all machines can see the messages sent by all other machines. Note that every machine must use the same multicast address and port. For example, to test the ability to use multicast across a cluster deployed to three machines, run the following commands simultaneously (the multicast address shown is just an example):
On machine 1:
java utils.MulticastTest -n Machine1 -a 239.192.0.11 -p 7001
On machine 2:
java utils.MulticastTest -n Machine2 -a 239.192.0.11 -p 7001
On machine 3:
java utils.MulticastTest -n Machine3 -a 239.192.0.11 -p 7001
The resulting output should show that each machine is seeing messages from all three machines. For example, the output on Machine 2 should look something like the following:
New Neighbor Machine1 found on message number 1
I (Machine2) sent message num 1
Received message 2 from Machine1
New Neighbor Machine3 found on message number 1
Received message 2 from Machine2
I (Machine2) sent message num 2
Received message 3 from Machine1
Received message 2 from Machine3
Received message 3 from Machine2
I (Machine2) sent message num 3
Received message 4 from Machine1
Received message 3 from Machine3
Received message 4 from Machine2
I (Machine2) sent message num 4
Received message 5 from Machine1
Received message 4 from Machine3
Received message 5 from Machine2
I (Machine2) sent message num 5
The WebLogic Server unicast protocol uses standard TCP/IP sockets to send messages between cluster members. Since all modern networks and network devices support TCP/IP sockets, unicast provides a great out-of-the-box experience for WLS clusters: it typically requires no additional configuration, regardless of the network topology between the cluster members. As a result, WebLogic Server changed the default clustering protocol from multicast to unicast in WLS 10.0.
Unicast is just another cluster messaging protocol; Oracle fully supports both the unicast and multicast protocols. As stated previously, parts of the WLS documentation suggest or imply that multicast is only supported for backwards compatibility; this is incorrect, and the A-Team is working with WLS product management to correct it in the WebLogic Server 12c documentation. The choice of protocol should not be influenced by this wording in the WLS documentation. This article tries to provide a balanced view of the two protocols and makes recommendations on how to choose a cluster messaging protocol for a particular environment.
Since TCP/IP sockets are a point-to-point mechanism, WebLogic Server’s unicast implementation uses a group leader strategy to limit the growth in the number of sockets required as the cluster size grows. The cluster is split into one or more groups; each group has a group leader. Group members communicate with the group leader; group leaders also communicate with other group leaders in the cluster. If a group leader dies, the group elects another group leader.
For small clusters of 10 managed servers or fewer, the cluster contains a single group and, therefore, a single group leader. The other servers in the group make a TCP/IP socket connection to the group leader and use it to send and receive cluster messages. When the group leader receives a cluster message from one server, it retransmits that message to all other members of the group. The group leader acts as a message relay to propagate the messages across the cluster.
For larger clusters, the cluster splits into multiple groups of up to 10 managed servers. For example, a cluster of 16 managed servers will have two groups, one with 10 members and one with 6. In these clusters with multiple groups, the group leaders are connected directly to one another. When a group leader receives a cluster message, it retransmits that message not only to the other members of its group but also to every other group leader. This allows the entire cluster to receive every cluster message.
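The group arithmetic above can be sketched as follows. This is back-of-the-envelope illustration only; the group size of 10 reflects the behavior described in this article, not a tunable WebLogic API, and the class is invented for the example.

```java
// Illustrative arithmetic for unicast group sizing and message hops.
public class UnicastGroups {
    static final int MAX_GROUP_SIZE = 10;

    // Number of groups for a cluster of n managed servers (groups of up to 10).
    static int groupCount(int n) {
        return (n + MAX_GROUP_SIZE - 1) / MAX_GROUP_SIZE;
    }

    // Worst-case number of network hops to deliver one cluster message:
    // sending member -> its group leader (1), that leader -> another
    // group leader (2), that leader -> one of its group members (3).
    // With a single group there is no leader-to-leader hop.
    static int maxHops(int n) {
        return groupCount(n) > 1 ? 3 : 2;
    }
}
```

For the 16-server example above, `groupCount(16)` is 2 (the 10-member and 6-member groups), and a message may take up to 3 hops to reach a member of the other group.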
When using unicast, the cluster heartbeat mechanism will remove a server from the cluster if it misses a single heartbeat message, since TCP/IP is a reliable protocol. Unicast checks every 15 seconds to see if it has missed a heartbeat. The extra 5 seconds allows sufficient time for the message to travel up to 3 hops: from the remote group's member to the remote group's leader, then to the local group's leader, and finally to the local group's member. Since the default heartbeat frequency is one heartbeat every 10 seconds, it should take no more than 15 seconds to detect that a server has left the cluster. Of course, socket death detection or failed connection attempts can also accelerate this detection.
So what does this all mean? It means that the WLS unicast cluster messaging protocol:
- Uses a group leader model where a server sends each message directly to the group leader. The group leader is responsible for retransmitting the message to every other group member and other group leaders, if applicable.
- Works out of the box in virtually any environment.
- Requires no additional configuration, regardless of the network topology.
- Uses a single missed heartbeat to remove a server from another server’s cluster membership list.
It is important to note that although unicast is the default protocol, Oracle fully supports both protocols equally.
Regardless of the software being used, clustering protocols generally assume that the participants have sufficient resources to guarantee timely processing of cluster messages—WebLogic Server clustering is no different. Both protocols require that the cluster members get sufficient processing time to send and receive cluster messages in a timely fashion to prevent unnecessary cluster membership changes and the inherent resynchronization costs associated with leaving and rejoining the cluster. While WLS has optimized the resynchronization costs to make them as low as possible (and much lower than in early versions of the product), it is still best to eliminate unnecessary cluster membership changes due to over-utilization of available resources.
When using unicast, it is important to make sure that the group leaders are not resource constrained, since they act as the message relay that delivers a cluster message to the rest of the cluster. Any slowness on their part can impact multiple cluster members and even result in the group electing a new group leader. Contrast this with multicast, where a slow member can really only impact its own membership in the cluster.
On the other hand, multicast requires a network that supports UDP multicast when creating clusters that span subnets. Many network administrators prefer not to allow the propagation of UDP multicast messages across routers, and customers that use third-party network providers may not have sufficient control over the network to support multicast across subnets. Multicast also requires setting an appropriate TTL on the multicast messages to control how far they propagate when clusters span multiple subnets. Unicast eliminates all of this additional configuration, making it simple to span subnets.
As you can see, each protocol has its own benefits so it is up to each administrator to choose the protocol that best meets their needs. The table below highlights some of the differences between multicast and unicast.
|Multicast|Unicast|
|---|---|
|Only option in pre-10.0 versions of WLS; continues to exist in version 10+|Available from WLS 10.0 onwards|
|Uses UDP multicast|Uses TCP/IP sockets|
|Requires additional configuration of routers and the multicast TTL when clustering across multiple subnets|Requires no additional configuration to account for network topology|
|Requires configuring the multicast listen address and port; may need to specify the network interface to use on machines with multiple NICs|Simply uses the server's listen address; supports using the default channel or a custom network channel for cluster communication|
|Each message is delivered directly to and received directly from the network|Each message is delivered to a group leader, which retransmits it to the other group members (N - 1) and any other group leaders (M - 1), if they exist; the other group leaders then retransmit it to their group members, resulting in up to NxM network messages per cluster message; delivery to each cluster member takes between 1 and 3 network hops|
|Every server sees every other server directly|Group leaders act as message relay points to retransmit messages to their group members and other group leaders|
|Cluster membership changes require 3 consecutive missed heartbeat messages to remove a member from the cluster list|Cluster membership changes require only a single missed heartbeat message to remove a member from the cluster list|
Many customers using WebLogic Server version 10.x and newer are using the unicast clustering protocol simply because it is the default protocol when creating new clusters from the WebLogic Console. There is no need to rush out and change protocols—especially if there are no signs of cluster stress such as log messages about lost cluster messages (these can be normal during server startup by the way), unexpected membership changes even though the servers are still running, etc. However, the A-Team suggests that everyone consider their choice of clustering protocols carefully.
In general, the A-Team rule of thumb is to recommend that customers use multicast unless there is a good reason why it isn't possible or practical (e.g., spanning multiple subnets where routers are not allowed to propagate UDP multicast messages). The primary reasons for this are simply efficiency and resiliency to resource shortages.
For customers that need to use unicast, the A-Team recommends that they ensure that their environment is not oversubscribed in terms of resources so that the managed servers, and especially the group leaders, have sufficient bandwidth to handle the timely propagation of cluster messages across the cluster. Oracle is making frequent improvements to optimize unicast clustering so the A-Team recommends that all customers using unicast consult the Oracle Support Note (ID 1397268.1) to get the latest information on optimizing the configuration for the unicast protocol.