This post is the fourth and final section in the series Analyzing Thread Dumps in Middleware.
In this section, we introduce ThreadLogic, a Thread Dump Analyzer tool developed by the author of this blog (Sabha Parameswaran, in collaboration with his Oracle A-Team colleague Eric Gross), describe its capabilities, and walk through a real-world sample of thread dump analysis before concluding the series.
As mentioned earlier, the current set of TDA tools (Samurai/TDA) don't mine the data inside thread dumps or provide a more detailed view of what each thread is doing; they limit themselves to reporting thread state (locked/waiting/running) and lock information. They say nothing about the type of activity within a thread: should it be treated as normal, or does it deserve a closer look? Can a pattern or anti-pattern be applied against it? Are there possible optimizations? Are there any hot spots? Can threads be classified based on their execution cycles?
In order to fill some of the gaps in the existing TDA tools, as well as to provide detailed analysis of WebLogic Server specific thread dumps, the author of this blog (Sabha Parameswaran) decided to develop a custom thread dump analyzer that can suggest a set of actions and recommendations based on predefined thread execution patterns.
Rather than reinvent the wheel, the author, in collaboration with Oracle A-Team colleague Eric Gross, decided to enhance the existing open source TDA version 2.2, leveraging base TDA's thread dump parsing, UI, and reporting, and named the result ThreadLogic. Eric Gross enhanced ThreadLogic to fully support Sun Hotspot, JRockit (support was only partial for JRockit in base TDA v2.2), and IBM JVM thread dumps, and added support for custom categories. Sabha Parameswaran enhanced the tool's analytics: grouping of threads based on functionality and tagging of threads with advisories using predefined rules and patterns, which can be extended to handle additional patterns.
We wish to thank Ingo Rockel, Robert Whitehurst, and the numerous others who contributed to the original TDA; their work allowed us to build a more powerful tool for the entire Java community.
Once a thread dump is parsed and the thread details are populated, each thread is analyzed against matching advisories and tagged appropriately. Threads are also associated with specific thread groups based on their functionality or thread name.
Each advisory has a health level indicating the severity of the issue found, a pattern, a name, a keyword, and related advice. Some examples:
| Thread Advisory Name | Description | Advice |
| --- | --- | --- |
| Large # of WLS Muxer Threads | Large number of WLS Muxer Threads | Reduce the number of WLS Muxer Threads to under 4; use the -Dweblogic.SocketReaders=NoOfThreads flag on the command line |
| Stuck Thread | Thread is stuck; the request is taking a very long time to finish | Check why the thread or call is taking so long. Is it blocked on an unavailable or bad resource, or contending for a lock? Can be ignored if it is doing repeated work in a loop (like adapter threads polling for events in an infinite loop) |
| Hot Spots | Multiple threads executing the same code path | Ensure there are no blocking locks or bottlenecks, sufficient resources are available, and the remote service being invoked is responsive and scaling well to handle increased load |
Each advisory is triggered either by call execution patterns observed in the thread stack or by the presence of other conditions (for example, a thread blocked, or multiple threads blocked on the same lock, can trigger the BlockedThreads advisory). Sometimes a thread might be tagged as IGNORE or NORMAL based on its execution logic, or might be tagged more specifically as a JMS send or receive client or a servlet thread.
The health levels, in descending order of severity, are FATAL (meant for deadlocks, STUCK threads, a blocked Finalizer, etc.), WARNING, WATCH (worth watching), NORMAL, and IGNORE. The highest severity found among the threads in a group is promoted to the thread group's health level, and the same promotion is repeated at the thread dump level. Multiple advisories can be tagged to a thread, a thread group, or a thread dump.
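To make the mechanism concrete, here is a rough sketch in Java of how pattern-based advisories and health-level promotion could work. It is illustrative only: the class, field, and method names are invented for this example and are not ThreadLogic's actual API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

// Hypothetical sketch of pattern-based advisory tagging; the names here are
// illustrative, not ThreadLogic's actual implementation.
public class AdvisorySketch {

    // Severity levels, most severe first, mirroring the levels described above.
    enum HealthLevel { FATAL, WARNING, WATCH, NORMAL, IGNORE }

    record Advisory(String name, String keyword, HealthLevel health, String advice) {
        boolean matches(String stackTrace) {
            // A simple regex/keyword check against the raw stack trace text.
            return Pattern.compile(keyword).matcher(stackTrace).find();
        }
    }

    // Tag one thread's stack with every advisory whose pattern appears in it.
    static List<Advisory> tagThread(String stackTrace, List<Advisory> catalog) {
        List<Advisory> hits = new ArrayList<>();
        for (Advisory a : catalog) {
            if (a.matches(stackTrace)) {
                hits.add(a);
            }
        }
        return hits;
    }

    // Promote the most severe health level found among a group's threads.
    static HealthLevel groupHealth(List<List<Advisory>> threadsInGroup) {
        HealthLevel worst = HealthLevel.IGNORE;
        for (List<Advisory> threadAdvisories : threadsInGroup) {
            for (Advisory a : threadAdvisories) {
                if (a.health().ordinal() < worst.ordinal()) {
                    worst = a.health();
                }
            }
        }
        return worst;
    }
}
```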
[Figure: Threads in a thread dump tagged with Advisories/Health Levels]
Threads are associated with thread groups based on functionality or thread names. Additional patterns exist to tag and group other threads (such as iWay Adapter, SAP, and Tibco threads). The summary page reports the health level of each group, the total number of threads, the threads that are blocked, the critical advisories found, and so on.
[Figure: Thread Groups Summary]
The critical advisories (at the Warning/Fatal health levels) found in individual threads are promoted to the parent thread group and reported in the thread group summary page.
[Figure: Critical Advisories for Thread Group]
One can see that the thread groups are divided into two buckets: WLS and non-WLS related threads. JVM, Coherence, LDAP, and other unknown custom threads go under the non-WLS bucket, while the WLS, Muxer, Oracle, SOA, JMS, and Oracle Adapter threads all fall under the WLS bucket.
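A minimal sketch of how threads might be bucketed into groups by thread-name patterns follows; the patterns and group names here are guesses for illustration, not ThreadLogic's actual configuration.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative only: a name-pattern to group mapping similar in spirit to how
// threads might be bucketed into WLS and non-WLS groups.
public class ThreadGroupingSketch {

    private static final Map<String, String> NAME_PATTERNS = new LinkedHashMap<>();
    static {
        NAME_PATTERNS.put(".*Muxer.*", "WLS: Muxer");
        NAME_PATTERNS.put(".*weblogic\\.kernel\\.Default.*", "WLS: Default Execute Threads");
        NAME_PATTERNS.put(".*Coherence.*", "Non-WLS: Coherence");
        NAME_PATTERNS.put(".*Finalizer.*|.*GC task.*", "Non-WLS: JVM");
    }

    // Return the first group whose pattern matches the thread name,
    // falling back to a catch-all bucket.
    static String classify(String threadName) {
        for (Map.Entry<String, String> e : NAME_PATTERNS.entrySet()) {
            if (threadName.matches(e.getKey())) {
                return e.getValue();
            }
        }
        return "Non-WLS: Other";
    }
}
```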
Clicking on the individual threads will display the advisories and thread stack.
[Figure: Advisories and details at thread level]
The details of an advisory pop up on mouse-over of the advisory link.
The advisories are color-coded and details can be highlighted.
[Figure: Colored advisories for individual threads]
Sub-groups are also created within individual thread groups based on warning levels, hot call patterns (multiple threads executing the same code section), threads doing remote I/O (socket or database reads), etc.
The following snapshot shows an example of a hot call pattern, where multiple threads appear to be executing the same code path (all are attempting to update the MBean Server with a new MBean).
[Figure: Hot Call Pattern - multiple threads exhibiting similar code execution]
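Conceptually, hot call pattern detection boils down to fingerprinting each thread by its top few stack frames and flagging fingerprints shared by many threads. The sketch below illustrates the idea; the names and the threshold are assumptions, not the tool's actual logic.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Rough sketch of hot-call-pattern detection: group threads by their top
// few stack frames and keep any fingerprint shared by many threads.
public class HotCallPatternSketch {

    static Map<String, List<String>> findHotPatterns(
            Map<String, List<String>> stacksByThreadName, int topFrames, int threshold) {

        // Fingerprint each thread by its top N frames.
        Map<String, List<String>> threadsByFingerprint = new HashMap<>();
        for (Map.Entry<String, List<String>> e : stacksByThreadName.entrySet()) {
            List<String> frames = e.getValue();
            String fingerprint = String.join("|",
                    frames.subList(0, Math.min(topFrames, frames.size())));
            threadsByFingerprint
                    .computeIfAbsent(fingerprint, k -> new ArrayList<>())
                    .add(e.getKey());
        }

        // Keep only fingerprints shared by at least `threshold` threads.
        threadsByFingerprint.values().removeIf(threads -> threads.size() < threshold);
        return threadsByFingerprint;
    }
}
```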
The merge functionality has been enhanced to report on the progress of each thread across the thread dumps. Based on the order of the thread dumps, the thread's stack trace is compared across every pair of consecutive dumps.
[Figure: Merged view showing progress information for individual threads]
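The underlying progress check can be thought of as comparing a thread's stack across consecutive dumps: an unchanged stack suggests the thread is stuck or idle, while a changing stack suggests progress. A simplified sketch, not the tool's actual merge algorithm:

```java
import java.util.List;
import java.util.Objects;

// Simplified sketch of the "is this thread progressing?" check.
public class ThreadProgressSketch {

    // Each element of `stacksOverTime` is the thread's stack trace (one frame
    // per string) in one dump, ordered by dump capture time.
    static boolean appearsStuck(List<List<String>> stacksOverTime) {
        for (int i = 1; i < stacksOverTime.size(); i++) {
            if (!Objects.equals(stacksOverTime.get(i - 1), stacksOverTime.get(i))) {
                return false; // stack changed between consecutive dumps => progressing
            }
        }
        return true; // identical stack in every dump => worth a closer look
    }
}
```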
Merged reporting of individual thread stack traces (a feature present in base TDA version 2.2) is also supported.
[Figure: Merged thread stack traces across thread dumps]
Merging can also be done across multiple thread dump log files (as in the case of IBM JVMs, which create a new log file containing the thread dump each time a dump is requested).
Thanks to the advisories and health levels, it is easy for users to quickly understand the usage patterns, hot spots, and thread groups, and to spot the patterns or anti-patterns already captured in the advisory list.
As an example of an anti-pattern: a servlet thread should not be waiting for an event to occur, as this translates into poor performance for the end user. Similarly, synchronous JMS consumers may be less performant than asynchronous consumers. Too many WLS Muxer threads is not advisable. If WLS Muxer or Finalizer threads are blocked on unrelated locks, that is a fatal condition. It is okay to ignore the STUCK warning issued by WLS for pollers like AQ Adapter threads, but not for threads that are handling servlet requests.
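As a hypothetical illustration of the first anti-pattern, the sketch below shows a request-handling method that parks its servlet thread on a latch until a backend event fires; the names are made up for this example.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Illustrative anti-pattern: a request-handling (servlet) thread blocking until
// some backend event fires. While it waits, a server execute thread is pinned
// and the end user sees the entire wait as response time.
public class ServletWaitAntiPatternSketch {

    void handleRequest(CountDownLatch backendEvent) throws InterruptedException {
        // Anti-pattern: the servlet thread parks here until another component
        // counts the latch down, tying up the execute thread the whole time.
        backendEvent.await(30, TimeUnit.SECONDS);
        // ... build and write the response ...
    }
}
```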
The thread groups help in bunching together related threads: SOA Suite users can see how many BPEL Invoke and Engine threads are in use, B2B users can see the number of JMS consumers/producers, WLS users can look at the condition and health of the Muxer threads, and similarly for the JVM/Coherence/LDAP/other thread groups.
The merged report lets the user see at a glance the critical threads and check whether they are progressing, instead of wading through numerous threads and the associated thread dumps.
We hope this tool can really help both beginners and experts do their jobs more quickly and efficiently when it comes to thread dumps.
Here is a real-world sample of thread dump analysis. A customer was planning a POC (Proof of Concept) of WebLogic Portal Server (WLP) on the new Oracle Exalogic platform to measure its performance relative to their existing configurations. The test configuration used a cluster of WLP servers communicating via multicast for cluster membership and other service announcements. As soon as the servers were brought up, they became unresponsive. The server instances could not serve any requests: neither traffic to the Portal application nor WLS Admin Server requests for status from the server instances. The following errors were getting thrown in the server:
• UnsyncCircularQueue$FullQueueException: Queue exceeded maximum capacity of: '65536' elements.
There were also frequent log messages about the p13n cache getting updated and flushed repeatedly. The p13n cache is the personalization cache used internally by Portal to handle portal personalization. Each server in the cluster is kept informed of changes in the cache via multicast broadcasts. When the customer disabled the p13n cache, the servers regained normalcy and responsiveness. But the p13n cache is a requirement for Portal application functionality, and it is not possible to leave it disabled.
The FullQueueException essentially indicated that WLS's internal request queue was getting overwhelmed with requests and had started to ignore/drop new requests, as it already had 64K requests waiting to be executed. These requests can be internal (such as transaction timeouts, schedulers, async processing) or externally generated (such as client HTTP requests). The storm of requests, coupled with the p13n cache being a key factor in the issue, indicated that the p13n cache was somehow triggering an internal flood of requests to the server. It was decided to take thread dumps to understand what the p13n cache was executing.
Analyzing the thread dumps shed light on a weird behavior.
The thread stack above indicates that the MulticastReceiver used internally by WLS to handle multicast announcements received an announcement that was handled by the p13n cache. It appeared to trigger a flush of local keys in the cache, but this was then turned into a flush of cluster keys, which in turn caused a MulticastSender to send out a cluster-wide broadcast.
This essentially means that for every announcement landing on a server instance, the instance resends the announcement to the other members, each of which repeats the same behavior, leading to infinite looping of the same message and a never-ending multicast packet storm.
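The feedback loop can be illustrated with a hypothetical sketch (the class and method names below are invented for clarity and are not the actual p13n code): the bug is that the handler for a received cluster flush announcement itself broadcasts another flush.

```java
// Hypothetical illustration of the feedback loop described above.
public class MulticastStormSketch {

    interface ClusterBroadcaster {
        void broadcastFlush(String key); // multicast to every other cluster member
    }

    static class BuggyCacheListener {
        private final ClusterBroadcaster broadcaster;

        BuggyCacheListener(ClusterBroadcaster broadcaster) {
            this.broadcaster = broadcaster;
        }

        // Called when a flush announcement arrives from another cluster member.
        void onClusterFlushReceived(String key) {
            flushLocalKey(key);
            // BUG: re-broadcasting a message that was itself a broadcast means every
            // member echoes every announcement to every other member, forever.
            broadcaster.broadcastFlush(key);
        }

        void flushLocalKey(String key) {
            // evict the entry from the local cache only; no network traffic needed
        }
    }
}
```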
Analyzing the product code against the stack trace, there appeared to be a mismatch in the CacheManager.flushKeys() implementation: the line numbers in the stack did not match the product code. So it was apparent that some variant of the CacheManager class was being picked up by the application while handling the cluster announcements.
Using the WLS ClassLoader Analysis Tool (CAT), we were able to drill down to where this particular CacheManager class was being loaded from. It was packaged inside a jar that had been added to the Portal EAR application libraries. Removing that jar resolved the problem. Later investigation revealed that the jar came from an older version of the Portal product and should not have been used with the latest version of Portal being tested on Exalogic. The analysis of a single thread stack trace helped pinpoint the root cause of all the hang conditions in this scenario.
Although thread dumps can shed light on many things, they also have limitations. Even when a thread dump does not give a direct solution, it can provide good pointers as to where the problem lies and what the hot spots are.
Thread dump analysis is a valuable and key weapon in every Java developer's and architect's arsenal when it comes to understanding, identifying, and resolving performance and other issues. I hope this series (along with the new, enhanced TDA tool) makes it easier for people to troubleshoot, tune performance, and get a better handle on complex Java/JEE systems.