
Best Practices from Oracle Development's A‑Team

Discover Utility: A Tool to Collect Comprehensive Configuration Details of a Fusion Applications Instance

Introduction

Oracle Fusion Applications is a large collection of artifacts at various levels - from Application Modules, URLs and Web services at the top to storage, hostnames and IPs at the lower layers, with numerous connection and configuration settings and tunable parameters within and across the various products in each layer.  Administrators often need the values of these configuration settings for routine activities like patching, upgrades and diagnostics, as well as for Lifecycle Management (LCM) activities such as cloning or P2T data copy.  Short of peering into the system with proper access to the stack and knowledge of product internals, the main sources of this information are the documentation written during system provisioning and user-maintained change logs.  Finding such information can be a significant problem for system and application administrators, especially when the system documentation is incomplete.  There is a new tool ready to help with this - it is called the "Discover Utility" and is part of the upcoming Fusion Applications Release 12 LCM Utilities.  This post provides an introduction to the Fusion Applications Discover tool.

Main Article

The Discover tool does two important tasks.  First, it systematically gathers data about a Fusion Applications instance by going through a set of known repositories (in the filesystem and databases) and querying online servers.  Next, it summarizes the data in useful formats that can be used as input to other activities like Cloning and P2T.  This can save a lot of the time and effort it would take to get the same data through a manual review of the system.  In addition, it can be used to validate or update existing system configuration documentation, or even to create new documentation - useful for purposes like audits.  Discover is a read-only tool and has little impact on system performance.  Although system configurations typically do not change, it can be re-run as desired to review updated information such as newly installed patches.

Given below are the steps to get and use the Discover tool.

Getting and Installing the Discover tool

The Discover tool is part of the upcoming Fusion Applications Release 12 LCM Clone and P2T utilities, which are distributed on request through Oracle Support.  Please log a support request to get them.  Though the tool is part of the R12 release, it is quite usable in previous releases as well for the core use case of gathering configuration information.

The LCM Clone and P2T tools come as standalone zip files that need only a Java runtime. You can install them by simply unzipping them into any directory on the Fusion Applications primordial host.
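For example, a minimal install sketch might look like the following (the archive name is only a placeholder - use the name of the zip file you received from Oracle Support):

# Unzip the LCM utilities into a directory on the primordial host.
# The archive name below is a placeholder for the zip received from Oracle Support.
mkdir -p /u01/app
cd /u01/app
unzip /tmp/fa_r12_lcm_utilities.zip
# The Discover tool is then available under ./Quadro/app/discover/bin/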

Once unzipped, the Discover part of the directory structure will look as given below.

Discover directory structure

Using the Discover Tool

The Discover tool should be run from the Fusion Applications primordial host (the host running the CommonDomain AdminServer), and from there it should have access to the filesystem and hosts of all the FA servers as well as the IdM and Webtier servers.  If the directories are not shared, the tool can be copied over to the other hosts and run on each host to collect its data.

For the Discover tool to run successfully, the complete system (all DBs and servers for FA and IdM) should be fully up and running.

Assuming you unzipped the tool into the directory /u01/app, the command to run the tool is:

cd /u01/app
./Quadro/app/discover/bin/discover.sh -e FA_Dev_env

Where FA_Dev_env is simply a name you are giving to this FA environment for the tool to track.

The tool will ask a few questions and use your input to run its queries and gather all the data.

The Discover tool needs minimal input for itself - it only requires JAVA_HOME to be set (best set to the JDK used by Fusion Applications), the SYS passwords for the FA and IdM databases, and the superuser names and passwords for all the FA and IdM domains.
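As a minimal sketch of the setup, assuming the Fusion Applications JDK sits at /u01/app/fa/fusionapps/jdk (substitute your own location), a run might start like this:

# Point JAVA_HOME at the JDK used by Fusion Applications; the path below
# is an assumption - use your own environment's JDK location.
export JAVA_HOME=/u01/app/fa/fusionapps/jdk
export PATH=$JAVA_HOME/bin:$PATH
cd /u01/app
./Quadro/app/discover/bin/discover.sh -e FA_Dev_env
# The tool then prompts for the SYS passwords of the FA and IdM databases
# and the superuser credentials for the FA and IdM domains.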

Depending on the system and hardware, it may run for about 30 minutes.  It is completely non-intrusive: it is a read-only tool that does not update or change anything in the system, and it writes its output and temporary work files within its own directory in the filesystem.

When the tool finishes its run, the directory structure above will change to include a new directory, ~/Quadro/environment/FA_Dev_Env, as shown below.

Directory Structure after a Run

The new directory ~/Quadro/environment/FA_Dev_Env contains the configuration data collected from this Fusion Applications environment.

  1. The subdirectory with a long numeric name (the date-timestamp of the last run) has four further directories under it: the output directory holds the output files in CSV format; the input directory keeps the response file the tool builds from the values you provided earlier, along with other input it gathers itself to run its scripts; and the log and temp directory names reflect their content.
  2. The FusionAppsRef directory contains a sample response file to use as a template when running in a new environment.
  3. If the tool is run again, it archives the timestamp-named directory and creates a fresh directory with a new timestamp.
  4. If the tool needs to be run separately on the FA, IdM and DB hosts to collect their respective data, use the command line option -t <operation> as given below:

# when running in the FA host :
./Quadro/app/discover/bin/discover.sh -e FA_Dev_env -t discover.fa.collect
# when running in the IDM host :
./Quadro/app/discover/bin/discover.sh -e FA_Dev_env -t discover.idm.collect
# when running in the DB host :
./Quadro/app/discover/bin/discover.sh -e FA_Dev_env -t discover.db.collect

If for any reason the above hosts do not share common storage from which to run Discover, you can copy the Discover directory to each host and run the above commands there.
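For example, assuming there is no shared storage, placeholder hostnames (idmhost, dbhost) and the /u01/app install location used earlier, the copy could be done with scp (or any other file transfer method you prefer):

# Copy the unzipped Discover tree from the FA primordial host to the other hosts.
# The hostnames, user and paths below are placeholders.
scp -rp /u01/app/Quadro oracle@idmhost:/u01/app/
scp -rp /u01/app/Quadro oracle@dbhost:/u01/app/
# Then run the matching discover.idm.collect / discover.db.collect commands
# shown above on each of those hosts.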

Once the run completes on each of the hosts above, it will create timestamp-named directories as before on each host - for example:

<FA_Host>/Quadro/environment/FA_Dev1_Env/20161102012221

<IDM_Host>/Quadro/environment/FA_Dev1_Env/20161102012231

<DB_Host>/Quadro/environment/FA_Dev1_Env/20161102012241

You can merge these separate data sets into a single collection as follows:

Copy the IdM and DB timestamp directories into the FA environment's input subdirectory on the FA host - for example:

cp -rp <IDM_Host>/Quadro/environment/FA_Dev1_Env/20161102012231  <FA_Host>/Quadro/environment/FA_Dev1_Env/input

cp -rp <DB_Host>/Quadro/environment/FA_Dev1_Env/20161102012241  <FA_Host>/Quadro/environment/FA_Dev1_Env/input

and then run the command below in the FA host :

./Quadro/app/discover/bin/discover.sh -e FA_Dev_env -t discover.analyze.manual

and this will create a new timestamp directory (say, 20161102012251) on the FA host under the environment subdirectory, containing the merged data from FA, IDM and DB.
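As a quick check (the paths and environment name follow the example above), you can confirm that the merge run produced a new timestamp directory on the FA host:

# List the timestamp directories for this environment on the FA host;
# the newest entry holds the merged FA, IdM and DB data.
ls -d ./Quadro/environment/FA_Dev1_Env/*/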

Why is merged data important? Because it lets the tool create the input properties files for the Clone and P2T tools, as shown in the next step.

Using Discover to create Input files for Clone and P2T

This brings us to the most valuable use of this tool: generating the input properties files for the Clone and P2T tools.

Because we have already gathered all the data about the system by running the tool as described above, this step simply massages the collected data into the formats needed by the LCM tools.

To get a clone input file, the command to run is :

./Quadro/app/discover/bin/discover.sh -e FA_Dev_env -t clone.fa.discover.esp -b 20161102012251 

and to get the P2T input file, the command to run is :

./Quadro/app/discover/bin/discover.sh -e FA_Dev_env -t p2t.fa.discover.esp -b 20161102012251 

where 20161102012251 is the latest timestamp directory located under the ./Quadro/environment/FA_Dev_Env directory.

The above commands will produce faclone.rsp and p2t.rsp files under the ./Quadro/environment/FA_Dev_Env/<timestamp_dir>/output  directory.
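As a quick sanity check, you can list the output directory of the run and skim the generated response files before feeding them to the Clone and P2T tools (the timestamp below is just the example value used earlier):

# List everything Discover wrote for this run, including the CSV data files.
ls -l ./Quadro/environment/FA_Dev_Env/20161102012251/output/
# Skim the generated Clone and P2T response files.
head ./Quadro/environment/FA_Dev_Env/20161102012251/output/faclone.rsp
head ./Quadro/environment/FA_Dev_Env/20161102012251/output/p2t.rsp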

Note: The Clone and P2T response file generation may not work for releases before R12, but the configuration data should be useful enough to build these response files manually.

Sample Data

Below is a partial sample of the data files that Discover gathers during its data collection activities.  The filenames are fair indicators of their content.

Sample Discover Data Files

Summary

Gathering system information for ongoing operations, and especially for Cloning and P2T activities, is not easy, and manual review can lead to errors. The Discover tool makes this work much easier and produces error-free results that provide ready input for Clone and P2T work.
