A previous blog post discussed how to generate and consume user accounts when testing WebCenter with OATS. In this scenario, the generated user accounts can only be used once. This blog post addresses the additional complexity of multiple scripts that each do a part of the overall test case, with each script using the results of the previous step.
Each OATS script tests a part of the overall functionality of the customer application, then records the users that successfully completed that step. The next script starts with that list, and continues with another set of functionality. While it might be possible to create a single massive script, there are script size limitations and any change to the customer application might require re-recording the entire massive script. This approach allows for a certain level of modularity, while still passing along only users that have successfully completed each individual script.
The previous blog post discussed how to generate the initial databank, create the user accounts, capture successful iterations, and then consume the user accounts in a subsequent script. This blog post discusses the additional complexity of multiple subsequent scripts, potentially running at the same time.
As previously mentioned, a log statement at the end of a script writes out the databank record of each successful iteration. With multiple scripts, each script needs to write its log message with a string that uniquely identifies that script. Ideally, this string describes the functionality that the script is testing. The only hard requirement is that the string must not appear anywhere else in the logs, so avoid using any string that OATS itself might log.
After each script is run, the log files can be searched for that script's unique string, and the results collected into a data file representing the successful completions of that script. The previous blog post discussed exactly how this is done. For multiple scripts, however, multiple searches are needed so that the records for each unique string are extracted.
After a script has been run and the results extracted from the log files, there will be a databank containing the successful completions of that script. The next script in the sequence needs to draw down from that databank, consuming the records that the previous script generated, but consuming each record only once. The idea is to copy off the records that are about to be used and remove them from the databank. Since each script logs a unique string, there will be one databank per script, and each script in the sequence draws down from the databank of the script before it.
The script sequence always starts with the initial generation of the user accounts. The log message at the end of this script would write out the databank row and include a unique string of "CREATEACCOUNT" or similar. An extraction process would search through the log files for that unique string and create a new databank containing only those accounts that were successfully created. The next script would draw down from that databank, perform the functionality to be tested, and end by writing out a log statement that uniquely identifies the script. Each subsequent script would follow the same pattern, drawing down from the databank created by the previous script, and writing out what will become the databank for the next script.
For example, the initial CreateAccount script would write out a log message that starts with the unique string "CREATEACCOUNT" followed by the databank line for that iteration. The extraction process would search the log files and accumulate a CREATEACCOUNT databank. The next Func1 script would draw down from that CREATEACCOUNT databank, and would end by writing out a log message that starts with the unique string "FUNC1" and the databank line. Part of the extraction process would search the log files and accumulate a FUNC1 databank, that the next Func2 script would consume. The pattern would continue, with the Func2 script writing out a log message starting with the unique string "FUNC2" and the log file extraction process creating a FUNC2 databank that the Func3 script would consume. Each subsequent script would draw down and use the results of the previous script.
Since each script writes out a log message containing a string that uniquely identifies the script, multiple scripts can be run simultaneously. The extraction process just searches the log files for the various unique strings, putting the results into separate databanks. Obviously, the script to initially create the accounts has to be run first. Also, each script has to be run through enough successful iterations for the subsequent script to have enough records to draw down successfully.
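A sketch of that multi-string extraction, using a loop over the unique strings (the sample log contents and the output file naming here are illustrative; the later examples in this post use names like create_account_values.txt):

```shell
# A tiny illustrative stand-in for the real OATS log file.
printf 'CREATEACCOUNTSTRING,user001,Pass1\n' >  ats_log_file.log
printf 'CHOOSEPRODUCTSTRING,user000,Pass0\n' >> ats_log_file.log
printf 'unrelated OATS log noise\n'          >> ats_log_file.log

# One search per unique string; each match list accumulates into the
# draw-down file for the next script in the sequence.
for TAG in CREATEACCOUNT CHOOSEPRODUCT; do
    grep "${TAG}STRING" ats_log_file.log >> "${TAG}_values.txt"
done
```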
Suppose the customer application is conducting a survey, but each person is only allowed to vote once. As expected, the first OATS script would create an account. The initial databank for the CreateAccount script would be generated using the process described in the previous blog post. That CreateAccount script would end with a log statement that writes out the unique string CREATEACCOUNTSTRING followed by the databank row for that iteration.
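Assuming illustrative databank columns of username and password, the resulting log line might look like:

```
CREATEACCOUNTSTRING,tester0001,Welcome1
```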
The extraction process would create a new databank by searching the log files for that unique string, such as:
grep CREATEACCOUNTSTRING ats_log_file.log >> create_account_values.txt
Note that the double-arrow (">>") appends to the file rather than overwriting it. This allows the CreateAccount script to be run multiple times, with each run accumulating more records in the resultant databank. Also note that logs from multiple OATS agents can be concatenated into the single ats_log_file.log file, or the search can be run separately on each agent's log file.
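Both multi-agent options can be sketched as follows (the agent log file names and their contents are illustrative):

```shell
# Illustrative stand-ins for two agents' log files.
printf 'CREATEACCOUNTSTRING,user001,Pass1\n' > agent1_oats.log
printf 'CREATEACCOUNTSTRING,user002,Pass2\n' > agent2_oats.log

# Option 1: concatenate the agent logs, then search the combined file.
cat agent1_oats.log agent2_oats.log > ats_log_file.log
grep CREATEACCOUNTSTRING ats_log_file.log >> create_account_values.txt

# Option 2: search each agent log directly; -h suppresses the
# file-name prefix so the output stays a clean databank row.
grep -h CREATEACCOUNTSTRING agent1_oats.log agent2_oats.log >> combined_values.txt
```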
In this example, the next step in the survey application would be to choose a product. So, the ChooseProduct script would need to draw down from the results of the CreateAccount script, such as:
head -$1 create_account_values.txt > foo
cat header.txt foo > ChooseProduct.csv
sed -e '1,'$1'd' create_account_values.txt > foo2
mv foo2 create_account_values.txt
The number of rows to be extracted is passed as a parameter on the command line and is referenced by the variable $1. The first line copies that number of rows off to a temporary file. The second line adds the header row to create the actual databank. The last two lines delete that number of rows from the draw-down file. This shell script would be named for the script about to be run, such as DrawDownChooseProduct.sh, and would be invoked by supplying the number of rows, such as:

sh DrawDownChooseProduct.sh 100
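The draw-down behavior can be sanity-checked with dummy data before wiring it into the real run (all file contents below are illustrative; the commands are the same four lines, with $1 replaced by a fixed count of 3):

```shell
# Illustrative draw-down file with four rows, plus a header file.
printf 'user001,Pass1\nuser002,Pass2\nuser003,Pass3\nuser004,Pass4\n' > create_account_values.txt
printf 'username,password\n' > header.txt

# Same commands as the draw-down script, with the row count set to 3.
N=3
head -$N create_account_values.txt > foo
cat header.txt foo > ChooseProduct.csv
sed -e '1,'$N'd' create_account_values.txt > foo2
mv foo2 create_account_values.txt

# ChooseProduct.csv now holds the header plus the first three rows,
# and create_account_values.txt holds only the one remaining row.
```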
So, the ChooseProduct OATS script would be run, using the ChooseProduct.csv databank that was just created. The end of that script would write out a similar log message, but with a unique string "CHOOSEPRODUCTSTRING" that correlates to the script being run. The extraction process would create a new databank by searching the log files for that unique string, such as:
grep CHOOSEPRODUCTSTRING ats_log_file.log >> choose_product_values.txt
In this example, after a product has been chosen, the next step in the survey application would be to provide feedback. Only those users that successfully chose a product can provide feedback. So, the ProvideFeedback script would need to draw down from the results of the ChooseProduct script, such as:
head -$1 choose_product_values.txt > foo
cat header.txt foo > ProvideFeedback.csv
sed -e '1,'$1'd' choose_product_values.txt > foo2
mv foo2 choose_product_values.txt
This shell script would be named for the script about to be run, such as DrawDownProvideFeedback.sh, and would be invoked by supplying the number of rows, such as:

sh DrawDownProvideFeedback.sh 100
So, the ProvideFeedback OATS script would be run, using the ProvideFeedback.csv databank that was just created.
For the situation described here, there is a sequence of multiple OATS scripts, each of which needs to consume the results of the previous script's execution, and each user account can only be used once. The first script would create the user accounts and write out log messages containing only the databank rows for users that were successfully created. The next script would draw down from those databank rows and write out a new set of databank rows for only those iterations that succeeded. Each subsequent script draws down from the previous databank and generates a new databank.
While setting up the scripts and databank names can be complicated, the result is quite powerful and easy to use. After each set of script iterations, a single command searches the OATS log files and extracts each unique string, creating new databank files for each script that was run. Preparing for the next set of script iterations only requires drawing down the desired number of databank rows for each script about to be executed.