====== Debugging ======

Learning Objectives:
  * [[#Debugging features]]
  * [[#Debugging test generation]]
  * [[#Debugging automation script]]
  * [[#Logging]]

===== Debugging features =====
You may encounter issues during test generation and during model execution (running your scripts). Although the issues may overlap, debugging test generation has different needs than debugging model execution.

Below is a list of features that are helpful for debugging:
  * debug run mode - step through model execution interactively
  * play run mode - visualize model execution, with the ability to pause
  * [[../ide_monitor | Execution Monitor]] - a console to interrogate the model during execution
  * graphs - helpful for debugging test sequence issues
  * logging - write messages to the //Script Log// or //Server Log//

===== Debugging test generation =====
Debugging test generation issues requires knowledge of the sequencer used. If you need a refresher on how sequencers work, check out [[../sequencers | Sequencers]].

First, identify the sequencer being used and the error message, if any, that is displayed.

The sequencer in use is shown on the application toolbar in the upper-left corner of the IDE. Make sure that the expected sequencer has been selected.

==== Missing transitions ====
For the sequencers to generate test sequences from the model, the following conditions must hold for all models:
  * there must be a path from the initial state to every state in the model
  * there must be a path from every state to a final state

A common issue encountered during test generation is a missing transition. An example of such a model:
{{wiki:overview:tut_debugging_state_model_missing_trans.png?400}}

You may have spotted the error in this model: it breaks the first condition. There is no path from the initial state "Start" to "State 2", so it is impossible to cover "State 2" and its outgoing transition. If you run test generation using //Optimal// on this model, you get the following error message:
   openOptima.NoSolutionException: Unable to reach following states from state 1: state 2
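
To make the two conditions concrete, below is a small self-contained sketch (illustrative Java, not TestOptimal's API; the edges are a plausible reconstruction of the pictured model) that checks a model by walking the transition graph forward from the initial state and backward from the final state:

<code java>
import java.util.*;

public class ModelCheck {
    // Collect every state reachable from "start" by following edges.
    static Set<String> reachableFrom(String start, Map<String, Set<String>> edges) {
        Set<String> seen = new HashSet<>();
        Deque<String> queue = new ArrayDeque<>(List.of(start));
        while (!queue.isEmpty()) {
            String s = queue.pop();
            if (seen.add(s)) {
                queue.addAll(edges.getOrDefault(s, Set.of()));
            }
        }
        return seen;
    }

    public static void main(String[] args) {
        // The broken example model: "State 2" is never the target of any transition.
        Map<String, Set<String>> edges = new HashMap<>();
        edges.put("Start", Set.of("State 1"));
        edges.put("State 1", Set.of("End"));
        edges.put("State 2", Set.of("End"));
        Set<String> states = Set.of("Start", "State 1", "State 2", "End");

        // Condition 1: every state must be reachable from the initial state.
        Set<String> fromStart = reachableFrom("Start", edges);
        for (String s : states)
            if (!fromStart.contains(s))
                System.out.println("unreachable from initial state: " + s);

        // Condition 2: every state must reach the final state (walk reversed edges).
        Map<String, Set<String>> reversed = new HashMap<>();
        edges.forEach((from, tos) -> tos.forEach(
            to -> reversed.computeIfAbsent(to, k -> new HashSet<>()).add(from)));
        Set<String> toEnd = reachableFrom("End", reversed);
        for (String s : states)
            if (!toEnd.contains(s))
                System.out.println("cannot reach final state from: " + s);
    }
}
</code>

Running this prints ''unreachable from initial state: State 2'', mirroring the //NoSolutionException// above.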

==== Transition guards not working ====
Another commonly encountered issue with test generation involves transition guards. A transition guard takes effect only during automation, because the variables used in the guard condition are evaluated and set by the automation script. During test sequence generation, you should assume that all transition guards are ignored.
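
The sketch below illustrates the difference (plain Java for illustration, not TestOptimal syntax; the transition and variable names are hypothetical): a guard is a predicate over variables that exist only at run time, so at generation time there is nothing to evaluate:

<code java>
import java.util.*;
import java.util.function.Predicate;

public class GuardDemo {
    public static void main(String[] args) {
        // Hypothetical guard on a "checkout" transition: cart must not be empty.
        Predicate<Map<String, Object>> guard =
            vars -> (int) vars.getOrDefault("cartSize", 0) > 0;

        // Generation time: no automation script has run, so no variables are set.
        // The sequencer does not evaluate guards and may still route a test
        // sequence through the guarded transition.
        System.out.println("sequencer may include 'checkout' regardless of the guard");

        // Execution time: the automation script has set the variable, so the
        // guard is actually evaluated.
        Map<String, Object> vars = new HashMap<>();
        vars.put("cartSize", 0);  // set by the automation script at run time
        System.out.println("guard passes at run time: " + guard.test(vars));
    }
}
</code>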

==== Test case too long ====
A test case is represented as a path from the initial state to a final state. You may notice that some test cases generated by some sequencers are longer than expected. This is expected behavior, especially for the //Random// and //Optimal// sequencers.

If you wish to get a set of shorter test cases, choose the //Priority// sequencer. For more details about the sequencers, check out [[../sequencers | Sequencers]].

===== Debugging automation script =====

Automation scripts are called as the model executes. If your automation script is not functioning as expected, try any of the following methods to troubleshoot the problem:
  * check the //Server Log// file if you are receiving runtime errors
  * add debugging messages to your script and check the //Script Log// file (see the example under [[#Logging]])
  * pause the model and check whether the AUT is in the expected state
  * dynamically execute scripts and validate the results:
    * highlight the script and press //Ctrl-E//
    * execute the script in the Debug Console in the [[../ide_monitor | Monitor]] tab

===== Logging =====

//TestOptimal// logs runtime errors to the //Server Log// file.

You can also log messages from your scripts by using $SYS.log(...). The messages are written to the //Script Log// file, which is cleared before each model execution.
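
For example, a script might log a marker and the value it is about to validate (a minimal fragment; $SYS.log(...) is the only call documented here, and the state and variable names are hypothetical):

<code java>
// Hypothetical script fragment: log a marker on entering a state, plus the
// value the script observed, then inspect the Script Log file after the run.
$SYS.log("entering state [checkout]");
$SYS.log("cartSize=" + cartSize);  // cartSize: a variable set earlier in the script
</code>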

Some plugins may also create driver-specific log files, for example Selenium's browser drivers.

All log files can be found in the //logs// folder.