  * [[#Debugging test generation]]
  * [[#Debugging automation script]]
  * [[#Logging]]
  
===== Debugging test generation =====

The sequencer used is shown on the application toolbar in the upper-left corner of the IDE.  Make sure that the expected sequencer has been chosen.
  
==== Missing transitions ====
In order for the sequencers to be able to generate test sequences from the model, the following conditions must hold true for all models:
   * there must be a path from initial state to every state in the model
If any of these conditions is not met, test generation fails with an error similar to the following:

   openOptima.NoSolutionException: Unable to reach following states from state 1: state 2
  
==== Transition guards not working ====
Another commonly encountered issue with test generation is the use of transition guards.  A transition guard is only applied during automation, because the variables used in the guard condition are only evaluated/set by the automation script.  During test sequence generation, you should assume that all transition guards will be ignored.
  
  
==== Test case too long ====
A test case is represented as a path from the initial state to a final state.  You may notice that some test cases generated by some sequencers are longer than expected.  This is expected behavior, especially for the //Random// and //Optimal// sequencers.

If you wish to get a set of shorter test cases, you may choose the //Priority// sequencer. For more details about the sequencers, check out [[../sequencers | Sequencers]].
  
  
===== Debugging automation script =====
  
Automation scripts are called as the model executes. If your automation script is not functioning as expected, you can try any of the following methods to troubleshoot the problem:
  * check the //Server Log// file if you are receiving runtime errors
  * add additional debugging messages and check the //Script Log// file (see the sketch after this list)
  * pause the model and check whether the AUT is at the expected state
  * dynamically execute scripts and validate the results
    * highlight the script and press //Ctrl-E//
    * execute the script in the Debug Console in the [[../ide_monitor | Monitor]] tab
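For example, to narrow down where a script goes wrong, you can bracket the suspect step with //$SYS.log(...)// calls and then compare the messages written to the //Script Log// file with what you expected.  The sketch below assumes only that $SYS.log accepts a message string; the middle line is a placeholder for your existing automation code, not literal syntax:

   $SYS.log("before submit: about to click the Submit button");
   ...your existing automation code for this step...
   $SYS.log("after submit: expecting the confirmation page");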
  
  
===== Logging =====

//TestOptimal// logs runtime errors to the //Server Log// file.

You can also log messages from your scripts by using $SYS.log(...).  The messages are written to the //Script Log// file, which is cleared before each model execution.
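For example, a script can log a value it has just computed so that you can verify it in the //Script Log// file after the run.  This is a minimal sketch, assuming $SYS.log accepts a message string, a Java-style string concatenation in your script syntax, and that //pageTitle// is a variable your own script has already set:

   $SYS.log("page title after login: " + pageTitle);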

Some plugins may also create driver-specific log files, such as Selenium's browser drivers.

All log files can be found in the //logs// folder.