Learning Objectives:
  * [[#Debugging features]]
  * [[#Debugging test generation]]
  * [[#Debugging automation script]]
  * [[#Logging]]
===== Debugging features =====
You may encounter issues both during test generation and during model execution (running your scripts). Although the issues may overlap, debugging test generation has different needs than debugging model execution.

Below is a list of features helpful for debugging:
  * debug run mode - interactively step through model execution
  * play run mode - visualize model execution, with the ability to pause
  * [[../
  * graphs - helpful for debugging test sequence issues
  * logging - log messages to the model log or //Server Log//

===== Debugging test generation =====
Debugging test generation issues requires knowledge of the sequencer being used. If you need a refresher on how the sequencer works, check out [[../

The first thing to do is identify the sequencer being used and note any error message that is displayed.

The sequencer used is shown on the application toolbar in the upper-left corner of the IDE. Make sure that the expected sequencer has been chosen.

==== Missing transitions ====
For the sequencers to be able to generate test sequences from the model, the following conditions must hold true for all models:
  * there must be a path from the initial state to every state in the model
  * there must be a path from every state to a final state

One of the most common issues encountered during test generation is missing transitions. An example model of such a case is:
{{wiki:

As you may have picked up, the error in this model is that it breaks the first condition: there isn't a path from the initial state to every state in the model.
==== Transition guards not working ====
Another commonly encountered issue with test generation is caused by transition guards: a transition whose guard condition never evaluates to true can never be traversed.

==== Test case too long ====
A test case is represented as a path from the initial state to a final state.

If you wish to get a set of shorter test cases, you may choose //

===== Debugging automation script =====

Automation scripts are called as the model executes. If your automation script is not functioning as expected, you may try any of the following methods to troubleshoot the problem:
  * check the //Server Log// file if you are receiving runtime errors
  * add additional debugging messages and check the //Script Log// file
  * pause the model and check whether the AUT is in the expected state
  * dynamically execute scripts and validate the results
  * highlight the script and press //
  * execute the script in the Debug Console in [[../

===== Logging =====

You can log messages from your scripts by using $SYS.log(...).
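For example, the snippet below sketches logging checkpoints from a script. The $SYS.log(...) call is from the text above; the stand-in $SYS object and the messages are illustrative assumptions only, since in a real automation script $SYS is supplied by the runtime:

```javascript
// Illustrative stand-in for the runtime-provided $SYS object, so this
// snippet can run on its own; in a real automation script $SYS already exists.
const $SYS = { log: (msg) => console.log(`[model log] ${msg}`) };

// Log checkpoints around the work the script performs (messages are examples).
$SYS.log("entering state: login");
$SYS.log("clicked submit, waiting for dashboard");
```

Logging a message on entry to each state and after each significant action makes it much easier to pinpoint where a run diverged from expectations.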
+ | |||
+ | Some of the plugins may also creates driver specific log files, such as Selenium' | ||
+ | |||
+ | All log files can be found in //logs// folder. | ||
+ | |||