  * [[#Load testing with cloud-based services]]
  * [[#Simulating production load]]
  
  
You can also change how fast you want to ramp up the VUs by setting the "Thread Delay (ms)" before starting the next VU.
  
For load / stress testing, you will typically want the model and all of its VUs to run for a desired period of time. You can do so by setting "Elapse Time (mins)" in the "Model Execution Stop Conditions" section before running the model.

When running the model with the IDE, be sure to select the "Random" sequencer so that the model is traversed randomly, simulating realistic user behavior while still maintaining the validity of the test scenarios described by the model.
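
As a quick sanity check on these two settings, the following Python sketch (the numbers are hypothetical, and the formula is a simplification rather than a //TestOptimal// calculation) estimates how long the ramp-up takes and how much of the "Elapse Time (mins)" is spent with all VUs running:
   # Rough, illustrative estimate only; the numbers below are hypothetical.
   def ramp_up_minutes(vus, thread_delay_ms):
       # each new VU starts thread_delay_ms after the previous one
       return (vus - 1) * thread_delay_ms / 1000 / 60
   vus = 300                # planned virtual users
   thread_delay_ms = 5000   # "Thread Delay (ms)" before starting the next VU
   elapse_time_mins = 120   # "Elapse Time (mins)" stop condition
   ramp = ramp_up_minutes(vus, thread_delay_ms)
   print(f"Ramp-up: ~{ramp:.1f} min; ~{elapse_time_mins - ramp:.0f} min at full load.")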
  
  
===== Multi-model execution =====
  
Each model describes a set of behaviors (requirements, user stories) of the AUT. By running the model, you exercise and check those behaviors.

If you have more than one model created to test your AUT, you may want to run these models concurrently to expose your AUT to the various types of activities described by these models.

You can do so by opening each model and running it (with one or more VUs). As you open another model, be sure not to close the previous model.
  
  
===== Load testing with multi-browser =====
  
Selecting which browser to use for the load testing is done in the scripts, for example:
   $SELENIUM.setBrowserFirefox();

You may replace the above script with the following to randomly choose a browser for each VU:
   $SELENIUM.setBrowser ('Firefox|IE|Chrome');

If you wish to have more "Firefox" users, you can simply list "Firefox" two or three times, for example:
   $SELENIUM.setBrowser ('Firefox|Firefox|IE|Chrome');
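
For illustration only, the following Python sketch shows the browser mix this weighting produces, assuming each VU picks one entry uniformly at random from the pipe-delimited list; listing "Firefox" twice in a four-entry list gives it roughly half of the VUs:
   import random
   from collections import Counter
   # Illustrative assumption: each VU picks one entry uniformly at random.
   choices = 'Firefox|Firefox|IE|Chrome'.split('|')
   vus = 1000   # hypothetical number of virtual users
   mix = Counter(random.choice(choices) for _ in range(vus))
   for browser, count in sorted(mix.items()):
       print(f"{browser}: {count / vus:.0%}")
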
If you run the models through [[../server_manager|SvrMgr]] or use the API to run models on a //Runtime// server, you may select the browser from a variable that is specified in the model execution request:
   $SELENIUM.setBrowser ($VAR.browserList);

where the "browserList" variable is set in the model execution request options.

===== Load testing with Runtime Servers =====

To generate a large amount of load on the AUT, you may need to run many instances of //TestOptimal// servers, which makes it harder to manage and control the load testing session.

//Runtime// server is a licensed edition of the //TestOptimal// server that can be managed by [[../server_manager | SvrMgr]] for large scale load / stress testing.

[[../server_manager | SvrMgr]] also provides a central model deployment repository and model execution stats collection for all //Runtime// servers.

//Runtime// servers can run on different operating systems or even in the cloud.

By using the //TestOptimal// REST APIs, you can also build a concurrent model (a model that uses the Concurrent [[../sequencers | sequencer]]) to orchestrate model executions on //Runtime// servers (via [[../server_manager | SvrMgr]]), and thus automate your load testing by simply running the orchestration model.
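
As a sketch of what such an orchestration could look like from outside the tool, the Python snippet below posts a model execution request to several //Runtime// servers over HTTP. The endpoint path, payload field names and host names are hypothetical placeholders, not the actual //TestOptimal// REST API; consult the REST API documentation for the real request format.
   import requests   # third-party HTTP client
   # Hypothetical sketch: endpoint path, payload fields and hosts are placeholders.
   runtime_servers = ["http://runtime1:8888", "http://runtime2:8888"]
   run_request = {
       "modelName": "WebStore Main",                   # model deployed to each Runtime server
       "vus": 150,                                     # virtual users per server
       "options": {"browserList": "Firefox|Chrome"},   # read in scripts via $VAR.browserList
   }
   for server in runtime_servers:
       resp = requests.post(f"{server}/api/v1/model/run", json=run_request, timeout=30)
       print(server, resp.status_code)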
  
===== Load testing with cloud-based services =====
  
There are cloud-based testing services for web application testing. These services offer an alternative to managing different OS and browser combinations, and even support mobile device testing.

Below are a few examples of such services:
  * [[https://saucelabs.com/ | SauceLabs]]
  * [[https://www.browserstack.com/ | BrowserStack]]

To connect to these services, you would use the [[../plugins#webdriver_plugin | WebDriver plugin]] with the APIs provided by the online service provider.
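
Independent of the WebDriver plugin, a bare-bones remote WebDriver connection to such a service typically looks like the Python sketch below. The hub URL, credentials and capability names are placeholders and differ per provider and Selenium version; check the provider's documentation for current values.
   from selenium import webdriver
   # Illustrative only: URL, credentials and capability names are placeholders.
   USER = "your_user"          # hypothetical account name
   KEY = "your_access_key"     # hypothetical access key
   driver = webdriver.Remote(
       command_executor=f"https://{USER}:{KEY}@hub-cloud.browserstack.com/wd/hub",
       desired_capabilities={"browserName": "chrome"},
   )
   driver.get("https://example.com")
   print(driver.title)
   driver.quit()
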
 ===== Simulating production load ===== ===== Simulating production load =====
The first step in load testing is to determine the type of load and how much of it you wish to exercise on the AUT. Typically you would already have a suite of models created for functional testing of the AUT. These models, when executed, generate specific types of load on the AUT.

Carefully select a subset of these models that will generate the type of load required and determine the number of virtual users for each selected model. You may use the following example to help you plan your next load testing:

    Model Name          MCase             VUs   Run Duration (min)   Thread Delay (ms)
    WebStore Security                      20        60                   15000
    WebStore Main                         300       120                    5000
    WebStore Main       Quick Purchase     50        60                   10000
    WebStore Main       Add/Remove Items   20        60                   10000

Note that in the above example, we are using two models:
  * WebStore Security - security testing for user registration and valid / invalid logins.
  * WebStore Main - main functional testing of the WebStore application.

We also plan to run certain scenarios (MCases) during the load testing.
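
As a rough sanity check on such a plan, the Python sketch below (using the numbers from the example table) adds up the peak concurrent VUs and estimates how much of each run is spent ramping up, assuming "Thread Delay (ms)" is the delay before each additional VU starts. It is a simplification for planning purposes, not a //TestOptimal// calculation.
   # Plan sanity check with the example numbers; simplified, illustrative only.
   plan = [
       # (model, mcase, vus, run_minutes, thread_delay_ms)
       ("WebStore Security", "",                 20,  60, 15000),
       ("WebStore Main",     "",                300, 120,  5000),
       ("WebStore Main",     "Quick Purchase",   50,  60, 10000),
       ("WebStore Main",     "Add/Remove Items", 20,  60, 10000),
   ]
   print("Peak concurrent VUs:", sum(row[2] for row in plan))
   for model, mcase, vus, run_min, delay_ms in plan:
       ramp_min = (vus - 1) * delay_ms / 1000 / 60
       label = f"{model} / {mcase}" if mcase else model
       print(f"{label}: ramp-up ~{ramp_min:.0f} min of {run_min} min run")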
  