Pacing provides a way to limit the average number of transactions (tests) hitting the SUT. If pacing is set at 50 transactions per second (tps), you are telling LISA that you do not want tests running faster than 50 tests per second. This limit is spread across the population of virtual users. If you have, say, 150 steady state virtual users, then 50 tps / 150 VUs works out to 0.333 tps per VU, or one test every 3 seconds per VU. With that, if a VU runs the test in less than 3 seconds, pacing will cause the VU to sleep a bit between test runs. If a test takes longer than 3 seconds to run, pacing is ignored, since LISA is already staying under the limit of 50 tps.
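A minimal sketch of that per-VU arithmetic (the function names are illustrative, not LISA's API):

```python
def per_vu_interval(pace_tps, steady_state_vus):
    """Seconds each virtual user gets per test cycle to hold the overall pace."""
    # 50 tps spread over 150 VUs -> 0.333 tps per VU -> one test every 3 seconds
    return steady_state_vus / pace_tps

def sleep_needed(pace_tps, steady_state_vus, test_duration_s):
    """How long a VU sleeps after a run; 0 when the run already used up the interval."""
    return max(0.0, per_vu_interval(pace_tps, steady_state_vus) - test_duration_s)

print(per_vu_interval(50, 150))    # 3.0 -> one test every 3 seconds per VU
print(sleep_needed(50, 150, 1.0))  # 2.0 -> a 1-second test means the VU sleeps 2 seconds
print(sleep_needed(50, 150, 5.0))  # 0.0 -> a 5-second test is already under the pace
```

The `max(0, ...)` is the "braking, not accelerating" behavior: pacing can only add sleep, never shorten a test run.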
Pacing is, fundamentally, a braking system and, just as important, not an acceleration system.
- Pacing will not cause the virtual users to run tests at a faster pace.
- Pacing will not cause more virtual users to be started to satisfy the pace.
- Pacing will cause the virtual users to wait longer before they run a test again.
Pacing only works with these load patterns:
- Immediate Ramp
- Run N Times
- Stair Step
Pacing is limited to these load patterns because the load pattern must be able to estimate the number of steady state virtual users it provides; the pacing calculation depends on the number of steady state virtual users.
Under the Covers
Test pacing enables end users to limit how fast test cases are executed. Pacing acts as a kind of think-time between test cases. The pace of transactions is computed using the formula:
paceTimeInSeconds / ( transactions / virtual users )
With a pacing of 50 test cases in 5 minutes using 10 virtual users, the pace per test case, Ptc, is
Ptc = ( 5 minutes * 60 seconds/minute ) / ( 50 test cases / 10 virtual users )
= ( 300 / 5 )
= 60 seconds
That is, one test case should be executed every 60 seconds.
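The same formula as a small function (illustrative names, not LISA internals):

```python
def pace_per_test_case(pace_time_s, transactions, virtual_users):
    """Seconds between test-case starts for each virtual user."""
    return pace_time_s / (transactions / virtual_users)

# 50 test cases in 5 minutes with 10 virtual users:
print(pace_per_test_case(5 * 60, 50, 10))  # 60.0 -> one test case per VU every 60 seconds
```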
It is important to note that pacing is determined by the steady state load (which is also the period when the most virtual users are running).
If you run a stair step load pattern or a similar pattern, the throughput on the first steps will be lower than you might expect. The idea of pacing is to throttle how many test cases are executed when the max number of vusers is reached. While the number of vusers is ramping up, pacing is not increased to hit the test cases/minute target. If LISA did that, more test cases would be executed per step, which cancels out the effect we're trying to achieve with a ramp up time. The idea of ramp up time is to slowly increase the load on the system; for LISA, the load is based on the number of test cases hitting the System Under Test. If, during ramp up, LISA decided to execute more test cases because the number of vusers running at that time were keeping up with the pace, we'd just end up running at the max load on the first step -- there would be no ramp up.
When you're trying to predict how pacing will behave, determine the number of test cases/minute of each virtual user. If my pacing works out to 3 test cases per minute per virtual user then, when 1 virtual user is running, I'll see 3 TC/minute hit the SUT. When 5 virtual users are running, I'll see 15 TC/minute hit the SUT. When I have 10 virtual users running, I'll see 30 TC/minute hit the SUT.
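That prediction is just a linear scaling, sketched here with the numbers from the example above:

```python
# Throughput scales linearly with the number of running VUs at a fixed per-VU pace.
per_vu_rate = 3  # test cases per minute, per virtual user (example value)
for vus in (1, 5, 10):
    print(f"{vus} VUs -> {per_vu_rate * vus} TC/minute")
```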
Sample Pacing Runs
These are my test results, using 4 test cases, each containing one NoOp step and no data set. In each suite, the tests ran in parallel.
Suite - 4 Tests, parallel execution
Staging Document - RunNTimes; 2vu, 15min, pacing 2000 tc/15min
Results - 7962 of 8000 test runs
Per Test Counts - 1987, 1992, 1994, 1989
Suite - 4 Tests, parallel execution
Staging Document - RunNTimes; 4vu, 15min, pacing 2000 tc/15min
Results - 7983 of 8000 test runs
Per Test Counts -
Note: as long as the CPU can keep up with the pacing, individual tests within a suite will hit their pacing target. If the load overtakes the CPU, pacing accuracy degrades and becomes unpredictable. By unpredictable, I mean the %-error for each test fluctuates; some tests may be dead-on with their pacing while another test's pacing is off by 50%.
Pacing acts as a brake to limit the load on the SUT. Pacing does not act as an accelerator to increase the load. Internally, pacing only affects the lag time between each cycle of a test run; thus, pacing determines how long a virtual user waits before running a test again.
Let's say the load pattern is a stair-step load pattern, ramping up from 10 virtual users to 100 virtual users, then back down to 10 virtual users. Let's also say pacing is set at 10 test cases/minute. On the first iteration of the test cycle, 10 virtual users execute the test 1 time each. If the average time to execute 1 test case is 5 seconds, these first 10 virtual users will finish their first iteration in about 5 seconds. The pacing algorithm will figure out that, with 10 virtual users running, each taking 5 seconds to run 1 test, the lag time needs to be 55 seconds. Thus, after 55 seconds, each virtual user will be told to execute the test again. This keeps the pace of 10 tests/minute, since there are 10 virtual users executing the test 1 time within 1 minute.
Let's say one iteration of the test case takes 2 minutes. With the initial 10 virtual users of the load pattern, there aren't enough virtual users running to keep up the pace of 10 tests/minute. Pacing will set the lag time between test iterations to 0 milliseconds - as soon as a virtual user finishes 1 iteration, the next iteration can begin. Pacing will not cause more virtual users to be fired up. If there aren't enough virtual users running to keep up with the pacing, then there just aren't. Once there are 20 virtual users running, things will be on pace - 20 virtual users running 20 tests in 2 minutes averages to 10 tests/minute. With 20 virtual users, the lag time between test iterations is still 0 milliseconds.
When the load increases to 40 virtual users, the lag time finally moves above 0 ms. 40 virtual users executing 40 iterations in 2 minutes averages to 20 tests/minute. That's 2X over the pace, so the virtual users need to be slowed down: with 40 virtual users and a target of 10 tests/minute, each virtual user should start a test only every 4 minutes, and since a run takes 2 minutes, each virtual user sleeps 2 minutes between test iterations. When the load increases to 80 virtual users, each virtual user should start a test only every 8 minutes, so pacing slows them down to wait 6 minutes between executions.
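The walkthrough above can be sketched with an assumed lag-time formula (each virtual user must complete one cycle every `vus / pace_per_min` minutes to hold the overall pace; this is my reconstruction, not LISA's actual source):

```python
def lag_minutes(vus, pace_per_min, run_minutes):
    """Sleep a VU needs between iterations; 0 when there are too few VUs to keep pace."""
    interval = vus / pace_per_min            # minutes between test starts, per VU
    return max(0.0, interval - run_minutes)  # sleep only if the run finished early

# Stair-step example: pace 10 tests/minute, each run takes 2 minutes.
for vus in (10, 20, 40, 80):
    print(f"{vus} VUs -> lag {lag_minutes(vus, 10, 2.0)} minutes")
```

At 10 and 20 virtual users the lag is 0 (too few VUs to keep pace); it only rises above 0 once the VUs alone would overshoot the target.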
The load pattern determines how many virtual users are running and how many iterations of the test case are executed per virtual user, while pacing controls how much time elapses between when a virtual user finishes one iteration of a test case and starts the next iteration.