BlazeMeter Test Running Different VU Numbers Than Configured When Executing Tests on Google Dedicated IPs


Article ID: 193942


Updated On:

Products

BlazeMeter

Issue/Introduction

Please note the following tests were set up identically, the only difference being the number of dedicated IPs used in the test run.

 

Test (1) was set up to run 75 VUs with a 20-minute ramp-up and 5 steps, running on 3 Google dedicated IPs (Iowa, Virginia, Oregon).

The test ran 90 VUs, with 2 steps up to 90 VUs, on the 3 Google dedicated IPs. I'm not sure why this test ran 90 VUs when, during setup, it showed that 75 VUs were configured for the test across the 3 IPs.

 


Test (2) was set up to run 75 VUs with a 20-minute ramp-up and 5 steps, running on 4 Google dedicated IPs (Iowa, Virginia, Oregon, London).

The test ran 60 VUs, with 1 step up to 60 VUs, on the 4 Google dedicated IPs. I'm not sure why this test ran 60 VUs when, during setup, it showed that 75 VUs were configured for the test across the 4 IPs.

 

Environment

Release : SAAS

Component : BLOCKMASTER

Cause

This behavior is caused by the way BlazeMeter rounds the configured number of users when distributing them across locations (engines) and Thread Groups.

Resolution

For test (1):

This test was set up to run 75 VUs with a 20-minute ramp-up and 5 steps, running on 3 Google dedicated IPs (Iowa, Virginia, Oregon).

The test instead ran 90 VUs, with 2 steps up to 90 VUs, on the 3 Google dedicated IPs.

Each of your Thread Groups was configured for 1 user, and the JMX file contains 15 Thread Groups.

You specified 75 users for those 15 Thread Groups, which works out to 5 users per Thread Group in total.

You also specified 3 engines, so each engine is allocated 25 users (75 / 3).

Spread over 15 Thread Groups, that is roughly 1.67 users per Thread Group per engine, which BlazeMeter rounds up to 2.

2 users × 15 Thread Groups × 3 engines = 90 users, which is why the test ran 90 VUs instead of 75.

I used this for reference:

https://guide.blazemeter.com/hc/en-us/articles/115005911369
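
As a rough illustration only (not BlazeMeter's actual implementation), the allocation described above can be sketched in Python. The function name and the nearest-integer rounding rule are assumptions inferred from the observed numbers:

# Hypothetical sketch of the per-engine, per-Thread-Group allocation for test (1).
# The rounding rule (round to the nearest whole user per Thread Group) is an
# assumption inferred from the observed 90 VUs, not BlazeMeter's real code.

def users_per_thread_group(total_users, engines, thread_groups):
    per_engine = total_users / engines              # 75 / 3 = 25 users per engine
    per_thread_group = per_engine / thread_groups   # 25 / 15 ≈ 1.67
    return round(per_thread_group)                  # rounds up to 2

per_tg = users_per_thread_group(75, 3, 15)          # -> 2
actual_total = per_tg * 15 * 3                      # 2 × 15 Thread Groups × 3 engines
print(actual_total)                                 # 90 VUs instead of the configured 75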

 

For test (2):

The test is configured to run 75 users across 4 locations.  

That broke down to 3 locations using 19 users and 1 location using 18 users.  

As with the other test, the JMX file contains 15 Thread Groups.  

This is similar to the other case, except that this time the number of users that ran is less than the configured number.  

In this case, BlazeMeter rounded down instead of up in order to approximate the configured number of users over the configured number of locations (75 users over 4 locations) within the constraints of the JMeter script (15 Thread Groups). 15 users are allocated to each location (one for each Thread Group), so over 4 locations that comes to 60 users.
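
The same hypothetical rounding sketch applied to test (2); the 19/19/19/18 split across locations comes from the explanation above, while the rounding rule itself is an assumption:

# Hypothetical sketch for test (2): 75 users over 4 locations, 15 Thread Groups.
# Rounding per Thread Group is assumed to be nearest-integer, as in the test (1) sketch.

thread_groups = 15
users_by_location = [19, 19, 19, 18]   # 75 users split across 4 locations

actual_total = 0
for users in users_by_location:
    per_thread_group = round(users / thread_groups)   # 19/15 ≈ 1.27 and 18/15 = 1.2 -> 1
    actual_total += per_thread_group * thread_groups  # 15 users per location

print(actual_total)                                    # 60 VUs instead of the configured 75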

Additional Information

https://guide.blazemeter.com/hc/en-us/articles/360000820137-Max-Users-vs-Total-Users-Did-All-My-Users-Run-

These are suggestions from R&D for performance tests:

  • Disable any listeners in the JMX file.
  • The best practice is to have no more than 5 Thread Groups / scenarios per JMX file, given that BlazeMeter identifies each Thread Group as a scenario.
  • Execute each JMX (with 5 Thread Groups) individually and calibrate it.
  • Always test each test separately before running it as part of a multi-test.
  • Then run the multi-test and calibrate it.
  • Finally, run the multi-test at the desired load.