KPI validation with locust.io

In performance testing, KPIs should be an inherent part of the requirements, so we don’t blindly stress our application. Instead, we should have a clear idea of what our stack must handle under defined circumstances. A real-world performance requirement could be: “The 90th percentile response time should be below 50 ms while handling 500 RPS (requests per second)”.

Once we have fine-tuned our application to meet the defined performance criteria, we can hook this test into our CI/CD tool and run the same performance test as part of the delivery pipeline to prevent performance regressions. To fully automate the whole process (we only want to look at the results when the build has failed because the defined KPIs were not met), we need to give our Locust script the ability to decide whether to pass or fail. Locust doesn’t have this feature baked in, but fortunately it is very hackable and provides a nice API for customization.
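Under the hood, “deciding to pass or fail” means making the Locust process exit with a non-zero status code, which is what CI/CD tools key off. As a minimal sketch (assuming a Locust release that exposes environment.process_exit_code, which recent versions do), a quitting listener could look like this:

from locust import events

@events.quitting.add_listener
def _(environment, **kwargs):
    # Fail the build whenever any request failed; the process exit code
    # is what the CI/CD pipeline sees when the test run finishes.
    if environment.stats.total.fail_ratio > 0:
        environment.process_exit_code = 1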

For the purpose of automated KPI validation, we are going to create a custom plugin using Locust’s event hooks and its internal statistics. The overall plugin design is pretty simple (see the sketch after this list):

  1. Register a listener on the quitting event
  2. Get all statistics and serialize them
  3. Calculate missing metrics (RPS, percentiles, …)
  4. Check the provided KPI definition
  5. Validate the provided KPIs against actual measurements
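The sketch below walks through those steps under a few assumptions: the KpiPlugin name and the KPI_SETTINGS shape mirror the registration snippet further down, and the metrics are read from Locust’s documented statistics API (environment.stats.entries, get_response_time_percentile, total_rps, fail_ratio). It is an illustration of the design, not the full plugin linked below:

class KpiPlugin:
    def __init__(self, env, kpis):
        self.env = env
        self.kpis = kpis
        # Step 1: register on the quitting event so validation runs
        # once, right before the Locust process exits.
        env.events.quitting.add_listener(self.validate)

    def validate(self, environment, **kwargs):
        for kpi in self.kpis:
            for name, criteria in kpi.items():
                # Step 2: collect the recorded stats entries for this endpoint
                # (entries are keyed by (name, method) tuples).
                entries = [entry for (n, _m), entry in environment.stats.entries.items() if n == name]
                for entry in entries:
                    # Step 3: derive the metrics the KPI definitions refer to.
                    measured = {
                        'percentile_90': entry.get_response_time_percentile(0.9),
                        'rps': entry.total_rps,
                        'error_rate': entry.fail_ratio * 100,
                    }
                    # Steps 4 and 5: check every (metric, threshold) pair and
                    # fail the build on the first violation.
                    for metric, threshold in criteria:
                        value = measured[metric]
                        passed = value >= threshold if metric == 'rps' else value <= threshold
                        if not passed:
                            print(f'KPI failed for {name}: {metric}={value:.2f}, threshold={threshold}')
                            environment.process_exit_code = 1

Setting environment.process_exit_code instead of raising an exception lets the remaining listeners run and Locust shut down cleanly, while still reporting the failure to the pipeline.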

The whole KPI plugin code can be found here.

Now we have to register the KpiPlugin class in our Locust script and define the KPI(s) like this:

from locust import events

# KpiPlugin is imported from the plugin module linked above
@events.init.add_listener
def on_locust_init(environment, **_kwargs):
    KPI_SETTINGS = [{'/store/inventory': [('percentile_90', 50), ('rps', 500), ('error_rate', 0)]}]
    KpiPlugin(env=environment, kpis=KPI_SETTINGS)

The above script will make your build fail if the /store/inventory endpoint misses any of the defined criteria: the 90th percentile is worse than 50 ms, RPS is under 500, or the error rate is higher than 0%.
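To make the exit code reach the pipeline, run Locust in headless mode, e.g. locust --headless -f locustfile.py --users 500 --spawn-rate 50 --run-time 5m (the load shape and duration here are placeholders; pick values that match the KPI above). The build step then fails automatically whenever the plugin sets a non-zero exit code.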

Happy performance testing.

Published Aug 31, 2020