Open Source Load Testing Tools: Which One Should You Use?
Is your application, server, or service delivering the speed your users need? How do you know? Are you 100-percent certain that your latest feature hasn't triggered a performance degradation or memory leak? There's only one way to verify: regularly checking the performance of your app.
But which tool should you use for this? In this blog post, we'll review the pros and cons of the leading open-source solutions for load and performance testing.
If you're like many, chances are you've already seen this great list of 53 of the most commonly used open source performance testing tools. In this post, we'll focus on five of the most popular:
- The Grinder
- Gatling
- Tsung
- JMeter
- Locust
We’ll cover the main features of each tool, show a simple load-test scenario, and display sample reports. At the end, you'll find a comparison matrix to help you decide which tool is best for your project.
Just as a short note: if you are looking for a way to automate these open source tools, BlazeMeter created Taurus, our own open source test automation tool that extends and abstracts most of the tools above (as well as Selenium) and helps overcome various challenges. Taurus provides a simple way to create, run, and analyze performance tests. Make sure to check it out.
THE TEST SCENARIO AND INFRASTRUCTURE
For our comparisons, we will use a simple HTTP GET request from 20 threads with 100,000 iterations. Each tool will send requests as fast as it can.
The server side (application under test):
- CPU: 4x Xeon L5520 @ 2.27 GHz
- RAM: 8GB
- OS: Microsoft Windows Server 2008 R2 x64
- Application Server: IIS 7.5.7600.16385
The client side (load generator):
- CPU: 4x Xeon L5520 @ 2.27 GHz
- RAM: 4GB
- OS: Ubuntu Server 12.04 64-bit
Now let's look at each load testing tool.
THE GRINDER
The Grinder is a free Java-based load-testing framework available under a BSD-style open-source license. It was developed by Paco Gomez and is maintained by Philip Aston. Over the years, the community has also contributed many improvements, fixes, and translations. The Grinder consists of:
- The Grinder Console - This GUI application controls various Grinder agents and monitors results in real time. The console can be used as a basic interactive development environment (IDE) for editing or developing test suites.
- Grinder Agents - Each agent is a headless load generator that can run a number of worker processes to create the load
Key Features of The Grinder:
- TCP proxy to record network activity into the Grinder test script
- Distributed testing that scales with an increasing number of agent instances
- The power of Python or Clojure, combined with any Java API, for test script creation or modification (see the minimal script sketch after this list)
- Flexible parameterization, which includes creating test data on the fly and the ability to use external data sources like files and databases
- Post-processing and assertion with full access to test results for correlation and content verification
- Support of multiple protocols
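To give a sense of what a Grinder test looks like, here is a minimal Jython script sketch. It is modeled on the standard HTTP examples shipped with The Grinder 3; the target URL reuses the example host from the Locust section later in this post and is purely illustrative.

from net.grinder.script import Test
from net.grinder.plugin.http import HTTPRequest

# Wrap an HTTPRequest in a numbered Test so results are reported per test
test1 = Test(1, "Simple GET")
request1 = test1.wrap(HTTPRequest())

class TestRunner:
    # Each worker thread creates one TestRunner instance and calls it once per run
    def __call__(self):
        request1.GET("http://192.168.1.170:8080/")

The number of worker processes, threads, and runs is configured in grinder.properties rather than in the script itself.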
The Grinder Console Running a Sample Test
Grinder Test Results:
GATLING
The Gatling Project is another free and open source performance testing tool, primarily developed and maintained by Stephane Landelle. Gatling has a basic GUI that is limited to a test recorder. However, tests can be developed in an easily readable/writable domain-specific language (DSL).
Key Features of Gatling:
- HTTP recorder
- An expressive, self-explanatory DSL for test development
- Scala-based
- Production of higher load using an asynchronous, non-blocking approach
- Full support of HTTP(S) protocols; can also be used for JDBC and JMS load testing
- Multiple input sources for data-driven tests
- Powerful and flexible validation and assertion system
- Comprehensive, informative load reports
The Gatling Recorder Window:
An Example of a Gatling Report for a Load Scenario
If you are interested in more information about Gatling, view our on-demand webcast Load Testing at Scale Using Gatling and Taurus.
TSUNG
Tsung (previously known as IDX-Tsunami) is the only non-Java-based open-source performance-testing tool in this review. Tsung relies on Erlang, so you'll need to have it installed (for Debian/Ubuntu, it's as simple as "apt-get install erlang"). Tsung was launched in 2001 by Nicolas Niclausse, who originally implemented a distributed load-testing solution for Jabber (XMPP). Several months later, support for more protocols was added and, in 2003, Tsung was able to perform HTTP load testing. Today, it's a fully functional performance-testing solution with support for modern protocols such as WebSocket, as well as authentication systems and databases.
Key Features of Tsung:
- Inherently distributed design
- The underlying Erlang architecture, built on lightweight processes, can simulate thousands of virtual users on mid-range developer machines
- Support of multiple protocols
- A test recorder that supports HTTP and Postgres
- Operating-system metrics for both the load generator and the application under test can be collected via several protocols
- Dynamic scenarios and mixed behaviors. Flexible load scenarios let you define and combine any number of load patterns in a single test
- Post processing and correlation
- External data sources for data driven testing
- Embedded, easily readable load reports that can be collected and visualized during the test run
Tsung doesn't provide a GUI for test development or execution, so you'll have to live with shell scripts, which are:
- tsung-recorder, a bash script wrapping a recorder utility capable of capturing HTTP and Postgres requests and creating a Tsung config file from them
- tsung, the main bash control script used to start, stop, and debug tests and to view test status
- tsung_stats.pl, a Perl script that generates HTML statistical and graphical reports. It requires gnuplot and the Perl Template library. For Debian/Ubuntu, the commands are:
- apt-get install gnuplot
- apt-get install libtemplate-perl
The main tsung script invocation produces the following output:
Running the test:
Querying the current test status:
Generating the statistics report with graphs can be done via the tsung_stats.pl script:
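For reference, a typical command sequence for these steps looks like the following; the config file name and the path to tsung_stats.pl are illustrative and vary by installation:
- tsung -f scenario.xml start
- tsung status
- cd ~/.tsung/log/<run directory> && /usr/lib/tsung/bin/tsung_stats.pl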
Open report.html with your favorite browser to get the load report. A sample report for a demo scenario is provided below:
A Tsung Statistical Report
A Tsung Graphical Report
APACHE JMETER
Apache JMeter™ is the only desktop application in this review. It has a user-friendly GUI, making test development and debugging much easier. The earliest version of JMeter available for download is dated March 9, 2001. Since then, JMeter has been widely adopted and is now a popular open-source alternative to proprietary solutions like Silk Performer and LoadRunner. JMeter has a modular structure, in which the core is extended by plugins. This means that all implemented protocols and features are plugins that have been developed by the Apache Software Foundation or online contributors.
Key Features of JMeter:
- Cross-platform. JMeter can run on any operating system with Java
- Scalable. When you need a higher load than a single machine can create, JMeter can execute in distributed mode, with one master JMeter machine controlling a number of remote hosts (see the example command after this list)
- Multi-protocol support. The following protocols are supported out of the box: HTTP, SMTP, POP3, LDAP, JDBC, FTP, JMS, SOAP, TCP
- Multiple implementations of pre- and post-processors around samplers, which provide advanced setup, teardown, parametrization, and correlation capabilities
- Various assertions to define pass/fail criteria
- Multiple built-in and external listeners to visualize and analyze performance test results
- Integration with major build and continuous integration systems, making JMeter performance tests part of the full software development life cycle
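As an illustration of distributed mode, once the remote hosts are running jmeter-server and are reachable from the master, a test plan can be launched from the command line; the test plan file name and host addresses below are hypothetical:
jmeter -n -t test_plan.jmx -R 192.168.1.11,192.168.1.12
Here -n runs JMeter in non-GUI mode, -t points to the test plan, and -R lists the remote load generators to use.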
The JMeter Application With an Aggregated Report on the Load Scenario:
LOCUST
Locust is a Python-based open source framework that lets you write performance scripts in pure Python. What makes this framework unique is that it was developed by developers, for developers. Locust mainly targets web applications and web-based services; however, if you are comfortable with Python scripting, you can test almost anything you want. It is also worth mentioning that Locust simulates users in a completely different way: it is fully event-based, with gevent coroutines as the backbone of the process. This allows it to simulate thousands of users even on a regular laptop and to execute even very complex, multi-step scenarios.
Locust Key Features:
- Cross-platform, because Python can run on any OS
- High scalability on regular machines due to its event-based implementation
- Powerful assertion capabilities, limited only by your own Python knowledge (you can read this article to learn more about Locust assertions; a short sketch also follows the basic example below)
- Nice web-based load monitoring
- Code-based scripts that are handy to keep under version control (Git, SVN...)
- Scalability, because you can run Locust distributed with many agents
- The ability to test almost anything by implementing custom samplers in pure Python code
Basic test script example:
from locust import HttpLocust, TaskSet, task

# A task set with a single task that issues a GET request to the root path
class SimpleLocustTest(TaskSet):

    @task
    def get_something(self):
        self.client.get("/")

# The user class that Locust will spawn; it runs the task set above
class LocustTests(HttpLocust):
    task_set = SimpleLocustTest
You can run the script by using this command:
locust -f locustfile.py --host=http://192.168.1.170:8080
Once the script is running, you will find detailed reporting at http://localhost:8089/:
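To illustrate the assertion capabilities mentioned above, here is a minimal sketch using the same HttpLocust/TaskSet API as the basic example; the endpoint and the expected response text are hypothetical:

from locust import HttpLocust, TaskSet, task

class CheckedTasks(TaskSet):

    @task
    def get_home_page(self):
        # catch_response lets the script decide whether the request passed or failed
        with self.client.get("/", catch_response=True) as response:
            if response.status_code != 200:
                response.failure("Unexpected status code: %s" % response.status_code)
            elif "Welcome" not in response.text:
                # Hypothetical content check on the response body
                response.failure("Expected text not found in response body")
            else:
                response.success()

class CheckedLocust(HttpLocust):
    task_set = CheckedTasks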
HOW BLAZEMETER LOAD TESTING CLOUD COMPLEMENTS AND STRENGTHENS JMETER
While Apache JMeter is a strong and compelling way to perform load testing, we of course recommend supplementing it with BlazeMeter Load Testing Cloud, which lets you simulate up to 1 million users in a single developer-friendly, self-service platform. With BlazeMeter, you can test the performance of any mobile app, website, or API in under 10 minutes. Here's why we think the BlazeMeter/JMeter combination is attractive to developers:
- Simple Scalability - It's easy to create large-scale JMeter tests. You can run far larger loads far more easily with BlazeMeter than you could with an in-house lab.
- Rapid-Start Deployment - BlazeMeter's recorder helps you get started with JMeter right away, and BlazeMeter also provides complete tutorials and tips.
- Web-Based Interactive Reports - You can easily share results across distributed teams and overcome the limitations of JMeter's standalone UI.
- Built-In Intelligence - The BlazeMeter Cloud provides on-demand geographic distribution of load generation, including built-in CDN-aware testing.
THE GRINDER, GATLING, TSUNG, LOCUST AND JMETER PUT TO THE TEST
Let’s compare the load test results of these tools with the following metrics:
- Average Response Time (ms)
- Average Throughput (requests/second)
- Total Test Execution Time (minutes)
First, let’s look at the average response and total test execution times:
As shown in the graphs, Locust has the fastest response times and the highest average throughput, followed by JMeter, Tsung, and Gatling. The Grinder has the slowest times and the lowest average throughput.
FEATURES COMPARISON TABLE
And finally, here’s a comparison table of the key features offered by each testing tool:
Feature | The Grinder | Gatling | Tsung | JMeter | Locust |
--- | --- | --- | --- | --- | --- |
OS | Any | Any | Linux/Unix | Any | Any |
GUI | Console only | Recorder only | No | Full | No |
Test Recorder | TCP (including HTTP) | HTTP | HTTP, Postgres | HTTP | No |
Test Language | Python, Clojure | Scala | XML | XML | Python |
Extension Language | Python, Clojure | Scala | Erlang | Java, Beanshell, JavaScript, Jexl | Python |
Load Reports | Console | HTML | HTML | CSV, XML, embedded tables, graphs, plugins | HTML |
Protocols | HTTP, SOAP, JDBC, POP3, SMTP, LDAP, JMS | HTTP, JDBC, JMS | HTTP, WebDAV, Postgres, MySQL, XMPP, WebSocket, AMQP, MQTT, LDAP | HTTP, FTP, JDBC, SOAP, LDAP, TCP, JMS, SMTP, POP3, IMAP | HTTP |
Host Monitoring | No | No | Yes | Yes, with the PerfMon plugin | No |
Limitations | Python knowledge required for test development and editing; reports are very plain and brief | Limited protocol support; knowledge of the Scala-based DSL required; does not scale | Tested and supported only on Linux systems | Bundled reporting isn't easy to interpret | Python knowledge required for test development and editing |
RESOURCES
Want to find out more about these tools? Visit the websites below, or post a comment here and I'll do my best to answer!
- The Grinder - http://grinder.sourceforge.net/
- Gatling - http://gatling.io/
- Tsung - http://tsung.erlang-projects.org/
- Locust - https://locust.io
- JMeter
  - Home Page: http://jmeter.apache.org/
  - JMeter Plugins: http://jmeter-plugins.org/
  - BlazeMeter's Plugin for JMeter: http://blazemeter.com/blazemeters-plug-jmeter
ON A FINAL NOTE
As I mentioned at the start, you might also want to read more about Taurus. When it comes to performance testing, a lot of these tools are really great... but not perfect. Automation and integration with other systems can be a pain, and each tool comes with its own learning curve. Taurus is an open source test automation tool that provides a simple way to create, run, and analyze performance tests.
Want to Learn More About Load Testing With Open Source Tools?
Do you have questions about open source testing tools? View our webinar Ask the Expert: Open Source Testing with JMeter, Gatling, Selenium, and Taurus. This special interactive presentation and Q&A was hosted by Andrey Pokhilko, founder of JMeter-Plugins.org, a core JMeter contributor, and creator and lead developer on the Taurus project.
Do you want to try out JMeter? Learn for free from our JMeter academy.
Start testing now! To try out BlazeMeter, which enhances JMeter features, request a demo, or just put your URL or JMX file in the box below and your test will start in minutes. To run Locust, Gatling, The Grinder and Tsung automatically and more easily, try out Taurus.
The parts about Locust were contributed by Yuri Bushnev.