What is parallel testing?

Approaches to parallel testing, plus an alternative that can speed up test runs by up to 80%

Key Takeaways

  • Parallel testing approaches include splitting tests by suite, file (or class), and tag.

  • These common parallel testing methods can be time-consuming to set up, although they provide fine-grained control over which tests run in each process.

  • Running only the most important tests for a code change with Predictive Test Selection can cut test runtime by up to 80%.

Parallel software testing is the practice of splitting a test suite up so that tests can execute simultaneously, generally in order to run large test suites much faster. For example, you might split a 1-hour test suite into four subsets that execute at the same time on four machines, allowing the entire suite to finish in roughly 15 minutes instead of an hour.

Many CI servers, including CircleCI, GitHub Actions, and GitLab CI, have built-in mechanisms (parallel jobs and build matrices) to make this easier.

The general approach is to invoke your test command with parameters that specify a different subset of tests to run on each machine (or process). There are several ways to split up your tests.
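
Whatever splitting strategy you choose, the per-machine mechanics look roughly the same. Here is a minimal sketch, assuming an RSpec suite and a CI server that exposes the machine's index and the total machine count through environment variables (CIRCLE_NODE_INDEX and CIRCLE_NODE_TOTAL are CircleCI's names; other servers use different variables):

# Sort the spec files so every machine sees the same ordering, then keep
# only every Nth file based on this machine's index.
NODE_INDEX=${CIRCLE_NODE_INDEX:-0}
NODE_TOTAL=${CIRCLE_NODE_TOTAL:-1}
FILES=$(find spec -name '*_spec.rb' | sort | awk -v i="$NODE_INDEX" -v n="$NODE_TOTAL" 'NR % n == i')
bundle exec rspec $FILES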

Splitting tests by suite

The first approach is to split your tests by suite. For example, if your tests naturally divide into Unit, Functional, and End-to-End (UI) tests, you can easily parallelize by running each suite in a separate process.
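
Here is a minimal sketch, assuming an RSpec project whose suites live under spec/unit, spec/functional, and spec/e2e (the directory names are just for illustration):

# Machine 1: unit tests
bundle exec rspec spec/unit

# Machine 2: functional tests
bundle exec rspec spec/functional

# Machine 3: end-to-end (UI) tests
bundle exec rspec spec/e2e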

The advantage of this approach is that it is fairly easy to accomplish with existing tooling. The disadvantage is that the overall run still takes as long as your longest suite. If the End-to-End suite is the longest-running at 34 minutes, for example, the shortest possible parallel run is 34 minutes, no matter how quickly the other suites finish.

Splitting tests by file (or class)

Another approach is to split tests out by filename or class. Done by hand, dividing up your test invocations this way can involve a lot of tedious work.

CircleCI provides a command for you to do this automatically. For example, to split your Go tests into 4 groups with CircleCI, you could run:

circleci tests glob "**/*.go" | circleci tests split --split-by=timings

This would execute a fraction of the tests in each process. (Note that the fraction depends on the parallelism key in your config file, but you can set this manually with the --total flag.)
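
The split command only prints the names assigned to the current container, so you still pass its output to your test runner. Here is a sketch of that wiring, assuming a Go project split by package (splitting the output of go list is one common pattern):

# Give this container its share of the Go packages, then test only those.
PACKAGES=$(go list ./... | circleci tests split)
go test -v $PACKAGES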

Splitting tests by tag

Some test runners provide ways of tagging individual tests, effectively adding them to groups. Ideally, the groups are roughly equal in size so that each process finishes at about the same time. To split tests this way, you manually tag the tests that should run in each group.

For example, if you’re using Ruby and RSpec, you can tag each test by adding symbols to it in your test source code:

RSpec.describe 'Login' do
  it 'successfully logs in', :groupA do
    # test code goes here
  end

  it 'shows the dashboard after logging in', :groupB do
    # test code goes here
  end
end

(Note that RSpec allows tags to be added to both it and describe blocks.)

Then, you specify which group to run when you execute tests:

rspec --tag groupA
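
Each machine (or CI job) then runs its own group. For two groups, the sketch looks like this:

# Machine 1
bundle exec rspec --tag groupA

# Machine 2
bundle exec rspec --tag groupB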

Again, this method is time-consuming, but it does provide fine-grained control over which tests run in each process.

An alternative to parallel testing

Depending on your situation, parallelizing your tests may prove difficult or costly. Perhaps adding more machines to your pipeline is prohibitively expensive, or perhaps splitting up your tests requires too much manual work.

If your goal is to reduce your test runtime, Launchable’s dynamic subsets can be used to run the most important tests for a code change and nothing more. Launchable uses machine learning to understand which tests are most relevant to a given code change, an approach known as Predictive Test Selection. A model trained on your historical code changes and test runs can reduce test runtime by up to 80% while maintaining high confidence that failing tests, if any exist, will be identified.
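
To make the flow concrete, here is a purely hypothetical sketch (the select-relevant-tests command is invented for illustration and is not Launchable’s actual CLI): the files changed on a branch are sent to a predictive test selection service, which returns the tests most likely to fail, and only those are run.

# Hypothetical command, for illustration only: ask the prediction service
# which tests matter for the files changed on this branch.
CHANGED_FILES=$(git diff --name-only origin/main...HEAD)
RELEVANT_TESTS=$(select-relevant-tests $CHANGED_FILES)
bundle exec rspec $RELEVANT_TESTS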

Practically, this enables you to reduce a 1-hour test suite to a 12-minute subset containing only the tests that matter for a specific code change.
