Run a subset of tests to achieve confidence in a build

by Alastair Wilkes

The optimization target you choose determines how Launchable populates a dynamic subset with tests. We recently made it easier to choose the right target for your needs. Enter the Confidence curve!

The Confidence curve is a new chart you can use to choose an optimization target for running Launchable subsets.

To go along with this, we've also added a new subsetting target option called --confidence. This new option adds to the existing fixed duration and percentage duration target options.
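To make the comparison concrete, here's a rough sketch of what requesting a subset with each target might look like. This is illustrative only; the exact command shape and flag names (shown here as `--confidence`, `--time`, and `--target`, with a hypothetical test runner and path) may differ for your setup, so check the Launchable CLI documentation for the invocation that matches your test runner.

```shell
# Confidence target: run enough tests to reach the requested confidence level
launchable subset --confidence 90% <your-test-runner> <path-to-tests>

# Fixed duration target: run roughly this much test time
launchable subset --time 10m <your-test-runner> <path-to-tests>

# Percentage duration target: run this fraction of the full suite's duration
launchable subset --target 20% <your-test-runner> <path-to-tests>
```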

What does 'confidence' mean? Well, Launchable defines confidence as the likelihood that a test run will pass. For example, when you request a subset using --confidence 90%, Launchable populates the subset with enough relevant tests to find 9 out of 10 failing runs. In the graph above, Launchable will select between 10 and 15 minutes of tests.
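To build intuition for how a confidence target translates into a set of tests, here is a toy sketch. This is not Launchable's actual algorithm; it simply assumes each test has a predicted probability of failing when the build is broken, and greedily adds the most failure-prone tests until the estimated chance of catching at least one failure reaches the requested confidence.

```python
def subset_for_confidence(tests, confidence):
    """Toy illustration, not Launchable's real selection logic.

    tests: list of (name, duration_minutes, p_fail), where p_fail is a
    hypothetical predicted probability that the test fails on a broken build.
    Returns the selected test names and the estimated detection probability.
    """
    picked = []
    p_miss = 1.0  # probability that every picked test passes a broken build
    # Consider the most failure-prone tests first
    for name, duration, p_fail in sorted(tests, key=lambda t: -t[2]):
        if 1.0 - p_miss >= confidence:
            break  # estimated detection probability already meets the target
        picked.append(name)
        p_miss *= (1.0 - p_fail)
    return picked, 1.0 - p_miss

# Hypothetical tests with made-up failure probabilities
tests = [("t1", 2, 0.6), ("t2", 3, 0.5), ("t3", 5, 0.5), ("t4", 1, 0.1)]
subset, estimated = subset_for_confidence(tests, 0.90)
print(subset, round(estimated, 2))
```

In this toy model, a higher confidence target pulls in more tests, and as predicted failure probabilities sharpen over time, fewer tests are needed to reach the same confidence, which mirrors the shrinking subset duration described above.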

--confidence is a useful target to start with because the subset duration should decrease over time as Launchable learns more about your changes and tests.

You can read more about subset targets in our documentation.