Run a Subset of Tests to Achieve Confidence in a Build

Introducing the Confidence Curve

Key Takeaways

  • Launchable defines confidence as the likelihood that a test run will pass.

  • The Confidence curve is a new chart that makes it easier to choose the right optimization target for your needs.

  • You can use it to choose an optimization target for running Launchable subsets.

The optimization target you choose determines how Launchable populates a dynamic subset with tests. We recently made it easier to choose the right target for your needs. Enter the Confidence curve!

The Confidence curve is a new chart available at app.launchableinc.com. You can use it to choose an optimization target for running Launchable subsets.

To go along with this, we've also added a new subsetting target option called --confidence. This new option joins the existing fixed-duration and percentage-duration target options.
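As a rough sketch of how the three targets compare (the build name, test runner, and paths below are placeholders, not from this post), a subset request might look like:

```shell
# Illustrative sketch only: "my-build" and the pytest arguments are
# placeholders for your own build name and test suite.

# Fixed duration: select up to 10 minutes of tests.
launchable subset --build my-build --time 10m pytest tests/

# Percentage duration: select tests covering 20% of the full run's time.
launchable subset --build my-build --target 20% pytest tests/

# Confidence: select enough tests to reach a 90% confidence level.
launchable subset --build my-build --confidence 90% pytest tests/
```

See the Launchable CLI documentation for the exact syntax for your test runner.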

What does 'confidence' mean? Launchable defines confidence as the likelihood that a test run will pass. For example, when you request a subset using --confidence 90%, Launchable populates the subset with enough relevant tests to be expected to catch 9 out of 10 failing runs. In the graph above, that corresponds to selecting between 10 and 15 minutes of tests.
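Putting that together, an end-to-end run with a confidence target might look like the sketch below (assuming a pytest suite; "my-build-123" and the paths are hypothetical placeholders):

```shell
# Hypothetical sketch: request a subset expected to catch ~9 of 10
# failing runs, saving the selected tests to a file.
launchable subset \
  --build my-build-123 \
  --confidence 90% \
  pytest tests/ > launchable-subset.txt

# Run only the tests Launchable selected for this subset.
pytest $(cat launchable-subset.txt)
```

The same pattern applies to other supported test runners; only the runner name and arguments change.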

--confidence is a useful target to start with because the subset duration should decrease over time as Launchable learns more about your changes and tests.

You can read more about subset targets in our documentation.
