Finding Flaky Tests And Optimizing Parallel Test Executions

July 2021 Launchable Product Updates

Key Takeaways

  • Build confidence in your test suite by defending against flaky tests with Launchable's Flaky Tests Insights, now available in beta.

  • Open source projects can now use Launchable to easily optimize test execution.

  • Choose the right target for your needs with the Confidence curve.

  • Let pre-built profiles handle the nuances of specific languages and test runners for you.

The team at Launchable has been busy working on product advancements to help developers with Continuous Quality, bridging the gap between speed and quality. With our focus on providing insights from examining code change and test suite data, here’s our latest roundup of Launchable product enhancements helping developers deliver high quality software faster.

Find the flakiest tests in your test suites

Trust your test suite with Flaky Tests Insights (beta) 

Build confidence in your test suite by defending against flaky tests with Launchable's Flaky Tests Insights, now available in beta. Flaky tests waste time and erode trust in your results. Flaky Tests Insights analyzes your test runs and scores each test on a flakiness scale. Sign up for the free trial to start sending data, and Launchable will identify the flakiest tests in your suite so you can fix or remove them. 

Launchable now works with open source projects

Optimize open source test execution with Tokenless Authentication 

With tokenless authentication, you can integrate our test optimization into your public pipelines. Open source projects can now use Launchable to optimize test execution without managing a secret API token: the CLI uses your CI/CD service provider's public API to verify that it is running in a genuine pipeline. Tokenless authentication is currently available for projects that use GitHub Actions. 
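In a GitHub Actions job, the setup might look like the sketch below. The environment variable names are illustrative assumptions, not a definitive recipe; check the Launchable CLI documentation for the exact variables your pipeline needs.

```shell
# Sketch of a tokenless setup inside a GitHub Actions step.
# Variable names below are assumptions for illustration.
# Note there is no LAUNCHABLE_TOKEN secret: the CLI identifies the
# workspace directly and relies on GitHub's public API to confirm
# the pipeline is genuine.
export LAUNCHABLE_ORGANIZATION="your-org"       # placeholder organization
export LAUNCHABLE_WORKSPACE="your-workspace"    # placeholder workspace

# Confirm that the CLI can authenticate from this pipeline.
launchable verify
```

Because nothing here is secret, the same workflow file works for every fork and pull request of a public repository.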

Optimize parallel test executions to run a subset of tests

Run Launchable subsets in parallel with Launchable split-subset

Take advantage of predictive test selection and parallelization at the same time with Launchable split-subset. Now you can divide a subset into equal chunks to be run in parallel, rather than choosing between subsetting or parallelizing your tests.
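A minimal sketch of the flow, assuming a pytest suite split across two parallel workers (the build name, paths, and file names are placeholders):

```shell
# 1. Request a subset as usual, but add --split so the CLI prints a
#    subset ID (e.g. "subset/12345") instead of the test list itself.
launchable subset --target 25% --split --build "$BUILD_NAME" pytest tests/ > subset-id.txt

# 2. Each parallel worker exchanges the subset ID for its own slice.
launchable split-subset --subset-id "$(cat subset-id.txt)" --bin 1/2 pytest > tests-bin1.txt  # worker 1
launchable split-subset --subset-id "$(cat subset-id.txt)" --bin 2/2 pytest > tests-bin2.txt  # worker 2

# 3. Each worker runs only the tests in its bin.
pytest $(cat tests-bin1.txt)
```

The subset is computed once, so every worker sees a consistent view of the selected tests before they are divided into bins.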

Launchable can identify the right tests based on a confidence threshold

Run a subset of tests to achieve confidence in a build using the Confidence curve 

Your chosen target determines how Launchable populates a dynamic subset with tests. Launchable has now made it easier for you to choose the right target for your needs with the Confidence curve. This newly available chart can be used to choose an optimization target for running your Launchable subsets. 

Launchable also has a new subsetting target option called --confidence, defined as the likelihood that a subset will catch a failing run. When you request a subset using --confidence 90%, Launchable populates the subset with enough relevant tests to find 9 out of 10 failing runs. This is a useful target to start with because the subset duration should decrease over time as Launchable learns more about your changes and tests.  
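In practice, the confidence target is just another flag on the subset command. Here is a sketch assuming a pytest suite, with a placeholder build name and paths:

```shell
# Ask for the smallest subset expected to catch 90% of failing runs.
launchable subset --confidence 90% --build "$BUILD_NAME" pytest tests/ > launchable-subset.txt

# Run only the selected tests.
pytest $(cat launchable-subset.txt)
```

Unlike a fixed --target percentage, the same --confidence value can select fewer tests as the model improves, so the subset shrinks without you retuning the flag.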

Native support for Pytest, NUnit, and Android Debug Bridge (adb) in the Launchable CLI

Launchable integrates with over a dozen build and testing tools

Continuing in our mission to make integration even easier, we’ve pre-built profiles into the Launchable CLI for popular build and testing tools. These profiles eliminate the nuances of specific languages and test runners so you only have to add a few lines to your build script to get started.

Recently, we added support for Pytest, NUnit, and Android Debug Bridge (adb) in addition to the dozen-plus existing profiles. If you have a highly custom setup, you can build your own profile for the CLI.
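With the new pytest profile, for example, recording results takes only a few lines in your build script. The sketch below uses placeholder names and assumes pytest writes JUnit-style XML reports:

```shell
# Record the build so Launchable can associate test results with changes.
launchable record build --name "$BUILD_NAME" --source src=.

# Run the suite, emitting a JUnit XML report for the CLI to consume.
pytest --junitxml=reports/report.xml tests/

# Send the results to Launchable using the pytest profile.
launchable record tests --build "$BUILD_NAME" pytest reports/report.xml
```

The profile handles parsing the report format and mapping results back to individual tests, which is the part that previously required per-runner glue code.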

Seeking Your Expert Feedback on Our AI-Driven Solution

Is quality a focus? Working with nightly, integration, or UI tests?
Our AI can help.