Our core technology is a machine learning engine that predicts the likelihood of failure for each test case based on the source code changes being tested.
Integrating your test runners with Launchable’s SaaS service lets you run only the meaningful subset of tests based on the changes being tested.
Launchable can be applied in a variety of scenarios to reduce test feedback delay.
We want to learn from you! What’s the biggest testing bottleneck in your environment?
After we build a prediction model using your historical test results, you’ll use our CLI to integrate your test runner with Launchable’s prediction APIs for real-time predictions based on the code changes being tested.
The CLI asks Launchable which tests to run based on those changes, and the test runner then runs the right subset of tests.
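As a rough sketch, a CI job integrated via the CLI might look like the following. The build name, test runner, and paths here are illustrative, and the exact subset command depends on which test runner you use:

```bash
# Register the build and the code changes it contains
launchable record build --name "$BUILD_NAME" --source .

# Ask Launchable which tests to run for these changes; here the subset
# is capped at roughly 25% of the full suite's expected duration
launchable subset --build "$BUILD_NAME" --target 25% gradle src/test/java > launchable-subset.txt

# Run only the selected subset
gradle test $(cat launchable-subset.txt)

# Send the results back so the prediction model keeps learning
launchable record tests --build "$BUILD_NAME" gradle build/test-results/test/
```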
How long it takes to train a model depends primarily on the volume of test sessions you execute in a given time frame, so we need to learn about your specific environment and use cases before we can make an estimate.
That said, for most enterprise software development teams we've been talking to, 4-12 weeks seems to be a good baseline estimate for producing a sizable impact.
Flaky tests fail (that's what makes them flaky), so the model will continue to select and run them. Launchable doesn't make them better or worse.
Launchable won’t stop the test execution after the first failure has been found, but some CI tools and test runners can be configured to do this if desired.
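For example, Gradle's test task has a fail-fast switch, and Maven Surefire offers a similar property; other runners and CI tools have comparable options:

```bash
# Stop the Gradle test run at the first failing test
gradle test --fail-fast

# Equivalent behavior with Maven Surefire
mvn test -Dsurefire.skipAfterFailureCount=1
```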
Launchable will recognize that the tests are new and will make sure to run them.
We will work with you to analyze the impact of dependencies on Launchable’s ability to make predictions and to resolve any issues if needed.
Launchable’s predictive test selection service learns the relationship between code changes and the test cases impacted by those changes.
The two key inputs for this are:
1) Metadata ('features') about the code changes being tested, such as which files were added, removed, or modified
2) Metadata ('features') about the test cases that were run, such as test names and their pass/fail results
This scope may change as we evolve the service.
The actual content of your source code is not needed, only metadata about changes.
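To make "metadata, not content" concrete: the change features are the kind of information git itself exposes. The command below is an illustration of that kind of data, not Launchable's actual collection mechanism:

```bash
# Which files changed and by how many lines -- no file contents involved
git diff --numstat HEAD~1..HEAD
```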
Launchable works with any test runner or build tool that supports specifying a subset of tests to run. The Launchable CLI has built-in support for Bazel, Google Test, Gradle, Minitest, and Nose. Most other test runners and build tools also support subsetting, and we're adding built-in support for more runners over time.
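For example, with the built-in Minitest support the subset step might look like this (build name and paths are illustrative):

```bash
# Request a subset of the Minitest suite and pass it to the runner
launchable subset --build "$BUILD_NAME" --target 25% minitest test/**/*.rb > launchable-subset.txt
bundle exec rails test $(cat launchable-subset.txt)
```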
Launchable is a cloud service and is not available on-premises.
Launchable is a multi-tenant SaaS product. Each customer’s data is kept separate from every other customer’s.