Our core technology is a machine learning engine that predicts the likelihood of failure for each test case based on the source code changes being tested.
Integrating your test runners with Launchable’s cloud service lets you run only a meaningful subset of tests based on the changes being tested.
Launchable can be applied in a variety of scenarios to reduce test feedback delay.
We want to learn from you! What’s the biggest testing bottleneck in your environment?
After we build a prediction model using your historical test results, you’ll use our CLI to integrate your test runner with Launchable’s prediction APIs for real-time predictions based on the code changes being tested.
The CLI asks Launchable which tests to run based on the changes being tested. The test runner then runs just that subset of tests.
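To make this concrete, here is a minimal sketch of what that integration can look like in a CI script, assuming a Maven project. The build name, paths, and the 20% subsetting target are illustrative placeholders; the exact profile and options for your test runner may differ, so check the integrations and documentation pages referenced below.

```bash
# Authenticate the CLI with the API token for your workspace
export LAUNCHABLE_TOKEN=<your API token>

# 1. Record the build so Launchable knows which code changes are being tested
launchable record build --name "$BUILD_NAME" --source src=.

# 2. Ask Launchable which tests are worth running for this build
launchable subset \
  --build "$BUILD_NAME" \
  --target 20% \
  maven src/test/java > launchable-subset.txt

# 3. Run only that subset with the test runner (Maven Surefire in this sketch)
mvn test -Dsurefire.includesFile=launchable-subset.txt

# 4. Report the results back so the prediction model keeps learning
launchable record tests --build "$BUILD_NAME" maven './**/target/surefire-reports'
```

The subset step writes the selected tests to a file that the test runner consumes, and recording the results afterwards feeds future predictions.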
How long it takes to train a model is largely a function of how many test sessions you execute in a given time frame, so we need to learn about your specific environment and use cases before we can provide an estimate.
That said, for most of the enterprise software development teams we’ve been talking to, 4-12 weeks seems to be a good baseline estimate for producing a sizable impact.
Flaky tests are tests that fail, so you’ll continue to see them run. Launchable doesn’t make them better or worse.
Launchable won’t stop the test execution after the first failure has been found, but some CI tools and test runners can be configured to do this if desired.
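If you do want fail-fast behavior, it is configured in the CI tool or the test runner itself rather than through Launchable. For example (illustrative flags, assuming pytest or Maven Surefire):

```bash
# Stop after the first failing test (pytest)
pytest --maxfail=1

# Skip the remaining tests after one failure (Maven Surefire)
mvn test -Dsurefire.skipAfterFailureCount=1
```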
Launchable will recognize that the tests are new and will make sure to run them.
We will work with you to analyze the impact of dependencies on Launchable’s ability to make predictions and resolve any issues if needed.
You can find the latest information on our integrations page.
Launchable is a cloud service and is not available on-premises.
You can find the latest information on our documentation site.