Our core technology is a machine learning engine that predicts the likelihood of failure for each test case based on the source code changes being tested.
Integrating your test runners with Launchable’s SaaS service lets you run only the meaningful subset of tests based on the changes being tested.
Where does Launchable fit into the software development lifecycle?
Launchable can be applied in a variety of scenarios to reduce test feedback delay, such as:
Long running nightly test suites like integration or end-to-end tests
Test runs waiting in long queues due to limited testing capacity
Test suites that must run before merging a pull request
We want to learn from you! What’s the biggest testing bottleneck in your environment?
How does Launchable select the right subset of tests to run?
After we build a prediction model using your historical test results, you’ll use our CLI to integrate your test runner with Launchable’s prediction APIs for real-time predictions based on the code changes being tested.
The CLI asks Launchable which tests to run based on the changes being tested. The test runner then runs the right subset of tests.
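To make the workflow concrete, here is an illustrative sketch of a CI pipeline fragment using the Launchable CLI with Gradle. Exact commands and flags vary by test runner and CLI version, so treat this as a sketch and consult the Launchable documentation for your setup; `$BUILD_ID` and the paths shown are placeholders.

```shell
# Illustrative CI fragment (Gradle shown; flags vary by runner and
# CLI version -- consult the Launchable docs for specifics).

# 1. Tell Launchable about the build/changes being tested
launchable record build --name "$BUILD_ID"

# 2. Ask Launchable for the subset of tests worth running
launchable subset --build "$BUILD_ID" --target 20% gradle src/test/java > subset.txt

# 3. Run only that subset with your normal test runner
gradle test $(cat subset.txt)

# 4. Report the results back so the model keeps learning
launchable record tests --build "$BUILD_ID" gradle build/test-results/test/
```

The same record/subset/record-tests loop applies to other supported runners; only the runner-specific arguments change.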
How much historical data does Launchable need to make a prediction?
How long it takes to train a model is largely a function of the volume of test sessions you execute in a given time frame, so we need to learn about your specific environment and use cases before making an estimate.
That said, for most of the enterprise software development teams we've been talking to, 4-12 weeks seems to be a good baseline estimate for producing a sizable impact.
Do flaky tests impact Launchable’s effectiveness?
Flaky tests are tests that fail intermittently, so the model will keep selecting them and you'll continue to see them run. Launchable doesn't make flakiness better or worse.
Does this mean that Launchable stops the test execution when the first failure has been found?
Launchable won’t stop the test execution after the first failure has been found, but some CI tools and test runners can be configured to do this if desired.
What if we add new features (and therefore new tests)?
Launchable will recognize that the tests are new and will make sure to run them.
How does Launchable handle dependencies in the test suite sequence?
We will work with you to analyze the impact of dependencies on Launchable's ability to make predictions, and to resolve any issues if needed.
Technical and security details
What data do we need to send to Launchable?
Launchable’s predictive test selection service learns the relationship between code changes and the test cases impacted by those changes.
The two key inputs for this are:
1) Metadata ('features') about the code changes being tested, which includes:
the names and paths of files added/removed/modified in the change
number of modified lines in files in the change
Git commit hashes associated with the change
Git author details associated with those commits
2) Metadata ('features') about the test cases that were run, which includes:
the names and paths of test cases and test files
pass/fail/skipped status of each test case
the duration of each test case
test case associations to test suites (e.g. 'unit tests,' 'integration tests,' etc.)
This scope may change as we evolve the service.
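Purely for illustration, the two kinds of metadata above could be pictured as records like the following. This is a hypothetical shape invented for this example, not Launchable's actual schema or wire format; note that no source code content appears anywhere in it.

```python
# Hypothetical illustration of the two kinds of metadata described above.
# This is NOT Launchable's actual schema or wire format.

change_metadata = {
    "files": [
        # paths and change sizes, but no file contents
        {"path": "src/billing/invoice.py", "status": "modified", "lines_changed": 12},
    ],
    "commits": ["9fceb02d3a1c..."],   # Git commit hashes
    "authors": ["dev@example.com"],   # Git author details
}

test_metadata = {
    "suite": "unit tests",            # association to a test suite
    "cases": [
        {"path": "tests/test_invoice.py", "name": "test_total",
         "status": "pass", "duration_sec": 0.42},
    ],
}
```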
Does Launchable need access to our code?
Launchable does not need the actual content of your source code, only metadata about the changes.
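This kind of change metadata (file paths, changed-line counts, commit hashes, author details) is exactly what git itself exposes without file contents. The demo below is illustrative only, using a throwaway repo, and is not how the Launchable CLI actually collects data:

```shell
# Illustrative: git alone can supply the change metadata described above
# without exposing source content. Set up a throwaway repo for the demo:
demo=$(mktemp -d) && cd "$demo" && git init -q
git config user.email dev@example.com && git config user.name dev
printf 'a\nb\n' > billing.py && git add billing.py && git commit -qm 'change'

# Paths, line counts, hash, and author -- but no source content:
git log -1 --numstat --format='commit %H%nauthor %ae'
```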
What languages/frameworks/CI tools does Launchable work with?
Launchable works with any test runner or build tool that supports specifying a subset of tests to run. The Launchable CLI has built-in support for Bazel, Google Test, Gradle, Minitest, and Nose. However, most test runners and build tools support subsetting, and we're adding built-in support for more runners over time.
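As an illustration of what "supports specifying a subset of tests" means in practice, two common runners accept an explicit list of tests on the command line (the class and file names below are placeholders):

```shell
# Illustrative: most runners accept an explicit list of tests to run.

# Gradle: filter by test class
gradle test --tests 'com.example.FooTest'

# pytest: pass specific test files or node IDs
pytest tests/test_checkout.py tests/test_cart.py::test_add_item
```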
Is Launchable a cloud service? Do you offer an on-premise option?
Launchable is a cloud service; an on-premise option is not available.
How do you store customer data?
Launchable is a multi-tenant SaaS product. Each customer's data is kept separate from every other customer's.