Frequently asked questions

How Launchable works

  • What is Launchable's core technology?

    Our core technology is a machine learning engine that predicts the likelihood of failure for each test case based on the source code changes being tested.

    Integrating your test runners with Launchable’s SaaS service lets you run only the meaningful subset of tests based on the changes being tested.

  • Where does Launchable fit into the software development lifecycle?

    Launchable can be applied in a variety of scenarios to reduce test feedback delay, such as:

    • Long running nightly test suites like integration or end-to-end tests
    • Test runs waiting in long queues due to limited testing capacity
    • Test suites that must run before merging a pull request

    We want to learn from you! What’s the biggest testing bottleneck in your environment?

  • How does Launchable select the right subset of tests to run?

    After we build a prediction model using your historical test results, you’ll use our CLI to integrate your test runner with Launchable’s prediction APIs for real-time predictions based on the code changes being tested.

    The CLI asks Launchable which tests to run for the changes being tested, and your test runner then runs that subset of tests.
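
    For illustration, a CI integration for a Maven project might look roughly like the sketch below. The build name, target percentage, and report path are assumptions for this example; check our documentation for the exact commands and options for your test runner.

        # Record the build under test so results can be mapped to code changes
        launchable record build --name $BUILD_ID --source src=.

        # Ask Launchable for a subset of tests for this build
        # (here, requesting roughly 20% of the full suite)
        launchable subset --build $BUILD_ID --target 20% maven src/test/java > subset.txt

        # Run only that subset, then report the results back to Launchable
        mvn test -Dsurefire.includesFile=subset.txt
        launchable record tests --build $BUILD_ID maven ./target/surefire-reports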

  • How much historical data does Launchable need to make a prediction?

    How long it takes to train a model is really a function of how many test sessions you execute in a given time frame, so we need to learn about your specific environment and use cases before we can make an estimate.

    That said, for most of the enterprise software development teams we’ve been talking to, 4-12 weeks seems to be a good baseline estimate for producing a sizable impact.

  • Do flaky tests impact Launchable’s effectiveness?

    Flaky tests are tests that fail intermittently, so you’ll continue to see them selected and run. Launchable doesn’t make them better or worse.

  • Does this mean that Launchable stops the test execution when the first failure has been found?

    Launchable itself won’t stop test execution after the first failure is found, but many CI tools and test runners can be configured to do this if desired, as in the example below.
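
    For instance, if your suite runs with pytest, its built-in fail-fast options stop the run on failure; this is a test runner setting, independent of Launchable’s subsetting:

        # Stop the entire run as soon as the first test fails
        pytest -x

        # Equivalent: allow at most one failure before stopping
        pytest --maxfail=1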

  • What if we add new features (and therefore new tests)?

    Launchable will recognize that the tests are new and will make sure to run them.

  • How does Launchable handle dependencies in the test suite sequence?

    We will work with you to analyze the impact of dependencies on Launchable’s ability to make predictions, and to resolve any issues if needed.

Technical and security details

  • What languages/frameworks/CI tools does Launchable work with?

    You can find the latest information on our documentation site.

  • Is Launchable a cloud service? Do you offer an on-premise option?

    Launchable is a cloud service that is not available on-premise.

  • Where can I find information about your security policies? 

    You can find the latest information on our documentation site.