Frequently asked questions

How Launchable works

  • What is Launchable's core technology?

    Our core technology is a machine learning engine that predicts the likelihood of failure for each test case based on the source code changes being tested.

    Integrating your test runners with Launchable’s SaaS service lets you run only the meaningful subset of tests based on the changes being tested in the order that minimizes feedback delay to developers.
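
    To make the idea concrete, here is a minimal Python sketch of predictive prioritization. Everything in it is hypothetical: the probabilities, function names, and time-budget cutoff are our illustration, not Launchable’s actual model or API.

      # Illustrative sketch only; Launchable's real model and APIs differ.
      # Given hypothetical per-test failure probabilities, run the riskiest
      # tests first and stop once a wall-clock budget is exhausted.
      def prioritize(tests, time_budget_seconds):
          """tests: (name, predicted_failure_probability, expected_duration) tuples."""
          ranked = sorted(tests, key=lambda t: t[1], reverse=True)
          subset, elapsed = [], 0.0
          for name, probability, duration in ranked:
              if elapsed + duration > time_budget_seconds:
                  break
              subset.append(name)
              elapsed += duration
          return subset

      tests = [("test_login", 0.62, 30.0), ("test_search", 0.05, 12.0),
               ("test_checkout", 0.31, 45.0)]
      # Riskiest tests run first: ['test_login', 'test_checkout', 'test_search']
      print(prioritize(tests, time_budget_seconds=90))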

  • Where does Launchable fit into the software development lifecycle?

    Launchable can be applied in a variety of scenarios to reduce test feedback delay, such as:

    • Long running nightly test suites like integration or end-to-end tests
    • Test runs waiting in long queues due to limited testing capacity
    • Test suites that must run before merging a pull request

    We want to learn from you! What’s the biggest testing bottleneck in your environment?

  • How does Launchable reorder tests and select the right subset?

    After we build a prediction model using your historical test results, you’ll use our plugins to integrate your test runner with Launchable’s prediction APIs for real-time predictions based on the code changes being tested.

    The plugin asks Launchable which tests to run based on the changes being tested. It then interacts with the test runner to run the right subset of tests in the right order.
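
    As a rough sketch of this request/response cycle (the function and parameter names below are hypothetical stand-ins, not the real plugin API):

      # Conceptual sketch of the plugin's interaction with the service.
      # request_subset is a placeholder for the call the plugin makes to
      # Launchable's prediction API; here it simply returns the full list.
      def request_subset(full_test_list, change_metadata):
          return full_test_list  # the real call returns an ordered subset

      def run_prioritized(full_test_list, change_metadata, run_test):
          ordered_subset = request_subset(full_test_list, change_metadata)
          for test in ordered_subset:
              run_test(test)  # hand each test to the test runner, in order

      run_prioritized(["test_a", "test_b"], {"commit": "abc123"}, print)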

  • How much historical data does Launchable need to make a prediction?

    How long it takes to train a model is really a function of the volume of test sessions you execute in a given time frame, so we need to learn about your specific environment and use cases before we can make an estimate.

    That said, for most of the enterprise software development teams we’ve been talking to, 4-12 weeks of data seems to be a good baseline estimate for producing a sizable impact. As an example, we used three months of data to create the Spark model.

  • What is Time to First Failure (TTFF)?

    Time to First Failure (TTFF) refers to the wall-clock time from the start of a test run to the end of the first test that fails. We believe this is a hugely important metric: the sooner a developer is aware of the first test failure, the sooner they can start working on a fix. Each potential application of Launchable’s predictive test selection capability is designed to reduce TTFF and overall test feedback delay.

    Why is it time to first failure, not time to first feedback?

    Most of the time, builds succeed, and you only know that after all the tests have run. If we computed “time to first feedback” including successful runs, the difference we make would be a wash: developers who wait for the success signal before acting would wait just as long either way. Measuring time to first failure instead lets us give developers an early cue as soon as the test execution is past the riskiest tests.
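
    As a minimal sketch of the definition above (the data shape is invented for illustration):

      # TTFF = wall-clock time from the start of the run to the end of the
      # first failing test; if nothing fails, there is no TTFF to measure.
      def time_to_first_failure(run_start, results):
          """results: (test_name, status, end_time) tuples in execution order."""
          for name, status, end_time in results:
              if status == "fail":
                  return end_time - run_start
          return None  # all tests passed

      results = [("test_a", "pass", 12.0), ("test_b", "fail", 30.5),
                 ("test_c", "pass", 61.0)]
      print(time_to_first_failure(0.0, results))  # 30.5 seconds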

  • How do flaky tests impact TTFF calculation?

    Ideally flaky test failures shouldn’t count toward TTFF. However, we don’t have a mechanism for classifying test failures, so all failures are treated the same. Classifying test failures is an important problem, but we are not focused on it today.

  • Do flaky tests impact Launchable’s effectiveness?

    Flaky tests fail, so you’ll continue to see them selected and run. Launchable doesn’t make them better or worse.

  • Does this mean that Launchable stops the test execution when the first failure has been found?

    Launchable won’t stop the test execution after the first failure has been found, but some CI tools and test runners can be configured to do this if desired.

  • What if we add new features (and therefore new tests)?

    Launchable will recognize that the tests are new and will make sure to run them.

  • How does Launchable handle dependencies in the test suite sequence?

    We will work with you to analyze the impact of dependencies on Launchable’s ability to make predictions and to resolve any issues if needed.

Technical and security details

  • What data do we need to send to Launchable?

    Launchable’s predictive test selection service learns the relationship between code changes and the test cases impacted by those changes.

    The two key inputs for this are:

    1) Metadata ('features') about the code changes being tested, which includes:

    • the names and paths of files added/removed/modified in the change
    • the number of modified lines in each file in the change
    • the location of the modified lines within each file
    • Git commit hashes associated with the change
    • Git author details associated with those commits

    2) Metadata ('features') about the test cases that were run, which includes:

    • the names and paths of test cases and test files
    • pass/fail/skipped status of each test case
    • the duration of each test case
    • test case associations to test suites (e.g. ‘unit tests,' ‘integration tests,’ etc.)

    This scope may change as we evolve the service.
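
    To make that scope concrete, metadata records of the kind described above might look like the following sketch. The field names are invented for illustration; they are not Launchable’s actual schema.

      # Hypothetical change metadata: file names, line counts, and line
      # locations only; no source code content is included.
      change_metadata = {
          "commit": "7f3a2c9",
          "author": "dev@example.com",
          "files": [
              {"path": "src/payment/Checkout.java",
               "status": "modified",
               "lines_changed": 18,
               "changed_line_ranges": [(42, 55), (120, 124)]},
          ],
      }

      # Hypothetical test metadata: names, statuses, durations, and each
      # test case's association to a suite.
      test_metadata = {
          "suite": "integration tests",
          "cases": [
              {"path": "src/test/java/CheckoutTest.java",
               "name": "testDiscountApplied",
               "status": "fail",
               "duration_seconds": 4.2},
          ],
      }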

  • Does Launchable need access to our code?

    No. The actual content of your source code is not needed; Launchable only needs the metadata about changes described above.

  • What languages/frameworks/CI tools does Launchable work with?

    There are two stages to a Product Advisor Program engagement: analysis and pilot.

    In the analysis phase, you’ll install a program called the Launchable ingester (see below). The ingester works with Jenkins today, but we may add support for more CI tools as needed. The primary requirement is that your test results be made available in a machine-readable format such as JUnit/xUnit.

    In the pilot phase, the Launchable plugin integrates your test runner or build tool with our cloud service to directly manipulate test execution. We will build a plugin for the test runner(s) or build tool(s) you use in preparation for a pilot in your environment.

  • How does the Launchable ingester work?

    The Launchable ingester runs in a Docker container in your environment. It interacts with your Jenkins server and test frameworks, scraping historical data to send to Launchable to train a prediction model.
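
    As an illustration of the kind of scraping involved (our sketch, not the ingester’s actual implementation), the test-result half of the job amounts to pulling metadata out of JUnit-style XML reports without reading any source code:

      # Extract per-test-case metadata from a JUnit XML report.
      import xml.etree.ElementTree as ET

      def extract_results(junit_xml_path):
          records = []
          for case in ET.parse(junit_xml_path).getroot().iter("testcase"):
              if case.find("failure") is not None or case.find("error") is not None:
                  status = "fail"
              elif case.find("skipped") is not None:
                  status = "skipped"
              else:
                  status = "pass"
              records.append({
                  "name": case.get("name"),
                  "classname": case.get("classname"),
                  "duration_seconds": float(case.get("time", 0)),
                  "status": status,
              })
          return records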

  • Is Launchable a cloud service? Do you offer an on-premise option?

    Launchable is a cloud service; it is not currently available on-premises.

  • How do you store customer data? 

    Launchable is a multi-tenant SaaS product. Each customer’s data is kept separate from that of other customers.