Launchable's machine learning model helps to drastically reduce long-running test suites to just the tests relevant to the code changes.
To start reducing your test run time by up to 80-90 percent, get started with a few simple steps.
At Launchable, we subset, reorder, and record tests to help developers get the most out of their tests and get feedback faster, without much effort. Launchable’s machine learning algorithm allows you to reduce long-running test suites to just the tests that are relevant to your code changes. This can have a dramatic impact on test cycle times, in some cases reducing test run time by 80 to 90 percent. Our CLI is designed to let developers interact with the Launchable service within the context of a build script.
At its heart, the CLI contains three core commands:
launchable record build: run when you build software; we record which Git revisions the software was created from.
launchable record tests: run after your test runner, so we can analyze test reports.
launchable subset: run before your tests, so our service can tell you which tests to run.
Typical usage looks something like this:
# Record commits
launchable record build ...

# Build software the way you normally do, for example:
bundle install

# Initiate a Launchable session
launchable record session ...

# Ask Launchable which tests to run for this build
launchable subset ... > tests.txt

# Run those tests, for example:
bundle exec rails test -v $(cat tests.txt)

# Send test results to Launchable
launchable record tests ...
We’ve used Rails in our example above, but there is nothing about the Launchable solution that limits it to a particular language or framework.
One of the key engineering design challenges for our team was to figure out how we can effectively cope with the endless list of test runners and build tools that people use. Our service is generic enough at the fundamental level, but test runners have different interfaces. For example:
Test names are different depending on the language or framework. Some ecosystems use file names (e.g., Ruby, Python), while others use class names (e.g., Java, C#, C++).
Test report formats are different depending on the framework. They all roughly follow the JUnit report format, but the details vary.
We solved this problem by making the CLI extensible. It consists of the core engine and the glue layer that adjusts the input/output to the formats supported by each tool. It’s a pseudo plugin system, which won’t be a surprise to those familiar with my work on Jenkins 😀. The end result is that it’s quite simple to support different test runners. (If you’re interested in contributing, our CLI is open-source and we welcome contributions!)
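The design above can be sketched roughly as follows. This is a hypothetical toy, not Launchable's actual implementation: a core engine dispatches to per-runner glue functions registered under the runner's name, and the runner names, function names, and identifier rules here are all invented for illustration:

```python
from typing import Callable, Dict, List

# Registry mapping a test runner's name to its glue function.
GLUE: Dict[str, Callable[[str], List[str]]] = {}

def plugin(name: str):
    """Decorator: register a glue function that converts a runner's raw
    output into normalized test identifiers for the core engine."""
    def register(fn):
        GLUE[name] = fn
        return fn
    return register

@plugin("minitest")
def minitest_glue(raw: str) -> List[str]:
    # Ruby ecosystems identify tests by file path.
    return [token for token in raw.split() if token.endswith(".rb")]

@plugin("maven")
def maven_glue(raw: str) -> List[str]:
    # Java ecosystems identify tests by fully qualified class name.
    return [token for token in raw.split() if "." in token]

def normalize(runner: str, raw: str) -> List[str]:
    """Core engine entry point: dispatch to the registered glue layer."""
    try:
        return GLUE[runner](raw)
    except KeyError:
        raise ValueError(f"unsupported test runner: {runner}")

print(normalize("minitest", "test/user_test.rb test/order_test.rb"))
# ['test/user_test.rb', 'test/order_test.rb']
```

The point of the pattern is that supporting a new test runner only means adding one more registered glue function; the core engine never changes.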
To get started you’ll need an API key. API keys are currently only available to early access participants.
Our CLI is available as a Python package. It can be installed with pip:
pip3 install --user launchable
To continue, follow our getting started guide. And remember to tell us what you think on Twitter or our newly created Discord server! We’re interested in all feedback from early users, good or bad; we want to make the Launchable experience as smooth as possible. Also, don’t be afraid to reach out for help! This is early software, so there are bound to be rough edges.
Does any of this sound interesting? Reach out if you have thoughts or sign up for the beta to get access to Launchable before anyone else!