Test faster, push more commits, improve dev velocity and happiness

Launchable's ML identifies and runs the tests with the highest probability of failing, based on code and test metadata, to speed up the dev feedback loop

Predictive Test Intelligence

Developer-first testing. Loved by devs and quality professionals.

Works where developers are. Cloud Native, Embedded, Mobile and Traditional Applications.

Launchable language and platform support

My test runtimes went down 90(!) percent. Deployment to Heroku went from 30 minutes to 10 minutes. It is great, just great!

Masayuki Oguni

CEO and Lead Developer, Manba

Developer Nirvana. Achieving both speed and quality.

Tests are the bottleneck

Tests are the bottleneck clogging your DevOps pipeline.

For every change, developers run a gauntlet of tests and test suites. Every test run and every suite run adds to the delay in the feedback developers are waiting on.

Run the right tests for a code change and nothing more

On each change, Launchable ranks tests by importance to that change and lets you create a unique subset (based on this ranking) in real time. Run a fraction of your test suite while maintaining high confidence that if a failure exists, it will be found. See the sketch after the examples below.

  • Reduce a 30-minute pre-merge suite to 6 minutes
  • Run a subset of a 5-hour nightly suite every hour
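
For illustration, requesting such a subset from the Launchable CLI might look like the line below. This is a minimal sketch assuming a Maven project and a $BUILD_NUMBER variable from your CI system; check the CLI docs for the flags that match your test runner.

    # Ask Launchable to rank this build's tests and return the most
    # important ones, capped at 25% of the predicted total test time
    launchable subset --build "$BUILD_NUMBER" --target 25% maven src/test/java > launchable-subset.txt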
From: Long feedback cycles

To: Quick, iterative feedback cycles

Test Pyramid

Support for all types of tests and use cases

Launchable is a good fit for every phase of the testing pyramid, as long as the results are reported in a binary (pass/fail) format. From unit tests to service tests to UI tests, our platform can handle them all.

Pull requests

Shorten test feedback delay by running the tests most likely to fail first.

Integration tests

Subset long-running suites and shift left. Run a subset on every PR or at the top of each hour.

UI tests

Move a subset of your long-running UI suite earlier in your development cycle.

Post-deploy

Using canary deployments? Shift the tests that are unlikely to fail to the post-deploy stage.

Predictive Test Selection works and works well

Stop waiting months on DevOps transformation efforts. Impact in days!

Predictive Test Selection Performance

The ML science under the hood

Predictive Test Selection builds ML models that correlate code metadata with test failures. This rapidly evolving science is used by companies like Facebook to deliver with high confidence without sacrificing quality.

Launchable builds a model based on your data that helps you speed up your testing

Reducing wait times for developers

Rapid drops in wait times for developers

Test feedback wait times drop dramatically for developers, driving cycle times down. You don't have to wait months for a DevOps transformation to see results.

Most models train in about 3-5 weeks (and keep training continually). Most teams can start using Launchable in production as early as day one.

Setup in less than 30 minutes. 4 lines of code in your build script

Sit back. Let us learn and recommend the right tests. See tests fly by faster.

Launchable CLI
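
As a sketch, those four lines might look like the following in a Maven project, assuming the pip-installed CLI and a LAUNCHABLE_TOKEN set in your CI environment (paths and flags vary by test runner):

    pip install --user launchable                   # one-time: install the CLI
    launchable record build --name "$BUILD_NUMBER"  # register this build and its Git metadata
    launchable subset --build "$BUILD_NUMBER" --target 25% maven src/test/java > subset.txt
    launchable record tests --build "$BUILD_NUMBER" maven ./target/surefire-reports  # report results so the model keeps learning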

Frequently asked questions

  • Are you saying "don't run all the tests?"

    Not quite. That said, we would like you to run tests smartly depending on where you are in the SDLC. With Launchable, you test more frequently because you can home in on the right tests. Read more in our comprehensive FAQ.

  • What data does Launchable need?

    We use Git metadata (files changed, etc.) and test results to build a machine learning model. Our Data examples page gives you an idea of what this looks like; see also the sketch after these FAQs. Data privacy and security policies are available on our docs site.

  • How long does it take to start working?

    You can start using recommendations from day 1, or you can wait for the model to be fully trained while you're off doing other things. Most models are fully trained (and keep training) in 3-6 weeks, depending on the frequency of test results and failures. Read more here.

  • Do you have an on-premises version?

    We are a SaaS solution only.

  • What about new tests and flaky tests?

    The model prioritizes the tests it doesn't know about first, so that there is no gap in coverage. Once it has seen the tests, it starts predicting as it normally would (more here). Our model learns about flaky tests as it makes recommendations. You can read about our Flakiness Insights Dashboard feature, which is coming soon.
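
To make the Git metadata answer above concrete, this is the kind of commit information involved; the command is an illustration only, not Launchable's actual extraction:

    # The latest commit's hash, author, and date, plus the files it changed
    git show --stat --format='%H %an %ad' HEAD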

You don’t have time for slow tests

Eliminate slow test cycles. Test what matters.