Developer Nirvana. Achieving both speed and quality.
Tests are the bottleneck clogging your DevOps pipeline.
For every change, developers run through a gauntlet of tests and test suites. Every test and test suite run adds to the delay in the feedback developers are waiting on.
Run the right tests for a code change and nothing more
On each change, Launchable ranks your tests by relevance to the code being changed and builds a subset from that ranking in real time. Run a fraction of your test suite while maintaining high confidence that if a failure exists, it will be found.
Launchable is a good fit for every phase of the testing pyramid, as long as results are reported in a binary pass/fail format. From unit tests to service tests to UI tests, our platform can handle them all.
Shorten test feedback delay by running the tests that are most likely to fail first.
Subset long-running suites and shift-left. Run a subset on every PR or at the top of each hour.
Move a subset of your long-running UI suite earlier in your development cycle.
Using canary deployment? Shift tests that are unlikely to fail post-deploy.
Predictive Test Selection works and works well
Stop waiting months on DevOps transformation efforts. Impact in days!
The ML science under the hood
Predictive Test Selection builds ML models that find correlations between code metadata and test failures. This rapidly evolving technique is used by companies like Facebook to deliver quickly without sacrificing quality.
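To make the idea concrete, here is a toy sketch, not Launchable's actual model, that scores tests by how often they failed in past runs when the same files changed. All names and data here are hypothetical.

```python
from collections import defaultdict

def train(history):
    """history: list of (changed_files, failed_tests) pairs from past runs.
    Returns co-occurrence counts of (file, test) failure pairs."""
    counts = defaultdict(int)
    for changed_files, failed_tests in history:
        for f in changed_files:
            for t in failed_tests:
                counts[(f, t)] += 1
    return counts

def rank_tests(counts, changed_files, all_tests):
    """Rank tests by how strongly past failures correlate with the changed
    files. Unseen tests score 0 here; a real model handles them smarter."""
    def score(test):
        return sum(counts[(f, test)] for f in changed_files)
    return sorted(all_tests, key=score, reverse=True)

# Hypothetical history: which tests failed when which files changed.
history = [
    ({"parser.py"}, {"test_parser"}),
    ({"parser.py", "lexer.py"}, {"test_parser", "test_lexer"}),
    ({"api.py"}, {"test_api"}),
]
counts = train(history)
ranking = rank_tests(counts, {"parser.py"}, ["test_api", "test_lexer", "test_parser"])
print(ranking)  # test_parser ranks first: it failed most when parser.py changed
```

A production model uses far richer signals than raw co-occurrence, but the core intuition is the same: past failures conditioned on code changes predict future ones.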
Launchable builds a model based on your data that helps you speed up your testing
Test feedback wait times drop dramatically for developers, driving cycle times down. You don't have to wait months for DevOps transformation efforts to show results.
Most models train in about 3-6 weeks (and keep training continually). Most teams can start using Launchable in production as early as day one.
Setup in less than 30 minutes. 4 lines of code in your build script
Sit back. Let us learn and recommend the right tests. See tests fly by faster.
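As a rough sketch (the command names below reflect Launchable's CLI as we understand it; treat exact flags and placeholders as illustrative, and check the docs for your test runner), the four lines in your build script might look like:

```shell
# 1. Record the build so Launchable can link code changes to test results
launchable record build --name $BUILD_ID

# 2. Request a subset of tests ranked by likelihood of failure
launchable subset --build $BUILD_ID --target 25% <your-test-runner> > subset.txt

# 3. Run only that subset with your normal test runner
<your-test-runner> $(cat subset.txt)

# 4. Report results back so the model keeps learning
launchable record tests --build $BUILD_ID <your-test-runner> ./reports
```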
Frequently asked questions
Are you saying "don't run all the tests?"
Not quite. That said, we want you to run tests smartly depending on where you are in the SDLC. With Launchable, you can test more frequently because you home in on the right tests. Read more in our comprehensive FAQ.
What data does Launchable need?
We use Git metadata (files changed, etc.) and test results to build a machine learning model. Our data examples page gives you an idea of what this looks like. Data privacy and security policies are also available on our docs site.
How long does it take to start working?
You can start using recommendations from day one, or you can wait for the model to be fully trained while you work on other things. Most models are fully trained (and keep training) in 3-6 weeks, depending on the frequency of test results and failures. Read more here.
Do you have an on-premises version?
We are a SaaS solution only.
What about new tests and flaky tests?
The model prioritizes tests it hasn't seen before so that there is no gap in coverage. Once it has seen a test, it starts predicting for it as it normally would (more here). The model also learns about flaky tests as it makes recommendations; you can read about our upcoming Flakiness Insights Dashboard feature.