Works where developers are: cloud native, embedded, mobile, and traditional applications.
For every change, developers run through a gauntlet of tests and test suites. Each test and test suite run adds to the delay before developers get the feedback they are waiting for.
On each change, Launchable ranks your tests by their relevance to that change and creates a unique subset, based on this ranking, in real time. Run a fraction of your test suite while still maintaining high confidence that if a failure exists, it will be found.
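The core idea can be sketched in a few lines, assuming you already have per-test relevance scores (the names and data here are hypothetical; in practice the Launchable service produces the ranking and the subset for you):

```python
# Sketch: take the top fraction of a relevance ranking as the subset.
# "scores" maps test name -> predicted relevance to the current change
# (higher = more likely to surface a failure). Hypothetical data.

def subset_tests(scores: dict[str, float], fraction: float) -> list[str]:
    """Return the top `fraction` of tests, most relevant first."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    k = max(1, round(len(ranked) * fraction))
    return ranked[:k]

scores = {"test_login": 0.9, "test_checkout": 0.7,
          "test_search": 0.2, "test_footer": 0.05}
print(subset_tests(scores, 0.5))  # → ['test_login', 'test_checkout']
```

Running half the suite this way catches the two tests most likely to fail first; the rest can run later or less often.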
Launchable is a good fit for every phase of the testing pyramid, as long as the results are reported in a binary pass/fail format. From unit tests to service tests to UI tests, our platform can handle them all.
Shorten test feedback delay by running the tests that are most likely to fail first.
Subset long-running suites and shift left. Run a subset on every PR or at the top of each hour.
Move a subset of your long-running UI suite earlier in your development cycle.
Using canary deployments? Shift tests that are unlikely to fail to the post-deploy stage.
Stop waiting months on DevOps transformation efforts. Impact in days!
Predictive Test Selection builds ML models that find the correlation between code change metadata and test failures. This rapidly evolving science is used by companies like Facebook to deliver with high confidence without sacrificing quality.
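To make the correlation idea concrete, here is a deliberately simple baseline, not Launchable's actual model: learn from historical CI runs how often each test fails when a given file changes, then score tests for a new change by those historical failure rates. All data and names are hypothetical, and real models use far richer code and test metadata features.

```python
from collections import defaultdict

# Frequency-based baseline for predictive test selection (illustrative only).

def train(history):
    """history: list of (changed_files, failed_tests) from past CI runs."""
    fail_counts = defaultdict(lambda: defaultdict(int))  # file -> test -> fail count
    change_counts = defaultdict(int)                     # file -> times changed
    for changed_files, failed_tests in history:
        for f in changed_files:
            change_counts[f] += 1
            for t in failed_tests:
                fail_counts[f][t] += 1
    return fail_counts, change_counts

def rank_tests(model, changed_files):
    """Score each test by its summed historical failure rate for this change."""
    fail_counts, change_counts = model
    scores = defaultdict(float)
    for f in changed_files:
        for t, n in fail_counts[f].items():
            scores[t] += n / change_counts[f]
    return sorted(scores, key=scores.get, reverse=True)

history = [
    ({"auth.py"}, {"test_login"}),
    ({"auth.py", "cart.py"}, {"test_login", "test_checkout"}),
    ({"cart.py"}, set()),
]
model = train(history)
print(rank_tests(model, {"auth.py"}))  # → ['test_login', 'test_checkout']
```

Even this crude baseline orders tests usefully: `test_login` has failed on every `auth.py` change, so it runs first. A production model replaces the raw counts with learned features, but the ranking-by-failure-likelihood shape is the same.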
Launchable builds a model from your own data that helps you speed up your testing.
Test feedback wait times drop dramatically for developers, driving cycle times down. You don't have to wait months for DevOps transformations to see results.
Most models train in about 3-5 weeks (and keep training continually), but most teams can start using Launchable in production from day one.
Sit back. Let us learn and recommend the right tests. See tests fly by faster.
Not quite. That said, we want you to run tests smartly depending on where you are in the SDLC. With Launchable, you can test more frequently because you can home in on the right tests. Read more in our comprehensive FAQ.
You can start using recommendations from day one, or you can wait for the model to be fully trained while you focus on other work. Most models are fully trained (and keep training) in 3-6 weeks, depending on the frequency of test results and failures. Read more here.
We are a SaaS solution only.
Eliminate slow test cycles. Test what matters.