Test Impact Analysis is the practice of determining which tests are most important to run for a specific code change. The goal is to avoid running tests that provide little insight into the impact of your changes, saving valuable time and resources.
There are many approaches to analyzing code changes to assess the impact on tests. Launchable uses Predictive Test Selection. This approach to Test Impact Analysis determines which tests to run for a given code change using a machine learning model that has been trained on historical test results.
Talk to us to see how Launchable can help bring Test Impact Analysis to your team
One of the chief advantages of predictive test selection is that it provides a uniform solution for all types of tests – regardless of programming language or test framework. In contrast, the traditional approach to Test Impact Analysis analyzes the syntax of your source code and builds dependency trees to determine which tests should be run, making it very specific to a particular programming language or framework.
Launchable’s solution doesn’t use syntax analysis at all! Thus, it’s not specific to any language or framework. Instead, we look at the names of the files that have been modified and at your test reports to build a machine learning model that can accurately predict which tests are the most important ones to run.
This makes Launchable extremely portable. Not only is it easy to use with new and old languages alike, but it can also be adopted on projects that use multiple languages.
Unit tests, integration tests, end-to-end tests, etc. – Launchable applies to the whole test pyramid
Some Test Impact Analysis tools can only analyze and optimize tests that live alongside application code, like unit tests or integration tests. This approach can provide excellent predictions, but only in certain parts of your test pipeline.
Launchable’s approach doesn’t suffer from these limitations, allowing it to be applied to tests in any layer of the test pyramid (or even across layers).
One scenario that’s becoming increasingly common is to run a suite of E2E or regression tests against a test environment where several microservices written in different languages have been deployed. Launchable makes it easy to perform Test Impact Analysis in this scenario: you just tell Launchable about the files changed in each service using our language-independent CLI, send the test reports at the end of the run, and we do the rest!
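As a sketch, a CI job for one of those services might look like the following. The command names follow Launchable’s CLI, but the specific flags, the subset target, and the `run-my-tests` runner are illustrative assumptions – consult the Launchable documentation for the exact invocation for your setup.

```shell
# Illustrative CI snippet (assumed flags; adapt to your pipeline).
# The same steps work regardless of the service's language.

# 1. Tell Launchable which files changed in this build of the service.
launchable record build --name "$BUILD_ID" --source src=.

# 2. Ask Launchable for the most important tests for this change
#    (e.g. a subset expected to take 20% of the full run time).
launchable subset --build "$BUILD_ID" --target 20% file < all-tests.txt > subset.txt

# 3. Run only the selected tests with your usual test runner, then
#    send the resulting reports back so the model keeps learning.
run-my-tests $(cat subset.txt)          # hypothetical test runner
launchable record tests --build "$BUILD_ID" file ./reports/*.xml
```

Because each service runs the same three steps independently, the workflow scales naturally across a polyglot microservice environment.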
Launchable’s approach makes it well suited for use across your organization, with many different languages and all types of tests. Instead of adopting lots of different tools – or avoiding certain approaches due to lack of tooling – Launchable provides a uniform and flexible solution for optimizing automated test run time with Test Impact Analysis.
Interested in trying Launchable out on your own test suite? Sign up to get access to Launchable!