Spark Joy by Running Fewer Tests

Layer a Machine Learning-Based Approach That Embraces Language and Software Test Diversity

Key Takeaways

  • Predictive test selection is a way of using machine learning to choose the highest value tests to run for a specific change.

  • Predictive Test Selection layers on top of your existing testing tools, so engineers get faster feedback now instead of waiting on larger DevOps transformation efforts.

  • Launchable’s ML Predictive Test Selection identifies and runs tests with the highest probability of failing, based on code and test metadata, to speed up the dev feedback loop.

Tests are often the bottleneck that slows the momentum of your DevOps pipeline. Developers and team leaders dream of faster testing cycles. Many teams turn to static code analysis and dependency analysis tools to help reduce their test cycle times.

These test selection strategies are helpful, but they don’t fully solve the velocity problem that testing so often creates. That’s why at Launchable we’ve been exploring ways to further improve test selection strategy for quicker, more effective testing cycles.

Static Code Analysis and Dependency Analysis Tools 

Dev teams frequently incorporate static code analysis and dependency analysis tools into their test selection strategy. While these resources can play their respective roles in the testing cycle, their potential utility is limited.

These tools identify tests to run based on code changes by creating a dependency graph from code to tests. Change this function? Run the associated test. Change this package? Run all the tests in packages that depend on it – and so on. However, because of the limitations of this selection mechanism, relying on dependency analysis alone is unlikely to speed up your testing cycle.
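To make the mechanism concrete, here is a minimal, hypothetical sketch (in Python, not tied to any particular tool) of how a dependency-graph approach picks tests: map each module to its dependents, then walk the graph from the changed files and collect every test you reach. The module and test names are invented purely for illustration.

```python
# Illustrative sketch (not any specific tool's implementation): selecting tests
# from a hand-built dependency graph that maps source modules to the modules
# and tests that depend on them.
from collections import deque

# Hypothetical mapping: each module lists its direct dependents.
DEPENDENTS = {
    "payments/core.py": ["payments/api.py", "tests/test_payments_core.py"],
    "payments/api.py": ["tests/test_payments_api.py", "tests/test_checkout_e2e.py"],
    "utils/currency.py": ["payments/core.py", "tests/test_currency.py"],
}

def select_tests(changed_files):
    """Walk the dependency graph and return every test reachable from a change."""
    selected, queue, seen = set(), deque(changed_files), set(changed_files)
    while queue:
        node = queue.popleft()
        for dependent in DEPENDENTS.get(node, []):
            if dependent in seen:
                continue
            seen.add(dependent)
            if dependent.startswith("tests/"):
                selected.add(dependent)   # it's a test: run it
            else:
                queue.append(dependent)   # it's code: keep walking
    return sorted(selected)

# A change to one widely shared module pulls in every downstream suite.
print(select_tests(["utils/currency.py"]))
# ['tests/test_checkout_e2e.py', 'tests/test_currency.py',
#  'tests/test_payments_api.py', 'tests/test_payments_core.py']
```

Notice how a change to a single shared module drags in every downstream suite; that is exactly the test bloat problem described next.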

First, dependency analysis tools frequently cause test bloat, especially for teams working on libraries or components that are used in many places, where a single change can trigger many dependent test suites unnecessarily. Second, they don’t take historical test results into account, so less useful tests (like tests that never fail) still get run.

Additionally, dependency and static analysis tools are most useful at the lower levels of the test pyramid, such as unit and integration tests. Once you’re running end-to-end or UI tests (or anything further up the pyramid), it becomes much harder to programmatically determine which tests relate to a code change using this method alone. Dependency and static analysis tools lose their potency the higher you climb the testing pyramid.

Finally, these kinds of tools tend to work only with a single programming language or tech stack. This makes them useful for individual teams but harder to deploy across an enterprise organization using a variety of languages and tech stacks. As such, some teams might not get the benefit at all.

Teams relying on this test selection strategy alone often end up running excessive tests and leaving developers with glacial feedback wait times. These obstacles are just a few of the issues engineering leaders and development managers run into as they try to make these tools more effective and, ultimately, build a CI/CD pipeline that delivers quality code faster.

There’s a better, faster road to improve your team’s test selection strategy and overall testing cycle speed: Predictive Test Selection.

What is Predictive Test Selection?

Predictive test selection is a way of using machine learning to choose the highest value tests to run for a specific change. 

Predictive test selection is a branch of Test Impact Analysis, or the practice of automating the selection of which tests to run for a given code change based on their expected value.

In short, predictive test selection tries to find the “needle in the haystack” for every change to minimize test execution time without sacrificing quality. 
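As a rough illustration of that idea (not Launchable’s actual algorithm), imagine a model has already assigned each test a probability of failing for the current change; a simple selector then runs the riskiest tests first until a time budget is spent. The probabilities, runtimes, and budget below are made up.

```python
# Illustrative sketch of the core idea: given a model's estimated failure
# probability for each test (however it was trained), pick the subset that
# fits a time budget, highest-risk tests first. All numbers are invented.
candidates = [
    # (test name, estimated probability of failing for this change, runtime in seconds)
    ("tests/test_checkout_e2e.py", 0.42, 310),
    ("tests/test_payments_api.py", 0.18, 45),
    ("tests/test_payments_core.py", 0.07, 12),
    ("tests/test_currency.py", 0.01, 3),
]

def subset_by_budget(tests, budget_seconds):
    """Take the riskiest tests first and stop when the time budget is spent."""
    chosen, spent = [], 0
    for name, p_fail, runtime in sorted(tests, key=lambda t: t[1], reverse=True):
        if spent + runtime > budget_seconds:
            continue
        chosen.append(name)
        spent += runtime
    return chosen, spent

subset, spent = subset_by_budget(candidates, budget_seconds=120)
print(subset, spent)
# ['tests/test_payments_api.py', 'tests/test_payments_core.py',
#  'tests/test_currency.py'] 60
```

The selection step itself is simple; the value comes from how well the failure probabilities are estimated for each change, which is where the machine learning lives.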

For DevOps leaders and teams ready to accelerate their testing cycle, Launchable’s language-agnostic machine learning platform featuring Predictive Test Selection is the perfect solution.

Faster Test Cycles with Predictive Test Selection

Launchable’s ML Predictive Test Selection identifies and runs the tests with the highest probability of failing, based on code and test metadata, to speed up the dev feedback loop. For dev managers and VPs of Engineering, this means faster test cycles and a better-functioning, higher-morale DevOps team.

So, how exactly does Launchable speed up test cycle times?

To begin, Predictive Test Selection can be used in tandem with existing test selection strategies, and it works across cloud native, embedded, mobile, and traditional applications.

Next, perhaps the biggest benefit of Predictive Test Selection is that it is “language agnostic,” meaning the platform works with any programming language or framework. The machine learning model is independent of any specific programming language because it builds its models from Git metadata and test results, information common to any tech stack. This makes Launchable optimal for polyglot organizations and projects.
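As a hedged illustration of what “language agnostic” can look like in practice (the field names below are hypothetical, not Launchable’s actual feature set), every signal here is derived purely from Git metadata and historical test results, with no parsing of source code.

```python
# Illustrative, hypothetical feature set built only from Git metadata and
# past test results -- the kind of language-independent inputs a test
# selection model can learn from. Field names are invented for this sketch.
from dataclasses import dataclass

@dataclass
class TestFeatures:
    test_name: str
    recent_failure_rate: float     # failures / runs over a trailing window
    runs_since_last_failure: int
    changed_paths_overlap: float   # how often this test failed when similar paths changed
    files_changed_in_commit: int
    commit_touches_new_files: bool

def to_feature_vector(f: TestFeatures):
    """Flatten the metadata into the numeric vector a model would consume."""
    return [
        f.recent_failure_rate,
        float(f.runs_since_last_failure),
        f.changed_paths_overlap,
        float(f.files_changed_in_commit),
        1.0 if f.commit_touches_new_files else 0.0,
    ]

example = TestFeatures(
    test_name="tests/test_checkout_e2e.py",
    recent_failure_rate=0.12,
    runs_since_last_failure=4,
    changed_paths_overlap=0.6,
    files_changed_in_commit=7,
    commit_touches_new_files=False,
)
print(to_feature_vector(example))  # [0.12, 4.0, 0.6, 7.0, 0.0]
```

Because none of these signals require the source code itself, the same approach applies equally to a Java monolith, a Go microservice, or an embedded C codebase.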

Additionally, because Predictive Test Selection doesn’t require storing or analyzing actual code, it’s easier to deploy in enterprises with strict security controls.

It’s also important to note that Predictive Test Selection can be deployed across different parts of the test pyramid. Because it supports all types of tests and use cases, Launchable is a good fit for every test phase.

Finally, Launchable’s Predictive Test Selection acts as an effective layer on top of existing testing tools. With Predictive Test Selection, your engineers get faster feedback now instead of waiting on broader DevOps transformation efforts.

In short: Launchable’s language-agnostic machine learning platform delivers faster testing, happier teams, and impact in days, not months.

Seeking Your Expert Feedback on Our AI-Driven Solution

Quality a focus? Working with nightly, integration, or UI tests?
Our AI can help.