Launchable collaborates with a QA and automation team at the BMW Group. This team helps ensure testing is as efficient as possible and handles numerous other responsibilities around the automation of build and test pipelines. Over the past few years, this team has successfully used Predictive Test Selection (PTS) on numerous test suites.

The first test suite this team targeted was a regression test suite. The initial goal was to improve testing feedback time and reduce hardware constraints. The test suite runs against a specific piece of hardware, a head unit, referred to as the HU in this article.

The Challenge

This team had dealt with slow builds and tests for some time, for several reasons. One of the main issues was the long queue times to run tests against the physical HU devices: every build and test lands on real ECU hardware. This is especially problematic during early development, when the number of available hardware test stations is limited. Developers quickly became frustrated because they had to wait for the regression test suite to finish before finding out whether their recent code changes broke the build.

Part of this team's responsibility is to help estimate resource costs each year. With queue times ballooning and the test suite growing, they had already parallelized as much as possible and resource costs were getting out of control. With management looking for solutions, this is where Launchable came in.

The solution: run a subset of tests to reduce hardware constraints and improve developer feedback time

By using Launchable’s Predictive Test Selection, they can smartly identify the tests that are most likely to fail and prioritize them first. In addition, Launchable can split the subsets evenly across all resources (the physical HU devices) to ensure the team gets the most out of their testing sessions with the least overhead.

With Launchable

Parallelize → Optimize the tests that land in each parallel resource → Optimized resource utilization → Faster delivery

(See Replacing static parallel suites with a dynamic parallel subset)

Launchable identifies likely failures across their regression test suite using an ML model trained on the team's historical test data. This model dictates how subsetting works. As a result, they can be quite aggressive with subsetting and still catch many failures in a short amount of time. The graph of their model is shown below:

Note: This model did exceedingly well based on the total number of failures in this test suite per test run

The graph shows that the ML model can identify a failing run with 90% confidence after running only a few minutes of tests. This allows the team to run small subsets, occupy fewer hardware resources, and reduce queue times for developers waiting to see whether their commits pass the regression test suite.

The risk-based approach helps them choose a confidence target for test runs, which reduces their hardware usage by double-digit percentages with minimal risk of missing a failing run. In other words, they can increase their hardware throughput without sacrificing quality.
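To make the idea concrete, here is an illustrative sketch of confidence-driven subsetting. This is not Launchable's actual algorithm (their model and its inputs are proprietary); the function name, the tuple layout, and the sample failure probabilities are all hypothetical, chosen only to show how a risk-ranked prefix of a test suite can reach a confidence target in a fraction of the total runtime.

```python
# Illustrative sketch (NOT Launchable's implementation): rank tests by a
# predicted failure probability, then take the shortest prefix whose share
# of the total predicted risk reaches the confidence target (e.g. 90%).

def select_subset(tests, confidence_target=0.90):
    """tests: list of (name, duration_minutes, fail_probability) tuples.

    Returns the names of the highest-risk tests, in risk order, stopping
    as soon as the cumulative risk covered meets the confidence target.
    """
    ranked = sorted(tests, key=lambda t: t[2], reverse=True)
    total_risk = sum(t[2] for t in ranked)
    if total_risk == 0:
        return []  # no signal: nothing to prioritize
    subset, covered = [], 0.0
    for name, _duration, prob in ranked:
        subset.append(name)
        covered += prob
        if covered / total_risk >= confidence_target:
            break
    return subset


# Hypothetical data: four tests, where two carry most of the predicted risk.
suite = [("test_a", 5, 0.50), ("test_b", 10, 0.30),
         ("test_c", 20, 0.15), ("test_d", 30, 0.05)]
print(select_subset(suite))  # the two 30-minute tests are skipped
```

With this toy data, 90% of the predicted risk is covered by the first three tests, so the longest test never runs; the real model's value lies in producing failure probabilities good enough to make such aggressive cuts safe.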

The benefits with Launchable

Optimized resource usage

The first benefit is the reduction of hardware resources. By saving testing time, they allocate fewer resources for each run. They were able to reduce their overall hardware capacity by a double-digit percentage.

Ensure parallel testing sessions are as efficient as possible

With Launchable’s parallel testing solution, test sessions can be split evenly across all resources based on the data collected about the tests. In addition, parallel testing can be used via Launchable for both subset and non-subset runs.
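As a rough illustration of the even-split idea, the sketch below uses a greedy longest-processing-time heuristic to balance tests across HU devices by recorded duration. This is an assumption about the general technique, not Launchable's actual splitting logic; the function name and sample durations are hypothetical.

```python
# Illustrative sketch (NOT Launchable's implementation): balance tests
# across N devices by always assigning the next-longest test to the
# currently least-loaded device (longest-processing-time heuristic).
import heapq

def split_evenly(tests, num_devices):
    """tests: list of (name, duration) pairs.

    Returns num_devices lists of test names with roughly equal total
    duration, using recorded durations as the balancing signal.
    """
    # Each heap entry: (current load, device index, assigned test names).
    # The unique device index breaks ties so lists are never compared.
    bins = [(0.0, i, []) for i in range(num_devices)]
    heapq.heapify(bins)
    for name, duration in sorted(tests, key=lambda t: t[1], reverse=True):
        load, i, names = heapq.heappop(bins)  # least-loaded device
        names.append(name)
        heapq.heappush(bins, (load + duration, i, names))
    return [names for _load, _i, names in sorted(bins, key=lambda b: b[1])]


# Hypothetical durations (minutes) split across two HU devices.
suite = [("t1", 8), ("t2", 7), ("t3", 6), ("t4", 5), ("t5", 4)]
print(split_evenly(suite, 2))  # two bins with near-equal total time
```

Balancing on measured durations rather than test counts is what keeps all devices busy for roughly the same wall-clock time, which is where the "least overhead" claim comes from.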

Where to next

This QA and automation team has achieved optimized usage of their hardware resources and dramatically eased the hardware constraints problem that they set out to solve. 

In conclusion, they have brought the power of AI and ML to their team with relatively little effort. Other teams are watching this lighthouse team with interest to see how they can do something similar.

We are looking forward to more projects with the BMW Group.