We’re seeing a lot of innovation in the testing space, particularly around applying AI and machine learning to testing. Machine learning in general is a hot topic, albeit a controversial one. Some view it as a panacea; others are skeptical about the types of problems it can solve well. That said, there are at least two key problems in the testing space that machine learning can definitely solve today. The first is not enough tests, and the second is too many tests.
Not enough tests: using test generation tools and AI bots
Many companies struggle to improve test coverage for their existing codebases. Test generation tools can be used to close the gap.
For UI testing, some very practical tools are being developed. To create a UI test today, a developer must write a lot of code, or a tester has to click through the UI manually. This is an incredibly painful and slow process. To relieve this pain, tools like Mabl and MesmerHQ use AI to create and run UI tests on various platforms.
Mabl has a trainer that lets a developer record their actions on a web app to create script-less tests. As the UI evolves, Mabl “auto-heals” the test to be in lockstep with the UI.
MesmerHQ has AI bots that act like humans - tapping buttons, swiping images, typing text, and navigating screens to detect issues. Once a bot finds an issue, it creates a ticket in Jira for a developer to act on - pretty sweet!
Too many tests: using AI for test impact analysis
The second problem – too many tests – is what we are focused on at Launchable.
As we talk to software teams, we frequently encounter companies that are struggling with simply having too many tests. They have thousands of tests that run all the time. Testing a small change might take hours or days. Test feedback comes like a tsunami: not often, but overwhelming when it does! “What tests really matter?” is the key question they face.
The other side of the same coin is teams that have optimized their way down to smaller test suites that run in, say, 30 minutes, multiple times a day. For these teams, even 30-minute runs are too long, because the high frequency adds up and slows them down.
Unicorn companies like Facebook have leveraged their extensive engineering resources and machine learning to find the right tests to run. Facebook calls this approach predictive test selection. Basically, you train a machine learning model on your historical code changes and test outcomes, then use the model to predict which tests are most likely to fail for an incoming change. You can then construct a dynamic subset of your larger test suite for each source code change. (Predictive test selection is a branch of test impact analysis.)
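To make the idea concrete: a production system would use a real learned model over rich features, but the core loop can be illustrated with a toy frequency-based sketch (all file and test names below are hypothetical, and this is far simpler than what Facebook or any real tool actually ships):

```python
from collections import defaultdict

# Hypothetical training data: each record pairs the files changed in a
# commit with the tests that failed on that commit.
HISTORY = [
    ({"auth.py", "session.py"}, {"test_login"}),
    ({"auth.py"}, {"test_login", "test_token"}),
    ({"cart.py"}, {"test_checkout"}),
    ({"cart.py", "pricing.py"}, {"test_checkout", "test_discount"}),
]

def train(history):
    """Count how often each (file, test) pair co-occurs with a failure."""
    fail_counts = defaultdict(int)    # (file, test) -> co-failure count
    change_counts = defaultdict(int)  # file -> times it was changed
    for changed_files, failed_tests in history:
        for f in changed_files:
            change_counts[f] += 1
            for t in failed_tests:
                fail_counts[(f, t)] += 1
    return fail_counts, change_counts

def rank_tests(model, changed_files, all_tests):
    """Score each test by its historical failure rate given this change."""
    fail_counts, change_counts = model
    scores = {}
    for t in all_tests:
        score = 0.0
        for f in changed_files:
            if change_counts[f]:
                score += fail_counts[(f, t)] / change_counts[f]
        scores[t] = score
    # Highest-risk tests first; run the top slice as the dynamic subset.
    return sorted(all_tests, key=lambda t: -scores[t])

model = train(HISTORY)
tests = ["test_login", "test_token", "test_checkout", "test_discount"]
# A change to auth.py ranks the login/token tests ahead of cart tests.
print(rank_tests(model, {"auth.py"}, tests))
```

The point isn’t the scoring function, which here is just a co-failure frequency; it’s the shape of the workflow: learn from past changes and failures, then prioritize tests for each new change instead of running everything.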
If this sounds complicated, we agree! Not every company has the engineering resources of a unicorn to optimize its test runs. At Launchable, we’re working to make predictive test selection accessible to any team: you plug your tests into our web service, and we use machine learning to dynamically subset your test suite so you run just the right tests at just the right time.
Launchable identifies the right tests to run for incoming changes, allowing you to drastically reduce testing time with no loss in confidence. Shift tests left or right to get faster feedback. Cherry-pick tests using Launchable’s smart algorithm and accelerate your workflow.
We expect many engineering teams to do more with AI and automated testing in the coming year. For companies struggling with not enough tests, test generation tools will help close the gap between manual and automated testing. For those with too many tests, predictive test selection will be used to create dynamic subsets that run on more frequent intervals. Both innovations will help teams move faster and produce higher quality work.