I recently wrote a new white paper, "Delivering code faster with machine learning and test automation," for Launchable that is now available for download. I thought I'd take a minute to explain why the concepts it presents are critical for teams struggling with long test cycles.
We all want to deliver high quality software to our customers quickly. This notion is fundamental to DevOps. The Accelerate State of DevOps 2019 report specifically calls out lead time as one of the key metrics for DevOps performance.
Lead time measures the time from committing code to when that code is in production. Automated testing and continuous integration are critical components for achieving short lead time. Unfortunately, as projects grow in scope, test execution times can grow along with them, causing significant delays. Unit tests, integration tests, system tests, end-to-end (E2E) tests… these are all important, but as their execution time gets longer and longer it becomes impossible to run them as frequently as you'd like.
In many cases, this leads to quick tests being run very frequently and longer-running tests less so. For example, unit tests run on every push, but E2E tests run every few hours. But what if the tests that take a long time actually provide the most useful feedback?
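To make that split concrete, here is a minimal sketch of what a tiered cadence might look like in a CI configuration. This is a hypothetical GitHub Actions example, not something from the white paper, and the `make` targets are placeholders for whatever commands your project actually uses:

```yaml
# Hypothetical tiered CI setup: fast unit tests on every push,
# slower E2E tests only on a schedule.
name: tiered-tests
on:
  push:                        # unit tests trigger on every push
  schedule:
    - cron: "0 */4 * * *"      # E2E suite triggers every four hours
jobs:
  unit:
    if: github.event_name == 'push'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make unit-tests   # placeholder for your unit test command
  e2e:
    if: github.event_name == 'schedule'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make e2e-tests    # placeholder for your E2E test command
```

The trade-off baked into a setup like this is exactly the problem: the E2E feedback arrives hours after the commit that broke it.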
So now we have tension between lead time and quality. How do we make sure we run the right tests at the right time? Developers need useful feedback, not just any feedback, as early as possible in order to reduce lead time.
In the white paper, I show exactly what we're working on at Launchable to eliminate the tension between lead time and quality. You'll learn how we are working to make machine learning technology for testing (previously accessible only to unicorn software companies) available to any team!