DevOps Professionals Report Losing Time to Testing Bottlenecks

TechStrong Reports Harmful Effects of Over-Testing and the Top Solutions Software Development Teams Are Adopting

Key Takeaways

  • Organizations are facing the harmful effects of over-testing their software and are looking for ways to improve.

  • Too many tests can trigger a chain reaction of bad effects for companies: unhappy developers, high turnover, and higher costs as new developers are hired and trained to replace those who leave.

  • While helpful, parallelizing tests and moving tests to the cloud only scale to a point.

  • Launchable was created to solve the problem of over-testing, with AI and ML baked in from the beginning.

As software development teams, we’ve been told that it’s important to conduct numerous tests throughout our end-to-end processes. While testing is essential and problems arise from too few tests, overloading the SDLC with too many tests also causes issues including poor productivity, developer frustration (and turnover), and high costs. 

A recent TechStrong report, based on a survey of DevOps professionals interested in digital transformation, found organizations facing the harmful effects of over-testing their software and looking for ways to improve.

We’ve pulled the most critical findings from the survey and what they indicate about testing trends.

Testing is a critical bottleneck: 75% of respondents said that over 25% of productivity is lost to testing.

This number is significant, because that loss of productivity directly affects the developer experience.

While 25% may seem like a low threshold of loss, the way it shows up in developers’ day-to-day work is frustrating and time-consuming. Developers may even have to work after hours on a consistent basis because the sheer volume of tests delays results.

During a recent customer interview, we spoke with a developer who was frustrated by her organization’s current processes. She described checking test results and fixing code around 9:00 p.m. each night to be ready for the next day of work, and how frustrating it was to fit those late hours around family life and other personal obligations outside of work.

Unfortunately, our customer’s story isn’t unique. The findings revealed that developers spend about 35 hours a week on bad code; in 2017, waiting on bad code caused a global GDP loss of $35 billion.

Because they degrade the developer experience so much, too many tests can lead to a chain reaction of bad effects for companies: unhappy developers, high turnover, and higher costs as new developers are hired and trained to replace those who leave. And in today’s market, it’s difficult to even find new hires in the first place.

Too many tests also generate too much data, creating a “tsunami of data” for organizations to parse through and attempt to use. Unfortunately, some of that data isn’t even applicable or usable for the organization at all.

Top Four Trending Methods for Improving Tests

TechStrong’s research found that respondents are starting to see the problem of waiting on tests and generally turn to four methods to remediate it: parallelizing tests, moving tests to the cloud, Dependency Analysis, and Predictive Test Selection.

While helpful, parallelizing tests and moving tests to the cloud only scale to a point. Both options still run the same volume of tests with no change in methodology; they compress the time frame for running all of those tests rather than actually reducing their number, so they don’t solve the “tsunami of data” problem.
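To make that concrete, here is a minimal sketch with hypothetical numbers: sharding a suite across parallel workers shortens wall-clock time, but the total compute and the number of results to triage stay exactly the same.

```python
# Minimal sketch with hypothetical numbers: sharding shortens wall-clock
# time, but total compute and the number of results stay the same.
TOTAL_TESTS = 2_000        # hypothetical suite size
SECONDS_PER_TEST = 3       # hypothetical average test duration
SHARDS = 10                # hypothetical number of parallel workers

total_compute_min = TOTAL_TESTS * SECONDS_PER_TEST / 60
wall_clock_min = total_compute_min / SHARDS

print(f"Total compute:     {total_compute_min:.0f} machine-minutes")  # 100
print(f"Wall-clock time:   {wall_clock_min:.0f} minutes")             # 10
print(f"Results to triage: {TOTAL_TESTS} (unchanged)")
```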

Dependency Analysis, on the other hand, looks deeply into the code that’s being tested. It helps weed out extraneous tests that create excess data or use up time and resources by inferring which specific tests should be run after each change is committed, rather than running the whole suite every time. Unfortunately, Dependency Analysis has a high cost of entry, as it requires significant data science personnel and research to perform correctly.
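As a rough illustration of the idea (not how any particular tool implements it), dependency-based selection can be thought of as a mapping from changed source files to the tests that exercise them. The file names and dependency data below are hypothetical; building and maintaining that mapping accurately is where the cost comes in.

```python
# Rough illustration of dependency-based test selection (hypothetical data):
# run only the tests whose tracked dependencies overlap the changed files.
DEPENDENCIES = {
    "test_billing.py": {"billing.py", "invoices.py"},
    "test_auth.py": {"auth.py", "sessions.py"},
    "test_reports.py": {"reports.py", "billing.py"},
}

def select_tests(changed_files: set[str]) -> list[str]:
    """Return the tests whose dependency set intersects the change set."""
    return sorted(
        test for test, deps in DEPENDENCIES.items()
        if deps & changed_files
    )

print(select_tests({"billing.py"}))  # ['test_billing.py', 'test_reports.py']
```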

Although only 8% of respondents currently use it, Predictive Test Selection strikes a balance: it serves the same function as Dependency Analysis (ruling out irrelevant tests) at a lower cost.
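Here is a toy sketch of the concept behind Predictive Test Selection, with a hand-written scoring rule standing in for a trained model; the test names, features, and numbers are hypothetical, and this is not any vendor’s actual implementation.

```python
# Toy sketch of Predictive Test Selection (hypothetical data): rank tests
# by an estimated failure risk for this change and run only the highest-risk
# subset that fits within a time budget.
from dataclasses import dataclass

@dataclass
class TestStats:
    name: str
    recent_failure_rate: float   # hypothetical feature from test history
    touches_changed_code: bool   # hypothetical feature from change metadata
    duration_sec: float

def failure_score(t: TestStats) -> float:
    # Stand-in scoring rule; a real system would use a trained ML model.
    return t.recent_failure_rate + (0.5 if t.touches_changed_code else 0.0)

def select_subset(tests: list[TestStats], budget_sec: float) -> list[str]:
    chosen, spent = [], 0.0
    for t in sorted(tests, key=failure_score, reverse=True):
        if spent + t.duration_sec <= budget_sec:
            chosen.append(t.name)
            spent += t.duration_sec
    return chosen

suite = [
    TestStats("test_checkout", 0.30, True, 40),
    TestStats("test_login", 0.02, False, 15),
    TestStats("test_search", 0.10, True, 25),
]
print(select_subset(suite, budget_sec=70))  # ['test_checkout', 'test_search']
```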

Related Article: Guide to Faster Software Testing | The Three Principles That Elite Software Teams Embrace 

AI/ML Trends in DevOps Pipelines

Although one-third of respondents are using artificial intelligence/machine learning (AI/ML) and two-thirds plan to implement it relatively soon, the survey didn’t specify in what capacity. It’s important to use these powerful tools in meaningful ways, rather than bolting them on as a quick “add-on” to satisfy investors or keep up with trends.

True AI/ML tools are especially important when a company is moving toward better automation in its DevOps pipelines.

“You're not automating developers and testers out of a job. Instead, you're making their lives easier and they're more efficient and effective. In fact, they have more fun because they're spending time actually impacting the customer experience versus just chasing down test results, and bugs, and things like that.” - Dan Kirsch, Principal Analyst and Managing Director at TechStrong Research

Companies often find themselves running either too few tests or, as we’ve discussed, too many. Launchable was created to solve the latter problem, with AI and ML baked in from the beginning.

Launchable’s ML identifies and runs tests with the highest probability of failing based on code and test metadata, which speeds up the developer feedback loop and helps to maximize productivity. It’s a turnkey solution, meaning that it requires far less effort to stand up than Dependency Analysis. 

In addition, Launchable is solving another source of developer frustration with test suite intelligence, including Flaky Test Insights. By identifying which tests are flaky and providing an easily accessible report on that data, we help developers pick the smartest and most efficient tests possible.
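As a rough sketch of one signal flakiness detection can look for (hypothetical data, not Launchable’s actual implementation): a test that both passes and fails on the same commit, with no code change in between, is likely flaky.

```python
# Rough sketch of flaky-test detection (hypothetical CI history): a test
# that both passes and fails on the same commit is a strong flakiness signal.
from collections import defaultdict

# (test name, commit, passed?) pulled from hypothetical CI history
RUNS = [
    ("test_upload", "a1b2c3", True),
    ("test_upload", "a1b2c3", False),
    ("test_upload", "a1b2c3", True),
    ("test_login",  "a1b2c3", True),
    ("test_login",  "d4e5f6", False),  # failed, but the code had changed
]

def flaky_tests(runs) -> set[str]:
    outcomes = defaultdict(set)
    for name, commit, passed in runs:
        outcomes[(name, commit)].add(passed)
    # Flaky if the same test saw both outcomes on the same commit.
    return {name for (name, _), seen in outcomes.items() if seen == {True, False}}

print(flaky_tests(RUNS))  # {'test_upload'}
```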

With our insights into tests, we aim to provide software engineers with the data that they need to communicate timetables to the rest of their team, ship faster, and launch fearlessly with more confidence in their tests.

Seeking Your Expert Feedback on Our AI-Driven Solution

Quality a focus? Working with nightly, integration or UI tests?
Our AI can help.