Podcast interview: Eighty percent of your tests are pointless

Predicting failures in tests for proactive software development

Key Takeaways

  • AI has advanced far enough to call existing development testing paradigms into question

  • The dev-test workflow is manual and cumbersome, leaving plenty of room for improvement

  • Change your testing mindset from "reactive" to "proactive." Predict failures, and don't just wait for them to happen.

(A podcast recording with Harpreet, co-CEO at Launchable.)

The podcast host focused on the problem that 80% of the tests in a given test session are pointless, yet we still run all of them, every time.

He called the benefit of Launchable's ability to find the tests that are likely to fail, and run them first, "proactive software development." Said differently, his framing was: "Can we get ahead of problems created in your code base? Can we stop being reactive?"

Absolutely yes!
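The idea above can be sketched as a simple prioritization step: score each test by its predicted probability of failing, then run the highest-risk tests first. This is a hypothetical illustration only; the function names and hard-coded scores are assumptions for the sketch, not Launchable's actual API (in practice the scores would come from a trained model):

```python
# Hypothetical sketch of failure-prediction-based test ordering.
# Scores are illustrative stand-ins for a model's predicted failure probabilities.

def prioritize(tests, failure_probability):
    """Sort tests by predicted failure probability, highest risk first."""
    return sorted(tests, key=lambda t: failure_probability.get(t, 0.0), reverse=True)

# Assumed scores for illustration only
scores = {
    "test_login": 0.72,
    "test_checkout": 0.05,
    "test_search": 0.31,
}

ordered = prioritize(["test_checkout", "test_login", "test_search"], scores)
print(ordered)
```

Running the riskiest tests first means a failing change is caught in the first few minutes of a session rather than after the full suite has run.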

So the question is: Is your team doing proactive software development?

If your team is running "all the tests all the time," then you aren't doing proactive software development. Now is the right time to "Think Different" and ask yourself, "Why are we doing that? Can we be smarter?"

AI has advanced far enough that we can approach testing in entirely new ways.

Our immediate next step takes this even further: it streamlines the manual, unoptimized dev-test workflow and augments root-cause analysis with generative AI.

I invite you to listen to the podcast.

-- Harpreet

Seeking Your Expert Feedback on Our AI-Driven Solution

Is quality a focus for you? Are you working with nightly, integration, or UI tests?
Our AI can help.