Launchable EOY 2020: Unlocking Continuous Quality & Agility for teams

An end-of-year look back at Launchable's learnings, and the path ahead to balance speed and quality.

Key Takeaways

  • Launchable was launched a few months before the storm came in, and our journey this year was about de-risking a number of assumptions.

  • We found that teams struggle to deliver software with speed and quality, and that developers are frustrated.

  • We also found that machine learning (and data) are the tools to solve these common pains.

As I sat down to write this blog, the question that came to mind was: “how does one go about recapping a year like 2020?”

The best metaphor I read to describe this year is: We humans aren’t in the same boat, we all are in different boats weathering the same storm. Some people are in dinghies, others are in yachts, and all are trying to make their way through the storm of the century. To that point, I recognize my privilege of being in the technology industry while living in Silicon Valley and will not belabor my hardships.

Instead, let me talk about this small boat called Launchable: how we have fared and why it matters to you. If you are a software developer, quality professional, or engineering leader building software - keep reading.

Launchable was launched a few months before the storm came in, and our journey this year was about de-risking a number of assumptions (as Alastair on our product management team likes to say). We feel we de-risked those assumptions.

Launchable at EOY 2020

We have found that the challenges that we set out to solve are prevalent across software teams of all sizes within all industry verticals. We found that:

  • Teams struggle to deliver software with agility and quality: Many medium to large teams struggle to cope with long delivery cycles exacerbated by long testing cycles. “Long test cycles” itself is a subjective term: for small teams, a 30-minute cycle might be long, while bigger teams think in terms of hours or days.

  • Developers and testers are frustrated: Developers and testers spend 20-50% of their day fighting through slow and/or bad feedback coming in from their tests or builds. They are just frustrated:

“We are not frustrated but are angry and desperate for better solutions. Having quick feedback would save a lot of nerves” - BG Team Lead, Car Company

  • There’s no easy solution for engineering leaders: There are no silver bullet solutions in the market that can speed up testing cycles. Parallelization of tests and tools like Bazel do help, but implementing these can be a long journey. Case in point, we talked to a team that has a 2-3 year roadmap to adopt Bazel to speed up builds.

We have validated that our unique approach works.

  • Machine-learning (and data) helps find the “needle in the haystack”: We have worked with a number of companies large and small and shown that our approach can have a dramatic impact: a 40%-80% reduction in testing cycle time!

  • Programming language and tech stack agnostic solution: Our initial hypothesis was that our ML-based approach would cut across all languages and tech stacks. We have since worked with a diverse set of languages and tech stacks, which has validated our hypothesis.

  • Easy-to-use turbo mode button: The founding team empathized with teams on multi-year DevOps transformation journeys, where developers were forced to learn a new way of doing things while keeping an eye on their existing deliverables. We wanted a turbo mode button that sped up teams without disrupting them (“be like water”: shape around the team’s processes rather than forcing change). Today, Launchable can be brought in through our CLI and embedded in build scripts, so developers can quickly see an impact.
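To make the “embedded in build scripts” point concrete, here is a rough sketch of what that integration could look like in a CI script. The command names, flags, and the Gradle project layout below are illustrative assumptions, not a documented interface:

```shell
#!/bin/sh
# Illustrative sketch only: subcommand names and flags are assumptions
# for this example, not the documented Launchable CLI surface.

# Tell Launchable about the build under test (BUILD_ID comes from CI).
launchable record build --name "$BUILD_ID"

# Ask for a prioritized subset of the test suite; the 20% target and
# Gradle test path are hypothetical values for this sketch.
launchable subset --target 20% gradle src/test/java > subset.txt

# Run only the tests Launchable selected.
gradle test $(cat subset.txt)

# Report results back so the model keeps learning from each run.
launchable record tests gradle build/test-results
```

The point of the sketch is the shape of the workflow: record the build, request a subset, run it, report results - a few lines dropped into an existing script rather than a pipeline rewrite.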

Some surprises along the way: The world is more polyglot than we thought

Given our pedigree from JavaSoft at Sun Microsystems, our initial bet was that we would end up meeting a lot of Java/Maven shops that we could help. I was surprised to see how quickly we ran into polyglot environments. Almost every team uses several languages and tech stacks together.

This required us to onboard customers across these stacks by building an extensible approach that can help us easily onboard new languages and test runners. As I write this, we have pilot deployments on Java, C/C++, C#, Python, and Ruby and a number of test runners on these environments. We can easily onboard a new tech stack onto the platform.

What’s Next?

We continue on the march to get customers and have started rolling out our solution to a number of them. Today, this requires both small and large customers to talk to us as we provision their accounts and roll out the solution.

We aim to make onboarding a self-service experience in the first quarter of 2021. This way, developers can try out the solution seamlessly.

How can you help?

If you have a Java, C/C++, C#, Python, or Ruby project that suffers from long testing cycles and want to speed up feedback time for developers, we would love to talk to you.

If you don’t have a project in mind, point us to someone who might be struggling! Here is the text you can copy and paste to them:

Launchable helps reduce test cycle times. From 30 minutes to 5 minutes, or from hours to minutes. That’s the kind of picture you need to have in mind. Typically, Launchable reduces test times by 40-80% and cuts development cycle times by 40%. They work with languages such as Java, C, C++, C#, Python, and Ruby, and cut across all test types and test phases.

Is your team trying to reduce your development and testing cycle times? If so, reach out to Launchable.

We know we can help.

Finally, we at the Launchable team wish you and your families a Happy New Year.

-- Harpreet

co-CEO Launchable
