Smoke tests are a critical component of the software testing life cycle, letting teams run a small subset of a larger acceptance suite. Although they speed up testing velocity by trimming large test suites, they carry risks of their own.
Smoke tests help DevOps teams verify a system end to end without running the full test suite. A smoke test is like sticking a toothpick into a cake to see if it’s done before taking the whole dish out of the oven. Without that simple toothpick test for doneness, you might wind up cutting into a jiggly, under-baked cake.
While smoke tests are essential to the software testing life cycle, they are not without flaws. Bottlenecks form when teams run too many smoke tests, or the wrong ones.
Smoke testing is most helpful when new components of an app are integrated into an existing build and deployed to a staging environment. It helps developers find and fix bugs and flaws earlier, improving overall testing velocity, and it helps verify the stability of a build.
Mechanically, smoke tests are a subset of the original test suite chosen to verify that a new build’s core functionality works, which is why smoke testing is also referred to as “build verification testing” or “confidence testing.” Smoke tests identify build showstoppers by uncovering flaws in application functionality surfaced by new code.
By catching major defects first, smoke testing prevents teams from wasting time running more exhaustive tests against a broken build.
To run a typical smoke test suite, dev teams first deploy a build into a staging environment where a subset of test cases is run. Rather than covering the entire test suite, this subset includes only key functionality and is meant to reveal any errors or bugs in the build. Any failures are returned to the dev team for fixing; if there are no issues, the build moves on to functional testing.
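As a rough illustration of that workflow, the sketch below filters a test registry down to the tests tagged as smoke tests and reports any failures back to the team. All names and tags here are hypothetical, made up for illustration; real suites would typically use a framework’s own marker mechanism instead.

```python
# Hypothetical sketch of a smoke-test run: tag a handful of critical
# tests, run only those, and report failures back to the team.

def check_login():
    return True  # placeholder for a real assertion against the build

def check_checkout():
    return True

def check_report_export():
    return True

# The full suite, with a "smoke" tag on the key-functionality tests.
TEST_SUITE = [
    {"name": "test_login", "tags": {"smoke"}, "run": check_login},
    {"name": "test_checkout", "tags": {"smoke"}, "run": check_checkout},
    {"name": "test_report_export", "tags": set(), "run": check_report_export},
]

def run_smoke_tests(suite):
    """Run only the tests tagged 'smoke'; return the names of failures."""
    failures = []
    for test in suite:
        if "smoke" not in test["tags"]:
            continue  # the rest of the suite waits for the full run
        if not test["run"]():
            failures.append(test["name"])
    return failures

failures = run_smoke_tests(TEST_SUITE)
if failures:
    print("Smoke failures:", failures)  # hand back to the dev team
else:
    print("Smoke passed; proceed to functional testing")
```

The key design point is the early `continue`: the smoke run never touches the bulk of the suite, which is exactly where the time savings come from.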
When developers use tools like Selenium to scale their test suites, teams face the burden of manually sorting through large suites to decide which tests belong in the smoke set. This inevitably leads to two of the largest risks of smoke testing.
Typically, once a smoke test suite has been hand-selected, it’s left unchanged after its initial creation. This turns smoke testing into a “set it and forget it” exercise, where testers assume the initial suite remains correct and relevant as the project moves ahead.
Failing to revisit a smoke test suite means build failures can be missed or skipped over. This is especially problematic in the early stages of development, when uncaught mistakes can cause major snags and release delays later on. Automated smoke tests run the same checks to the same standard every time, but smoke testing remains susceptible to error whenever test selection is manual.
A byproduct of static test selection is developer downtime. As your test suite grows, the manual upkeep of smoke test selection increases the risk of missing critical bugs or errors, which then forces more exhaustive test runs down the line. Those misses drain valuable developer time and resources, not to mention developer productivity and morale.
So, what’s a better solution? Expand automated smoke testing with intelligent test selection.
Predictive test selection is the answer to these smoke testing troubles. By expanding smoke test automation to include intelligent, dynamic test selection, teams remove the burden of manual test selection and shift testing left, avoiding the more exhaustive test runs that follow a missed defect.
Launchable Predictive Test Selection can replace your standard static smoke test subset list with a dynamic list that ebbs and flows based on historical data.
Launchable’s ML platform intelligently selects the best smoke tests to run based on the characteristics of a code change. Tests are chosen using historical test results and data about previously tested changes, which are used to train the ML model.
By learning the relationship between code changes and the previous pass-fail results of prior tests, Predictive Test Selection offers dev teams the best, most accurate prediction of the most important (and correct) smoke tests to perform in a fraction of the time.
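To make the idea concrete, here is a deliberately simplified toy sketch of predictive test selection, not Launchable’s actual model: score each test by how often it has failed historically when a given file changed, then run only the highest-scoring tests for a new change. The history records and file names are invented for illustration.

```python
from collections import defaultdict

# Toy sketch of predictive test selection (NOT Launchable's real model):
# learn per-(file, test) failure counts from history, then rank tests
# for a new change by the failures tied to its changed files.

# Historical records: (changed_files, test_name, passed)
HISTORY = [
    ({"auth.py"}, "test_login", False),
    ({"auth.py"}, "test_login", False),
    ({"auth.py"}, "test_search", True),
    ({"cart.py"}, "test_checkout", False),
    ({"cart.py"}, "test_login", True),
    ({"search.py"}, "test_search", False),
]

def train(history):
    """Count how often each (changed file, test) pair co-occurred with a failure."""
    fail_counts = defaultdict(int)
    for files, test, passed in history:
        if not passed:
            for f in files:
                fail_counts[(f, test)] += 1
    return fail_counts

def select_tests(fail_counts, changed_files, k=2):
    """Return up to k tests, ranked by historical failures for these files."""
    scores = defaultdict(int)
    for (f, test), n in fail_counts.items():
        if f in changed_files:
            scores[test] += n
    return sorted(scores, key=lambda t: scores[t], reverse=True)[:k]

model = train(HISTORY)
print(select_tests(model, {"auth.py"}))   # test_login ranks first
```

The dynamic smoke list falls out naturally: as new results are appended to the history, retraining shifts which tests rank highest, so the subset adapts with the codebase instead of being frozen at creation time.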
With Predictive Test Selection, teams can:

- Identify bugs more easily and earlier
- Establish a dynamic process that streamlines traditional test selection
- Improve efficiency and reduce dev time lost to fixing missed bugs
- Reduce the risk of shipping unchecked errors
- Improve end-product quality
With this kind of data-driven automation, Launchable’s Predictive Test Selection helps developers slash their overall testing time while maintaining high confidence that significant errors will be caught during smoke testing.
Catch more code breakage earlier and advance your overall testing strategy, while slicing down your testing times with Launchable.