Manual testing is good for exploratory testing focused on uncovering programming defects, plus humans are better at spotting UI and UX issues.
Automated testing lets you test at scale, speeding up cycles, lowering costs, and supporting a vast variety of test types.
Compare the pros and cons of both, and learn how to assess your testing pyramid to start getting faster feedback.
The consumerization of software means that customers expect well-designed, high-quality software at all times, so every company is in a race for agility and quality. A survey released by DevOps Research and Assessment and Google Cloud shows that “elite” teams deliver software 200 times more frequently than their peers, who deploy only once a month. Testing is one of the key controls in DevOps efforts to get good-quality software into the hands of customers.
These tests can be done in two ways: manually or automatically. Today, we’ll look at the advantages and disadvantages of both approaches and see which testing method works best where.
Manual testing is simply a human using the software and documenting bugs as they test it. On smaller projects, developers and other software team members may perform manual tests themselves. On larger projects, there is often a dedicated QA Engineer whose role is to test the software. A QA Engineer will often document scenarios and edge cases to create a “test plan,” then manually run through the outlined steps to verify that each feature of the software works as intended. Manual testing continues to play a surprisingly large role in software testing today. Here are some of the pros and cons:
✅ Great for exploratory testing - Exploratory testing focuses on uncovering defects that are difficult to cover fully in automated tests, and exploratory testers can quickly uncover unique scenarios. In fact, any test that needs an element of randomness (think of a user poking around on a screen) often produces more conclusive results when performed manually, though AI bots can be used to automate some exploratory testing.
✅ Humans are better at spotting UI/UX issues - It is easier for a human to notice that a button is misaligned than for a machine to identify it. Tools like Percy.io make it even easier for developers and testers to do visual testing and reviews.
⛔️ Slow - Since humans do the testing, the turnaround time is much higher. While an automated test can execute in seconds or minutes, a human may require hours to fully verify that a feature is working.
⛔️ Error-prone - Since humans are doing the work, they not only have to perform the tests manually but also write out test cases and document the results. Each step of the process introduces another place where mistakes can be made.
⛔️ Costly - Hiring a testing engineer isn’t cheap, even more so if you need a large testing team for a big project. For example, service fees from some of the top software testing companies come in at around 15% to 25% of a project’s total cost, and some can take up over half of a project’s budget. This often leads companies to outsource their tests, which requires extra effort to keep teams in sync.
⛔️ Not comprehensive - Finally, it is hard to reliably and consistently test each feature manually. For example, manually exercising a rapidly changing API or service endpoint is an exercise in futility.
Some automated tests are programmed in the same language the software itself is written in. Others are programmed in scripting languages or even constructed in no-code GUI applications made specifically for testing. One of the chief advantages of automated tests is that they can be run at key points in the software delivery life cycle to provide faster feedback to developers. For example, tests are often run before code is merged, released, or deployed. In a 2019 DevOps survey by Gartner, 47% of respondents named automated testing as the #2 technology to help scale DevOps. Here are some of the pros and cons of automated testing:
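As a minimal illustration of the kind of check such a suite contains, here is a Python sketch in the style of a pytest test. `get_user` and the response shape are hypothetical stand-ins for a real HTTP call to the service under test:

```python
# A minimal automated API check, runnable by any test runner (e.g. pytest).
# `get_user` is a hypothetical stand-in for a real HTTP call such as
# requests.get(f"{base_url}/users/{user_id}").json().
def get_user(user_id):
    # In a real suite this would hit the service under test.
    return {"id": user_id, "name": "Ada", "active": True}

def test_get_user_shape():
    user = get_user(42)
    assert user["id"] == 42
    # The response must contain at least these fields.
    assert set(user) >= {"id", "name", "active"}

test_get_user_shape()
```

A CI pipeline would run checks like this automatically on every push, before the code is merged.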
✅ Faster cycles - Once tests are automated and part of your CI/CD processes, they provide continuous feedback to developers. Developers get immediate feedback on the changes they push rather than waiting for humans to test and come back with results. Tests can also be parallelized and run at scale to deliver feedback faster.
✅ Lower costs - Since humans aren’t required to run test scripts, the cost of running the tests drops substantially (as is true of any automation). That said, teams take on other costs, such as test infrastructure and the ongoing maintenance of the test suite itself.
✅ Great for a vast variety of tests - As technology continues to evolve, it is getting easier and easier to automate every part of the testing stack (see the diagram below). Better tooling and the desire to match elite teams are pushing teams to move up the testing stack and automate each piece of it. A good example is stress testing or performance testing an API. Tools like Redline13 help you generate tens of millions of requests in a short time frame to find the software’s “breaking point,” so developers can see whether their API will scale in real-world environments.
⛔️ Debugging takes time - Since tests run by the thousands, if even 10% of them fail, it becomes difficult for QA teams to decipher who or what is responsible for the failures. That said, it is a good problem to have: it is better to find issues and debug them than to ship buggy software to customers. DevOps techniques like shifting tests left can also make it easier to identify the change that broke the build sooner rather than later.
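To make the parallelization and load-testing ideas above concrete, here is a small Python sketch. `send_request` is a hypothetical stand-in for a real HTTP call; an actual load test would drive real traffic with a tool like Redline13:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def send_request(i):
    # Hypothetical stand-in for one HTTP request to the API under test.
    time.sleep(0.01)  # simulated service latency
    return 200        # simulated HTTP status code

start = time.perf_counter()
# 50 workers fire 500 requests concurrently instead of one at a time.
with ThreadPoolExecutor(max_workers=50) as pool:
    statuses = list(pool.map(send_request, range(500)))
elapsed = time.perf_counter() - start

error_rate = 1 - statuses.count(200) / len(statuses)
print(f"{len(statuses)} requests in {elapsed:.2f}s, "
      f"{len(statuses) / elapsed:.0f} req/s, error rate {error_rate:.0%}")
```

Ramping up the worker and request counts (and pointing `send_request` at a real endpoint) is how a load test probes for an API’s breaking point.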
In a world where unicorn companies release software hundreds of times per day, automated testing is increasingly useful; manual testing, on the other hand, is less so. If you’re concerned about moving quickly while maintaining continuous quality, automated testing is a great enabler. Most teams should look at their testing pyramid and work bottom-up to automate their tests for faster feedback and releases.
Written exclusively for Launchable, Inc. by Jennifer Birch