
Top Five Testing Terminology Misconceptions

In the world of software testing, there are a handful of frequently misused terms. Prior to my testing terminology enlightenment, I would often use the word “bug” as a catch-all for everything that went wrong with an application. I’ve since seen the “error” (another commonly misused word) of my ways. Below are five commonplace testing terminology misconceptions.

 

Verification and Validation

The two are close “cousins”, and they sound similar if you say them really fast, but “verification” is not the same as “validation”. Verification is the process of determining whether specific requirements have been met; validation is determining whether those requirements were correct and, ultimately, whether the end user’s needs were met. Validation answers the million-dollar question: “Did we build the right solution?” In the old days, verification came first and validation came much later in the process. With teams moving to agile development, there is less of a time gap between the two, and sometimes both are performed simultaneously.
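To make the distinction concrete, here is a minimal Python sketch built around a hypothetical requirement (“orders over $100 receive a 10% discount”); the function and test names are invented for illustration. The verification test checks the code against the written requirement and passes; the validation test, written from the user’s point of view, deliberately fails because the requirement itself turned out to be wrong.

```python
def apply_discount(order_total: float) -> float:
    """Apply a 10% discount to orders over $100, per the (hypothetical) requirement."""
    if order_total > 100:
        return round(order_total * 0.90, 2)
    return order_total


def test_verification():
    # Verification: did we build the solution right?
    # The code is checked against the written requirement, and it passes.
    assert apply_discount(200.00) == 180.00  # discount applied over $100
    assert apply_discount(50.00) == 50.00    # no discount under the threshold


def test_validation():
    # Validation: did we build the right solution?
    # Suppose the business actually wanted the discount at exactly $100 too.
    # This check fails, exposing a flaw in the requirement, not in the code.
    assert apply_discount(100.00) == 90.00
```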

 

Defects, Bugs, Spatulas, Errors, and Failures

These words are all commonly used in the wrong context, and I’m certainly guilty of doing it on occasion. A defect is a deviation from requirements that causes a component or system to fail; it is the flaw itself, NOT the result. A bug is really a slightly narrower term for a defect: specifically, a software fault that causes a component or system to fail. An error is a human action that produces an incorrect result, such as a defect. And finally, a failure is the inability of a component or system to perform an expected function as specified in the requirements; it is the result of a defect. I threw in “spatulas” just to see if you’re still paying attention.
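Here is a toy Python example (assuming nothing beyond the standard library and a pytest-style test runner) that shows the whole chain in a few lines: the programmer’s mistake (the error) leaves a flaw in the code (the defect, and since it’s software, a bug), and executing that code produces the wrong result (the failure).

```python
def average(numbers):
    """Compute the arithmetic mean of a list of numbers."""
    # The programmer's slip (the ERROR) left a flaw in the code
    # (the DEFECT, here specifically a BUG): it divides by len(numbers) + 1.
    return sum(numbers) / (len(numbers) + 1)


def test_average():
    # Executing the defective code yields the wrong result (the FAILURE):
    # the component fails to perform its expected function.
    assert average([2, 4, 6]) == 4  # fails: returns 3.0 instead of 4
```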

 

Intake Testing, Sanity Testing, and Smoke Testing

Intake testing is the process of determining whether a component or system is ready for testing. Sanity testing is performed when one or a few areas of functionality are tested, often after feature additions and/or fixes are made to the system under test. Smoke testing verifies that the critical paths and functionality of the system are working while ignoring the finer details; examples include making sure an application successfully loads or that an online banking system doesn’t make basic math errors. Of these three terms, “sanity testing” seems to have the most varied interpretations. A Google search returns four or five different definitions, and even the ISTQB (International Software Testing Qualifications Board) treats the term as a synonym for smoke testing. After advising them to go with a more concise name for their organization (I like “Overlords of Testing” myself), I would further advise them that much of the industry does not treat the two terms as interchangeable.
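As a rough illustration of the smoke-testing idea, here is a pytest-style sketch against an invented Account class standing in for the banking example above; a real smoke suite would hit your actual application’s critical paths.

```python
class Account:
    """Invented stand-in for the system under test, kept in-file so the sketch runs."""
    def __init__(self, balance: float):
        self.balance = balance

    def deposit(self, amount: float) -> None:
        self.balance += amount

    def withdraw(self, amount: float) -> None:
        self.balance -= amount


def test_application_loads():
    # Smoke check 1: the critical path exists at all. In a real suite this
    # might be "the login page renders" or "the app starts without crashing".
    assert Account(balance=0.0) is not None


def test_no_basic_math_errors():
    # Smoke check 2: deposit and withdraw the same amount; the balance should
    # be unchanged. Finer details (overdrafts, rounding) are deliberately
    # ignored at the smoke level.
    account = Account(balance=100.00)
    account.deposit(50.00)
    account.withdraw(50.00)
    assert account.balance == 100.00
```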

 

Component Testing, Module Testing, and Unit Testing

Component testing, also known as module testing, is testing an individual program or module to prove that it works as specified in the requirements. Unit testing is independently testing the smallest testable piece of code in isolation, using test doubles such as stubs and drivers, and it is almost always performed at the code level. You can find unit testing at the bottom of the trusty testing pyramid.
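To show what a stub looks like in practice, here is a minimal sketch using Python’s built-in unittest module; PaymentGateway, checkout, and friends are invented names for illustration. The stub lets the unit under test run in complete isolation from its real dependency.

```python
import unittest


class PaymentGateway:
    """Production dependency: in real life this would make a network call."""
    def charge(self, amount: float) -> bool:
        raise NotImplementedError("real processor not available in unit tests")


class PaymentGatewayStub(PaymentGateway):
    """Stub: a canned stand-in so the unit can be tested in isolation."""
    def charge(self, amount: float) -> bool:
        return True  # always succeeds, no network involved


def checkout(gateway: PaymentGateway, amount: float) -> str:
    """The unit under test: the smallest testable piece of logic."""
    return "confirmed" if gateway.charge(amount) else "declined"


class CheckoutTest(unittest.TestCase):
    def test_checkout_confirms_on_successful_charge(self):
        # The stub isolates checkout() from the real payment gateway.
        self.assertEqual(checkout(PaymentGatewayStub(), 25.00), "confirmed")


if __name__ == "__main__":
    unittest.main()
```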

 

Collaboration

There is a common understanding of what “collaboration” is supposed to mean, but much like “artificial intelligence”, I believe the word is tossed around very liberally. Collaboration is everyone on a team truly working together towards a shared goal, so of course testing solutions are eager to claim that they have “collaboration features”. In practice, that usually ends up being something tacked on, like the ability for QA staff to leave messages beneath a test script that tries to explain what it’s doing. True collaboration means every team member has access to the same test case; each person comes at it from a different angle depending on their role, but all of them should share the same understanding of exactly what is being tested, with as little abstraction as possible. The best way to achieve this is to use a behavior-driven testing solution with test cases written in business-readable English.
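Cycle has its own business-readable syntax, so purely as a generic sketch of the idea, here is what a plain-English scenario and its step bindings might look like using the open-source behave library for Python; the scenario and step names are invented.

```python
# features/steps/login_steps.py: step bindings for a business-readable
# scenario. The scenario itself lives in a plain-English .feature file that
# every team member can read:
#
#   Scenario: Registered user signs in
#     Given a registered user "alice"
#     When she signs in with a valid password
#     Then she sees her account dashboard
#
from behave import given, when, then


@given('a registered user "{name}"')
def create_user(context, name):
    context.user = name  # stand-in for creating a test account


@when('she signs in with a valid password')
def sign_in(context):
    context.signed_in = True  # stand-in for driving the real login flow


@then('she sees her account dashboard')
def check_dashboard(context):
    assert context.signed_in
```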

 

Interested in automating your testing and deploying with confidence? Contact Sales for a Cycle demo.

This post was written by:

James Prior
Technical Pre-Sales Consultant

James has been working in software pre-sales and implementation since 2000, and has more recently settled into focusing on technical pre-sales. He takes care of our hands-on demonstrations, and eagerly awaits your request to see our Cycle test automation software in action. Drop him a line at: james.prior[at]tryonsolutions[dot]com.
