What Software Testing is and is not

Software Testing is a set of services and methodologies that ensure a product’s output (be it data or visual effects on mouseover) is precise. It is not, however, a way of building accuracy into a product.

The conceptual difference between “precision” and “accuracy” is best summed up by the classic target illustration: precise shots land in a tight cluster (wherever that cluster may be), while accurate shots center on the bullseye.

Source: http://glsi.agron.iastate.edu/2015/01/18/accuracy-vs-precision/
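
In software terms, the same distinction looks roughly like this (a toy sketch of our own, not taken from the source above):

    // The spec, written during discovery, says delivery takes 5 days.
    function estimateDeliveryDays(): number {
      return 5; // precise: every call returns exactly what the spec asks for
    }

    // Testing validates the output against the acceptance criteria - it passes.
    console.assert(estimateDeliveryDays() === 5, "violates acceptance criteria");

    // If real deliveries actually take 10 days, the function is precise but
    // inaccurate - and no amount of testing against the spec will reveal that,
    // because the error lives in the planning, not in the implementation.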

The aim of testing is to validate the acceptance criteria set in the discovery process and offer developers valuable insights into why some of the criteria are being violated.

If a product feature is poorly planned (i.e.: useless to the end-user), testing will probably not catch that. But if it is poorly implemented (i.e.: it doesn’t work as it should), testing will catch it.


A bug’s life - from error to fix

There’s a lot of confusion around the term “bug”. Even its origin is muddled: the most famous story dates back to 1947, when operators of the Harvard Mark II found a moth trapped in one of the machine’s relays and taped it into the logbook as the “first actual case of bug being found” (though engineers had been calling hardware glitches “bugs” since Edison’s day). Definitions, too, tend to be ambiguous or at least not specific enough.

For us, a bug is an observable difference between the expected output and the actual output of a feature/ module/ section. 

An error can be classified as a bug under specific conditions:

  1. It violates client expectations as defined in Acceptance Criteria (with any Change Requests that may have occurred)
  2. It happens on the live environment
  3. It is observed within 2 months after the module/ feature/ section is published on the live environment
  4. It is not generated by client-side behaviour outside of the creator’s control (e.g.: browser updates that break previous standards, script-blocking extensions etc.)

Note: If something used to work as expected and no longer does because of a change the product’s developer made (even if the change did not target the functionality that is now broken), we call it a bug - in other words, a regression.

As such, the following won’t be considered bugs:

  • An issue found on live more than 2 months after the functionality entered production (fails the two-month window criterion)
  • An issue encountered before the module/feature/section is present on the live server (fails the “live environment” criterion)
  • An issue that’s generated by an emergency intervention (force majeure)
  • Any specifications that are not present in the acceptance criteria (fails the “client expectations” criterion)
  • Improvements to existing functionalities (these are change requests)
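
Taken together, these rules amount to a simple decision procedure. Here is a minimal sketch of it in TypeScript (our own illustration - the data shape and field names are hypothetical):

    interface ReportedIssue {
      violatesAcceptanceCriteria: boolean;   // incl. any approved Change Requests
      onLiveEnvironment: boolean;            // observed on live, not on staging
      daysSinceGoingLive: number;            // days since publication on live
      causedByExternalClientChange: boolean; // e.g. a browser update or extension
    }

    // An issue counts as a bug only if it passes all four criteria above.
    function isBug(issue: ReportedIssue): boolean {
      return (
        issue.violatesAcceptanceCriteria &&
        issue.onLiveEnvironment &&
        issue.daysSinceGoingLive <= 60 &&    // the two-month window
        !issue.causedByExternalClientChange
      );
    }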


Once an error is identified as a bug, it goes through a prioritization process. Namely, the Project Manager (and sometimes the Account Manager) and a Developer assess the amount of work needed to fix the bug and suggest a plan of action to the client, allowing the client to decide which bugs need immediate fixing (business-critical) and which can wait.

All bugs, therefore, have the following data associated with them:

  • Testing instructions (how to reproduce the error)
  • Fixing budget
  • Fixing timeline
  • Updated Testing instructions (how to verify the error does not occur anymore)
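
Expressed as a data structure, such a bug record might look like this (a hypothetical shape, for illustration only):

    interface BugRecord {
      testingInstructions: string;        // how to reproduce the error
      fixingBudget: number;               // estimated cost of the fix
      fixingTimeline: string;             // e.g. "within the current sprint"
      updatedTestingInstructions: string; // how to verify the error is gone
      businessCritical: boolean;          // set by the client during prioritization
    }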


Types of Software Testing

Software testing is a minefield of types, categories, scopes, objectives, methodologies and activities. For the most part, software testing is a form of quality control, of checking compliance degrees. Of the various classes of “testing”, the ones we rely on the most are, ordered by frequency:


  1. Unit testing
  2. Integration testing
  3. Acceptance testing
  4. Compatibility testing
  5. Usability testing

The first 3 entries on the list are performed during our regular development cycle by the developers, with the QA specialist filling in the gaps when extensive integration testing is needed and adding acceptance testing to the equation. Usability testing belongs to the interface design phase, with input from the assigned Project Manager once the product’s front-end work is completed.
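
To make the first two categories concrete, here is a minimal sketch (hypothetical cart code, using Jest-style test syntax): a unit test exercises a single function in isolation, while an integration test checks that two modules work correctly together.

    // The unit under test: a pure function with no dependencies.
    function lineTotal(price: number, quantity: number): number {
      return price * quantity;
    }

    // A second module the first one integrates with.
    function applyDiscount(total: number, percent: number): number {
      return total * (1 - percent / 100);
    }

    // Unit test: one function, in isolation.
    test("lineTotal multiplies price by quantity", () => {
      expect(lineTotal(10, 2)).toBe(20);
    });

    // Integration test: the two modules working together.
    test("a 50% coupon halves the line total", () => {
      expect(applyDiscount(lineTotal(10, 2), 50)).toBe(10);
    });

Acceptance testing sits one level higher still: rather than checking functions or modules, it walks through the acceptance criteria the way a client would.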


How Software Testing affects your budget

It’s best to think of a testing budget in terms of opportunity costs. We have already established that 5 kinds of people get to test your product, willingly or unwillingly:


  1. Developer
  2. Software QA Specialist
  3. Project Manager
  4. Your Staff
  5. Your Users


If testing is left mostly to the Developers and TDD (Test-Driven Development) is not the agreed-upon work methodology, bugs are very likely to be discovered by the end-user. If TDD is used, the developers’ workload shifts from building features to writing tests, which usually slows down progress and doesn’t always deal well with real-life situations. Moreover, if a developer is biased towards a specific way of solving problems, that bias will be present in their tests as well, resulting in weaker testing performance - and that can increase production costs.
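
For reference, TDD inverts the usual order of work: the test is written first and fails, and only then is just enough feature code written to make it pass. A minimal sketch (hypothetical example, Jest-style syntax):

    // Step 1 ("red"): the test is written before any implementation exists,
    // so the first run fails.
    test("slugify turns a title into a URL-safe slug", () => {
      expect(slugify("Hello, World!")).toBe("hello-world");
    });

    // Step 2 ("green"): just enough code is written to make the test pass.
    function slugify(title: string): string {
      return title
        .toLowerCase()
        .replace(/[^a-z0-9]+/g, "-") // collapse non-alphanumeric runs into "-"
        .replace(/^-|-$/g, "");      // trim leading/trailing dashes
    }

Note that the same person picks both the implementation and the examples it is tested against, which is exactly how the bias mentioned above seeps into the tests.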

QA Specialists are the best equipped to handle software testing - they’re technical enough to think about edge cases and software quirks, they can automate tests if they need to BUT they also spend a lot of time understanding the business logic and the needs of the end-user. Their role was built to fill in the gap between Devs, on one side, and PMs/ Clients on the other. Though bringing a QA specialist on a project presents an upfront cost, it saves development time further down the line.

Project Managers can be highly technical (or at least more technically-inclined than your average staff member). However, they’re rarely as tech-savvy as a developer or QA tester and can therefore miss important errors. For example, an e-commerce shop could have a “Quantity” field associated with an item in the cart. If that field passes its contents as a string instead of an integer (the character “2” instead of the number 2) and that is somehow missed in development because it happens to work with numbers up to 9, a PM will probably miss it too. Moreover, asking a PM to perform rigorous testing will detract from their time spent doing what they should - making sure your project is being delivered on time, on budget and within expectations.
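
Here is roughly what that hypothetical “Quantity” bug looks like in code: string comparison is lexicographic, so it agrees with numeric comparison for single digits and silently breaks at 10.

    // Both values arrive from the DOM as strings (illustrative example).
    const quantity = "10";   // what the user typed into the Quantity field
    const maxPerOrder = "9"; // the per-order limit, also stored as a string

    // Lexicographic comparison: "1" sorts before "9", so "10" < "9" here.
    if (quantity > maxPerOrder) {
      console.log("rejected"); // never runs - 10 items slip past a 9-item cap
    }

    // The fix: convert both values before comparing them.
    console.log(Number(quantity) > Number(maxPerOrder)); // true, as intended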

Unless your team is already technical (you’re a digital-first company/ SaaS), their testing acumen will be only slightly better than your clients’. You can cut your costs by asking them to do most of the testing, but there’s a high chance that they’ll miss more issues than the aforementioned PM, and they will still need to be trained on how to pass found bugs on to the development team (lest more time be wasted figuring out what led to the error). Leaning on your own team for testing is advised only when the product is an early-stage version (low customer expectations) or when your in-house team is already made up of specialists.

Finally, bugs can be caught by your end-users, but whether you can rely on them for that depends heavily on your brand, business model and market.


Low stakes, high stakes: Testing Games vs. Testing Rockets

Bethesda Game Studios releases some of the buggiest games on the market. Skyrim, one of its most popular titles, has had community-built patches nearly since launch. However, what Bethesda lacks in testing it makes up for in complexity - its games tend to be story-rich, packed with gameplay features and highly moddable. So its users are fine with bugs and glitches because the end product outshines them.

However, other organizations cannot rely on community patches - not even on post-launch updates. NASA is undoubtedly one of them. Here is how they describe their philosophy regarding QA testing:

“We consider software quality as part of the process from start to finish, not something we just do at the end,” says Crumbley. “Good, complete software testing helps us ensure that we have a good quality software product as we go forward. This isn’t unique to NASA – but for NASA, it has to always be our approach, since the mission, and, often, lives, are at stake.”

–Tim Crumbley, NASA Software Assurance Technical Fellow

Who should be responsible for discovering software issues (and when)

Developer - when debugging

Developers already do a form of testing - debugging, or catching errors before they get pushed to a development branch.

Software QA Specialist - before major milestones

QA Specialists test products throughout development, but their busiest days are the ones leading up to major milestones.

Project Manager - before client demos

Project Managers are not expected to run exhaustive tests, but they are generally in charge of client demos, and as such they typically rehearse the demo to make sure no critical bugs have been missed.

Your Staff - before launch

Once the product reaches your staff, it should be 99% ready to go. However, they know your clients and business better than we do, so it’s generally a good idea to allow them to interact with the product for a couple of days before going live, to capture a small-scale set of real usage data. 

Your Users - almost never

Your users should only find edge cases (for business-critical products) or multiple minor bugs (for non-critical, entertainment-type products such as social apps, games etc). As discussed above, this is influenced by a number of factors and should not be glossed over.


Closing words

Regardless of who does the testing, a comprehensive risk assessment is always a good idea; in some cases, relying on non-specialists for testing turns out to be fine, while in others it can have disastrous consequences. Your Account Manager will help you find the right balance between budget cuts and customer satisfaction.