
Testing can often seem like the unglamorous Cinderella of the software development world: not as influential as the Customer-facing roles in a project team, nor as creative as design or coding (development). But testing as a discipline is just as important to the overall project and a positive outcome as any other activity, and in some cases more so. Granted, without building or otherwise acquiring the product or service there is nothing to test; but if what you have produced is not acceptable to the paying Customer, not safe, not usable, or not fit for purpose, then the cost and impact of inadequate testing can be far worse than having done nothing at all.
In this shortish introduction to a big topic I will try to debunk some of the perceived problems with testing, explain some of the jargon, and hopefully encourage you to think of the pursuit of quality, rather than the negative connotation of fault-finding, as the rationale for testing. To be clear, I am interested here in testing designed, man-made artefacts (per the definition in the Glossary of IT Terms), so not, for example, testing a hypothesis, marketing (A/B-type testing), gauging opinions, or taking measurements in the natural or human worlds.
To return to the fairy tale analogy, sometimes Testers are invited to the party (ball!) too late, and with inadequate planning, knowledge or resources. Here are the top 5 myths about testing:
(1) Testing is all about finding bugs
‘Bug’ here means any unexpected behaviour, software glitch or misinterpretation of the requirements or design specifications. Finding these things is important, but it is not an end in itself, and it can be misleading to measure the value and effectiveness of testing as a simple headcount of bugs found and killed (i.e. problems resolved).
Here is a short(!) list of some of the types of testing and the reasons why you might undertake them:
Functional
This is the easiest place to start, to answer the question, ‘does the product/software/system do what you expect it to do under normal conditions?’
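To make that concrete, here is a minimal sketch of a functional test in Python using the pytest framework. The calculate_vat function and the 20% rate are hypothetical stand-ins for whatever behaviour your own requirements actually specify:

```python
# Minimal functional test sketch (pytest). calculate_vat is a toy,
# hypothetical function standing in for real production code: given a
# net price, an assumed requirement says it returns the price plus 20% VAT.

def calculate_vat(net_price: float) -> float:
    if net_price < 0:
        raise ValueError("net price cannot be negative")
    return round(net_price * 1.20, 2)

def test_vat_added_under_normal_conditions():
    # Positive test: does the code do what the requirement expects
    # under normal conditions?
    assert calculate_vat(100.00) == 120.00
    assert calculate_vat(0.00) == 0.00
```

Saved as test_vat.py, the pytest command would pick this up and report a pass or fail.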
Non-functional testing (against Technical and Non-functional requirements ‘NFRs’ if they exist) goes beyond ‘what’ the software does to ‘how’ it does it.
In both cases there may be positive and negative test scenarios. The latter may involve trying to ‘break’ the system; better that you find any problems before a clumsy User or some malicious outsider does!
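Continuing the hypothetical calculate_vat sketch above, a negative test deliberately feeds the code something invalid and checks that it fails safely rather than silently producing nonsense (the function is repeated so the snippet runs on its own):

```python
import pytest

def calculate_vat(net_price: float) -> float:
    # Repeated from the sketch above so this snippet is self-contained.
    if net_price < 0:
        raise ValueError("net price cannot be negative")
    return round(net_price * 1.20, 2)

def test_vat_rejects_negative_price():
    # Negative test: try to 'break' the function and check that it
    # fails in a controlled, predictable way.
    with pytest.raises(ValueError):
        calculate_vat(-5.00)
```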
Volume/performance/load/stress/endurance
To answer questions about how the system performs under stress: for example, a peak number of Users or transactions, large file sizes, or environmental conditions at the limits of expected normal operation.
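As a rough illustration of the idea, the sketch below fires a burst of concurrent requests at an endpoint and checks the worst response time. The URL, the peak of 50 simulated Users and the 2-second threshold are all invented for the example; a real performance test would use figures agreed in the NFRs, and usually a dedicated tool rather than hand-rolled code:

```python
# Rough load-test sketch using only the Python standard library.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://localhost:8000/health"  # hypothetical endpoint
PEAK_USERS = 50                       # hypothetical peak load

def one_request() -> float:
    """Time a single request, returning the elapsed seconds."""
    start = time.perf_counter()
    with urlopen(URL, timeout=10) as response:
        response.read()
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=PEAK_USERS) as pool:
    latencies = list(pool.map(lambda _: one_request(), range(PEAK_USERS)))

print(f"worst response time: {max(latencies):.2f}s")
assert max(latencies) < 2.0, "NFR breached: slower than 2s under peak load"
```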
Usability/User Experience ‘UX’
A sub-set of non-functional testing. It is often difficult to articulate and test how easy a product, website or software system is to use without letting real users ‘play with it’, possibly in a sand-box; that could be the finished product (an alpha or beta test) or an early proof-of-concept or prototype.
Security/penetration
Test how well the software or application copes with external threats, potential malware and denial-of-service attacks. Unfortunately this is an increasing issue for all organisations that need to protect themselves, their income, sensitive data, or reputation.
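Full penetration testing is a specialist activity, but some basic negative security checks can live in the ordinary test suite. A minimal sketch, assuming a hypothetical search_users function that is supposed to reject obviously suspicious input instead of passing it towards a database:

```python
import pytest

def search_users(term: str) -> list:
    # Hypothetical function under test: a toy stand-in for code that
    # would normally run a parameterised database query.
    if len(term) > 256 or any(ch in term for ch in (";", "'", "--")):
        raise ValueError("suspicious search term")
    return []

@pytest.mark.parametrize("payload", [
    "'; DROP TABLE users; --",  # classic SQL-injection string
    "a" * 1_000_000,            # absurdly oversized input
])
def test_malicious_input_fails_safely(payload):
    # Not a substitute for a real penetration test, but it proves that
    # known-bad inputs are rejected rather than processed.
    with pytest.raises(ValueError):
        search_users(payload)
```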
Smoke/shakedown
An early confidence booster: how does the whole system behave when everything is connected together, possibly involving components from different sources and connecting software from different suppliers?
Each piece of code (unit test), sub-system, component and interface should already be proven in advance, so that there are few or no surprises at this stage!
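A smoke test can be as simple as touching each connected component once to prove the wiring. In the sketch below the three URLs are hypothetical placeholders for your own system's entry points:

```python
# Smoke-test sketch: one quick check per connected component.
from urllib.request import urlopen

SMOKE_CHECKS = {
    "web front end":  "http://localhost:8000/",        # hypothetical
    "API gateway":    "http://localhost:8080/health",  # hypothetical
    "reporting feed": "http://localhost:9000/status",  # hypothetical
}

def test_everything_answers_when_connected():
    for name, url in SMOKE_CHECKS.items():
        with urlopen(url, timeout=5) as response:
            assert response.getcode() == 200, f"{name} failed the smoke test"
```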
Acceptance/User Acceptance Test ‘UAT’
Normally a formal and fairly late positive test that the whole system is acceptable to the Customer prior to hand-over. This may include some regression testing, to ensure that existing operational systems or components are not adversely affected.
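One lightweight way to keep a regression suite to hand is to tag the tests that guard existing behaviour. In pytest that might look like the sketch below, where the invoice_total function and the marker name ‘regression’ are illustrative conventions, not anything the tool prescribes:

```python
import pytest

def invoice_total(items: list) -> float:
    # Hypothetical existing behaviour the Customer already relies on.
    return round(sum(items), 2)

@pytest.mark.regression
def test_existing_invoice_totals_unchanged():
    # Re-run whenever anything changes, to catch adverse side effects
    # on behaviour that previously worked.
    assert invoice_total([19.99, 5.01]) == 25.00
```

Running pytest -m regression would then execute just the tagged tests (after registering the marker in your pytest configuration).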
Lastly, and analogous to the discussion about gathering requirements, bugs do not tend to lie around waiting to be found. To be effective, the testing process needs planning, preparation of testing tools and data (test scenarios/test cases), execution, and analysis and interpretation of the results.
(Ed. accessibility and usability are discussed in more detail elsewhere in IT elementary school)
(2) We will test the system at the end of the development cycle
There is a mantra in IT and technology projects: ‘test early, test often’. Even without a finished product to look at and kick the tyres of, you can validate requirement models or design prototypes, and verify that each finished part of the whole works independently, especially if you are approaching the project in an ‘agile’ iterative and incremental fashion. The earlier a problem is found, the cheaper and quicker it is to modify or rework.
(3) Hand over all the product specifications to the testers
Ideally testing, whether by dedicated testers or not, should be fully integrated within the development cycle and collaborative project team. However, in a traditional waterfall-type project this is not possible, so a hand-off of the software product, documentation and acceptance criteria is needed. The testers may be a separate off-shore team on a different continent, which can add to the challenges of testing efficiently and effectively.
(4) We didn’t have time to complete all the testing
The sobering thought is that for the majority of software systems and applications – so-called ‘variable’ software – the testing can never be 100% complete. There is always an unexpected usage scenario, a change to the external environment, or the addition of unpredictable human users! So, assuming there is never enough time, deciding how much testing to do becomes an exercise in prioritisation, budget constraints, and risk management.
(5) Testing is a waste of time and money
As above, unless it is a trivial exercise where failure has no financial or reputational cost, some testing is better than no testing. But the $64,000 question is: how much testing is enough?
It eventually comes down to a juggling act between quality, cost and timescales – the so-called quality triangle – where acceptable quality can be very subjective, meaning good enough to meet the Customer’s needs and expectations. There may also be external acceptance criteria or compliance requirements to satisfy.
Thank you for reading this post, and for giving me some feedback.
Have a look in the IT elementary school for more handy bite-sized modules.
@ITelementary
(c) 2015-17 Antony Lawrence CBA Ltd.