The First Holy Grail of Test Design

By Hans Buwalda, Chief Technology Officer, LogiGear Corporation

Introduction

In my previous article "Key Principles of Test Design" I discussed a vision for test design, built around three key principles (which I call the "Holy Grails of Test Design"):

  1. Effective breakdown of the tests
  2. Right approach per test module
  3. Right level of test specification

This article focuses on the first principle, the effective breakdown of the tests, which I also like to refer to as the "high-level test design". In this step you divide the tests that have to be created into manageable sets, like chapters in a book, which I call "test modules". Each test module should typically contain between a few and a few dozen test cases. The next steps in test development deal with designing the individual test modules ("holy grails" 2 and 3) and with effective automation.
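
To give an idea of what such a test module can eventually look like once tests are implemented, here is a minimal pytest-style sketch. The application and all names in it (banking_app, open_account, deposit, withdraw) are invented purely for illustration; the point is only that one module holds a small, coherent set of test cases.

    # test_account_transactions.py -- one test module: basic account transactions.
    # The banking_app module and its helpers are hypothetical, for illustration only.
    from banking_app import open_account, deposit, withdraw

    def test_deposit_increases_balance():
        account = open_account(owner="Alice", initial_balance=100)
        deposit(account, 50)
        assert account.balance == 150

    def test_withdrawal_decreases_balance():
        account = open_account(owner="Bob", initial_balance=100)
        withdraw(account, 30)
        assert account.balance == 70

    def test_withdrawal_beyond_balance_is_rejected():
        # Assumption for this sketch: a rejected withdrawal returns False.
        account = open_account(owner="Carol", initial_balance=20)
        assert withdraw(account, 50) is False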

Effective Breakdown of the Tests

Although making a good high-level test design is as much art as it is science, there are some guiding criteria that I like to use. They are organized as "primary" and "additional" criteria. The primary criteria are the more obvious ones and should be applied first. The additional criteria can help to further refine the line-up of test modules.

Primary Criteria

  • Functionality and other requirements. The basis for an IT system is the required functionality, usually organized into groups and/or categories. Tests can be organized along similar lines.
  • Architecture of the system under test. Just about every IT system is built up in layers, modules, protocols, databases, etc. All of these pieces have to be tested individually and in combinations. The line-up of test modules should reflect that.
  • Kind of test. Many kinds of tests, such as functionality, UI, performance, screen layout, security, and more, can be applied to even one small part of a system under test. Generally, each test module should contain no more than one kind of test.
  • Ambition level. I tend to categorize tests by level of ambition. A low ambition level is a smoke test, just to see if a system can start and perform basic functions. The most common tests are of medium ambition level, testing individual functions without combinations. High ambition level tests are "aggressive" tests that are designed to "break" a system under test. Organizing tests of different ambition levels into different modules makes it easier to develop the tests and, above all, easier to run them: for example, run the smoke tests first; if they succeed, run the functional tests; and last come the aggressive tests (see the sketch below).
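
The ordering in the last bullet can be captured in a simple driver. A minimal sketch, assuming a run_module callable that runs one test module and reports whether all of its test cases passed (the module names are invented):

    # Sketch: run test modules in order of increasing ambition level.
    # run_module is an assumed helper: it runs one test module and
    # returns True when all of its test cases pass.
    def run_test_session(run_module):
        if not run_module("smoke_tests"):         # low ambition: does the build start at all?
            return "stopped: build not testable"
        if not run_module("functional_tests"):    # medium ambition: individual functions
            return "stopped: basic functionality broken"
        run_module("aggressive_tests")            # high ambition: try to break the system
        return "done"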

Additional Criteria

  • Stakeholders. These are departments or individuals with a particular interest in some of the tests. One good way to line up the tests is along stakeholder lines, so that each test module has only one stakeholder to involve (for input and/or assessment).
  • Complexity of the test. Put particularly complicated tests in separate test modules, so that the other tests can run unaffected.
  • Technical aspects of execution. Some tests might need a complex environment or specific hardware to run, while others can run more easily. Make sure the module line-up reflects this.
  • Planning and control. Overall project planning and progress can affect whether or not enough information is available to develop certain test cases. Keeping such test cases separate from ones that can be developed earlier in the life cycle allows for a smoother progression of test development.
  • Risks involved. A risk analysis can provide great input for test design. When there are high-risk areas in a system under test, it can make sense to devote specific test modules to them. A good example is a premium calculation in an insurance system. Any bug in a core function like that is not acceptable, so it is worthwhile to plan a test module for each individual aspect of such a calculation (a sketch of such a module follows this list).
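
For the premium calculation example above, a dedicated test module for one aspect of the calculation could look like the following minimal sketch. The insurance_app module, the premium_for function, and all ages and amounts are invented for illustration only.

    # test_premium_age_factor.py -- dedicated module for one high-risk aspect:
    # the effect of the insured person's age on the calculated premium.
    # insurance_app, premium_for, and the expected amounts are hypothetical.
    import pytest
    from insurance_app import premium_for

    @pytest.mark.parametrize("age, expected_premium", [
        (18, 120.00),   # youngest accepted age
        (40, 95.00),    # mid-range
        (64, 150.00),   # just below an assumed senior threshold
        (65, 210.00),   # senior rate applies
    ])
    def test_premium_depends_on_age(age, expected_premium):
        assert premium_for(age=age, coverage=100000) == pytest.approx(expected_premium)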

The way to apply these criteria is to start with the straightforward ones first, one at a time, then review the results using all of the criteria, including the additional ones. Repeat this process a couple of times, preferably with a number of knowledgeable people involved. If you want to use outside consultants, this step is a good candidate; it also does not take much time, which helps keep outside consulting costs down.

When the modules are identified, they can be the basis for a Test Delivery Plan, in which the modules selected for development are listed with tentative dates for the delivery of their first version (for example, to a stakeholder who will review them).
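
Such a plan does not need to be elaborate; a small table or data file is often enough. A minimal sketch, in which every module name, date, and reviewer is an invented example:

    # Sketch of a Test Delivery Plan as plain data; all entries are invented examples.
    test_delivery_plan = [
        {"module": "smoke_tests",          "first_version_due": "2007-06-01", "reviewer": "test lead"},
        {"module": "account_transactions", "first_version_due": "2007-06-15", "reviewer": "business analyst"},
        {"module": "premium_age_factor",   "first_version_due": "2007-07-01", "reviewer": "actuarial department"},
        {"module": "load_and_performance", "first_version_due": "2007-07-15", "reviewer": "operations"},
    ]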

Here are some examples of what typically can go into separate modules:

  • UI-oriented tests, like "does a function key work" or "does listbox xyz contain the right values"
  • Tests of whether the individual functions (like transactions in a financial system) work
  • Tests of alternate paths in use cases, like whether the system rolls back after canceling a transaction
  • Higher-level, business-oriented end-to-end tests, like: create a new customer, let that customer do a couple of transactions, and see if the end balance is correct (a sketch of such a test follows this list)
  • Odd tests that are more difficult to execute, for example because they need multiple workstations (e.g., a test exceeds a limit to see whether a supervisor on another workstation is asked to approve)
  • Tests of qualities of a system other than functionality, like a load/performance test
  • Tests that involve non-UI actions, like testing individual methods of classes used in the system under test, or messages in a TCP/IP or SS7 protocol
  • Tests with different "ambition levels", like:
    • a simple, low-ambition smoke test to see if a new build of the system under test works well enough before running any other modules
    • an aggressive test, designed to break a system under test, typically executed only after other modules have already passed
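
As an illustration of the business-level end-to-end example above, a minimal sketch follows. The financial_app module and its helpers (create_customer, transfer, balance_of) are invented for illustration only.

    # test_customer_end_to_end.py -- business-level end-to-end test module (sketch).
    # financial_app and its helpers are hypothetical, for illustration only.
    from financial_app import create_customer, transfer, balance_of

    def test_new_customer_transactions_end_balance():
        customer = create_customer(name="New Customer", opening_balance=1000)
        transfer(customer, amount=-200, description="utility bill")
        transfer(customer, amount=500, description="salary")
        assert balance_of(customer) == 1300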

Conclusion

However you do it, try to end up with a list of test modules that are well differentiated from each other and each have a single, well-defined scope. That scope is the anchor point for the subsequent development of the tests within each test module.
