  • Requirements Phase

    2007-06-14 23:35:04

    Chapter 1. Requirements Phase

    The most effective testing programs start at the beginning of a project, long before any program code has been written. The requirements documentation is verified first; then, in the later stages of the project, testing can concentrate on ensuring the quality of the application code. Expensive reworking is minimized by eliminating requirements-related defects early in the project's life, prior to detailed design or coding work.

    The requirements specifications for a software application or system must ultimately describe its functionality in great detail. One of the most challenging aspects of requirements development is communicating with the people who are supplying the requirements. Each requirement should be stated precisely and clearly, so it can be understood in the same way by everyone who reads it.

    If there is a consistent way of documenting requirements, it is possible for the stakeholders responsible for requirements gathering to effectively participate in the requirements process. As soon as a requirement is made visible, it can be tested and clarified by asking the stakeholders detailed questions. A variety of requirement tests can be applied to ensure that each requirement is relevant, and that everyone has the same understanding of its meaning.

    Item 1: Involve Testers from the Beginning

    Testers need to be involved from the beginning of a project's life cycle so they can understand exactly what they are testing and can work with other stakeholders to create testable requirements.

    Defect prevention is the use of techniques and processes that can help detect and avoid errors before they propagate to later development phases. Defect prevention is most effective during the requirements phase, when the impact of a change required to fix a defect is low: The only modifications will be to requirements documentation and possibly to the testing plan, also being developed during this phase. If testers (along with other stakeholders) are involved from the beginning of the development life cycle, they can help recognize omissions, discrepancies, ambiguities, and other problems that may affect the project requirements' testability, correctness, and other qualities.

    A requirement can be considered testable if it is possible to design a procedure in which the functionality being tested can be executed, the expected output is known, and the output can be programmatically or visually verified.
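
    To make this concrete, here is a minimal sketch (not from the book; the Account class, values, and requirement wording are invented for illustration) of a procedure that executes the functionality, knows the expected output, and verifies it programmatically:

        # Requirement under test (hypothetical): "A withdrawal may not
        # exceed the account balance."

        class Account:
            def __init__(self, balance):
                self.balance = balance

            def withdraw(self, amount):
                if amount > self.balance:
                    raise ValueError("withdrawal exceeds balance")
                self.balance -= amount

        def test_withdrawal_may_not_exceed_balance():
            account = Account(balance=100)
            try:
                account.withdraw(150)
            except ValueError:
                # Expected output is known and programmatically verifiable:
                # the withdrawal is rejected and the balance is unchanged.
                assert account.balance == 100
            else:
                assert False, "requirement violated: overdraft was allowed"

        test_withdrawal_may_not_exceed_balance()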

    Testers need a solid understanding of the product so they can devise better and more complete test plans, designs, procedures, and cases. Early test-team involvement can eliminate confusion about functional behavior later in the project life cycle. In addition, early involvement allows the test team to learn over time which aspects of the application are the most critical to the end user and which are the highest-risk elements. This knowledge enables testers to focus on the most important parts of the application first, avoiding over-testing rarely used areas and under-testing the more important ones.

    Some organizations regard testers strictly as consumers of the requirements and other software development work products, requiring them to learn the application and domain as software builds are delivered to the testers, instead of involving them during the earlier phases. This may be acceptable in smaller projects, but in complex environments it is not realistic to expect testers to find all significant defects if their first exposure to the application is after it has already been through requirements, analysis, design, and some software implementation. More than just understanding the "inputs and outputs" of the software, testers need deeper knowledge that can come only from understanding the thought process used during the specification of product functionality. Such understanding not only increases the quality and depth of the test procedures developed, but also allows testers to provide feedback regarding the requirements.

    The earlier in the life cycle a defect is discovered, the cheaper it will be to fix it. Table 1.1 outlines the relative cost to correct a defect depending on the life-cycle stage in which it is discovered.[1]

    [1] B. Littlewood, ed., Software Reliability: Achievement and Assessment (Henley-on-Thames, England: Alfred Waller, Ltd., November 1987).

    Table 1.1. Prevention is Cheaper Than Cure: Error Removal Cost Multiplies over System Development Life Cycle

        Phase               Relative Cost to Correct
        -----------------   ------------------------
        Definition          $1
        High-Level Design   $2
        Low-Level Design    $5
        Code                $10
        Unit Test           $15
        Integration Test    $22
        System Test         $50
        Post-Delivery       $100+

    Item 2: Verify the Requirements

    In his work on specifying the requirements for buildings, Christopher Alexander[1] describes setting up a quality measure for each requirement: "The idea is for each requirement to have a quality measure that makes it possible to divide all solutions to the requirement into two classes: those for which we agree that they fit the requirement and those for which we agree that they do not fit the requirement." In other words, if a quality measure is specified for a requirement, any solution that meets this measure will be acceptable, and any solution that does not meet the measure will not be acceptable. Quality measures are used to test the new system against the requirements.

    [1] Christopher Alexander, Notes On the Synthesis of Form (Cambridge, Mass.: Harvard University Press, 1964).

    Attempting to define the quality measure for a requirement helps to rationalize fuzzy requirements. For example, everyone would agree with a statement like "the system must provide good value," but each person may have a different interpretation of "good value." In devising the scale that must be used to measure "good value," it will become necessary to identify what that term means. Sometimes requiring the stakeholders to think about a requirement in this way will lead to defining an agreed-upon quality measure. In other cases, there may be no agreement on a quality measure. One solution would be to replace one vague requirement with several unambiguous requirements, each with its own quality measure.[2]

    [2] Tom Gilb has developed a notation, called Planguage (for Planning Language), to specify such quality requirements. His forthcoming book Competitive Engineering describes Planguage.
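
    As a small illustration of how a quality measure makes a requirement testable (a hypothetical sketch; the 5-second threshold and function names are invented, not from the book), suppose the vague "respond quickly" has been replaced by the measurable "a customer inquiry completes within 5 seconds":

        import time

        MAX_RESPONSE_SECONDS = 5.0  # the agreed-upon quality measure

        def submit_customer_inquiry(order_id):
            """Stand-in for the real system call; replace with the
            application under test."""
            time.sleep(0.01)

        def test_inquiry_meets_quality_measure():
            start = time.monotonic()
            submit_customer_inquiry("ORDER-1234")
            elapsed = time.monotonic() - start
            # Every candidate solution now falls into one of two classes:
            # it fits the requirement (<= 5 s) or it does not (> 5 s).
            assert elapsed <= MAX_RESPONSE_SECONDS

        test_inquiry_meets_quality_measure()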

    It is important that guidelines for requirement development and documentation be defined at the outset of the project. In all but the smallest programs, careful analysis is required to ensure that the system is developed properly. Use cases are one way to document functional requirements, and can lead to more thorough system designs and test procedures. (In most of this book, the broad term requirement will be used to denote any type of specification, whether a use case or another type of description of functional aspects of the system.)

    In addition to functional requirements, it is also important to consider nonfunctional requirements, such as performance and security, early in the process: They can determine the technology choices and areas of risk. Nonfunctional requirements do not endow the system with any specific functions, but rather constrain or further define how the system will perform any given function. Functional requirements should be specified along with their associated nonfunctional requirements. (Chapter 9 discusses nonfunctional requirements.)

    Following is a checklist that can be used by testers during the requirements phase to verify the quality of the requirements.[3],[4] Using this checklist is a first step toward trapping requirements-related defects as early as possible, so they don't propagate to subsequent phases, where they would be more difficult and expensive to find and correct. All stakeholders responsible for requirements should verify that requirements possess the following attributes.

    [3] Suzanne Robertson, "An Early Start To Testing: How To Test Requirements," paper presented at EuroSTAR 96, Amsterdam, December 2?, 1996. Copyright 1996 The Atlantic Systems Guild Ltd. Used by permission of the author.

    [4] Karl Wiegers, Software Requirements (Redmond, Wash.: Microsoft Press, Sept. 1999).

    • Correctness of a requirement is judged based on what the user wants. For example, are the rules and regulations stated correctly? Does the requirement exactly reflect the user's request? It is imperative that the end user, or a suitable representative, be involved during the requirements phase. Correctness can also be judged based on standards. Are the standards being followed?

    • Completeness ensures that no necessary elements are missing from the requirement. The goal is to avoid omitting requirements simply because no one has asked the right questions or examined all of the pertinent source documents.

      Testers should insist that associated nonfunctional requirements, such as performance, security, usability, compatibility, and accessibility,[5] are described along with each functional requirement. Nonfunctional requirements are usually documented in two steps:

      [5] Elfriede Dustin et al., "Nonfunctional Requirements," in Quality Web Systems: Performance, Security, and Usability (Boston, Mass.: Addison-Wesley, 2002), Sec. 2.5.

      1. A system-wide specification is created that defines the nonfunctional requirements that apply to the system. For example, "The user interface of the Web system must be compatible with Netscape Navigator 4.x or higher and Microsoft Internet Explorer 4.x or higher."

      2. Each requirement description should contain a section titled "Nonfunctional Requirements" documenting any specific nonfunctional needs of that particular requirement that deviate from the system-wide nonfunctional specification.
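
      As a hypothetical sketch of this two-step pattern (names and values invented; the browser versions echo the example above), the effective nonfunctional constraints for a requirement are the system-wide defaults overridden by that requirement's own "Nonfunctional Requirements" section:

          # Step 1 (hypothetical): system-wide nonfunctional specification.
          SYSTEM_WIDE = {
              "compatibility": "Netscape Navigator 4.x+; Internet Explorer 4.x+",
              "response_time": "under 5 seconds",
          }

          # Step 2 (hypothetical): a requirement documents only its deviations.
          REQUIREMENTS = {
              "REQ-12": {
                  "description": "Render the monthly sales report",
                  "nonfunctional": {"response_time": "under 30 seconds"},
              },
          }

          def effective_nonfunctional(req_id):
              """System-wide values, overridden by the requirement's own section."""
              merged = dict(SYSTEM_WIDE)
              merged.update(REQUIREMENTS[req_id].get("nonfunctional", {}))
              return merged

          print(effective_nonfunctional("REQ-12")["response_time"])  # under 30 seconds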

    • Consistency verifies that there are no internal or external contradictions among the elements within the work products, or between work products. By asking the question, "Does the specification define every essential subject-matter term used within the specification?" we can determine whether the elements used in the requirement are clear and precise. For example, a requirements specification that uses the term "viewer" in many places, with different meanings depending on context, will cause problems during design or implementation. Without clear and consistent definitions, determining whether a requirement is correct becomes a matter of opinion.

    • Testability (or verifiability) of the requirement confirms that it is possible to create a test for the requirement, and that an expected result is known and can be programmatically or visually verified. If a requirement cannot be tested or otherwise verified, this fact and its associated risks must be stated, and the requirement must be adjusted if possible so that it can be tested.

    • Feasibility of a requirement ensures that it can be implemented given the budget, schedules, technology, and other resources available.

    • Necessity verifies that every requirement in the specification is relevant to the system. To test for relevance or necessity, the tester checks the requirement against the stated goals for the system. Does this requirement contribute to those goals? Would excluding this requirement prevent the system from meeting those goals? Are any other requirements dependent on this requirement? Some irrelevant requirements are not really requirements, but proposed solutions.

    • Prioritization allows everyone to understand the relative value to stakeholders of the requirement. Pardee[6] suggests that a scale from 1 to 5 be used to specify the level of reward for good performance and penalty for bad performance on a requirement. If a requirement is absolutely vital to the success of the system, then it has a penalty of 5 and a reward of 5. A requirement that would be nice to have but is not really vital might have a penalty of 1 and a reward of 3. The overall value or importance stakeholders place on a requirement is the sum of its penalties and rewards: in the first case, 10, and in the second, 4. This knowledge can be used to make prioritization and trade-off decisions when the time comes to design the system. This approach needs to balance the perspective of the user (one kind of stakeholder) against the cost and technical risk associated with a proposed requirement (the perspective of the developer, another kind of stakeholder).[7]

      [6] William J. Pardee, To Satisfy and Delight Your Customer: How to Manage for Customer Value (New York, N.Y.: Dorset House, 1996).

      [7] For more information, see Karl Wiegers, Software Requirements, Ch. 13.

    • Unambiguousness ensures that requirements are stated in a precise and measurable way. The following is an example of an ambiguous requirement: "The system must respond quickly to customer inquiries." "Quickly" is innately ambiguous and subjective, and therefore renders the requirement untestable. A customer might think "quickly" means within 5 seconds, while a developer may think it means within 3 minutes. Conversely, a developer might think it means within 2 seconds and over-engineer a system to meet unnecessary performance goals.

    • Traceability ensures that each requirement is identified in such a way that it can be associated with all parts of the system where it is used. For any change to requirements, is it possible to identify all parts of the system where this change has an effect?

      To this point, each requirement has been considered as a separately identifiable, measurable entity. It is also necessary to consider the connections among requirements, in order to understand the effect of one requirement on others. There must be a way of dealing with a large number of requirements and the complex connections among them. Suzanne Robertson[8] suggests that rather than trying to tackle everything simultaneously, it is better to divide requirements into manageable groups. This could be a matter of allocating requirements to subsystems, or to sequential releases based on priority. Once that is done, the connections can be considered in two phases: first the internal connections among the requirements in each group, then the connections among the groups. If the requirements are grouped in a way that minimizes the connections between groups, the complexity of tracing connections among requirements will be minimized.

      [8] Suzanne Robertson, "An Early Start to Testing," op. cit.

      Traceability also allows collection of information about individual requirements and other parts of the system that could be affected if a requirement changes, such as designs, code, tests, help screens, and so on. When informed of requirement changes, testers can make sure that all affected areas are adjusted accordingly.
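
      A minimal sketch of this bookkeeping (all IDs, artifacts, and dependencies here are invented for illustration): represent traceability as a mapping from each requirement to the artifacts and requirements connected to it, so the parts of the system affected by a change can be listed mechanically.

          # Hypothetical traceability data: requirement -> dependent artifacts.
          TRACE = {
              "REQ-7": ["design/auth.md", "src/login.py", "tests/TP-12"],
              "REQ-8": ["src/report.py", "tests/TP-14"],
          }

          # Connections among requirements (REQ-8 depends on REQ-7, say).
          DEPENDS_ON = {"REQ-8": {"REQ-7"}}

          def affected_by(req_id):
              """Everything to revisit if req_id changes: its own artifacts
              plus the artifacts of requirements that depend on it."""
              affected = set(TRACE.get(req_id, []))
              for other, deps in DEPENDS_ON.items():
                  if req_id in deps:
                      affected |= set(TRACE.get(other, []))
              return sorted(affected)

          print(affected_by("REQ-7"))
          # ['design/auth.md', 'src/login.py', 'src/report.py',
          #  'tests/TP-12', 'tests/TP-14']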

    As soon as a single requirement is available for review, it is possible to start testing that requirement for the aforementioned characteristics. Trapping requirements-related defects as early as they can be identified will prevent incorrect requirements from being incorporated in the design and implementation, where they will be more difficult and expensive to find and correct.[9]

    [9] T. Capers Jones, Assessment and Control of Software Risks (Upper Saddle River, N.J.: Prentice Hall PTR, 1994).

    Once these steps have been followed, the feature set of the application under development is outlined and quantified, which allows for better organization, planning, tracking, and testing of each feature.

    Item 3: Design Test Procedures As Soon As Requirements Are Available

    Just as software engineers produce design documents based on requirements, it is necessary for the testing team to design test procedures based on these requirements as well. In some organizations, the development of test procedures is pushed off until after a build of the software is delivered to the testing team, due either to lack of time or lack of properly specified requirements suitable for test-procedure design. This approach has inherent problems, including the possibility of requirement omissions or errors being discovered late in the cycle; software implementation issues, such as failure to satisfy a requirement; nontestability; and the development of incomplete test procedures.

    Moving the test procedure development effort closer to the requirements phase of the process, rather than waiting until the software-development phase, allows for test procedures to provide benefits to the requirement-specification activity. During the course of developing a test procedure, certain oversights, omissions, incorrect flows, and other errors may be discovered in the requirements document, as testers attempt to walk through an interaction with the system at a very specific level, using sets of test data as input. This process obliges the requirement to account for variations in scenarios, as well as to specify a clear path through the interaction in all cases.

    If a problem is uncovered in the requirement, that requirement will need to be reworked to account for this discovery. The earlier in the process such corrections are incorporated, the less likely it is that the corrections will affect software design or implementation.

    As mentioned in Item 1, early detection equates to lower cost. If a requirement defect is discovered in later phases of the process, all stakeholders must change the requirement, design, and code, which will affect budgets, schedules, and possibly morale. However, if the defect is discovered during the requirements phase, repairing it is simply a matter of changing and reviewing the requirement text.

    The process of identifying errors or omissions in a requirement through test-procedure definition is referred to as verifying the requirement's testability. If not enough information exists, or the information provided in the specification is too ambiguous to create a complete test procedure with its related test cases for relevant paths, the specification is not considered to be testable, and may not be suitable for software development. Whether a test can be developed for a requirement is a valuable check and should be considered part of the process of approving a requirement as complete. There are exceptions, where a requirement cannot immediately be verified programmatically or manually by executing a test. Such exceptions need to be explicitly stated. For example, fulfillment of a requirement that "all data files need to be stored for record-keeping for three years" cannot be immediately verified. However, it does need to be approved, adhered to, and tracked.

    If a requirement cannot be verified, there is no guarantee that it will be implemented correctly. Being able to develop a test procedure that includes data inputs, steps to verify the requirement, and known expected outputs for each related requirement can assure requirement completeness by confirming that no important requirement information is missing; missing information can make the requirement difficult or even impossible to implement correctly, and untestable. Developing test procedures for requirements early on allows for early discovery of nonverifiability issues.
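
    One way to make those three ingredients explicit, sketched here as a hypothetical record structure (field names and data invented for illustration), is to refuse to consider a test procedure complete until every field can be filled in:

        from dataclasses import dataclass

        @dataclass
        class TestProcedure:
            """A requirement is verifiable only if every field can be filled in."""
            requirement_id: str
            data_inputs: dict
            steps: list
            expected_output: str

        tp = TestProcedure(
            requirement_id="REQ-7",
            data_inputs={"username": "jsmith", "password": "wrong-password"},
            steps=[
                "Open the login screen",
                "Enter the username and password from data_inputs",
                "Press 'Log in'",
            ],
            expected_output="Error message 'Invalid credentials'; user not logged in",
        )

        # If expected_output cannot be stated for a requirement, the
        # requirement is not yet testable and should be clarified first.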

    Developing test procedures after a software build has been delivered to the testing team also risks incomplete test-procedure development because of intensive time pressure to complete the product's testing cycle. This can manifest in various ways: For example, the test procedure might be missing entirely; or it may not be thoroughly defined, omitting certain paths or data elements that may make a difference in the test outcome. As a result, defects might be missed. Or, the requirement may be incomplete, as described earlier, and not support the definition of the necessary test procedures, or even proper software development. Incomplete requirements often result in incomplete implementation.

    Early evaluation of the testability of an application's requirements can be the basis for defining a testing strategy. While reviewing the testability of the requirements, testers might determine, for example, that using a capture/playback tool would be ideal, allowing execution of some of the tests in an automated fashion. Determining this early allows enough lead time to evaluate and implement automated testing tools.

    To offer another example: During an early evaluation phase, it could be determined that some requirements relating to complex and diversified calculations may be more suitably tested with a custom test harness (see Item 37) or specialized scripts. Test-harness development and other such test-preparation activities will require additional lead time before testing can begin.

    Moving test procedures closer to the requirements-definition phase of an iteration[10] carries some additional responsibilities, however, including prioritizing test procedures based on requirements, assigning adequate personnel, and understanding the testing strategy. It is often a luxury, if not impossible, to develop all test procedures immediately for each requirement, because of time, budget, and personnel constraints. Ideally, the requirements and subject-matter expert testing teams are both responsible for creating example test scenarios as part of the requirements definition, including scenario outcomes (the expected results).

    [10] An iteration, used in an iterative development process, includes the activities of requirement analysis, design, implementation and testing. There are many iterations in an iterative development process. A single iteration for the whole project would be known as the waterfall model.

    Test-procedure development must be prioritized based on an iterative implementation plan. If time constraints exist, test developers should start by developing test procedures for the requirements to be implemented first. They can then develop "draft" test procedures for all requirements to be completed later.

    Requirements are often refined through review and analysis in an iterative fashion. It is very common that new requirement details and scenario clarifications surface during the design and development phase. Purists will say that all requirement details should be ironed out during the requirements phase. However, the reality is that deadline pressures require development to start as soon as possible; the luxury of having complete requirements up-front is rare. If requirements are refined later in the process, the associated test procedures also need to be refined. These also must be kept up-to-date with respect to any changes: They should be treated as "living" documents.

    Effectively managing such evolving requirements and test procedures requires a well-defined process in which test designers are also stakeholders in the requirement process. See Item 4 for more on the importance of communicating requirement changes to all stakeholders.

    Item 4: Ensure That Requirement Changes Are Communicated

    When test procedures are based on requirements, it is important to keep test team members informed of changes to the requirements as they occur. This may seem obvious, but it is surprising how often test procedures are executed against an implementation that has changed because of updated requirements. Many times, testers responsible for developing and executing the test procedures are not notified of requirements changes, which can result in false reports of defects and in wasted research and testing time.

    There can be several reasons for this kind of process breakdown, such as:

    • Undocumented changes. Someone, for example the product or project manager, the customer, or a requirements analyst, has instructed the developer to implement a feature change, without agreement from other stakeholders, and the developer has implemented the change without communicating or documenting it. A process needs to be in place that makes it clear to the developer how and when requirements can be changed. This is commonly handled through a Change Control Board, an Engineering Review Board, or some similar mechanism, discussed below.

    • Outdated requirement documentation. An oversight on the testers' part or poor configuration management may cause a tester to work with an outdated version of the requirement documentation when developing the test plan or procedures. Updates to requirements need to be documented, placed under configuration management control (baselined), and communicated to all stakeholders involved.

    • Software defects. The developer may have implemented a requirement incorrectly, although the requirement documentation and the test documentation are correct.

    In the last case, a defect report should be written. However, if a requirement change process is not being followed, it can be difficult to tell which of the aforementioned scenarios is actually occurring. Is the problem in the software, the requirement, the test procedure, or all of the above? To avoid guesswork, all requirement changes must be openly evaluated, agreed upon, and communicated to all stakeholders. This can be accomplished by having a requirement-change process in place that facilitates the communication of any requirement changes to all stakeholders.

    If a requirement needs to be corrected, the change process must take into account the ripple effect upon design, code, and all associated documentation, including test documentation. To effectively manage this process, any changes should be baselined and versioned in a configuration-management system.

    The change process outlines when, how, by whom, and where change requests are initiated. The process might specify that a change request can be initiated during any phase of the life cycle: during any type of review, walk-through, or inspection in the requirements, design, code, defect-tracking, or testing activities, or any other phase.

    Each change request could be documented via a change-request form (a template listing all information necessary to facilitate the change-request process), which is passed on to the Change Control Board (CCB). Instituting a CCB helps ensure that any changes to requirements and other change requests follow a specific process. A CCB verifies that change requests are documented appropriately, evaluated, and agreed upon; that any affected documents (requirements, design documents, etc.) are updated; and that all stakeholders are informed of the change.

    The CCB usually consists of representatives from the various management teams, e.g., product management, requirements management, and QA teams, as well as the testing manager and the configuration manager. CCB meetings can be conducted on an as-needed basis. All stakeholders need to evaluate change proposals by analyzing the priority, risks, and tradeoffs associated with the suggested change.

    An analysis of the proposed change's associated and critical impacts must also be performed. For example, a requirements change may affect the entire suite of testing documentation, requiring major additions to the test environment and extending the testing by numerous weeks. Or an implementation may need to be changed in a way that affects the entire automated testing suite. Such impacts must be identified, communicated, and addressed before the change is approved.

    The CCB determines a change request's validity, effects, necessity, and priority (for example, whether it should be implemented immediately, or documented in the project's central repository as an enhancement). The CCB must ensure that the suggested changes, associated risk evaluation, and decision-making processes are documented and communicated.

    It is imperative that all parties be made aware of any change suggestions, allowing them to contribute to risk analysis and mitigation of change. An effective way to ensure this is to use a requirements-management tool,[11] which can be used to track the requirements changes as well as maintain the traceability of the requirements to the test procedures (see the testability checklist in Item 2 for a discussion of traceability). If the requirement changes, the change should be reflected and updated in the requirements-management tool, and the tool should mark the affected test artifact (and other affected elements, such as design, code, etc.), so the respective parties can update their products accordingly. All stakeholders can then get the latest information via the tool.

    [11] There are numerous excellent requirement management tools on the market, such as Rational's RequisitePro, QSS's DOORS, and Integrated Chipware's RTM: Requirement & Traceability Management.
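
    The bookkeeping such a tool automates can be sketched in a few lines (a hypothetical illustration; the real tools above offer far richer models): when a requirement changes, every linked artifact is flagged so its owners know to revisit it.

        # Hypothetical links from a requirement to its dependent artifacts.
        LINKS = {
            "REQ-7": {"design": ["auth-design"], "code": ["login-module"],
                      "tests": ["TP-12"]},
        }
        STALE = set()  # artifacts flagged for review

        def change_requirement(repository, req_id, new_text):
            """Record the new text and flag every linked artifact."""
            repository[req_id] = new_text
            for artifacts in LINKS.get(req_id, {}).values():
                STALE.update(artifacts)

        repo = {"REQ-7": "Lock the account after 3 failed logins."}
        change_requirement(repo, "REQ-7", "Lock the account after 5 failed logins.")
        print(sorted(STALE))  # ['TP-12', 'auth-design', 'login-module']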

    Change information managed with a requirements-management tool allows testers to reevaluate the testability of the changed requirement as well as the impact of changes to test artifacts (test plan, design, etc.) or the testing schedule. The affected test procedures must be revisited and updated to reflect the requirements and implementation changes. Previously identified defects must be reevaluated to determine whether the requirement change has made them obsolete. If scripts, test harnesses, or other testing mechanisms have already been created, they may need to be updated as well.

    A well-defined process that facilitates communication of changed requirements, allowing for an effective test program, is critical to the efficiency of the project.

    Item 5: Beware of Developing and Testing Based on an Existing System

    In many software-development projects, a legacy application already exists, with little or no existing requirement documentation, and is the basis for an architectural redesign or platform upgrade. Most organizations in this situation insist that the new system be developed and tested based exclusively on continual investigation of the existing application, without taking the time to analyze or document how the application functions. On the surface, it appears this will result in an earlier delivery date, since little or no effort is "wasted" on requirements reengineering or on analyzing and documenting an application that already exists, when the existing application in itself supposedly manifests the needed requirements.

    Unfortunately, in all but the smallest projects, the strategy of using an existing application as the requirements baseline comes with many pitfalls and often results in few (if any) documented requirements, improper functionality, and incomplete testing.

    Although some functional aspects of an application are self-explanatory, many domain-related features are difficult to reverse-engineer, because it is easy to overlook business logic that may depend on the supplied data. As it is usually not feasible to investigate the existing application with every possible data input, it is likely that some intricacy of the functionality will be missed. In some cases, the reasons for certain inputs producing certain outputs may be puzzling, and will result in software developers providing a "best guess" as to why the application behaves the way it does. To make matters worse, once the actual business logic is determined, it is typically not documented; instead, it is coded directly into the new application, causing the guessing cycle to perpetuate.

    Aside from business-logic issues, it is also possible to misinterpret the meaning of user-interface fields, or miss whole sections of user interface completely.

    Many times, the existing baseline application is still live and under development, probably using a different architecture along with an older technology (for example, desktop vs. Web versions); or it is in production and under continuous maintenance, which often includes defect fixing and feature additions for each new production release. This presents a "moving-target" problem: Updates and new features are being applied to the application that is to serve as the requirements baseline for the new product, even as it is being reverse-engineered by the developers and testers for the new application. The resulting new application may become a mixture of the different states of the existing application as it has moved through its own development life cycle.

    Finally, performing analysis, design, development, and test activities in a "moving-target" environment makes it difficult to properly estimate time, budgets, and staffing required for the entire software development life cycle. The team responsible for the new application cannot effectively predict the effort involved, as no requirements are available to clarify what to build or test. Most estimates must be based on a casual understanding of the application's functionality that may be grossly incorrect, or may need to suddenly change if the existing application is upgraded. Estimating tasks is difficult enough when based on an excellent statement of requirements, but it is almost impossible when so-called "requirements" are embodied in a legacy or moving-target application.

    On the surface, it may appear that one of the benefits of building an application based on an existing one is that testers can compare the "old" application's output over time to that produced by the newly implemented application, if the outputs are supposed to be the same. However, this can be unsafe: What if the "old" application's output has been wrong for some scenarios for a while, but no one has noticed? If the new application is behaving correctly, but the old application's output is wrong, the tester would document an invalid defect, and the resulting fix would incorporate the error present in the existing application.

    If testers decide they cannot rely on the "old" application for output comparison, problems remain: when they execute their test procedures and the output differs between the two applications, the testers are left wondering which output is correct. If the requirements are not documented, how can a tester know for certain which output is correct? The analysis that should have taken place during the requirements phase to determine the expected output is now in the hands of the tester.
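
    When output comparison is used despite these caveats, it helps to make disagreements explicit rather than silently trusting either system. A hypothetical sketch (the two runner functions are placeholders to be wired to the real applications): each mismatch is recorded for adjudication against a documented requirement, not assumed to be a defect in the new application.

        def legacy_output(case):
            """Placeholder: invoke the existing application on a test case."""
            raise NotImplementedError

        def new_output(case):
            """Placeholder: invoke the new application on the same case."""
            raise NotImplementedError

        def compare(cases):
            mismatches = []
            for case in cases:
                old, new = legacy_output(case), new_output(case)
                if old != new:
                    # Do not assume the legacy output is correct; record both
                    # results and resolve against a documented requirement.
                    mismatches.append((case, old, new))
            return mismatches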

    Although basing a new software development project on an existing application can be difficult, there are ways to handle the situation. The first step is to manage expectations. Team members should be aware of the issues involved in basing new development on an existing application. The following list outlines several points to consider.

    • Use a fixed application version. All stakeholders must understand that the new application is to be based on one specific version of the existing software, and must agree to this condition. The team must select a version of the existing application on which the new development is to be based, and use only that version for the initial development.

      Working from a fixed application version makes tracking defects more straightforward, since the selected version of the existing application will determine whether there is a defect in the new application, regardless of upgrades or corrections to the existing application's code base. It will still be necessary to verify that the existing application is indeed correct, using domain expertise, as it is important to recognize if the new application is correct while the legacy application is defective.

    • Document the existing application. The next step is to have a domain or application expert document the existing application, writing at least a paragraph on each feature, supplying various testing scenarios and their expected output. Preferably, a full analysis would be done on the existing application, but in practice this can add considerable time and personnel to the effort, which may not be feasible and is rarely funded. A more realistic approach is to document the features in paragraph form, and create detailed requirements only for complex interactions that require detailed documentation.

      It is usually not enough to document only the user interface(s) of the current application. If the interface functionality doesn't show the intricacies of the underlying functional behavior inside the application and how such intricacies interact with the interface, this documentation will be insufficient.

    • Document updates to the existing application. Updates (that is, additional or changed requirements) for the existing baseline application from this point forward should be documented for reference later, when the new application is ready to be upgraded. This will allow stable analysis of the existing functionality, and the creation of appropriate design and testing documents. If applicable, requirements, test procedures, and other test artifacts can be used for both products.

      If updates are not documented, development of the new product will become "reactive": Inconsistencies between the legacy and new products will surface piecemeal; some will be corrected while others will not; and some will be known in advance while others will be discovered during testing or, worse, during production.

    • Implement an effective development process going forward. Even though the legacy system may have been developed without requirements, design or test documentation, or any system-development processes, whenever a new feature is developed for either the previous or the new application, developers should make sure a system-development process has been defined, is communicated, is followed, and is adjusted as required, to avoid perpetuating bad software engineering practices.

    After following these steps, the feature set of the application under development will have been outlined and quantified, allowing for better organization, planning, tracking, and testing of each feature.

  • Effective Software Testing

    2007-06-14 23:11:15

    Effective Software Testing explores fifty critically important best practices, pitfalls, and solutions. Gleaned from the author's extensive practical experience, these concrete items will enable quality assurance professionals and test managers to immediately enhance their understanding and skills, avoid costly mistakes, and implement a state-of-the-art testing program.

    This book places special emphasis on the integration of testing into all phases of the software development life cycle, from requirements definition to design and final coding. The fifty lessons provided here focus on the key aspects of software testing: test planning, design, documentation, execution, managing the testing team, unit testing, automated testing, nonfunctional testing, and more.

    You will learn to:

    • Base testing efforts on a prioritized feature schedule

    • Estimate test preparation and execution

    • Define the testing team roles and responsibilities

    • Design test procedures as soon as requirements are available

    • Derive effective test cases from requirements

    • Avoid constraints and detailed data elements in test procedures

    • Make unit-test execution part of the build process

    • Use logging to increase system testability

    • Test automated test tools on an application prototype

    • Automate regression tests whenever possible

    • Avoid sole reliance on capture/playback

    • Conduct performance testing with production-sized databases

    • Tailor usability tests to the intended audience

    • Isolate the test environment from the development environment

    • Implement a defect tracking life cycle

    Throughout the book, numerous real-world case studies and concrete examples illustrate the successful application of these important principles and techniques.

    Effective Software Testing provides ready access to the expertise and advice of one of the world's foremost software quality and testing authorities.
