Testing Process Summary

Published: 2010-09-03 14:36


 Author: Joanne    Source: 51Testing Software Testing Network

  It has been a bit over a year since I moved into testing. I have done plenty of work, but I have lost the old habit of reflecting hard on my mistakes, and it has been a long time since I produced any summary document. Last weekend my mentor pushed me onto the stage, grandly calling it "guiding the work". I can barely sort things out for myself, so how am I supposed to guide anyone?! Better to straighten out my own thinking first. My mentor is forever the Tang Monk who nags me into making progress...

  Let me start with the testing process. From university through work, the books I read were always about the software development process; testing, though an indispensable part of it, always appears in that last, most fatigue-inducing stage. I rarely see anyone summarize the testing process itself, so here is my attempt, aimed at implementation-type projects.

  1. Defining the test scope. Basis: the Requirements Specification; references: the Solution Proposal and the System Design Specification.

  2. Test case design. Once the test scope is settled, design the test execution around both functional and non-functional requirements. The deliverable is the Test Cases document. Given the current state of our projects, it is basically impossible to write the very detailed test cases described in the books: requirements change substantially late in the project, and the time left for testing is usually very tight, so there is no room to enumerate and describe every case. In practice, today's test cases are really a rough testing outline, and many of the trickier cases are improvised by the testers during execution.
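The "rough testing outline" style above can be kept lightweight and still auditable. Below is a minimal Python sketch (all names and record fields are my own illustration, not from the post) of recording one-line test ideas per requirement, so coverage gaps stay visible even without fully scripted cases:

```python
from dataclasses import dataclass, field

# Hypothetical, minimal test-case record: one line of intent per case,
# not a detailed step-by-step script.
@dataclass
class TestIdea:
    case_id: str
    requirement: str              # which requirement this idea covers
    idea: str                     # the rough test idea
    priority: str = "medium"
    tags: list = field(default_factory=list)

def coverage_by_requirement(cases):
    """Group test ideas by requirement so uncovered requirements stand out."""
    cover = {}
    for c in cases:
        cover.setdefault(c.requirement, []).append(c.case_id)
    return cover

cases = [
    TestIdea("TC-001", "REQ-LOGIN", "wrong password locks account after 5 tries"),
    TestIdea("TC-002", "REQ-LOGIN", "session expires after idle timeout"),
    TestIdea("TC-003", "REQ-REPORT", "export handles an empty result set"),
]
print(coverage_by_requirement(cases))
```

Comparing the keys of this mapping against the full requirement list from step 1 gives a quick scope check before execution starts.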

  3. Test execution. The guiding document is the Test Cases; walking through the cases as described generally prevents major logical gaps. The deliverable is the Test Report. Execution matters most, since it is the key to project quality, and typically splits into functional testing, UI/interaction testing, and non-functional testing (performance testing, compatibility testing). Performance matters for systems such as portals and BI: either the user base is very large, or high-end users demand strong performance, so the performance bar is set high. Small business systems with few users rarely worry about it. Compatibility testing usually covers IE versions and operating systems; for a small desktop client it can be skipped.
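For the performance side mentioned above, even a crude timing gate catches regressions early. A minimal sketch (the threshold and the toy function under test are assumptions for illustration):

```python
import time

def timed_call(fn, *args, threshold_s=1.0):
    """Run fn and report (result, elapsed seconds, passed),
    where passed means the call stayed under the threshold."""
    start = time.perf_counter()
    result = fn(*args)
    elapsed = time.perf_counter() - start
    return result, elapsed, elapsed <= threshold_s

# Toy "system under test" standing in for a real transaction.
def lookup(n):
    return sum(range(n))

result, elapsed, ok = timed_call(lookup, 10_000, threshold_s=1.0)
print(ok)
```

A real performance test would of course drive concurrent virtual users with a dedicated tool; this only illustrates the pass/fail-against-a-threshold idea.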

  4. Trial-run testing. After go-live the project still needs a trial-run period, during which requirements change considerably and the testing workload is both tedious and heavy. Code gets changed on short notice to follow the requirement changes, while the project team has hit a plateau: neither mindset nor stamina is at its best, so the quality of those fixes tends to be low, and under pressure from users a fix can easily break something else. At this stage, one safeguard is the project manager's careful planning; the other, and the biggest, is the testers acting as gatekeepers. For every change, they must verify not only that the modified function is correct, but also that the already-mature functions still work as they did before.
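The gatekeeping rule above, check the changed feature and re-check the mature ones, can be sketched as running two suites and gating the release on both (the feature names and checks below are invented for illustration):

```python
def run_suite(cases):
    """cases: mapping of test name -> zero-arg callable returning bool."""
    return {name: bool(check()) for name, check in cases.items()}

# Hypothetical suites: tests for the just-changed feature, plus the
# regression suite guarding features that were already stable.
changed_feature = {"new_discount_applies": lambda: 90 == 100 * 0.9}
regression = {
    "old_total_still_correct": lambda: 100 + 20 == 120,
    "old_rounding_unchanged": lambda: round(2.675, 2) == 2.67,  # float quirk: 2.675 rounds down
}

results = run_suite({**changed_feature, **regression})
release_ok = all(results.values())
print(release_ok)
```

The point is that `release_ok` requires the regression suite to pass too; a fix that only proves the new behavior is not enough.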

  5. Project wrap-up. After the formal hand-over, the project manager should write a project summary, and the testing work belongs in it. The main task is to classify the defects recorded on the defect-tracking platform: which were low-level defects, and which were requirement changes that were not controlled at requirements time. Most important of all is to identify which defects were found by users during the trial run but missed during test execution.
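That classification step is mechanical once the defect log is exported. A small sketch (the defect records and category names are illustrative, not from any real tracker):

```python
from collections import Counter

# Hypothetical post-project defect log: (defect id, category, phase found).
defects = [
    ("D-01", "low-level", "testing"),
    ("D-02", "requirement-change", "trial-run"),
    ("D-03", "low-level", "testing"),
    ("D-04", "test-miss", "trial-run"),   # found by users, missed in execution
]

by_category = Counter(cat for _, cat, _ in defects)
# The review question above: which defects surfaced only in the trial run?
missed_in_testing = [d for d, _, phase in defects if phase == "trial-run"]
print(by_category["low-level"], missed_in_testing)
```

The `missed_in_testing` list is exactly the set worth a root-cause discussion in the wrap-up meeting.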


Comments

  • ios_zhangchao
    2010-9-05 19:31:18

    Each process should be bound to metrics for continuous improvement; this is not special to testing. Ideally the metrics are set up early, so every member can watch them and keep improving toward the final target. I only got the chance to do this near the end of a release cycle, once hands-on work had wound down. I would like to know the actual results measured against the rules, instead of relying only on bugs and gut feeling. That will help me see how to improve test coverage next time, how to reduce waste during development, how to prioritize, and how to communicate with others with the big picture and context in hand. My aim is an objective measure of the effectiveness and efficiency of testing: a way to understand what happened before and how to improve next time. Most importantly, it sharpens my own thinking.

    1. Define Metrics

    ·  Deciding the audience (executive team, test team)

    ·  Identifying the metrics which capture the status of each type of testing

    ·  Ensuring that all different categories in metrics are considered based on the project needs

    ·  Setting up easy mechanisms for data collection and data capture

    ·  Identifying the goals or problem areas where improvement is required

    ·  Refining the goals, using the “Goal-Question-Metric” technique
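The "Goal-Question-Metric" refinement named above can be captured as a tiny data structure: one goal, the questions that make it concrete, and the metrics that answer each question. A minimal sketch (the goal, questions, and metric names below are my own illustration):

```python
# A minimal Goal-Question-Metric (GQM) tree; contents are illustrative.
gqm = {
    "goal": "Improve test effectiveness before release",
    "questions": {
        "How many defects escape testing?": ["defects found in trial run"],
        "Is execution on schedule?": ["cases attempted vs planned per day"],
    },
}

def metrics_for(goal_tree):
    """Flatten the GQM tree into the list of metrics to collect."""
    out = []
    for metric_list in goal_tree["questions"].values():
        out.extend(metric_list)
    return out

print(metrics_for(gqm))
```

Working top-down like this keeps the metric list tied to the goal, which directly addresses the "wrong metrics" pitfall discussed later in this comment.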

    2. Identify the data required for each metric; if the data is not available, identify or set up a process to capture it

    ·  Provide the definition for each metric

    ·  Define the benchmark or goal for each metric

    ·  Verify whether the benchmark or goal is realistic, by comparing it with industry standards or with data from similar projects within the organization

    ·  Based on the type of testing, metrics are mainly classified into:

    §  Manual testing

    §  Automation testing

    §  Performance testing

    ·  Each of these is further categorized based on the focus area:

    §  Productivity

    §  Quality

    §  People

    §  Environment/Infrastructure

    §  Stability

    §  Progress

    §  Tools

    §  Effectiveness

    3. Communication with stakeholders

    To ensure better end results and to increase buy-in, the metrics identification and planning process must involve all stakeholders.

    ·   Communicate the need for metrics to all the affected teams

    ·   Educate the testing team regarding the data points that need to be captured for generating the metrics

    ·  Obtain feedback from stakeholders

    ·   Communicate with stakeholders how often the data needs to be collected, how often the reports need to be generated, etc.

    4. Capturing and verifying data

    ·  Ensure that the data capturing mechanism is set up and streamlined

    ·  Communicate and give proper guidelines to the team members on the data that is required

    ·  Set up verification points to ensure that all data is captured

    ·  Identify the sources of inaccurate data for each basic metric and take corrective steps to eliminate the inaccuracies

    ·  For each metric, define a source of data and a procedure to capture it

    ·  Minimize the effort spent on data capture by automating it wherever possible (e.g. via a tool's API)

    ·  Capture the data in a centralized location easily accessible to all members

    ·  Collect the data with minimal manual intervention
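The capture step above, centralized storage with minimal manual work, can be as simple as appending base-metric rows to one shared CSV. A sketch (the field names are assumptions; an in-memory buffer stands in for the shared file location):

```python
import csv
import io

FIELDS = ["date", "metric", "value"]

def append_rows(buffer, rows):
    """Append base-metric rows to the shared store."""
    writer = csv.DictWriter(buffer, fieldnames=FIELDS)
    for row in rows:
        writer.writerow(row)

buf = io.StringIO()                      # stand-in for a shared CSV file
csv.DictWriter(buf, fieldnames=FIELDS).writeheader()
append_rows(buf, [
    {"date": "2010-09-03", "metric": "cases_attempted", "value": 42},
    {"date": "2010-09-03", "metric": "defects_open", "value": 7},
])
print(buf.getvalue().splitlines()[1])
```

With a fixed schema like this, each team's tooling can append rows automatically and the later analysis step reads one place.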

    5. Analyzing and processing data

    ·  Once the data is captured, the data must be analyzed for completeness

    ·  Verify that the data captured is accurate and up-to-date

    ·  Define the process/template in which derived data must be captured

    ·  Calculate all the metrics (derived metrics) based on the base metrics

    ·  Verify whether the metrics are conveying the correct information

    ·  Automate the process of calculating derived metric from basic metric to reduce the effort
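Deriving metrics from base metrics, as this step describes, is usually simple arithmetic. A sketch using two common industry definitions (the formulas and numbers are mine, not from the post):

```python
# Base metrics as captured; values are illustrative.
base = {"cases_executed": 200, "cases_passed": 180,
        "defects_found": 25, "kloc": 12.5}

def pass_rate(b):
    """Fraction of executed cases that passed."""
    return b["cases_passed"] / b["cases_executed"]

def defect_density(b):
    """Defects per thousand lines of code (KLOC)."""
    return b["defects_found"] / b["kloc"]

print(pass_rate(base), defect_density(base))
```

Automating these formulas (the bullet above) removes spreadsheet copy-paste errors and makes the derived numbers reproducible from the base data.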

    6. Reporting

    ·  Define an effective approach for reporting, like a metric dashboard

    ·  It is advisable to obtain feedback from stakeholders and their representatives on the metrics to be presented by providing samples

    ·  Metrics should be presented based on the audience and in a consistent format

    ·  Reports should contain the summary of observations

    ·  Reporting should be in a clearly understandable format, preferably graphs and charts with guidelines to understand the report

    ·  Reports should clearly point out all the issues or highlights

    ·  Users should be able to access the underlying data on request

    ·  Reports should be presented in such a way that metrics are compared against benchmarks and trends shown

    ·  Reports should be easily customizable to user requirements

    ·  Ensure that the effort spent on reporting is minimal; automate whatever you can (e.g. with macros)
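The dashboard-style reporting described above, each metric shown against its benchmark with issues flagged, can be reduced to a one-line report format. A sketch (metric names, values, and benchmarks are illustrative):

```python
def report_line(name, value, benchmark, higher_is_better=True):
    """One dashboard row: flag the metric when it misses its benchmark."""
    ok = value >= benchmark if higher_is_better else value <= benchmark
    flag = "OK" if ok else "WARN"
    return f"[{flag}] {name}: {value} (benchmark {benchmark})"

lines = [
    report_line("pass rate", 0.90, 0.95),
    report_line("open criticals", 1, 0, higher_is_better=False),
]
print("\n".join(lines))
```

The `higher_is_better` switch matters: comparing every metric the same way is a classic way to mis-flag defect counts.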

    7. Continuous improvement

    ·  Continuous improvement is the key to the success of any process

    ·  After successful implementation of metrics and after achieving the benchmark, revisit the goals and benchmarks and set them above the industry standards

    ·  Regularly collect feedback from the stakeholders

    ·  Metrics report must be accessible to everyone

    ·  Evaluate new metrics to capture

    ·  Refine the report template

    ·  Ensure that the effort spent on reporting stays minimal

    Challenges in implementation of a metrics program

    Up to 80 percent of all software metrics initiatives fail within two years. To avoid common pitfalls in test metrics, the following aspects need to be considered:

    · Management commitment: to be successful, every process improvement initiative needs strong management commitment in terms of owning and driving the initiative on an ongoing basis.

    · Measuring too much, too soon: one can identify many metrics that could be captured in a project, but the key is to identify the most important ones, the ones that add value.

    · Measuring too little, too late: the other mistake teams make is to collect too few metrics, too late in the process. This does not provide the right information for proper decision making.

    · Wrong metrics: if the metrics do not really relate to the goals, it makes no sense to collect them.

    ·  Vague metrics definitions: Ambiguous metric definitions are dangerous, as different people may interpret them in different ways, thus resulting in inaccurate results.

    ·  Using metrics data to evaluate individuals: One of the primary reasons a metrics program is not appreciated and supported at all levels of the team is the fear that the data may be used against people. So never use metrics data to evaluate a person.

    ·  Using metrics to motivate rather than to understand: Many managers make the mistake of using metrics to motivate teams or projects. This may send the signal that the metrics are being used to evaluate individuals and teams. The focus must stay on understanding the message the metrics convey.

    ·  Collecting data that is not used: There may be instances where data is collected but not really used for analysis; avoid such situations.

    ·  Lack of communication and training:

    Explain why: there is a need to explain to a skeptical team why you need to measure the items you choose.

    Share the results

    Define data items and procedures

    Key metrics for software testing

    Test progress tracking metric: Track the cumulative test cases or test points – planned, attempted and successful, over the test execution period.
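The progress-tracking metric above is just three cumulative series over the execution period. A sketch with invented daily counts:

```python
from itertools import accumulate

# Daily counts over a four-day execution period (illustrative numbers).
planned   = [10, 10, 10, 10]
attempted = [8, 11, 9, 10]
passed    = [7, 10, 8, 10]

# Cumulative totals per day: planned vs attempted vs successful.
cum = {name: list(accumulate(xs))
       for name, xs in [("planned", planned),
                        ("attempted", attempted),
                        ("passed", passed)]}
print(cum["planned"][-1], cum["attempted"][-1], cum["passed"][-1])
```

Plotting the three cumulative curves together shows at a glance whether execution is keeping pace with the plan and whether pass rates are drifting.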
    Defect Metrics

    1) Defects by action taken

    2) Defects by injection phase

    3) Defects by detection phase

    4) Defects by priority

    Release criteria

    If a defect falls in the shaded region, the software should not be released, or a good reason is needed for the defect to be waived.
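That severity-by-priority release gate can be encoded as a set of blocking cells. Which cells were shaded in the original matrix is not recoverable from the post, so the blocking set below is purely an assumption for illustration:

```python
# ASSUMED blocking cells of the severity x priority matrix; the original
# post's shading did not survive, so this set is illustrative only.
BLOCKING = {("Critical", "High"), ("Critical", "Medium"), ("Serious", "High")}

def release_blocked(open_defects):
    """open_defects: iterable of (severity, priority) for unresolved defects.
    Returns True if any open defect lands in a blocking cell."""
    return any((sev, pri) in BLOCKING for sev, pri in open_defects)

print(release_blocked([("Moderate", "Low"), ("Critical", "High")]))
```

Keeping the gate as data rather than prose makes it checkable automatically at release time, and the waiver discussion then has a concrete list to argue about.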

    [The original comment showed a Priority × Severity release-criteria matrix: rows High / Medium / Low priority, columns Critical / Serious / Moderate / Minor severity. The shading that marked the release-blocking cells did not survive extraction; only the axes are recoverable.]
    Defect by cause

    This metric will help the development team and the test team to focus on the areas for improvement. For example, it could be distributed into below areas,

    enhancement, impact not analyzed, error in existing program, not applicable, insufficient information, insufficient time, lack of experience, lack of system understanding, improper setup, standards not followed, insufficient domain knowledge, lack of coordination, and ambiguous specification.

    Defect by type

    This metric can be a good pointer to areas for improvement. It could be distributed into below areas,

    standards, test setup, detailed design, comments, consistency, not a defect, user interface, documentation, incomplete test case, incomplete requirements, functional architecture, performance, reusability, invalid test case, naming conventions, logic, incorrect requirements, planning, and others.



    References:

    1. Software Testing Metrics (Infosys)

    2. Metrics Used in Testing


    This comment is reposted from a CSDN blog; please credit the source when reposting: http://blog.csdn.net/ctina/archive/2010/08/27/5843996.aspx
