发布新日志

  • Perficient...The purpose toward which an endeavor is directed

    2007-11-18 01:44:03

     

    Responsibilities:

     

    ·         Responsible for test strategy and planning, test engineering and test reporting on projects.

    ·         Execute functional/non-functional requirements review to identify and validate testability.

    ·         Effectively identify testing requirements for projects.  This includes the entry/exit criteria, test environment, test objectives (types of tests, coverage levels, pass/fail rates), issue tracking and management mechanisms, metrics gathering and reporting techniques, personnel and timelines

    ·         Create, maintain and manage/coordinate testing resources to execute test plans

    ·         Create, maintain, and execute test cases and test scripts

    ·         Establish test environments and test data

    ·         Demonstrated experience in organizing, prioritizing, and coordinating their own work and that of other testers.

    ·         Coach and train junior testers

    ·         Evaluate automated testing tools.

    ·         Help define and implement processes and best practices with regard to automated testing.

    ·         Create generic and reusable testing solutions that are maintainable and adaptable to fast changing environment.

    ·         Effectively and efficiently communicate test results with the development team

    ·         Perform defect analysis and track defect status.

    ·         Work closely with management, as well as design and development resources to efficiently perform project tasks 

    ·         Promote and enforce the BoldTech project delivery methodology as well as its evolution

     

    Qualifications:

    ·         B.S. in Computer Science (or related field)

    ·         At least 3 years in testing and 1 year leading a team of at least 2 testers.

    ·         Ability to work in a fast moving / changing environment

    ·         Background in commercial software and/or Internet application projects

    ·         Proven effective use of wide range of testing methods, tools and standards

    ·         Knowledge of multi-tier application architectures, software design, development methodologies and related tools

    ·         Significant background in test environment creation and maintenance

    ·         Familiar with one or more of the following programming languages: C++, VB/VBA, Java, etc.

    ·         Expertise or certification in one or more of the following tools: Visual Test, SilkTest Suite, WinRunner, LoadRunner, QuickTest Pro.

    ·         Led the automation of tests for multiple software systems

    ·         Solid debugging and problem solving capability

    ·         Use of UNIX and/or Windows NT

    ·         Experience with relational databases- Oracle, SQLServer, MySQL etc. (a plus)

    ·         Proficiency with Microsoft Excel, Word, and Project

    ·         Strong leadership, mentoring and interpersonal skills

    ·         Highly organized and conscientious; extremely motivated, cooperative and flexible

    ·         Experience delivering in a CMM Level 3 or higher organization preferred

    ·         Good communication skills in English (verbal and written)

  • Loadrunner函数中文解释

    2007-11-09 20:58:29

    web_url

    语法:
    Int web_url(const char *name, const char *url, <List of Attributes>, [EXTRARES, <List of Resource Attributes>,] LAST);

    返回值
    成功时返回LR_PASS (0),失败时返回 LR_FAIL (1)。

    参数:
    Name:VuGen中树形视图中显示的名称,在自动事务处理中也可以用做事务的名称。

    url:页面url地址。

    List of Attributes

    EXTRARES:分隔符,标记下一个参数是资源属性的列表了。

    List of Resource Attributes

    LAST:属性列表结束的标记符。


    说明

    Web_url根据函数中的URL属性加载对应的URL,不需要上下文。

    只有VuGen处于URL-based或者HTML-based(此时A script containing explicit URLs only选项被选中时)的录制模式时,web_url才会被录制到。

    可以使用web_url 模拟从FTP服务器上下载文件。web_url 函数会使FTP服务器执行文件被真实下载时的操作。除非手工指定了"FtpAscii=1",下载会以二进制模式完成。

    在录制选项中,Tools—Recording Options下,Recording选项中,有一个Advanced HTML选项,可以设置是否录制非HTML资源,只有选择了“Record within the current script step”时,List of Resource Attributes才会被录制到。非HTML资源的例子是gif和jpg图象文件。

    通过修改HTTP头可以传递给服务器一些附加的请求信息。使用HTTP头允许请求中包含其他的内容类型(Content-Type),像压缩文件一样。还可以只请求特定状态下的web页面。

    所有的Web Vusers ,HTTP模式下的WAP Vusers或者回放模式下的Wireless Session Protocol(WSP),都支持web_url函数。
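
    例子

    下面是一个假设的示例片段(其中的URL、图片资源路径均为假设,仅用于演示属性及EXTRARES的书写格式,并非某个真实的录制结果):

    web_url("my_home",
        "URL=http://my_home/",
        "Resource=0",
        "RecContentType=text/html",
        "Mode=HTML",
        EXTRARES,
        // EXTRARES 之后是非HTML资源(如图片)的下载列表
        "Url=../gifs/logo.gif", "Referer=http://my_home/", ENDITEM,
        LAST);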


    web_image

    语法:
    Int web_image (const char *StepName, <List of Attributes>, [EXTRARES, <List of Resource Attributes>,] LAST );

    返回值
    成功时返回LR_PASS (0),失败时返回 LR_FAIL (1)。

    参数:
    StepName:VuGen中树形视图中显示的名称,在自动事务处理中也可以用做事务的名称。

    List of Attributes(服务器端和客户端映射的图片):SRC属性是一定会被录制到的,其他的ALT、Frame、TargetFrame、Ordinal则是有的话会被录制到。

    1、ALT:描述图象的元素。用鼠标指向图象时,所浮出来的文字提示。

    2、SRC:描述图象的元素,可以是图象的文件名. 如: button.gif。也可以使用SRC/SFX来指定图象路径的后缀。所有拥有相同此后缀的字符串都会被匹配到。

    3、Frame:录制操作时所在的Frame的名称。

    4、TargetFrame:见List of Attributes的同名参数。

    5、Ordinal:参见Web_link的同名参数。

    List of Attributes(客户端映射的图片):

    1、AreaAlt:鼠标单击区域的ALT属性。

    2、AreaOrdinal:鼠标单击区域的顺序号。

    3、MapName:图象的映射名。

    List of Attributes(服务器端映射的图片):尽管点击坐标不属于属性,但还是以属性的格式来使用。
     
    1、Xcoord:点击图象时的X坐标。

    2、Ycoord:点击图象时的Y坐标。

    EXTRARES:分隔符,标记下一个参数是资源属性的列表了。

    List of Resource Attributes:参见List of Resource Attributes一节。

    LAST:属性列表结束的标记符。

    说明

    web_image模拟鼠标在指定图片上的单击动作。此函数必须在有前置操作的上下文中使用。

    在Tools—Recording Options中,如果录制级别设为基于HTML的录制方式,web_image才会被录制到。
     
    web_image支持客户端(client-side)和服务器端server-side的图片映射。

    在录制选项中,Tools—Recording Options下,Recording选项中,有一个Advanced HTML选项,可以设置是否录制非HTML资源,只有选择了“Record within the current script step”时,List of Resource Attributes才会被录制到。非HTML资源的例子是gif和jpg图象文件。

    通过修改HTTP头可以向服务器传递一些附加的请求信息。使用HTTP头允许请求中包含其他的内容类型(Content-Type),例如压缩文件。还可以只请求特定状态下的web页面。

    web_image支持Web虚拟用户,不支持WAP虚拟用户。

    例子

    下面的例子模拟用户单击Home图标以回到主页(黑体部分):

    web_url("my_home", "URL=http://my_home/", LAST);

    web_link("Employees", "Text=Employees", LAST);

    web_image("Home.gif", "SRC=../gifs/Buttons/Home.gif", LAST);

    web_link("Library", "Text=Library", LAST);

    web_image("Home.gif", "SRC=../../gifs/buttons/Home.gif", LAST);

    下面的例子模拟用户在客户端映射的图片上单击:

    web_image("dpt_house.gif",


           "Src=../gifs/dpt_house.gif",


           "MapName=dpt_house",


           "AreaOrdinal=4",


           LAST);


    下面的例子模拟用户在服务端映射的图片上单击:

    web_image("The Web Developer's Virtual Library",


           "Alt=The Web Developer's Virtual Library",


           "Ordinal=1",


           "XCoord=91",


           "YCoord=17",


           LAST);

    下面是一个使用文件名后缀的例子:它指定了dpt_house.gif作为后缀,所以象../gifs/dpt_house.gif、/gifs/dpt_house.gif、gifs/dpt_house.gif、/dpt_house.gif等都会匹配到。

    web_image("dpt_house.gif",

            "Src/sfx=dpt_house.gif", LAST);


    web_link

    语法:
    Int web_link (const char *StepName, <List of Attributes>, [EXTRARES, <List of Resource Attributes>,] LAST );

    返回值
    成功时返回LR_PASS (0),失败时返回 LR_FAIL (1)。

    参数:
    StepName:VuGen中树形视图中显示的名称,在自动事务设置中也被用做事务名称。

    List of Attributes:支持下列的属性:

    1.Text:超链接中的文字,必须精确匹配。

    2.Frame:录制操作时所在的Frame的名称。

    3.TargetFrame、ResourceByteLimit:见List of Attributes一节。

    4.Ordinal:如果用给出的属性(Attributes)筛选出的元素不唯一,那么VuGen使用此属性来指定其中的一个。例如:“SRC=abc.gif”,“Ordinal=3”标记的是SRC的值是“abc.gif”的第3张图片。

    EXTRARES:表明下面的参数将会是list of resource attributes了。

    LAST:结尾标示符。

    说明

    模拟鼠标在由若干个属性集合描述的链接上进行单击。此函数必须在前置动作的上下文中才可以执行。

    web_link 仅仅在基于HTML的录制方式中才会被VuGen捕捉到。

    非HTML生成的资源的例子有.gif 和.jpg图像。对于List of Resource Attributes参数来说,仅仅当Recording Options--Recording--HTML-based script--Record within the current script step选项被选中时,它们才会被插入到代码中。

    可以通过改变HTTP头信息给服务器传递一些附加信息。使用HTTP头信息可以允许响应体中包含其他的内容类型(Content-Type),例如压缩文件,或者只有满足了特定的状态才去请求web页。

    此函数只支持Web虚拟用户,不支持WAP虚拟用户。
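
    例子

    下面是一个假设的示例(其中的链接文字、Frame名称和Ordinal取值均为假设,仅用于演示属性格式):

    web_link("Employees",
        "Text=Employees",     // 精确匹配的链接文字
        "Frame=main",         // 链接所在的Frame
        "Ordinal=2",          // 同名链接中的第2个
        LAST);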


    web_submit_form

    语法:
    Int web_submit_form (const char *StepName, <List of Attributes>, <List of Hidden Fields>, ITEMDATA, <List of Data Fields>, [ EXTRARES, <List of Resource Attributes>,] LAST );


    返回值
    成功时返回LR_PASS (0),失败时返回 LR_FAIL (1)。


    参数:
    StepName:Form的名字。VuGen中树形视图中显示的名称,在自动事务处理中也可以用做事务的名称。
    List of Attributes:支持以下属性:

    1.Action:Form中的ACTION属性,指定了完成Form中的操作用到的URL。也可以使用“Action/sfx” 表示使用此后缀的所有Action。

    2.Frame:录制操作时所在的Frame的名称。

    3.TargetFrame、ResourceByteLimit:见List of Attributes的同名参数。

    4.Ordinal:参见Web_link的同名参数。


    VuGen通过记录数据域唯一的标识每个Form。如果这样不足以识别Form,VuGen会记录Action 属性。如果还不足以识别,则会记录Ordinal 属性,这种情况下不会记录Action属性。

    List of Hidden Fields:附加属性。通过此属性可以使用一串隐含域来标识Form。使用下面的格式:

    STARTHIDDENS,

    "name=n1", "value=v1", ENDITEM,

    "name=n2", "value=v2", ENDITEM,

    ENDHIDDENS,

    List of Data Fields

    Data项用来标识form。Form是通过属性和数据来共同识别的。

    使用下面的格式来表示数据域列表

    "name=n1", "value=v1", ENDITEM,

    "name=n2", "value=v2", ENDITEM,

    ITEMDATA:Form中数据和属性的分隔符。

    EXTRARES:一个分隔符,标记下一个参数是资源属性的列表了。

    List of Resource Attributes:参见List of Resource Attributes一节。

    LAST:属性列表结束的标记符。

    说明

    web_submit_form 函数用来提交表单。此函数可能必须在前一个操作的上下文中执行。在Tools—Recording Options中,只有录制级别设为基于HTML的录制方式,web_submit_form才会被录制到。

    在录制选项中,Tools—Recording Options下,Recording选项中,有一个Advanced HTML选项,可以设置是否录制非HTML资源,只有选择了“Record within the current script step”时,List of Resource Attributes才会被录制到。非HTML资源的例子是gif和jpg图象文件。

    通常情况下,如果录制了web_submit_form 函数,VuGen会把“name”和“value”一起录制到ITEMDATA属性中。如果不想在脚本中以明文显示“value”,可以对它进行加密。把“Value”改为“EncryptedValue”,然后把录制到的值改为加密后的值。

    例如:可以把 "Name=grpType", "Value=radRoundtrip", ENDITEM

    改为:"Name=grpType", "EncryptedValue=409e41ebf102f3036b0549c799be3609", ENDITEM

    如果你完整的安装了LoadRunner,那么打开开始菜单--Mercury LoadRunner—Tools--Password Encoder,这个小工具是用来加密字符串的。把需要加密的值粘贴到Password一栏,再点Generate按钮。加密后的字符串会出现在Encoded string框中。接着点Copy按钮,然后把它粘贴到脚本中,覆盖原来显示的“Value”。

    加密的另一种方法是使用lr_decrypt函数。方法:选择整个字符串,例如“Value=radRoundtrip”(注意不要选择引号),右击鼠标,选择Encrypt string选项,脚本会变为:

    "Name=grpType", lr_decrypt("40d176c46f3cf2f5fbfaa806bd1bcee65f0371858163"), ENDITEM,

    web_submit_form支持Web虚拟用户,不支持WAP虚拟用户。

    例子:

    下面的例子中,web_submit_form 函数的名字是“employee.exe”。此函数提交了一个请求,此请求包含雇员信息John Green。此函数没有使用属性(Attributes)是因为通过数据项已经能唯一的标识这个Form了。
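
    按照上述描述,下面给出一个示意片段(其中的字段名first_name、last_name均为假设,仅用于演示数据项的写法):

    web_submit_form("employee.exe",
        ITEMDATA,
        "Name=first_name", "Value=John",  ENDITEM,
        "Name=last_name",  "Value=Green", ENDITEM,
        LAST);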

  • JAVA测试工具简介

    2007-11-06 23:37:37

    Java测试工具介绍
    现在有很多软件都是基于Java的,如何测试这些Java程序就成了一个测试工程师的新课题。以下介绍一些测试工具,可以提高Java 程序的测试效率。

    1. 美国Segue公司的Silk系列产品
    Segue公司一直专注于软件质量优化领域。在Segue的产品套件中,拥有业内最强劲且最容易使用的、用于企业应用测试、调优和监测的自动化工具,能够帮助用户保障应用在其生命周期内的可靠性和性能。
    (1) SilkPerformer——企业级性能测试工具
    企业级自动化测试工具能够支持多种系统,如Java、.Net、Wireless、COM、CORBA、Oracle、Citrix、MetaFrame、客户机/服务器、以及各种ERP/CRM应用
    多项专利技术精确模拟各种复杂的企业环境
    可视化脚本记录功能及自定义工具简化了测试创建工作
    SilkPerformer的Java/.NET浏览器以及JUnit/NUnit测试输入功能简化了对并发访问情况下远程应用组件的早期负载测试工作
    方便易用,工作流向导会逐步引导用户完成整个测试流程
    (2) SilkTest International——业内唯一的Unicode功能测试工具
    SilkBean 充分利用 Java 语言的“编写一次,随处使用”的优点,让用户不必修改现有的脚本而能够在多种基于 Unix 的系统上运行
    能够识别多种开发平台,如Java、JavaScript、HTML、ActiveX、Visual Basic 和C/C++等
    一套脚本可供所有支持的语言使用
    内置的错误恢复系统具有自定义功能,可进行无人看守的自动测试
    赛格瑞(Segue)公司是全球范围内专注于软件质量优化解决方案的领导者。2005年,赛格瑞(Segue)公司在中国设立了专门的销售服务公司,因此,赛格瑞(Segue)公司的软件测试产品在中国有了更好的技术支持。
    参考网站:http://www.segue.com.cn/
    推荐指数:★★★★★

    2. MaxQ
    MaxQ是一个免费的功能测试工具。它包括一个HTTP代理工具,可以录制测试脚本,并提供回放测试过程的命令行工具。测试结果的统计图表类似于一些较昂贵的商用测试工具。MaxQ希望能够提供一些关键的功能,比如HTTP测试录制回放功能,并支持脚本。
    参考网站:http://maxq.tigris.org/
    推荐指数:★★★☆☆

    3. Httpunit
    HttpUnit是一个开源的测试工具,是基于JUnit的一个测试框架,主要关注于测试Web应用,解决使用JUnit框架无法对远程Web内容进行测试的弊端。
    HttpUnit提供的帮助类让测试者可以通过Java类和服务器进行交互,并且将服务器端的响应当作文本或者DOM对象进行处理。HttpUnit还提供了一个模拟Servlet容器,让测试者不需要发布Servlet,就可以对Servlet的内部代码进行测试。本文中作者将详细的介绍如何使用HttpUnit提供的类完成集成测试。
    参考网站:http://www.httpunit.org/
    推荐指数:★★★☆☆

    4. Junit
    JUnit是测试Java程序的通用测试框架,可以对Java代码进行白盒测试。通过JUnit可以用mock objects进行隔离测试;用Cactus进行容器内测试;用Ant和Maven进行自动构建;在Eclipse内进行测试;对Java应用程序、Filter、Servlet、EJB、JSP、数据库应用程序、Taglib等进行单元测试。
    它提供了一个Java代码的单元测试框架,以方便Java程序员进行持续的单元测试。JUnit 是Open Source 的,在XP(Extreme Programming)圈子里颇受欢迎。
    参考网站:http://www.junit.org/
    推荐指数:★★★★★

    5. Jtest
    Jtest是Parasoft公司推出的一款针对java语言的自动化白盒测试工具,它通过自动实现java的单元测试和代码标准校验,来提高代码的可靠性。Jtest先分析每个java类,然后自动生成junit测试用例并执行用例,从而实现代码的最大覆盖,并将代码运行时未处理的异常暴露出来;另外,它还可以检查以DbC(Design by Contract)规范开发的代码的正确性。用户还可以通过扩展测试用例的自动生成器来添加更多的junit用例。Jtest还能按照现有的超过350个编码标准来检查并自动纠正大多数常见的编码规则上的偏差,用户可自定义这些标准,通过简单的几个点击,就能预防类似于未处理异常、函数错误、内存泄漏、性能问题、安全隐患这样的代码问题。
    JTest最大的优势在于静态代码分析,至于自动生成测试代码,当然生成测试代码框架也是不错的,但要做好单元测试用户还要做大量的工作。
    出品人:Parasoft 公司
    主要功能:功能很丰富,包括白盒测试、黑盒测试、回归测试,以及代码风格的检查。
    参考网站:http://www.parasoft.com/jsp/aep/aep.jsp
    推荐指数:★★★★☆

    6. Hansel
    Hansel 是一个测试覆盖率的工具——与用于单元测试的 JUnit framework 相集成,很容易检查单元测试套件的覆盖情况。
    参考网站:http://hansel.sourceforge.net/
    推荐指数:★★☆☆☆

    7. Cactus
    Cactus是一个基于JUnit框架的简单测试框架,用来单元测试服务端Java代码。Cactus框架的主要目标是对使用了Servlet对象(如HttpServletRequest、HttpServletResponse、HttpSession等)的服务端Java代码进行单元测试。
    针对外部可测试组件运行时,需要把JUnit测试运行为发送HTTP请求给组件的客户端进程。为了在服务器容器内部运行JUnit测试,可以用Cactus框架,它是一个免费的开源框架,是Apache Jakarta项目的一部分。Cactus 包含了关于JUnit客户端如何连接到服务器,然后使测试运行的详细信息。
    参考网站:http://jakarta.apache.org/cactus/
    推荐指数:★★★★☆

    8. JFCUnit
    JFCUnit使得你能够为基于Java Swing的应用程序编写测试用例。它为从用代码打开的窗口上获得句柄提供了支持;为在一个部件层次定位部件提供支持;为在部件中发起事件(例如按一个按钮)以及以线程安全方式处理部件测试提供支持。
    参考网站:http://jfcunit.sourceforge.net/
    推荐指数:★★★☆☆

    9. StrutsTestCase
    StrutsTestCase(STC)框架是一个开源框架,用来测试基于 Struts 的 Web 应用程序。这个框架允许您在以下方面进行测试:
    在 ActionForm 类中的验证逻辑(validate() 方法)
    在 Action 类中的业务逻辑(execute() 方法)
    动作转发(Action Forwards)。
    转发 JSP
    STC 支持两种测试类型:
    Mock 方法 —— 在这种方法中,通过模拟容器提供的对象(HttpServletRequest、 HttpServletResponse 和 ServletContext),STC 不用把应用程序部署在应用服务器中,就可以对其进行测试。
    Cactus 方法 —— 这种方法用于集成测试阶段,在这种方法中,应用程序要部署在容器中,所以可以像运行其他 JUnit 测试用例那样运行测试用例。
    参考网站:http://strutstestcase.sourceforge.net/
    推荐指数:★★★★☆

    10. TestNG
    TestNG是根据JUnit 和 NUnit思想而构建的一个测试框架,但是TestNG增加了许多新的功能,使得它变得更加强大与容易使用,比如:
    支持JSR 175注解(JDK 1.4下利用JavaDoc注释同样也支持)
    灵活的Test配置
    支持默认的runtime和logging JDK功能
    强大的执行模型(不再需要TestSuite)
    支持独立的测试方法
    参考网站:http://testng.org/
    推荐指数:★★★★☆

    11.Bean-test
    出品人:RSW 软件公司
    网址:http://www.testmybeans.com/
    主要功能:对EJB 应用软件进行负载和压力测试(load/stress testing),以衡量它的扩展性(scalability)。

    12.EJBQuickTest
    网址:http://www.ejbquick.com/
    主要功能:模拟EJB 应用软件的客户程序,进行方法调用(method invocation)。支持回归测试(regression testing),测试数据的生成,以及性能和压力测试。

    13.JStyle
    出品人:Man Machine Systems
    网址:http://www.mmsindia.com/
    主要功能:分析Java 源代码的质量,包括产生源代码的有关统计信息和指标度量。

    14.JProbe
    出品人:Sitraka 软件公司
    网址:http://www.klgroup.com/
    主要功能:对Java 代码进行内存测试和性能剖析(profile),有针对EJB 的服务器端版本和针对普通Java 代码的客户端版本。

  • (转)选择测试自动化框架

    2007-11-06 23:30:34

       一种测试自动化框架(test automation framework)是由一些假设,概念和为自动化软件测试提供支持的实践组成的一个集合。这篇文章描述并演示了5种基本的框架。
     
    仅仅依靠一种捕获工具(例如IBM Rational® Robot)来录制并回放测试用例,并以此来开展自动化测试工作,是有缺陷的。只使用一种捕获工具来运行复杂且巨大的测试是非常耗时和昂贵的。因为这些测试是随机创建的,它们的功能很难追踪和重现,而且维护成本也是非常昂贵的。
    对于一个刚刚起步的自动化测试小组,更好的选择是使用一种测试自动化框架,它已经定义好了由一些假设、概念和为自动化测试制定工作平台或提供支持的实践组成的集合。在这篇文章中,我试着介绍一些我熟悉的测试自动化框架——特别是测试脚本模块化、测试库构架、关键字驱动/表格驱动测试、数据驱动测试和混合的测试自动化。我并不会评价哪一个框架更好或更差,而只是提供一些关于它们的描述和演示、它们所适用的场合,以及如何使用IBM Rational工具集实现的一些技巧。
     
    测试脚本模块化框架(The Test Script Modularity Framework)
    测试脚本模块化框架需要创建能够代表测试下应用程序(application-under-test)的模块,零件(Section)和函数的小的,独立的脚本。然后用一种分级的方式将这些小脚本组成更大的测试,实现一个特定的测试用例。
    在我将提及的所有框架中,这种框架应该是最容易精通且掌握的。在一个部件前面构建一个抽象层,以便向应用程序的其余部分隐藏该部件,是一种很著名的编程策略。它把应用程序与该部件的修改隔离开来,并体现了应用程序设计中的模块性。为了提高自动化测试套件(test suite)的可维护性和可测量性,测试脚本模块化框架应用了抽象或封装的原则。
    为了演示这种框架的应用,我以自动化Windows计算器程序中的测试其基本功能(加,减,乘和除)的一个简单测试用例(如图)为例。
    脚本层次结构的最下层是独立的加减乘除的脚本。下面的第一个脚本是加法,第二个是减法。
     
     
    然后在层次结构中下一级的两个脚本用来代表视图菜单中的标准视图和科学视图。就像下面的关于标准视图的脚本中表现的一样,这些脚本调用了我们在之前创建的脚本。
     
    最后,在层次结构中最顶层的脚本应该是用来测试应用程序不同视图的测试用例。
     
     
    从这个简单的例子中你可以了解到这种框架是如何产生高度的模块化并且增加测试套件的全面的可维护性。如果以后计算器上的某一个控制键被移动了,你所需要改变的只是底层调用这个控制键的脚本,而不是测试这个控制键的所有测试用例。
     
    测试库构架框架(The Test Library Architecture Framework)
    测试库构架框架和测试脚本模块化框架非常相似,有着同样的优势,但是它把测试下的应用程序分成过程和函数,而不是脚本。这种框架要求创建代表测试下应用程序模块,零件和函数的库文件(SQABasic libraries, APIs, DLLs等等)。然后这些库文件被测试用例脚本直接调用。
    为了演示这种框架的应用,我将自动化同上的测试用例,但使用了一个SQABasic的库文件。这个库文件包括执行操作的一个函数。以下是头文件(header file (.sbh))和库文件(library source file (.sbl))。
     
    使用这个库文件,产生以下测试用例脚本。
     
    从这个例子中,你能够看到这种框架也产生了高度的模块化,同样增加了测试套件的全面可维护性。就像在测试脚本模块化框架里一样,如果计算器中的一个控制键移动了,你所要做的只是更改库文件,这样同时也更新了所有调用控制键的脚本。
     
    关键字驱动或表驱动测试框架(The Keyword-Driven or Table-Driven Testing Framework)
    关键字驱动和表格驱动测试是一种独立于应用程序的自动化框架,这两个术语可以互换使用。这种框架要求开发的数据表和关键字,既独立于执行它们的自动化工具,也独立于驱动被测应用程序和数据的测试脚本代码。关键字驱动测试看上去非常像手工测试:被测应用程序的功能特性被记录在表格里,同时也记录在每个测试的分步指引里。
    如果要映射出手工测试Windows计算器功能过程中用鼠标执行的操作,我们可以创建如下的表格。” Window”一列包含了我们执行鼠标操作的应用程序窗口的名字(在这个例子中,他们都发生在计算器窗口里)。” Control”一列指出了鼠标点击的控制键的类型。” Action” 一列列出了鼠标的操作(或是测试人员的)。”Arguments”列指出了特定的控制键(1, 2, 3, 5, +, -等)

    Window       Control      Action           Arguments
    Calculator   Menu         View, Standard
    Calculator   Pushbutton   Click            1
    Calculator   Pushbutton   Click            +
    Calculator   Pushbutton   Click            3
    Calculator   Pushbutton   Click            =
    Calculator                Verify Result    4
    Calculator                Clear
    Calculator   Pushbutton   Click            6
    Calculator   Pushbutton   Click            -
    Calculator   Pushbutton   Click            3
    Calculator   Pushbutton   Click            =
    Calculator                Verify Result    3

     这个表格代表了一个完整的测试;如果需要表示一系列测试,可以根据需要增加行。一旦你创建了数据表,你就可以简单地编写用来读取每一个步骤的程序或脚本集,基于Action字段中的关键字执行步骤,完成错误检查,然后记录任何相关的信息。这种程序或脚本集看上去像下面的伪代码:
     
    Main Script / Program
    Connect to data tables.
    Read in row and parse out values.
    Pass values to appropriate functions.
    Close connection to data tables.
    Menu Module
    Set focus to window.
    Select the menu pad option.
    Return.
    Pushbutton Module
    Set focus to window.
    Push the button based on argument.
    Return.
    Verify Result Module
    Set focus to window.
    Get contents from label.
    Compare contents with argument value.
    Log results.
    Return.

    从这个例子里你可以看到为了生成许多的测试用例,这种框架只要求非常少的代码。用数据表生成不同的测试用例却可以重用相同的代码。IBM Rational工具集可以通过使用交互式的文件读取,查询或数据池延伸开来,或者你可以连同IBM Rational一起使用其他的工具(免费,其他的开发工具等)来构建这种类型的框架。
     
    数据驱动测试框架(The Data-Driven Testing Framework)
    数据驱动测试是这样一种框架:测试从数据文件(数据池、ODBC源、csv文件、Excel文件、DAO对象等)中读取输入和输出数值,并载入到捕获的或手工编码的脚本的变量里。在这种框架里,输入数值和输出验证数值都使用变量。贯穿程序的导航、数据文件的读取、记录测试状态和信息的日志,这些代码都编写在测试脚本中。
    在测试用例包含在数据文件里而不是脚本里这一点上,这种框架和表格驱动测试有些相似;脚本只是一种“驱动器”(driver),或者说传送数据的机制。不过它和表格驱动测试还是不同的:在数据驱动测试里,导航数据并不包含在表结构中,只有测试数据包含在数据文件中。
    如果使用SQABasic语言和IBM Rational的数据池功能,IBM Rational工具集里有自带的数据驱动功能。为了演示这种框架的使用,我们将测试一个简单应用程序中的订单表格。
     
     
    如果我们录制这个窗口中的数据输入,得到以下脚本:
     
     
    我们可以使用数据池来设置测试有效和无效信用卡号和过期日期的测试用例。例如,下图中是用于测试数据字段的测试用例中的数据池,
     
     
    为了接收这些数据,我们修改脚本如下:
     
     
    为了使用数据池,我增加了SQABasic命令,还增加了“While”循环来处理在数据池中每一行数据。我必须说明一下在“If…Then”语句中的Ucase(SQABasic命令)函数。Ucase用于将参数(在这个例子里是指数据池返回的数值)全部转换成大写。这种方法不是大小写敏感的,所以代码更强壮。
    这个框架趋向于减少你为了实现所有测试用例而需要的全部脚本数量,并且在开发绕开错误的办法(Workaround)和维护方面提供了最好的灵活性。和表格驱动测试非常相似,这种框架只需要非常少的代码就可以产生大量的测试用例。用IBM Rational工具集实现这种框架是非常容易的,并且它也提供了大量的关于指引和例子的详细文档。
     
    混合的测试自动化框架(The Hybrid Test Automation Framework)
    最常见的已实现的框架是上述技术的组合,抽取它们的优点,剔除其弱点。这种混合的测试自动化框架是发展时间较长且应用项目最多的框架。下图可以让你对如何用IBM Rational工具集组合不同的框架有初步的认识。
     
    总结
    我描述了自动化测试小组可以考虑使用的5种测试自动化框架,而不是只依赖某一种捕获工具。你可以使用其中一种或它们的组合。你可以通过嵌套测试脚本实现模块化,并使用SQABasic库文件来实现功能和过程。不管你选择哪一种数据驱动技术,你都可以使用数据池,或者你还可以扩展Robot来处理其他数据存贮类型。掌握这些框架的应用窍门、解决其中问题的唯一方法,就是投入进去并开始使用它们。
  • Software Test Automation

    2007-11-06 23:24:49

    Session 1: Tools That Can Help

     Tools throughout the lifecycle: Where in the software development life cycle test activities take place and which are tool supportable.

     Tool support for testing: The different types of tool support for testing activities throughout the software development lifecycle.

     Benefits and pitfalls: The benefits and pitfalls that can be expected with the different types of test tool.

    Session 2: How Much Help Can Tools Provide?

     What help is required?: The reasons for introducing a test tool and whether this is the best solution to the problem.
     
     Are tools the best solution?: Two different approaches to comparing different solutions against a set of problems.

     Quantifying the return on investment: A simple approach to estimating the costs and benefits of testing with and without tool support.

     Example business cases: A few simple business cases for different tool types.

     Assessing readiness: How to assess the best time to Introduce change

    Session 3: Introducing a Test Tool

     Choosing the right tool: The need for a project to select a test tool and the steps involved.

     Role of a pilot project: The test tool implementation process and the importance of a pilot project.

     Developing best practices: A few objectives for the pilot project and the areas where early adoption of best practices is key to long term success.

     Lessons to learn: A few case histories where things went wrong and the lessons that can be learnt from them.

    Session 4: Automating the Test Process

      The test process: A detailed description of the steps taken when creating and performing tests and their suitability for automation.

      Example application: A brief description of a simple application with an explanation of one simple test that could be performed on it.

     Automating test execution: The problems incurred by using record / playback facilities alone.
     Automating test verification: Some of the many and varied choices that can be made.
     Automated but not Automatic: The problems involved in making the same automated test work under different situations and an introduction to possible solutions.

    Session 5: Case History

     Background: The reasons behind one organisation's decision to automate testing and the solutions they implemented.

     Results: The effort put into automation and the savings achieved.
     Lessons: Things learnt in this case history that need to be addressed by most other test automation projects.

    Session 6: Scripting Techniques

      Example application: A brief description of a simple application with an explanation of one simple test that could be performed on it.

      Introduction to scripting: A few facts about scripting that help explain its importance in test execution automation.
     
      Scripting techniques: Five approaches to scripting (linear, structured, shared, data-driven and keyword-driven). The pros and cons of each technique using examples to illustrate the points made.

      Script pre-processing: An explanation of script pre-processing and the benefits that can be achieved.

    Session 7: Automated Comparison

     Automated test verification: Some simple guidelines that can help you avoid making costly mistakes.

     Test sensitivity: The importance of achieving the right degree of verification and how it should be varied across different test cases.

     Implementing post-execution comparison: Alternative approaches to automating post-execution comparison.

     Simple versus complex comparisons: The benefits and pitfalls of using simple and complex comparisons and a few approaches to achieving them.

     Post execution comparisons: A practical approach to implementing comparisons

    Session 8: Testware Architecture

     Introduction: How testware is organised and structured. Three key issues that have to be addressed in order to avoid serious problems. An approach to organising testware that addresses each of these three issues.

     Implementation ideas: Explains one possible implementation of testware architecture.

    Session 9: Pre- and Post-Processing

     Introduction: The activities that are collectively called pre- and post-processing and the importance of automating them. Examples of pre- and post-processing.

     Implementation: Two different approaches to implementing pre- and post-processing.

    Session 10: Other Issues

     Test selection: The order in which tests should be run and how to efficiently select them for execution.

     Order of running: Considerations to take account of when executing a number of automated test cases.

     Test status: The end result of a test should not be limited to a simple pass or fail.

     Monitoring progress: Metrics for reporting progress and how to obtain and present them.

    Session 11: Test Suite Maintenance

     Attributes of test maintenance: Common reasons for high maintenance costs and how to deal with them
    Strategy and tactics: The general approach to keeping on top of test suite maintenance costs

  • QTP关键技术(六) - 嵌套Action间的参数传递

    2007-11-06 22:58:39

    参数传递思路:
    将Action1的输入参数InAction1传递给Action2的输入参数InAction2,
    将Action2的输出参数OutAction2传递给Action1的输出参数OutAction1。
      
    1)创建两个Action,嵌套关系,在关键字视图,拖动Action2到Action1下面有缩进的地方
     
    2)右键Action1,选Action Properties,
    在Input Parameters中添加参数InAction1,
    在Output Parameters中添加参数OutAction1,点OK
     
    3)右键Action2,选Action Properties,
    在Input Parameters中添加参数InAction2,
    在Output Parameters中添加参数OutAction2,点OK
     
    4)在Action1和Action2间建立关联
    右键Action2,选Action Call Properties,弹出Action Call Properties窗口;
     
    选中InAction2的Value,弹出Value Configuration Options窗口;
    在Parameter中共有四项可供选择,选择Test/Action parameter,
    在Parent action parameters的Parameter中选择InAction1
    同理,OutAction2的Store In值为OutAction1
     
    以上的操作就是把输入值 通过Action1的输入参数,传递给Action2的输入参数进行使用,
    然后Action2运行后,将输出参数通过Action1的输出参数传递出去。
    这里只是对嵌套Action进行最基本的讲解,在实际使用当中还要灵活运用。
  • QTP关键技术(五) - 并列Action间的参数传递

    2007-11-06 22:56:53

    思路:将Action1的输出参数,传递给Action2作为输入参数。
     
    1)创建两个Action,关系是并列关系,不是嵌套的.
    2)右键Action1,选Action Properties,在Output Parameters中添加参数OutAction1,点OK
     
    3)右键Action2,选Action Properties,在Input Parameters中添加参数InAction2,点OK
     
    4)将Action1的输出OutAction1,传递给Action2的输入InAction2
     此时右键Action2,选Action Call Properties,弹出Action Call Properties窗口;
    选中InAction2的Value,弹出Value Configuration Options窗口;
    在Parameter中共有四项可供选择,选择Test/Action parameter,
    在Output from previous call(s)中的Action选择Action1,Parameter中选择OutAction1;
    表示Action2中的参数InAction2,是由Action1中的参数OutAction1传递而来。
    以上就完成了两个并列Action间参数的传递。Action2只能调用Action1的输出参数,而不能调用Action1的输入参数。
  • QTP关键技术(四) - Test和Top-Level Action间参数传递

    2007-11-06 22:44:56

    以下讲述一个关于QTP的Test参数和Top-Level Action参数的使用例子,
      有些人不知道这个参数做什么用的,尤其是Test的output不知道怎么取。
    其实它是外部对象传给它的(这个外部对象可以是Quality Center,也可以是vbs这样的驱动程序)。
    以下给大家讲解一个关于QuickTest的Flight的例子。
    首先,在QTP里录制一段脚本,代码如下:
    SystemUtil.CloseProcessByName "Flight4a.exe"
    SystemUtil.Run Environment.Value("ProductDir") & "\samples\flightapp\flight4a.exe"
    Dialog("Login").WinEdit("Agent Name:").Set Parameter("InAction1")
    Dialog("Login").WinEdit("Password:").SetSecure "46f1f4259cf01348f5a4c630bcee96084f3d1619"
    Dialog("Login").WinButton("OK").Click
    Window("Flight Reservation").Close
    Parameter("OutAction1") = True
      
       然后在QTP中进行参数设置,
    1)设置Action的参数
    鼠标选中Keyword View中的Action1,
    点右键---Action Property,
    在Parameters的Tab标签下,分别加入:
    输入参数 InAction1 ,类型String;
    输出参数 OutAction1,类型 Boolean。
     
    2)设置Test的参数
    在QTP的菜单File--->>Settings的Parameters的Tab标签下,分别加入:
    输入参数 InTest1 ,类型String;
    输出参数 OutTest1,类型 Boolean。
     
    3)将Test和Action间参数进行关联传递
    鼠标还是选中Keyword View中的Action1,点右键,
    这次点“Action Call Properties”,
    在Parameter Values里进行参数化传递设置,
    把InTest1的值传递给InAction1,
    把OutAction1的值传递给OutTest1。
     
    以上设置完毕后,点“保存”,保存到C:\下,存为Test1好了。
     
    最后,在你的硬盘上新建一个vbs文件,文件内容如下:
    Dim qtApp, pDefColl, pDef, rtParams, rtParam
    Set qtApp = CreateObject("QuickTest.Application")   ' 创建QTP自动化对象
    qtApp.Launch
    qtApp.Visible = True
    qtApp.Open "C:\Test1"
    Set pDefColl = qtApp.Test.ParameterDefinitions      ' 取得Test级参数定义
    cnt = pDefColl.Count
    Indx = 1
    While Indx <= cnt
        Set pDef = pDefColl.Item(Indx)
        Indx = Indx + 1
    Wend
    Set rtParams = pDefColl.GetParameters()
    Set rtParam = rtParams.Item("InTest1")               ' 对应前面定义的Test输入参数
    rtParam.Value = "songfun"
    qtApp.Test.Run , True, rtParams                      ' 同步运行并传入参数
    MsgBox rtParams.Item("OutTest1").Value               ' 取回Test输出参数的值
     
    做完这步之后,保存这个vbs文件,双击执行这个vbs文件,你会发现它自动启动了QTP,而且进行了自动测试,最后还取到了运行成功与否的布尔值。
    这就是关于Test、Top-Level Action参数使用的例子,它的参数的整个传递过程是:
    外部vbs文件 传参数给QuickTest的Test的输入参数InTest1,然后InTest1传参数到InAction1去驱动了Action1的测试,
    然后通过这个Action1得出了OutAction1的值,然后通过OutAction1传给OutTest1,最后再传回到vbs文件中。
    示例用MsgBox来打出重新传回到vbs文件中的字符串。
  • QTP关键技术(三) - 对同步点的理解

    2007-11-06 22:41:56

    1)QTP的脚本语言是VBScript,脚本在执行的时候,执行语句之间的时间间隔是固定的,也就是说脚本在执行完当前的语句之后,等待固定的时间间隔后开始执行下一条语句
     
    2)问题:假设后一条语句的输入是前一条语句的输出,如果前一条语句还没有执行完,这时候将要导致错误的发生!
     
    3)措施:加入同步点、加入Wait语句
     
    4)同步点Synchronization Point
    QTP脚本在执行过程中如果遇到同步点,则会暂停脚本的执行,直到对象的属性获取到了预先设定的值,才开始执行下一条脚本。
    如果在规定的时间内没有获取到预先设定的值,则会抛出错误信息。
     
    例如:
    Window("Flight Reservation").ActiveX("Threed Panel Control").WaitProperty "text", "Insert Done...", 10000
    执行到上面这条语句时,QTP会暂停执行,直到显示”Insert Done…”,
    如果在规定的时间10,000ms后text的值没有等于”Insert Done…”,则会抛出错误信息
     
    5)如何获取Synchronization Point
           A.在Recording状态下,通过Insert → Synchronization Point实现
           B.非Recording状态下,在Expert View下,通过Insert → Step Generator → Category(Test Objects)→ Object(The Object you're Testing)→ Operation(WaitProperty)→ PropertyName、PropertyValue、TimeOut分别填写"text", "Insert Done...", 10000
     
    6)Wait
           总的来说就是死等,比如说wait 10,当运行到这条语句时,等待10秒钟后,才开始再读下面的语句。所以说写脚本的时候一定要估计好时间,否则的话会浪费运行的时间,或者出现等待时间不足的现象。
  • QTP关键技术(二) - 对Check Point的较为深入理解

    2007-11-06 22:37:05

    1. 定义:
    将特定属性的当前数据与期望数据进行比较的检查点,用于判定被测试程序功能是否正确
    Check Point可以分两类:QTP内置验证点和自定义验证点
     
    2. QTP内置验证点实现原理及优缺点
           A.录制时,根据用户设置的验证内容,记录数据作为基线数据
           B.回放时,QTP捕获对象运行时的数据,与脚本中的基线数据进行比较
           C.如果基线数据和运行数据相同,结果为PASS,反之为Failed.
           D.优点是 操作简单方便
           E.缺点是 QTP默认的检查的属性有时不符合自己的要求,如希望得到检查的属性没有在里面, 而默认的属性不需要检查等。
     
    3. QTP内置验证点结果的应用
           A.录制的验证点在没有进行调整前,仅仅是给出了检查结果是通过还是错误的
           B.实际的测试过程中,可以根据验证点的结果进行不同的操作
           If Window("Flight Reservation").WinEdit("Name:").Check(CheckPoint("Name:")) = True then
                  msgbox "oh, success!"
    Else
                  msgbox "oh, failure!"
    End If
     
    4. 自定义验证点的应用及优缺点
           A.使用条件语句对实际值和期望值进行对比,然后用Reporter对象报告结果
           '检查Ticket Number
    If CStr(dbTicketNumber) = CStr(DataTable("oTicketNumber", dtLocalSheet)) Then
           Reporter.ReportEvent micPass, "打开订单- TicketNumber", "期望结果是:" & dbTicketNumber & ", 界面显示实际结果是:" & DataTable("oTicketNumber", dtLocalSheet)
    Else
       Reporter.ReportEvent micFail, "打开订单- TicketNumber", "期望结果是:" & dbTicketNumber & ", 界面显示实际结果是:" & DataTable("oTicketNumber", dtLocalSheet)
    End If
           B.优点是 非常灵活,前者实现的所有检查都可以用此方法来实现;
           C.缺点是 代码量大,对测试人员的要求高。
     
    5. 对Check Point的深入理解
     
    A.个人认为在比较简单的和有Active Screen的情况下可以使用QTP内置的Check Point,在比较复杂的情况下可以通过编程和使用Reporter来完成.
    B.在使用check方法时,必须先在Keyword View或者Active Screen中新建CheckPoint。否则无法对该对象进行check,系统报错说无法在对象仓库中找到此对象。如果插入检查点,系统会自动把相关的对象添加到对象库中。
    我认为检查点并不是一个实实在在的对象。因为你可以对同一个对象设置不同的检查点,可以把它的某个属性既设定成True,也可以设定为False。而对象库中的对象的属性值是必须依赖于对象的实际属性值的。如果随意更改有可能无法识别。还有就是可以针对同一个对象设定多个检查点。在测试窗口中可以看到这两个检查点的名称是区分开来的。所以我认为检查点并不是实际存在的对象,而是一些类似映射的东西。
    尽管检查点并不是对象库中的实在的对象,但是它必须对应到对象库中的某个实实在在的对象,好像它的一个映像一样,而且在实际的操作过程中,QTP还是把它作为一个对象来处理的。
    因为我们无法像其他对象一样把“检查点对象”添加到对象库中,而QTP又认为它是个对象,所以我们无法在专家视图中直接添加检查点脚本。但是我们可以采用描述性编程(Descriptive Programming)的方式来实现检查点的功能。
    CheckPoint 是一个依赖于Object Repository(对象库)中的某个对象的“虚拟对象”。其具体含义是:如果它所依赖的QTP 对象库中的对象没有了,那么此CheckPoint 也就不存在了;这个“虚拟对象”的属性是从它所依赖的对象的属性中“抽取”出来的,它具有它所依赖的对象的一个或几个属性,但不能增加它所依赖的对象没有的任何属性。
    CheckPoint 是一个“虚拟对象”的重要原因是:每个Object都能在Object Repository找到它的Name、Class Properties,而CheckPoint 在Object Repository中就根本不存在。选择脚本中的某个对象后,在Object Property 的对话框里面有个Respository按钮,点击它后,你会看到此对象在Object Respository 的Name、Class 和 Properties。
    选择一个CheckPoint后,在CheckPoint Properties 的对话框里没有 Respository 按钮,在Object Respository中也找不到此CheckPoint的Name、Class 和 Properties(因为它在对象库中根本就不存在!)。
  • QTP关键技术(一) - 对象识别及存储技术基本常识

    2007-11-06 22:35:22

    1)测试对象模型(Test Object Model)
    测试对象模型是QTP用来描述应用程序中对象的一组对象类。每个测试对象类拥有一系列用于唯一标识对象的属性和一组QTP能够录制的方法
     
    2)测试对象(Test Object)
    用于描述应用程序实际对象的对象,QTP存储这些信息用来在运行时识别和检查对象
     
    3)运行时对象(Run-Time Object)
           是应用程序中的实际对象,对象的方法将在运行中被执行
     
    4)QTP的录制过程
           A.确定用于描述当前操作对象的测试对象类,并创建测试对象
           B.读取当前操作对象属性的当前值,并存储一组属性和属性值到测试对象中
           C.为测试对象创建一个独特的有别于其他对象的名称,通常使用一个突出属性的值
           D.记录在对象上执行的操作
     
    5)QTP的回放过程
           A.根据对象的名称到对象存储库(Object Repository)中查找相应的对象
           B.读取对象的描述,即对象的属性和属性值
           C.基于对象的描述,QTP在应用程序中查找相应的对象
           D.执行相关的操作
  • load runner 参数英汉对照

    2007-11-06 00:48:36

    LR函数:
    lr_start_transaction  为性能分析标记事务的开始
    lr_end_transaction  为性能分析标记事务的结束
    lr_rendezvous  在 Vuser 脚本中设置集合点
    lr_think_time  暂停 Vuser 脚本中命令之间的执行 
    lr_end_sub_transaction 标记子事务的结束以便进行性能分析
    lr_end_transaction 标记 LoadRunner 事务的结束
    lr_end_transaction("trans1", LR_AUTO);
    lr_end_transaction_instance 标记事务实例的结束以便进行性能分析
    lr_fail_trans_with_error 将打开事务的状态设置为 LR_FAIL 并发送错误消息
    lr_get_trans_instance_duration 获取事务实例的持续时间(由它的句柄指定)
    lr_get_trans_instance_wasted_time 获取事务实例浪费的时间(由它的句柄指定)
    lr_get_transaction_duration 获取事务的持续时间(按事务的名称)
    lr_get_transaction_think_time 获取事务的思考时间(按事务的名称)
    lr_get_transaction_wasted_time 获取事务浪费的时间(按事务的名称)
    lr_resume_transaction 继续收集事务数据以便进行性能分析
    lr_resume_transaction_instance 继续收集事务实例数据以便进行性能分析
    lr_set_transaction_instance_status 设置事务实例的状态
    lr_set_transaction_status 设置打开事务的状态
    lr_set_transaction_status_by_name 设置事务的状态
    lr_start_sub_transaction 标记子事务的开始
    lr_start_transaction 标记事务的开始
    lr_start_transaction("trans1");
    lr_start_transaction_instance 启动嵌套事务(由它的父事务的句柄指定)
    lr_stop_transaction 停止事务数据的收集
    lr_stop_transaction_instance 停止事务(由它的句柄指定)数据的收集
    lr_wasted_time  消除所有打开事务浪费的时间
    lr_get_attrib_double 检索脚本命令行中使用的 double 类型变量
    lr_get_attrib_long 检索脚本命令行中使用的 long 类型变量
    lr_get_attrib_string 检索脚本命令行中使用的字符串
    lr_user_data_point 记录用户定义的数据示例
    lr_whoami 将有关 Vuser 脚本的信息返回给 Vuser 脚本
    lr_get_host_name 返回执行 Vuser 脚本的主机名
    lr_get_master_host_name 返回运行 LoadRunner Controller 的计算机名
    lr_eval_string 用参数的当前值替换参数
    lr_save_string 将以 NULL 结尾的字符串保存到参数中
    lr_save_var 将变长字符串保存到参数中
    lr_save_datetime 将当前日期和时间保存到参数中
    lr_advance_param 前进到下一个可用参数
    lr_decrypt 解密已编码的字符串
    lr_eval_string_ext 检索指向包含参数数据的缓冲区的指针
    lr_eval_string_ext_free 释放由 lr_eval_string_ext 分配的指针
    lr_save_searched_string 在缓冲区中搜索字符串实例,并相对于该字符串实例,将该缓冲区的一部分保存到参数中
    lr_debug_message 将调试信息发送到输出窗口
    lr_error_message 将错误消息发送到输出窗口
    lr_get_debug_message 检索当前消息类
    lr_log_message 将消息发送到日志文件
    lr_output_message 将消息发送到输出窗口
    lr_set_debug_message 设置调试消息类
    lr_vuser_status_message 生成带格式的输出,并将其写到 Controller 的 Vuser 状态区域
    lr_message 将消息发送到 Vuser 日志和输出窗口
    lr_load_dll 加载外部 DLL
    lr_peek_events 指明可以暂停 Vuser 脚本执行的位置
    lr_think_time 暂停脚本的执行,以模拟思考时间(实际用户在操作之间暂停以进行思考的时间)
    lr_continue_on_error 指定处理错误的方法

    lr_continue_on_error (0);lr_continue_on_error (1);
    lr_rendezvous 在 Vuser 脚本中设置集合点
    TE_wait_cursor 等待光标出现在终端窗口的指定位置
    TE_wait_silent 等待客户端应用程序在指定秒数内处于静默状态
    TE_wait_sync 等待系统从 X-SYSTEM 或输入禁止模式返回
    TE_wait_text 等待字符串出现在指定位置
    TE_wait_sync_transaction 记录系统在最近的 X SYSTEM 模式下保持的时间

    WEB函数列表:

    web_custom_request 允许您使用 HTTP 支持的任何方法来创建自定义 HTTP 请求
    web_image 在定义的图像上模拟鼠标单击
    web_link 在定义的文本链接上模拟鼠标单击
    web_submit_data 执行“无条件”或“无上下文”的表单
    web_submit_form 模拟表单的提交
    web_url 加载由“URL”属性指定的 URL
    web_set_certificate 使 Vuser 使用在 Internet Explorer 注册表中列出的特定证书
    web_set_certificate_ex 指定证书和密钥文件的位置和格式信息
    web_set_user 指定 Web 服务器的登录字符串和密码,用于 Web 服务器上已验证用户身份的区域
    web_cache_cleanup 清除缓存模拟程序的内容
    web_find 在 HTML 页内搜索指定的文本字符串
    web_global_verification 在所有后面的 HTTP 请求中搜索文本字符串
    web_image_check 验证指定的图像是否存在于 HTML页内
    web_reg_find 在后面的 HTTP 请求中注册对 HTML源或原始缓冲区中文本字符串的搜索
    web_disable_keep_alive 禁用 Keep-Alive HTTP 连接
    web_enable_keep_alive 启用 Keep-Alive HTTP 连接
    web_set_connections_limit 设置 Vuser 在运行脚本时可以同时打开连接的最大数目
    web_concurrent_end 标记并发组的结束
    web_concurrent_start 标记并发组的开始
    web_add_cookie 添加新的 Cookie 或修改现有的 Cookie
    web_cleanup_cookies 删除当前由 Vuser 存储的所有 Cookie
    web_remove_cookie 删除指定的 Cookie
    web_create_html_param 将 HTML 页上的动态信息保存到参数中。(LR 6.5 及更低版本)
    web_create_html_param_ex 基于包含在 HTML 页内的动态信息创建参数(使用嵌入边界)(LR 6.5 及更低版本)
    web_reg_save_param 基于包含在 HTML 页内的动态信息创建参数(不使用嵌入边界)
    web_set_max_html_param_len 设置已检索的动态 HTML 信息的最大长度
    web_add_filter 设置在下载时包括或排除 URL 的条件
    web_add_auto_filter 设置在下载时包括或排除 URL 的条件
    web_remove_auto_filter 禁用对下载内容的筛选
    web_add_auto_header 向所有后面的 HTTP 请求中添加自定义标头
    web_add_header 向下一个 HTTP 请求中添加自定义标头
    web_cleanup_auto_headers  停止向后面的 HTTP 请求中添加自定义标头
    web_remove_auto_header 停止向后面的 HTTP 请求中添加特定的标头
    web_revert_auto_header 停止向后面的 HTTP 请求中添加特定的标头,但是生成隐性标头
    web_save_header 将请求和响应标头保存到变量中
    web_set_proxy 指定将所有后面的 HTTP 请求定向到指定的代理服务器
    web_set_proxy_bypass 指定 Vuser 直接访问(即不通过指定的代理服务器访问)的服务器列表
    web_set_proxy_bypass_local 指定 Vuser 对于本地 (Intranet) 地址是否应该避开代理服务器
    web_set_secure_proxy 指定将所有后面的 HTTP 请求定向到服务器
    web_set_max_retries 设置操作步骤的最大重试次数
    web_set_timeout 指定 Vuser 等待执行指定任务的最长时间
    web_convert_param 将 HTML 参数转换成 URL 或纯文本
    web_get_int_property 返回有关上一个 HTTP 请求的特定信息
    web_report_data_point 指定数据点并将其添加到测试结果中
    web_set_option 在非 HTML 资源的编码、重定向和下载区域中设置 Web 选项
    web_set_sockets_option  设置套接字的选项
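
    作为对上述函数列表的补充,下面给出一个假设的组合示例脚本(其中的事务名、URL以及左右边界字符串均为假设),演示事务函数、参数函数与关联函数的典型配合方式:

    Action()
    {
        // 注册关联:从下一个响应中按左右边界把值保存到参数 {sid}
        web_reg_save_param("sid",
            "LB=sessionID=",
            "RB=&",
            "Ord=1",
            LAST);

        lr_start_transaction("login");

        web_url("login_page",
            "URL=http://example.com/login",
            "Mode=HTML",
            LAST);

        lr_think_time(3);                                   // 模拟用户思考时间

        lr_save_string("admin", "user");                    // 把常量保存到参数 {user}
        lr_output_message("session id = %s", lr_eval_string("{sid}"));

        lr_end_transaction("login", LR_AUTO);

        return 0;
    }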

  • 测试英文简写

    2007-11-06 00:32:54

    ADO:ActiveX Data Object,ActiveX 数据对象。ASP语言访问数据库的中间件。
    BAT:Build Acceptance Testing,工作版本可接受测试。新工作版本正式测试前进行的一项快速测试过程,目的是保证软件的基本功能和内容正确完整,具有可测试性,经过BAT测试后,就进入了正规测试阶段。
    BRC:Bug Review Council,缺陷复查委员会。负责Adobe 软件缺陷的成员,负责复查报告的新缺陷是否正确,并且修正处理。
    CCJK:Chinese Simplified、Chinese Traditional、Japanese、Korean,简体中文,繁体中文,日文和朝鲜语。本地化测试中的四种典型东亚语言。
    CMM:Capability Maturity Model,能力成熟度模型。美国卡内基·梅隆大学的软件工程研究院(SEI)开发的用于软件开发过程的管理及工程能力的提高与评估的方法,共五个级别。
    C/S:Client/Server,客户机/服务器。局域网软件的一种模式。
    DBCS:Double Bytes Character Set,双字节字符集。用两个字节长度表示一个字符的字符编码系统。中文,日文和朝鲜文都用双字节字符集表示。
    DLL:Dynamic Link Library,动态链接库。大型软件常用的一种软件开发方法,按照功能模块将不同功能分别集成在不同的动态链接库中。国际化软件开发中通常将可以本地化的软件界面资源文件放在单独的动态链接库中,便于本地化处理。

    DTS:Defect Tracking System,缺陷跟踪系统。软件测试中集中管理软件缺陷(bug)的数据库,完成缺陷报告、修改、查询、统计等功能。
    EOF:End Of File,文件结尾。某些文件在存储时在结尾处写入代表结尾的特殊信息。
    ERP:Enterprise Resource Planning,企业资源规划。它是从MRP(物料资源计划)发展而来的新一代集成化管理信息系统,它扩展了MRP的功能,其核心思想是供应链管理,它跳出了传统企业边界,从供应链范围去优化企业的资源,是基于网络经济时代的新一代信息系统。
    EULA:End User License Agreement,终端用户许可协议。软件中关于终端用户安装和使用授权和其他许可的内容,通常是一个单独的文档。
    FIGS:French、Italian、German、Spanish,法语,意大利语,德语,西班牙语。是软件本地化的欧洲代表语言。
    FTP:File Transfer Protocol,文件传输协议。用于在网络上登录、显示文件及目录清单和传输文件的协议。FTP支持多种文件类型和文件格式,包括ASCII文件和二进制文件。在软件测试项目中,对于大型文件(例如,Build和测试计划文档等),经常放在客户指定的采用FTP机制上传和下载的文件服务器上。
    GM:Golden Master,金版工作版本。是通过全部测试准备大量刻盘,正式对外发布的软件版本。
    GPM:Global Project Manager,全球项目经理。负责多种语言测试的项目经理,与本地项目经理以及客户方的项目经理协调,完成测试项目。
    HTTP:Hypertext Transfer Protocol,超文本传输协议。用于管理超文本与其他超文本文档之间的连接。
    IE:Internet Explorer,微软(Microsoft)开发的一种因特网(Internet)浏览器。
    IIS:Internet Information Server,因特网信息服务器。一种因特网Web服务器,配置网站管理信息和服务。

    ISO:International Organization for Standardization,国际标准化组织。ISO是世界上最大的国际标准化组织。它成立于1947年2月23日,它的前身是1928年成立的国际标准化协会国际联合会(简称ISA)。ISO的最高权力机构是每年一次的全体大会,其日常办事机构是中央秘书处,设在瑞士的日内瓦。
    IT:Information Technology,信息技术。包含现代计算机、网络、通讯等信息领域的技术。IT的普遍应用,是进入信息社会的标志。
    LPM:Local Project Manager,本地项目经理。负责一种或多种特定区域语言测试的项目经理,与全球项目经理以及本地测试团队协调完成特定区域语言的测试项目。
    MIS:Management Information System,管理信息系统。进行日常事务操作的系统。
    NLS:National Language Support,国家语言支持。允许用户设置区域等软件功能。识别用户使用的语言、日期和时间格式等信息。也包括支持键盘布局和特定语言的字体。
    ODBC:Open Database Connectivity,开放式数据库互连。是微软公司开放服务结构中有关数据库的一个组成部分,它建立了一组规范,并提供了一组对数据库访问的标准API(应用程序编程接口)。这些API利用SQL来完成其大部分任务。ODBC本身也提供了对SQL语言的支持,用户可以直接将SQL语句送给ODBC。
    OS:Operating System,操作系统。管理和控制计算机系统中的所有硬、软件资源,合理地组织计算机工作流程,并为用户提供一个良好的工作环境和友好的基础软件。测试中常用的OS包括Windows、Mac、Linux等。
    PDF:Portable Document Format,便携式文档格式。由Adobe公司开发的基于PostScript标准的文件格式。PDF文件可以由其他软件创建,主要用于电子文档的发布。
    PO:Project Order,项目工作单。测试项目经理为每个测试工程师发送的包含测试内容、测试要求、测试提交时间、测试工具说明等的任务单。
    PPR:Post Project Review,项目后期审查。测试项目结束后的总结和审阅报告,包括成功的方面,失败的方面和今后的准备措施等。

    MS:Microsoft,微软,世界著名软件开发商。
    RC:Release Candidate,候选发布版本。软件测试中进行最后一次测试的软件工作版本。
    QA:Quality Assurance,质量保证。ISO 8402:1994中的定义是“为了提供足够的信任表明实体能够满足品质要求,而在品质管理体系中实施并根据需要进行证实的全部有计划和有系统的活动”。有些推行ISO9000的组织会设置这样的部门或岗位,负责ISO9000标准所要求的有关品质保证的职能,担任这类工作的人员就叫做QA人员。
    SGML:Standard Generalized Markup Language,标准通用标记语言。一种通用的文档结构描述置标语言,主要用来定义文献模型的逻辑和物理类结构。SGML是ISO组织于1986年发布的ISO 8879国际标准。
    SSR:Software Specification Review,软件规格审查。在实施测试之前,要审阅测试过程中用到的各种文档,包括完整性和正确性等。
    SQL:Structured Query Language,结构化查询语言。它是目前使用最广泛的数据库语言,SQL是由IBM发展起来的,后来被许多数据库软件公司接受而成为了业内的一个标准。
    TL:Team Leader,团队主管。负责测试组的测试工作,包括资源分配,测试过程管理和提交测试结果,与测试经理和本地项目经理以及测试团队成员协调。
    TBD:To Be Decided/To Be Determined,待定的。常出现在测试文档中,表示还没有正式确定的内容。
    TOC:Table Of Contents,内容目录表。常出现在书籍或长文章中正文前面,列出章节标题和对应页码。
    UI:User Interface,用户界面。软件的人机交互的接口,常见的UI包括菜单、对话框、窗口等。

    URL:Uniform Resource Locator,统一资源定位器。网站完整路径名的一种表示方法,例如中国本地化网的URL是http://www.globalization.com.cn。
    UML:Unified Modeling Language,统一建模语言。一个支持模型化和软件系统开发的图形化语言,为软件开发的所有阶段提供模型化和可视化支持,包括由需求分析到规格,到构造和配置。
    VPN:Virtual Private Network,虚拟专用网络。在公众网络上所建立的企业网络,并且此企业网络拥有与专用网络相同的安全、管理及功能等特点,它替代了传统的拨号访问,利用因特网公网资源作为企业专网的延续,节省昂贵的长途费用。
    VSS:Visual SourceSafe,可视化资源配置管理,微软Windows平台下的一个小型软件配置管理工具。
    WWW:World Wide Web,万维网。Internet上的一种信息服务,是一种基于超文本文件的交互式浏览检索工具。用户可用WWW在Internet网上浏览、传递、编辑超文本格式的文件。
    XML:Extensible Markup Language,可扩展标记语言。W3C发布的数据文件存储格式。可以以容易而一致的方式格式化和传送数据。

     

  • (转)J2EE程序的性能优化技巧(4)

    2007-10-20 22:07:36


    六、数据访问

      在J2EE开发的应用系统中,数据库访问一般是个必备的环节。数据库用来存储业务数据,供应用程序访问。

      在Java技术的应用体系中,应用程序是通过JDBC(Java Database Connectivity)实现的接口来访问数据库的,JDBC支持“建立连接、SQL语句查询、处理结果”等基本功能。在应用JDBC接口访问数据库的过程中,只要根据规范来实现,就可以达到要求的功能

      但是,有些时候进行数据查询的效率着实让开发人员不如所愿,明明根据规范编写的程序,运行效果却很差,造成整个系统的执行效率不高。

      ·使用速度快的JDBC驱动

      JDBC API包括两种实现接口形式,一种是纯Java实现的驱动,一种利用ODBC驱动和数据库客户端实现,具体有四种驱动模式并各有不同的应用范围,针对不同的应用开发要选择合适的JDBC驱动,在同一个应用系统中,如果选择不同的JDBC驱动,在效率上会有差别。

      例如,有一个企业应用系统,不要求支持不同厂商的数据库,这时就可以选择模式4的JDBC驱动,该驱动一般由数据库厂商实现的基于本地协议的驱动,直接调用数据库管理系统使用的协议,减少了模式3中的中间层。

      ·使用JDBC连接池

      为了提高访问数据库的性能,我们还可以使用JDBC 2.0的一些规范和特性,JDBC是占用资源的,在使用数据库连接时可以使用连接池Connection Pooling,避免频繁打开、关闭Connection。而我们知道,获取Connection是比较消耗系统资源的。

      Connection缓冲池是这样工作的:当一个应用程序关闭一个数据库连接时,这个连接并不真正释放而是被循环利用,建立连接是消耗较大的操作,循环利用连接可以显著的提高性能,因为可以减少新连接的建立。

      一个通过DataSource获取缓冲池获得连接,并连接到一个CustomerDB数据源的代码演示如下:

    Context ctx = new InitialContext();
    DataSource dataSource = (DataSource) ctx.lookup("jdbc/CustomerDB");
    Connection conn = dataSource.getConnection("username", "password");

      ·缓存DataSource

      一个DataSource对象代表一个实际的数据源。这个数据源可以是从关系数据库到表格形式的文件,完全依赖于它是怎样实现的,一个数据源对象注册到JNDI名字服务后,应用程序就可以从JNDI服务上取得该对象,并使用之和数据源建立连接。

      通过上面的例子,我们知道DataSource是从连接池获得连接的一种方式,通过JNDI方式获得,是占用资源的。

      为了避免再次的JNDI调用,可以系统中缓存要使用的DataSource。

      ·关闭所有使用的资源

      系统一般是并发的系统,在每次申请和使用完资源后,应该释放供别人使用。数据库资源是比较宝贵的,使用完成后应该保证彻底的释放。(每个JDBC驱动模式的含义可以参考Sun的JDBC文档。)

      请看下面的代码段:

    Connection conn = null;
    Statement stmt = null;
    ResultSet rs = null;
    try {
     DataSource dataSource = getDataSource();
     // 取的DataSource的方法,实现略。
     conn = datasource.getConnection();
     stmt = conn.createStatement();
     rs = stmt.executeQuery("SELECT * FROM ...");
     ... // 其他处理
     rs.close();
     stmt.close();
     conn.close();
    }catch (SQLException ex) {
     ... // 错误处理
    }

      粗看似乎没有什么问题,也有关闭相关如Connection等系统资源的代码,但当出现异常后,关闭资源的代码可能并不被执行,为保证资源的确实已被关闭,应该把资源关闭的代码放到finally块:

    Connection conn = null;
    Statement stmt = null;
    ResultSet rs = null;
    try {
     DataSource dataSource = getDataSource();
     // 取的DataSource的方法,实现略。
     conn = datasource.getConnection();
     stmt = conn.createStatement();
     rs = stmt.executeQuery("SELECT * FROM ...");

     ... // 其他处理
    }catch (SQLException ex) {
     ... // 错误处理

    }finally{
     if (rs!=null) {
      try {
       rs.close(); // 关闭ResultSet
      } catch (SQLException ex) {
       ... // 错误处理
      }
     }

     if (stmt!=null){
      try {
        stmt.close(); // 关闭Statement
       } catch (SQLException ex) {
       ... // 错误处理
      }
     }
     if (conn!=null){
      try {
        conn.close(); // 关闭Connection
       } catch (SQLException ex) {
       ... // 错误处理
      }
     }
    }

      ·大型数据量处理

      当我们在读取诸如数据列表、报表等大量数据时,可以发现使用EJB的方法是非常慢的,这时可以使用直接访问数据库的方法,用SQL直接存取数据,从而消除EJB带来的额外开销(例如远程方法调用、事务管理、数据序列化、对象的构造等)。

      ·缓存经常使用的数据

      对于构建的业务系统,如果有些数据要经常要从数据库中读取,同时,这些数据又不经常变化,这些数据就可以在系统中缓存起来,使用时直接读取缓存,而不用频繁的访问数据库读取数据。

      缓存工作可以在系统初始化时一次性读取数据,特别是一些只读的数据,当数据更新时更新数据库内容,同时更新缓存的数据值。

      一个例子是,在一套企业应用系统中,企业的信息数据(如企业的名称)在多个业务应用模块中使用,这时就可以把这些数据缓存起来,需要时直接读取缓存的企业信息数据。

      七、总结

      一般意义上说,参与系统运行的代码都会对性能产生影响,实际应用中应该养成良好的编程规范、编写高质量的代码,当系统性能出现问题时,要找到主要影响性能的瓶颈所在,然后集中精力优化这些代码,能达到事半功倍的效果。

      J2EE性能的优化包括很多方面的,要达到一个性能优良的系统,除了关注代码之外,还应该根据系统实际的运行情况,从服务器软硬件环境、集群技术、系统构架设计、系统部署环境、数据结构、算法设计等方面综合考虑。
  • (转)J2EE程序的性能优化技巧(3)

    2007-10-20 22:05:31


    三、I/O 性能

    输入/输出(I/O)包括很多方面,我们知道,进行I/O操作是很费系统资源的,程序中应该尽量少用I/O操作。使用时可以注意以下几点。

      ·合理控制输出

      System.out.println()对于大多时候是有用的,特别是系统调试的时候,但也会产生大量的信息出现在控制台和日志上,同时输出时,有序列化和同步的过程,造成了开销。

      特别是在发行版中,要合理的控制输出,可以在项目开发时,设计好一个Debug的工具类,在该类中可以实现输出开关,输出的级别,根据不同的情况进行不同的输出的控制。

      ·使用缓存

      读写内存要比读写文件要快很多,应尽可能使用缓冲。

      尽可能使用带有Buffer的类代替没有Buffer的类,如可以用BufferedReader 代替Reader,用BufferedWriter代替Writer来进行处理I/O操作。

      同样可以用BufferedInputStream代替InputStream都可以获得性能的提高。

      四、Servlet

      Servlet采用请求——响应模式提供Web服务,通过ServletResponse以及ServletRequest这两个对象来输出和接收用户传递的参数,在服务端处理用户的请求,根据请求访问数据、访问别的Servlet方法、调用EJB等等,然后将处理结果返回给客户端。

      ·尽量不使用同步

      Servlet是多线程的,以处理不同的请求,基于前面同步的分析,如果有太多的同步就失去了多线程的优势了。

      ·不用保存太多的信息在HttpSession中

      很多时候,存储一些对象在HttpSession中是有必要的,可以加快系统的开发,如网上商店系统会把购物车信息保存在该用户的Session中,但当存储大量的信息或是大的对象在会话中是有害的,特别是当系统中用户的访问量很大,对内存的需求就会很高。

      具体开发时,在这两者之间应作好权衡。

      ·清除Session

      通常情况下,当达到设定的超时时间而某些Session又没有活动时,服务器会释放这些没有活动的Session。不过在这种情况下,特别是多用户并发访问时,系统内存中要维护多个无效的Session。

      当用户退出时,应该手动使其失效,回收资源,实现如下:

    HttpSession theSession = request.getSession();
    // 获取当前Session
    if(theSession != null){
     theSession.invalidate(); // 使该Session失效
    }

      五、EJB 问题

      EJB是Java服务器端服务框架的规范,软件厂商根据它来实现EJB服务器。应用程序开发者可以专注于支持应用所需的商业逻辑,而不用担心周围框架的实现问题。EJB规范详细地解释了一些最小但是必须的服务,如事务,安全和名字等。

      ·缓存Home接口

      使用Enterprise Bean 的客户端通过它的Home接口创建Bean的实例。客户端能通过JNDI访问Home接口,即通过lookup方法来获取它。

      JNDI是个远程对象,通过RMI方式调用,对它的访问往往是比较费时的。所以,在设计时可以设计一个类专门用来缓存Home接口,在系统初始化时就获得需要的Home接口并缓存,以后的引用只要引用缓存即可。

      ·封装Entity Bean

      直接访问Entity Bean是个不好的习惯,用会话Bean封装对实体Bean的访问能够改进事务管理,因为每一个对get方法的直接调用将产生一个事务,容器将在每一个实体Bean的事务之后执行一个“Load-Store”操作。

      最好在Session Bean中完成Entity Bean的封装,减少容器的事务处理,并在Session Bean中实现一些具体的业务方法。

      ·释放有状态的Session Bean

      相当于HttpSession,当把一个Session Bean设为Stateful,即有状态的Session Bean 后,应用容器(Container)就可能有“钝化”(Passivate)和活化(Activate)过程,即在主存和二级缓存之间对 SessionBean进行存储位置的转移,在这个过程中,存在序列化过程。

      通常有状态Session Bean的释放是在超时时发生,容器自动的清除该对象,但是如果交给容器管理,一方面可能产生对象钝化,另一方面未超时期间,系统还要维护一份该对象,所以如果我们确认使用完该StatefulSession Bean后不再需要时,可以显式的将其释放掉,方法是调用:

    theSessionBean.remove();
  • (转)J2EE程序的性能优化技巧(2)

    2007-10-20 22:03:42


    ·优化循环体

      循环是代码重复运行的地方,如果循环次数很大,循环体内不好的代码对效率的影响就会被放大而变得突出。考虑下面的代码片段:

    Vector vect = new Vector(1000);
    ...
    for( int i=0; i<vect.size(); i++){
     ...
    }

      for循环部分改写成:

    int size = vect.size();
    for( int i=0; i<size; i++){
     ...
    }

      如果size=1000,就可以减少1000次size()方法调用的开销,避免了在循环体中重复调用size()。

      再看如下的代码片:..

    for (int i = 0;i <100000;i++)
    if (i%10 == 9) {
     ... // 每十次执行一次
    }

      改写成如下形式也可以提高效率:

    for(int i =0,j =10; i<100000; i++,j--){
     if(j == 0){
      ... // 每十次执行一次
      j = 10;
     }
    }

      所以,当有较大的循环时,应该检查循环内是否有效率不高的地方,寻找更优的方案加以改进。

      ·对象的创建

      尽量少用new来初始化一个类的实例,当一个对象是用new进行初始化时,其构造函数链的所有构造函数都被调用到,所以new操作符是很消耗系统资源的,new一个对象耗时往往是局部变量赋值耗时的上千倍。同时,当生成对象后,系统还要花时间进行垃圾回收和处理。

      当new创建对象不可避免时,注意避免多次的使用new初始化一个对象。

      尽量在使用时再创建该对象。如:

    NewObject object = new NewObject();
    int value;
    if(i>0 )
    {
     value =object.getValue();
    }

      可以修改为:

    int value;
    if(i>0 )
    {
     NewObject object = new NewObject();
     value = object.getValue();
    }

      另外,应该尽量重复使用一个对象,而不是声明新的同类对象。一个重用对象的方法是改变对象的值,如可以通过setValue之类的方法改变对象的变量达到重用的目的。

      ·变量的注意事项

      尽量使用局部变量,调用方法时传递的参数以及在调用中创建的临时变量都保存在栈(Stack) 中,速度较快。其他变量,如静态变量、实例变量等,都在堆(Heap)中创建,速度较慢。

      尽量使用静态变量,即加修饰符static,如果类中的变量不会随他的实例而变化,就可以定义为静态变量,从而使他所有的实例都共享这个变量。

      ·方法(Method)调用

      在Java中,一切都是对象,如果有方法(Method)调用,处理器先要检查该方法是属于哪个对象,该对象是否有效,对象属于什么类型,然后选择合适的方法并调用。

      可以减少方法的调用,同样一个方法:

    public void CallMethod(int i ){
     if( i ==0 ){
      return;
     }
     ... // 其他处理
    }

      如果直接调用,

    int i = 0;
    ...
    CallMethod(i);

      就不如写成:

    int i = 0;
    ...

    if( i ==0 ){
     CallMethod(i);
    }

      不影响可读性等情况下,可以把几个小的方法合成一个大的方法。

      另外,在方法前加上final,private关键字有利于编译器的优化。

      ·慎用异常处理

      异常是Java的一种错误处理机制,对程序来说是非常有用的,但是异常对性能不利。抛出异常首先要创建一个新的对象,并进行相关的处理,造成系统的开销,所以异常应该用在错误处理的情况,不应该用来控制程序流程,流程控制应尽量用while、if等语句处理。

      在不是很影响代码健壮性的前提下,可以把几个try/catch块合成一个。

      ·同步

      同步主要出现在多线程的情况,为多线程同时运行时提供对象数据安全的机制,多线程是比较复杂话题,应用多线程也是为了获得性能的提升,应该尽可能减少同步。

      另外,如果需要同步的地方,可以减少同步的代码段,如只同步某个方法或函数,而不是整个代码。

      ·使用Java系统API

      Java的API一般都做了性能的考虑,如果完成相同的功能,优先使用API而不是自己写的代码,如数组复制通常的代码如下:

    int size = 1000;
    String[] strArray1 = new String[size];
    String[] strArray2 = new String[size];
    for(int i=0;i<size;i++){ // 赋值
     strArray1[i] = new String("Array: " + i);
    }

    for(int i=0;i<size;i++){ // 复制
     strArray2[i] = new String(strArray1[i]);
    }

      如果使用Java提供的API,就可以提高性能:

    int size = 1000;
    String[] strArray1 = new String[size];
    String[] strArray2 = new String[size];
    for(int i=0;i<size;i++){ // 赋值
    strArray1[i] = new String("Array: " + i);
    }

    System.arraycopy(strArray1,0,strArray2,0,size); // 复制

      同样的一个规则是,当有大量数据的复制时,应该使用System.arraycopy()。
  • [转]J2EE程序的性能优化技巧(1)

    2007-10-20 22:01:37

        应用J2EE平台开发的系统的性能是系统使用者和开发者都关注的问题,本文从服务端编程时应注意的几个方面讨论代码对性能的影响,并总结一些解决的建议。

      关键词:性能,Java,J2EE,EJB,Servlet,JDBC

      一、概要

      Java 2 Platform, Enterprise Edition (J2EE)是当前很多商业应用系统使用的开发平台,该技术提供了一个基于组件的方法来设计、开发、装配和部署企业级应用程序。J2EE平台提供了一个多层结构的分布式的应用程序模型,可以更快地开发和发布的新的应用解决方案。J2EE是一种技术规范,定义了整个标准的应用开发体系结构和一个部署环境,应用开发者开发时只要专注于具体商业逻辑和商业业务规则的实现上,而其他的诸如事务、持久化、安全等系统开发问题可以由应用程序容器或者服务器处理,开发完成后,就可以方便地部署到实现规范的应用服务器中。

      作为网络上的商业应用系统,同时访问的人数是很多的,在大量访问的情况下,过多的资源请求和有限的服务器资源(内存、CPU时间、网络带宽等)之间就会出现矛盾,应用系统的性能就显得很重要了,有时正确的代码并不能保证项目的成功,性能往往是最后决定一个项目是否成功的关键。

      本文主要从性能的角度出发,讨论J2EE服务器端的代码性能优化和提升。

      二、常见的Java 编程

      J2EE语言基础是Java,常用的Java代码问题对应用系统的性能影响,下面讨论了一些应该注意方面。

      ·使用StringBuffer代替String

      当处理字符串的相加时,常见的写法是:..

    String str1 = "Hello";
    String str2 = "welcome to world";
    String str3 = str1 + ", " + str2 +"!";
    System.out.println(str3);

      很多人都知道,这样的代码效率是很低的,因为String是用来存储字符串常量的,如果要执行“+”的操作,系统会生成一些临时的对象,并对这些对象进行管理,造成不必要的开销。

      如果字符串有连接的操作,替代的做法是用StringBuffer类的append方法,它的缺省构造函数和append的实现是:

    public StringBuffer() { // 构造函数
        this(16); // 缺省容量16
    }

    public synchronized StringBuffer append(String str) {
     if (str == null) {
      str = String.valueOf(str);
     }

     int len =str.length();
     int newcount = count + len;
     if (newcount > value.length)
         expandCapacity(newcount); // 扩充容量
     str.getChars(0, len, value, count);
     count = newcount;
     return this;
    }

      当字符串的大小超过缺省16时,代码实现了容量的扩充,为了避免对象的重新扩展其容量,更好的写法为:

    StringBuffer buffer = new StringBuffer(30);
    // 分配指定的大小。
    buffer.append("hello");
    buffer.append(",");
    buffer.append("welcometo world!");
    String str = buffer.toString();

      ·生成对象时,分配合理的空间和大小

      Java中的很多类都有它的默认的空间分配大小,对于一些有大小的对象的初始化,应该预计对象的大小,然后使用合适的初始容量进行初始化,上面的例子也说明了这个问题,StringBuffer创建时,我们指定了它的大小。

      另外的一个例子是Vector,当声明Vector vect=new Vector()时,系统调用:

    public Vector() {// 缺省构造函数
     this(10); // 容量是 10;
    }

      缺省分配10个对象大小容量。当执行add方法时,可以看到具体实现为:..

    public synchronized boolean add(Object o) {
     modCount++;
     ensureCapacityHelper(elementCount+1);
     elementData[elementCount++] =o;

     return true;
    }

    private void ensureCapacityHelper(int minCapacity) {
     int oldCapacity = elementData.length;
     if (minCapacity > oldCapacity) {
      Object oldData[] = elementData;
      int newCapacity = (capacityIncrement > 0) ? (oldCapacity + capacityIncrement) :
    (oldCapacity * 2);
      if (newCapacity < minCapacity) {
       newCapacity = minCapacity;
      }
      elementData = new Object[newCapacity];
      System.arraycopy(oldData, 0, elementData, 0, elementCount);
     }
    }

      我们可以看到,当Vector大小超过原来的大小时,一些代码的目的就是为了做容量的扩充,在预先知道该Vector大小的话,可以指定其大小,避免容量扩充的开销,如知道Vector大小为100时,初始化时就可以写成 Vector vect = new Vector(100)。
  • When Should a Test Be Automated?

    2007-10-20 20:33:16



    Brian Marick
    Testing Foundations
    marick@testing.com
    I want to automate as many tests as I can. I’m not comfortable running a test only once.
    What if a programmer then changes the code and introduces a bug? What if I don’t catch
    that bug because I didn’t rerun the test after the change? Wouldn’t I feel horrible?
    Well, yes, but I’m not paid to feel comfortable rather than horrible. I’m paid to be cost-effective.
    It took me a long time, but I finally realized that I was over-automating, that
    only some of the tests I created should be automated. Some of the tests I was automating
    not only did not find bugs when they were rerun, they had no significant prospect of doing
    so. Automating them was not a rational decision.
    The question, then, is how to make a rational decision. When I take a job as a contract
    tester, I typically design a series of tests for some product feature. For each of them, I
    need to decide whether that particular test should be automated. This paper describes
    how I think about the tradeoffs.
    Scenarios
    In order for my argument to be clear, I must avoid trying to describe all possible testing
    scenarios at once. You as a reader are better served if I pick one realistic and useful
    scenario, describe it well, and then leave you to apply the argument to your specific
    situation. Here’s my scenario:
    1. You have a fixed level of automation support. That is, automation tools are available.
    You know how to use them, though you may not be an expert. Support libraries have
    been written. I assume you’ll work with what you’ve got, not decide to acquire new
    tools, add more than simple features to a tool support library, or learn more about test
    automation. The question is: given what you have now, is automating this test
    justified? The decision about what to provide you was made earlier, and you live with
    it.
    In other scenarios, you might argue for increased automation support later in the
    project. This paper does not directly address when that’s a good argument, but it
    provides context by detailing what it means to reduce the cost or increase the value of
    automation.
    2. There are only two possibilities: a completely automated test that can run entirely
    unattended, and a "one-shot" manual test that is run once and then thrown away.
    These are extremes on a continuum. You might have tests that automate only
    cumbersome setup, but leave the rest to be done manually. Or you might have a
    manual test that’s carefully enough documented that it can readily be run again. Once
    you understand the factors that push a test to one extreme or the other, you’ll know
    better where the optimal point on the continuum lies for a particular test.
    3. Both automation and manual testing are plausible. That’s not always the case. For
    example, load testing often requires the creation of heavy user workloads. Even if it
    were possible to arrange for 300 testers to use the product simultaneously, it’s surely
    not cost-effective. Load tests need to be automated.
    4. Testing is done through an external interface ("black box testing"). The same analysis
    applies to testing at the code level - and a brief example is given toward the end of the
    paper - but I will not describe all the details.
    5. There is no mandate to automate. Management accepts the notion that some of your
    tests will be automated and some will be manual.
    6. You first design the test and then decide whether it should be automated. In reality,
    it’s common for the needs of automation to influence the design. Sadly, that
    sometimes means tests are weakened to make them automatable. But - if you
    understand where the true value of automation lies - it can also mean harmless
    adjustments or even improvements.
    7. You have a certain amount of time to finish your testing. You should do the best
    testing possible in that time. The argument also applies in the less common situation
    of deciding on the tests first, then on how much time is required.
    Overview
    My decision process uses these questions.
    1. Automating this test and running it once will cost more than simply running it
    manually once. How much more?
    2. An automated test has a finite lifetime, during which it must recoup that additional
    cost. Is this test likely to die sooner or later? What events are likely to end it?
    3. During its lifetime, how likely is this test to find additional bugs (beyond whatever
    bugs it found the first time it ran)? How does this uncertain benefit balance against the
    cost of automation?
    If those questions don’t suffice for a decision, other minor considerations might tip the
    balance.
    The third question is the essential one, and the one I’ll explore in most detail.
    Unfortunately, a good answer to the question requires a greater understanding of the
    product’s structure than testers usually possess. In addition to describing what you can do
    with that understanding, I’ll describe how to get approximately the same results without it.
    What Do You Lose With Automation?
    Creating an automated test is usually more time-consuming (expensive) than running it
once manually.[1] The cost differential varies, depending on the product and the automation
    style.
    · If the product is being tested through a GUI (graphical user interface), and your
automation style is to write scripts (essentially simple programs) that drive the GUI, an
    automated test may be several times as expensive as a manual test.
    · If you use a GUI capture/replay tool that tracks your interactions with the product and
    builds a scrīpt from them, automation is relatively cheaper. It is not as cheap as
    manual testing, though, when you consider the cost of recapturing a test from the
    beginning after you make a mistake, the time spent organizing and documenting all the
    files that make up the test suite, the aggravation of finding and working around bugs in
    the tool, and so forth. Those small "in the noise" costs can add up surprisingly
    quickly.
    · If you’re testing a compiler, automation might be only a little more expensive than
    manual testing, because most of the effort will go into writing test programs for the
    compiler to compile. Those programs have to be written whether or not they’re saved
    for reuse.
    Suppose your environment is very congenial to automation, and an automated test is only
    10% more expensive than a manual test. (I would say this is rare.) That still means that,
    once you’ve automated ten tests, there’s one manual test - one unique execution of the
    product - that is never exercised until a customer tries it. If automation is more expensive,
    those ten automated tests might prevent ten or twenty or even more manual tests from
    ever being run. What bugs might those tests have found?
    So the first test automation question is this:
    If I automate this test, what manual tests will I lose? How many bugs might I lose
    with them? What will be their severity?
    The answers will vary widely, depending on your project. Suppose you’re a tester on a
    telecom system, one where quality is very important and the testing budget is adequate.
    Your answer might be "If I automate this test, I’ll probably lose three manual tests. But
    I’ve done a pretty complete job of test design, and I really think those additional tests
    would only be trivial variations of existing tests. Strictly speaking, they’d be different
    executions, but I really doubt they’d find serious new bugs." For you, the cost of
    automation is low.
[1] There are exceptions. For example, perhaps tests can be written in a tabular format. A tool can then process the
    table and drive the product. Filling in the table might be faster than testing the product manually. See
    [Pettichord96] and [Kaner97] for more on this style. If manual testing is really more expensive, most of the analysis
    in this paper does not apply. But beware: people tend to underestimate the cost of automation. For example,
    filling in a table of inputs might be easy, but automated results verification could still be expensive. Thanks to
    Dave Gelperin for pressing me on this point.
Or you might be testing version 1.0 of a shrinkwrap product whose product direction
and code base have changed wildly in the last few months. Your answer might be "Ha! I
    don’t even have time to try all the obvious tests once. In the time I would spend
    automating this test, I guarantee I could find at least one completely new bug." For you,
    the cost of automation is high.
    My measure of cost - bugs probably foregone - may seem somewhat odd. People usually
    measure the cost of automation as the time spent doing it. I use this measure because the
    point of automating a test is to find more bugs by rerunning it. Bugs are the value of
automation, so the cost should be measured the same way.[2]

[2] I first learned to think about the cost of automation in this way during conversations with Cem
Kaner. Noel Nyman points out that it's a special case of John Daly's Rule, which has you always ask
this question of any activity: "What bugs aren't I finding while I'm doing that?"
    A note on estimation
    I’m asking you for your best estimate of the number of bugs you’ll miss, on average, by
    automating a single test. The answer will not be "0.25". It will not even be "0.25 ±
    0.024". The answer is more like "a good chance at least one will be missed" or "probably
    none".
    Later, you’ll be asked to estimate the lifetime of the test. Those answers will be more like
    "probably not past this release" or "a long time" than "34.6 weeks".
    Then you’ll be asked to estimate the number of bugs the automated test will find in that
    lifetime. The answer will again be indefinite.
    And finally, you’ll be asked to compare the fuzzy estimate for the manual test to the fuzzy
    estimate for the automated test and make a decision.
    Is this useful?
    Yes, when you consider the alternative, which is to make the same decision - perhaps
    implicitly - with even less information. My experience is that thinking quickly about these
    questions seems to lead to better testing, despite the inexactness of the answers. I favor
    imprecise but useful methods over precise but misleading ones.
    How Long Do Automated Tests Survive?
    Automated tests produce their value after the code changes. Except for rare types of
    tests, rerunning a test before any code changes is a waste of time: it will find exactly the
same bugs as before. (The exceptions, such as timing and stress tests, can be analyzed in
roughly the same way. I omit them for simplicity.)
    But a test will not last forever. At some point, the product will change in a way that
    breaks the test. The test will have to either be repaired or discarded. To a reasonable
approximation, repairing a test costs as much as throwing it away and writing it from scratch.[3]
Whichever you do when the test breaks, if it hasn't repaid the automation effort by that point, you
would have been better off leaving it as a manual test.

[3] If you're using a capture/replay tool, you re-record the test. That probably costs more than
recording it in the first place, when you factor in the time spent figuring out what the test was
supposed to do. If you use test scripts, you need to understand the current script, modify it, try it
out, and fix whatever problems you uncover. You may discover the new script can't readily do
everything that the old one could, so it's better to break it into two scripts. And so on. If your
testing effort is well-established, repairing a scripted test may be cheaper than writing a new one.
That doesn't affect the message of the paper; it only reduces one of the costs of automation. But
make sure you've measured the true cost of repair: people seem to guess low.
    In short, the test’s useful lifespan looks like this:
[Figure: Test created → Test run → Code change → Test run → Code change → Test run → Code change → Test run; it's dead.]
    When deciding whether to automate a test, you must estimate how many code changes it
    will survive. If the answer is "not many", the test had better be especially good at finding
    bugs.
    To estimate a test’s life, you need some background knowledge. You need to understand
    something of the way code structure affects tests. Here’s a greatly simplified diagram to
    start with.
[Figure: the Code Under Test, reachable only through a layer of Intervening Code.]
Suppose your task is to write a set of tests that check whether the product correctly
validates phone numbers that the user types in. These tests check whether phone numbers
have the right number of digits, don't use any disallowed digits, and so on. If you
understood the product code (and I understand that you rarely do), you could take a
program listing and use a highlighter to mark the phone number validation code. I'm
going to call that the code under test. It is the code whose behavior you thought about
to complete your testing task.
    In most cases, you don’t exercise the code under test directly. For example, you don’t give
    phone numbers directly to the validation code. Instead, you type them into the user
    interface, which is itself code that collects key presses, converts them into internal
    program data, and delivers that data to the validation routines. You also don’t examine
    the results of the validation routines directly. Instead, the routines pass their results to
    other code, which eventually produces results visible at the user interface (by, for example,
    producing an error popup). I will call the code that sits between the code under test and
    the test itself the intervening code.
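To make the layering concrete, here is a toy sketch (Python, every name invented for illustration): validate_phone_number plays the role of the code under test, and the handler wrapped around it is intervening code that turns typed text into data and turns the result into something the user can see.

    import re

    # Code under test: pure validation logic, no user interface involved.
    def validate_phone_number(number):
        if re.search(r"[^0-9\- ]", number):
            return "disallowed character"
        digits = re.sub(r"[^0-9]", "", number)
        if len(digits) != 10:
            return "wrong number of digits"
        return None   # None means the number is valid

    # Intervening code: collects what the user typed, calls the code under
    # test, and turns the result into something visible on the screen.
    def on_phone_field_submitted(ui, typed_text):
        error = validate_phone_number(typed_text)
        if error:
            ui.show_error_popup("Invalid phone number: " + error)
        else:
            ui.accept_input(typed_text)

A test that types "217-555-1212" into the product never calls validate_phone_number directly; everything it sends and sees passes through code like on_phone_field_submitted.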
    Changes to the intervening code
    The intervening code is a major cause of test death. That’s especially true when it’s a
    graphical user interface as opposed to, say, a textual interface or the interface to some
    standard hardware device. For example, suppose the user interface once required you to
    type in the phone number. But it’s now changed to provide a visual representation of a
    phone keypad. You now click on the numbers with a mouse, simulating the use of a real
    phone. (A really stupid idea, but weirder things have happened.) Both interfaces deliver
    exactly the same data to the code under test, but the UI change is likely to break an
    automated test, which no longer has any place to "type" the phone number.
    As another example, the way the interface tells the user of an input error might change.
    Instead of a popup dialog box, it might cause the main program window to flash red and
    have the sound card play that annoying "your call cannot be completed as dialed" tone.
The test, which looks for a popup dialog, will consider the new correct action a bug. It
    is effectively dead.
    "Off the shelf" test automation tools can do a limited job of preventing test death. For
    example, most GUI test automation tools can ignore changes to the size, position, or color
    of a text box. To handle larger changes, such as those in the previous two paragraphs,
they must be customized. That is done by having someone in your project create product-specific
    test libraries. They allow you, the tester, to write your tests in terms of the
    feature you’re testing, ignoring - as much as possible - the details of the user interface. For
    example, your automated test might contain this line:
    try 217-555-1212
    try is a library routine with the job of translating a phone number into terms the user
    interface understands. If the user interface accepts typed characters, try types the phone
    number at it. If it requires numbers to be selected from a keypad drawn on the screen,
    try does that.
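As a rough sketch of what such a routine might look like (Python; "try" is a reserved word in most languages, so it is called try_number here, and the gui object stands in for whatever driver your automation tool actually provides):

    # Hypothetical test-library routine: express "try this phone number"
    # in whatever terms the current user interface requires.
    def try_number(gui, phone_number):
        if gui.has_widget("phone_entry"):        # old UI: a text field
            gui.type_into("phone_entry", phone_number)
            gui.click("ok_button")
        else:                                    # new UI: an on-screen keypad
            for ch in phone_number:
                if ch.isdigit():
                    gui.click("keypad_" + ch)
            gui.click("keypad_enter")
        # Condense whatever the UI reports (popup, flashing window, tone)
        # into the one fact the test cares about: accepted or rejected.
        return not gui.reported_error()

When the keypad interface arrives, only this routine changes; the tests that call it do not.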
    In effect, the test libraries filter out irrelevant information. They allow your test to specify
    only and exactly the data that matters. On input, they add additional information required
    by the intervening code. On output, they condense all the information from the
    intervening code down to the important nugget of information actually produced by the
    code under test. This filtering can be pictured like this:
[Figure: the Test Tool and Libraries sit between your tests and the Intervening Code, which in turn surrounds the Code Under Test.]
    Many user interface changes will require no changes to tests, only to the test library.
    Since there is (presumably) a lot more test code than library code, the cost of change is
    dramatically lowered.
    However, even the best compensatory code cannot insulate tests from all changes. It’s just
    too hard to anticipate everything. So there is some likelihood that, at some point in the
    future, your test will break. You must ask this question:
    How well is this test protected from changes to the intervening code?
You need to assess how likely it is that intervening code changes will affect your test. If
    they’re extremely unlikely - if, for example, the user interface really truly is fixed for all
    time - your test will have a long time to pay back your effort in automating it. (I would
    not believe the GUI is frozen until the product manager is ready to give me $100 for every
    future change to it.)
    If changes are likely, you must then ask how confident you are that your test libraries will
    protect you from them. If the test library doesn’t protect you, perhaps it can be easily
    modified to cope with the change. If a half-hour change rescues 300 tests from death,
    that’s time well spent. Beware, though: many have grossly underestimated the difficulty
    of maintaining the test library, especially after it’s been patched to handle change after
    change after change. You wouldn’t be the first to give up, throw out all the tests and the
    library, and start over.
    If you have no test libraries - if you are using a GUI test automation tool in capture/replay
    mode - you should expect little protection. The next major revision of the user interface
    will kill many of your tests. They will not have much time to repay their cost. You’ve
    traded low creation cost for a short lifetime.
    Changes to the code under test
    The intervening code isn’t the only code that can change. The code under test can also
    change. In particular, it can change to do something entirely different.
    For example, suppose that some years ago someone wrote phone number validation tests.
    To test an invalid phone number, she used 1-888-343-3533. At that time, there was no
    such thing as an "888" number. Now there is. So the test that used to pass because the
    product correctly rejected the number now fails because the product correctly accepts a
    number that the test thinks it should reject. This may or may not be simple to fix. It’s
    simple if you realize what the problem is: just change "888" to "889". But you might
    have difficulty deciphering the test well enough to realize it’s checking phone number
    validation. (Automated tests are notoriously poorly documented.) Or you might not
    realize that "888" is now a valid number, so you think the test has legitimately found a
    bug. The test doesn’t get fixed until after you’ve annoyed some programmer with a
    spurious bug report.
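A concrete illustration of how such a test goes stale (Python sketch, invented names):

    # Test data chosen years ago, before "888" numbers existed.
    INVALID_NUMBERS = [
        "1-888-343-3533",   # fails spuriously once 888 becomes a real prefix
        "1-555-12",         # too short; still a legitimate invalid number
    ]

    def test_invalid_numbers_are_rejected(product):
        for number in INVALID_NUMBERS:
            assert product.rejects(number), number + " should have been rejected"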
    So, when deciding whether to automate a test, you must also ask:
How stable is the behavior of the code under test?
Note the emphasis on "behavior". Changes to the code are fine, so long as they leave the
externally visible behavior the same.
    Different types of products, and different types of code under test, have different
    stabilities. Phone numbers are actually fairly stable. Code that manipulates a bank
    account is fairly stable in the sense that tests that check whether adding $100 to an
account with $30 yields an account with $130 are likely to continue to work (except when
changes to the intervening code break them). Graphical user interfaces are notoriously
    unstable.
Additions to behavior are often harmless. For example, you might have a test that checks
    that withdrawing $100 from an account with $30 produces an error and no change in the
    account balance. But, since that test was written, a new feature has been added:
    customers with certain accounts have "automatic overdraft protection", which allows them
    to withdraw more money than they have in the account (in effect, taking out a loan). This
    change will not break the existing test, so long as the default test account has the old
behavior. (Of course, new tests must be run against the new behavior.)
    Where do we stand?
    We now know the hurdle an automated test must leap: its value must exceed the value of
    all the manual tests it prevents. We’ve estimated the lifespan of the test, the time during
    which it will have opportunities to produce value. Now we must ask how likely it is that
    the test actually will produce value. What bugs might we expect from it?
    Will the Test Have Continued Value?
    The argument here is complicated, so I will outline it first.
    1. The code under test has structure. As a useful approximation, we can divide it into
    feature code and support code.
    2. Tests are typically written to exercise the feature code. The support code is invisible
    to the tester.
3. But changes to the feature code usually change behavior. Hence, they are more likely
    to end a test’s life than to cause it to report a bug.
    4. Most of a test’s value thus comes from its ability to find bugs in changes to the support
    code.
    5. But we don’t know anything about the support code! How can we know if there will
    be future bugs for the test to find? How can we guess if the test will do a good job at
    finding those bugs?
    - There will be bugs if there’s change. If there’s been change in the past, there will
    likely be more change in the future.
    - We may have a hard time knowing whether a test will do a good job, but there’s
    one characteristic that ensures it will do a bad one. Don’t automate such tests.
    6. The code under test interacts with the rest of the product, which can be considered
    still more support code. Changes to this support code also cause bugs that we hope
    automated tests will find.
    - We can again identify a characteristic of low-value tests. High-value tests are
    unlikely to be feature-driven tests; rather, they will be task-driven.
    More terminology
    To understand what makes an automated test have value, you need to look more closely at
    the structure of the code under test.
    Suppose the code under test handles withdrawals from a bank account. Further suppose
    that each test’s purpose has been concisely summarized. Such summaries might read like
    this:
    · "Check that a cash withdrawal of more than $9,999 triggers a Large Withdrawal audit
    trail record."
    · "Check that you can completely empty the account."
    · "Check that overdrafts of less than $100 are automatically allowed and lead to a Quick
    Loan tag being added to the account."
    · "Check that you can withdraw money from the same account at most four times a
    day."
    Now suppose you looked at a listing of the code under test and used a highlighter to mark
    the code that each test was intended to exercise. For example, the first test purpose
    would highlight the following code:
    if (amount > 9999.00) audit(transaction, LARGE_WITHDRAWAL);
    When you finished, not all of the code under test would be highlighted. What’s the
    difference between the highlighted and unhighlighted code? To see, let’s look more closely
    at the last purpose ("check that you can withdraw money from the same account at most
    four times a day"). I can see two obvious tests.
    1. The first test makes four withdrawals, each of which should be successful. It then
    attempts another withdrawal, which should fail.
    2. The next makes four withdrawals, each of which should be successful. It waits until
    just after midnight, then makes four more withdrawals. Each of those should again
    succeed. (It might also make a fifth withdrawal, just to see that it still fails.)
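The first of those might be automated like this (Python sketch; the bank fixture and its calls are invented names standing in for whatever your test library provides):

    def test_at_most_four_withdrawals_per_day(bank):
        account = bank.new_account(balance=500.00)
        for _ in range(4):
            assert bank.withdraw(account, 10.00).succeeded
        # A fifth withdrawal on the same day must be refused.
        assert not bank.withdraw(account, 10.00).succeeded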
    How does the code under test know that there have been four withdrawals from an
    account today? Perhaps there’s a chunk of code that maintains a list of all withdrawals
    that day. When a new one is attempted, it searches for matching account numbers in the
    list and totals up the number found.
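That searching code might look something like this sketch (Python, invented names); nothing in the test purposes above points a tester at it:

    # Support code: invisible from the feature list, exercised only incidentally.
    def withdrawals_today(todays_withdrawals, account_number):
        count = 0
        for record in todays_withdrawals:   # linear scan of every withdrawal made today, by anyone
            if record.account_number == account_number:
                count += 1
        return count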
    How well will the two tests exercise that searching code? Do they, for example, check
    that the searching code can handle very long lists of, say, 10,000 people each making one
    or a couple of withdrawals per day? No. Checking that wouldn’t occur to the tester,
    because the searching code is completely hidden.
    As a useful approximation, I divide the code under test into two parts:
    1. The feature code directly implements the features that the code under test provides. It
    is intentionally exercised by tests. It performs those operations a user selects (via the
    intervening code of the user interface).
    2. The support code supports the feature code. Its existence is not obvious from an
external description of the code under test; that is, from a list of features that code
    provides. It is exercised by tests written for other reasons, but no test specifically
    targets it.
    Here’s a picture:
    When Should a Test Be Automated?
    11
    Here, the support code lies beneath the horizontal line. The feature code lies above it.
    There are five distinct features. The particular test shown happens to exercise two of
    them.
    (Note that I am describing strictly the code under test. As we’ll see later, the rest of the
    system has some additional relevant structure.)
    The effects of change within the code under test
    Given this structure, what can we say about the effects of change? What types of change
    cause tests to have value?
Suppose some feature code is changed, as shown by the gray box in this picture:

[Figure: the same diagram with one feature's code shaded gray to mark the change.]
    It is very likely that this change will break tests that exercise the feature code. Most
changes to feature code are intended to change behavior in some way. Thus, if you're
    hoping for your automated test to discover bugs in changed feature code, you stand a
    good chance of having the test die just as it has its first chance to deliver value. This is not
    economical if the cost of automating the test was high.
What changes to the code under test should leave a given test's behavior alone? I believe
    the most common answer is changes to support code made in support of changes to other
feature code. Consider this picture:

[Figure: the diagram again, now with two changed features, one added feature, and changed support code beneath them.]
    Two features have changed. One new feature has been added. To make those changes
    and additions work, some support code - exercised only accidentally by any tests - has
    been changed. That changed support code is also used by unchanged features, and it is
    exercised by tests of those unchanged features. If - as is certainly plausible - the support
    code was changed in a way that lets it work for the new code, but breaks it for the old
    code, such tests have a chance of catching it. But only if they’re rerun, which is more
    likely if they’re automated.
    This, then, is what I see as the central insight, the central paradox, of automated testing:
    An automated test’s value is mostly unrelated to the specific purpose
    for which it was written. It’s the accidental things that count:
    the untargeted bugs that it finds.
    The reason you wrote the test is not the reason it finds bugs. You wrote the test to check
    whether withdrawing all money from a bank account works, but it blows up before it even
    gets to that step.
    This is a problem. You designed a test to target the feature code, not the support code.
    You don’t know anything about the support code. But, armed with this total lack of
    knowledge, you need to answer two sets of questions. The first one is:
    How much will the support code change? How buggy are those changes likely to
    be?
    Unless there’s a reasonable chance of changes with bugs, you won’t recover your costs.
    Not easy questions to answer. I’ll discuss one way to go about it shortly, by means of an
    example.
    Support code changes aren’t enough. The test also must be good at detecting the bugs
    that result. What confidence can you have in that?
    Suppose that you have three tests targeted at some particular feature. For example, one
    test tries withdrawing all the money in the account. Another withdraws one cent. Another
    withdraws all the money but one cent. How might those tests exercise the code? They
    may exercise the feature code differently, but they will likely all exercise the support code
    in almost exactly the same way. Each retrieves a record from the database, updates it, and
puts it back. Here's a picture:

[Figure: three tests reaching the same support code through different feature code; their paths through the support code are identical.]
    As far as the support code is concerned, these three tests are identical. If there are bugs to
    be found in the support code, they will either all find them, or none of them will. Once
    one of these tests has been automated, automating the other two adds negligible value
(unless the feature code is likely to change in ways intended to preserve behavior).
    That given, here’s a question to ask when considering whether to automate a test:
If you ignore what the test does to directly fulfill its purpose, are the remaining
test actions somehow different from those done by other tests?
    In other words, does the test have seemingly irrelevant variety? Does it do things in
    different orders, even though the ordering shouldn’t make any difference? You hope a test
    with variety exercises the support code differently than other tests. But you can’t know
    that, not without a deep understanding of the code.
    This is a general-purpose question for estimating a test’s long-term value. As you get
    more experience with the product, it will be easier for you to develop knowledge and
    intuition about what sorts of variations are useful.
    An example
    By this point, you’re probably screaming, "But how do I use this abstract knowledge of
    product structure in real life?" Here’s an example that shows how I’d use everything
    presented in this paper so far.
    Suppose I were testing an addition to the product. It’s half done. The key features are in,
    but some of the ancillary ones still need to be added. I’d like to automate tests for those
    key features now. The sooner I automate, the greater the value the tests can have.
    But I need to talk to people first. I’ll ask these questions:
    · Of the programmer: Is it likely that the ancillary features will require changes to
    product support code? It may be that the programmer carefully laid down the support
    code first, and considers the remaining user-visible features straightforward additions
    to that work. In that case, automated tests are less likely to have value. But the
    programmer might know that the support code is not a solid infrastructure because she
    was rushed to get this alpha version finished. Much rework will be required.
    Automated tests are more clearly called for. Or the programmer might have no idea -
    which is essentially the same thing as answering "yes, support code will change".
    · Of the product manager or project manager: Will this addition be an important part of
    the new release? If so, and if there’s a hot competitive market, it’s likely to change in
    user-visible ways. How much has the user interface changed in the past, and why
    should I expect it to change less often in the future? Are changes mostly additions, or
is existing behavior redone? I want a realistic assessment of the chance of change,
    because change will raise the cost of each automated test and shorten its life.
    · Of the person who knows most about the test automation toolset: What’s its track
    record at coping with product change? What kinds of changes tend to break tests (if
    that’s known)? Are those kinds possibilities for the addition I’m testing?
    I should already know how many manual tests automating a test will cost me, and I now
    have a very rough feel for the value and lifetime of tests. I know that I’ll be wrong, so I
    want to take care not to be disastrously wrong. If the product is a shrinkwrap product
    with a GUI (where the cost of automation tends to be high and the lifetime short), I’ll err
    on the side of manual tests.
    But that doesn’t mean I’ll have no automated tests. I’ll follow common practice and create
    what’s often called a "smoke test", "sniff test", or "build verification suite". Such suites
    are run often, typically after a daily build, to "validate the basic functionality of the system,
    preemptively catching gross regressions" [McCarthy95]. The smoke test suite "exercises
    the entire system from end to end. It does not have to be an exhaustive test, but it should
    be capable of detecting major problems. By definition, if the build passes the smoke test,
    that means that it is stable enough to be tested and is a good build." [McConnell96]
    The main difference between my smoke test suite and anyone else’s is that I’ll concentrate
    on making the tests exercise different paths through the support code by setting them up
    differently, doing basic operations in different orders, trying different environments, and so
    forth. I’ll think about having a variety of "inessential" steps. These tests will be more
    work to write, but they’ll have more value. (Note that I’ll use variety in even my manual
    testing, to increase the chance of stumbling over bugs in support code.)
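As an illustration of what that "inessential" variety might look like (Python sketch, invented names), each run of a smoke test can keep the same checks while varying the setup and the order of operations that should not matter:

    import random

    def smoke_test_withdrawals(bank, seed=None):
        rng = random.Random(seed)   # log the seed so a failing run can be replayed
        # Vary the setup: sometimes a fresh account, sometimes one with history.
        make_account = rng.choice([
            lambda: bank.new_account(balance=100.00),
            lambda: bank.account_with_history(balance=100.00, prior_withdrawals=3),
        ])
        account = make_account()
        # Vary the order of basic operations that "shouldn't" make a difference.
        steps = [
            lambda: bank.deposit(account, 25.00),
            lambda: bank.check_balance(account),
            lambda: bank.withdraw(account, 10.00),
        ]
        rng.shuffle(steps)
        for step in steps:
            step()
        # The check itself is the same on every run.
        assert bank.check_balance(account) == 115.00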
    My smoke test suite might also be smaller. I may omit tests for some basic features if I
    believe they wouldn’t exercise the support code in a new way or their lifetime will be
    shorter than average.
    I now have an automated test suite that I know is suboptimal. That’s OK, because I can
    improve it as my knowledge improves. Most importantly, I’ll be tracking bug reports and
    fixes, both mine and those others file against the code under test. From them I’ll discover
    important information:
    · what seemingly irrelevant factors actually turn up bugs. I might discover through a
    bug report that the number of other people who’ve withdrawn money is, surprisingly,
    relevant because of the structure of the support code. I now know to include
    variations in that number in my tests, be they manual or automated. I’ve gained a
    shallow understanding of the support code and the issues it has to cope with. Shallow,
    but good enough to design better tests.
    · where bugs lie. By talking to the developer I’ll discover whether many bugs have been
    in the support code. If so, it’s likely that further bugs will be, so I’ll be encouraged to
    automate.
    · how stable the code’s behavīor really is. I’m always skeptical of claims that "this
    interface is frozen". So if it really was, I’ll have underautomated. After the smoke test
    is done, I’ll keep testing - both creating new tests for the key features and testing the
    ancillary features as they are completed. As I grow more confident about stability, I’ll
    automate more of those tests (if there’s enough time left in the project for that to be
    worthwhile).
    Over time, my decisions about whether to automate or run a test manually will get better.
    I will also get a larger test suite.
    When feature code changes, I hope that the smoke tests are unaffected. If the change
    passes the smoke test, I must now further test it manually. That involves rerunning some
    older manual tests. If they were originally documented in terse, checklist form, they can
    readily be rerun. These new executions won’t be the same as the earlier ones; they may be
    rather different. That’s good, as they can serve both to check regressions and perhaps find
    bugs that were there all along. I must also create new tests specific to the change. It is
    unlikely that old tests will do a thorough job on code that didn’t exist when they were first
    designed. I’ll decide whether to automate the new tests according to the usual criteria.
    Sometimes, product changes will break tests. For example, I’d expect the development for
    a new major release to break many old tests. As I looked at each broken test, I’d decide
    anew whether automation was justified. I have seen testing organizations where vast
    amounts of effort are spent keeping the automated test suites running, not because the
    tests have value, but because people really hate to throw them away once they’re written.
    Example: Automated product-level tests after code-level testing
    What if the code under test contains no untargeted code? What if there are specific tests,
    including code-level tests, for every line of code in the system, including support code that
    would be invisible to a product-level "black box" tester? In that case, fewer product-level
    tests are needed. Many bugs in changed support code will be caught directly (whether by
automated tests or by manual tests targeted at the changes). A few product-level tests are still useful, though.
    A common type of bug is the missing change, exemplified by this picture:
[Figure: feature code blocks A through E sharing support code; A and the support code have changed, while E still depends on the support code's old behavior.]
    Feature code A has been changed. Some support code has changed as well. Some facet
of its behavior changed to support A's new behavior. Unbeknownst to the programmer,
feature code E also depended on that facet of behavior. She should have changed E to
    match the support code change, but overlooked the need. As a consequence, E is now
    broken, and that brokenness will not be caught by the most thorough tests of the support
    code: it works exactly as intended. A test for E will catch it. So an automated test for E
    has value, despite the thoroughness of support code testing.
    More about code structure: the task-driven automated test
    My picture of the structure of the code was oversimplified. Here’s a better one:
[Figure: Code Under Test A and Code Under Test B sitting behind the Intervening Code, both drawing on shared blocks Support X, Support Y, and Support Z.]
    Not all support code is part of the code under test. Hidden from external view will be
    large blocks of support code that aid many diverse features. Examples include memory
    management code, networking code, graphics code, and database access code.
    The degree to which such support code is exercised by feature tests varies a lot. I have
    been in situations where I understood the role played by Support X. I was able to write
    tests for Code Under Test A that exercised both it and Support X well. That, combined
    with ordinary smoke tests for Code Under Test B (which also used X), gave me
    reasonable confidence that broken changes to Support X would be detected.
    That’s probably the exception to the rule. The question, then, is once again how you can
    exercise well both support code and interactions between support code and feature code
    without knowing anything about the support code.
    The situation is exacerbated by the fact that support code often contains persistent state,
    data that lasts from invocation to invocation (like the records in a database). As a result, a
    bug may only reveal itself when Code Under Test A does something with Support Y
    (perhaps changing a database record to a new value), then Code Under Test B does
    something else that depends - incorrectly - on that changed state (perhaps B was written
    assuming that the new value is impossible).
    The tests you need are often called task-driven tests, use-case tests, or scenario tests.
    They aim to mimic the actions a user would take when performing a typical task. Because
    users use many features during normal tasks, such tests exercise interactions that are not
    probed when each feature is tested in isolation. Because scenario tests favor common
    tasks, they find the bugs most users will find. They also force testers to behave like users.
    When they do, they will discover the same annoyances and usability bugs that will
    frustrate users. (These are cases where the product performs "according to spec", but the
    spec is wrong.)
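A task-driven test might be sketched like this (Python, invented names); it strings several features together the way a customer would, crossing the persistent state that isolated feature tests leave untouched:

    def test_customer_overdraws_then_repays(bank):
        account = bank.new_account(balance=30.00, overdraft_protection=True)
        bank.withdraw(account, 100.00)                    # $70 overdraft: Quick Loan kicks in
        bank.change_mailing_address(account, "301 W. Main St.")
        bank.deposit(account, 200.00)                     # repays the loan
        statement = bank.monthly_statement(account)
        # The statement must reflect both the repaid loan and the new address.
        assert statement.mailing_address == "301 W. Main St."
        assert statement.shows_quick_loan_repaid()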
    This style of testing is under-documented compared to feature testing. My two favorite
    sources describe the use of scenarios in product design and marketing. [Moore91]
    discusses target-customer characterization. [Cusumano95] describes activity-based
planning. See [Jacobson92] for an early description of use cases, including a brief
    discussion of their use in testing. See also [Ambler94], [Binder96], and [Beizer90]
    (chapter 4).
    Some subset of these tests should be automated to catch interaction bugs introduced by
    later change. As before, how many should be automated depends on your expectation
    about how many such bugs will be introduced in the future and not caught by feature
    testing, weighed against how many such bugs you might find now by trying more of the
    manual task-driven tests.
    Note that task-driven tests are more likely to be broken by changes to the product,
    because each one uses more of it than any single feature test would. You’re likely to
    automate fewer of them than of feature tests.
    Secondary Considerations
    Here are some other things I keep in mind when thinking about automation.
    · Humans can notice bugs that automation ignores. The same tools and test libraries that
    filter out irrelevant changes to the UI might also filter out oddities that are signs of
    bugs. I’ve personally watched a tester notice something odd about the way the mouse
    pointer flickered as he moved it, dig into the symptom further, and find a significant
    bug. I heard of one automated test suite that didn’t notice when certain popup
    windows now appeared at X,Y coordinates off the visible screen, so that no human
    could see them (but the tool could).
    · But, while humans are good at noticing oddities, they’re bad at painstaking or precise
checking of results. If bugs lurk in the 7th decimal place of precision, humans will miss
them, whereas a tool might not. Noel Nyman points out that a tool may analyze more
    than a person can see. Tools are not limited to looking at what appears on the screen;
    they can look at the data structures that lie behind it.
    · The fact that humans can’t be precise about inputs means that repeated runs of a
    manual test are often slightly different tests, which might lead to discovery of a
    support code bug. For example, people make mistakes, back out, and retry inputs,
    thus sometimes stumbling across interactions between error-handling code and the
    code under test.
    · Configuration testing argues for more automation. Running against a new OS, device,
    3rd party library, etc., is logically equivalent to running with changed support code.
    Since you know change is coming, automation will have more value. The trick,
    though, is to write tests that are sensitive to configuration problems - to the
    differences between OSes, devices, etc. It likely makes little sense to automate your
    whole test suite just so you can run it all against multiple configurations.
    · If the test finds a bug when you first run it, you know you’ll need to run it again when
    the bug is ostensibly fixed. That in itself is probably not enough to tip the scales
    toward automation. It does perhaps signal that this part of the code is liable to have
    future changes (since bugs cluster) and may thus motivate you to automate more tests
    in this area, especially if the bug was in support code.
    · If your automation support is strong enough that developers can rerun tests easily, it
may be faster to automate a test than write a detailed description of how to reproduce
    a bug. That level of automation support is rare, though. Developers sometimes have
    difficulty using the test tools, or don’t have them installed on their machine, or can’t
    integrate them with their debugger, or can’t find the test suite documentation, or have
    an environment that mysteriously breaks the test, and so forth. You can end up
    frustrating everyone and wasting a lot of time, just to avoid writing detailed bug
    reports.
    · It’s annoying to discover a bug in manual testing and then find you can’t reproduce it.
    Probably you did something that you don’t remember doing. Automated tests rarely
    have that problem (though sometimes they’re dependent on parts of the environment
    that change without your noticing it). Rudimentary tracing or logging in the product
    can often help greatly - and it’s useful to people other than testers. In its absence, a
    test automation tool can be used to create a similar log of keystrokes and mouse
    movements. How useful such logs are depends on how readable they are - internally
    generated logs are often much better. From what I’ve seen, many testers could benefit
    greatly from the lower-tech solution of taking notes on a pad of paper.
    · An automated test suite can explore the whole product every day. A manual testing
    effort will take longer to revisit everything. So the bugs automation does find will
    tend to be found sooner after the incorrect change was made. When something that
    used to work now breaks, a first question is "what code changes have been made since
    this last worked?" Debugging is much cheaper when there’s only been a day’s worth of
    changes. This raises the value of automation.
    Note that the really nasty debugging situations are due to interactions between
    subsystems. If the product is big, convoluted, and hard to debug, automated tests will
    have more value. That’s especially true of task-driven tests (though, unfortunately,
    they may also have the shortest expected life).
    · After a programmer makes a change, a tester should check it. That might include
    rerunning a stock set of old tests, possibly with variations. It certainly includes
    devising new tests specifically for the change. Sometimes communication is poor:
    testers aren’t told of a change. With luck, some automated tests will break, causing
    the testers to notice the change, thus be able to test it and report bugs while they’re
    still cheapest to fix. The smaller the automated test suite, the less likely this will
    happen. (I should note that test automation is an awfully roundabout and expensive
    substitute for basic communication skills.)
    · Because test automation takes time, you often won’t report the first bugs back to the
    programmer as soon as you could in manual testing. That’s a problem if you finally
    start reporting bugs two weeks after the programmer’s moved on to some other task.
    · It’s hard to avoid the urge to design tests so that they’re easy to automate, rather than
    good at finding bugs. You may find yourself making tests too simplistic, for example,
    because you know that reduces the chance they’ll be broken by product changes. Such
    simplistic tests will be less likely to find support code bugs.
Suppose the product changes behavior, causing some automated tests to report failure
    spuriously. Fixing those tests sometimes removes the spurious failures but also greatly
    reduces their ability to find legitimate bugs. Automated test suites tend to decay over
    time.
    · Automated tests, if written well, can be run in sequence, and the ordering can vary
    from day to day. This can be an inexpensive way to create something like task-driven
    tests from a set of feature tests. Edward L. Peters reminded me of this strategy after
    reading a draft of this paper. As Noel Nyman points out, automated tests can take
    advantage of randomness (both in ordering and generating inputs) better than humans
    can.
    · You may be designing tests before the product is ready to test. In that case, the extra
time spent writing test scripts doesn't count - you don't have the alternative of manual
testing. You should still consider the cost of actually getting those scripts working
    when the product is ready to test. (Thanks to Dave Gelperin for this point.)
    · An automated test might not pay for itself until next release. A manual test will find
    any bugs it finds this release. Bugs found now might be worth more than bugs found
    next release. (If this release isn’t a success, there won’t be a next release.)
    Summary
    This paper sketches a systematic approach to deciding whether a test should be
    automated. It contains two insights that took me a long time to grasp - and that I still find
    somewhat slippery - but which seem to be broadly true:
    1. The cost of automating a test is best measured by the number of manual tests it
    prevents you from running and the bugs it will therefore cause you to miss.
    2. A test is designed for a particular purpose: to see if some aspects of one or more
    features work. When an automated test that’s rerun finds bugs, you should expect it to
    find ones that seem to have nothing to do with the test’s original purpose. Much of the
    value of an automated test lies in how well it can do that.
    For clarity, and to keep this paper from ballooning beyond its already excessive length,
    I’ve slanted it toward particular testing scenarios. But I believe the analysis applies more
    broadly. In response to reader comments, I will discuss these issues further on my web
    page, <http://www.stlabs.com/marick/root.htm>.
    Acknowledgements
    A version of this paper was briefly presented to participants in the third Los Altos
    Workshop on Software Testing. They were Chris Agruss (Autodesk), James Bach
    (SmartPatents), Karla Fisher (Intel), David Gelperin (Software Quality Engineering), Chip
    Groder (Cadence Design Systems), Elisabeth Hendrickson (Quality Tree Consulting),
    Doug Hoffman (Software Quality Methods), III (Systemodels), Bob Johnson, Cem Kaner
    (kaner.com), Brian Lawrence (Coyote Valley Software Consulting), Thanga Meenakshi
    (Net Objects), Noel Nyman (Microsoft), Jeffery E. Payne (Reliable Software
    Technologies), Bret Pettichord (Tivoli Software), Johanna Rothman (Rothman Consulting
    Group), Jane Stepak, Melora Svoboda (Facetime), Jeremy White (CTB/McGraw-Hill),
    and Rodney Wilson (Migration Software Systems).
    Special thanks to Dave Gelperin, Chip Groder, and Noel Nyman for their comments.
    I also had students in my University of Illinois class, "Pragmatics of Software Testing and
    Development", critique the paper. Thanks to Aaron Coday, Kay Connelly, Duangkao
    Crinon, Dave Jiang, Fred C. Kuu, Shekhar Mehta, Steve Pachter, Scott Pakin, Edward L.
    Peters, Geraldine Rosario, Tim Ryan, Ben Shoemaker, and Roger Steffen.
    Cem Kaner first made me realize that my earlier sweeping claims about automation were
    rooted in my particular environment, not universal law.
    References
    [Ambler94]
    Scott Ambler, "Use-Case Scenario Testing," Software Development, July 1995.
    [Beizer90]
    Boris Beizer, Software Testing Techniques (2/e), Van Nostrand Reinhold, 1990.
    [Binder96]
    Robert V. Binder, "Use-cases, Threads, and Relations: The FREE Approach to System Testing,"
    Object Magazine, February 1996.
    [Cusumano95]
    M. Cusumano and R. Selby, Microsoft Secrets, Free Press, 1995.
    [Jacobson92]
Ivar Jacobson, Magnus Christerson, Patrik Jonsson, and Gunnar Övergaard, Object-Oriented
    Software Engineering: A Use Case Driven Approach, Addison-Wesley, 1992.
    [Kaner97]
    Cem Kaner, “Improving the Maintainability of Automated Test Suites,” in Proceedings of the
    Tenth International Quality Week (Software Research, San Francisco, CA), 1997.
    (http://www.kaner.com/lawst1.htm)
    [Moore91]
    Geoffrey A. Moore, Crossing the Chasm, Harper Collins, 1991.
    [Pettichord96]
    Bret Pettichord, “Success with Test Automation,” in Proceedings of the Ninth International
    Quality Week (Software Research, San Francisco, CA), 1996.
    (http://www.io.com/~wazmo/succpap.htm)
