
  • Reflections on the "Core Value" of Software Testing Teams (repost)

    2013-08-06 17:35:12

    I previously wrote "Reflections on the Dilemmas of Software Quality Management and Countermeasures," in which I argued that the development department and the quality assurance (QA) department should form an organization of two overlapping rings rather than a "dumbbell," and that software quality management should value practice over quantification, its goal being to help engineers improve their work habits and the efficiency of the development environment. At that time I had not seriously thought about the core value of testing teams, until I read @段念-段文韬's "The Testing Team and the Coffee Shop."


    Curiously, software development teams almost never discuss their own "core value," yet this question keeps being raised specifically about testing teams. Doesn't that reflect something about the current reality? Whenever people go searching for a "core value," it usually means the value is not clear enough or the focus has not been found.


    From my years of working in software development, there is indeed something inevitable about testing teams' preoccupation with their "core value." To explore its roots, let us start with a "game."


    The "Zero-Sum Game" Trap

    Most software companies set up separate development and testing departments, two nodes in the corporate value chain that interact yet remain independent. In a company, almost every department is subject to performance evaluation; it sometimes seems that only this can justify the department's existence.


    The testing department is usually cast in the role of "quality guardian." Naturally, the number and severity of the defects it finds become implicitly and tightly linked to its performance rating. To demonstrate their value, test engineers therefore try to file as many defect records as possible in the bug tracking system. Development engineers push back, however, because defect counts can equally be used as a metric of their development quality. This produces a familiar scene: after finding a problem, the test engineer first negotiates with the development engineer and files a defect record only after obtaining his consent (a process that sometimes turns into gamesmanship rather than anything that serves working efficiency); and development engineers, instead of being grateful for the problems testers find, see them as "troublemakers." Meanwhile, because a "quality guardian" exists, development engineers feel comfortably entitled to believe that assuring software quality is the testing department's job.


    It is not hard to see that this arrangement creates a "zero-sum game." The trap it sets is this: a "win" for the testing department (more defects found) means a "loss" for the development department (poor development quality), and vice versa. In short, the two departments can hardly achieve a win-win, and at the extreme each ends up going its own way.


    The Concept of Software Quality

    Probably no one questions the value of testing activity itself; the reasoning behind it could hardly be simpler. Still, we first need to examine what software quality (hereafter simply "quality") is, because without a clear concept it is hard to ensure that testing activities hit the mark.


    In my book Professional Embedded Software Development I pointed out that quality comes in levels: a user level and a team level. Put simply, user-level quality is reflected in software defects, while team-level quality is reflected in whether the development team can carry out its work in an orderly fashion rather than constantly "firefighting" in a state of emergency; team-level quality is the higher form that subsumes the user level. I also argued in that book that software design is the root of quality: only high-quality software design can secure team-level quality and, in the long run, deliver user-level quality. These claims make one thing clear: quality is first and foremost the responsibility of the software development engineers.


    User-level quality can be assessed through test cases written from the requirements documents. Team-level quality (that is, software design quality) is hard to assess with those test cases, yet defect counts can still tell us something. If a project's defect count swings widely over a long period, it usually means the design is flawed and the development team has been applying short-term patchwork instead of fixing problems at the root. Likewise, whether the development team is constantly in "firefighting" mode is a strong indicator of its team-level quality.
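    As a minimal illustration of the signal just described (my sketch in Python, not from the original article), here is one way a team could quantify "large swings in defect counts"; the weekly figures and the threshold are invented for the example:

        # Hypothetical sketch: flag high volatility in weekly defect counts,
        # which the article treats as a symptom of weak design quality.
        from statistics import mean, stdev

        weekly_defects = [12, 9, 31, 8, 27, 10, 35, 11]  # invented sample data

        avg = mean(weekly_defects)
        volatility = stdev(weekly_defects) / avg  # coefficient of variation

        print(f"mean defects/week: {avg:.1f}, volatility (CV): {volatility:.2f}")
        if volatility > 0.5:  # threshold is an arbitrary example; tune per project
            print("Large swings detected: the design, not the patches, may be the problem.")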


    Do We Need Test Engineers?

    Ideally, testing could be done entirely by the development engineers, since "they know every implementation detail of the software." But reality always falls short of the ideal. Having developers do all the testing runs into the following practical problems:

    1) It demands too much of development engineers. These folks would have to master programming languages and the business domain, and on top of that learn testing theory, testing methods, and the scripting languages needed to carry tests out. Set the bar that high and scarcity follows. Besides, how would an engineer ever reach such a level? At school? On the job? If on the job, what role do they play before they get there?

    2) Under tight schedules, expecting developers to attend to both design quality and testing quality is close to impossible. In practice, a developer who manages the former already counts as excellent. Moreover, this impossibility is not solely of the developers' making: too many project management teams hold the mistaken belief that "squeezing the schedule keeps the development team from slacking off," without recognizing the side effect such thinking produces, namely degraded software design quality.

    3) Development engineers collaborate by modularizing the software, and in that setting someone must oversee the whole system from a testing perspective; expecting the developers themselves to do so is asking too much.


    Faced with these practical problems, anyone who still insists that testing should be done entirely by developers is denying the benefits of the division of labor, and has probably forgotten the role he himself played before growing into an "all-round player."


    In summary, we need test engineers working together with development engineers to achieve our quality goals, and that means test engineers are valuable!


    Directions for Test Engineers to Contribute Value

    For test engineers to contribute real value, they must first share goals with the development engineers. In other words, the development and testing teams must turn the quality goal into "ours" rather than "yours" or "mine"; otherwise the zero-sum trap is hard to escape. On this point I fully agree with what @吴穹 advocates in "The Dual Purpose of Testing and a Rational View of Quality": "only by blending development and testing completely, with no distinction between them, can we truly achieve good quality; we should not try to wall the development and testing teams off from each other." Put differently, the organizational relationship between development and testing must be adjusted to support this position; otherwise it remains the first great obstacle to the testing team's delivering value.


    The shared quality goal is best set at the team level, because from the developers' point of view only that guarantees orderly development work, which in turn brings the greatest benefit to them and to the company. Seen this way, a test engineer might ask, "How can I help improve the software's design quality?" That question may be too big and demand too much of test engineers (more on this later), but we can start from a more actionable one: "How do we ensure the software's testability?"


    The next best option is for the testing and development teams to share the same user-level quality goal. At this level the testing team will find enormous room to contribute. For example, can the testing team build or improve a unit testing platform so that developers can run unit tests more conveniently? Can it help developers stand up a continuous integration platform? And so on.
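    To make the "help developers run unit tests more conveniently" idea concrete, here is a hedged sketch of a gate script a testing team might hand to developers, for example wired into a git pre-commit hook. The tests/unit path and the pytest dependency are my assumptions for illustration, not anything prescribed by the article:

        #!/usr/bin/env python3
        # Run the fast unit-test layer before each commit; slower layers run in CI.
        import subprocess
        import sys

        def run_unit_suite() -> int:
            result = subprocess.run(
                [sys.executable, "-m", "pytest", "tests/unit", "-q"],  # assumed layout
                capture_output=True, text=True,
            )
            print(result.stdout)
            if result.returncode != 0:
                print("Unit tests failed; commit blocked.", file=sys.stderr)
            return result.returncode

        if __name__ == "__main__":
            sys.exit(run_unit_suite())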


    Note that I am not suggesting the testing team take orders from the development team. Rather, the testing team should use its own testing expertise to help the development team raise development quality and efficiency, instead of acting merely as inspectors. Test engineers must build team confidence: testing is a professional discipline, we have our own insights into quality assurance, and we can genuinely help development engineers.


    In short, the testing team needs to stand at the height of the testing profession and provide guidance and help to the development team; only then will development engineers feel that "we share the same quality goal." I believe this is exactly what @段念-段文韬 wanted to emphasize in "The Testing Team and the Coffee Shop." It may also be why Google places its testing teams inside the Engineering Productivity FA (Focus Area).


    A Frustrating Reality

    Readers can search online for the translated series "How Google Tests Software" (reposted below), which describes how Google organizes testing; it is well worth studying. Overall, I find that Google's expectations for test engineers and software engineers in test are higher than anything I have seen domestically, and the software engineers in test play a pivotal role: they review designs, scrutinize code quality and risk, refactor code for better testability, and write unit tests and automated testing frameworks.


    Looking back home, testing seems to be treated as subordinate to development rather than as its equal. The bar for test engineers also appears lower than for developers; you can see this when, during recruiting, an engineer deemed unfit for a development position is then considered for testing. As I see it, a test engineer ought to come from a development background and be the stronger engineer, because only strong engineers develop a deep enough understanding of software quality to guide and support developers' daily work from the quality perspective.


    The testing teams' confusion over their "core value" stems largely from the insufficient attention paid to testing domestically and the forced separation of development from testing. The greater harm is that the pool of testing talent then lacks depth and gradation.

  • How does Google test?

    2013-08-06 17:33:39

    PART I 
    This is the first in a series of posts on this topic. 

    The one question I get more than any other is "How does Google test?" It's been explained in bits and pieces on this blog, but the explanation is due an update. The Google testing strategy has never changed, but the tactical ways we execute it have evolved as the company has evolved. We're now a search, apps, ads, mobile, operating system, and so on and so forth company. Each of these Focus Areas (as we call them) has to do things that make sense for its problem domain. As we add new FAs and grow the existing ones, our testing has to expand and improve. What I am documenting in this series of posts is a combination of what we are doing today and the direction we are trending toward in the foreseeable future.

    Let's begin with organizational structure, and it's one that might surprise you. There isn't an actual testing organization at Google. Test exists within a Focus Area called Engineering Productivity. Eng Prod owns any number of horizontal and vertical engineering disciplines; Test is the biggest. In a nutshell, Eng Prod is made of:

    1. A product team that produces internal and open source productivity tools that are consumed by all walks of engineers across the company. We build and maintain code analyzers, IDEs, test case management systems, automated testing tools, build systems, source control systems, code review schedulers, bug databases... The idea is to make the tools that make engineers more productive. Tools are a very large part of the strategic goal of prevention over detection. 

    2. A services team that provides expertise to Google product teams on a wide array of topics including tools, documentation, testing, release management, training and so forth. Our expertise covers reliability, security, internationalization, etc., as well as product-specific functional issues that Google product teams might face. Every other FA has access to Eng Prod expertise. 

    3. Embedded engineers that are effectively loaned out to Google product teams on an as-needed basis. Some of these engineers might sit with the same product teams for years, others cycle through teams wherever they are needed most. Google encourages all its engineers to change product teams often to stay fresh, engaged and objective. Testers are no different but the cadence of changing teams is left to the individual. I have testers on Chrome that have been there for several years and others who join for 18 months and cycle off. Keeping a healthy balance between product knowledge and fresh eyes is something a test manager has to pay close attention to. 

    So this means that testers report to Eng Prod managers but identify themselves with a product team, like Search, Gmail or Chrome. Organizationally they are part of both teams. They sit with the product teams, participate in their planning, go to lunch with them, share in ship bonuses and get treated like full members of the team. The benefit of the separate reporting structure is that it provides a forum for testers to share information. Good testing ideas migrate easily within Eng Prod giving all testers, no matter their product ties, access to the best technology within the company. 

    This separation of project and reporting structures has its challenges. By far the biggest is that testers are an external resource. Product teams can't place too big a bet on them and must keep their quality house in order. Yes, that's right: at Google it's the product teams that own quality, not testers. Every developer is expected to do their own testing. The job of the tester is to make sure they have the automation infrastructure and enabling processes that support this self reliance. Testers enable developers to test. 

    What I like about this strategy is that it puts developers and testers on equal footing. It makes us true partners in quality and puts the biggest quality burden where it belongs: on the developers who are responsible for getting the product right. Another side effect is that it allows us a many-to-one dev-to-test ratio. Developers outnumber testers. The better they are at testing the more they outnumber us. Product teams should be proud of a high ratio! 

    Ok, now we're all friends here right? You see the hole in this strategy I am sure. It's big enough to drive a bug through. Developers can't test! Well, who am I to deny that? No amount of corporate kool-aid could get me to deny it, especially coming off my GTAC talk last year where I pretty much made a game of developer vs. tester (spoiler alert: the tester wins).

    Google's answer is to split the role. We solve this problem by having two types of testing roles at Google to solve two very different testing problems. In my next post, I'll talk about these roles and how we split the testing problem into two parts.

    PART II

     

    By James Whittaker

    In order for the “you build it, you break it” motto to be real, there are roles beyond the traditional developer that are necessary. Specifically, engineering roles that enable developers to do testing efficiently and effectively have to exist. At Google we have created roles in which some engineers are responsible for making others more productive. These engineers often identify themselves as testers but their actual mission is one of productivity. They exist to make developers more productive and quality is a large part of that productivity. Here's a summary of those roles:

    The SWE or Software Engineer is the traditional developer role. SWEs write functional code that ships to users. They create design documentation, design data structures and overall architecture and spend the vast majority of their time writing and reviewing code. SWEs write a lot of test code including test driven design, unit tests and, as we explain in future posts, participate in the construction of small, medium and large tests. SWEs own quality for everything they touch whether they wrote it, fixed it or modified it. 

    The SET or Software Engineer in Test is also a developer role except their focus is on testability. They review designs and look closely at code quality and risk. They refactor code to make it more testable. SETs write unit testing frameworks and automation. They are a partner in the SWE code base but are more concerned with increasing quality and test coverage than adding new features or increasing performance. 

    The TE or Test Engineer is the exact reverse of the SET. It is a role that puts testing first and development second. Many Google TEs spend a good deal of their time writing code in the form of automation scripts and code that drives usage scenarios and even mimics a user. They also organize the testing work of SWEs and SETs, interpret test results and drive test execution, particularly in the late stages of a project as the push toward release intensifies. TEs are product experts, quality advisers and analyzers of risk.

    From a quality standpoint, SWEs own features and the quality of those features in isolation. They are responsible for fault-tolerant designs, failure recovery, TDD, unit tests, and for working with the SET to write tests that exercise the code for their feature.

    SETs are developers that provide testing features: a framework that can isolate newly developed code by simulating its dependencies with stubs, mocks and fakes, and submit queues for managing code check-ins. In other words, SETs write code that allows SWEs to test their features. Much of the actual testing is performed by the SWEs; SETs are there to ensure that features are testable and that the SWEs are actively involved in writing test cases.
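    To give the stubs/mocks/fakes idea a concrete shape, here is a minimal sketch using Python's standard unittest.mock. The checkout function and gateway dependency are invented for illustration; this is not Google's framework:

        import unittest
        from unittest.mock import Mock

        def checkout(cart_total: float, gateway) -> str:
            """Newly developed code under test; depends on an external payment gateway."""
            return "paid" if gateway.charge(cart_total) else "declined"

        class CheckoutTest(unittest.TestCase):
            def test_successful_charge(self):
                gateway = Mock()                     # stand-in for the real dependency
                gateway.charge.return_value = True   # simulate a successful charge
                self.assertEqual(checkout(9.99, gateway), "paid")
                gateway.charge.assert_called_once_with(9.99)

            def test_declined_charge(self):
                gateway = Mock()
                gateway.charge.return_value = False  # simulate a declined charge
                self.assertEqual(checkout(9.99, gateway), "declined")

        if __name__ == "__main__":
            unittest.main()

    Isolating the dependency this way is what lets a feature be tested the moment it is written, without waiting for the real gateway to exist.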

    Clearly the SET's primary focus is on the developer. Individual feature quality is the target, and enabling developers to easily test the code they write is the SET's main charge. This development focus leaves one large hole which I am sure is already evident to the reader: what about the user?

    User focused testing is the job of the Google TE. Assuming that the SWEs and SETs performed module and feature level testing adequately, the next task is to understand how well this collection of executable code and data works together to satisfy the needs of the user. TEs act as a double-check on the diligence of the developers. Any obvious bugs are an indication that early cycle developer testing was inadequate or sloppy. When such bugs are rare, TEs can turn to their primary task of ensuring that the software runs common user scenarios, is performant and secure, is internationalized and so forth. TEs perform a lot of testing and test coordination tasks among TEs, contract testers, crowd sourced testers, dog fooders, beta users, early adopters. They communicate among all parties the risks inherent in the basic design, feature complexity and failure avoidance methods. Once TEs get engaged, there is no end to their mission.

    Ok, now that the roles are better understood, I'll dig into more details on how we choreograph the work items among them. Until next time...thanks for your interest.

    PART III 

    By James Whittaker

    Lots of questions in the comments to the last two posts. I am not ignoring them. Hopefully many of them will be answered here and in following posts. I am just getting started on this topic. 

    At Google, quality is not equal to test. Yes I am sure that is true elsewhere too. “Quality cannot be tested in” is so cliché it has to be true. From automobiles to software if it isn’t built right in the first place then it is never going to be right. Ask any car company that has ever had to do a mass recall how expensive it is to bolt on quality after-the-fact. 

    However, this is neither as simple nor as accurate as it sounds. While it is true that quality cannot be tested in, it is equally evident that without testing it is impossible to develop anything of quality. How does one decide if what you built is high quality without testing it? 

    The simple solution to this conundrum is to stop treating development and test as separate disciplines. Testing and development go hand in hand. Code a little and test what you built. Then code some more and test some more. Better yet, plan the tests while you code or even before. Test isn’t a separate practice, it’s part and parcel of the development process itself. Quality is not equal to test; it is achieved by putting development and testing into a blender and mixing them until one is indistinguishable from the other. 

    At Google this is exactly our goal: to merge development and testing so that you cannot do one without the other. Build a little and then test it. Build some more and test some more. The key here is who is doing the testing. Since the number of actual dedicated testers at Google is so disproportionately low, the only possible answer has to be the developer. Who better to do all that testing than the people doing the actual coding? Who better to find the bug than the person who wrote it? Who is more incentivized to avoid writing the bug in the first place? The reason Google can get by with so few dedicated testers is because developers own quality. In fact, teams that insist on having a large testing presence are generally assumed to be doing something wrong. Having too large a test team is a very strong sign that the code/test mix is out of balance. Adding more testers is not going to solve anything. 

    This means that quality is more an act of prevention than it is detection. Quality is a development issue, not a testing issue. To the extent that we are able to embed testing practice inside development, we have created a process that is hyper incremental where mistakes can be rolled back if any one increment turns out to be too buggy. We've not only prevented a lot of customer issues, we have greatly reduced the number of testers necessary to ensure the absence of recall-class bugs. At Google, testing is aimed at determining how well this prevention method is working. TEs are constantly on the lookout for evidence that the SWE-SET combination of bug writers/preventers is skewed toward the latter, and TEs raise alarms when that process seems out of whack.

    Manifestations of this blending of development and testing are all over the place from code review notes asking ‘where are your tests?’ to posters in the bathrooms reminding developers about best testing practices, our infamous Testing On The Toilet guides. Testing must be an unavoidable aspect of development and the marriage of development and testing is where quality is achieved. SWEs are testers, SETs are testers and TEs are testers. 

    If your organization is also doing this blending, please share your successes and challenges with the rest of us. If not, then here is a change you can help your organization make: get developers fully vested in the quality equation. You know the old saying that chickens are happy to contribute to a bacon and egg breakfast but the pig is fully committed? Well, it's true... go oink at one of your developers and see if they oink back. If they start clucking, you have a problem.

  • The Testing Team and the Coffee Shop (repost)

    2013-08-06 17:31:23

    What should a testing team's "core value" be? The question has been raised countless times, and countless times the discussion has faded into silence. A "core value" that holds across different organizations and different products is probably impossible to agree on, so the fading into silence is no surprise.

    Yet the question itself deserves exploration, because without shared values it is hard to discuss where testing should be heading. At many testing conferences my talks have flown the flag of "value" as the direction testing must naturally pursue; in my view, testing first has to find the value of its own existence before it can discuss "what to do" and "how to do it."

    It is said that when IT engineers think about changing careers, the first idea is usually to open a restaurant or a coffee shop; frankly, I have had the same thought :) Now suppose I finally leave the IT industry to open a coffee shop. I pick what seems to me a suitable location (the ground floor of an office building full of IT people), pay the rent with years of saved salary, and formally begin my career as a coffee shop owner... At first everything is perfect. I decorate the shop to my own plan: my favorite wallpaper, a European style, soft lighting, coffee ware with a gentle sheen... even the music is hand-picked. This, I should say, is the perfect coffee shop of my dreams. But not long after building the shop to my own taste, I notice a big problem: my customers' taste does not seem to match mine. Some pay lip service to the shop, but in practice they rarely have the patience to wait for the meticulous brewing to finish, and after a while someone even asks whether I could serve hot pot. Hot pot?! That is not my dream; my dream is the world's best coffee shop! After I flatly refuse these unreasonable requests, my business, unsurprisingly, gets worse and worse. The dream is still there, but the goal of making money drifts further and further away...

    Well, in fact I am not about to quit and open a coffee shop just yet. I tell this story only to examine a testing team's value orientation through a coffee shop owner's eyes. Swap the owner in the story for a testing team and you will find plenty of parallels:

    1. The testing team builds a "perfect" testing system according to its own dream (test processes, automated testing tools, a defect tracking and analysis system, and so on);

    2. Development engineers occasionally praise these perfect-looking processes and test cases, but what they keep asking is: "Could you verify things for us every time we change the code?";

    3. The testing team, predictably, dislikes the developers' request and flatly refuses;

    4. The testing team carries on with its own dreams and expectations, sighing over the developers' lack of understanding and recognition, and then...

    So here is my question: if you were the coffee shop owner, what would you do? Clearly, once you understand who the customer is, you immediately see the answer: if what the customer wants is not a beautifully appointed, tastefully styled coffee shop, then give the customer what he actually needs. If the goal is ultimately to make money, then in this situation turning the coffee shop into a hot pot restaurant is the more sensible move.

    Back to testing teams: who is a testing team's customer? Saying simply "the development team" is not entirely sound; in fact the testing team and the development team share the same customer, the product's end users. But given that the development team's direct output is the software product users actually use, it is reasonable enough for the testing team to treat the development team as its customer.

    So when the customer tells you he wants hot pot (continuous verification of every submitted build) rather than tastefully styled coffee (polished processes, analyses, and the like), what do you do? The most direct response is to turn the coffee shop into a hot pot restaurant and serve hot pot instead of coffee: the testing team can manually verify the product for the developers again and again, until... it can no longer keep up. A better approach may be to explore what the customer truly needs. A customer who keeps demanding hot pot probably does not insist on hot pot as such; he likes its liveliness, the freedom to choose, and eating his fill without spending much. We can then offer what he really wants. When a developer tells the testing team "I need you to test the product after every check-in," what he really wants is simply a mechanism that verifies, after each code submission, whether the product has any obvious problems. Through CI and layered automated testing, the testing team can solve the development team's problem in an easier, faster, and more elegant way. On that foundation, you may even persuade the development team to establish a set of standards for gauging the product's productivity and quality, so that the two teams together drive continuous improvement in both.
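    As one possible rendering of the "CI plus layered automated testing" idea (my sketch, not the author's), pytest markers can split a suite into a fast layer run on every check-in and a slower layer run nightly; the marker names are illustrative conventions:

        # Register the markers in pytest.ini to avoid warnings:
        #   [pytest]
        #   markers =
        #       smoke: fast checks run on every check-in
        #       slow: expensive scenarios run nightly
        import pytest

        @pytest.mark.smoke
        def test_service_responds():
            # Fast check the CI server runs on every code submission.
            assert (1 + 1) == 2  # placeholder for a real health check

        @pytest.mark.slow
        def test_full_user_scenario():
            # Expensive end-to-end scenario reserved for the nightly layer.
            assert True  # placeholder for a real user scenario

    CI would then run "pytest -m smoke" on each check-in and the full "pytest" nightly, which is exactly the kind of mechanism the developer was really asking for.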

    Perhaps the coffee shop is not the best metaphor for a testing team (after all, not every test engineer is as unimaginative as I am). But the next time you agonize over "why won't the developers adopt the process we worked so hard to build," you can at least imagine yourself as a coffee shop owner and consider, from a different angle, what kind of service your customer needs.

    About the Author

    段念 (Duan Nian), VP of Engineering at 豆瓣 (Douban). Interested in software development management and team management, and always glad to exchange ideas with fellow practitioners.

    This article comes from the InfoQ WeChat public account: infoqchina
