Context: To have a deliberately successful
performance testing project, as opposed to an accidentally useful one,
both the approach to testing performance and the testing itself must be
relevant to the context of the project. The project context includes,
but is not limited to, the overall vision or intent of the project,
performance testing objectives, performance success criteria, the
development life cycle, the project schedule, the project budget, the
available tools and environments, the skill set of the performance
tester and the team, the priority of detected performance concerns, and
the business impact of deploying an application that performs poorly.
Without an understanding of those items, performance testing is bound
to focus on the items that the performance tester or test team assumes must be important, which frequently leads to wasted time, frustration and conflicts.
Criteria: Performance acceptance criteria include
requirements, goals, targets, thresholds and objectives related to both
the application's performance and the performance testing sub-project.
While many of those items will undoubtedly change during the project
life cycle, keeping up with them will help to ensure that performance
testing stays in sync with the overall priorities of the project. If
you are unfamiliar with this particular characterization of performance
criteria, I've defined them below:
- Performance requirements: Criteria that are absolutely
non-negotiable due to contractual obligations, service-level agreements
(SLAs) or fixed business needs.
- Performance goals: Criteria that are desired
for product release but may be negotiable under certain circumstances.
These are typically, but not necessarily, end-user focused.
- Performance testing objectives: These refer to
data that is collected through the process of performance testing and
that is anticipated to have value in determining or improving the
quality of the product. However, these objectives are not necessarily
quantitative or directly related to other stated performance criteria.
- Performance targets: These are the desired
values for resources of interest under a particular set of conditions,
usually specified in terms of response times, throughput and resource
utilization levels.
- Performance thresholds: These represent the
maximum acceptable values for resources of interest, usually specified
in terms of response times, throughput (transactions per second) and
resource utilization levels. The distinction between targets and
thresholds is illustrated in the sketch after this list.
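To make the target/threshold distinction concrete, here is a minimal sketch in Python; the operation names and millisecond values are hypothetical, chosen only to show how a desired value and a maximum acceptable value are distinct, checkable numbers:

```python
# Illustrative only: operation names and values are hypothetical.
# A target is the desired value; a threshold is the maximum
# acceptable value. Both are worth encoding as checkable data.
from dataclasses import dataclass

@dataclass
class ResponseTimeCriterion:
    operation: str
    target_ms: float     # desired value under the stated conditions
    threshold_ms: float  # maximum acceptable value

CRITERIA = [
    ResponseTimeCriterion("login", target_ms=1000, threshold_ms=3000),
    ResponseTimeCriterion("search", target_ms=2000, threshold_ms=5000),
]

def evaluate(operation: str, observed_ms: float) -> str:
    c = next(c for c in CRITERIA if c.operation == operation)
    if observed_ms <= c.target_ms:
        return "meets target"
    if observed_ms <= c.threshold_ms:
        return "misses target but stays within threshold"
    return "violates threshold"

print(evaluate("login", 1800))  # misses target but stays within threshold
```

Encoding criteria this way also makes it easier to keep up with them as they change during the project life cycle.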
Design: As with any other type of testing, performance tests must be
well designed if they are to collect the data of interest, represent
the intended situations and yield both meaningful and valid results. A
significant component of performance test design is determining,
designing and creating data associated with the natural variances of
application users, as sketched below. Whether the design is completed
well in advance of test execution or in line with it is relevant only
as it relates to the context of the project.
Install: This heuristic is actually short for "Install and
Configure or Update Tools and the Load Generation Environment." Based
on various performance criteria, project context and the design of your
tests, you will need a variety of tools to generate load and collect
data of interest. Additionally, to ensure that the test results and
collected data represent what they are intended to represent, the load
generation environment and associated tools must be validated to ensure
that the act of data collection and/or load generation does not
inadvertently skew the data or results.
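One small piece of that validation can be automated. The sketch below, for instance, bounds the overhead of the timing machinery itself on the load generation machine; it is illustrative only and does not replace full calibration of tools and environment:

```python
# An illustrative sanity check, not a full calibration: estimate the
# worst-case cost of taking a timestamp pair on the load generator.
# If this overhead is a meaningful fraction of the response times
# being measured, the environment itself is skewing the results.
import time

def worst_timer_overhead_ms(samples: int = 10_000) -> float:
    worst = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        elapsed = time.perf_counter() - start
        worst = max(worst, elapsed)
    return worst * 1000

print(f"worst-case timer overhead: {worst_timer_overhead_ms():.4f} ms")
```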
Script: It is most likely that your test design will be
implemented using a load generation tool that requires some degree of
scripting. The act of scripting is, of course, extremely tool-specific.
But no matter what tool you use, or how else you may choose to generate
load, you will need to validate that, once implemented, the tests
interact with the application in the manner intended by the test
design, collect the intended data, and return meaningful and accurate
data and results.
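Because a request that merely "succeeds" can still be exercising the wrong thing, that validation is worth building into the script itself. Below is a tool-agnostic sketch using Python's standard library; the URL and the expected marker text are placeholders:

```python
# A sketch of validating a scripted step. The URL and the 'results'
# marker are placeholders; note that a 200 status alone is not proof
# of correct behavior, since error pages can also return 200.
import time
import urllib.request

SEARCH_URL = "http://localhost:8080/search?q={term}"  # placeholder target

def validated_search(term: str) -> float:
    """Issue one scripted request; fail loudly if the response does
    not match what the test design intends. Returns elapsed seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(SEARCH_URL.format(term=term), timeout=10) as resp:
        status = resp.status
        body = resp.read().decode("utf-8", errors="replace")
    elapsed = time.perf_counter() - start
    assert status == 200, f"unexpected status {status}"
    assert "results" in body, "response does not look like a results page"
    return elapsed

print(f"search took {validated_search('widgets'):.3f} s")
```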
Execute: Test execution, as it relates to performance
testing, is the activity most people envision as "clicking the go
button and babysitting machines." The fact is that test execution
involves continually validating the tests and test environment, running
new tests and archiving all of the data associated with test execution.
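The archiving part, at least, is easy to mechanize. Here is a minimal sketch; the directory layout and metadata fields are illustrative assumptions:

```python
# A minimal sketch of the archiving habit: every run gets a
# timestamped directory holding raw measurements plus enough
# metadata to interpret them later. Field names are illustrative.
import json
import time
from pathlib import Path

def archive_run(response_times_ms: list, metadata: dict) -> Path:
    run_dir = Path("results") / time.strftime("%Y%m%d-%H%M%S")
    run_dir.mkdir(parents=True, exist_ok=True)
    (run_dir / "response_times_ms.json").write_text(json.dumps(response_times_ms))
    (run_dir / "metadata.json").write_text(json.dumps(metadata, indent=2))
    return run_dir

where = archive_run(
    [812.0, 940.5, 1103.2],
    {"test": "search_load", "build": "1.4.2", "virtual_users": 50},
)
print(f"archived to {where}")
```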
Analyze: Analyzing test results and collected data, whether
to determine requirement compliance, track trends, detect bottlenecks
or evaluate the effectiveness of tuning efforts, is crucial to the
success of a performance testing project.
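As a small example of what that analysis can look like, the sketch below summarizes a sample of response times as percentiles and checks them against a stated threshold; all numbers are hypothetical:

```python
# An illustrative analysis step: summarize response times as
# percentiles and compare against a stated threshold. All numbers
# are hypothetical; averages alone hide the slow tail users feel.
import math

def percentile(sorted_values: list, p: float) -> float:
    """Nearest-rank percentile over an already-sorted sample."""
    k = math.ceil(p / 100 * len(sorted_values)) - 1
    return sorted_values[max(0, k)]

samples_ms = sorted([640, 705, 790, 815, 910, 980, 1200, 1450, 2100, 4800])
threshold_ms = 3000  # hypothetical stated threshold

for p in (50, 90, 95):
    print(f"p{p}: {percentile(samples_ms, p):.0f} ms")
print("p95 within threshold:", percentile(samples_ms, 95) <= threshold_ms)
```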
Report: Reporting on the results and analysis is just as
significant as the collection and analysis of the data. If the reports
are not clear and intuitive for their intended audience, critical
performance issues can go unresolved due to nothing more technical than
failed communication.
Iterate: Iterating is virtually a given for any type of
testing. Sometimes we iterate based on builds, defect resolutions or
environment changes. The part that isn't always obvious is that
iteration applies to each of these activities, because the project
context, objectives and priorities have a habit of changing throughout
the project life cycle.
With these activities in hand, or a similar set named and grouped
according to a team's established goals, process and terminology, it is
relatively straightforward to arrange them into an approach that fits
the existing project structure and then to fill in any additional
activities, tasks, approval gates or processes needed to make the
approach flow seamlessly within the project at hand.
Now if someone asks you what the "one true" approach to
performance testing is, you can simply respond by saying "Organize CCD
IS EARI into a flow that fits your project."