Workshop: Interpreting and Reporting Performance Test Results
You’ve worked hard to define, develop, and execute a performance test on a new application to determine its behavior under load. You have barrels full of numbers. What’s next? The answer is definitely not to generate and send a canned report from your testing tool. Results interpretation and reporting are where a performance tester earns their stripes.
In the first half of this workshop we’ll start by looking at some results from actual projects and together puzzle out the essential message in each. This will be a highly interactive session: we will display a graph, provide a little context, and ask “what do you see here?” We will form hypotheses, draw tentative conclusions, determine what further information we need to confirm them, and identify the key graphs that give us the best insight into system performance and bottlenecks.
Please bring your own sample results (on a thumb drive so that we can load and display them) and we’ll interpret them together!
In the second half of this workshop, we’ll try to codify the analytic steps we went through in the first session and consider a CAVIAR approach for collecting and evaluating test results: Collecting, Aggregating, Visualizing, Interpreting, Analyzing, and Reporting.
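As a taste of the Aggregating step, here is a minimal sketch (the sample response times are invented for illustration) showing why summarizing raw load-test numbers with percentiles, not just an average, matters: a mean can look healthy while a tail of slow requests hides inside it.

```python
from statistics import mean, quantiles

# Hypothetical response times (seconds) collected during a load test.
# Note the two slow outliers mixed in with otherwise fast responses.
samples = [0.21, 0.25, 0.23, 0.31, 0.28, 0.95, 0.24, 0.27, 1.40, 0.26]

# Aggregate: the mean alone hides the tail, so report percentiles too.
avg = mean(samples)
p90 = quantiles(samples, n=10)[-1]  # estimated 90th percentile

print(f"mean={avg:.2f}s  p90={p90:.2f}s  max={max(samples):.2f}s")
```

Comparing the mean against the 90th percentile and the maximum is often the quickest way to spot that a “good average” is masking a response-time tail worth investigating.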
- Training in interpreting results: Data + Analysis = Information.
- Examples of telling performance test graphs.
- Advice on reporting: compel action with your information.
- Interpretation of your results – please bring them along!