Development and deployment contexts have changed considerably over the last few years, and the discipline of performance testing has struggled to keep pace with modern testing principles and with how software is now built and delivered.
Most people still see performance testing as a single experiment: run against a completely assembled, code-frozen, production-resourced system, with the “accuracy” of the simulation and environment considered critical to the value of the data the test provides. But how can we provide actionable, timely information about performance and reliability when the software is not complete, when the system is not yet assembled, or when the software will be deployed in more than one environment? How can we performance test effectively in continuous integration, continually collecting feedback on a project’s performance and scalability characteristics?
Eric will deconstruct “realism” in performance simulation, discuss how testing more cheaply makes it possible to test more often, and suggest strategies and techniques for getting there. He will share findings from WOPR22, where performance testers from around the world came together in a peer workshop to discuss this theme.