This topic is designed to provoke curiosity.

I know this one is going to open a large can of Mongolian death worms… but who in your environment “owns” performance requirements? Now contrast this with who “should own” performance requirements.

We have all been there. It’s time to engage the performance test team, and the first item on the list for collection is performance requirements. Our practice, after all, is a micro-scale development effort. We are no different from any other development effort: we need requirements to define our level of effort, how we deploy, and, in the end, how we report back to our stakeholders. But all too often we are greeted with “I don’t know, how fast do you think it should be?” from our stakeholders.

Ouch! Recall the fire trucks. We have a major [process] malfunction here.

We are performance testers, some of us are even performance engineers (yes, there is a difference), but what we are not is the business owner of the application. How the he** are we supposed to know how fast it needs to be to support your business? Sure, we can go through and make some educated guesses, but guess what? Will our educated guesses and assumptions be the same ones made by application architecture, platform engineering, and development in the months prior to engaging the performance testing team? Almost certainly not. And this difference of perspective on what constitutes acceptable performance will become a huge and ugly problem when you start reporting success or failure back to the stakeholders, for they may have a completely different view of performance than the technical teams that preceded your team’s efforts. Moreover, without effective requirements it becomes very difficult for our team to prioritize our performance test development efforts and to understand when we are done.

There is a phrase for testing when requirements are not present: “Art Testing.” It is art because I am an artist and I call it art; it is a test because I am a tester and I call it a test. It matters not whether the art or the test has value; it is so because I deem it so. This is the path to low value, and with compensation following value, well, you see where this is headed.

Collecting requirements outside of the normal requirements process also means they bypass the standard requirements vetting process. Does everyone agree, so we have a common understanding across the enterprise? Are the requirements sufficiently narrow to be considered testable at all? Are they objective in nature, so that any two individuals can look at the requirements and measure and assess them in the same manner? Is this a requirement that can be measured more than once? Can we tie the requirement to specific business needs? All of this is problematic with late collection.
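As an illustration of what a requirement that survives those vetting questions might look like, here is a sketch in a config-style format. Every name, number, and field here is hypothetical, invented for this example rather than drawn from any real system or standard template:

```yaml
# Hypothetical performance requirement, annotated against the vetting criteria above.
requirement:
  id: PERF-017                                 # illustrative identifier
  business_need: "Checkout abandonment rises sharply past 3 seconds"  # tied to a business need
  statement: >
    95th-percentile response time for the Submit Order transaction
    is 2.0 seconds or less at 500 concurrent users
  metric: response_time_p95_seconds            # objective: any two testers measure it the same way
  threshold_seconds: 2.0                       # narrow enough to be testable
  load_condition: "500 concurrent users"       # the condition under which the threshold applies
  repeatable: true                             # can be measured on every test run, not just once
  agreed_by:                                   # common understanding across the enterprise
    - business_owner
    - architecture
    - engineering
    - development
    - performance_test
```

The point is not the format but the properties: a single objective metric, an explicit threshold, a stated load condition, repeatability, traceability to the business, and sign-off from every team that will later be measured against it.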

How are your performance requirements collected today? How are the requirements vetted to ensure soundness? How do Architecture, Engineering, Development, and even Functional testing measure against these requirements? How is this impacting your ability to deliver as a performance tester, and the value of your output to your stakeholders?

Is it time for a change?