I'd like to get your thoughts on how performance test resources are used successfully in larger organizations. We all know a performance test professional does much more than write and execute VuGen scripts, but I'm curious to see whether my organization is running out of balance.
Currently, my performance testers are budgeted out of our QA organization. This seems to make sense for all the load testing, soak testing, etc., but looking at my numbers out of Performance Center, I find that over 80% of our executions are done in conjunction with the development team to optimize code prior to the actual load testing toward the end of a sprint. This becomes a challenge when trying to quantify the value that performance testing contributes to the overall QA picture. Additionally, it would appear that that 80% of the work could be deemed application development rather than QA, raising the question of the true cost of development vs. the cost of QA.
Do my numbers seem out of whack to anyone? They seem high to me, but my previous experience in performance testing was at a small organization; perhaps large organizations find this normal.
Thanks for any feedback.
What is your problem?
You don't need to work in conjunction with the functional testers - OK.
You work with the development team to optimize code - OK.
It looks like you have nothing better to do if you are running this kind of useless analysis.
So your organisation considers QA work (in this specific example, load testing) performed iteratively during the development phase to be a pure development cost? Surely there would be other forms of QA work performed during the development sprint too, right? Such as documentation reviews, peer reviews, reviews against standards, etc. If so, how are these costed - against the QA budget or the development budget?
Most best-practice QA models suggest that QA is an intrinsic part of the development process rather than a bolt-on process at the end, so doing iterative performance testing as you go along matches that model. It seems to me that rather than holding on to a budgetary model that suits the old, outmoded waterfall style of QA/testing, where most of the QA costs are incurred at the end of formal development, you would be better served driving some QA best-practice education across the management team of your organisation. That way, they would be aware that QA costs would (and should) be incurred throughout the design, development, implementation and even post-implementation stages of the project - that is, QA is part of the total cost of ownership of the system, not simply tacked on at the end.
You appear to be doing the right thing, but the organisation's culture needs education to bring it up to speed on how QA operates in an agile environment.
You're not paranoid if they really are out to get you.
In theory or in common practice?
...how QA operates in an agile environment.
Or, as some might suggest is the development view of QA under Agile: “We are the Developers. Lower your quality and surrender your practices. We will add your technological distinctiveness to our own where we feel it has value. Your culture will adapt to service us. Resistance is futile.”
Thanks, I really enjoyed the quote!
Originally Posted by jpulley3
My concern is being able to quantify the value of performance testing when the people who pull the purse strings start asking questions. The organization I'm with seems to go full speed ahead with almost no focus on budget, and then it suddenly pulls back and scrutinizes everything. I recognize I could establish baselines early in the process to compare against, but as the application features are built out through builds and sprints, that really renders the baseline worthless.
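To be clear, the build-over-build comparison itself is trivial; my problem is that the transaction mix keeps changing each sprint. A minimal sketch of the kind of comparison I mean, using per-transaction 90th percentiles (the transaction names, timings, and 15% threshold here are made up for illustration, not from our actual Performance Center data):

```python
# Compare per-transaction p90 response times between two builds.
# All data below is hypothetical.
from statistics import quantiles

def p90(samples):
    """90th-percentile response time (seconds) of a list of samples."""
    return quantiles(samples, n=10)[-1]

# Per-transaction response-time samples, one dict per build.
baseline = {
    "login":  [0.82, 0.91, 1.10, 0.85, 0.95],
    "search": [1.20, 1.40, 1.30, 1.50, 1.25],
}
current = {
    "login":    [0.90, 1.00, 1.20, 0.95, 1.05],
    "search":   [2.10, 2.40, 2.20, 2.60, 2.30],
    "checkout": [1.00, 1.10, 0.90, 1.20, 1.05],  # feature added this sprint
}

THRESHOLD = 1.15  # flag anything more than 15% slower than baseline

for name, samples in sorted(current.items()):
    if name not in baseline:
        # New feature: no baseline exists yet, so record one instead of comparing.
        print(f"{name}: new transaction, recording initial baseline")
        continue
    old, new = p90(baseline[name]), p90(samples)
    status = "REGRESSION" if new > old * THRESHOLD else "ok"
    print(f"{name}: p90 {old:.2f}s -> {new:.2f}s [{status}]")
```

The sketch also shows the churn problem: every sprint adds transactions like "checkout" that have no baseline at all, so the whole-application baseline keeps eroding.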
I appreciate the feedback...
If you are unable to quantify the value of performance testing after two-plus months of solid coverage of healthcare.gov's performance problems - directly related to a lack of performance engineering/testing practices - then you will never be able to convince your management. Just walk away; there is no reason to argue with a stone.