Centralised Testing debate
I need to present a succinct, well-argued paper recommending that my IT organisation centralise testing if our business is not to suffer a high-profile systems failure. I would welcome assistance, guidance, and reference papers. I work in an organisation currently structured around Development Programmes (a stove-pipe mentality), with each programme having its own independent build and testing capability.
The organisation has significant change taking place across multiple integrated platforms (desktop, middleware, mainframe, telephony), and the applications operate across these platforms.
Parallel developments are under way; however, the Development Programmes test only their own changes, despite sharing common code. There is no pre-production environment for end-to-end operational acceptance testing. I anticipate a significant systems event shortly, in which production fails as a result of migrating integrated software that has not undergone regression testing. The other possibility is that development and testing grind to a halt.
I look forward to receiving your observations and suggestions.
Thank you in advance.
Re: Centralised Testing debate
Is there any automation involved? I ask because, if so, the arguments for centralization are easy to make. Otherwise, I have found and worked in instances where it was beneficial and necessary to separate testing functions. But, and this is the rub, in those instances I found it essential to have an overall "system" for quality in place.
Some of this depends on the software life-cycle model in use, because I believe the testing life cycle should be wedded to it. You mention that each program has its own build and test capability, but not whether they are part of the same cycle.
To me, centralized testing makes sense when you have a common software life cycle across your organization, because the quality system you develop keeps you on track and does not (necessarily) make you dependent on a single application or project.
Since you have different platform types (desktop, middleware, mainframe, telephony), you will probably have different testing cycles, but I think they should all fall under a common system for quality and a common testing process (whether manual or automated). That system should probably be the work of a centralized group within QA, while the testing itself can be distributed (unit testing, for example).
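To illustrate the idea of a centralized process with distributed execution, here is a minimal sketch (all names hypothetical, not from your organisation): the prepare/execute/report steps are fixed centrally, while each platform team plugs in its own test method.

```python
from typing import Callable

def run_common_process(platform: str, execute: Callable[[], bool]) -> str:
    """One centralized process; the execute step is platform-specific."""
    prepared = True                      # prepare environment (stubbed here)
    passed = prepared and execute()      # platform team's own test method
    return f"{platform}: {'PASS' if passed else 'FAIL'}"  # common report format

# Each team supplies its own method, but every run follows the same process.
print(run_common_process("mainframe", lambda: True))
print(run_common_process("telephony", lambda: True))
```

The point of the sketch is only the shape: the central QA group owns `run_common_process`, and the per-platform lambdas stand in for whatever manual or automated method each team actually uses.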
You mention that integration testing is going to be the norm rather than a full cycle of system testing. Presumably this means you will have a regression cycle, although you seem to suggest you will not. In that case, centralization would help coordinate the various projects, especially if your build cadence is high (meaning fast-paced). Even for a single project, your different platforms will require somewhat different methods of testing, even though the processes will generally be the same. Do you see the distinction I am not quite so elegantly putting into words? The methods will be distributed, but the processes will be centralized.
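On the regression point: since your programmes share common code, a centralized regression suite over that shared code is what catches one programme's change breaking another's. A minimal sketch, with an entirely invented shared routine standing in for your real common code:

```python
def shared_rounding(amount_pence: int) -> int:
    """Hypothetical shared routine: round to the nearest 10 pence."""
    return ((amount_pence + 5) // 10) * 10

def regression_suite() -> None:
    """Pins down behavior that every programme relies on, so a change
    made for one programme cannot silently alter it for the others."""
    assert shared_rounding(104) == 100   # behavior relied on by Programme A
    assert shared_rounding(105) == 110   # behavior relied on by Programme B

regression_suite()
print("shared-code regression suite passed")
```

Each programme contributes the assertions it depends on, but the suite runs centrally against every candidate build, which is exactly the coordination a centralized group provides.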
You mention that you see development and testing "grinding to a halt", but if you couple the testing cycle to the software cycle, this is less likely to happen, in my opinion. At first glance coupling might seem to hasten a halt, but I have found it does not, because you have checkpoints or phases at each major shift in the models. A full regression cycle cannot begin, for example, without a unit-tested build. A lot of this depends on your environment as well: since you say there is no pre-production environment, you run into the issue of your development or testing environments being dissimilar to production.
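The checkpoint idea can be sketched in a few lines (names are illustrative, not any particular tool): a build is promoted into the regression phase only if its unit tests passed, so the pipeline stops at the phase boundary rather than letting untested code drift forward.

```python
from dataclasses import dataclass

@dataclass
class Build:
    version: str
    unit_tests_passed: bool

def promote_to_regression(build: Build) -> str:
    """Checkpoint between life-cycle phases: refuse untested builds."""
    if not build.unit_tests_passed:
        raise RuntimeError(f"build {build.version} blocked: unit tests not passed")
    return f"build {build.version} entered regression cycle"

print(promote_to_regression(Build("1.4.2", unit_tests_passed=True)))
```

It is the refusal path that prevents the halt you fear: a bad build is rejected at the checkpoint, while the rest of the pipeline keeps moving on builds that did pass.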
I think a lot of this depends on the nature of your testing (particularly whether or not you are using automation), the types of environments, the staffing resources you have, and the testing methodology you are going to follow given that staffing and those environments.