I'm a test engineer at a company that has been combining the System and Integration test phases for every release. The test group has always called out that the two phases should be separated, with Development owning System Test and Testing owning Integration. Development, however, does not feel the same way.
For some background: we've always combined the two phases because Testing had a bigger lab with more equipment, so somehow it made sense to everyone to have Testing run both.
We recently moved our lab and took the opportunity to call out that System and Integration could finally be separated. System Test is now owned by Development and Integration is owned by Testing.
However, even though we have separated the phases, Testing is still doing all the work: Development wants to run Testing's integration test scripts.
Does anyone know what the industry standard is for these two phases and who owns what? What kind of testing should be done at the system level?
Re: System Test
Hmm... that's a new take on things. Usually it is the Test group that does the System Test (meaning end-to-end, business-style, and other system-wide testing) and Development that does the Integration (meaning combining functional components into a whole application). So in your arrangement you would have Test doing the integration-style tests that should be done by Development.
Now to answer your question: yes, Development should be held accountable for doing some level of testing of the product before it comes to the Test group. This is an internal quality gate that reduces the number of basic bugs/defects/issues that cause headaches for the Test group. By doing simple checks of their own code, Development can cut rework and delay time for the Test group. I find nothing more egregious and annoying than basic defects (like an edit field that does not do overflow handling, or links on a webpage that return 404 or 510 errors due to lack of stubbing) that Development should have caught before handing the build to Test. Ten minutes of their time can save four hours of yours.
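To make the "ten minutes of their time" concrete: the kind of pre-handoff check described above can be as small as a unit test per input handler. A minimal sketch in Python (the `truncate_field` helper and its 255-character limit are hypothetical, purely to illustrate the overflow-handling example):

```python
def truncate_field(value: str, max_len: int = 255) -> str:
    """Clamp user input to the field's maximum length instead of overflowing."""
    return value[:max_len]

def test_overflow_is_handled() -> None:
    # The basic check Development can run before the build reaches Test:
    # oversized input must come back clamped, never passed through raw.
    oversized = "x" * 10_000
    assert len(truncate_field(oversized)) == 255
    # Normal-sized input is untouched.
    assert truncate_field("hello") == "hello"

test_overflow_is_handled()
```

A handful of checks like this per module is usually enough to catch the "basic defect" class before it ever costs the Test group a cycle.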
I think you have a situation where the development team is 'above' doing testing. They need to be kicked in the ***. As an ex-developer myself it ****es me off when my cohorts are sloppy like that. They need to look at their job descriptions; I'd bet good money that "testing" is part of their job tasks/responsibilities.
"I know you are, but what am I?!" - Pee-wee Herman
Re: System Test
I would agree that your company seems to have it backwards. The test group is mainly responsible for the system as it functions in the production environment; integration is mainly the focus of Development: "does my module work with its connecting components?" Of course, I wouldn't say that anyone is wrong. Maybe this process will work for you.
Re: System Test
At the company where I work now, and at the companies where I have worked in the past, the split has been:
Development did Unit testing and Integration testing
QA/QC did System Application testing
We defined integration testing as testing the changed module together with the modules it integrates/interfaces with, and it was usually done in more of a white-box mode.
We defined System Application testing as testing that follows the path used by the business/clients, covering all modules in a function/application. This was always black-box testing.
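A tiny sketch of that white-box vs. black-box distinction (all module and function names here are invented for illustration): the integration test knowingly exercises the seam between two components, while the system/application test drives only the business-facing entry point and asserts on externally visible behavior.

```python
# Hypothetical components: a pricing module and an order module that calls it.
PRICES = {"A100": 9.99, "B200": 24.50}

def price_item(sku: str) -> float:
    return PRICES[sku]

def place_order(sku: str, qty: int) -> dict:
    # The order module integrates with the pricing module here.
    return {"sku": sku, "qty": qty, "total": round(price_item(sku) * qty, 2)}

# White-box integration test: verifies the order/pricing seam directly,
# using knowledge of the internal call structure.
def test_order_uses_pricing_module() -> None:
    assert place_order("A100", 2)["total"] == round(price_item("A100") * 2, 2)

# Black-box system test: follows the business path through the public
# entry point only, checking the result a client would see.
def test_customer_can_place_order() -> None:
    order = place_order("B200", 1)
    assert order["total"] == 24.50

test_order_uses_pricing_module()
test_customer_can_place_order()
```

The real tests are of course much larger, but the division of knowledge is the same: the integration test is allowed to know how the modules connect; the system test is not.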
"I have not failed. I've just found 10,000 ways that won't work." --Thomas Edison
Re: System Test
I looked at your question earlier and was puzzled about the right way to answer. Frankly, I find it hard to believe how silly some company rules are: the conversation should not be about "who owns what" but about "who can support whom" in order to get the job done.
Basically, QA should work in parallel with the developers, but independently enough that the thinking process does not get corrupted. The whole idea is that if one group thinks in terms of a solution, and the other group thinks in terms of how to figure out whether the solution works, there is less chance of ending up biased toward approving a less-than-effective solution.
Personally, I like to minimize end-loading a project with testing, and to do as much of the work early on, when there is more opportunity for preventive/corrective work that does not add rework cost to the project. If, under your company's policies, QA testing is strictly isolated to a limited stretch of the SDLC, that opportunity is missed.
In my ideal model (a concept I am working toward, not something I work with today), the QA team develops test cases and data scripts at the same time as the developers design and code the application. The idea is to use the scripts to drive test frameworks that validate the code as it is being developed. The developers execute the tests as part of the coding effort, so a much cleaner product is delivered. I really don't care how many tests they need to run to get the proper results, so long as there are no major flaws left by the time the code is finally integrated and handed off to Testing.
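One way to picture that model (a hedged sketch; the function under test, the tier thresholds, and the case format are all invented for illustration): QA authors the data cases in parallel with coding, and the same tiny harness runs them while the developer works and again at hand-off.

```python
# QA-authored data cases, written in parallel with development:
# (input amount, expected discount) pairs.
discount_cases = [
    (0, 0.0),       # no purchase, no discount
    (100, 5.0),     # 5% tier starts at 100
    (1000, 100.0),  # 10% tier starts at 1000
]

# Developer's implementation, validated continuously against QA's cases.
def discount(amount: float) -> float:
    if amount >= 1000:
        return amount * 0.10
    if amount >= 100:
        return amount * 0.05
    return 0.0

# Minimal framework: executes QA's data script against the code as it is
# developed, returning any (input, expected, actual) mismatches.
def run_cases(fn, cases):
    return [(x, exp, fn(x)) for x, exp in cases if fn(x) != exp]

assert run_cases(discount, discount_cases) == []
```

The developer runs `run_cases` as often as needed during coding; Testing re-runs the identical cases at integration, so the hand-off criterion is already agreed on before the code arrives.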
The final testing should be black-box oriented, verifying that the functionality is complete and accurate when operated through the prescribed user interface. The fact that you execute it independently has nothing to do with your participation in unit test creation; it is simply an independent validation of the tested code that eliminates any bias a developer might have toward passing his/her code even if it has small flaws.
It sounds to me as though this basic model has been corrupted into a competitive/adversarial one that does not serve your company well. The next step will be collecting statistics that supposedly prove how bad things are, in order to prove the value of QA (I get a real sense of this type of thinking becoming more prevalent, and we should do what we can to stamp it out). The value of QA lies as much in minimizing the introduction of bugs as in catching and reporting the errors that find their way into the final product despite every attempt to prevent them. The fewer bugs the better: it demonstrates the value of careful QA in support of development.
If, as in your case, "testing does all the work" at the end of the SDLC, your company has achieved the worst possible state: you detect the most errors at the point where it is most expensive to go back and fix them. It may show that QA is very busy, but it also hides the opportunity cost of letting bugs get buried in integrated code, where they are harder to find and more likely to surface for end users.
So, while Lynne's model is the one most commonly used in the effective organizations I know of, any modifications should focus on improving the preventive side of catching potential bugs (starting, for example, with validating that all requirements are testable) and on streamlining development with QA-prepared test-framework test cases. In light of your recent reorganization it is probably not the answer you would like, but I think it is the answer you need to consider.