Ideal workflows for website development
This is more of a project-management question than a QA question. Our current process is: when a bug is fixed or a feature is implemented, the developer (there's only one, we're small) releases it to a DEV server, and QA tests the fix or feature there. Bugfixes and features accumulate on DEV until the end of a sprint, then get collectively released to STAGE, where QA verifies that the deployment was performed correctly by testing all the bugfixes and features again. Then we do the same thing once more, releasing everything to PRODUCTION, the live site, where QA verifies the bugfixes and features a third time.
In effect, QA tests the same bugfixes and features three times.
I'm QA, and this is my first website development project. Is this how website development is normally run? How could it be organized better? I'm not familiar with the development side of things since I'm just a tester, but shouldn't there be some kind of version control so that everything only has to be tested once, maybe twice? This seems remarkably inefficient to me as QA, but I don't know of a better way to do it. Everyone here wants to improve our processes but doesn't know how. Thanks.
Oh, boy. This process is actually an improvement on some I've seen, even though it does require extra testing effort. This is where automation comes in handy: if the only thing that needs to change between environments is which host the site is on, re-running the tests becomes a lot less hassle (a rough sketch of what I mean is below).
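To make that concrete, here's a minimal sketch of host-parameterized tests, assuming Python with pytest and requests. The environment names, URLs, and pages checked are invented for illustration; the point is that the same suite runs unchanged against DEV, STAGE, or PROD.

```python
# Minimal sketch of host-parameterized smoke tests (pytest + requests).
# Environment names and URLs below are made up for illustration.
import os
import pytest
import requests

BASE_URLS = {
    "dev": "https://dev.example.com",
    "stage": "https://stage.example.com",
    "prod": "https://www.example.com",
}

@pytest.fixture
def base_url():
    # Pick the target environment from an environment variable, so the
    # identical tests can be pointed at DEV, STAGE, or PROD.
    env = os.environ.get("TARGET_ENV", "dev")
    return BASE_URLS[env]

def test_homepage_loads(base_url):
    response = requests.get(base_url, timeout=10)
    assert response.status_code == 200

def test_login_page_loads(base_url):
    response = requests.get(f"{base_url}/login", timeout=10)
    assert response.status_code == 200
```

Running it against staging would then just be something like `TARGET_ENV=stage pytest smoke_tests.py`, with no changes to the tests themselves.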
Here's the sticking point: if your website is one where it's not feasible to simply copy the entirety of the dev code to staging, and the entirety of the staging code to prod, this is your best option. Most websites hit this problem: it's rare that a website update can be managed by copying all the code and running a database upgrade script. In my experience, what tends to happen is that the dev site accumulates a collection of partially complete changes that are too big for a single sprint, so a manual merge gets done to the staging site, where further fixes happen because of the different codebase and environment (things that were masked by the dev site being in a semi-permanent transitional state). Moving from staging to the production site is usually clean, but there's invariably something that isn't quite the same, which causes further issues.
So the short version is: yes, you're stuck with it, and you may want to look into automating as many of your tests as possible to reduce the amount of repetitive work you're doing.
I think this is true for most organizations. In ours we are essentially testing four times, but most of it is automated. We have the following workflow.
Dev instances (each of our devs has one) -> CI -> QA -> Stage -> Live/Production
Manual testing happens on Dev, QA, Stage, and Production, although the amount and focus of the testing changes. We do all of this in one sprint; however, we do less work each sprint. Each sprint is essentially one tiny bug fix, or a partial feature that is hidden from the public until it is complete; then we enable it via a feature flag a few days after the release, once we're confident the system is stable (a rough sketch of the flag idea is below).
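For anyone unfamiliar with feature flags, here's a minimal sketch of the idea in Python. The flag names and the JSON-file store are assumptions for illustration only; a real setup might keep flags in a database or a dedicated flag service.

```python
# Minimal feature-flag sketch. Flag names and the JSON store are invented
# for illustration; swap in whatever storage your system actually uses.
import json

def load_flags(path="feature_flags.json"):
    # Example file contents: {"new_checkout": false, "redesigned_nav": true}
    with open(path) as f:
        return json.load(f)

def is_enabled(flags, name):
    # Unknown flags default to off, so half-built features stay hidden.
    return bool(flags.get(name, False))

def render_checkout(flags):
    # The partially built feature ships with every release but stays dark
    # until the flag is flipped, days after deployment if need be.
    if is_enabled(flags, "new_checkout"):
        return "new checkout page"
    return "old checkout page"
```

The payoff is that enabling the feature later is a configuration change rather than another deployment, which is why we can wait until the system has proven stable.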
On Dev, we're mostly concerned with getting devs early feedback on usability and overall design. Testing here is mostly exploratory and focused on getting quick feedback.
Our CI systems run our automated tests. Smoke tests run on every checkin, and the automated regression suite runs nightly. At the end of the day there's a ready-to-deploy build for QA, and builds are labeled Good or Bad based on the results (a sketch of that labeling step is below).
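As an illustration of the Good/Bad labeling, here's a rough sketch in Python. The file names, the `smoke` marker, and the label format are assumptions, not our actual scripts; the idea is just that the smoke run's exit status decides the label.

```python
# Rough sketch: label a build Good/Bad based on the smoke-test result.
# File names, the "smoke" marker, and the label format are illustrative.
import subprocess
import sys

def run_smoke_tests():
    # Run only tests marked as smoke tests; pytest exits non-zero on failure.
    result = subprocess.run(["pytest", "-m", "smoke", "tests/"])
    return result.returncode == 0

def label_build(build_id, passed):
    label = "GOOD" if passed else "BAD"
    with open("build_labels.txt", "a") as f:
        f.write(f"{build_id}\t{label}\n")
    return label

if __name__ == "__main__":
    build_id = sys.argv[1] if len(sys.argv) > 1 else "unknown-build"
    passed = run_smoke_tests()
    print(f"Build {build_id} labeled {label_build(build_id, passed)}")
    sys.exit(0 if passed else 1)
```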
Our QA systems are completely controlled by QA. We run the deployment scripts ourselves and do whatever additional manual testing is necessary.
Staging is a prerelease test. Our testing consists of smoke tests. If the system architecture changes, we may additionally do some load and stress testing (see the rough load probe below).
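For the load testing, a dedicated tool is the usual route, but even a tiny script gives a rough signal. Here's a sketch in Python; the URL, request count, and concurrency are invented for illustration.

```python
# Very rough load probe (requests + a thread pool). A real load test would
# use a dedicated tool; this just hammers one URL and reports timings.
import time
from concurrent.futures import ThreadPoolExecutor
import requests

URL = "https://stage.example.com/"   # assumed staging URL
REQUESTS = 200
CONCURRENCY = 20

def hit(_):
    start = time.time()
    response = requests.get(URL, timeout=30)
    return response.status_code, time.time() - start

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    results = list(pool.map(hit, range(REQUESTS)))

errors = sum(1 for status, _ in results if status >= 400)
times = sorted(elapsed for _, elapsed in results)
print(f"errors: {errors}/{REQUESTS}")
print(f"median: {times[len(times) // 2]:.3f}s  slowest: {times[-1]:.3f}s")
```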
On production, we mainly just check that the deployment went okay, run some smoke tests, and go out for celebratory donuts.
We don't do Agile. And we don't do sprints. And it seems like we don't use quite the same process of "testing 3 times".
Our Developers build and unit test on their own systems. When features and fixes are ready, they are collected into internal builds.
Internal builds are handed off to QA, who install and test them on dedicated QA test systems.
When released to Production, QA performs what we call "Release Testing" on the Production systems. These are seldom exactly the same tests that have already been performed. Usually they are a subset, sometimes with some migration-testing activities added. Basically, at this point we are only trying to prove to ourselves that what has already been tested has been installed and configured correctly in Production. (We've already tested everything on the QA systems; we don't want to test it all again.)
see: All Things Quality: QA Q and A - Release Tests
Joe - I envy you that process... Where I am, they don't even have functioning source control for their web development process! Plus, it's a classic ASP site that isn't handled as distinct builds, and the dev site is also where the developers do their work (not on their own systems), so there's no telling what's going to be there from one day to the next (debug code appears and disappears, things that were working break and then start working again...). As a result, bug fixes get tested there to check that they work as they should. Then they get tested on the staging site to check the integration, because the two environments are so different. The production site test is usually a smoke test, because staging is kept as close to production as possible.
We're working on improving the flow here, but it's a long process - I'm the first dedicated tester they've ever had, so... It's an interesting journey.