Testing database changes in QA
I'm currently looking for a way to implement a new process for my team whereby we can verify the changes committed to a database after executing test cases.
A developer changed a MySQL UPDATE statement as part of a defect fix; the statement should have updated only one row in a table. The developer tested the change successfully, and QA then tested it successfully as well, although they didn't check whether it affected other data in the DB.
I want to enable the QA team to run tests against the DB when changes like this arise, so we can confirm that there is no impact on other data and that the fix addresses only the particular defect in question. This will mean changing how we execute our tests and how we confirm that a defect, functional change or enhancement has been fully QA'd.
What I'd like to implement is as follows:
1. Database with good fresh test data (this is already in place)
2. Backup of the DB with good data (would be called something like PreTestDB)
3. A second DB where the tests would be executed against (called ExecTestDB)
4. Ability to take a snapshot of the DB
5. Execute the tests against ExecTestDB
6. Use a tool to compare ExecTestDB vs PreTestDB to check for tables, rows and anything else that may have changed during the test execution
7. Have the results of this exported to a file, i.e. HTML, Excel etc
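Steps 4–7 can be approximated with mysqldump plus a plain text diff. The sketch below is self-contained: it fabricates two tiny dumps standing in for PreTestDB and ExecTestDB so the compare-and-report step runs anywhere, while the commented mysqldump lines show how the real dumps would be produced (the connection details and table contents are assumptions, not from the original post):

```shell
#!/bin/sh
# In a real run, generate deterministic dumps of both databases, e.g.:
#   mysqldump --skip-dump-date --order-by-primary -u qa -p PreTestDB  > pre.sql
#   mysqldump --skip-dump-date --order-by-primary -u qa -p ExecTestDB > post.sql
# Here we fabricate two small dumps so the diff step runs standalone.

cat > pre.sql <<'EOF'
INSERT INTO customers VALUES (1,'Alice','active');
INSERT INTO customers VALUES (2,'Bob','active');
EOF

cat > post.sql <<'EOF'
INSERT INTO customers VALUES (1,'Alice','inactive');
INSERT INTO customers VALUES (2,'Bob','active');
EOF

# A unified diff shows exactly which rows changed during test execution.
# diff exits 1 when the files differ, so swallow that exit code.
diff -u pre.sql post.sql > db_changes.txt || true

cat db_changes.txt
```

`db_changes.txt` is plain text, which covers the export-to-file requirement (it can also be fed to any diff viewer or converted to HTML). mysqldump's `--skip-dump-date` and `--order-by-primary` options keep the dumps deterministic, so the diff shows only genuine data changes.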
Please bear in mind that this is MySQL running on Linux/Apache virtual servers, so some of the tools out there may have limitations. We've taken a look at Redgate, although it doesn't seem to have snapshot capability.
Any advice would be appreciated. Thanks,
In my experience, DB testing is a slippery slope. You'll start getting into territory where testing that one-time change is orders of magnitude more expensive than the change itself. It's also hard to predict what happens with live data, and testing internally with live data is very bad practice: you'll most likely break a lot of policies and laws in the process (think of the amount of personal, billing, and confidential info stored in the live DB).
My thoughts are:
1. Always have a rollback plan. The DB update script/migration should always come with an undo, and both the migration/update script and the undo script should be thoroughly code reviewed.
2. Testing should be done against a versioned database with clean data created by builders or factories. Using copies of, say, a QA database is bad because there can be bad data in there from a bad test or a bad build (that database should actually be wiped on every deploy), making the test invalid. Using copies of live data is bad because you violate laws and privacy policies (unless your app makes no promises and stores nothing sensitive).
3. Keep your DB architecture map up to date and map out its dependencies so you know where to perform regression.
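Point 1 above can be enforced mechanically: refuse to deploy any migration that lacks a matching undo script. This is a minimal sketch with a made-up migration for a hypothetical `orders` table (file names and SQL are illustrative, not from the original post); the pairing check at the bottom is the point:

```shell
#!/bin/sh
# Every change ships as a forward/undo pair of SQL scripts.

mkdir -p migrations

cat > migrations/0042_fix_order_status.sql <<'EOF'
-- forward: the defect fix, scoped to exactly one row
UPDATE orders SET status = 'shipped'
 WHERE order_id = 1001 AND status = 'pending';
EOF

cat > migrations/0042_fix_order_status.undo.sql <<'EOF'
-- undo: restores the value the forward script changed
UPDATE orders SET status = 'pending'
 WHERE order_id = 1001 AND status = 'shipped';
EOF

# Refuse to proceed if any migration has no matching undo script.
for f in migrations/*.sql; do
    case "$f" in *.undo.sql) continue ;; esac
    undo="${f%.sql}.undo.sql"
    [ -f "$undo" ] || { echo "missing undo for $f" >&2; exit 1; }
done
echo "all migrations have undo scripts"
```

Running this check in the deploy pipeline means a fix can never reach QA or production without its rollback path already reviewed alongside it.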
Database checkpoints. See QTP (and WinRunner before it), SilkTest, and Rational Functional Tester. This has been a standard feature of the big commercial tools for the better part of a decade.
Thanks for the excellent information.
You can easily diff the test DB against a snapshot or golden data set using Diff Kit, an open-source Java tool for diffing data structures: diffkit.org
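If a dedicated tool isn't an option and both databases live on the same MySQL server, a plain SQL symmetric-difference query can flag changed rows one table at a time. This is a sketch; `customers` and its columns are hypothetical placeholders for your real schema, and the query must be repeated (or generated) per table:

```sql
-- A row grouped with an identical row from the other database gets
-- COUNT(*) = 2; a row appearing in exactly one database gets 1.
SELECT MIN(src) AS only_in, id, name, status
FROM ( SELECT 'PreTestDB'  AS src, c.* FROM PreTestDB.customers  c
       UNION ALL
       SELECT 'ExecTestDB' AS src, c.* FROM ExecTestDB.customers c ) AS u
GROUP BY id, name, status
HAVING COUNT(*) = 1;
```

For a quick per-table smoke check before running the full query, `CHECKSUM TABLE PreTestDB.customers, ExecTestDB.customers;` tells you whether the two tables differ at all.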