automating database restore
User (email@example.com) posted:
Can I get any feedback on automating the restoration of a database on the
server side?
Currently, I have a number of packaged scripts in suites that run on the
client workstation.
I essentially read and write to the registry to track pass/fail of each
suite, and in the event of a failure I launch Outlook to send an email and
stop the run of the Master suite.
Long term, it would be nice to restore a backup of the DB data before
continuing with the next suite in a Master suite.
Has anyone written scripts to log in remotely, etc., or would I have to
go through the Test Agent on the server?
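For illustration, the pass/fail bookkeeping and stop-on-failure logic described above could be sketched roughly as below (Python rather than SQABasic; a plain dictionary stands in for the registry keys, and the Outlook step is only a comment — all names here are hypothetical, not Robot APIs):

```python
# Sketch of the poster's bookkeeping: record each suite's pass/fail
# (a registry write in the original setup) and stop the Master run
# on the first failure.

results = {}

def record_result(suite, passed):
    """Store the suite's pass/fail flag (registry write in the original)."""
    results[suite] = "PASS" if passed else "FAIL"

def run_master(suites, run_suite):
    """Run suites in order; halt the Master run on the first failure."""
    for suite in suites:
        passed = run_suite(suite)
        record_result(suite, passed)
        if not passed:
            # In the original setup, this is where Outlook is launched
            # to send the failure email before stopping the run.
            return False
    return True
```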
Re: automating database restore
User Bill Kidwell (firstname.lastname@example.org) posted:
Our DBA wrote a set of scripts that "duplicate" a master schema. No matter
how much we change and screw up the slave schema, we can drop it and
recreate it from the master schema. This works great for us. I wrote a
script that kicks off the process in a batch file and waits for it to close.
I like the duplicated schema better because I can make changes to the
tables, stored procedures, etc. without worrying about whether the backup
works with the new structure. This may be true for backups as well.
Speed is probably faster than restoring a backup, but I do not know for
sure. We are currently using Oracle 8.1.7.
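The kick-off-and-wait wrapper described above could look something like this sketch (Python instead of a batch-file launcher; the command list is a hypothetical stand-in for the DBA's recreate-schema job):

```python
import subprocess
import sys  # used in the usage example below

def refresh_slave_schema(cmd):
    """Launch the recreate-schema job (a batch file in the original
    setup) and block until it exits; report success via the exit code."""
    completed = subprocess.run(cmd)
    return completed.returncode == 0
```

A test harness would call `refresh_slave_schema([...])` between suites and skip the next suite if it returns False.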
Re: automating database restore
User (email@example.com) posted:
Here's how we do it at my current job:
We normally have separate dev, test, and production databases. If the
database structure changes, a DBA makes the changes to each
database/server. They use DDL scripts for most changes. Once in a while,
someone will call for a "refresh" of dev and/or test databases, which is a
restore from the most recent production backup. This is done when data
gets stale or testing munges the data too much.
The strategy I use in automating functional tests for client/server apps
is to create my own test data and/or use existing production data that is
"always there". This is all driven from the client side, using Robot.
The exact approach depends on how dependable certain "base" data is, and
how difficult or time-consuming it is to create on the fly (we have a lot
of complex data relationships spread across databases). My scripts start
by deleting any existing test data. Test cases are ordered so that the
data is built up in layers. I run "ADD" tests first (this creates the
initial test data), then "UPDATE" tests, then "DELETE" tests. In other
words, most of the time data is added via the user interface, because that
is part of the functional testing. In cases where some base data is
required (data that has nothing to do with the test itself), I use SQL to
insert it.
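The layered ordering described above — delete leftovers first, then run the phases in sequence so later tests can rely on data the earlier ones created — could be sketched like this (the phase runner and in-memory "db" are illustrative, not part of Robot):

```python
# Sketch of the layered test-data strategy: clear any leftover test
# data, then run ADD, UPDATE, and DELETE phases in a fixed order.

def run_layered(db, phases):
    """phases: mapping of phase name -> list of test functions taking db."""
    db.clear()                      # delete any existing test data first
    for phase in ("ADD", "UPDATE", "DELETE"):
        for test in phases.get(phase, []):
            test(db)
    return db
```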
Note that we also do data validation testing prior to rolling out a new
version, so the functional testing does not require using "live" data.
If you need to run SQL scripts on the server side using Robot, this can
easily be done using the Windows Telnet application and the InputKeys
function. I write the commands to a file on the server, then execute them
and redirect stdout and stderr to a file. I then send commands to check
whether the job is finished (in a loop), and when it's done, I send a
command that sends me an email message. When the expected message appears
(in Outlook), I break out of the loop. If it is not found within the
timeout period, I error out.
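The poll-until-done loop with a timeout that this describes could be sketched as follows; a callable stands in for the Telnet status check and the Outlook mailbox scan, which are specific to the original setup:

```python
import time

def wait_for_job(is_finished, timeout_s, poll_s=1.0):
    """Poll until is_finished() is true; raise on timeout.
    In the original setup, each poll sends a status command over Telnet,
    and the 'finished' signal is an email arriving in Outlook."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if is_finished():
            return True
        time.sleep(poll_s)
    raise TimeoutError("server-side job did not finish in time")
```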