We have applications written on the Solaris platform that we are trying to port to Red Hat Enterprise Linux.
No test cases were built for these applications when they were developed on Solaris, and they have been in use for years and are stable.
The port requires us to somehow "verify" that it was successful: the app on Linux should behave the same way it did on Solaris. As you can imagine, this is difficult, since no test scripts were ever written for the Solaris app.
Now, I have a couple of thoughts on how to verify this port:
1) Write manual test scripts based on the Solaris app and run them against the Linux app when the port is finally done.
2) Use a tool to record scripts on the Solaris app and play them back on the Linux app.
My questions are:
1) Do you have any ideas for this kind of problem? Management now realizes the need for building a testing framework and is open to learning and implementing a concrete test philosophy across all product lines, starting from what we learn in this round. We would like to leverage automated testing; I'm currently looking at keyword-driven testing methodologies and learning about the pitfalls of automated testing. We don't mind investing in and beefing up our QA department with the right tools and solutions to solve this problem and the ones down the road.
2) Are there any tools that can record on Solaris (Motif-based widgets) and replay on Linux (also Motif)?
3) Our company also does a lot of web development. It would be nice to have a tool that can test a desktop environment (cross-platform) and be used for the web as well, but with the same script syntax and language (which would help in building a keyword-driven framework).
But my main interest is to find out whether any of you gurus have other suggestions for verifying this port.
Successful implementation of an extensible test automation solution calls for a number of interrelated prerequisites:
• A structured test automation framework based on maintainable and reusable modules
• A solid software quality assurance infrastructure consisting of well-defined processes that support manual as well as automated testing efforts
• Well designed test cases that are properly scoped, independent of each other, and independent of test bed environment issues
• Creation of an abstraction layer between the AUT and test tools
• Adoption of, and adherence to established coding standards and implementation guidelines
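To make the abstraction-layer idea concrete, here is a minimal sketch in Python of the keyword-driven pattern mentioned above. All names in it (run_keyword, KEYWORDS, the click/type_text handlers) are hypothetical; in a real framework, the handlers would delegate to whatever GUI driver or commercial tool API you adopt, so that test scripts never touch the tool directly and the tool can be swapped without rewriting the tests.

```python
# Hypothetical keyword-driven abstraction layer: test scripts speak only
# in keyword names; handlers hide the underlying GUI tool's API.

def click(target):
    # Placeholder: a real handler would call the GUI driver here.
    print(f"click {target}")

def type_text(target, text):
    print(f"type '{text}' into {target}")

# The keyword table maps test-script vocabulary to implementations.
KEYWORDS = {
    "Click": click,
    "TypeText": type_text,
}

def run_keyword(name, *args):
    """Dispatch one keyword step by name."""
    if name not in KEYWORDS:
        raise ValueError(f"Unknown keyword: {name}")
    KEYWORDS[name](*args)

# A test case becomes a plain data table of steps, which is what makes
# the scripts readable to non-programmers and portable across tools:
steps = [
    ("Click", "LoginButton"),
    ("TypeText", "UserField", "alice"),
]
for step in steps:
    run_keyword(*step)
```

The payoff of the extra layer is that a port like Solaris-to-Linux only requires reimplementing the handlers, not the test cases themselves.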
Several criteria must be met for a manual test case or script to be considered for automation. A checklist for deciding what to automate asks the following questions:
• Does the goal have a large payback in time and resources?
• Is the test case to be automated critical?
• Is the test case to be automated well defined?
• Is the test case to be automated repeatable?
• Is the target application functionality and UI stable?
• Are the expected results of the test case known and measurable?
• Does the test case contain a long list of mundane mental and tactile activities?
• Does the functionality being validated have multiple paths?
• Is there complex business logic behind the functionality being validated?
• Is the functionality being validated affected by locale- or culturally-dependent rules or data?
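The checklist above can be applied mechanically when triaging a large backlog of candidate test cases. Here is a hypothetical sketch: the criterion strings and the idea of a simple yes-fraction score are illustrative only, not a prescribed weighting.

```python
# Hypothetical scoring of a candidate test case against the checklist.
# A higher fraction of "yes" answers suggests a better automation candidate.

CRITERIA = [
    "large payback in time and resources",
    "test case is critical",
    "test case is well defined",
    "test case is repeatable",
    "functionality and UI are stable",
    "expected results known and measurable",
    "long list of mundane activities",
    "multiple paths through the functionality",
    "complex business logic",
    "locale- or culture-dependent rules or data",
]

def automation_score(answers):
    """answers: dict mapping criterion -> True/False.
    Returns the fraction of criteria answered 'yes'."""
    yes = sum(1 for c in CRITERIA if answers.get(c, False))
    return yes / len(CRITERIA)

# Example: a candidate satisfying the first seven criteria.
answers = {c: True for c in CRITERIA[:7]}
print(f"{automation_score(answers):.1f}")  # prints 0.7
```

In practice a team would weight the criteria (e.g. stability and repeatability as hard prerequisites), but even an unweighted pass helps rank where the automation effort pays off first.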
As for test tools that support Solaris and Linux as well as web-based apps, I believe Segue SilkTest supports the various *nix flavors as well as the Win32 OSes.
Some general comments on validating a ported application... I have had experience (both good and bad) with apps ported from Windows to Mac OS. While a common set of test cases for validating the core AUT functionality can be shared between the two flavors of the AUT, there will also be test cases specific to OS-specific features, functionality, and UI attributes, assuming the port was done properly.
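One way to organize that split of shared versus OS-specific cases is a shared core suite with per-platform extensions. A minimal sketch using only the standard library's unittest (all class and test names here are hypothetical placeholders for real AUT checks):

```python
# Hypothetical layout: CoreTests runs against every flavor of the AUT;
# platform-specific classes extend it with OS-specific checks and are
# skipped automatically when run on the wrong platform.
import sys
import unittest

class CoreTests(unittest.TestCase):
    """Test cases shared by the Solaris and Linux flavors of the AUT."""
    def test_core_calculation(self):
        # Placeholder for a real core-functionality check.
        self.assertEqual(2 + 2, 4)

@unittest.skipUnless(sys.platform.startswith("linux"), "Linux-only checks")
class LinuxSpecificTests(CoreTests):
    """Re-runs the core suite plus Linux-specific feature checks."""
    def test_linux_feature(self):
        # Placeholder for a check of a Linux-specific behavior.
        self.assertTrue(sys.platform.startswith("linux"))

# Run just the shared core suite:
suite = unittest.TestLoader().loadTestsFromTestCase(CoreTests)
unittest.TextTestRunner(verbosity=0).run(suite)
```

The design choice is that subclassing makes each platform suite re-execute the shared cases in its own environment, which is exactly the "same behavior on both OSes" verification the original poster is after.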