We've been told by Compuware Sales that test cases are built/modified mainly by recording. Programming should only be necessary to put the scripts, which are automatically produced through recording, together. Does this work in practice, or do you build your test cases mainly by programming code?
You'll find it's usually a mix of both. If a test required you to enter five sets of information into the same form, it would be long-winded to record yourself performing the action five times.
To make things more efficient, you'd record it once and then go into the code and wrap it in a loop, for example.
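In a general-purpose language the idea looks something like this, a minimal sketch where `fill_form` is a hypothetical stand-in for the steps the recorder would capture once:

```python
# Hypothetical stand-in for the recorded steps: type the values into
# the form and submit it. In a real tool this would be the generated
# playback code; here it just reports what it would have done.
def fill_form(name, policy_id):
    return f"saved {name} ({policy_id})"

# Instead of recording the same data entry five times, record it once
# and loop over the five data sets.
records = [
    ("Alice", "P-001"),
    ("Bob", "P-002"),
    ("Carol", "P-003"),
    ("Dave", "P-004"),
    ("Eve", "P-005"),
]

results = [fill_form(name, pid) for name, pid in records]
print(results)
```

The same recorded actions run five times with different data, which is the "record once, then code" mix described above.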
"does this mean that generally it is possible to record any operation and the scripts are usually produced correctly"
Depending on the state of your application and the phase of the moon... sure!
Most of the time you'll end up with a completely functional script within minutes of starting. Just turn Learn on, perform your actions, and turn it off again. But then you'll want to add checkpoints to verify that text, bitmaps, menus, control items, etc. are being displayed properly within your app.
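Conceptually, a checkpoint is just a comparison of what the app actually shows against what you expect, failing the run loudly on a mismatch. A minimal sketch (not QARun's API, just the idea):

```python
# Hypothetical text checkpoint: compare the text the app displays
# against the expected value and fail the test run on a mismatch.
def text_checkpoint(actual, expected):
    if actual != expected:
        raise AssertionError(
            f"checkpoint failed: got {actual!r}, expected {expected!r}"
        )
    return True

# A passing checkpoint is silent; a failing one stops the script.
text_checkpoint("Total: $100.00", "Total: $100.00")
```

Bitmap, menu, and control checkpoints follow the same pattern, only the "actual" value comes from a screenshot or control property instead of a text read.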
You'll also probably get concerned about synchronizing the replay once it fails a couple of times because a window opened too slowly or a field wasn't populated fast enough... you'll have to code this.
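The usual fix is a polling wait: retry a condition until it comes true or a timeout expires, instead of assuming the window is already there. A hedged sketch of that pattern (the `window_exists` call in the comment is hypothetical):

```python
import time

def wait_until(condition, timeout=10.0, interval=0.25):
    """Poll `condition` until it returns True or `timeout` seconds pass.

    Returns True if the condition was met in time, False otherwise.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# Example use against a slow-opening window (window_exists is a
# hypothetical stand-in for whatever your tool provides):
#
#   if not wait_until(lambda: window_exists("Policy Details"), timeout=30):
#       raise TimeoutError("Policy Details window never appeared")
```

Waiting on a condition rather than sleeping a fixed number of seconds keeps the replay fast when the app is quick and tolerant when it is slow.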
Then you'll also have to work around why so many object map entries are being created for the same window or control. This falls more under system configuration, but it can be added to individual scripts for specialized replay.
There are a ton of different components to add to a script to make it more functional... many of them require coding.
In my world, we have taken our test cases and created a data-driven solution. For example, we have roughly 2000 test cases we use on an application that generates insurance illustrations for our field agents. Since these test cases are saved within the illustration application, I merely built an MS Access database to pull data out of the system and create a test case list, which is used to drive the automation.
On the QARun scripting side, I have one set of scripts that handle the actual testing process. I wanted a scripting solution that was reusable and could adapt to frequent changes in the application. Thinking about recording 2000 scripts that all did the same thing didn't thrill me, and you know how much maintenance that would create... So what I did was make the scripts as generic as possible: I identified the procedures needed to run a test and turned them into scripts that QARun now uses.
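The shape of that setup can be sketched generically: a test case list (here inline, though it could just as well come from a database query) drives a small set of reusable procedures, so adding case number 2001 means adding a data row, not recording a new script. All names below are hypothetical illustrations, not the poster's actual scripts:

```python
# Generic, reusable procedures -- written once, shared by every test case.
def open_illustration(case):
    return f"opened {case['id']}"

def verify_results(case):
    return f"verified {case['id']}"

# Registry mapping step names (as stored in the test case data) to code.
PROCEDURES = {
    "open": open_illustration,
    "verify": verify_results,
}

# The test case list that drives the run. In the setup described above
# this would be pulled from the Access database, not hard-coded.
test_cases = [
    {"id": "TC-0001", "steps": ["open", "verify"]},
    {"id": "TC-0002", "steps": ["open", "verify"]},
]

def run(case):
    """Execute a test case by dispatching each named step to a procedure."""
    return [PROCEDURES[step](case) for step in case["steps"]]

log = [run(c) for c in test_cases]
print(log)
```

The payoff is maintenance: when the application changes, you fix one procedure instead of touching thousands of recorded scripts.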
As for what your vendor/salesperson said: to do automation right (and this is a subjective point of view), recording is roughly 10% of the overall automation development; the other 90% will be coding. If you just record and play back, you end up creating a bunch of throw-away scripts, and that costs time and money.