Large datapools in RPT
I'm trying to create performance tests for an application with a lot of users. The scripts themselves are simple, but the number of concurrent users, combined with data that can't be reused, means huge datapools, and RPT just can't handle anything over 10,000 records. I found code on the IBM site that cycles through a file and pulls values out of it, but it only works with a single-column CSV file, which is worthless for my case. I'm working on updating that code to be actually useful, but I figured I'd ask whether someone has already done this before I spend too much time on it.
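For reference, what I have in mind is roughly the following: a plain Java helper (the class name `CsvDatapool` and its methods are my own invention, not anything from RPT or the IBM sample) that loads a multi-column CSV once and hands rows out round-robin in a thread-safe way, so it could be shared across virtual users from RPT custom code. Just a sketch, not tested inside RPT:

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: load a multi-column CSV once, then hand out rows
// round-robin so many concurrent virtual users can share one file.
public class CsvDatapool {
    private final String[] headers;                 // column names from the first line
    private final List<String[]> rows = new ArrayList<>();
    private final AtomicInteger cursor = new AtomicInteger(0);

    public CsvDatapool(String path) throws IOException {
        try (BufferedReader in = new BufferedReader(new FileReader(path))) {
            String line = in.readLine();            // first line = column headers
            if (line == null) throw new IOException("empty CSV: " + path);
            headers = line.split(",", -1);
            while ((line = in.readLine()) != null) {
                if (!line.isEmpty()) rows.add(line.split(",", -1));
            }
        }
    }

    // Next data row, wrapping around when the file is exhausted.
    public String[] nextRow() {
        int i = Math.floorMod(cursor.getAndIncrement(), rows.size());
        return rows.get(i);
    }

    // Value of a named column within a row returned by nextRow().
    public String get(String[] row, String column) {
        for (int c = 0; c < headers.length; c++) {
            if (headers[c].equals(column)) return row[c];
        }
        throw new IllegalArgumentException("no such column: " + column);
    }
}
```

The naive `split(",")` doesn't handle quoted fields containing commas, so it would need a real CSV parser for messy data, but for generated test data it should do.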
So... has anyone written code to pull data from CSV files instead of using RPT's datapools?