Big breath…

I submitted a service request to Mercury reporting that 'db_execute_query' was generating an EXCEPTION_ACCESS_VIOLATION when the recordset returned by the query reached roughly 60,000+ records. The initial response was that the Readme file documents a record limit of 16,383, and I was asked to justify my need to work with such a large recordset. I responded that the record limit referenced in the Readme concerns "data-driven testing", i.e. the ddt_* functions, whereas I was referring to the database session functions, db_*. Also, I have successfully returned recordsets of 55,000 records or so from Oracle.

And my justification: I need to validate that data from our data warehouse is properly loaded into our datamart. I would like to create baseline files using 'db_execute_query' and 'db_write_records', then compare the datamart result set against the baseline using file_compare (or a separate diff utility). I don't want to use a Database Checkpoint (which fails here too, anyway) because I want the results in ASCII format, and database checkpoint files aren't ASCII. The sketch below shows the kind of script I have in mind.
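A minimal sketch of the baseline step, assuming the standard db_* and file_compare signatures from the TSL reference. The session name, DSN, table, key column, and file paths here are placeholders I made up for illustration:

    # Minimal sketch; session name, DSN, SQL, and paths are invented for illustration.
    static rc, rec_count;

    rc = db_connect("dw_session", "DSN=DataWarehouse;UID=user;PWD=pwd");
    if (rc != E_OK)
        report_msg("db_connect failed, rc = " & rc);

    # Run the extract; rec_count comes back with the number of records returned.
    rc = db_execute_query("dw_session", "SELECT * FROM fact_table ORDER BY key_col", rec_count);
    report_msg("records returned: " & rec_count);

    # TRUE = write column headers; NO_LIMIT = write every record to the ASCII file.
    db_write_records("dw_session", "c:\\baseline\\fact_table.txt", TRUE, NO_LIMIT);
    db_disconnect("dw_session");

    # Later, after dumping the datamart side the same way:
    rc = file_compare("c:\\baseline\\fact_table.txt", "c:\\results\\fact_table.txt");

The ORDER BY matters: file_compare (or diff) is line-by-line, so both extracts need a deterministic row order.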

The response this time was that the record limit applies to the database functions as well. Interesting, then, that I can already exceed it by more than a factor of three.

So, is my expectation unreasonable here? I do have a workaround: I can return subsets by parameterizing the queries (blech; rough sketch below). Any thoughts/ideas on this?
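For the record, this is roughly what the chunked workaround looks like. The table, key column, chunk count, and paths are again made up; it just splits the extract into key ranges small enough to stay under the point where it crashes:

    # Rough sketch of the chunked workaround; table, key column, and ranges are invented.
    static i, rc, rec_count, query;
    static chunk_size = 10000;

    db_connect("dm_session", "DSN=DataMart;UID=user;PWD=pwd");

    for (i = 0; i < 6; i++)  # 6 x 10,000 keeps each recordset well below 60,000
    {
        query = "SELECT * FROM fact_table WHERE key_col >= " & (i * chunk_size)
              & " AND key_col < " & ((i + 1) * chunk_size) & " ORDER BY key_col";
        rc = db_execute_query("dm_session", query, rec_count);
        if (rc != E_OK)
            break;

        # One ASCII file per chunk; concatenate or file_compare them piecewise afterward.
        db_write_records("dm_session", "c:\\results\\fact_table_" & i & ".txt", TRUE, NO_LIMIT);
    }
    db_disconnect("dm_session");

It works, but it means maintaining key-range logic per table, which is exactly the kind of busywork I was hoping the db_* functions would save me from.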