Just wondering what strategies people here use to keep performance test scripts updated in an evolving application, especially when the perf test team is a separate team from the dev/test teams.

We follow Agile practices in our project, which means that between builds there are often changes in the nature of the HTTP requests (new requests, different POST parameters in existing requests, etc.). As a result, we spend a lot of time analysing and updating our scripts for each build we test. We are thinking of automating the analysis part (to tell us whether anything has changed) by driving a workflow with a Selenium script, recording the requests through Fiddler or the JMeter proxy, and then comparing them against the requests captured for the previous build.
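For what it's worth, here is a rough sketch of the comparison step we have in mind, assuming the recorded traffic from each build is exported as a HAR file (Fiddler and browsers can export HAR; the file names are just placeholders). It reduces each request to its method, URL path, and POST parameter names, then diffs the two builds:

    import json
    from urllib.parse import urlparse

    def load_requests(har_path):
        """Reduce each recorded request to (method, path, sorted POST param names)."""
        with open(har_path, "r", encoding="utf-8") as f:
            har = json.load(f)
        signatures = set()
        for entry in har["log"]["entries"]:
            req = entry["request"]
            path = urlparse(req["url"]).path
            post_params = tuple(sorted(
                p["name"] for p in req.get("postData", {}).get("params", [])
            ))
            signatures.add((req["method"], path, post_params))
        return signatures

    def diff_builds(old_har, new_har):
        """Print requests that appear, disappear, or change parameters between builds."""
        old, new = load_requests(old_har), load_requests(new_har)
        for sig in sorted(new - old):
            print("NEW/CHANGED:", sig)
        for sig in sorted(old - new):
            print("MISSING/CHANGED:", sig)

    if __name__ == "__main__":
        # Hypothetical recordings from two consecutive builds
        diff_builds("build_41.har", "build_42.har")

A request whose POST parameters change shows up as both "new" and "missing", which is enough to flag the scripts that need updating.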

Please note that I'm NOT talking about new features/functionality introduced in a build, but about changes to requests in existing functionality.

Has anyone used alternative approaches to solve this problem?