I am a test lead, responsible for testing the architecture on which the application rides. The architecture can be broken into several components; by components, I mean sets of reusable code. There are components for DB handling, security, audits, exceptions, and so on. These components expose a set of public APIs that the application developers call. The requirement now is to do performance testing for these components. I am not sure about the scope of performance testing at the component level, nor about which parameters these components' performance should be verified against. I also need a list of tools for the job. Please note that all these components use Microsoft technology and are developed in C# .NET. Any inputs would be of great help to me.
In addition to what Sridhar proposes, you may need to write a shell (driver) application that calls the APIs and presents a normal interface to the test tool. That should not distort the results much; a greater distortion arises if you run the testing tool on the same platform as the code under test. When you performance-test a small segment of code, you may also measure performance out of proportion to what you would see in a normal context: the total processing performed may be comparatively limited, so physical I/O can represent a much larger share of the resources used and thus yield different statistics than when the same API is buried inside a real user application.

As for deciding what to measure, focus on the business-process contribution of each API, what it does to justify its existence, and then devise measures to judge its performance in that context. This will differ across APIs that provide different services. For example, a system shut-down API might have to close down a laptop in 30 seconds (so that a commuter can shut down, put away the laptop, and exit before the train doors shut).
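A driver of this kind can be very small. Here is a minimal sketch in C# of what I mean; the component class AuditComponent and its WriteEntry method are hypothetical stand-ins for whatever public APIs your components actually expose, so substitute your own. The driver just calls the API in a tight loop under a Stopwatch and reports throughput; a load tool can then launch many instances or threads of it.

```csharp
using System;
using System.Diagnostics;

// Stub standing in for one of your real components; replace with the
// actual component assembly reference and API call.
static class AuditComponent
{
    public static void WriteEntry(string category, int id)
    {
        // Real component work would happen here.
    }
}

class ApiDriver
{
    static void Main()
    {
        const int iterations = 10000;
        var timer = Stopwatch.StartNew();

        for (int i = 0; i < iterations; i++)
        {
            // Call the component API exactly as an application developer would.
            AuditComponent.WriteEntry("perf-test", i);
        }

        timer.Stop();
        double msPerCall = (double)timer.ElapsedMilliseconds / iterations;
        Console.WriteLine(
            "{0} calls in {1} ms ({2:F4} ms/call)",
            iterations, timer.ElapsedMilliseconds, msPerCall);
    }
}
```

Keep the driver itself as thin as possible, so that what you measure is dominated by the component and not by the harness, and run it on a machine separate from the one hosting the measurement tooling for the reason given above.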