I have been asked to determine whether the app we are testing has a memory leak. I started collecting private byte usage via perfmon. When I look at this data, memory usage sometimes goes up and sometimes goes down, and I can't see a real pattern in the private byte usage. The starting and ending usage is different on each run. When I asked the development team about this, they think it's OK.
- Since a simple day-to-day comparison of the raw data doesn't seem to be working, is there any kind of statistical analysis that can be done on the raw data to show a difference? If so, what should be used (the slope of the memory growth)?
- Since I don't run out of memory, should I just run my tests longer?
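To make the slope idea concrete: one simple statistical test is a least-squares line fit over the sampled private-byte counts. A slope that stays clearly positive across several runs suggests real growth; a slope near zero suggests the ups and downs are just noise. This is a minimal sketch, not tied to any perfmon export format, and the sample values below are made up for illustration:

```python
def memory_growth_slope(samples):
    """Least-squares slope (bytes per sample) of a memory-usage series."""
    n = len(samples)
    mean_x = (n - 1) / 2
    mean_y = sum(samples) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(samples))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

# Hypothetical private-byte readings, one per minute, with +/-200 KB jitter:
leaky  = [100_000_000 + 50_000 * i + (-1) ** i * 200_000 for i in range(60)]
stable = [100_000_000 + (-1) ** i * 200_000 for i in range(60)]

print(memory_growth_slope(leaky))   # roughly 50,000 bytes per sample
print(memory_growth_slope(stable))  # near zero despite the jitter
```

The jitter makes the raw numbers bounce around just like the perfmon data described above, but the fitted slope still separates the leaking series from the stable one. Multiply the slope by your sampling rate to get a bytes-per-hour growth estimate you can compare across runs.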
In my experience and opinion, if you have the time to run a test for a long period, it is worth the effort, especially if you can simulate daily load expectations. One reason is the cost of finding and fixing a leak compared to the cost or time lost doing a recycle to recover the memory. On a lot of projects I test for, even though they may be 24/7 operations, there are exceptions for system recycles or reboots. If your system can run for 5-6 days without running completely out of memory, it might be cheaper to monitor and recycle the system than to spend a lot of dollars trying to find the cause of the leak.
But in the end, doesn't it all come down to BEER? Beer is the ultimate answer to all questions in the universe, so yes, the answer to your question is BEER.