90th or 95th percentile seems to be a good choice. It depends on your utilization targets, in terms of what makes sense performance-wise for the app(s) relative to any system specs or service-level agreements.
1) Sometimes it might pay to sample at a faster rate.
2) Must batch processing be considered?
3) The CPU's purpose and its role in the architecture. If it is a DB server that is nearly idle most of the time while other servers in front of it are struggling, then tuning and/or other action needs to occur.
4) It makes sense to double-check that the readings you seek are correct, especially when dealing with logical CPU/memory partitions, as in LPARs or VMs.
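To make the percentile idea concrete, here's a minimal sketch of computing 90th/95th percentile CPU utilization from a set of samples, using the nearest-rank method. The sample values are made up for illustration:

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile: the ceil(pct/100 * N)-th smallest value."""
    ordered = sorted(samples)
    rank = math.ceil(pct / 100 * len(ordered))  # 1-indexed rank
    return ordered[rank - 1]

# Hypothetical CPU utilization samples (percent)
cpu_samples = [12, 15, 40, 85, 90, 88, 30, 22, 95, 70]
print(percentile(cpu_samples, 90))  # 90
print(percentile(cpu_samples, 95))  # 95
```

Note the gap between the 90th percentile and the mean here; a plain average would hide the busy periods.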
When I am looking at a load test, I will isolate the samples for the period under maximum load. I can then take an average at that point, plus maybe throw a graph out on that for people to nod wisely over.
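Isolating the max-load window and averaging over it might look like this sketch (the sample data and window boundaries are hypothetical):

```python
# Each sample is (timestamp_sec, cpu_pct); values are made up.
samples = [(0, 20), (60, 35), (120, 80), (180, 85), (240, 90), (300, 40)]

# Hypothetical window during which the test was at maximum load
load_start, load_end = 120, 240

# Keep only the samples inside the window, then average them
peak_window = [cpu for t, cpu in samples if load_start <= t <= load_end]
avg = sum(peak_window) / len(peak_window)
print(avg)  # 85.0
```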
Not to start a semantics discussion, but what does "peak" mean? The highest recorded value? The highest average over any five consecutive samples? Jake's got good stuff there; I guess the context is what matters.
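The two readings of "peak" can give quite different numbers. A quick sketch on made-up samples, comparing the highest single value with the highest five-sample rolling average:

```python
# Hypothetical CPU utilization samples (percent)
cpu = [30, 45, 95, 60, 55, 50, 48, 52, 70, 65]

# Definition (a): highest recorded value
peak_single = max(cpu)

# Definition (b): highest average over any five consecutive samples
peak_rolling5 = max(sum(cpu[i:i + 5]) / 5 for i in range(len(cpu) - 4))

print(peak_single)    # 95
print(peak_rolling5)  # 61.6
```

One transient spike dominates definition (a) but barely moves definition (b), which is exactly why the context matters.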