It's possible to get precision to the clock tick level. That doesn't necessarily give you any more accuracy on a Windows system, however.
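For concreteness, here's a minimal sketch of what "clock tick" precision looks like, assuming a Windows/C++ setting (the question doesn't specify a language) and using `QueryPerformanceCounter`. It demonstrates the precision side only; scheduler jitter and power management still limit the accuracy you actually get:

```cpp
// Sketch: reading Windows' high-resolution counter.
// Tick-level *precision*, but not necessarily tick-level *accuracy*.
#include <windows.h>
#include <cstdio>

int main() {
    LARGE_INTEGER freq, start, stop;
    QueryPerformanceFrequency(&freq);   // counter ticks per second
    QueryPerformanceCounter(&start);
    Sleep(100);                         // stand-in for the work being timed
    QueryPerformanceCounter(&stop);
    double elapsed =
        double(stop.QuadPart - start.QuadPart) / double(freq.QuadPart);
    printf("elapsed: %.6f s\n", elapsed);
    return 0;
}
```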
I once did some performance testing on client machines in a client/server system, trying to capture the user's "perceived performance" as experienced through the UI.
The test harness I used at the time delivered precision only at the 1-second level.
I worked at it until I found a much more precise method (in clock ticks). But it turned out that the tasks I was timing would normally fluctuate by ±2 seconds. So I scrapped the new method and went back to 1-second timing. It worked just fine, and was very predictable, as long as I used at least hundreds (and often thousands) of iterations in my tests.
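A rough sketch of that iteration approach, again in C++ under assumed details (`task_under_test` is a hypothetical stand-in for whatever you're measuring): even with a clock that only resolves whole seconds, dividing the total across thousands of runs gives a stable per-iteration figure.

```cpp
// Sketch: averaging over many iterations with a deliberately coarse clock.
// time() resolves ~1 second, but the per-iteration average is still stable.
#include <ctime>
#include <cstdio>

void task_under_test() { /* hypothetical stand-in for the timed operation */ }

int main() {
    const int iterations = 5000;        // hundreds to thousands, per the text
    time_t start = time(nullptr);       // 1-second resolution
    for (int i = 0; i < iterations; ++i)
        task_under_test();
    time_t total = time(nullptr) - start;
    printf("avg per iteration: %.4f s\n", double(total) / iterations);
    return 0;
}
```

The design point is that the measurement error (up to a second or two) is fixed, while the measured total grows with the iteration count, so the relative error shrinks as you add runs.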