Well, here's the definition: Virtual User or Virtual Tester
A user simulated by an automated testing tool. A virtual user is a program that acts just like a real user would, conducting transactions and making requests against a Web application. Performance and load tests are run with a number of virtual users, and during a test a considerable number of them can be run on one computer, in this context called a driver machine.
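To make that concrete, here is a minimal sketch of what a driver machine runs: each virtual user is just a thread issuing requests in a loop. The target URL, user count, and transaction count are placeholders I made up for illustration, not anything from a real tool.

```python
import threading
import urllib.request

TARGET = "http://localhost:8080/app"  # hypothetical application under test
NUM_VIRTUAL_USERS = 20                # virtual users run from one driver machine

def virtual_user(user_id: int, transactions: int) -> None:
    """One virtual user: a program acting like a real user making requests."""
    for _ in range(transactions):
        with urllib.request.urlopen(f"{TARGET}?user={user_id}") as resp:
            resp.read()  # consume the response, as a real client would

threads = [
    threading.Thread(target=virtual_user, args=(uid, 50))
    for uid in range(NUM_VIRTUAL_USERS)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```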
You can find more of these definitions in the Performance Testing knowledge base at http://stens.ca/kb
Coming back to your original question: your calculation seems reasonable, if the performance universe were indeed that simple.
If the system had no additional overhead related to each of the simulated users, then your approach would be valid. However, in most systems each individual user incurs additional overhead: think of session management, buffers, connections, and so on.
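In other words, the aggregate transaction rate can match while the per-user footprint does not. A rough back-of-the-envelope sketch, with an invented per-session size purely for illustration:

```python
SESSION_BYTES = 32 * 1024  # assumed per-user session state, made-up number

def footprint(users: int, tx_per_user_per_sec: float) -> tuple[float, int]:
    """Return (total transactions/sec, total session memory in bytes)."""
    return users * tx_per_user_per_sec, users * SESSION_BYTES

# 20 users cranked up to 5 tx/s each vs. 100 users at 1 tx/s each:
print(footprint(20, 5.0))   # (100.0, 655360)  -> same throughput...
print(footprint(100, 1.0))  # (100.0, 3276800) -> ...but 5x the per-user state
```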
To give you an example from my own practice: a few years ago we were simulating activity against an application that needed to support 100 users, and we figured we could simulate that activity with just twenty users by simply cranking up the number of transactions we were sending. To our surprise, even though our tests had shown the twenty users generating the same number of transactions that 100 users would, the system started to crash when we actually tried to simulate 100 users instead of twenty.
As it turned out, the system kept information for every connected user. When we simulated traffic with twenty users at the increased transaction rate, we had no problems. But with a hundred users, we ran out of the space that was reserved for keeping track of each individual user, and this caused problems and even crashes.
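Here is a toy model of that failure mode, assuming (hypothetically) the system reserved a fixed-size table for connected users; the capacity of 64 is invented, not the real system's limit:

```python
class SessionTable:
    """Fixed-size table tracking every connected user (toy model)."""
    def __init__(self, capacity: int = 64):  # invented limit for illustration
        self.capacity = capacity
        self.sessions: dict[int, dict] = {}

    def connect(self, user_id: int) -> None:
        if len(self.sessions) >= self.capacity:
            raise MemoryError("reserved per-user space exhausted")
        self.sessions[user_id] = {"state": "connected"}

table = SessionTable()
for uid in range(20):    # twenty users, at any transaction rate: fits easily
    table.connect(uid)

table = SessionTable()
for uid in range(100):   # a hundred users: blows past the reserved space
    table.connect(uid)   # raises MemoryError once the table is full
```

Note that the transaction rate never appears in the model: the crash depends only on how many users are connected at once, which is exactly why the twenty-user test couldn't catch it.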