We use XP (eXtreme Programming) here, and we tie the development requirements (stories) directly to the testing requirements.
If you don't know the desired output, you can't test. If you can't test the requirement, then it's invalid and needs to be changed. Simple enough.
As far as mapping test cases to test requirements... that sounds like an awful amount of work! We usually run light on documentation and heavy on test automation, which in turn serves as our test cases. We use the output from the automated scripts as validation that the tests were run and that the application is behaving as the original design requires.
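A minimal sketch of what that looks like in practice (all names here are hypothetical, not from the original post): the story states the desired output, the automated test asserts it, and the test run's output is the validation record.

```python
# Hypothetical story-derived check: the automated test IS the test case.

def apply_discount(price, percent):
    """Story: 'A customer receives the given percentage off the price.'"""
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    # The desired output is stated in the story itself; if it couldn't
    # be stated, the requirement would be untestable and need rework.
    assert apply_discount(100.0, 10) == 90.0

test_apply_discount()
print("story verified")
```

The script's printed output doubles as the run record, which is the "light on documentation" part.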
I can't believe that you meant to say that you do not map requirements to test cases. That is the basis for a traceability matrix. Do you really not do that? If not, how do you know if you tested all your requirements?
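For readers unfamiliar with the term, a traceability matrix is just a mapping from requirement IDs to the tests that cover them, so gaps are visible. A lightweight sketch (requirement IDs and test names here are invented for illustration):

```python
# Hypothetical traceability matrix: tag each test with the requirement
# IDs it covers, then report any requirement no test claims to cover.

requirements = {
    "REQ-1": "Discount is applied to the price",
    "REQ-2": "Sales tax is computed on the discounted price",
}

# Coverage claims made by the automated tests (assumed names).
test_coverage = {
    "test_apply_discount": ["REQ-1"],
}

covered = {req for reqs in test_coverage.values() for req in reqs}
untested = sorted(set(requirements) - covered)
print("untested requirements:", untested)
```

Running this against the example data reports REQ-2 as untested, which is exactly the question being asked: without some mapping, how do you know every requirement was exercised?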