Not really. Each use case has its own depth, so you can't say that every use case will equate to, say, 3 test cases and will therefore take X amount of time to complete.
You might be able to determine averages, and if you had a significant amount of historical data you might be able to make an assumption based on the number of use cases, but it would be difficult to trust without some seriously concrete data behind it.
I agree with Brent that it is indeed difficult to estimate the time to derive test cases. It depends on how much knowledge you have of the area, as well as the depth of the use case. If you know the area the use case affects, you know the functionalities you have to check, and from there you can get an idea of how long it will take you to get through them. However, there are some things you can do to arrive at an estimate.
You can take an hour or so to brainstorm, gather knowledge, and derive some scenarios for the use case from a logical perspective.
Once you have the scenarios covered, you can work out the different physical test cases you need for those scenarios.
This way you cover each and every scenario and can come up with an estimated time.
I work this way and it helps me get an idea; see if it helps you. Good luck!
Not that it will help right away, but something I have found useful here is to record how long things take, project after project, building up a set of metrics to base these guesstimates on. That way it is still an estimate, but one grounded in what you have seen in real life. The stats could even just be recorded in an Excel spreadsheet.
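To make the spreadsheet idea concrete, here is a minimal sketch of the arithmetic, assuming you have recorded per-project totals of use cases, derived test cases, and hours spent. The numbers below are made up for illustration; your own recorded metrics would go in their place.

```python
# Hypothetical historical data, one tuple per past project:
# (use cases, test cases derived, hours spent). Illustrative numbers only.
history = [
    (12, 40, 30.0),
    (8, 22, 18.5),
    (20, 71, 55.0),
]

def estimate(new_use_cases, history):
    """Project a test-case count and effort for a new project
    from per-use-case averages across past projects."""
    total_uc = sum(uc for uc, _, _ in history)
    total_tc = sum(tc for _, tc, _ in history)
    total_hr = sum(hr for _, _, hr in history)
    tests_per_uc = total_tc / total_uc   # avg test cases per use case
    hours_per_tc = total_hr / total_tc   # avg hours per test case
    est_tests = new_use_cases * tests_per_uc
    return est_tests, est_tests * hours_per_tc

tests, hours = estimate(15, history)
print(f"~{tests:.0f} test cases, ~{hours:.0f} hours")
# prints: ~50 test cases, ~39 hours
```

The more projects you record, the more the averages smooth out the per-use-case depth differences mentioned above, though a simple mean like this will still be thrown off by one unusually deep or shallow project.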
I have heard there are tools out there that contain metrics from a large number of projects of varying complexity, which you can use to help produce fairly accurate estimates, so that might be an option. Sadly I don't know where you can find one off hand, although someone on here might.