I agree with BJ on this. Even with equivalence class partitioning and boundary testing, you do not have 100% coverage. It may complete the data-driven testing that you have defined, but it is certainly not 100% coverage.
I would argue that it isn't even 100% coverage for data-driven testing. Unfortunately, any value entered into a given input field may generate an error, so unless you test every single input that is possible for that field, there is still a risk of an error slipping through.
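Just to make that concrete, here is a rough Python sketch. The validate_age function, its valid range, and the defect on one specific value are all made up for illustration; the point is that the usual partition and boundary values can all pass while a bug inside a "passing" class survives:

```python
# Hypothetical field: accepts ages 18..65 inclusive, with a hidden defect
# on one specific in-range value.
def validate_age(age: int) -> bool:
    if age == 42:          # hidden defect: one value inside the valid class fails
        return False
    return 18 <= age <= 65

# Typical equivalence-class and boundary values for this field:
# below lower bound, lower bound, class representative, upper bound, above upper bound.
test_values = [17, 18, 40, 65, 66]
expected = [False, True, True, True, False]

for value, exp in zip(test_values, expected):
    assert validate_age(value) == exp, f"failed at {value}"
print("All partition/boundary tests pass, yet validate_age(42) is still broken.")
```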
This is where using exploratory testing techniques in conjunction with your equivalence partitions and boundary testing can help surface some unexpected results as well. And if you are able to automate that input field, you should be able to run many more iterations through it and get closer to full coverage.
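Something along these lines is what I mean by automating the field. This is only a sketch: submit_field is a stand-in for however you actually drive the application's input field, and in a real harness the expected result would come from the spec, not from the code under test:

```python
import random

def submit_field(value: int) -> bool:
    """Stand-in for driving the real input field; returns whether input is accepted."""
    return 18 <= value <= 65  # placeholder for the application's actual behavior

random.seed(0)
failures = []
for _ in range(10_000):                    # far more iterations than manual testing allows
    value = random.randint(-1_000, 1_000)  # sample broadly across the input domain
    expected = 18 <= value <= 65           # oracle: what the spec says should happen
    if submit_field(value) != expected:
        failures.append(value)

print(f"{len(failures)} unexpected results out of 10,000 automated inputs")
```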
Unfortunately, there will almost always be an effectively infinite number of inputs that can be entered into a field, especially a text field. So just make sure that you have documented the testing you are performing; if that amount of testing is acceptable to your management, then that should be good enough.
9 out of 10 people I prove wrong agree that I'm right. The other person is my wife.
I didn't think we were referring to coverage at all; we were discussing the technique of equivalence class partitioning. Equivalence class partitioning is a functional testing technique designed to evaluate the functional capabilities or attributes of a specific input or output parameter through a detailed analysis of the variable data for that parameter.
So the technique may contribute to functional coverage, it may affect code coverage (or not), and it may affect requirements coverage (or not).
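For anyone unfamiliar with what that analysis looks like in practice, here is a rough sketch: partition one input parameter's domain into classes, then take the boundaries and one representative from each class. The class names and ranges here are purely illustrative, not from anyone's actual spec:

```python
# Partition a single input parameter's domain into equivalence classes
# (illustrative ranges: valid ages are 18..65 inclusive).
partitions = {
    "below_valid": range(-100, 18),  # invalid class: too small
    "valid":       range(18, 66),    # valid class: 18..65 inclusive
    "above_valid": range(66, 200),   # invalid class: too large
}

# For each class, test its boundaries plus one interior representative.
test_cases = []
for name, cls in partitions.items():
    lower, upper = cls[0], cls[-1]
    representative = (lower + upper) // 2
    test_cases.extend([(name, lower), (name, representative), (name, upper)])

for name, value in test_cases:
    print(f"{name}: test input {value}")
```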
When a specific technique is taken out of context and its usage bastardized by attempting to apply it to other types of testing, the value of that technique diminishes rapidly.