There are many different flavors of this, but the main idea is to query the database and use the resulting data as expected results. I've never been convinced of the validity of this technique, since the correctness of the data is dependent on the code that processes it. Mostly I'm worried about the following scenario:
1) You capture a slice of data and use it for expected results.
2) In a subsequent build, the schema and data storage algorithm remains the same, but there is a change in the data processing code.
3) The change results in a bug, so the data isn't displayed correctly in the GUI.
4) Your automated test misses the defect, because it's looking at the database and not the actual output of the application.
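The scenario above can be sketched in a few lines. This is a hypothetical illustration (the function names and values are made up, not from any real system): a test whose expected result is captured from the database can never catch a bug in the processing code, because both sides of the comparison come from the same place.

```python
def fetch_from_db():
    # Stand-in for a real database query; returns the stored value.
    return 19.99

def process_for_display(value):
    # The data processing code between storage and GUI. Suppose a
    # later change introduces a rounding bug here.
    return round(value)  # bug: drops the cents before display

def test_against_database():
    # Flawed test: the expected result is captured from the same
    # database, so the processing code is never exercised.
    expected = fetch_from_db()
    actual = fetch_from_db()
    return expected == actual  # passes, defect missed

def test_against_output():
    # Test against what the application would actually display.
    expected = 19.99
    actual = process_for_display(fetch_from_db())
    return expected == actual  # fails, defect caught

print(test_against_database())  # True  -> bug slips through
print(test_against_output())    # False -> bug is caught
```

The database-driven test stays green through the regression, which is exactly the blind spot described in steps 2-4.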
Please share your thoughts. I'm trying to steer my team away from this practice, but have some influential counterparts who don't see it my way.
I see your point - the GUI might not be refreshed. But how can you ever know whether the stored value is correct without knowing the details of the functions that process the data? If you know that code hasn't changed, then OK - but that puts this more in the arena of unit / white-box testing.