What we are looking at is a disconnect between the SQL scripts and the model. The SQL scripts are usually tested on creation, but then fail to keep up with changes to the model. They are also run late in the process, so their error feedback arrives late, where it is more expensive to fix.
At 42 we have worked on a JPA-aware solution that reads test data from an Excel file and stores the values in the database. The net effect is that the test data becomes a first-class citizen in the application domain.
This setup has now been tried on one of our projects and proven by doing a round trip from Excel to the database and back again. The approach works very well for us, and it has given us a number of tangible advantages:
- the structure and syntax of all the test data files are checked against the model by dry-running the scripts during the unit tests
- the scripts are easier for non-developers to maintain, though some technical affinity is still required
- complex test scenarios are easier to set up for unit tests
- the data is validated by the model before persistence
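The core idea can be sketched as mapping each spreadsheet row onto a JPA entity before persisting it, so that the model itself rejects unknown columns or bad values early. The sketch below is illustrative, not the actual 42 implementation: the `Customer` entity and the `mapRow` helper are hypothetical, and reflection stands in for whatever metadata-driven mapping the real solution uses. A row that names a column with no matching entity field fails immediately, which is exactly the early feedback the SQL scripts lacked.

```java
import java.lang.reflect.Field;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical entity standing in for a JPA-mapped class.
class Customer {
    Long id;
    String name;

    @Override
    public String toString() {
        return "Customer{id=" + id + ", name=" + name + "}";
    }
}

public class RowMapper {

    // Map one spreadsheet row (column name -> cell value) onto an entity.
    // Unknown columns throw NoSuchFieldException, surfacing schema drift
    // at unit-test time instead of at deployment.
    static <T> T mapRow(Class<T> type, Map<String, Object> row) throws Exception {
        T entity = type.getDeclaredConstructor().newInstance();
        for (Map.Entry<String, Object> cell : row.entrySet()) {
            Field field = type.getDeclaredField(cell.getKey());
            field.setAccessible(true);
            field.set(entity, cell.getValue());
        }
        return entity;
    }

    public static void main(String[] args) throws Exception {
        // One row as it might be read from the Excel sheet.
        Map<String, Object> row = new LinkedHashMap<>();
        row.put("id", 1L);
        row.put("name", "Alice");
        System.out.println(mapRow(Customer.class, row));
    }
}
```

In the real setup the mapped entity would then be handed to the `EntityManager`, so JPA validation and the database constraints both apply before the test runs.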