Keeping your integration tests isolated from each other

In this blog post I will describe the isolation difficulties that occur with integration tests, the problems these can lead to, and how you can address them in an in-memory database environment.

Integration tests serve the meaningful goal of verifying that functional, performance or reliability requirements of major items are met. Contrary to unit tests, in integration tests your units of code are tested collectively rather than independently. This tells us whether the combination of units achieves the (functional) goal the developer had in mind.

In the context of a web application such a test may consist of using an automated web browser such as Selenium to simulate user actions that trigger activity on the back-end of the application. This may in turn produce a response in the front-end of the application, which can then be asserted on. By doing this you can test a functional use case vertically through the system.
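As a sketch of such a vertical test (assuming Selenium WebDriver; the URL, element ids and credentials are made up for illustration):

```java
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

import static org.junit.Assert.assertTrue;

public class LoginIT {

    @Test
    public void loginShowsWelcomeMessage() {
        // Hypothetical URL and element ids; adjust to your own application.
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("http://localhost:8080/webapp/login");
            driver.findElement(By.id("username")).sendKeys("testuser");
            driver.findElement(By.id("password")).sendKeys("secret");
            driver.findElement(By.id("submit")).click();
            // The click travels through the back-end; we assert on the
            // resulting front-end state, testing the use case vertically.
            assertTrue(driver.findElement(By.id("welcome")).getText().contains("testuser"));
        } finally {
            driver.quit();
        }
    }
}
```
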

In a unit test it is important that there is only a single reason for it to fail. This allows us to pinpoint the cause of the failure, making it easier to fix the underlying problem. Mocking out dependencies with the help of a mocking framework can help you achieve this goal. What is equally important is that one unit test does not cause another unit test to fail, since this makes it more difficult to locate the error. It is for this reason that unit tests that involve database activity are usually rolled back after execution. This keeps your unit tests isolated from each other.
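In a Spring test, for example, this rollback behaviour comes for free when the test runs inside a transaction. A minimal sketch, assuming a hypothetical UserRepository bean and User entity:

```java
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.transaction.annotation.Transactional;

import static org.junit.Assert.assertNotNull;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = { "classpath:application-context.xml" })
@Transactional // each test runs in a transaction that is rolled back afterwards
public class UserRepositoryTest {

    @Autowired
    private UserRepository userRepository; // hypothetical repository bean

    @Test
    public void savedUserGetsAnId() {
        User user = userRepository.save(new User("john"));
        assertNotNull(user.getId());
        // No manual cleanup needed: the surrounding transaction is rolled
        // back, so the next test starts from the same database state.
    }
}
```
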

Ideally you would want to do something similar with integration tests: after a test finishes, the changes it triggered should be reverted. This way the next integration test can start from the exact same pre-test conditions as the previous one. Unfortunately you cannot rely on transactions to achieve this goal, because the system may trigger several vertical processes, leading to multiple database transactions that are committed independently. This means you would have to clean up the mess yourself, which may not be trivial either. Even more difficult are situations where multiple Java Virtual Machines (JVMs) are involved, as will be explained further in this post.

Problems caused by non-isolated integration tests

Recently a problem with non-isolated integration tests was made painfully clear to one of our development teams. The product we were working on was a web application containing several Selenium integration tests. These tests typically consisted of filling out and submitting forms, searching data and verifying that the expected results were shown in the user interface. These operations traversed all the way from the user interface, through the Java application, to the database. Someone in the development team had just committed a change to the application after verifying that the build ran correctly using Maven. I was almost ready to do the same, pulled in the latest changes and ran the tests. The build failed on an integration test I had not touched, but since these integration tests traverse vertically through the system, it might very well have been the case that some piece of code I edited triggered this result. However, I could not find a logical explanation for the failing test in my changes and thus decided to check our Bamboo build server. It showed the exact same failing integration test, whilst on my colleague's machine the build ran just fine.

We started looking for possible explanations and the main difference we found was that both my own machine and the build server ran on Linux distributions, whilst my colleague used Windows. This in itself did not explain why the tests were failing, but it did give us a direction to search in. Eventually we noticed in the build logs that the order in which the Maven Failsafe plugin executed the integration tests differed between Windows and Linux: the 'runOrder' configuration argument was by default set to 'filesystem', which seemed to explain everything. Indeed, after setting it to 'alphabetical' the tests ran properly on my machine as well and I was able to commit my code. The fact that the integration tests didn't conflict with each other when run alphabetically was a lucky coincidence, however, and not a solution to the root cause of the problem. Just as with the ACID properties of database transactions, the tests should have been isolated from each other.
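For reference, pinning the run order looks like this in the Failsafe plugin configuration (note that this only makes the order deterministic; it masks the symptom rather than fixing the root cause):

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-failsafe-plugin</artifactId>
    <configuration>
        <runOrder>alphabetical</runOrder>
    </configuration>
</plugin>
```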

This similarity to the ACID properties of database transactions was pointed out earlier by another blogger, Todd Kaufman. As mentioned before, however, this isolation is not as trivial to uphold in integration tests as it is in unit tests. So, the search for a fitting solution began.

Possible solution

In this project we used one of the more familiar in-memory databases, HyperSQL Database (or HSQL DB). Such a database helps you to quickly get an application running on your local machine and gives you a clean database every time you start it, as the data inside it is lost as soon as the JVM shuts down. It offers basic database functionality such as CRUD operations as well as transactionality. The latter is useful for rolling back the changes made in your unit tests, but for integration tests things become a bit more difficult. Since these tests cover vertical slices of the entire system, the entire system must be operational. We solved this by starting the application in an Apache Tomcat container and running the integration tests against the locally started server. As a consequence, two JVMs are operational simultaneously: one that runs the web application using Tomcat and another that runs the integration tests. This makes it impossible to clean up the database after the integration tests are done, as the JVM that runs the tests has no access to the in-memory database of the other. Similarly, the web application's JVM has no idea when one or all of the integration tests have finished.

Fortunately there is a solution. HSQL DB, as well as other in-memory databases such as H2, offers the possibility to expose your in-memory database as a server on a certain address and port. This allows other JVMs to connect to it, opening up the possibility of reverting the database state after an integration test has completed. To configure this in HSQL DB you can use the start method of the org.hsqldb.server.Server class. You can specify all sorts of parameters to fine-tune the database server to your desires. In a Spring configuration you could start this up in the following way:
<bean id="hsqldbServer" class="org.hsqldb.server.Server" init-method="start">
    <property name="properties" ref="hsqldbServerProperties"/>
    <property name="address" value="localhost"/>
</bean>

<bean id="hsqldbServerProperties" class="org.hsqldb.persist.HsqlProperties">
    <constructor-arg>
        <props>
            <prop key="server.database.0">mem:webapp</prop>
            <prop key="server.dbname.0">webapp-database-server</prop>
        </props>
    </constructor-arg>
</bean>

This exposes your existing in-memory database called webapp on the following JDBC URL: "jdbc:hsqldb:hsql://localhost:9001/webapp-database-server", allowing other processes to connect to the database. This includes SQL clients such as Squirrel SQL, allowing you to browse through your in-memory database as a cool added bonus. We can now create a new bean profile for integration tests which accesses this exposed server, with Spring configuration similar to this:
<beans profile="integration-tests-datasource">
    <bean id="dataSource" class="org.apache.tomcat.jdbc.pool.DataSource" destroy-method="close">
        <property name="driverClassName" value="org.hsqldb.jdbcDriver" />
        <property name="url" value="jdbc:hsqldb:hsql://localhost:9001/webapp-database-server" />
        <property name="username" value="SA" />
        <property name="password" value="" />
    </bean>
    <bean id="hibernateDialect" class="java.lang.String">
        <constructor-arg value="org.jarbframework.utils.orm.hibernate.ImprovedHsqlDialect" />
    </bean>
</beans>
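For reference, the same server can also be started and reached programmatically without Spring. A sketch using the HSQL DB Server API directly (the database and server names match the configuration above):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import org.hsqldb.server.Server;

public class HsqldbServerDemo {

    public static void main(String[] args) throws Exception {
        // Expose the in-memory database "webapp" on localhost:9001.
        Server server = new Server();
        server.setAddress("localhost");
        server.setDatabaseName(0, "webapp-database-server");
        server.setDatabasePath(0, "mem:webapp");
        server.start();

        // Any JVM (or SQL client) can now connect through the server URL.
        try (Connection connection = DriverManager.getConnection(
                "jdbc:hsqldb:hsql://localhost:9001/webapp-database-server", "SA", "");
             Statement statement = connection.createStatement();
             ResultSet resultSet = statement.executeQuery("CALL CURRENT_DATE")) {
            resultSet.next();
            System.out.println("Connected, server date: " + resultSet.getString(1));
        }

        server.stop();
    }
}
```
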

The next step is making sure the integration tests are run with the necessary context files and profiles activated, for example by creating a super class for all integration tests as follows:
@RunWith(SpringJUnit4ClassRunner.class)
@ActiveProfiles(profiles = { "test", "integration-tests-datasource" })
@ContextConfiguration(locations = { "classpath:application-context.xml" })
public abstract class IntegrationTestCase {
    // shared fields and set-up for all integration tests
}

Because we now have a super class for every integration test, we can easily add code to wipe the database and repopulate it with the necessary data after every test, using HSQL DB's "TRUNCATE SCHEMA" query, available from version 2.2.6 onward. Be warned that this deletes every record in the database schema, so be careful with it. As an extra safety measure I added an assert statement which checks that the datasource really belongs to an in-memory database.
    @After
    public void cleanup() throws SQLException {
        Assert.isTrue(dataSource.getConnection().getMetaData().getDriverName().equals("HSQL Database Engine Driver"),
                "This @After method wipes the entire database! Do not use this on anything other than an in-memory database!");
        LOGGER.info("Deleting integration test database records");
        Statement databaseTruncationStatement = null;
        try {
            databaseTruncationStatement = dataSource.getConnection().createStatement();
            databaseTruncationStatement.executeUpdate("TRUNCATE SCHEMA public AND COMMIT");
        } finally {
            if (databaseTruncationStatement != null) {
                databaseTruncationStatement.close();
            }
        }
        LOGGER.info("Repopulating integration test database");
        // ... repopulate the database here ...
    }
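The repopulation step itself could, for instance, be implemented with Spring's ResourceDatabasePopulator. A sketch, assuming a hypothetical integration-test-data.sql script on the classpath containing the minimal data set the tests depend on:

```java
import javax.sql.DataSource;

import org.springframework.core.io.ClassPathResource;
import org.springframework.jdbc.datasource.init.DatabasePopulatorUtils;
import org.springframework.jdbc.datasource.init.ResourceDatabasePopulator;

public class IntegrationTestRepopulator {

    // Re-insert the minimal data set every integration test depends on,
    // e.g. the users needed to log in. Runs after "TRUNCATE SCHEMA".
    public static void repopulate(DataSource dataSource) {
        ResourceDatabasePopulator populator = new ResourceDatabasePopulator();
        populator.addScript(new ClassPathResource("integration-test-data.sql")); // hypothetical script
        DatabasePopulatorUtils.execute(populator, dataSource);
    }
}
```
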

The integration tests now connect to the HSQL DB server and repopulate the database after every test from a clean schema. This has several advantages:

  • Database records are not shared between tests, reducing the chance that these tests affect each other.

  • Your database populator or database population scripts can become more minimal, depending on your situation. Previously you might have had to create database records for several tests in one single populator; now you can access the database directly in the test and only save records that are relevant to that specific test. This way it becomes much clearer what data is used by which test.

  • Since you can now access the database in your integration tests, you have gained the possibility to check the database after an action has been triggered in the user interface, whereas previously you could only assert on that same user interface. It can still be of value to assert on the user interface, but now you have more control over what you want to check.
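The last point could look like this in practice. A sketch, assuming the superclass exposes the dataSource field and a hypothetical users table that a form submission writes to:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

import org.junit.Test;

import static org.junit.Assert.assertEquals;

public class CreateUserIT extends IntegrationTestCase {

    @Test
    public void submittingTheFormInsertsARecord() throws Exception {
        // 1. Drive the user interface (Selenium code omitted for brevity).
        // 2. Assert directly on the database instead of only on the UI.
        try (Connection connection = dataSource.getConnection();
             Statement statement = connection.createStatement();
             ResultSet resultSet = statement.executeQuery(
                     "SELECT COUNT(*) FROM users WHERE username = 'john'")) {
            resultSet.next();
            assertEquals(1, resultSet.getInt(1));
        }
    }
}
```
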

It also has a few disadvantages however:

  • As mentioned before, the script which deletes the entire database schema is a dangerous one. If somebody replaces the datasource bean with one that belongs to a physical database that also supports the same query, it will wipe that entire schema. I don't know of any other databases that support it, but it is better to be safe than sorry and make sure this cannot happen. Alternatively you could create @After methods in every specific test in which you manually put the database back in its previous state, but depending on the situation this can be difficult.

  • The HSQL DB server takes some time to start up, and also adds some latency to the requests coming in from the integration tests. Of course, repopulating the entire database after every test costs time as well. Before implementing such a solution, one should weigh the increase in flexibility against the loss of performance in the specific situation.

  • This setup adds some complexity to the project configuration: it contains configuration for an HSQL DB server, configuration for connecting to that same server, and programming code which puts your database back in a certain state. The database populator, however, may become a lot clearer, because it now only needs to contain data that is crucial for the integration tests, such as the users to log in with when performing a test. As an added bonus, starting up an HSQL DB server can also help you during development, as you can now easily view the database content.

Altogether this seems a viable solution to the problem, but it all depends on your specific situation. I hope to have made you aware of the most important advantages and disadvantages, so you can make an educated decision on the matter.


In this blog post I have described the problems that integration tests can face with regard to isolation: it is not trivial to keep multiple integration tests isolated from each other. When this isn't done properly and one test causes another test to fail, the cause can be really difficult to spot. The nature of these integration tests makes it impossible to rely on the methods used to restore the database in unit tests. Directly deleting database records after the integration tests have executed isn't trivial with an in-memory database either, as the integration tests can run in a different JVM, which makes it impossible for you to connect directly with the datasource. A possible solution is to expose the in-memory database via an HSQL DB or H2 server and reset the database to its initial state after each test. This also allows you to check the data in your in-memory database, but it isn't without disadvantages either. Before making such a choice you should be aware of the effects for your situation. Maybe you have struggled with the same problem in the past and found an alternative solution, in which case I would like to invite you to leave a comment.