Gotta Catch ’Em All - 100% test coverage

In this day and age, unit testing is generally accepted as a good thing. The way to check the degree of unit testing in a code base is to measure the test coverage. However, 100% test coverage is often frowned upon: it can be quite expensive, sometimes even impossible, to reach this exalted state. Is it worth it? Can it be done?

The function of the first 99% and the last 1%

The purpose of unit testing is to automatically assert that your code does what it is supposed to do. It is not too hard to get to 99% coverage with useful tests. The last 1% tends to be senseless or impossible to test. So why should you bother?

The coverage report is an instrument that helps you achieve this; it is not a goal in itself. To make the best use of this instrument, you read the report, scan the uncovered lines and make a judgment call on how best to cover them. The problem with this approach is that you have to scan the 'impossible' uncovered lines over and over again. Over time, concentration breaks down, because as humans we are not well equipped to deal with repetitive tasks that require a measure of intelligence.

Eliminating the last 1% is not about testing code at all; if you believe it is, you are wrong. The last 1% is solely about eliminating false positives, so that every uncovered line is meaningful and the coverage report remains an effective instrument for managing the quality and quantity of your automated tests.

To deal with the last 1%, you should have no qualms whatsoever about cheating your way to 100%. Remember the goal -- eliminating false positives. If you must construct a class just to clear that line, do so. If you have to exclude a line from the coverage measurement, go for it. If an error cannot ever be thrown, mock your way through it. As long as the quality of your real tests is up to par, this approach is perfectly valid.

The hard to reach cases

The following parts of the code often give us headaches:

  • Catching exceptions caused by hardware failures, like IOException

  • Closing streams (Java 7 solves this with try-with-resources)

  • File and other disk-based operations

  • Static methods that have hard-to-reach logic

  • External dependencies that are not available during the test suite run
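For instance, here is a hedged, Mockito-free sketch (the class name FirstLineReader is invented for illustration) of the kind of code the list above describes. Its catch branches only run when the disk misbehaves:

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

// Hypothetical example: the catch blocks below are the classic "last 1%".
// They only execute when an actual I/O failure occurs.
public class FirstLineReader {

    public static String readFirstLine(String path) {
        BufferedReader reader = null;
        try {
            reader = new BufferedReader(new FileReader(path));
            return reader.readLine();
        } catch (IOException e) {
            return null; // hard to reach without a failing disk or a mock
        } finally {
            if (reader != null) {
                try {
                    reader.close();
                } catch (IOException e) {
                    // close() almost never throws during a test run
                }
            }
        }
    }
}
```

Without a mock or a deliberately broken stream, the inner catch block stays uncovered on every run.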

Let's have a look at the instruments we have to crack the tough nuts.

Mockito and PowerMock - the top 6 tools

These are the methods we see used most often to get to 100%. The first four are standard Mockito; the last two belong to PowerMockito.

  • mock; gets a mock implementation of a class

  • spy; gets a mock shell around a real object, also called a partial mock

  • when; allows you to set the response of a mock when its method gets called

  • doThrow; same as when, except that this works as well for methods that return void

  • whenNew; same as when, except that this can be used on the construction of new classes

  • mockStatic; same as mock, except that this mocks the static parts of a class as well

If you want to make use of Mockito, add the following annotation before your test class definition:

import org.junit.runner.RunWith;
import org.mockito.runners.MockitoJUnitRunner;

@RunWith(MockitoJUnitRunner.class)

If you want to make use of PowerMock, add the following annotation before your test class definition:

import org.junit.runner.RunWith;
import org.powermock.modules.junit4.PowerMockRunner;

@RunWith(PowerMockRunner.class)


When you have no interest in testing any of the functionality of a class, mock is the way to go. It works in a simple way: just say which class you need mocked.
import static org.mockito.Mockito.mock;

HttpResponse response = mock(HttpResponse.class);

Also, mocks can be made with an annotation, which can be very useful.
import org.mockito.Mock;

@Mock
private HttpResponse response;

The mock can now be instructed to act in a certain way when its methods get called; see when below.
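As an aside, here is a simplified picture of what mock gives you. This is an illustration built on a JDK dynamic proxy, not Mockito's actual implementation (Mockito generates bytecode and can also mock classes, while a JDK proxy is limited to interfaces). The names TinyMock and Service are invented:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Conceptual sketch: a mock is an object that answers every call with a
// neutral default value, just like a fresh, uninstructed Mockito mock.
public class TinyMock {

    // Sample interface to mock; invented for this sketch.
    public interface Service {
        int count();
        boolean ready();
        String name();
    }

    @SuppressWarnings("unchecked")
    public static <T> T mock(Class<T> type) {
        InvocationHandler handler = new InvocationHandler() {
            public Object invoke(Object proxy, Method method, Object[] args) {
                Class<?> returnType = method.getReturnType();
                if (returnType == int.class) return 0;         // numbers default to 0
                if (returnType == long.class) return 0L;
                if (returnType == boolean.class) return false; // booleans to false
                return null;                                   // objects to null
            }
        };
        return (T) Proxy.newProxyInstance(
                type.getClassLoader(), new Class<?>[] { type }, handler);
    }
}
```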


It is possible that you are testing a class and you don't want all of it mocked, just a part of it. Enter spy.
import static org.mockito.Mockito.spy;

HttpResponse spiedResponse = spy(response);

A spied object not only allows you to instruct it, but also gives you more control over what has been called on it, i.e. the verifications:
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.never;

verify(response, never()).getStatusLine();


The trick is to get your mocks to do something, so that the calling logic keeps working. The when method instructs (partial) mocks to execute actions upon being called.
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

Header header = mock(Header.class);
when(response.getHeaders("Etag")).thenReturn(new Header[] { header });

Now, if you're lucky, it's just a matter of injecting or passing this mock to your class and you're done.
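What "injecting or passing" looks like depends on the design of the class under test. Here is a hedged sketch (EtagChecker and ResponseSource are invented names, not from the article) of a constructor seam that makes this easy:

```java
// Hypothetical class under test: because the collaborator comes in through
// the constructor, a test can hand it a mock instead of a real HTTP response.
public class EtagChecker {

    // Minimal stand-in for the HttpResponse dependency used above.
    public interface ResponseSource {
        String header(String name);
    }

    private final ResponseSource response;

    public EtagChecker(ResponseSource response) {
        this.response = response;
    }

    public boolean hasEtag() {
        return response.header("Etag") != null;
    }
}
```

In a test, new EtagChecker(mockedResponse) is all the wiring you need; no reflection or container required.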


doThrow is required for methods that have a void return value. It's a bit of an anomaly, but you regularly need it, so it's best to get used to its quirky syntax.
import static org.mockito.Mockito.doThrow;

doThrow(new NullPointerException()).when(response).setStatusCode(500);


Let us suppose you want to open a FileInputStream and you require it to simulate an IOException. In this case you could make use of whenNew. The hardest thing to instinctively grasp about whenNew is that you must annotate your test class with a reference to the class where the construction is going to take place.
@PrepareForTest( SomeFileHandlerClass.class )

In the above example, you expect the construction to take place in SomeFileHandlerClass.
import static org.powermock.api.mockito.PowerMockito.whenNew;

whenNew(FileInputStream.class).withAnyArguments().thenThrow(new IOException());


This is one of the most powerful tools of PowerMock. Let's say there is a util class and you need one of its methods to throw an exception. First you need to tell your test class which static class you are going to mock. Note that this instruction is different from the one whenNew requires: you now tell which class to operate on, whereas before you told which class performed the operation. Be aware of this inconsistency.
@PrepareForTest( SomeUtilClass.class )

Now that you have done this, you can instruct the util mock to throw the exception:
import static org.powermock.api.mockito.PowerMockito.mockStatic;
import static org.mockito.Mockito.when;

mockStatic(SomeUtilClass.class);
when(SomeUtilClass.someMethod()).thenThrow(new IOException());
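When PowerMock is not an option, a commonly used alternative (not from the article, plain stdlib) is to wrap the static call in an overridable instance method, so a test can force the failure branch by subclassing. A hedged sketch, with ConfigLoader as an invented name:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

// Hypothetical example: the static Files.readAllBytes call is hidden behind
// an overridable seam, so a test can simulate the IOException without mockStatic.
public class ConfigLoader {

    public String load(String path) {
        try {
            return new String(readBytes(path));
        } catch (IOException e) {
            return "<defaults>"; // the branch that is otherwise hard to reach
        }
    }

    // Seam: override this in a test to simulate the failure.
    protected byte[] readBytes(String path) throws IOException {
        return Files.readAllBytes(Paths.get(path));
    }
}
```

The trade-off is a small design change in production code, in exchange for not needing bytecode manipulation in the test.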


Equipped with Mockito and PowerMock you can now bridge the gap between 99% and 100% test-covered code. Once the 100% rule is in place, you can enforce more strictness in your build, for example triggering a build failure when the coverage is below 100%. If you find this too strict, your coverage report will give sensible feedback on code that is uncovered. Whether the costs of the 100% step are worth it, is up to you. If you are very close to 100%, you are the prime candidate for this article and you are likely to derive value from going all the way.