Scratching a JUnit Itch


  • I like JUnit. It is simple and clean, and it is ubiquitous in the Java world.
  • I like Go’s testing package. It is even simpler and cleaner, and distinguishes between failed checks and fatal test failures. It doesn’t use exceptions to do this.
  • I wanted to see what Go’s testing semantics would look like in JUnit, so I wrote JGoTesting. Some people helped me.
  • I’m quite pleased with how it is turning out so I want to share it with you.

My history with JUnit

I’ve been using JUnit since its very early days. It was the tool I learned my TDD chops with, way back in the early 2000s. I remember Chris Stevenson writing agiledox to pretty-print JUnit test names, which was one of the inspirations for BDD. I watched Nat Pryce and Steve Freeman extracting the matchers from JMock into the Hamcrest library, and I was there when Joe Walnes came up with assertThat as a neat bridge between JUnit assertions and Hamcrest matchers. I saw methods-starting-with-test give way to @Test annotations. What I’m saying is JUnit and I have some history.

My history with Go

I started playing with Go around the time it turned 1.0. I had seen it before, but this seemed a good time to start taking it seriously. I have been saying for some time that I think Go is going to be a significant language, and I have used that phrase many times to describe it. It has subsequently become the darling of the DevOps generation, such as the talented folks at Hashicorp and Docker, Inc., not least because of its deployment story: Take these statically-linked bytes that contain their own subset of the Go runtime, and run them somewhere. No preinstalling a runtime, no classpath or load path, no runtime dependency hell, no deployment bootstrapping problem.

Its core libraries are consistent and guessable, and it has opinions about code layout which are guaranteed to offend practically everyone, but which instantly neutralise all those code layout religious wars. Nat Pryce has described Go as “a language so ugly the only thing you can do with it is work,” which I think sums it up nicely. It is as simple, pragmatic and workaday as a pair of Doc Martens. I like it.

Testing in Go

One of my epiphany moments in Go was when I started using its testing package. It has two primary types, testing.T for tests, and testing.B for benchmarks¹. There aren’t any special constructs for assertions like JUnit’s assertEquals(...) and friends. Instead each test method receives an instance of testing.T, named t by convention, and you just use if to check for errors:

if thing.IsBroken() {
    t.Error("thing is broken")
}

// test continues here
// ...

My epiphany was simply this: A failure doesn’t have to stop the test!

The testing.T type has a number of methods for recording errors:

  • Fail() marks the test as failed, but allows the test to continue.
  • Error(msg) does the same but records a message².
  • Log() records a message which is only displayed if the test fails.

Fail has a corresponding FailNow method, and Error has Fatal, both of which mark the test as failed and terminate it immediately. This was when the lights started going on for me. FailNow is a longer name than Fail. Fail is a single word, Fail Now is more words. That suggests the designers of the testing package think fail-and-continue should be the more common case. What if they were right?
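The distinction is easy to model. Here is a minimal, self-contained Java sketch of a Go-style test recorder (the class and method names are mine, purely illustrative, not a real API): failures are recorded as the test runs, and only a fatal failure stops it.

```java
import java.util.ArrayList;
import java.util.List;

// An illustrative analogue of Go's testing.T: failures are
// recorded, and only "fatal" failures terminate the test.
class MiniT {
    private final List<String> errors = new ArrayList<>();
    private boolean failed = false;

    // Marks the test as failed but lets it continue (Go's Fail).
    void fail() { failed = true; }

    // Fail plus a recorded message (Go's Error).
    void error(String msg) { failed = true; errors.add(msg); }

    // Records the message and terminates the test (Go's Fatal),
    // modelled here as an unchecked exception.
    void fatal(String msg) {
        error(msg);
        throw new AssertionError(msg);
    }

    boolean failed() { return failed; }
    List<String> errors() { return errors; }
}

public class MiniTDemo {
    public static void main(String[] args) {
        MiniT t = new MiniT();
        t.error("first problem");  // the test keeps going...
        t.error("second problem"); // ...so we learn about this one too
        System.out.println(t.failed() + ", " + t.errors().size() + " errors");
    }
}
```

Both errors are reported at the end, instead of the run stopping at the first one.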

Testing in JUnit

JUnit uses exceptions to signify test errors. I’ve never been happy with this because it forces you into one of two modes. Either you have lots of near-identical tests with a different assertion in each one, or you have one test with lots of assertions, which tells you nothing after the first one fails. The former is noisier to read, the latter is less informative at runtime.
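To make that information loss concrete, here is a small self-contained Java sketch (plain AssertionErrors, no JUnit required). The exception style stops at the first mismatch, so later failures are invisible; a collecting style reports every mismatch.

```java
import java.util.ArrayList;
import java.util.List;

public class FailureStyles {
    // Exception style: throw on the first mismatch,
    // like JUnit's assertEquals.
    static void assertEqual(String msg, Object expected, Object actual) {
        if (!expected.equals(actual)) throw new AssertionError(msg);
    }

    // Collecting style: record every mismatch and report them all.
    static List<String> checkAll() {
        List<String> failures = new ArrayList<>();
        if (!"one".equals("ONE")) failures.add("this fails");
        if (!"two".equals("TWO")) failures.add("this also fails");
        return failures;
    }

    public static void main(String[] args) {
        try {
            assertEqual("this fails", "one", "ONE");
            assertEqual("never reached", "two", "TWO"); // we never learn about this
        } catch (AssertionError e) {
            System.out.println("exception style saw: " + e.getMessage());
        }
        System.out.println("collecting style saw: " + checkAll());
    }
}
```

One run of the collecting version tells you everything the exception version needs two runs to reveal.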

JUnit started out as SUnit in Smalltalk, where the language editor is method-oriented—Smalltalk doesn’t have source files as such—and methods tend to be small, so having lots of little tests is completely normal. In a file-based language like Java it just ends up looking like chatter, and it is difficult to find the tiny difference between the many similar-looking validation tests, say.

There have been many heated discussions over the years about the best way to structure an xUnit test in the various languages it has been ported to. Some say you should have one assert per test. Some, like myself, say one intent per test, although for reasons of pragmatism I will often break this rule because of that risk of information loss with multiple assertions.

Checking vs. Testing

Using Go’s testing.T methods got me thinking about the difference between checking and testing. Some people in the software testing community have been using these words to distinguish between things that can be done by a computer, and things you need a human’s contextual awareness for. I think this is a valuable distinction but checking and testing don’t mean those things.

Testing is any activity that increases confidence in something, usually for someone else. Different kinds of testing increase confidence for different people. Security testing increases confidence for security people, audit testing increases confidence for audit people, functional correctness testing increases confidence for business sponsors and end users, and so on. There are many schools of thought on what testing means. This is the definition I find most useful.

Experienced software testers, at least the ones I know, have two superpowers. The first is empathy: the ability to get inside a stakeholder’s head and figure out what they are concerned about and what would give them more confidence. The second is insight: the ability to understand what they need to do with a computer system to provide that confidence. They often have a third superpower, which is balance: the ability to make suitable trade-offs within a limited timeframe, to provide enough stakeholders with enough confidence that they are prepared to let people use the software.

Checking is carrying out some kind of inspection, or check. Checks often come in the form of a list, which is where we get the term “checklist”. Each list item has a box next to it where you can record that you carried out the inspection. You put a checkmark in the checkbox.

I think there is a quadrant waiting to be drawn, with Checking and Testing on one axis, and Human and Computer on the other. Some kinds of testing need humans, some can be done by computers. Likewise some checks can be carried out by a computer, some need a human.

As an example, some years ago I was waiting to board a plane in a small airport in Denmark and a man with a clipboard was walking round the little aircraft on the tarmac. He would look at various parts of the wings, fuselage, wheels, landing gear, and other things I don’t know the name of, and make check marks as he made his round. I have no idea what he was checking for but I felt reassured he had done it. I also doubt the aircraft-checking equivalent of a Roomba would have made me feel nearly so safe.

To confuse matters, many tests take the form of a series of checks. In the UK you have to take your car for an annual roadworthiness test called an MOT test. This consists of a number of checks: brakes, electrics, emissions, headlamp angle, wipers, etc.

The JUnit equivalent of an MOT test with multiple assertions would go something like this:

Me: Hello, can you test my car please?
Tester: Sure, pop it up on this ramp.
Me: Ok, there you go.
Tester: Your brakes are worn.
Me: Ok, thanks. [drives off, comes back with brakes fixed] It’s me again. Can you test my car?
Tester: Sure. Your emissions are too high.
Me: *sigh* [drives off, comes back with emissions fixed] How about now?
Tester: Your headlamps…
Me: Oh come on! Can’t you just tell me what’s wrong with my car?

Where the Go version would be:

Tester: Here is a report of all the things wrong with your car.
Me: Thanks, I’ll go fix those.

Introducing JGoTesting

So all this got me wondering what Go’s testing semantics would look like in JUnit. These were my design principles:

  • It should feel natural to a seasoned JUnit user.
  • It should use seams in JUnit where possible.
  • It should use idiomatic Java style. Rather than having different method names for Fail() and Error(msg), just provide overloads.
  • It should use idiomatic JUnit style, including static assertXxx methods. I would add equivalent checkXxx static methods that do the same thing.
  • It should play nice with Hamcrest matchers.
  • I would target Java 1.7 and JUnit 4, but it should allow checks using Java 8 lambdas.
  • Switching over to JGoTesting should require minimal code changes.
  • You should be able to use JGoTesting only where you want to. Everything else will still use vanilla JUnit.
  • It should be available from Maven Central.
  • A failed check shouldn’t stop a test, but should be recorded.
  • It should be possible to stop a test when a failed check means there is no point going on. If you can’t start the engine there is no point doing any of the emissions checks.

I started by writing a custom org.junit.runner.Runner, so you could annotate a class with @RunWith(JGoTesting.class) and it would Just Work. After some experimenting, and some lively discussion with other Java developers, I switched this over to use a @Rule, which it turns out should really be called a @TestDecorator, since it allows you to decorate a test invocation, which is exactly what I wanted.

By adding a @Rule public JGoTestRule instance to a test class, the default JUnit runner wraps each test in JGoTesting goodness.
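A simplified sketch of what such a decorator does (this illustrates the pattern, not JGoTesting’s actual internals): run the test body, collect any failures recorded along the way, and only throw once the body has finished.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of how a JUnit @Rule can decorate a test:
// run the body, collect recorded failures, then fail once at the end.
class CollectingRule {
    private final List<String> failures = new ArrayList<>();

    // Record the failure rather than throwing immediately.
    void check(String msg, boolean ok) {
        if (!ok) failures.add(msg);
    }

    // Analogous in spirit to TestRule.apply(): wrap the test body.
    void run(Runnable testBody) {
        testBody.run();
        if (!failures.isEmpty()) {
            throw new AssertionError(failures.size() + " failure(s): " + failures);
        }
    }
}

public class RuleSketch {
    public static void main(String[] args) {
        CollectingRule rule = new CollectingRule();
        try {
            rule.run(() -> {
                rule.check("this fails", "one".equals("ONE"));
                rule.check("this also fails", "two".equals("TWO"));
            });
        } catch (AssertionError e) {
            System.out.println(e.getMessage());
        }
    }
}
```

The test body runs to completion, and the wrapper turns the accumulated failures into a single test failure at the end.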

So now it is up on GitLab! This is from the README:

Quick start

  1. Add a JGoTesting @Rule instance to your test class.

    import org.junit.Rule;
    import org.jgotesting.rule.JGoTestRule;

    public class MyTest {
        @Rule
        public final JGoTestRule test = new JGoTestRule();
    }
  2. Use JGoTesting’s static assertXxx methods in place of the JUnit ones just by replacing an import. Or use the checkXxx ones if you prefer. All tests in a class with the @Rule will be managed by JGoTesting.

    import static org.jgotesting.Assert.*; // same methods as org.junit.Assert.*
    import static org.jgotesting.Check.*; // ditto, with different names

    public class MyTest {
        @Rule
        public final JGoTestRule test = new JGoTestRule();

        @Test
        public void checksSeveralThings() {
            // These are all checked, then they all report as failures
            // using assert methods
            assertEquals("this fails", "one", "ONE");
            assertEquals("this also fails", "two", "TWO");

            // same again using check aliases
            checkEquals("so does this", "one", "ONE");
            checkEquals("and this", "two", "TWO");

            // Test fails with four errors. Sweet!
        }
    }
  3. The rule instance is a reference to the current test, so you can chain checks together. You can log messages that will only be printed if the test fails, using log methods. That way you can capture narrative about a test without having lots of verbose output for passing tests.

    public class MyTest {
        @Rule
        public final JGoTestRule test = new JGoTestRule();

        @Test
        public void checksSeveralThings() {
            test.log("This message only appears if we fail")
                // All these are checked, then they all report as failures
                .check("this fails", "one", equalTo("ONE")) // Hamcrest matcher
                .check("this also fails", "two", equalTo("TWO"))
                .check("so does this", "one".equals("ONE")) // boolean check
                .check("and this", "two".equals("TWO"));

            // Fails with four errors. Sweet!
        }
    }
  4. Sometimes a test fails and there is no point continuing. In that case you can terminate the test with a message, or throw an exception like you would elsewhere:

    public class MyTest {
        @Rule
        public final JGoTestRule test = new JGoTestRule();

        @Test
        public void terminatesEarly() {
            // ...
            test.terminateIf("unlikely", moon, madeOf("cheese"));

            // We may not get here
            test.terminate("It's no use. I can't go on.");

            // We definitely won't get here
            throw new IllegalStateException("how did we get here?");
        }
    }

Worth knowing about

  • All the log, check and terminate methods work with either a simple boolean expression, a Hamcrest Matcher<>, or a Checker<>, which is a Single Abstract Method (SAM) interface so you can use Java 8 lambdas for checking.
  • The log, fail and terminate methods have logf, failf and terminatef variants that take printf-like string formatters.
  • The Testing class contains static versions of all the log, fail and terminate method variants.
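A SAM interface is any interface with exactly one abstract method, which is precisely what a Java 8 lambda can stand in for. Here is a minimal sketch of the pattern (the interface and method names below are illustrative, not JGoTesting’s exact signatures):

```java
// A Single Abstract Method (SAM) interface: any one-method
// interface can be implemented with a Java 8 lambda.
@FunctionalInterface
interface Checker<T> {
    boolean check(T value);
}

public class CheckerDemo {
    // A check method that accepts any Checker, and therefore any lambda.
    static <T> boolean runCheck(T value, Checker<T> checker) {
        return checker.check(value);
    }

    public static void main(String[] args) {
        // The lambda is the entire implementation of Checker<String>.
        boolean ok = runCheck("hello", s -> s.startsWith("h"));
        System.out.println(ok);
    }
}
```

Because the interface has a single abstract method, the compiler can infer the lambda’s type at the call site, which keeps check expressions short.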

Warning: contents may settle

This is still very much an experiment, and the API is not yet stable. I’ve flip-flopped between failIf and failWhen as method names for conditional failures, and may do so again; I’m currently favouring failIf. If you like trying new things, please take JGoTesting for a spin and let me know what you think. If you prefer stability, I would wait a while.


The feedback I’ve had so far has been surprisingly encouraging. JGoTesting was mostly written in two spurts at different unconferences. The first, using the Runner, was written on a bus travelling north from Jfokus in January; the second, porting it to use a @Rule, happened at JCrete in August.

I was privileged to have access to some amazing developers at both these events, who were generous with their time and knowledge, and were patient with my rusty Java skills. In roughly chronological order I am hugely grateful to Bas Knopper, Leonard Gram, Andres Almiray, Mattias Jidderman, Josh Long, Dmitry Vyazelenko, Ixchel Ruiz and Rickard Öberg, among others. Also thanks to Mattias Karlsson and Heinz Kabutz for organising Jfokus and JCrete respectively, and Kirk Pepperdine for allowing me to gatecrash the latter.

  1. Go 1.4 adds testing.M for wrapping test runs in a main method. You can see a naming theme. ↩︎

  2. Go doesn’t have method overloading so you need a different method name if you want a different signature. ↩︎