Comparing Infinitest and JUnitMax

As the author of Infinitest, I've had a lot of people asking me what the difference is between my tool and JUnitMax. They both run tests continuously. They're both packaged as Eclipse plugins. They both report test errors in the "Problems" view, just like compiler errors. In all honesty, their similarities are much more numerous than their differences, so it's hard for me to compare them in a way that's helpful to those who ask.

Compounding this is the fact that we're doing, on average, a couple of releases each week. I don't know if Kent is keeping track of what we're doing, but we're very aware of what he's doing. When it comes to responding to change, I think we do it as well as anyone, so any kind of feature-by-feature breakdown is likely to be out of date by the time anyone reads it.

There are some differences, though, and I think they're significant. Here are three that I believe are unlikely to change in the near term:


There is an IntelliJ version of Infinitest

Kent has blogged about possibly doing an IntelliJ version at some point. We've got a working version today, maintained primarily by my friend and colleague Rod Coffin. I think it's a more mature tool than the Eclipse version in some respects, but they're both based on the same core. 

Infinitest selects a subset of tests to run, while JUnitMax runs them all

I think there are two basic functions that make a continuous test runner worthwhile. The first is test prioritization. The order in which you run your tests is very important; you can think of this as "the fastest possible red bar". If a test is likely to fail, I want it to be the very first test I run. Both Infinitest and JUnitMax track test metrics and use them to prioritize the tests.
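
To make the prioritization idea concrete, here's a minimal sketch, in plain Java, of what failure-history-based ordering might look like. This is not Infinitest's or JUnitMax's actual code; the TestStats class and its fields are hypothetical stand-ins for whatever metrics a real runner records.

```java
import java.util.*;
import java.util.stream.Collectors;

// A minimal, hypothetical sketch of failure-based test prioritization.
// Class and field names are illustrative; this is not the API of either tool.
class TestPrioritizer {

    // Per-test history we might track: when it last failed, and how long it takes.
    static class TestStats {
        final String testClass;
        final long lastFailureTimestamp; // 0 if it has never failed
        final long averageRunMillis;

        TestStats(String testClass, long lastFailureTimestamp, long averageRunMillis) {
            this.testClass = testClass;
            this.lastFailureTimestamp = lastFailureTimestamp;
            this.averageRunMillis = averageRunMillis;
        }
    }

    // "Fastest possible red bar": run recently failing tests first,
    // breaking ties in favor of faster tests.
    static List<String> prioritize(Collection<TestStats> stats) {
        return stats.stream()
                .sorted(Comparator
                        .comparingLong((TestStats s) -> s.lastFailureTimestamp).reversed()
                        .thenComparingLong(s -> s.averageRunMillis))
                .map(s -> s.testClass)
                .collect(Collectors.toList());
    }
}
```

The real tools presumably record richer data than this, but the ordering principle is the same: tests that failed recently jump to the front of the queue.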

The other function I think a CT runner needs to provide is test selection. If you're doing TDD, you're probably running a single test every time you make a change. If you're doing CI, you're running all your tests on a semi-regular basis. Somewhere in the middle of these two approaches is a good balance of feedback quality vs. speed, and I think that's where CT tools should be focused. Infinitest uses dependency analysis to determine which tests need to be run for a given change; JUnitMax runs them all. My philosophy is that time spent running tests that are highly unlikely to fail is wasted. I want "the fastest possible green bar" so that I can have confidence in my last change and move on to the next one. And while the multicore revolution may one day provide so much desktop CPU horsepower that we can run all the tests in a typical project, concurrently, in a second or two, we're not there yet.
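
To illustrate the selection idea, here's a rough sketch that walks a reverse dependency graph outward from the changed classes and collects every test it can reach. Again, this is not Infinitest's implementation; the graph here is hand-supplied (a real tool would derive it from bytecode), and the test-naming convention is just an assumption for the example.

```java
import java.util.*;

// A rough sketch of dependency-based test selection, in the spirit of what's
// described above. Names and conventions are illustrative only.
class TestSelector {

    // Reverse dependency graph: class name -> classes that depend on it.
    private final Map<String, Set<String>> dependents;

    TestSelector(Map<String, Set<String>> dependents) {
        this.dependents = dependents;
    }

    // Walk outward from the changed classes; any test class we reach could be
    // affected by the change, so it goes into the set of tests to re-run.
    Set<String> testsAffectedBy(Collection<String> changedClasses) {
        Set<String> visited = new HashSet<>(changedClasses);
        Deque<String> queue = new ArrayDeque<>(changedClasses);
        Set<String> affectedTests = new LinkedHashSet<>();

        while (!queue.isEmpty()) {
            String current = queue.poll();
            if (isTest(current)) {
                affectedTests.add(current);
            }
            for (String dependent : dependents.getOrDefault(current, Collections.emptySet())) {
                if (visited.add(dependent)) {
                    queue.add(dependent);
                }
            }
        }
        return affectedTests;
    }

    // Naive, convention-based check; real tools inspect annotations or superclasses.
    private boolean isTest(String className) {
        return className.endsWith("Test");
    }
}
```

Most of the real work in a tool like Infinitest is presumably in building that dependency graph accurately and updating it cheaply as source files change.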

You don't have to buy Infinitest to use it

All of the committers to the Infinitest project share a common vision, not only of Infinitest as a useful tool, but also of Continuous Testing as a valuable and transformative developer practice. We've been writing code this way for a couple of years at this point, and we find the difference between Continuous Testing and plain Test Driven Development to be quite substantial. It reminds me of when I first got addicted to doing TDD...there's just no way I could go back.

I'm a software craftsman. That means I'm committed not only to building quality software, but also to contributing back to the profession by helping others learn. With Infinitest, I get to do both.

I think these differences make Infinitest a better tool. I also think there's plenty of room in the CT tools market for multiple vendors serving different market segments. We'll see how everything settles out in the next year or so...it promises to be a very interesting ride.

Comments

Brendan Humphreys

A third entry in this discussion is Clover (http://www.atlassian.com/software/clover/), which provides test optimization functionality.

How does Clover compare to Infinitest and JUnitMax?

First up, Clover is commercial software - it is a sophisticated code coverage tool, and test optimization is only one of its features.

Secondly, we've come at this problem from a Continuous Integration angle, but with version 2.5 we're taking the first steps towards Continuous Testing.

Clover provides test optimization in Ant and Maven, with Eclipse & IntelliJ support in the 2.5 release (due beginning of May).

Clover's test optimization provides both test prioritization (like JUnitMax and Infinitest) and test selection (like Infinitest). Unlike Infinitest, Clover's test selection is based on coverage analysis rather than static dependency analysis. Use of static analysis means that Infinitest will not work with dynamic/reflection-based invocations, whereas Clover will.

Unlike Infinitest and JUnitMax, we work with existing test runners, rather than providing our own.

Clover's test optimization works very well in a CI setting. We use it internally at Atlassian as a "gateway build": rather than running a full test suite for every commit, we first use Clover to run just those tests relevant to the changes in the commit. This way we can find test failures in a fraction of the time it would normally take.

Finally, coming in 2.5, Clover will support distributed test optimization: the ability to optimize your functional or integration tests, even if the components exercised in the test are in separate VMs or on separate machines.


Cheers,
-Brendan
Clover developer

Ben Rady

Brendan,

Thanks for the heads up!

Ben

Michael L Perry

Cool plug-in, Ben. I especially love the test selection feature. However, I think you are doing more work to get it than you need to.

I moderate an open-source project that is at first glance completely unrelated, but it does a very similar kind of dependency discovery. I do this without static code analysis. I think you can too.

I've posted details on my blog:

http://adventuresinsoftware.com/blog/?p=438

Thanks for your hard work and contribution.
