Share and Enjoy

David Saff's blog of technological joys.

Wednesday, January 30, 2008

 

Meet me at the Green Bar

Blogging here has been fun, but I wanted a nicer place to talk without blogger.com hanging around all the time. So, from now on, meet me at the Green Bar.

Wednesday, May 09, 2007

 

TDD in academia: a brief review

The new millennium has seen growing academic interest in test-driven development. Here, I'll review the papers I've seen so far, as a starting point for further research, a reference for on-line discussions, and a source for resolving bar bets. I've squashed some interesting nuances in each study regarding the exact processes compared, and I'm happy to revise if any reader feels I've misquoted or over-simplified.

First, a couple studies, both using students, have found either no significant difference, or just slight improvement, in quality and productivity from using TDD.


Laurie Williams' group at North Carolina State has conducted several studies of TDD using professional programmers.

Other groups have also found productivity and quality gains from using TDD.


Many of these papers are well-summarized by Janzen and Saiedian, in a survey paper that takes a positive view of TDD and predicts growing acceptance. Janzen and Saiedian also suggest test-driven learning, an application of TDD to the software engineering classroom.

To draw some tentative conclusions:


Tuesday, May 08, 2007

 

Exploring with JUnit Factory: 103 points in the first frame

In my last post, we shook out a Theory about the number of bowls and frames in a game of bowling:


@Theory
public void shouldBeTenFramesWithTwoRollsInEach(Game game, Bowl first,
        Bowl second) {
    assumeNotNull(game, first, second);
    assumeThat(game.isAtBeginning(), is(true));
    assumeThat(game.getPlayers().size(), is(1));
    assumeThat(first.isStrike(), is(false));
    assumeThat(second.completesSpareAfter(first), is(false));

    for (int frame = 0; frame < 10; frame++) {
        game.bowl(first);
        game.bowl(second);
    }

    assertThat(game.isGameOver(), is(true));
}


We used JUnit Factory to find some missing data points and missing assumptions, and got to the point where all JUnit Factory could find were parameters that either failed the assumptions or passed the tests, which is good.

At this point, we should think about the next functionality to test. I have almost a dozen methods that I've stubbed with fake answers, but before I completely forget about this Theory, I check the test that JUnit Factory has produced to test the "passing" path through this Theory:


public void testShouldBeTenFramesWithTwoRollsInEach() throws Throwable {
    Game STARTING_GAME = BowlingTests.STARTING_GAME;
    BowlingTests bowlingTests = new BowlingTests();
    bowlingTests.shouldBeTenFramesWithTwoRollsInEach(STARTING_GAME,
            BowlingTests.THREE, new Bowl(100));
    assertNotNull("bowlingTests.assume", getPrivateField(bowlingTests, "assume"));
}


Well, bust my buffers, where did JUnit Factory come up with the idea to bowl 100 pins with one ball? I'll have to look at its league records to see if there's been similar grade inflation. This is not the perfect test we'd like to see--no Bowl should exist with more than 10 pins. I could fix this directly in the code, but we need a failing test first. Our current Theory would pass with 100 as a data point--what I need is a new Theory.

It's tempting to test that the Bowl constructor throws an exception whenever a pinCount over 10 is passed to it. However, exception tests can obscure intent, and that's not quite what I want to say--I want to say that no Bowl exists with more than 10 pins bowled:


@Theory
public void maximumPinCountIsTen(Bowl bowl) {
    assumeNotNull(bowl);
    assertThat(bowl.pinCount(), lessThanOrEqualTo(10));
}


This is easily passed by having pinCount() always return, say, 5. I am being deliberately difficult here, employing what Kent Beck calls "Fake it till you make it". The correct implementation of pinCount (return the pins passed in the constructor) is obvious, but it's worth our time to notice that our current tests don't distinguish between the obviously right and obviously wrong implementations.
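To make that gap concrete, here's a plain-Java sketch (no Popper involved; FakeItDemo, FakeBowl, and RealBowl are made-up names) showing that a faked pinCount() satisfies the maximum-of-ten property while failing the round-trip property:

```java
// Sketch: a faked pinCount() satisfies "at most 10 pins", but only the
// real implementation satisfies "pinCount matches constructor parameter".
public class FakeItDemo {
    static class FakeBowl {
        FakeBowl(int pinCount) { /* constructor argument ignored */ }
        int pinCount() { return 5; } // fake it till you make it
    }

    static class RealBowl {
        private final int pinCount;
        RealBowl(int pinCount) { this.pinCount = pinCount; }
        int pinCount() { return pinCount; }
    }

    public static void main(String[] args) {
        // Both versions pass maximumPinCountIsTen...
        assert new FakeBowl(3).pinCount() <= 10;
        assert new RealBowl(3).pinCount() <= 10;
        // ...but only RealBowl passes pinCountMatchesConstructorParameter.
        assert new RealBowl(3).pinCount() == 3;
        assert new FakeBowl(3).pinCount() != 3; // the fake is caught here
        System.out.println("fake distinguished from real");
    }
}
```

The second Theory below is exactly the test that separates these two implementations.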

The reason we can get away with a fake return from pinCount is that all we've required of the method is that it return something less than or equal to 10. Let's add another Theory about the normal behavior of pinCount:


@Theory
public void pinCountMatchesConstructorParameter(int pinCount) {
    assertThat(new Bowl(pinCount).pinCount(), is(pinCount));
}


To make this pass, we can now put in the obvious definition of pinCount:


private final int pinCount;

public Bowl(int pinCount) {
    this.pinCount = pinCount;
}

public int pinCount() {
    return pinCount;
}


Now all of our theories pass on our current data points. Let's look for other data points using JUnit Factory. We get this excellent test (from now on, I'll edit out all of the unimportant bits of the test, leaving just the name and invocation):


public void testMaximumPinCountIsTenThrowsAssertionError() throws Throwable {
    bowlingTests.maximumPinCountIsTen(new Bowl(100));
}


This is what we were hoping for. Our theories now catch a 100-point Bowl as an error. Before going further, I need to add this as a data point:


public static Bowl ONE_HUNDRED_BOWL = new Bowl(100);


Now maximumPinCountIsTen fails. To fix this, I'll prevent the construction of Bowls that have more than 10 pins:


public Bowl(int pinCount) {
    if (pinCount > 10)
        throw new IllegalArgumentException("At most 10 pins in one bowl");
    this.pinCount = pinCount;
}


Now, everything falls apart. When trying to create an instance of BowlingTests, the line


public static Bowl ONE_HUNDRED_BOWL = new Bowl(100);


causes construction to fail with an IllegalArgumentException, so no tests get run. We could remove the data point, but if we were ever to regress and forget to check arguments to the Bowl constructor, this is a test that will remind us. Popper will allow us to wrap the datapoint in a method, annotated with @DataPoint. Any @DataPoint method that throws an exception is simply ignored, so we can keep this data point around in case it's ever needed again:


@DataPoint public Bowl oneHundredBowl() { return new Bowl(100); }


Running the tests now, we get an IllegalArgumentException on pinCountMatchesConstructorParameter:



@Theory
public void pinCountMatchesConstructorParameter(int pinCount) {
    assertThat(new Bowl(pinCount).pinCount(), is(pinCount));
}


Since any integer can be passed in, some of those integers will cause IllegalArgumentExceptions. However, these integers are invalid parameters. I could explicitly check that all ints coming in are between 0 and 10, but that would duplicate the logic that the constructor itself should be doing. Instead, I'll recognize that an IllegalArgumentException is a signal that the parameter is invalid:


@Theory
public void pinCountMatchesConstructorParameter(int pinCount) {
    try {
        assertThat(new Bowl(pinCount).pinCount(), is(pinCount));
    } catch (IllegalArgumentException e) {
        assumeNoException(e);
    }
}


Here, assumeNoException turns an otherwise fatal exception into an assumption failure.
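Roughly speaking (this is an illustrative sketch, not Popper's actual internals), a failed assumption throws a distinguished exception that the Theory runner catches and counts as "skip this parameter combination" rather than as a failure:

```java
// Sketch of the assumption mechanism: assumeNoException rethrows the
// exception as an AssumptionViolatedException, which the runner treats
// as "invalid parameters" rather than "test failed".
public class AssumeSketch {
    static class AssumptionViolatedException extends RuntimeException {
        AssumptionViolatedException(String message) { super(message); }
    }

    static void assumeNoException(Throwable t) {
        throw new AssumptionViolatedException("Got exception: " + t);
    }

    // Returns "passed", "skipped", or (implicitly) fails, for one value.
    static String runTheory(int pinCount) {
        try {
            if (pinCount > 10) {
                // Stand-in for the Bowl constructor's argument check.
                try {
                    throw new IllegalArgumentException("At most 10 pins");
                } catch (IllegalArgumentException e) {
                    assumeNoException(e);
                }
            }
            return "passed";
        } catch (AssumptionViolatedException e) {
            return "skipped"; // invalid parameter, not a failure
        }
    }

    public static void main(String[] args) {
        System.out.println(runTheory(3));   // passed
        System.out.println(runTheory(100)); // skipped
    }
}
```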

Now, we run the tests again to find out that we haven't supplied any valid integers as parameters to the Bowl constructor. We can easily come up with one ourselves, but let's overuse JUnit Factory instead. We generate tests, and JUnit Factory chooses the data point 0 to test pinCountMatchesConstructorParameter. We add that data point to our TheoryContainer, and find that:



Now, I'm finally ready to move on to my next bit of functionality, but I'll let this series on bowling with Popper and JUnit Factory draw to a close. For those of you following along at home, here's the final source of our Theory class. Some things to note:


Tuesday, May 01, 2007

 

Exploring with JUnit Factory: not null, I assume

In my last post, we used Popper to create a Theory about scoring a bowling game. The end result boiled down to roughly this:


@RunWith(Theories.class)
public class BowlingTests extends TheoryContainer {
    public static Game STARTING_GAME = new Game();
    public static Bowl THREE = new Bowl(3);
    public static Bowl FOUR = new Bowl(4);

    @Theory
    public void shouldBeTenFramesWithTwoRollsInEach(Game game, Bowl first,
            Bowl second) {
        assumeThat(game.isAtBeginning(), is(true));
        assumeThat(game.getPlayers().size(), is(1));
        assumeThat(first.isStrike(), is(false));
        assumeThat(second.completesSpareAfter(first), is(false));

        for (int frame = 0; frame < 10; frame++) {
            game.bowl(first);
            game.bowl(second);
        }

        assertThat(game.isGameOver(), is(true));
    }
}


I believe Theories are useful for any Java developer. However, so far, I've been concentrating on test-driven development (TDD). The heartbeat of TDD with Tests and Theories is similar to that with just Tests, with one essential difference.


  1. Use automated or manual exploration to find any data points that invalidate the current Theories. If such a data point exists, add it to the currently accepted data points. Otherwise, write a focused Test or Theory for the next bit of functionality needed. At the end of this step, a Test or Theory should fail.
  2. Change the code so that all current Tests and Theories pass.
  3. Refactor to the best design that passes the current tests and theories.
  4. Repeat.


We've already done step 1--since there were no Theories, I wrote a new one, above, about ten frames with two rolls. I'll do step 2 off-screen...

There. I've written the simplest code I can think of to pass this theory--we can hard-code most of the boolean answers, especially making sure that isGameOver always returns true. Running the test in Eclipse tells me it passes for all of the parameters I've considered (all three of them). However, what about the infinite number of parameters I haven't considered? I can stare at the code and consider other options, or I can just ask JUnit Factory.
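Since the implementation stays off-screen, here's my guess at what that "simplest code" skeleton might look like; every answer is hard-coded, and any name beyond those the Theory itself uses is an assumption:

```java
import java.util.Collections;
import java.util.List;

// A guess at the off-screen "simplest thing": hard-code every answer
// the Theory asks about, especially isGameOver().
public class Skeleton {
    static class Bowl {
        Bowl(int pinCount) { /* pins ignored for now */ }
        boolean isStrike() { return false; }                      // hard-coded
        boolean completesSpareAfter(Bowl first) { return false; } // hard-coded
    }

    static class Game {
        boolean isAtBeginning() { return true; }                  // hard-coded
        List<Object> getPlayers() { return Collections.singletonList(new Object()); }
        void bowl(Bowl bowl) { /* does nothing yet */ }
        boolean isGameOver() { return true; }                     // hard-coded
    }

    public static void main(String[] args) {
        Game game = new Game();
        for (int frame = 0; frame < 10; frame++) {
            game.bowl(new Bowl(3));
            game.bowl(new Bowl(4));
        }
        assert game.isGameOver(); // the Theory's assertion holds trivially
        System.out.println("theory passes on this skeleton");
    }
}
```

Hard-coding like this is exactly what makes the next step--hunting for parameters that break the Theory--worth doing.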

JUnit Factory is a free service accessed through a free Eclipse plug-in. Its primary purpose is to generate characterization tests for domain classes. It uses static analysis, dynamic analysis, and tuned heuristics in an attempt to characterize the current behavior of your classes, especially in unanticipated circumstances.

By turning the powerful eye of JUnit Factory on a TheoryContainer, I can see automatically if there are any inputs to my theory that pass the assumptions, but fail the assertions. I've already downloaded the plug-in, and I've made sure I meet these prerequisites:



I focus my editor on BowlingTests, push Shift-F9, and in about 30 seconds, I get my first set of characterization tests. Remember that these are tests of the methods of my TheoryContainer, not of the Game or Bowl classes themselves. When scanning these tests, I'm looking only for parameters and outcomes, not the assertions themselves, which are unlikely to be interesting--all of my Theory methods, remember, return void. There are usually a few that indicate proper returns from my Theory methods, and a few indicating exceptional returns. Scanning the outline, I see:


testShouldBeTenFramesWithTwoRollsInEach()
testShouldBeTenFramesWithTwoRollsInEachThrowsNullPointerException()
testShouldBeTenFramesWithTwoRollsInEachThrowsNullPointerException1()
testShouldBeTenFramesWithTwoRollsInEachThrowsNullPointerException2()


So JUnit Factory found at least one way to make the Theory pass, and three ways to make it throw a NullPointerException. Since my tests pass with my current data points, there must be new data points I need to include to find these exceptional behaviors. Let's look at the first NullPointerException test:

    
public void testShouldBeTenFramesWithTwoRollsInEachThrowsNullPointerException()
        throws Throwable {
    BowlingTests bowlingTests = new BowlingTests();
    try {
        bowlingTests.shouldBeTenFramesWithTwoRollsInEach(new Game(),
                BowlingTests.THREE, null);
        fail("Expected NullPointerException to be thrown");
    } catch (NullPointerException ex) {
        assertNull("ex.getMessage()", ex.getMessage());
        assertThrownBy(BowlingTests.class, ex);
        assertNotNull("bowlingTests.assume", getPrivateField(bowlingTests, "assume"));
    }
}


This is annoying--JUnit Factory is passing a null Bowl to my theory. Of course, my Theory currently claims that it accepts any value of type Bowl, and null fits that description. The other two NullPointerException tests make use of another null Bowl, and a null Game.

In order to deal with this new information, we add the two new data points to our TheoryContainer:


public static Game STARTING_GAME = new Game();
public static Game NULL_GAME = null;

public static Bowl THREE = new Bowl(3);
public static Bowl FOUR = new Bowl(4);
public static Bowl NULL_BOWL = null;


Running the test, it fails with a NullPointerException, as expected. Now, the theory needs to be updated to assume that the parameters are not null. Using just Popper, we can simply use an attribute of the @Theory annotation, @Theory(nullsAccepted=false). Unfortunately, JUnit Factory does not understand this attribute, so instead, we'll have to explicitly add the assumptions:


@Theory
public void shouldBeTenFramesWithTwoRollsInEach(Game game, Bowl first,
        Bowl second) {
    assumeThat(game, isNotNull());
    assumeThat(first, isNotNull());
    assumeThat(second, isNotNull());

    // ...


This is a common pattern when using Popper together with JUnit Factory. In order to make it as painless as possible, you can use a shorthand from Popper 0.5:


@Theory
public void shouldBeTenFramesWithTwoRollsInEach(Game game, Bowl first,
        Bowl second) {
    assumeNotNull(game, first, second);

    // ...


Now, generating the tests, we see the following methods:


testShouldBeTenFramesWithTwoRollsInEach()
testShouldBeTenFramesWithTwoRollsInEachThrowsInvalidTheoryParameterException()


This is what we want to see: sometimes the parameters are invalid, but these are caught by our assumptions. Anything getting past our assumptions is passing the test. Excellent.

This may feel like a lot of work for a simple skeleton. However, this work will be paid off as we move forward--this Theory actually says a lot about bowling games, and as we make Game and Bowl more sophisticated, this Theory will be waiting to catch any weirdness introduced.

And, in the future, I may publish an Eclipse plug-in that will better manage this "mash-up" of Theories and JUnit Factory, for example, by automatically inserting assumeNotNull where desirable.

Friday, April 27, 2007

 

Bowling with Popper style

A few months ago, I released the first version of Popper, an extension to JUnit that allows you to supplement Tests (statements about how one particular object acts on one set of inputs) with Theories (statements about how all objects meeting certain criteria act on all inputs meeting other criteria). Popper has grown up to version 0.4, and I'd like you to try it out. Yes, you.

The big idea here is being even more precise about what you test, and how you communicate it. Tests written in a JUnit style end up saying both more and less than the developer knows--Theories help you say exactly what you know.

As an example, let's work through a unit test Kevin Lawrence has suggested for a bowling scorer. The first requirement Kevin considers is


// 2.1.1 A game of tenpins consists of ten frames. A player delivers two balls in each of the first
// nine frames unless a strike is scored. In the tenth frame, a player delivers three balls if a
// strike or spare is scored. Every frame must be completed by each player bowling in regular
// order.


Here's Kevin's test:


@Test
public void shouldBeTenFramesWithTwoRollsInEach() {
for(int frame = 0; frame < 10; frame++){
game.bowl(3);
game.bowl(4);
}

assertThat(game.isGameOver(), is(true));
}


Kevin's using Hamcrest, a matcher library that allows the convenient assertThat syntax. This is good, because we will too.

This test talks about a fixture field game, which is initialized as such:


public class GameTest {
    private Game game;

    @Before
    public void createGame() {
        game = new Game();
    }
}


This is a fairly well-written JUnit test setup, and it will correctly catch a number of bugs. However, as a communication tool to a future maintainer, it falls short:


  1. The requirement talks about each player bowling in regular order--does the default constructor of Game produce a single-player game?
  2. Does the property of having twenty bowls left only apply right after construction, or to Games in other states?
  3. What's special about 3 and 4? Would any other numbers do? Just those particular numbers?


When writing a theory, the strategy is to remove specifics that are not important to the behavior under consideration, and use the object's own protocol to fill in the details that are left out. First, we can remove the invocation of the default constructor from the understanding of the test--what's important is not the constructor call, but the state the game is in at the beginning of the sequence, and the number of players. We do this by making game a parameter of the method, which is now a Theory, and making assumptions about its current state:


@Theory
public void shouldBeTenFramesWithTwoRollsInEach(Game game) {
    assumeThat(game.isAtBeginning(), is(true));
    assumeThat(game.getPlayers().size(), is(1));

    for (int frame = 0; frame < 10; frame++) {
        game.bowl(3);
        game.bowl(4);
    }

    assertThat(game.isGameOver(), is(true));
}


In the future, it may be possible to have Games that are "at the beginning", but don't result directly from constructor calls--for example, a Game loaded from an intermediate "save-game" file. This Theory will automatically apply to those Games, as well.

Next, what's special about 3 and 4? Well, they're just two numbers that indicate that neither a strike nor spare was bowled:


@Theory
public void shouldBeTenFramesWithTwoRollsInEach(Game game, int firstBowl,
        int secondBowl) {
    assumeThat(game.isAtBeginning(), is(true));
    assumeThat(game.getPlayers().size(), is(1));
    assumeThat(firstBowl, lessThan(10));
    assumeThat(firstBowl + secondBowl, lessThan(10));

    for (int frame = 0; frame < 10; frame++) {
        game.bowl(firstBowl);
        game.bowl(secondBowl);
    }

    assertThat(game.isGameOver(), is(true));
}


(Actually, we're missing the fact here that spares in the first nine frames also lead to two bowls--the original test left that fact out, and I choose to do the same in this Theory).

The line assumeThat(firstBowl + secondBowl, lessThan(10)) bothers me. It doesn't match up with the concept of "spare" from the requirements, and it likely duplicates logic that will end up in the domain soon enough. Therefore:


@Theory
public void shouldBeTenFramesWithTwoRollsInEach(Game game, Bowl first,
        Bowl second) {
    assumeThat(game.isAtBeginning(), is(true));
    assumeThat(game.getPlayers().size(), is(1));
    assumeThat(first.isStrike(), is(false));
    assumeThat(second.completesSpareAfter(first), is(false));

    for (int frame = 0; frame < 10; frame++) {
        game.bowl(first);
        game.bowl(second);
    }

    assertThat(game.isGameOver(), is(true));
}


Now, we've removed unhelpful particulars, and added some helpful domain concepts and explicit assumptions. How does this Theory get run? There are two answers. We can use this theory for validation (Does every parameter combination we've considered in the past still work?), or exploration (Is there any new parameter combination that passes the assumptions, but fails the assertions?).

For validation, the Theory method must be declared on a subclass of TheoryContainer, which causes it to be run with a custom JUnit runner. By default, the subclass is also expected to declare as constants any valid parameter values that are currently believed to pass the theory:


@RunWith(Theories.class)
public class BowlingTheories extends TheoryContainer {
    public static Game STARTING_GAME = new Game();
    public static Bowl GUTTER_BALL = new Bowl(0);
    public static Bowl STRIKE_BALL = new Bowl(10);
    public static Bowl THREE = new Bowl(3);
    public static Bowl FOUR = new Bowl(4);

    @Theory
    public void shouldBeTenFramesWithTwoRollsInEach(Game game, Bowl first,
            Bowl second) {
        // as above ...
    }
}


The custom runner will try every possible combination of parameters from the set given, but it will not try anything outside that set.
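As a rough reconstruction (my simplification, not Popper's actual code), the combination step amounts to reflecting over the container's public static fields, keeping those assignable to each parameter type, and crossing the resulting lists:

```java
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;

// Simplified reconstruction of the runner's combination step: collect
// every public static field assignable to a parameter type, then try
// every combination across the parameter lists.
public class CombinationSketch {
    // Stand-ins for the theory container's data points (String plays the
    // role of Game, Integer the role of Bowl, for a self-contained demo).
    public static String STARTING = "starting-game";
    public static Integer THREE = 3;
    public static Integer FOUR = 4;

    static List<Object> candidatesFor(Class<?> type) {
        List<Object> result = new ArrayList<>();
        for (Field f : CombinationSketch.class.getFields()) {
            if (type.isAssignableFrom(f.getType())) {
                try {
                    result.add(f.get(null));
                } catch (IllegalAccessException e) {
                    throw new RuntimeException(e);
                }
            }
        }
        return result;
    }

    public static void main(String[] args) {
        int invocations = 0;
        // A two-parameter "theory" taking (String, Integer):
        for (Object game : candidatesFor(String.class))
            for (Object bowl : candidatesFor(Integer.class))
                invocations++; // the runner would invoke the theory here
        System.out.println(invocations); // 1 game x 2 bowls = 2 combinations
    }
}
```

Nothing outside the declared constants is ever tried, which is exactly why exploration is a separate activity.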

Once I'm satisfied that my code passes the Theory for all the parameters I can think of myself, I'm ready for some automated exploration, to search for parameters that I haven't thought of. If your code is free from legal entanglements, JUnit Factory, from Agitar, works very well for exploration. More on that in the next post. Right now, here are some fun things to try:


Thursday, April 26, 2007

 

assertThrownException

In my previous post, I described imposterization, a pattern for which I'm finding new uses. What can imposterization do for us? Let's look at testing for thrown exceptions.


In JUnit 3, this is the standard way of testing that a method throws an exception:


public void testIndexOutOfBounds() {
    try {
        new ArrayList().get(0);
        fail("Should have thrown exception");
    } catch (IndexOutOfBoundsException e) {
        assertEquals("Index: 0, Size: 0", e.getMessage());
    }
}


When creating JUnit 4, Kent and Erich recognized that this was a very commonly repeated pattern, and created an annotation-based way of testing for a thrown exception:


@Test(expected=IndexOutOfBoundsException.class)
public void indexOutOfBounds() {
    new ArrayList().get(0);
}


There are advantages to each approach. The explicit try/catch block looks more like the expected client code, and allows assertions about more than just the type of the thrown exception. However, as with any repetition, an experienced user starts to glaze over the details of the test, and can miss important mistakes, such as:


public void testIndexOutOfBounds() {
    try {
        new ArrayList().get(0);
        // OOPS! Now the test will pass even with no exception!
        // fail("Should have thrown exception");
    } catch (IndexOutOfBoundsException e) {
        assertEquals("Index: 0, Size: 0", e.getMessage());
    }
}


With imposterization, we can get the succinctness of the annotation-based approach while allowing arbitrary assertions about the thrown exception:


@Test public void indexOutOfBounds() {
    IndexOutOfBoundsException e = new IndexOutOfBoundsException("Index: 0, Size: 0");
    List emptyList = new ArrayList();
    assertThrownException(is(e)).when(emptyList).get(0);
}


This now looks even less like expected client code, but it almost reads like English, and provides a one-line statement of the behavior I expect. Here, I'm using imposterization as a technique for building a mini-language, just as jMock 2 does for setting expectations. This kind of imposterization I might call syntactic imposterization: the impostor only appears in the test code, in order to refer to methods of the interface. I could then use the term semantic imposterization for uses in which the impostor is used in the same context that an original implementor would be used--for example, a mock object, or a capturing decorator from test factoring.

The expression is(e) uses Matchers from the hamcrest project, giving me a lot of power to express properties of the exception that's thrown.

When developing a mini-language using syntactic imposterization, it can be tricky to make the interface comprehensible. It would be wonderful to be able to say:


assertThat(exceptionThrownBy(emptyList.get(0)), is(equalTo(e)));


This would use the assertThat statement made famous by Joe Walnes, which I love, but unfortunately, the expressions that I'd like to talk about often have void return values, making it impossible to compile the above statement. Therefore, the verb must always go to the end, resulting in statements that sometimes read a little more like German, or perhaps Yoda. There are three ways I've noticed so far to get around this problem without making my natural language brain hurt too much:


  1. Dick and Jane style: in this style, I create a mutable object that remembers the methods called on it, and repeat the noun to finish the thought:


    MethodCall mc = new MethodCall();
    mc.calls(emptyList).get(0);
    assertThat(mc, will(throwException(is(e))));


  2. Fragment style: this is like Dick and Jane, only the noun is implicitly stored in global state or the host object, making the statements stateful. This is how jMock captures expectations:


    call(emptyList).get(0);
    assertThrownException(is(e));


  3. Subordinate clause style: this style, already shown in the first code example, puts the result before the verb, allowing everything to take place in one statement, if perhaps a little backwards:


    assertThrownException(is(equalTo(e))).when(emptyList).get(0);
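For the curious, the mechanics behind the subordinate-clause form fit in a short sketch using a JDK dynamic proxy. This is my own illustration, not the real implementation: it checks only the thrown exception's class (where the real version accepts Hamcrest Matchers), and it collapses the .when(...) step into a second argument:

```java
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

// Sketch: the returned object is an impostor of the target's interfaces.
// It forwards the next method call to the real target and asserts that
// the call throws an exception of the expected class.
public class ThrownSketch {
    @SuppressWarnings("unchecked")
    static <T> T assertThrownException(Class<? extends Throwable> expected, T target) {
        return (T) Proxy.newProxyInstance(
                target.getClass().getClassLoader(),
                target.getClass().getInterfaces(),
                (proxy, method, methodArgs) -> {
                    try {
                        method.invoke(target, methodArgs);
                    } catch (InvocationTargetException e) {
                        if (expected.isInstance(e.getCause()))
                            return null; // expected exception: assertion passes
                        throw new AssertionError("Wrong exception: " + e.getCause());
                    }
                    throw new AssertionError("Expected " + expected.getName());
                });
    }

    public static void main(String[] args) {
        List<Object> emptyList = new ArrayList<>();
        // Reads much like the subordinate-clause style above:
        assertThrownException(IndexOutOfBoundsException.class, emptyList).get(0);
        System.out.println("ok");
    }
}
```

The proxy is a purely syntactic impostor: it never pretends to be a working List, it only exists so the next method call can be named in plain Java.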



Currently, I'm mixing several styles of syntactic imposterization in my tests: for asserting on thrown exceptions, as above; for identifying methods to be operated upon by my custom test framework:


FunctionPointer function = new FunctionPointer();
function.pointsAt(this).getStringReturnsA(null);
Object[] stubs = oldPopulator.stubsFor(function);


for identifying prerequisite tests:


onlyIfPassing(StubValueTableTest.class).cantAddSameValueTwice();


and for generating custom matchers based on observer method calls:


public static Matcher hasId(final String id) {
    ViewReferencePropertyMatcher matcher = new ViewReferencePropertyMatcher();
    matcher.mustMatch(id).getId();
    return matcher;
}


However, I plan to try to coalesce a couple of these uses. I'm currently in the overuse phase* of syntactic imposterization, and looking to bring my code back to its most readable state.

* Thanks to Martin Fowler for identifying this phase.

Wednesday, December 20, 2006

 

Interface imposterization

I learn all sorts of things from jMock. From jMock 1, I was infected with the idea of a fluent interface for creating mocks, and the power of Constraint objects. Lately, I've been playing with jMock 2, and it's had me thinking even more deeply about a pattern I'm for the moment calling interface imposterization.*

To explain imposterization, I'd like to make a distinction between two kinds of subtypes of a given type (for this discussion, let's consider implementations of a Java interface). An implementor of an interface satisfies all of the documented and implied contracts for that interface. An impostor of an interface is also an implementation, as far as the object runtime is concerned, but is free to violate the interface contracts, in order to learn or assert a property about the code that uses it.

If you've run across mock objects in unit testing, then you've already seen one instance of imposterization. Consider constructing a mock for a BankAccount interface, in order to test a woefully primitive transaction processor, using jMock 1 syntax:
interface BankAccount {
    void deposit(int amount);
    void withdraw(int amount);
    int getBalance();
}

@Test public void readDeposit() {
    BankAccount account = mock(BankAccount.class);
    account.expects(once()).method("deposit").with(eq(1000));
    TransactionReader reader = new TransactionReader(account);
    reader.readLine("deposit 1000");
}
Here, account is an impostor of the BankAccount interface. It's not really a proper implementor, because there are all kinds of contracts, perhaps documented, and perhaps implied, that our mock BankAccount breaks. For one thing, it probably throws an expectation exception when getBalance() is called, something that no proper implementation of BankAccount would do. But we're not trying to create a general-purpose implementation--the point of our imposterization is to learn something about the readLine method: does it interact in the right way with account?

jMock 2 also uses imposterization to set expectations on mock objects. Here's the same test in jMock 2 syntax:

@Test public void readDeposit() {
    BankAccount account = mock(BankAccount.class);
    expects(new InAnyOrder() {{
        one(account).deposit(1000);
    }});
    TransactionReader reader = new TransactionReader(account);
    reader.readLine("deposit 1000");
}
Here, the result of one(account) is a different impostor, which records a method call (.deposit(1000)) as a method call expected to be called later in the test. one(account) doesn't make any pretense of being a real BankAccount. From the point of view of the intent of object-oriented design, this is almost pure evil. But it is useful, because it reduces the "meta-noise" of the test. Rather than having to invent a new language to talk about a method invocation, we can use regular Java to just invoke the method itself.**
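To see how little machinery the recording trick needs, here's a toy version in plain Java using a JDK dynamic proxy (my own sketch, not jMock's implementation--for one thing, it stores the single expectation in static fields, where jMock ties expectations to the individual mock):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.Arrays;

// Toy expectation-recording mock in the spirit of one(account).deposit(1000):
// one impostor records which method (and arguments) must be called; the mock
// impostor then verifies each real invocation against that expectation.
public class MockSketch {
    interface BankAccount {
        void deposit(int amount);
        void withdraw(int amount);
        int getBalance();
    }

    static Method expectedMethod;
    static Object[] expectedArgs;
    static boolean satisfied;

    // Impostor #1: records the next call instead of performing it.
    static BankAccount one(BankAccount ignored) {
        return proxy((p, m, a) -> {
            expectedMethod = m;
            expectedArgs = a;
            return null;
        });
    }

    // Impostor #2: the mock handed to the code under test.
    static BankAccount mock() {
        return proxy((p, m, a) -> {
            if (m.equals(expectedMethod) && Arrays.equals(a, expectedArgs))
                satisfied = true;
            else
                throw new AssertionError("Unexpected call: " + m.getName());
            return null;
        });
    }

    static BankAccount proxy(InvocationHandler h) {
        return (BankAccount) Proxy.newProxyInstance(
                MockSketch.class.getClassLoader(),
                new Class<?>[] { BankAccount.class }, h);
    }

    public static void main(String[] args) {
        BankAccount account = mock();
        one(account).deposit(1000); // record the expectation
        account.deposit(1000);      // the code under test calls the mock
        assert satisfied;
        System.out.println("expectation met");
    }
}
```

Neither proxy is a proper BankAccount--both are impostors, and the recording one exists purely so the expected call can be written as ordinary Java.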

Impostors are not a new idea to me--my thesis work on test factoring involves creating "capturing decorators" to record invocations, and mocks to replay them--both kinds of impostors that we saw above. This is done automatically to create unit tests from arbitrary program executions, and run them after program changes, without the developer having to really know what's going on under the hood.

The new idea from jMock 2 is that plain-Java interfaces for creating impostors can be quite elegant and succinct. Knowing this, more and more problems begin to suggest imposterization to me. In my next post, I'll talk about using impostors to simplify testing exception-throwing methods, and later, we'll look at using impostors to automatically generate stubs for verifying Theories.

* The name imposterization comes from the interface Imposteriser in jMock 2. However, I won't claim that the jMock authors had exactly the same idea of what imposterization means, and I'll just have to agree to disagree about the -ize vs. -ise ending, thanks to Samuel Johnson.

** The idea of using an interface impostor to record mock object expectations has been around in EasyMock (and perhaps other frameworks) for a while. The benefits of this syntax are not without costs--I have always been uncomfortable with EasyMock's two-phase mock objects, which first capture and then replay, with a global method call in the middle to switch state. That smells to me of a missed abstraction. jMock 2 also, it turns out, uses two-phase mocks under the covers, but the interface at least encourages thinking about the recording phase differently, since developers call one(account).deposit(1000) instead of directly account.deposit(1000). But I still have the same concerns that something is being missed.
