20% more bugs? Or 20% less features?

Posted by Uncle Bob Wed, 07 Apr 2010 03:03:23 GMT

People often make the argument that time to market is more important than quality. I’m not sure just what they mean by that. Do they mean that it’s ok if 20% of the features don’t work, so long as they deliver quickly? If so, that’s just stupid. Why not develop 20% fewer features, and develop them well? It seems to me that choosing which 20% you are not going to develop, and then developing the other 80% to a high standard of quality, is a better management decision than telling the developers to work sloppily.

Software on the Cheap

Posted by Uncle Bob Mon, 01 Feb 2010 22:17:00 GMT

When it comes to software, you get what you pay for.

Have you ever stopped to wonder how much a line of code costs? It ought to be easy to figure out.

In the last 14 months, I have written about 20KSLOC in FitNesse. Of course that was part time. My real job is running Object Mentor, consulting, teaching, mentoring, writing, and a whole load of other things. Programming takes up perhaps 15% of my time.

On the other hand, most programmers have lots of other things to do. They go to meetings, and then they go to more meetings. When they are done with those meetings, they go to meetings. And then there are the meetings to go to. Oh yeah, and then there’s all the fiddling around with time accounting tools, and horrific source code control systems that perform like a salamander crawling through frozen mud.

So, maybe 15% isn’t such a bad ratio.

The loaded rate (Salary plus everything else) for a typical programmer is on the order of $200K. (I know that sounds like a lot, but you can look it up.) So $200K / (20KSLOC / 14mo * 12mo) = $11.66/SLOC.
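Spelled out as code, that arithmetic looks like this (a toy sketch using the post’s own estimates; the result rounds to $11.67 where the post truncates to $11.66):

```java
// Back-of-the-envelope cost per line of code, using the post's estimates.
public class CostPerLine {
    public static void main(String[] args) {
        double loadedRate = 200_000.0; // dollars per programmer-year
        double sloc = 20_000.0;        // lines written
        double months = 14.0;          // elapsed time

        double slocPerYear = sloc / months * 12.0;     // about 17,143 SLOC/year
        double costPerLine = loadedRate / slocPerYear;
        System.out.printf("$%.2f per line%n", costPerLine); // prints $11.67
    }
}
```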

Let’s look at one of those lines: StringBuffer nameBuffer = new StringBuffer(); Does that look like $11.66 to you? Would you pay that much for it? Well, don’t answer yet, because for each StringBuffer line you buy, you get import java.lang.StringBuffer; absolutely free!

Some factories pay their employees a “piece rate”. Would you accept $11.66 per line of code instead of a salary? Of course it couldn’t just be any old line of code. It’d have to be tested!

Hey, I bet all programmers would do TDD if we paid them a piece rate!

Down to business.

The point of that silly analysis was to demonstrate that software is expensive. Even the dumbest little app will likely require more than 1,000 lines of code; and that means it could cost $12K to write!

Imagine that you aren’t a programmer, but you have a clever idea for a new website that’ll make you a zillion dollars. You’ve storyboarded it all out. You’ve worked out all the details. Now all you need is some high-school kid to zip out the code for you. Right? Hell, you could pay him minimum wage! The little twerp would be happy to get it!

That tragic comedy is altogether too common. Too many people have borrowed money against their father’s retirement account to fund a terrible implementation of a good idea. Appalled at how much the reputable firms charge per hour ($100 or more) they go looking for a cheap solution.

“After all, this software is simple.” Or so the reasoning goes. “It’s not like we’re trying to send a rocket to the moon or anything. And, besides, those expensive guys were just out to cheat us. Software just isn’t that hard to write.” Uh huh.

So the poor schmuck finds some freshman in college, or maybe a bored housewife who read a book on HTML last year and created a cute website to show off her kittens. Have these programmers heard about TDD? Have they heard about Design Patterns? Principles? How about source code control?

Clearly they haven’t. They’re going to sling a bunch of horrific code together, without any tests, versioning, or control. The project will start well, with exciting initial results. But then it will slowly grind to a halt, while the cash continues out the door unabated.

In the end the website isn’t going to get built (and the poor schmuck’s father won’t be retiring as soon as he thought). It will be a disaster that will either be terminated, or will require double or triple the investment to get right.

The Bottom Line.

The bottom line is that, when it comes to software, you get what you pay for. If you want good software done well, then you are going to pay for it, and it will probably cost you $12/line or more. And, believe me, that’s the cheapest way to get your software done.

If you go out hunting for the cheap solution, then you’re going to end up paying more, and losing time. Software is one of those things that costs a fortune to write well, and double that to write poorly. If you go for cheap, you’re going to pay double; and maybe even triple.

Mocking Mocking and Testing Outcomes.

Posted by Uncle Bob Sat, 23 Jan 2010 17:32:00 GMT

The number of mocking frameworks has proliferated in recent years. This pleases me because it is a symptom that testing in general, and TDD in particular, have become prevalent enough to support a rich panoply of third-party products.

On the other hand, all frameworks carry a disease with them that I call The Mount Everest Syndrome: “I use it because it’s there.” The more mocking frameworks that appear, the more I see them enthusiastically used. Yet the prolific use of mocking frameworks is a rather serious design smell…

Lately I have seen several books and articles that present TDD through the lens of a mocking framework. If you were a newbie to TDD, these writings might give you the idea that TDD was defined by the use of mocking tools, rather than by the disciplines of TDD.

So when should you use a mocking framework? The answer is the same for any other framework. You use a framework only when that framework will give you a significant advantage.

Why so austere? Why shouldn’t you use frameworks “just because they are there”? Because frameworks always come with a cost. They must be learned by the author, and by all the readers. They become part of the configuration and have to be maintained. They must be tracked from version to version. But perhaps the most significant reason is that once you have a hammer, everything starts to look like a nail. The framework will put you into a constraining mindset that prevents you from seeing other, better solutions.

Consider, for example, this lovely bit of code that I’ve been reviewing recently. It uses the Moq framework to initialize a test double:

var vehicleMock = Mocks.Create<IClientVehicle>()
 .WithPersistentKey()
 .WithLogicalKey().WithLogicalName()
 .WithRentalSessionManager(rsm =>
    {
      var rs = Mocks.Create<IRentalSession>();
      rsm.Setup(o => o.GetCurrentSession()).Returns(rs.Object);
      rsm.Setup(o =>
       o.GetLogicalKeyOfSessionMember(It.IsAny<string>(),
        It.IsAny<int>())).Returns("Rental");
    })
 .AddVehicleMember<IRoadFactory>()
 .AddVehicleMember<IRoadItemFactory>(rf => rf.Setup(t => 
    t.CreateItems(It.IsAny<IRoad>())).Returns(pac))
 .AddVehicleMember<ILegacyCorporateRental>()
 .AddVehicleMember<IRentalStation>(
    m => m.Setup(k => k.Facility.FacilityID).Returns(0))
 .AddVehicleMember<IRoadManager>(m=>
    m.Setup(k=>k.GetRoundedBalanceDue(25,It.IsAny<IRoad>())).Returns(25));

Some of you might think I’m setting up a straw man. I’m not. I realize that bad code can be written in any language or framework, and that you can’t blame the language or framework for bad code.

The point I am making is that code like this was the way that all unit tests in this application were written. The team was new to TDD, and they got hold of a tool, and perhaps read a book or article, and decided that TDD was done by using a mocking tool. This team is not the first team I’ve seen who have fallen into this trap. In fact, I think that the TDD industry as a whole has fallen into this trap to one degree or another.

Now don’t get me wrong. I like mocking tools. I use them in Ruby, Java, and .Net. I think they provide a convenient way to make test-doubles in situations where more direct means are difficult.

For example, I recently wrote the following unit test in FitNesse using the Mockito framework.

  @Before
  public void setUp() {
    manager = mock(GSSManager.class);
    properties = new Properties();
  }

  @Test
  public void credentialsShouldBeNullIfNoServiceName() throws Exception {
    NegotiateAuthenticator authenticator = 
      new NegotiateAuthenticator(manager, properties);
    assertNull(authenticator.getServerCredentials());
    verify(manager, never()).createName(
      anyString(), (Oid) anyObject(), (Oid) anyObject());
  }

The first line in the setUp function is lovely. It’s kind of hard to get prettier than that. Anybody reading it understands that manager will be a mock of the GSSManager class.

It’s not too hard to understand the test itself. Apparently we are happy to have the manager be a dummy object, with the constraint that createName is never called by NegotiateAuthenticator. The anyString() and anyObject() calls are pretty self-explanatory.

On the other hand, I wish I could have said this:

assertTrue(manager.createNameWasNotCalled());

That statement does not require my poor readers to understand anything about Mockito. Of course it does require me to hand-roll a manager mock. Would that be hard? Let’s try.

First I need to create a dummy.

  private class MockGSSManager extends GSSManager {
    public Oid[] getMechs() {
      return new Oid[0];
    }

    public Oid[] getNamesForMech(Oid oid) throws GSSException {
      return new Oid[0];
    }

    public Oid[] getMechsForName(Oid oid) {
      return new Oid[0];
    }

    public GSSName createName(String s, Oid oid) throws GSSException {
      return null;
    }

    public GSSName createName(byte[] bytes, Oid oid) throws GSSException {
      return null;
    }

    public GSSName createName(String s, Oid oid, Oid oid1) throws GSSException {
      return null;
    }

    public GSSName createName(byte[] bytes, Oid oid, Oid oid1) throws GSSException {
      return null;
    }

    public GSSCredential createCredential(int i) throws GSSException {
      return null;
    }

    public GSSCredential createCredential(GSSName gssName, int i, Oid oid, int i1) throws GSSException {
      return null;
    }

    public GSSCredential createCredential(GSSName gssName, int i, Oid[] oids, int i1) throws GSSException {
      return null;
    }

    public GSSContext createContext(GSSName gssName, Oid oid, GSSCredential gssCredential, int i) throws GSSException {
      return null;
    }

    public GSSContext createContext(GSSCredential gssCredential) throws GSSException {
      return null;
    }

    public GSSContext createContext(byte[] bytes) throws GSSException {
      return null;
    }

    public void addProviderAtFront(Provider provider, Oid oid) throws GSSException {
    }

    public void addProviderAtEnd(Provider provider, Oid oid) throws GSSException {
    }
  }

“Oh, ick!” you say. Yes, I agree it’s a lot of code. On the other hand, it took me just a single keystroke in my IDE to generate all those dummy methods. (In IntelliJ it was simply Command-I to implement all unimplemented methods.) So it wasn’t particularly hard. And, of course, I can put this code somewhere where nobody has to look at it unless they want to. It has the advantage that anybody who knows Java can understand it, and can look right at the methods to see what they return. No “special” knowledge of the mocking framework is necessary.

Next, let’s make a test double that does precisely what this test needs.

  private class GSSManagerSpy extends MockGSSManager {
    public boolean createNameWasCalled;

    public GSSName createName(String s, Oid oid) throws GSSException {
      createNameWasCalled = true;
      return null;
    }
  }

Well, that just wasn’t that hard. It’s really easy to understand too. Now, let’s rewrite the test.

  @Test
  public void credentialsShouldBeNullIfNoServiceNameWithHandRolledMocks() throws Exception {
    NegotiateAuthenticator authenticator = new NegotiateAuthenticator(managerSpy, properties);
    assertNull(authenticator.getServerCredentials());
    assertFalse(managerSpy.createNameWasCalled);
  }

Well, that test is just a load easier to read than verify(manager, never()).createName(anyString(), (Oid) anyObject(), (Oid) anyObject());.

“But Uncle Bob!” I hear you say. “That scenario is too simple. What if there were lots of dependencies and things…” I’m glad you asked that question, because the very next test is just such a situation.


  @Test
  public void credentialsShouldBeNonNullIfServiceNamePresent() throws Exception {
    properties.setProperty("NegotiateAuthenticator.serviceName", "service");
    properties.setProperty("NegotiateAuthenticator.serviceNameType", "1.1");
    properties.setProperty("NegotiateAuthenticator.mechanism", "1.2");
    GSSName gssName = mock(GSSName.class);
    GSSCredential gssCredential = mock(GSSCredential.class);
    when(manager.createName(anyString(), (Oid) anyObject(), (Oid) anyObject())).thenReturn(gssName);
    when(manager.createCredential((GSSName) anyObject(), anyInt(), (Oid) anyObject(), anyInt())).thenReturn(gssCredential);
    NegotiateAuthenticator authenticator = new NegotiateAuthenticator(manager, properties);
    Oid serviceNameType = authenticator.getServiceNameType();
    Oid mechanism = authenticator.getMechanism();
    verify(manager).createName("service", serviceNameType, mechanism);
    assertEquals("1.1", serviceNameType.toString());
    assertEquals("1.2", mechanism.toString());
    verify(manager).createCredential(gssName, GSSCredential.INDEFINITE_LIFETIME, mechanism, GSSCredential.ACCEPT_ONLY);
    assertEquals(gssCredential, authenticator.getServerCredentials());
  }

Now I’ve got three test doubles that interact with each other; and I am verifying that the code under test is manipulating them all correctly. I could create hand-rolled test doubles for this; but the wiring between them would be scattered in the various test-double derivatives. I’d also have to write a significant number of accessors to get the values of the arguments to createName and createCredential. In short, the hand-rolled test-double code would be harder to understand than the Mockito code. The Mockito code puts the whole story in one simple test method rather than scattering it hither and yon in a plethora of little derivatives.

What’s more, since it’s clear that I should use a mocking framework for this test, I think I should be consistent and use it for all the tests in this file. So the hand-rolled MockGSSManager and GSSManagerSpy are history.

“But Uncle Bob, aren’t we always going to have dependencies like that? So aren’t we always going to have to use a mocking framework?”

That, my dear reader, is the real point of this blog. The answer to that salient question is a profound “No!”

Why did I have to use Mockito for these tests? Because the number of objects in play was large. The module under test (NegotiateAuthenticator) used GSSName, GSSCredential, and GSSManager. In other words the coupling between the module under test and the test itself was high. (I see lightbulbs above some of your heads.) That’s right, boys and girls, we don’t want coupling to be high!

It is the high coupling between modules and tests that creates the need for a mocking framework. This high coupling is also the cause of the dreaded “Fragile Test” problem. How many tests break when you change a module? If the number is high, then the coupling between your modules and tests is high. Therefore, I conclude that those systems that make prolific use of mocking frameworks are likely to suffer from fragile tests.

Of the 277 unit test files in FitNesse, only 11 use Mockito. The reason for this small number is twofold. First, we test outcomes more often than we test mechanisms. That means we test how a small group of classes behaves, rather than testing the dance of method calls between those classes. The second reason is that our test doubles have no middle class: they are either very simple stubs and spies, or they are moderately complex fakes.

Testing outcomes is a traditional decoupling technique. The test doesn’t care how the end result is calculated, so long as the end result is correct. There may be a dance of several method calls between a few different objects; but the test is oblivious since it only checks the answer. Therefore the tests are not strongly coupled to the solution and are not fragile.
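As a minimal illustration (the class here is hypothetical, not from FitNesse), an outcome-based test checks only the answer, so the internals are free to change:

```java
// Hypothetical example: the test asserts the outcome (the total), not
// which methods were called to compute it. Refactoring the internals of
// total() cannot break this test as long as the answer stays correct.
class PriceCalculator {
    int total(int unitPrice, int quantity, int discountPercent) {
        int gross = unitPrice * quantity;
        return gross - gross * discountPercent / 100;
    }
}

class PriceCalculatorTest {
    public static void main(String[] args) {
        PriceCalculator calc = new PriceCalculator();
        // Outcome check: 3 units at $100 with a 10% discount is $270.
        if (calc.total(100, 3, 10) != 270)
            throw new AssertionError("wrong total");
        System.out.println("ok");
    }
}
```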

Keeping middle-class test doubles (i.e. Mocks) to a minimum is another way of decoupling. Mocks, by their very nature, are coupled to mechanisms instead of outcomes. Mocks, or the setup code that builds them, have deep knowledge of the inner workings of several different classes. That knowledge is the very definition of high-coupling.

What is a “moderately complex fake” and why does it help to reduce coupling? One example within FitNesse is MockSocket. (The name of this class is historical. Nowadays it should be called FakeSocket.) This class derives from Socket and implements all its methods either to remember what was sent to the socket, or to allow a user to read some canned data. This is a “fake” because it simulates the behavior of a socket. It is not a mock because it has no coupling to any mechanisms. You don’t ask it whether it succeeded or failed, you ask it to send or receive a string. This allows our unit tests to test outcomes rather than mechanisms.
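A fake along those lines might be sketched like this (a simplified stand-in; the real MockSocket in FitNesse extends java.net.Socket and does more):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Simplified sketch of a fake socket: it remembers everything written to
// it and serves up canned data for reads. Tests ask it for outcomes
// ("what was sent?"), never for mechanisms ("which method was called?").
class FakeSocket {
    private final ByteArrayOutputStream sent = new ByteArrayOutputStream();
    private final InputStream canned;

    FakeSocket(String cannedInput) {
        this.canned = new ByteArrayInputStream(cannedInput.getBytes());
    }

    OutputStream getOutputStream() { return sent; }
    InputStream getInputStream() { return canned; }

    // The outcome a test checks: the text that went over the "wire".
    String getSentText() { return sent.toString(); }
}
```

A test hands this to the code under test in place of a real socket, exercises it, and then asserts on getSentText().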

The moral of this story is that the point at which you start to really need a mocking framework is the very point at which the coupling between your tests and code is getting too high. There are times when you can’t avoid this coupling, and those are the times when mocking frameworks really pay off. However, you should strive to keep the coupling between your code and tests low enough that you don’t need to use the mocking framework very often.

You do this by testing outcomes instead of mechanisms.

Dependency Injection Inversion

Posted by Uncle Bob Sun, 17 Jan 2010 18:42:00 GMT

Dependency Injection is all the rage. There are several frameworks that will help you inject dependencies into your system. Some use XML (God help us) to specify those dependencies. Others use simple statements in code. In either case, the goal of these frameworks is to help you create instances without having to resort to new or Factories.

I think these frameworks are great tools. But I also think you should carefully restrict how and where you use them.

Consider, for example, this simple example using Google’s Guice framework.


 public class BillingApplication {
   public static void main(String[] args) {
    Injector injector = Guice.createInjector(new BillingModule());
    BillingService billingService = injector.getInstance(BillingService.class);
    billingService.processCharge(2034, "Bob");
  }
 }

My goal is to create an instance of BillingService. To do this, I first get an Injector from Guice. Then I use the injector to get an instance of my BillingService class. What’s so great about this? Well, take a look at the constructor of the BillingService class.


class BillingService {
  private CreditCardProcessor processor;
  private TransactionLog transactionLog;

  @Inject
  BillingService(CreditCardProcessor processor, TransactionLog transactionLog) {
    this.processor = processor;
    this.transactionLog = transactionLog;
  }

  public void processCharge(int amount, String id) {
    boolean approval = processor.approve(amount, id);
    transactionLog.log(
      String.format("Transaction by %s for %d %s",
      id, amount, approvalCode(approval)));
  }

  private String approvalCode(boolean approval) {
    return approval?"approved":"denied";
  }
}

Oh ho! The BillingService constructor requires two arguments! A CreditCardProcessor and a TransactionLog. How was the main program able to create an instance of BillingService without those two arguments? That’s the magic of Guice (and of all Dependency Injection frameworks). Guice knows that the BillingService needs those two arguments, and it knows how to create them. Did you see that funky @Inject attribute above the constructor? That’s how it got connected into Guice.

And here’s the magic module that tells Guice how to create the arguments for the BillingService


public class BillingModule extends AbstractModule {
  protected void configure() {
    bind(TransactionLog.class).to(DatabaseTransactionLog.class);
    bind(CreditCardProcessor.class).to(MyCreditCardProcessor.class);
  }
}

Clever these Google-folk! The two bind functions tell Guice that whenever we need an instance of a TransactionLog it should use an instance of DatabaseTransactionLog. Whenever it needs a CreditCardProcessor it should use an instance of MyCreditCardProcessor.

Isn’t that cool! Now you don’t have to build factories. You don’t have to use new. You just tell Guice how to map interfaces to implementations, and which constructors to inject those implementations into, and then call injector.getInstance(SomeClass.class); and voila! You have your instance automatically constructed for you. Cool.

Well, yes it’s cool. On the other hand, consider this code:


public class BillingApplicationNoGuice {
  public static void main(String[] args) {
    CreditCardProcessor cp = new MyCreditCardProcessor();
    TransactionLog tl = new DatabaseTransactionLog();
    BillingService bs = new BillingService(cp, tl);
    bs.processCharge(9000, "Bob");
  }
}

Why is this worse? It seems to me it’s better.

But Uncle Bob, you’ve violated DIP by creating concrete instances!

True, but you have to mention concrete instances somewhere. main seems like a perfectly good place for that. Indeed, it seems better than hiding the concrete references in BillingModule.

I don’t want a bunch of secret modules with bind calls scattered all around my code. I don’t want to have to hunt for the particular bind call for the Zapple interface when I’m looking at some module. I want to know where all the instances are created.

But Uncle Bob, You’d know where they are because this is a Guice application.

I don’t want to write a Guice application. Guice is a framework, and I don’t want framework code smeared all through my application. I want to keep frameworks nicely decoupled and at arm’s length from the main body of my code. I don’t want to have @Inject attributes everywhere and bind calls hidden under rocks.

But Uncle Bob, What if I want to get an instance of BillingService from deep in the bowels of my application? With Guice I can just say injector.getInstance(BillingService.class);.

True, but I don’t want to have getInstance calls scattered all through my code. I don’t want Guice to be poured all over my app. I want my app to be clean, not soaked in Guice.

But Uncle Bob, That means I have to use new or factories, or pass globals around.

You think the injector is not a global? You think BillingService.class is not a global? There will always be globals to deal with. You can’t write systems without them. You just need to manage them nicely.

And, no, I don’t have to use new everywhere, and I don’t need factories. I can do something as simple as:


public class BillingApplicationNoGuice {
  public static void main(String[] args) {
    CreditCardProcessor cp = new MyCreditCardProcessor();
    TransactionLog tl = new DatabaseTransactionLog();
    BillingService.instance = new BillingService(cp, tl);

    // Deep in the bowels of my system.
    BillingService.instance.processCharge(9000, "Bob");
  }
}

But Uncle Bob, what if you want to create many instances of BillingService rather than just that one singleton?

Then I’d use a factory, like so:


public class BillingApplication {
   public static void main(String[] args) {
    Injector injector = Guice.createInjector(new BillingModule());
    BillingService.factory = new BillingServiceFactory(injector);

    // Deep in the bowels of my code.
    BillingService billingService = BillingService.factory.make();
    billingService.processCharge(2034, "Bob");
  }
}

But Uncle Bob, I thought the whole idea was to avoid factories!

Hardly. After all, Guice is just a big factory. But you didn’t let me finish. Did you notice that I passed the Guice injector into the factory? Here’s the factory implementation.


public class BillingServiceFactory {
  private Injector injector;

  public BillingServiceFactory(Injector injector) {
    this.injector = injector;
  }

  public BillingService make() {
    return injector.getInstance(BillingService.class);
  }
}

I like this because now all the Guice is in one well understood place. I don’t have Guice all over my application. Rather, I’ve got factories that contain the Guice. Guicey factories that keep the Guice from being smeared all through my application.

What’s more, if I wanted to replace Guice with some other DI framework, I know exactly what classes would need to change, and how to change them. So I’ve kept Guice uncoupled from my application.

Indeed, using this form allows me to defer using Guice until I think it’s necessary. I can just build the factories the good old GOF way until the need to externalize dependencies emerges.

But Uncle Bob, don’t you think Dependency Injection is a good thing?

Of course I do. Dependency Injection is just a special case of Dependency Inversion. I think Dependency Inversion is so important that I want to invert the dependencies on Guice! I don’t want lots of concrete Guice dependencies scattered through my code.

BTW, did you notice that I was using Dependency Injection even when I wasn’t using Guice at all? This is nice and simple manual dependency injection. Here’s that code again in case you don’t want to look back:



public class BillingApplicationNoGuice {
  public static void main(String[] args) {
    CreditCardProcessor cp = new MyCreditCardProcessor();
    TransactionLog tl = new DatabaseTransactionLog();
    BillingService bs = new BillingService(cp, tl);
    bs.processCharge(9000, "Bob");
  }
}

Dependency Injection doesn’t require a framework; it just requires that you invert your dependencies and then construct and pass your arguments to deeper layers. Consider, for example, that the following test works just fine in all the cases above. It does not rely on Guice; it only relies on the fact that dependencies were inverted and can be injected into BillingService.


public class BillingServiceTest {
  private LogSpy log;

  @Before
  public void setup() {
    log = new LogSpy();
  }

  @Test
  public void approval() throws Exception {
    BillingService bs = new BillingService(new Approver(), log);
    bs.processCharge(9000, "Bob");
    assertEquals("Transaction by Bob for 9000 approved", log.getLogged());
  }

  @Test
  public void denial() throws Exception {
    BillingService bs = new BillingService(new Denier(), log);
    bs.processCharge(9000, "Bob");
    assertEquals("Transaction by Bob for 9000 denied", log.getLogged());    
  }
}

class Approver implements CreditCardProcessor {
  public boolean approve(int amount, String id) {
    return true;
  }
}

class Denier implements CreditCardProcessor {
  public boolean approve(int amount, String id) {
    return false;
  }
}

class LogSpy implements TransactionLog {
  private String logged;

  public void log(String s) {
    logged = s;
  }

  public String getLogged() {
    return logged;
  }
}

Also notice that I rolled my own Test Doubles (we used to call them mocks, but we’re not allowed to anymore.) It would have been tragic to use a mocking framework for such a simple set of tests.

Most of the time, the best kind of Dependency Injection to use is the manual kind. Externalized dependency injection of the kind that Guice provides is appropriate for those classes that you know will be extension points for your system.

But for classes that aren’t obvious extension points, you will simply know the concrete type you need, and can create it at a relatively high level and inject it down as an interface to the lower levels. If, one day, you find that you need to externalize that dependency, it’ll be easy because you’ve already inverted and injected it.

Archeological Dig

Posted by Uncle Bob Wed, 11 Nov 2009 16:39:00 GMT

I was going through some old files today, and I stumbled upon some acetate slides from 1995. They were entitled: “Managing OO Projects”. Wow! What a difference fifteen years makes! (Or does it?) ...

In 1995-99 I was frequently asked to speak to managers about what a transition to OO (usually from C to C++) would do for (or to) them. I would spend a half day to a day going over the issues, costs, and benefits.

One part of that talk (usually about 90 minutes) was a discussion of software process. It is that process part of the talk which the acetate slides I found described.

1995 was during the ascendancy of Waterfall. Waterfall thinking was king. RUP had not yet been conceived as an acronym. And though Booch was beating the drum for incrementalism, most people (even many within Rational) were thinking in terms of six- to eighteen-month waterfalls.

So, here are the slides that I uncovered deep within an old filing cabinet. I scanned them in. They were produced on a Macintosh using the old “More” program. (Where is that program now? It was so good.)

Go ahead and read them now. Then come back here and continue…

What struck me about those slides was the consistency of the message with today. It was all about iterative development. Small iterations (though I never deigned to define the length in the slides, I frequently told people 2 weeks), measured results, etc. etc. Any Agile evangelist could use those slides today. He or she would have to dance quickly around a few statements, but overall the message has not shifted very much.

What’s even more interesting is the coupling between the process, and OO. The slides talk a lot about dependency management and dependency structure. There are hints of the SOLID principles contained in those slides. (Indeed several of the principles had already been identified by that time.) This coupling between process and software structure was a harbinger of the current craftsmanship/clean-code movement.

Of course the one glaring omission from these slides is TDD. That makes me think that TDD was the true catalyst of change, and the bridge that conveyed our industry from then to now.

Anyway, I guess the more things change, the more they stay the same.

Comments please!

Excuse me sir, What Planet is this?

Posted by Uncle Bob Thu, 05 Nov 2009 16:35:00 GMT

Update 12 hours later.

I’m not very proud of this blog (or as one commenter correctly called it “blart”). It is derisive, sneering, and petulant. It is unprofessional. I guess I was having a bad morning. I slipped. I didn’t check with my green band.

So I apologize to the audience at large, and to Cashto. You should expect better from me.

I thought about pulling the blog down; but I think I’ll leave it up here as an example of how not to write a blog.

Some folks on twitter have been asking me to respond to this blough (don’t bother to read it right now, I’ll give you the capsule summary below. Read it later if you must). It’s a typical screed complete with all the usual complaints, pejoratives, and illogic. Generally I don’t respond to blarts like this because I don’t imagine that any readers take them very seriously. But it appears that this blelch has made the twitter rounds and that I’m going to have to say something.

Here are the writer’s main points:

  • He likens unit tests to training wheels and says you can’t use them to win the Tour de France.
    • I think winning the Tour de France has much more to do with self-discipline than he imagines it does. I mean it’s not really just as simple as: “Get on a bike and ride like hell!”
  • He says testing is popular amongst college students
    • I’d like to see his data!
  • He goes on to say: (cue John Cleese): “(ahem) unit tests lose their effectiveness around level four of the Dreyfus model of skill acquisition”.
    • (blank stunned stare). Is this a joke? All false erudition aside, WTF is he talking about?
  • He says that unit tests don’t give us confidence in refactoring because they over-specify behavior and are too fine-grained.
    • He apparently prefers hyphenations to green bars.
  • He says they mostly follow the “happy path” and therefore don’t find bugs.
    • Maybe when he writes them! This is a big clue that the author can’t spell TDD.
  • He complains about Jester
    • without getting the joke!
  • He says unit tests “encourage some pretty questionable practices.” He flings a few design principles around and says that unit testing doesn’t help with them.
    • As the author, editor, and/or advocate of many of those principles, I have a slightly different view.
  • He says that “many are starting to discover that functional programming teaches far better design principles than unit testing ever will”
    • Oh no! Not the old “My language teaches design.” claim. We’ve heard it all before. They said it about C++, Java, COM (?!), etc… The lesson of the ‘90s? Languages don’t teach design. You can make a mess in any language.
  • He says: “tests can have negative ROI. Not only do they cost a lot to write, they’re fragile, they’re always broken, they get in the way of your refactoring, they’re always having you chase down bogus failures, and the only way to get anything done is to ignore them.”
    • In one of my favorite episodes of Dr. Who, Tom Baker exits the Tardis, walks up to someone on the street and says: “Excuse me sir, what planet is this?”
  • He says: “What I’m saying is that it’s okay if you don’t write unit tests for everything. You probably have already suspected this for a long time, but now you know. I don’t want you to feel guilty about it any more.”
    • Translation: “I don’t want to feel guilty about it anymore so I’m going to try to convince you…” I sincerely doubt this author has your best interests at heart.
  • He says: “Debugging is easy, at least in comparison to writing all those tedious tests.”
    • Refer back to the Dr. Who quote.



To quote Barack Obama: “Enough!”

Has this guy ever done TDD? I rather doubt it. Or if he did, he was so inept at it that his tests were “fragile”, “always broken”, and “in the way of refactoring”. I think he should give it another try and this time spend a bit more time on test design.

Perhaps he’s one of those guys who thought that unit tests were best written after the code. Certainly his list of complaints makes a lot of sense in that light. Hint: If you want to fail at unit testing, write them last.

The bottom line is that the guy probably had a bad experience writing unit tests. He’s tired of writing them and wants to write fewer of them. He’d rather debug. He thinks he can refactor without tests (which is definitively false). He thinks he can go faster by writing fewer tests. Fine, that’s his choice. And he’s found a rationalization to support his desires. Great.

I predict that his results will not compare well with those who adopt the discipline of TDD. I predict that after a few years he’ll either change his mind, or go into management.

Oh, and to the author: Gesundheit!

Manual Mocking: Resisting the Invasion of Dots and Parentheses 232

Posted by Uncle Bob Wed, 28 Oct 2009 23:12:16 GMT

The twittersphere has been all abuzz today because of something I tweeted early this morning (follow @unclebobmartin). In my tweet I said that I hand-roll most of my own mock objects in Java, rather than using a mocking framework like mockito.

The replies were numerous and vociferous. Dave Astels pointedly stated that hand-rolling mocks is so 2001!

So why do I roll my own mocks?

Consider the following two tests:
public class SelectorTest {
  private List<Object> list;

  @Before
  public void setup() {
    list = new ArrayList<Object>();
    list.add(new Object());
  }

  @Test
  public void falseMatcherShouldSelectNoElements_mockist() {
    Matcher<Object> falseMatcher = mock(Matcher.class);
    Selector<Object> selector = new Selector<Object>(falseMatcher);
    when(falseMatcher.match(anyObject())).thenReturn(false);
    List<Object> selection = selector.select(list);
    assertThat(selection.size(), equalTo(0));
  }

  @Test
  public void falseMatcherShouldSelectNoElements_classic() {
    Matcher<Object> falseMatcher = new FalseMatcher();
    Selector<Object> selector = new Selector<Object>(falseMatcher);
    List<Object> selection = selector.select(list);
    assertThat(selection.size(), equalTo(0));
  }

  private static class FalseMatcher implements Matcher<Object> {
    public boolean match(Object element) {
      return false;
    }
  }
}

The first test shows the really cool power of mockito (which is my current favorite in the menagerie of java mocking frameworks). Just in case you can’t parse the syntax, let me describe it for you:

  • falseMatcher is assigned the return value of the “mock” function. This is a very cool function that takes the argument class and builds a new stubbed object that derives from it. In mockito, the argument can be a class or an interface. Cool!
  • Now don’t get all panicky about the strange parenthetic syntax of the ‘when’ statement. The ‘when’ statement simply tells the mock what to do when a method is called on it. In this case it instructs the falseMatcher to return false when the ‘match’ function is called with any object at all.

The second test needs no explanation.

...

And that’s kind of the point. Why would I include a bizarre, dot-ridden, parentheses-laden syntax into my tests, when I can just as easily hand-roll the stub in pure and simple java? How hard was it to hand-roll that stub? Frankly, it took a lot less time and effort to hand-roll it than it took to write the (when(myobj.mymethod(anyx())).)()).))); statement.

OK, I’m poking a little fun here. But it’s true. My IDE (IntelliJ) generated the stub for me. I simply started with:

Matcher<Object> falseMatcher = new Matcher<Object>() {};

IntelliJ complained that some methods weren’t implemented and offered to implement them for me. I told it to go ahead. It wrote the ‘match’ method exactly as you see it. Then I chose “Convert Anonymous to Inner…” from the refactoring menu and named the new class FalseMatcher. Voila! No muss, no fuss, no parenthetic maze of dots.

Now look, I’m not saying you shouldn’t use mockito, or any of these other mocking tools. I use them myself when I must. Here, for example, is a test I wrote in FitNesse. I was forced to use a mocking framework because I did not have the source code of the classes I was mocking.
  @Before
  public void setUp() {
    manager = mock(GSSManager.class);
    properties = new Properties();
  }

  @Test
  public void credentialsShouldBeNonNullIfServiceNamePresent() throws Exception {
    properties.setProperty("NegotiateAuthenticator.serviceName", "service");
    properties.setProperty("NegotiateAuthenticator.serviceNameType", "1.1");
    properties.setProperty("NegotiateAuthenticator.mechanism", "1.2");
    GSSName gssName = mock(GSSName.class);
    GSSCredential gssCredential = mock(GSSCredential.class);
    when(manager.createName(anyString(), (Oid) anyObject(), (Oid) anyObject())).thenReturn(gssName);
    when(manager.createCredential((GSSName) anyObject(), anyInt(), (Oid) anyObject(), anyInt())).thenReturn(gssCredential);
    NegotiateAuthenticator authenticator = new NegotiateAuthenticator(manager, properties);
    Oid serviceNameType = authenticator.getServiceNameType();
    Oid mechanism = authenticator.getMechanism();
    verify(manager).createName("service", serviceNameType, mechanism);
    assertEquals("1.1", serviceNameType.toString());
    assertEquals("1.2", mechanism.toString());
    verify(manager).createCredential(gssName, GSSCredential.INDEFINITE_LIFETIME, mechanism, GSSCredential.ACCEPT_ONLY);
    assertEquals(gssCredential, authenticator.getServerCredentials());
  }

If I’d had the source code of the GSS classes, I could have created some very simple stubs and spies that would have allowed me to make these tests a lot cleaner than they currently appear. Indeed, I might have been able to test the true behavior of the classes rather than simply testing that I was calling them appropriately…
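To give a feel for what such a hand-rolled spy might look like, here is a sketch. (The CredentialFactory interface is hypothetical, a stand-in for the real GSS classes; the point is only that a spy records what it was asked to do, so the test can verify the interaction afterwards without any framework syntax.)

```java
// Hypothetical interface standing in for something like GSSManager.
interface CredentialFactory {
    String createName(String service);
}

// A hand-rolled spy: returns a canned value and records its calls.
class SpyCredentialFactory implements CredentialFactory {
    String lastService;       // argument of the most recent call
    int createNameCalls = 0;  // how many times createName was invoked

    public String createName(String service) {
        createNameCalls++;
        lastService = service;
        return "name:" + service; // canned return value
    }
}

public class SpySketch {
    public static void main(String[] args) {
        SpyCredentialFactory spy = new SpyCredentialFactory();
        // The code under test would receive the spy via its constructor;
        // here we just exercise it directly.
        String name = spy.createName("service");
        if (spy.createNameCalls != 1) throw new AssertionError();
        if (!"service".equals(spy.lastService)) throw new AssertionError();
        if (!"name:service".equals(name)) throw new AssertionError();
        System.out.println("spy recorded: " + spy.lastService);
    }
}
```

No dots, no parentheses, and the recorded fields read like plain prose in the assertions.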

Mockism

That last bit is pretty important. Some time ago Martin Fowler wrote a blog about the Mockist and Classical styles of TDD. In short, Mockists don’t test the behavior of the system so much as they test that their classes “dance” well with other classes. That is, they mock/stub out all the other classes that the class under test uses, and then make sure that all the right functions are called in all the right orders with all the right arguments, etc. There is value to doing this in many cases. However you can get pretty badly carried away with the approach.

The classical approach is to test for desired behavior, and trust that if the test passes, then the class being tested must be dancing well with its partners.

Personally, I don’t belong to either camp. I sometimes test the choreography, and I sometimes test the behavior. I test the choreography when I am trying to isolate one part of the system from another. I test for the behavior when such isolation is not important to me.

The point of all this is that I have observed that a heavy dependence on mocking frameworks tends to tempt you towards testing the dance when you should be testing behavior. Tools can drive the way we think. So remember, you dominate the tool; don’t let the tool dominate you!

But aren’t hand-rolled mocks fragile?

Yes, they can be. If you are mocking a class or interface that is very volatile (i.e. you are adding new methods, or modifying method signatures a lot) then you’ll have to go back and maintain all your hand-rolled mocks every time you make such a change. On the other hand, if you use a mocking framework, the framework will take care of that for you unless one of the methods you are specifically testing is modified.

But here’s the thing. Interfaces should not usually be volatile. They should not continue to grow and grow, and the methods should not change much. OK, I realize that’s wishful thinking. But, yes, I wish for the kind of design in which interfaces are the least volatile source files that you have. That’s kind of the point of interfaces after all… You create interfaces so that you can separate volatile implementations from non-volatile clients. (Or at least that’s one reason.)

So if you are tempted to use a mocking framework because you don’t want to maintain your volatile interfaces, perhaps you should be asking yourself the more pertinent question about why your interfaces are so volatile.

Still, if you’ve got volatile interfaces, and there’s just no way around it, then a mocking framework may be the right choice for you.

So here’s the bottom line.

  • It’s easy to roll your own stubs and mocks. Your IDE will help you and they’ll be easier and more natural to read than the dots and parentheses that the mocking frameworks impose upon you.
  • Mocking frameworks drive you towards testing choreography rather than behavior. This can be useful, but it’s not always appropriate. And besides, even when you are testing choreography, the hand-rolled stubs and mocks are probably easier to write and read.
  • There are special cases where mocking tools are invaluable, specifically when you have to test choreography with objects that you have no source for or when your design has left you with a plethora of volatile interfaces.

Am I telling you to avoid using mocking frameworks? No, not at all. I’m just telling you that you should drive tools, tools should not drive you.

If you have a situation where a mocking tool is the right choice, by all means use it. But don’t use it because you think it’s “agile”, or because you think it’s “right” or because you somehow think you are supposed to. And remember, hand-rolling often results in simpler tests without the litter of dots and parentheses!

We must ship now and deal with consequences 113

Posted by Uncle Bob Thu, 15 Oct 2009 11:17:00 GMT

Martin Fowler has written a good blog about technical debt. He suggests that there are two axes of debt: deliberate and prudent. This creates four quadrants: deliberate-prudent, deliberate-imprudent, inadvertent-prudent, and inadvertent-imprudent. I agree with just about everything in his blog except for one particular caption…

Inadvertent-Imprudent Debt.

There is more of this debt than any other kind. It is all too common that software developers create a mess and don’t know they are doing it. They have not developed a nose that identifies code smells. They don’t know design principles, or design patterns. They think that the reek of rotten code is normal, and don’t even identify it as smelling bad. They think that their slow pace through the thick morass of tangled code is the norm, and have no idea they could move faster. These people destroy projects and bring whole companies to their knees. Their name is Doom.

Deliberate-Imprudent Debt.

There is a meme in our industry (call it the DI meme) that tells young software developers that rushing to the finish line at all costs is the right thing to do. This is far worse than the ignorance of the first group because these folks willfully create debt without counting the cost. Worse, this meme is contagious. People who are infected with it tend to infect others, causing an epidemic of deliberately imprudent debtors (sound familiar?). The end result, as we all now know, is economic catastrophe, inflation (of estimates) and crushing interest (maintenance) payments. They have become death, the destroyer of worlds.

Inadvertent-Prudent Debt.

This is something of an oxymoron. Ironically, it is also the best of all possible states. The fact is that no matter how careful we are, there is always a better solution that we will stumble upon later. How many times have you finished a system only to realize that if you wrote it again, you’d do it very differently, and much better?

The result is that we are always creating a debt, because our hindsight will always show us a better option after it is too late. So even the best outcome still leaves us owing. (Mother Earth will eventually collect that debt!)

Deliberate-Prudent Debt.

This is the quadrant that I have the biggest problem with. And it is this quadrant in which Martin uses the caption I don’t like. The Caption is: “We must ship now and deal with consequences.”

Does this happen? Yes. Should it happen? Rarely, yes. But it damned well better not happen very often, and it damned well better not happen out of some misplaced urge to get done without counting the cost.

The problem I have with this quadrant (DP) is that people who are really in quadrant DI think they are in DP, and use words such as those that appear in the caption as an excuse to rack up a huge imprudent debt.

The real issue is the definition of the word: Imprudent.

So let me ask you a question. How prudent is debt? There is a very simple formula for determining whether debt is prudent or imprudent. You can use this formula in real life, in business, and in programming. The formula is: Does the debt increase your net worth, and can you make the payments?

People often focus on the first criterion, without properly considering the second. Buying a house is almost certain to increase your net worth despite the debt (though lately…). On the other hand, if you cannot make the payments, you won’t keep that house for long. The reason for our current economic woes has a lot to do with people trying to increase their net worth despite the fact that they couldn’t afford the payments. (indeed, they were encouraged by a meme very similar to the DI meme!)

Bad code is always imprudent.

Writing bad code never increases your net worth; and the interest rate is really high. People who write bad code are like those twenty-somethings who max out all their credit cards. Every transaction decreases net worth, and has horrendous consequences for cash flow. In the end, the vast bulk of your effort goes to paying the interest (the inevitable slow down of the team as they push the messes around). Paying down the principal becomes infeasible. (Just the way credit card companies like it.)

Some Suboptimal Design Decisions are Prudent Debt.

But most are not. Every once in a while there is a suboptimal design decision that will increase the net worth of the project by getting that project into customers’ hands early.

This is not the same as delivering software that is under-featured. It is often prudent to increase the net worth of a project by giving customers early access to a system without a full and rich feature set. This is not debt. This is more like a savings account that earns interest.

Indeed, this is one reason that most technical debt is imprudent. If you are truly concerned about getting to market early, it is almost always better to do it with fewer features, than with suboptimal design. Missing features are a promise that can be kept. Paying back suboptimal designs creates interest payments that often submerge any attempts at payback and can slow the team to the breaking point.

But there are some cases where a sub-optimal design can increase your net worth by allowing you to deliver early. However, the interest rate needs to be very low, and the principal payments need to be affordable, and big enough to pay back the debt in short order.

What does a low interest rate mean? It means that the sub-optimal design does not infiltrate every part of your system. It means that you can put the sub-optimal design off in a corner where it doesn’t impact your daily development life.

For example, I recently implemented a feature in FitNesse using HTML Frames. This is sub-optimal. On the other hand, the feature is constrained to one small part of the system, and it simply doesn’t impact any other part of the system. It does not impede my progress. There is no mess for me to move around. The interest rate is almost zero! (nice deal if you can get it!)

Implementing that feature with ajax is a much larger project. I would have had to invest a great deal of time and effort, and would have had to restructure massive amounts of the internal code. So the choice was a good one.

Better yet, the customer experience has pretty much been a big yawn. I thought people would really like the feature and would drive me to expand upon it. Instead, the customer base has virtually ignored it.

So my solution will be to pay back this debt by eliminating the feature. It was a cheap experiment, that resulted in my not having to spend a lot of time and effort on a new architecture! Net worth indeed!

But it might have gone the other way. My customers may have said: “Wow, Great! We want more!” At that point it would have been terrible to expand on the HTML Frames! That decision would have been in the DI quadrant. Deliberate imprudence! Rather, my strategy would have been to replace the suboptimal Frames design of the feature with an isolated ajax implementation, and then to gradually migrate the ajax solution throughout the project. That would have been annoying, but loan payments always are.

Summary

So, don’t let the caption in the DP quadrant be an excuse. Don’t fall for the DI meme that says “We just gotta bite the bullet”. Tread very carefully when you enter the DP quadrant. Look around at all your options, because it’s easy to think you are in the DP quadrant when you are really in the DI quadrant.

Remember: Murphy shall send you strong delusion, that you should believe you are in DP; so that you will be damned in DI.

TDD Derangement Syndrome 235

Posted by Uncle Bob Wed, 07 Oct 2009 13:32:00 GMT

My recent blog about TDD, Design Patterns, Concurrency, and Sudoku seemed to draw the ire of a few vocal TDD detractors. Some of these people were rude, insulting, derisive, dismissive, and immature. Well, Halloween is not too far away.

In spite of their self-righteous snickering they did ask a few reasonable questions. To be fair I thought it would be appropriate for me to answer them.

Is there any research on TDD?

It turns out that there is a fair bit.

  • One simple google search led me to this blog by Phil Haack in which he reviewed a TDD research paper. Quoting from the paper:

We found that test-first students on average wrote more tests and, in turn, students who wrote more tests tended to be more productive. We also observed that the minimum quality increased linearly with the number of programmer tests, independent of the development strategy employed.

  • The same google search led me to this blog by Matt Hawley, in which he reviewed several other research papers. Part of his summary:

  • 87.5% of developers reported better requirements understanding.
  • 95.8% of developers reported reduced debugging efforts.
  • 78% of developers reported TDD improved overall productivity.
  • 50% of developers found that it decreased overall development time.
  • 92% of developers felt that TDD yielded high-quality code.
  • 79% of developers believed TDD promoted simpler design.

Actually, I recognize some of Matt’s results as coming from a rather famous 2003 study (also in the list of google results) by Laurie Williams and Boby George. This study describes a controlled experiment that they conducted in three different companies. Though Matt’s summary above is based (in part) on that study, there is more to say.

In the George-Williams study teams that practiced TDD took 16% longer to claim that they were done than the teams that did not practice TDD. Apparently tests are more accurate than claims since the non-TDD teams failed to pass one third of the researchers’ hidden acceptance tests, whereas the TDD teams passed about 6 out of 7. To paraphrase Kent Beck: “If it doesn’t have to work, I can get it done a lot faster!”

Another point of interest in this study is that the TDD teams produced a suite of automated tests with very high test coverage (close to 100% in most cases) whereas most of the non-TDD teams did not produce such a suite; even though they had been instructed to.

  • Jim Shore wrote a review of yet another research summary which I found in the same google search. This one combines 7 different studies (including George-Williams). Here the results range from dramatically improved quality and productivity to no observed effect.
  • Finally, there is this 2008 case study of TDD at IBM and Microsoft which shows that TDDers enjoy a defect density reduction ranging from 30% to 90% (as measured by defect tracking tools) and a productivity cost of between 15% and 35% (the subjective opinion of the managers). I refer you back to Kent Beck’s comment above.

I’m sure there is more research out there. After all this was just one google search. I think it’s odd that the TDD detractors didn’t find anything when they did their google searches.

  • Oh yeah, and then there was that whole issue of IEEE Software that was dedicated to papers and research on TDD.

What projects have been written with TDD, hmmm?

Quite a few, actually. The following is a list of projects that have an automated suite of unit tests with very high coverage. Those that I know for a fact use TDD, I have noted as such. The others, I can only surmise. If you know of any others, please post a comment here.

  • JUnit. This one is kind of obvious. JUnit was written by Kent Beck and Erich Gamma using TDD throughout. If you measure software success by sheer distribution, this particular program is wildly successful.
  • Fit. Written by Ward Cunningham. The progenitor of most current acceptance testing frameworks.
  • FitNesse. This testing framework has tens of thousands of users. It is 70,000 lines of java code, with 90%+ code coverage. TDD throughout. Very small bug-list. Again, if you measure by distribution, another raving success.
  • Cucumber and RSpec. These two are testing frameworks in Ruby. Of course you’d expect a testing framework to be written with TDD, wouldn’t you? I know these were. TDD throughout.
  • Limelight. A GUI framework in JRuby. TDD throughout.
  • jfreechart.
  • Spring
  • JRuby
  • Smallsql
  • Ant
  • MarsProject
  • Log4J
  • Jmock

Are there others? I’m sure there are. This was just a quick web search. Again, if you know of more, please add a comment.

Echoes from the Stone Age 295

Posted by Uncle Bob Tue, 06 Oct 2009 16:07:29 GMT

The echoes from Joel Spolsky’s Duct Tape blog continue to bounce off the blogosphere and twitterverse. Tim Bray and Peter Seibel have both written responses to Joel, me, and each other.

Here are some stray thoughts…

TDD

Anyone who continues to think that TDD slows you down is living in the stone age. Sorry, that’s just the truth. TDD does not slow you down, it speeds you up.

Look, TDD is not my religion, it is one of my disciplines. It’s like double-entry bookkeeping for accountants, or sterile procedure for surgeons. Professionals adopt such disciplines because they understand the theory behind them, and have directly experienced the benefits of using them.

I have experienced the tremendous benefit that TDD has had in my work, and I have observed it in others. I have seen and experienced the way that TDD helps programmers conceive their designs. I have seen and experienced the way it documents their decisions. I have seen and experienced the decouplings imposed by the tests, and I have seen and experienced the fearlessness with which TDDers can change and clean their code.

To be fair, I don’t think TDD is always appropriate. There are situations when I break the discipline and write code before tests. I’ll write about these situations in another blog. However, these situations are few and far between. In general, for me and many others, TDD is a way to go fast, well, and sure.

The upshot of all this is simple. TDD is a professional discipline. TDD works. TDD makes you faster. TDD is not going away. And anyone who has not really tried it, and yet claims that it would slow them down, is simply being willfully ignorant. I don’t care if your name is Don Knuth, Jamie Zawinski, Peter Seibel, or Peter Pan. Give it a real try, and then you have the right to comment.

Let me put this another way. And now I’m talking directly to those who make the claim that TDD would slow them down. Are you really such a good programmer that you don’t need to thoroughly check your work? Can you conceive of a better way to check your work than to express your intent in terms of an executable test? And can you think of a better way to ensure that you can write that test other than to write it first?

If you can, then I want to hear all about it. but I don’t want to hear that you write a few unit tests after the fact. I don’t want to hear that you manually check your code. I don’t want to hear that you do design and therefore don’t need to write tests. Those are all stone-age concepts. I know. I’ve been there.

So there. <grin>

The Design Pattern Religion

Tim Bray said:

My experience suggests that there are few surer ways to doom a big software project than via the Design Patterns religion.

He’s right of course. The Design Patterns religion is a foul bird that ravages teams and cuts down young projects in their prime. But let’s be clear about what that religion is. The Design Patterns religion is the ardent belief that the use of design patterns is good.

Here’s a clue. Design Patterns aren’t good. They also aren’t bad. They just are. Given a particular software design situation, there may be a pattern that fits and is beneficial. There may also be patterns that would be detrimental. It’s quite possible that none of the currently documented patterns are appropriate and that you should close the book and just solve the problem.

Here’s another clue. You don’t use patterns. You don’t apply patterns. Patterns just are. If a particular pattern is appropriate to solve a given problem, then it will be obvious. Indeed it is often so obvious that you don’t realize that the pattern is in place until you are done. You look back at your code and realize: “Oh, that’s a Decorator!”.

So am I saying that Design Patterns are useless?

NO! I want you to read the patterns books. I want you to know those patterns inside and out. If I point at you and say “Visitor” I want you at the board drawing all the different variants of the pattern without hesitation. I want you to get all the names and roles right. I want you to know patterns.

But I don’t want you to use patterns. I don’t want you to believe in patterns. I don’t want you to make patterns into a religion. Rather I want you to be able to recognize them when they appear, and to regularize them in your code so that others can recognize them too.

Design Patterns have a huge benefit. They have names. If you are reading code, and you see the word “Composite”, and if the author took care to regularize the code to the accepted names and roles of the “Composite” pattern, then you will know what that part of the code is doing instantly. And that is powerful!
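For example, here is a minimal Composite sketch in Java. The names Graphic, Square, and Picture are mine, but a reader who knows the pattern’s roles (Component, Leaf, Composite) can decode the structure at a glance, which is exactly the benefit of the shared vocabulary:

```java
import java.util.ArrayList;
import java.util.List;

// Component: the common interface that leaves and composites share.
interface Graphic {
    int area();
}

// Leaf: a primitive element with no children.
class Square implements Graphic {
    public int area() { return 4; }
}

// Composite: holds children and delegates the operation to them.
class Picture implements Graphic {
    private final List<Graphic> children = new ArrayList<>();
    void add(Graphic g) { children.add(g); }
    public int area() {
        int total = 0;
        for (Graphic g : children) total += g.area();
        return total;
    }
}

public class CompositeSketch {
    public static void main(String[] args) {
        Picture p = new Picture();
        p.add(new Square());
        p.add(new Square());
        if (p.area() != 8) throw new AssertionError();
        System.out.println("composite area: " + p.area()); // prints 8
    }
}
```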

Minimizing Concurrency.

In my first Duct Tape blog I made the statement:

I found myself annoyed at Joel’s notion that most programmers aren’t smart enough to use templates, design patterns, multi-threading, COM, etc. I don’t think that’s the case. I think that any programmer that’s not smart enough to use tools like that is probably not smart enough to be a programmer period.

Tim responds with:

...multi-threading is part of the problem, not part of the solution; that essentially no application programmer understands threads well enough to avoid deadlocks and races and horrible non-repeatable bugs. And that COM was one of the most colossal piles of crap my profession ever foisted on itself.

Is concurrency really part of the problem? Yes! Concurrency is a really big part of the problem. Indeed, the first rule of concurrency is: DON’T. The second rule is: REALLY, DON’T.

The problem is that sometimes you have no choice. And in those situations, where you absolutely must use concurrency, you should know it inside and out!

I completely and utterly reject the notion that ignorance is the best defense. I reject that lack of skill can ever be an advantage. So I want you to know concurrency. I want to shout “Dining Philosophers” and have you run to the board without hesitation and show me all the different solutions. If I holler “Deadlock”, I want you to quickly identify the causes and solutions.
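To make that concrete, here is one of those textbook deadlock remedies sketched in Java: a global lock ordering. Both threads acquire the two locks in the same fixed order, so the circular wait that deadlock requires can never form. (The LockOrderingSketch class and its locks are illustrative, not from any particular system.)

```java
public class LockOrderingSketch {
    private static final Object LOCK_A = new Object();
    private static final Object LOCK_B = new Object();
    static int transfers = 0;

    // Both workers take LOCK_A before LOCK_B. If one thread took them in
    // the opposite order, the classic circular wait (deadlock) could occur.
    static void transfer() {
        synchronized (LOCK_A) {
            synchronized (LOCK_B) {
                transfers++;
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> { for (int i = 0; i < 1000; i++) transfer(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 1000; i++) transfer(); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        if (transfers != 2000) throw new AssertionError();
        System.out.println("transfers: " + transfers);
    }
}
```

Knowing why this works (it breaks one of the four Coffman conditions for deadlock) is exactly the kind of cold knowledge that lets you decide when concurrency is worth the risk at all.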

Here’s a clue. If you want to avoid using something, know that something cold.

Sudoku

At the end of his blog, Peter jumps on the pile of bodies already crushing Ron Jeffries regarding the Sudoku problem from July of 2006.

I find the pile-up disturbing. Ron had the courage to fail in public. Indeed he announced up front that he might “crash and burn”. And yet he got lambasted for it by people who hid behind someone else’s work. The responses to Ron’s tutorial blogs were completely unfair because the authors of those blogs had everything worked out for them by Dr. Peter Norvig before they published their screeds. They were comparing apples to oranges because their responses were about the solution whereas Ron’s blogs were about the process.

Which one of us has not gone down a rat-hole when hunting for a solution to a complex problem? Let that person write the first blog. Everyone else ought to be a bit more humble.

Do the people on the pile think that Ron is unable to solve the Sudoku problem? (Some have said as much.) Then they don’t know Ron very well. Ron could code them all under the table with one hand tied behind his back.

Personal issues aside, I find the discussion fascinating in its own right. Ron had attempted to solve the Sudoku problem by gaining insight into that problem through the process of coding intermediate solutions. This is a common enough TDD approach. Indeed, the Bowling Game and the Prime Factors Kata are both examples where this approach can work reasonably well.

This approach follows the advice of no less than Grady Booch, who (quoting Heinlein) said: “When faced with a problem you do not understand, do any part of it you do understand, then look at it again.”

Ron was attempting to use TDD to probe into the problem to see if he could gain any insight. This technique often bears fruit. Sometimes it does not.

Here is a classic example. Imagine you were going to write a sort algorithm test first:

  • Test 1: Sort an empty array. Solution: Return the input array.
  • Test 2: Sort an array with one element. Solution: Return the input array.
  • Test 3: Sort an array with two elements. Solution: Compare the two elements and swap if out of order. Return the result.
  • Test 4: Sort an array with three elements. Solution: Compare the first two and swap if out of order. Compare the second two and swap if out of order. Compare the first two again and swap if out of order. Return the result.
  • Test 5: Sort an array with four elements. Solution: Put the compare and swap operations into a nested loop. Return the result.

The end result is a bubble sort. The algorithm virtually self-assembles. If you had never heard of a bubble sort before, this simple set of tests would have driven you to implement it naturally.
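Here is roughly where that test sequence lands you (class and method names are mine, for illustration): the copy-and-return handles tests 1 and 2, the compare-and-swap handles tests 3 and 4, and test 5 pushes the swaps into a nested loop.

```java
import java.util.Arrays;

// The sort the tests above converge on: compare adjacent elements and
// swap when out of order, shrinking the unsorted region each pass.
public class BubbleSortSketch {
    static int[] sort(int[] input) {
        int[] a = Arrays.copyOf(input, input.length);  // tests 1 and 2: return the input
        for (int limit = a.length - 1; limit > 0; limit--) {  // outer loop from test 5
            for (int i = 0; i < limit; i++) {                 // compare-and-swap from tests 3 and 4
                if (a[i] > a[i + 1]) {
                    int tmp = a[i]; a[i] = a[i + 1]; a[i + 1] = tmp;
                }
            }
        }
        return a;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(sort(new int[] {3, 1, 2})));  // [1, 2, 3]
    }
}
```

Nobody sat down intending to write a bubble sort; the tests backed them into one.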

Problems like Bowling, Prime Factors, and Bubble Sort hold out the interesting promise that TDD may be a way to derive algorithms from first principles!

On the other hand, what set of tests would drive you to implement a QuickSort? There are none that I know of. QuickSort and Sudoku may require a serious amount of introspection and concentrated thought before the solution is apparent. They may belong to a class of algorithms that do not self-assemble like Bowling, Prime Factors, and Bubble Sort.
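For contrast, here is a quicksort sketch (names are mine, and this uses the Lomuto partition scheme for brevity). The code itself is short; the point is that the partition-and-recurse idea has to arrive as a flash of insight. No obvious sequence of small failing tests forces you toward it the way the tests above force a bubble sort.

```java
import java.util.Arrays;

// QuickSort: partition around a pivot, then recurse on each side.
public class QuickSortSketch {
    static void sort(int[] a, int lo, int hi) {
        if (lo >= hi) return;
        int pivot = a[hi], p = lo;           // Lomuto partition around the last element
        for (int i = lo; i < hi; i++) {
            if (a[i] < pivot) { int t = a[i]; a[i] = a[p]; a[p] = t; p++; }
        }
        int t = a[p]; a[p] = a[hi]; a[hi] = t;  // move pivot into its final slot
        sort(a, lo, p - 1);                  // recurse on the smaller elements
        sort(a, p + 1, hi);                  // recurse on the larger elements
    }

    public static void main(String[] args) {
        int[] a = {5, 3, 8, 1, 2};
        sort(a, 0, a.length - 1);
        System.out.println(Arrays.toString(a));  // [1, 2, 3, 5, 8]
    }
}
```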

This blog by Kurt Christensen provides all the links to the various Sudoku articles, and sums it up this way:

While TDD may not be the best tool for inventing new algorithms, it may very well be the best tool for applying those algorithms to the problem at hand.

Actually I think TDD is a good way to find out if an algorithm will self-assemble or not. It usually doesn’t take a lot of time to figure out which it’s going to be.
