The Tricky Bit

Posted by Uncle Bob Fri, 23 Apr 2010 09:28:35 GMT

I once heard a story about the early days of the Concorde. A British MP was on board for a demonstration flight. As the jet went supersonic he disappointedly commented to one of the designers that it didn’t feel any different at all. The designer beamed and said: “Yes, that was the tricky bit.”

I wonder if the MP would have been happier if the plane had begun to vibrate and shake and make terrifying noises.

While at #openvolcano10, Gojko Adzic and Steve Freeman told the story of a company in which one team, among many, had gone agile. Over time that team managed to get into a nice rhythm, passing its continuous builds, adding lots of business value, working normal hours, and keeping their customers happy. Meanwhile other teams at this company were facing delays, defects, frustrated customers, and lots of overtime.

The managers at this company looked at the agile team, working like a well-oiled machine, and looked at all the other teams toiling in vain at the bilge pump, and came to the conclusion that the project that the agile team was working on was too simple to keep them busy. That the other teams’ projects were far more difficult. It didn’t even occur to them that the agile team had figured out the tricky bit.

Sapient Testing: The "Professionalism" meme.

Posted by Uncle Bob Thu, 15 Apr 2010 11:18:00 GMT

James Bach gave a stirring keynote today at ACCU 2010. He described a vision of testing that our industry sorely needs. To wit: testing requires sapience.

Testing, according to Bach, is not about assuring conformance to requirements; rather it is about understanding the requirements. Even that’s not quite right. It is not sufficient to simply understand and verify the requirements. A good tester uses the behavior of the system and the descriptions in the requirements (and face-to-face interaction with the authors of both) to understand the motivation behind the system. Ultimately it is the tester’s job to divine the system that the customer imagined, and then to illuminate those parts of the system that are not consistent with that imagination.

It seems to me that James is attempting to define “professionalism” as it applies to testing. A professional tester does not blindly follow a test plan. A professional tester does not simply write test plans that reflect the stated requirements. Rather a professional tester takes responsibility for interpreting the requirements with intelligence. He tests, not only the system, but also (and more importantly) the assumptions of the programmers and specifiers.

I like this view. I like it a lot. I like the fact that testers are seeking professionalism in the same way that developers are. I like the fact that testing is becoming a craft, and that people like James are passionate about that craft. There may yet be hope for our industry!

There has been a long-standing friction between James’ view of testing and the Agile emphasis on TDD and automated tests. Agilists have been very focused on creating suites of automated tests, and exposing the insanity (and inanity) of huge manual testing suites. This focus can be (and has been) misinterpreted as an anti-tester bias.

It seems to me that professional testers are completely compatible with agile development. No, that’s wrong. I think professional testers are utterly essential to agile development. I don’t want testers who rotely execute brain-dead manual test plans. I want testers using their brains! I want testers to be partners in the effort to create world-class, high-quality software. As a professional developer I want – I need – professional testers helping me find my blind spots, illuminating the naivete of my assumptions, and partnering with me to satisfy the needs of our customers.

Archeological Dig

Posted by Uncle Bob Wed, 11 Nov 2009 16:39:00 GMT

I was going through some old files today, and I stumbled upon some acetate slides from 1995. They were entitled: “Managing OO Projects”. Wow! What a difference fifteen years makes! (Or does it?) ...

In 1995-99 I was frequently asked to speak to managers about what a transition to OO (usually from C to C++) would do for (or to) them. I would spend a half day to a day going over the issues, costs, and benefits.

One part of that talk (usually about 90 minutes) was a discussion of software process. It was that process portion of the talk that the acetate slides I found described.

1995 was during the ascendancy of Waterfall. Waterfall thinking was king. RUP had not yet been conceived as an acronym. And though Booch was beating the drum for incrementalism, most people (even many within Rational) were thinking in terms of six to eighteen month waterfalls.

So, here are the slides that I uncovered deep within an old filing cabinet. I scanned them in. They were produced on a Macintosh using the old “More” program. (Where is that program now? It was so good.)

Go ahead and read them now. Then come back here and continue…

What struck me about those slides was the consistency of the message with today. It was all about iterative development. Small iterations (though I never deigned to define the length in the slides, I frequently told people 2 weeks), measured results, etc. etc. Any Agile evangelist could use those slides today. He or she would have to dance quickly around a few statements, but overall the message has not shifted very much.

What’s even more interesting is the coupling between the process, and OO. The slides talk a lot about dependency management and dependency structure. There are hints of the SOLID principles contained in those slides. (Indeed several of the principles had already been identified by that time.) This coupling between process and software structure was a harbinger of the current craftsmanship/clean-code movement.

Of course the one glaring omission from these slides is TDD. That makes me think that TDD was the true catalyst of change, and the bridge that conveyed our industry from then to now.

Anyway, I guess the more things change, the more they stay the same.

Comments please!

Excuse me sir, What Planet is this?

Posted by Uncle Bob Thu, 05 Nov 2009 16:35:00 GMT

Update 12 hours later.

I’m not very proud of this blog (or as one commenter correctly called it “blart”). It is derisive, sneering, and petulant. It is unprofessional. I guess I was having a bad morning. I slipped. I didn’t check with my green band.

So I apologize to the audience at large, and to Cashto. You should expect better from me.

I thought about pulling the blog down; but I think I’ll leave it up here as an example of how not to write a blog.

Some folks on twitter have been asking me to respond to this blough (don’t bother to read it right now, I’ll give you the capsule summary below. Read it later if you must). It’s a typical screed complete with all the usual complaints, pejoratives, and illogic. Generally I don’t respond to blarts like this because I don’t imagine that any readers take them very seriously. But it appears that this blelch has made the twitter rounds and that I’m going to have to say something.

Here are the writer’s main points:

  • He likens unit tests to training wheels and says you can’t use them to win the Tour de France.
    • I think winning the Tour de France has much more to do with self-discipline than he imagines it does. I mean it’s not really just as simple as: “Get on a bike and ride like hell!”
  • He says testing is popular amongst college students
    • I’d like to see his data!
  • He goes on to say: (cue John Cleese): “(ahem) unit tests lose their effectiveness around level four of the Dreyfus model of skill acquisition”.
    • (blank stunned stare). Is this a joke? All false erudition aside, WTF is he talking about?
  • He says that unit tests don’t give us confidence in refactoring because they over-specify behavior and are too fine-grained.
    • He apparently prefers hyphenations to green bars.
  • He says they mostly follow the “happy path” and therefore don’t find bugs.
    • Maybe when he writes them! This is a big clue that the author can’t spell TDD.
  • He complains about Jester
    • without getting the joke!
  • He says unit tests “encourage some pretty questionable practices.” He flings a few design principles around and says that unit testing doesn’t help with them.
    • as the author, editor, and/or advocate of many of those principles; I have a slightly different view.
  • He says that “many are starting to discover that functional programming teaches far better design principles than unit testing ever will”
    • Oh no! Not the old “My language teaches design.” claim. We’ve heard it all before. They said it about C++, Java, COM (?!), etc… The lesson of the ‘90s? Languages don’t teach design. You can make a mess in any language.
  • He says: “tests can have negative ROI. Not only do they cost a lot to write, they’re fragile, they’re always broken, they get in the way of your refactoring, they’re always having you chase down bogus failures, and the only way to get anything done is to ignore them.”
    • In one of my favorite episodes of Dr. Who, Tom Baker exits the Tardis, walks up to someone on the street and says: “Excuse me sir, what planet is this?”
  • He says: “What I’m saying is that it’s okay if you don’t write unit tests for everything. You probably have already suspected this for a long time, but now you know. I don’t want you to feel guilty about it any more.”
    • Translation: “I don’t want to feel guilty about it anymore so I’m going to try to convince you…” I sincerely doubt this author has your best interests at heart.
  • He says: “Debugging is easy, at least in comparison to writing all those tedious tests.”
    • Refer back to the Dr. Who quote.



To quote Barack Obama: “Enough!”

Has this guy ever done TDD? I rather doubt it. Or if he did, he was so inept at it that his tests were “fragile”, “always broken”, and “in the way of refactoring”. I think he should give it another try and this time spend a bit more time on test design.

Perhaps he’s one of those guys who thought that unit tests were best written after the code. Certainly his list of complaints makes a lot of sense in that light. Hint: If you want to fail at unit testing, write them last.

The bottom line is that the guy probably had a bad experience writing unit tests. He’s tired of writing them and wants to write fewer of them. He’d rather debug. He thinks he can refactor without tests (which is definitively false). He thinks he can go faster by writing fewer tests. Fine, that’s his choice. And he’s found a rationalization to support his desires. Great.

I predict that his results will not compare well with those who adopt the discipline of TDD. I predict that after a few years he’ll either change his mind, or go into management.

Oh, and to the author: Gesundheit!

Manual Mocking: Resisting the Invasion of Dots and Parentheses

Posted by Uncle Bob Wed, 28 Oct 2009 23:12:16 GMT

The twittersphere has been all abuzz today because of something I tweeted early this morning (follow @unclebobmartin). In my tweet I said that I hand-roll most of my own mock objects in Java, rather than using a mocking framework like mockito.

The replies were numerous and vociferous. Dave Astels poignantly stated that hand-rolling mocks is so 2001!

So why do I roll my own mocks?

Consider the following two tests:
import static org.junit.Assert.assertThat;
import static org.hamcrest.CoreMatchers.equalTo;
import static org.mockito.Mockito.*;

import java.util.ArrayList;
import java.util.List;

import org.junit.Before;
import org.junit.Test;

public class SelectorTest {
  private List<Object> list;

  @Before
  public void setup() {
    list = new ArrayList<Object>();
    list.add(new Object());
  }

  @Test
  public void falseMatcherShouldSelectNoElements_mockist() {
    Matcher<Object> falseMatcher = mock(Matcher.class);
    Selector<Object> selector = new Selector<Object>(falseMatcher);
    when(falseMatcher.match(anyObject())).thenReturn(false);
    List<Object> selection = selector.select(list);
    assertThat(selection.size(), equalTo(0));
  }

  @Test
  public void falseMatcherShouldSelectNoElements_classic() {
    Matcher<Object> falseMatcher = new FalseMatcher();
    Selector<Object> selector = new Selector<Object>(falseMatcher);
    List<Object> selection = selector.select(list);
    assertThat(selection.size(), equalTo(0));
  }

  private static class FalseMatcher implements Matcher<Object> {
    public boolean match(Object element) {
      return false;
    }
  }
}

The first test shows the really cool power of mockito (which is my current favorite in the menagerie of Java mocking frameworks). Just in case you can’t parse the syntax, let me describe it for you:

  • falseMatcher is assigned the return value of the “mock” function. This is a very cool function that takes the argument class and builds a new stubbed object that derives from it. In mockito, the argument can be a class or an interface. Cool!
  • Now don’t get all panicky about the strange parenthetic syntax of the ‘when’ statement. The ‘when’ statement simply tells the mock what to do when a method is called on it. In this case it instructs the falseMatcher to return false when the ‘match’ function is called with any object at all.

The second test needs no explanation.

...

And that’s kind of the point. Why would I include a bizarre, dot-ridden, parentheses-laden syntax in my tests, when I can just as easily hand-roll the stub in pure and simple Java? How hard was it to hand-roll that stub? Frankly, it took a lot less time and effort to hand-roll it than it took to write the (when(myobj.mymethod(anyx())).)()).))); statement.

OK, I’m poking a little fun here. But it’s true. My IDE (IntelliJ IDEA) generated the stub for me. I simply started with:

Matcher<Object> falseMatcher = new Matcher<Object>() {};

IntelliJ complained that some methods weren’t implemented and offered to implement them for me. I told it to go ahead. It wrote the ‘match’ method exactly as you see it. Then I chose “Convert Anonymous to Inner…” from the refactoring menu and named the new class FalseMatcher. Voila! No muss, no fuss, no parenthetic maze of dots.
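Spies are just as easy to hand-roll as stubs. A spy implements the interface, records what it was told, and lets the test assert on the recording afterward. Here is a minimal, self-contained sketch; the Matcher interface mirrors the one in the test above, but SpyMatcher and the rest of the names are my own illustrations, not part of FitNesse or Mockito:

```java
import java.util.ArrayList;
import java.util.List;

public class SpyExample {
  // Mirrors the Matcher interface used by SelectorTest above.
  interface Matcher<T> {
    boolean match(T element);
  }

  // A hand-rolled spy: a stub that also records every interaction.
  static class SpyMatcher implements Matcher<Object> {
    final List<Object> matchedElements = new ArrayList<Object>();

    public boolean match(Object element) {
      matchedElements.add(element); // record the call for later verification
      return true;                  // canned answer, just like a stub
    }
  }

  public static void main(String[] args) {
    SpyMatcher spy = new SpyMatcher();
    spy.match("first");
    spy.match("second");
    // The test verifies the choreography by inspecting the recording:
    if (spy.matchedElements.size() != 2) throw new AssertionError();
    System.out.println("spy recorded " + spy.matchedElements.size() + " calls");
  }
}
```

No framework, no dots, and the IDE writes most of it for you.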

Now look, I’m not saying you shouldn’t use mockito, or any of these other mocking tools. I use them myself when I must. Here, for example, is a test I wrote in FitNesse. I was forced to use a mocking framework because I did not have the source code of the classes I was mocking.
  @Before
  public void setUp() {
    manager = mock(GSSManager.class);
    properties = new Properties();
  }

  @Test
  public void credentialsShouldBeNonNullIfServiceNamePresent() throws Exception {
    properties.setProperty("NegotiateAuthenticator.serviceName", "service");
    properties.setProperty("NegotiateAuthenticator.serviceNameType", "1.1");
    properties.setProperty("NegotiateAuthenticator.mechanism", "1.2");
    GSSName gssName = mock(GSSName.class);
    GSSCredential gssCredential = mock(GSSCredential.class);
    when(manager.createName(anyString(), (Oid) anyObject(), (Oid) anyObject())).thenReturn(gssName);
    when(manager.createCredential((GSSName) anyObject(), anyInt(), (Oid) anyObject(), anyInt())).thenReturn(gssCredential);
    NegotiateAuthenticator authenticator = new NegotiateAuthenticator(manager, properties);
    Oid serviceNameType = authenticator.getServiceNameType();
    Oid mechanism = authenticator.getMechanism();
    verify(manager).createName("service", serviceNameType, mechanism);
    assertEquals("1.1", serviceNameType.toString());
    assertEquals("1.2", mechanism.toString());
    verify(manager).createCredential(gssName, GSSCredential.INDEFINITE_LIFETIME, mechanism, GSSCredential.ACCEPT_ONLY);
    assertEquals(gssCredential, authenticator.getServerCredentials());
  }

If I’d had the source code of the GSS classes, I could have created some very simple stubs and spies that would have allowed me to make these tests a lot cleaner than they currently appear. Indeed, I might have been able to test the true behavior of the classes rather than simply testing that I was calling them appropriately…

Mockism

That last bit is pretty important. Some time ago Martin Fowler wrote a blog about the Mockist and Classical styles of TDD. In short, Mockists don’t test the behavior of the system so much as they test that their classes “dance” well with other classes. That is, they mock/stub out all the other classes that the class under test uses, and then make sure that all the right functions are called in all the right orders with all the right arguments, etc. There is value to doing this in many cases. However, you can get pretty badly carried away with the approach.

The classical approach is to test for desired behavior, and trust that if the test passes, then the class being tested must be dancing well with its partners.

Personally, I don’t belong to either camp. I sometimes test the choreography, and I sometimes test the behavior. I test the choreography when I am trying to isolate one part of the system from another. I test for the behavior when such isolation is not important to me.

The point of all this is that I have observed that a heavy dependence on mocking frameworks tends to tempt you towards testing the dance when you should be testing behavior. Tools can drive the way we think. So remember, you dominate the tool; don’t let the tool dominate you!

But aren’t hand-rolled mocks fragile?

Yes, they can be. If you are mocking a class or interface that is very volatile (i.e., you are adding new methods or modifying method signatures a lot) then you’ll have to go back and maintain all your hand-rolled mocks every time you make such a change. On the other hand, if you use a mocking framework, the framework will take care of that for you unless one of the methods you are specifically testing is modified.

But here’s the thing. Interfaces should not usually be volatile. They should not continue to grow and grow, and their methods should not change much. OK, I realize that’s wishful thinking. But, yes, I wish for the kind of design in which interfaces are the least volatile source files you have. That’s kind of the point of interfaces after all… You create interfaces so that you can separate volatile implementations from non-volatile clients. (Or at least that’s one reason.)
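To make that concrete, here is the arrangement I’m wishing for, sketched with invented names (nothing below comes from FitNesse or the GSS code):

```java
import java.util.ArrayList;
import java.util.List;

public class InterfaceStability {
  // The interface is the least volatile element; clients depend only on it.
  interface PaymentGateway {
    void charge(String account, int cents);
  }

  // The client never changes when implementations churn behind the interface.
  static class Checkout {
    private final PaymentGateway gateway;
    Checkout(PaymentGateway gateway) { this.gateway = gateway; }
    void completeOrder(String account, int cents) {
      gateway.charge(account, cents);
    }
  }

  public static void main(String[] args) {
    // Because the interface is small and stable, a hand-rolled spy is trivial,
    // and it needs no maintenance when some implementation behind it changes.
    final List<String> calls = new ArrayList<String>();
    PaymentGateway spy = new PaymentGateway() {
      public void charge(String account, int cents) {
        calls.add(account + ":" + cents); // record the choreography
      }
    };
    new Checkout(spy).completeOrder("acct-42", 995);
    System.out.println(calls.get(0));
  }
}
```

The stable interface shields both the client and the hand-rolled test double from the volatility of the implementations.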

So if you are tempted to use a mocking framework because you don’t want to maintain your volatile interfaces, perhaps you should be asking yourself the more pertinent question about why your interfaces are so volatile.

Still, if you’ve got volatile interfaces, and there’s just no way around it, then a mocking framework may be the right choice for you.

So here’s the bottom line.

  • It’s easy to roll your own stubs and mocks. Your IDE will help you and they’ll be easier and more natural to read than the dots and parentheses that the mocking frameworks impose upon you.
  • Mocking frameworks drive you towards testing choreography rather than behavior. This can be useful, but it’s not always appropriate. And besides, even when you are testing choreography, the hand-rolled stubs and mocks are probably easier to write and read.
  • There are special cases where mocking tools are invaluable, specifically when you have to test choreography with objects that you have no source for or when your design has left you with a plethora of volatile interfaces.

Am I telling you to avoid using mocking frameworks? No, not at all. I’m just telling you that you should drive tools, tools should not drive you.

If you have a situation where a mocking tool is the right choice, by all means use it. But don’t use it because you think it’s “agile”, or because you think it’s “right” or because you somehow think you are supposed to. And remember, hand-rolling often results in simpler tests without the litter of dots and parentheses!

We must ship now and deal with consequences

Posted by Uncle Bob Thu, 15 Oct 2009 11:17:00 GMT

Martin Fowler has written a good blog about technical debt. He suggests that there are two axes of debt: deliberate/inadvertent and prudent/imprudent. This creates four quadrants: deliberate-prudent, deliberate-imprudent, inadvertent-prudent, and inadvertent-imprudent. I agree with just about everything in his blog except for one particular caption…

Inadvertent-Imprudent Debt.

There is more of this debt than any other kind. It is all too common that software developers create a mess and don’t know they are doing it. They have not developed a nose that identifies code smells. They don’t know design principles, or design patterns. They think that the reek of rotten code is normal, and don’t even identify it as smelling bad. They think that their slow pace through the thick morass of tangled code is the norm, and have no idea they could move faster. These people destroy projects and bring whole companies to their knees. Their name is Doom.

Deliberate-Imprudent Debt.

There is a meme in our industry (call it the DI meme) that tells young software developers that rushing to the finish line at all costs is the right thing to do. This is far worse than the ignorance of the first group because these folks willfully create debt without counting the cost. Worse, this meme is contagious. People who are infected with it tend to infect others, causing an epidemic of deliberately imprudent debtors (sound familiar?). The end result, as we all now know, is economic catastrophe, inflation (of estimates), and crushing interest (maintenance) payments. They have become death, the destroyer of worlds.

Inadvertent-Prudent Debt.

This is something of an oxymoron. Ironically, it is also the best of all possible states. The fact is that no matter how careful we are, there is always a better solution that we will stumble upon later. How many times have you finished a system only to realize that if you wrote it again, you’d do it very differently, and much better?

The result is that we are always creating a debt, because our hindsight will always show us a better option after it is too late. So even the best outcome still leaves us owing. (Mother Earth will eventually collect that debt!)

Deliberate-Prudent Debt.

This is the quadrant that I have the biggest problem with. And it is this quadrant in which Martin uses the caption I don’t like. The caption is: “We must ship now and deal with consequences.”

Does this happen? Yes. Should it happen? Rarely, yes. But it damned well better not happen very often, and it damned well better not happen out of some misplaced urge to get done without counting the cost.

The problem I have with this quadrant (DP) is that people who are really in quadrant DI think they are in DP, and use words such as those that appear in the caption as an excuse to rack up a huge imprudent debt.

The real issue is the definition of the word: Imprudent.

So let me ask you a question. How prudent is debt? There is a very simple formula for determining whether debt is prudent or imprudent. You can use this formula in real life, in business, and in programming. The formula is: Does the debt increase your net worth, and can you make the payments?

People often focus on the first criterion, without properly considering the second. Buying a house is almost certain to increase your net worth despite the debt (though lately…). On the other hand, if you cannot make the payments, you won’t keep that house for long. The reason for our current economic woes has a lot to do with people trying to increase their net worth despite the fact that they couldn’t afford the payments. (indeed, they were encouraged by a meme very similar to the DI meme!)

Bad code is always imprudent.

Writing bad code never increases your net worth; and the interest rate is really high. People who write bad code are like those twenty-somethings who max out all their credit cards. Every transaction decreases net worth, and has horrendous consequences for cash flow. In the end, the vast bulk of your effort goes to paying the interest (the inevitable slowdown of the team as they push the messes around). Paying down the principal becomes infeasible. (Just the way credit card companies like it.)

Some Suboptimal Design Decisions are Prudent Debt.

But most are not. Every once in a while there is a suboptimal design decision that will increase the net worth of the project by getting that project into customers’ hands early.

This is not the same as delivering software that is under-featured. It is often prudent to increase the net worth of a project by giving customers early access to a system without a full and rich feature set. This is not debt. This is more like a savings account that earns interest.

Indeed, this is one reason that most technical debt is imprudent. If you are truly concerned about getting to market early, it is almost always better to do it with fewer features, than with suboptimal design. Missing features are a promise that can be kept. Paying back suboptimal designs creates interest payments that often submerge any attempts at payback and can slow the team to the breaking point.

But there are some cases where a sub-optimal design can increase your net worth by allowing you to deliver early. However, the interest rate needs to be very low, and the principal payments need to be affordable, and big enough to pay back the debt in short order.

What does a low interest rate mean? It means that the sub-optimal design does not infiltrate every part of your system. It means that you can put the sub-optimal design off in a corner where it doesn’t impact your daily development life.

For example, I recently implemented a feature in FitNesse using HTML Frames. This is sub-optimal. On the other hand, the feature is constrained to one small part of the system, and it simply doesn’t impact any other part of the system. It does not impede my progress. There is no mess for me to move around. The interest rate is almost zero! (nice deal if you can get it!)

Implementing that feature with ajax is a much larger project. I would have had to invest a great deal of time and effort, and would have had to restructure massive amounts of the internal code. So the choice was a good one.

Better yet, the customer experience has pretty much been a big yawn. I thought people would really like the feature and would drive me to expand upon it. Instead, the customer base has virtually ignored it.

So my solution will be to pay back this debt by eliminating the feature. It was a cheap experiment that resulted in my not having to spend a lot of time and effort on a new architecture! Net worth indeed!

But it might have gone the other way. My customers may have said: “Wow, Great! We want more!” At that point it would have been terrible to expand on the HTML Frames! That decision would have been in the DI quadrant. Deliberate imprudence! Rather, my strategy would have been to replace the suboptimal Frames design of the feature with an isolated ajax implementation, and then to gradually migrate the ajax solution throughout the project. That would have been annoying, but loan payments always are.

Summary

So, don’t let the caption in the DP quadrant be an excuse. Don’t fall for the DI meme that says “We just gotta bite the bullet”. Tread very carefully when you enter the DP quadrant. Look around at all your options, because it’s easy to think you are in the DP quadrant when you are really in the DI quadrant.

Remember: Murphy shall send you strong delusion, that you should believe you are in DP; so that you will be damned in DI.

TDD Derangement Syndrome

Posted by Uncle Bob Wed, 07 Oct 2009 13:32:00 GMT

My recent blog about TDD, Design Patterns, Concurrency, and Sudoku seemed to draw the ire of a few vocal TDD detractors. Some of these people were rude, insulting, derisive, dismissive, and immature. Well, Halloween is not too far away.

In spite of their self-righteous snickering they did ask a few reasonable questions. To be fair I thought it would be appropriate for me to answer them.

Is there any research on TDD?

It turns out that there is a fair bit.

  • One simple google search led me to this blog by Phil Haack in which he reviewed a TDD research paper. Quoting from the paper:

We found that test-first students on average wrote more tests and, in turn, students who wrote more tests tended to be more productive. We also observed that the minimum quality increased linearly with the number of programmer tests, independent of the development strategy employed.

  • The same google search led me to this blog by Matt Hawley, in which he reviewed several other research papers. Part of his summary:

  • 87.5% of developers reported better requirements understanding.
  • 95.8% of developers reported reduced debugging efforts.
  • 78% of developers reported TDD improved overall productivity.
  • 50% of developers found that it decreased overall development time.
  • 92% of developers felt that TDD yielded high-quality code.
  • 79% of developers believed TDD promoted simpler design.

Actually, I recognize some of Matt’s results as coming from a rather famous 2003 study (also in the list of google results) by Laurie Williams and Boby George. This study describes a controlled experiment that they conducted in three different companies. Though Matt’s summary above is based (in part) on that study, there is more to say.

In the George-Williams study, teams that practiced TDD took 16% longer to claim that they were done than the teams that did not practice TDD. Apparently tests are more accurate than claims, since the non-TDD teams failed to pass one third of the researchers’ hidden acceptance tests, whereas the TDD teams passed about 6 out of 7. To paraphrase Kent Beck: “If it doesn’t have to work, I can get it done a lot faster!”

Another point of interest in this study is that the TDD teams produced a suite of automated tests with very high test coverage (close to 100% in most cases) whereas most of the non-TDD teams did not produce such a suite; even though they had been instructed to.

  • Jim Shore wrote a review of yet another research summary which I found in the same google search. This one combines 7 different studies (including George-Williams). Here the results range from dramatically improved quality and productivity to no observed effect.
  • Finally, there is this 2008 case study of TDD at IBM and Microsoft which shows that TDDers enjoy a defect density reduction ranging from 30% to 90% (as measured by defect tracking tools) and a productivity cost of between 15% and 35% (the subjective opinion of the managers). I refer you back to Kent Beck’s comment above.

I’m sure there is more research out there. After all this was just one google search. I think it’s odd that the TDD detractors didn’t find anything when they did their google searches.

  • Oh yeah, and then there was that whole issue of IEEE Software that was dedicated to papers and research on TDD.

What projects have been written with TDD, hmmm?

Quite a few, actually. The following is a list of projects that have an automated suite of unit tests with very high coverage. Those that I know for a fact use TDD, I have noted as such. The others, I can only surmise. If you know of any others, please post a comment here.

  • JUnit. This one is kind of obvious. JUnit was written by Kent Beck and Erich Gamma using TDD throughout. If you measure software success by sheer distribution, this particular program is wildly successful.
  • Fit. Written by Ward Cunningham. The progenitor of most current acceptance testing frameworks.
  • FitNesse. This testing framework has tens of thousands of users. It is 70,000 lines of Java code, with 90%+ code coverage. TDD throughout. Very small bug-list. Again, if you measure by distribution, another raving success.
  • Cucumber,
  • RSpec. These two are testing frameworks in Ruby. Of course you’d expect a testing framework to be written with TDD, wouldn’t you? I know these were. TDD throughout.
  • Limelight. A GUI framework in JRuby. TDD throughout.
  • JFreeChart
  • Spring
  • JRuby
  • SmallSQL
  • Ant
  • MarsProject
  • Log4j
  • JMock

Are there others? I’m sure there are. This was just a quick web search. Again, if you know of more, please add a comment.

The CSM Integrity Deficit 127

Posted by Uncle Bob Fri, 18 Sep 2009 13:44:43 GMT

Scott Ambler wrote a blog, and an editorial about the dirty dealings and desperate deception of the Scrum Alliance and their slimy certification scam. He rightly points out that the certification means little more than the applicant’s check didn’t bounce.

He goes on to imply that the entire agile community is guilty of keeping silent while this huge chicanery was foisted upon an innocent industry. He calls this conspiratorial silence: “integrity debt”.

Oh bollocks! What an incredible load of dingo’s kidneys!

Look. I’m not a big fan of CSM. I think it’s a gimmick. I am not a CSM myself, and have no intention of joining their ranks. When I meet someone who proclaims themselves to be a CSM, I’m not particularly impressed. I know what that certification means, and I take it with a grain of salt. To me, the title of CSM is worth little more than a shrug.

We at Object Mentor do a lot of training in things like Test Driven Development, Agile Methods, Object Oriented Principles, Java, C#, etc. At the end of every course we sign and pass out certificates to the students. Those certificates proclaim that the student attended the course. I see no difference between that certificate (which is a certification, after all) and the CSM certificate. I suppose the students who take our TDD course could claim to be Object Mentor Certified TDDers; and they’d be right.

Have we created an “Integrity Debt” by handing out those certificates? Of course not. Everybody knows exactly what they mean. Nobody misrepresents their intent. They are an honest statement of fact. And the same is true of the CSM certificate.

Is it troubling that some HR people are starting to put CSM requirements on job postings? Not at all! It is perfectly within the rights of any company to decide that they want to hire people who have been appropriately trained. Are there some HR people who overestimate the value of CSM? Probably, but that’s their own fault.

In my humble opinion there is no significant integrity issue here. Oh, it wouldn’t surprise me to learn that there might have been some back-door deals in the early days of CSM. Perhaps some people were given CST status, or CSM status, without careful controls. If that happened, I chalk it up to birthing pains which the Scrum Alliance is striving to correct. I don’t think anybody was out to scam anybody else. I don’t think CSM is a far-flung conspiracy to ruin the software industry, and I don’t think the US government flew those jets into the twin towers.

Bottom line. There is no “Integrity Debt” here. What there is is a group of honest and caring folks who are trying to figure out the best ways to get Agile concepts adopted in an industry that sorely needs them.

In that regard I think that the agile movement has enjoyed a significant boost because of the interest generated by the CSM program. There are more companies doing Agile today because of CSM. So if anybody owes a debt here, it may be the Agile community owing a debt to the CSM program.

Maybe, instead of accusing and castigating and pointing the finger of judgement and doom, we ought to give a salute to Ken Schwaber and say: “Thanks Ken.”

QCon SF 2008: Radical Simplification through Polyglot and Poly-paradigm Programming 143

Posted by Dean Wampler Tue, 15 Sep 2009 15:35:00 GMT

InfoQ has posted the video of my talk at last year’s QCon San Francisco on Radical Simplification through Polyglot and Poly-paradigm Programming. I make the case that relying on just one programming language or one modularity paradigm (e.g., object-oriented programming, functional programming, etc.) is insufficient for most applications that we’re building today, from embedded systems and games up to complex Internet and enterprise applications.

I’m giving an updated version of this talk at the Strange Loop Conference, October 22-23, in St. Louis. I hope to see you there.

Clean Code and Battle Scarred Architecture 94

Posted by Uncle Bob Wed, 20 May 2009 22:35:00 GMT

If you go here you’ll see that I struggle to keep the CRAP out of FitNesse. Despite the whimsical name of this metric (which I take a perverse delight in), I find it to be remarkably useful.

CRAP is a metric that is applied to each function in the system. The formula for CRAP is:

CRAP(f) = comp(f)^2 × (1 − cov(f)/100)^3 + comp(f)

where comp(f) is the cyclomatic complexity of function f, and cov(f) is the unit test coverage percentage of function f.

So a function’s CRAP will be small iff the cyclomatic complexity is low and the test coverage is high. CRAP will be huge if cyclomatic complexity is high, and there is no coverage.
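As a quick sanity check on the formula, here is a minimal sketch in Python (the function name and sample numbers are mine, not taken from any CRAP tool):

```python
def crap(comp: int, cov: float) -> float:
    """CRAP score for one function: comp^2 * (1 - cov/100)^3 + comp,
    where comp is cyclomatic complexity and cov is coverage percent."""
    return comp ** 2 * (1 - cov / 100) ** 3 + comp

# Full coverage makes the penalty term vanish; only raw complexity remains:
print(crap(3, 100))   # 3.0
# The same complexity with zero coverage is penalized:
print(crap(3, 0))     # 12.0
# High complexity with zero coverage is crappy indeed:
print(crap(20, 0))    # 420.0
```

Notice how coverage dominates: an untested function pays roughly its complexity squared, while a fully tested one pays only its complexity.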

What does this have to do with architecture? Read on…

I work very hard to keep the ratio of crappy methods near 0.1%. Of the 5643 methods in FitNesse, only six are crappy, and five of those I have no control over.

If you study the graph you can see how quickly I react to even the slightest uptick in crap. I don’t tolerate it because it means that either I’ve got a horrifically complex method that needs to be refactored, or (and this is far more likely) I’ve got a method that isn’t sufficiently tested.

Why am I so fastidious about this? Why am I so concerned about keeping the crap out of FitNesse? The reason is pretty simple. It’s the least I can do.

If you look inside of FitNesse, you’ll find that there are lots of structures and decisions that don’t seem to make a lot of sense at first reading. There are complexities and abstractions that will leave you shaking your head.

For example. We generate all our HTML in code. Yes, you read that correctly. We write Java code that constructs HTML. And yes, that means we are slinging angle brackets around.

To be fair, we’ve managed to move most of the angle-bracket slingers into a single module that hides the HTML construction behind an abstraction barrier. This helps a lot, but cripes, who would sling angle brackets when template systems are so prevalent? I hope nobody. But FitNesse was not conceived at a time when template systems made sense (at least to us).
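To illustrate the idea (this is a hypothetical sketch in Python, not FitNesse’s actual Java API), an abstraction barrier for HTML generation can be as simple as a tag-builder class that is the only place angle brackets ever appear:

```python
class Tag:
    """Minimal HTML builder: callers compose Tag objects;
    only this class ever touches an angle bracket."""

    def __init__(self, name, text="", **attrs):
        self.name = name
        self.text = text
        self.attrs = attrs
        self.children = []

    def add(self, child):
        """Append a child tag and return it, for fluent composition."""
        self.children.append(child)
        return child

    def html(self):
        """Render this tag and its children to an HTML string."""
        attrs = "".join(f' {k}="{v}"' for k, v in self.attrs.items())
        inner = self.text + "".join(c.html() for c in self.children)
        return f"<{self.name}{attrs}>{inner}</{self.name}>"

page = Tag("div", **{"class": "page"})
page.add(Tag("h1", "FitNesse"))
print(page.html())  # <div class="page"><h1>FitNesse</h1></div>
```

The point of the barrier is that the rest of the system talks in terms of tags and pages, so swapping the rendering guts for a template engine later touches only this one module.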

Fear not, I am working through the FitNesse code replacing the HTML generation with Velocity templates. It’ll take some time, but I’ll get it done. The point is that, just like every other software system you’ve seen, FitNesse is a collection of historical compromises. The architecture shows the scars of many decisions that have since had to be reversed or deeply modified.

What does this have to do with CRAP? Simply this. The battle scarred architecture is something that will never really go away. I can stop the bleeding, and disinfect the wounds, but there will always be evidence of the battle.

That scarring makes the system hard to understand. It complicates the job of adding features and fixing bugs. It decreases the effectiveness of the developers who work on FitNesse. And though I work hard to massage the scars and bandage the wounds, the war goes on.

But I can keep the CRAP out! I can keep the code so clean and simple at the micro level, that the poor folks who try to make sense out of the macro scale aren’t impeded by huge deeply nested functions that aren’t tested!

Think of it this way. CRAP is disease at the cellular level. CRAP is a rot so pervasive that it can infest every nook and cranny of the system. My system may have scars, but it’s not diseased! In fact, despite the evidence of battles long past, the FitNesse code is very healthy.

And I aim to keep it that way.
