The Scatology of Agile Architecture

Posted by Uncle Bob Sat, 25 Apr 2009 20:23:09 GMT

One of the more insidious and persistent myths of agile development is that up-front architecture and design are bad; that you should never spend time up front making architectural decisions. That instead you should evolve your architecture and design from nothing, one test-case at a time.

Pardon me, but that’s Horse Shit.

This myth is not part of agile at all. Rather it is a hyper-zealous response to the real Agile proscription of Big Design Up Front (BDUF). There should be no doubt that BDUF is harmful. It makes no sense at all for designers and architects to spend month after month spinning system designs based on a daisy-chain of untested hypotheses. To paraphrase John Gall: Complex systems designed from scratch never work.

However, there are architectural issues that need to be resolved up front. There are design decisions that must be made early. It is possible to code yourself into a very nasty cul-de-sac that you might avoid with a little forethought.

Notice the emphasis on size here. Size matters! ‘B’ is bad, but ‘L’ is good. Indeed, LDUF is absolutely essential.

How big are these B’s and L’s? It depends on the size of the project of course. For most projects of moderate size I think a few days ought to be sufficient to think through the most important architectural issues and start testing them with iterations. On the other hand, for very large projects, I see nothing wrong with spending anywhere from a week to even a month thinking through architectural issues.

In some circles this early spate of architectural thought is called Iteration 0. The goal is to make sure you’ve got your ducks in a row before you go off half-cocked and code yourself into a nightmare.

When I work on FitNesse, I spend a lot of time thinking about how I should implement a new feature. For most features I spend an hour or two considering alternative implementations. For larger features I’ve spent one or two days batting notions back and forth. There have been times when I’ve even drawn UML diagrams.

On the other hand, I don’t allow those early design plans to dominate once I start TDDing. Often enough the TDD process leads me in a direction different from those plans. That’s OK, I’m glad I made those earlier plans. Even if I don’t follow them they helped me to understand and constrain the problem. They gave me the context to evaluate the new solution that TDD helped me to discover. To paraphrase Eisenhower: Individual plans may not turn out to be helpful, but the act of planning is always indispensable.

So here’s the bottom line. If you are working in an Agile team, don’t feel guilty about taking a day or two to think some issues through. Indeed, feel guilty if you don’t take a little time to think things through. Don’t feel that TDD is the only way to design. On the other hand, don’t let yourself get too vested in your designs. Allow TDD to change your plans if it leads you in a different direction.

C++ shared_ptr and circular references, what's the practice?

Posted by Brett Schuchert Sat, 25 Apr 2009 07:30:00 GMT

I’m looking for comments on the practice of using shared pointers in C++. I’m not actively working on C++ projects these days and I wonder if you’d be willing to give your experience using shared pointers, if any.

I’m porting one of our classes to C++ from Java (it’s already in C#). So to remove memory issues, I decided to use boost::shared_ptr. It worked fine until I ran a few tests that resulted in a circular reference between objects.

Specifically:
  • A book may have a receipt (this is a poor design, that’s part of the exercise).
  • A receipt may have a book.

Both sides of the relationship are 0..1. After creating a receipt, I end up with a circular reference between Receipt and Book.

In the existing Java and C# implementations, there was no cleanup code in the test teardown to handle what happens when the receipt goes away. This was not a problem since C# and Java garbage collection algorithms easily handle this situation.

Shared pointers, however, do not handle this at all. They are good, sure, but not as good as a generation-scavenging garbage collector (or whatever algorithms are used these days – I know the 1.6 JVM sometimes uses the stack for dynamic allocation based on the JIT, so it’s much more sophisticated than a simple generation-scavenger, right?).
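To make the problem concrete, here is a rough sketch of the kind of cycle involved (illustrative names only, not the actual exercise code):

#include <boost/shared_ptr.hpp>

struct Book;

struct Receipt {
    boost::shared_ptr<Book> book;       // back-reference to the borrowed book
};

struct Book {
    boost::shared_ptr<Receipt> receipt; // created when the book is borrowed
};

void demonstrateLeak() {
    boost::shared_ptr<Book> b(new Book);
    b->receipt.reset(new Receipt);
    b->receipt->book = b;               // cycle: Book -> Receipt -> Book
}   // b goes out of scope here, but each object still owns the other

Once the last external shared_ptr is gone, the Book and the Receipt each hold the only remaining reference to the other, so neither use count ever reaches zero and both objects leak.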

OK, so how to fix this problem? One way would be to manually break the circularity:
boost::shared_ptr<Receipt> r = ...;
CHECK(xxx, yyy);
r->setCopy(boost::shared_ptr<Book>());

(I did not use these types like this. When I use templates, especially those in a namespace, I use typedefs and I even, gasp, use Hungarian-esque notation.)

That would work, though it is ugly. Also, it is error prone and will either require violating DRY or making an automatic variable a field.

I could have removed the back reference from the Receipt to the Book. That’s OK, but it is a redesign of a system deliberately written with problems (part of the assignment).

Maybe I could explicitly “return” the book, which could remove the receipt and the back-reference. That would make the test teardown a bit more complex (and sort of upgrade the test from a unit test to something closer to an integration test), but it makes some sense. The test validates borrowing a book, so to clean up, return the book.

Instead of any of these options, I decided to use a boost::weak_ptr on the Receipt side. (This is the “technology to the rescue” solution, thus my question: is this an OK approach?)

I did this since the lifetime of a book object is much longer than that of its receipt (you return your library books, right?). Also, the Receipt only exists because of a Book, but the Book could exist indefinitely without a Receipt.
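Sketched with the same illustrative names, the ownership now looks roughly like this: the Book owns its Receipt, and the Receipt merely observes the Book, so no ownership cycle can form.

#include <boost/shared_ptr.hpp>
#include <boost/weak_ptr.hpp>

struct Book;

struct Receipt {
    boost::weak_ptr<Book> book;         // non-owning back-reference
};

struct Book {
    boost::shared_ptr<Receipt> receipt; // owning reference; destroyed with the Book
};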

This fixed the problem right away. I got a clean run using CppUTest. All tests passed and no memory leaks.

Once I had the test working, I experimented. Why? The use of a weak_ptr exposes some underlying details that I didn’t like exposing. For example, this line of code:
aReceipt->getBook()->getIsbn();

(Yes, violating Law of Demeter, get over it, the alternative would make a bloated API on the Book class.)

Became instead:
aReceipt->getBook().lock()->getIsbn();

The lock() method promotes a weak_ptr to a shared_ptr for the life of the expression. In this case, it’s a temporary in that line of code.

This worked fine, but I decided to put that promotion into the Receipt class. So internally the class stores a weak_ptr, but when you ask the receipt for its book, it does the lock:
boost::shared_ptr<Book> getBook() {
    return book.lock();
}

On the one hand, anybody using the getBook() method is paying the price of the promotion. However, the weak_ptr doesn’t allow access to its payload without the promotion, so the promotion is really required for the pointer to be of any value. Or at least that’s my take on it.
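One caveat with this wrapper: lock() hands back an empty shared_ptr if the Book has already been destroyed, so a defensive caller of getBook() might check before dereferencing (a sketch, using the same names as above):

boost::shared_ptr<Book> book = aReceipt->getBook();
if (book) {
    book->getIsbn();   // safe: the Book is kept alive for the scope of 'book'
}

In the library example the Book outlives the Receipt, so the check may never fire, but it documents the weak ownership at the call site.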

Do you have different opinions?

Please keep in mind, this is example code we use in class to give students practice naming things like methods and variables and also practice cleaning up code by extracting methods and such.

Even so, what practice do you use, if any, when using shared_ptr? Do you use weak_ptr?

Thanks in advance for your comments. I’ll be reading and responding as they come up.

Crap Code Inevitable? Rumblings from ACCU.

Posted by Uncle Bob Thu, 23 Apr 2009 09:56:22 GMT

I gave the opening Keynote at ACCU 2009 on Wednesday. It was entitled: The Birth of Craftsmanship. Nicolai Josuttis finished the day with the closing keynote: Welcome Crappy Code – The Death of Code Quality. It was like a blow to the gut.

In my keynote I attempted to show the historical trajectory that has led to the emergence of the software craftsmanship movement. My argument was that since the business practices of SCRUM have been widely adopted, and since teams who follow those practices but do not follow the technical practices of XP experience a relentless decrease in velocity, and since that decrease in velocity is exposed by the transparency of scrum, it follows that the eventual adoption of those technical XP practices is virtually assured. My conclusion was that Craftsmanship was the “next big thing” (tm) that would capture the attention of our industry for the next few years, driven by the business need to increase velocity. (See Martin Fowler’s blog on Flaccid Scrum.) In short, we are on a trajectory towards a higher degree of professionalism and craftsmanship.

Nicolai’s thesis was the exact opposite of mine. His argument was that we are all ruled by marketing and that businesses will do whatever it takes to cut costs and increase revenue, and therefore businesses will drive software quality inexorably downward. He stipulated that this will necessarily create a crisis as defect rates and deadline slips increase, but that all attempts to improve quality will be short lived and followed by a larger drive to decrease quality even further.

Josuttis’ talk was an hour of highly depressing rhetoric couched in articulate delivery and brilliant humor. One of the more memorable moments came when he playacted how a manager would respond to a developer’s plea to let them write clean code like Uncle Bob says. The manager replies: “I don’t care what Uncle Bob says, and if you don’t like it you can leave and take Uncle Bob with you.”

One of the funnier moments came when Josuttis came up with his rules for crap code, one of which was “Praise Copy and Paste”. Here he showed the evolution of a module from the viewpoint of clean code, and then from the viewpoint of copy-paste. His conclusion, delivered with a lovely irony, was that the copy-paste solution was more maintainable because it was clear which code belonged to which version.

It was at this point that I thought that this whole talk was a ribald joke, an elaborate spoof. I predicted that he was about to turn the tables on everyone and ringingly endorse the craftsmanship movement.

Alas, it was not so. In the end he said that he was serious about his claims, and that he was convinced that crap code would dominate our future. And then he gave his closing plea which went like this:

We finally accepted that requirements change, and so we invented Agile.

We must finally accept that code will be crap and so we must ???

He left the question marks on the screen and closed the talk.

This was like a blow to the gut. The mood of the conference changed, at least for me, from a high of enthralled geekery to the depths of hopelessness and feelings of futile striving against the inevitable. Our cause was lost. Defeat was imminent. There was no hope.

Bull’s Bollocks!

To his credit, there are a few things that Josuttis got right. There is a lot of crap code out there. And there is a growing cohort of crappy coders writing that crap code.

But the solution to that is not to give up and become one of them. The solution to that is to design our systems so that they don’t require an army of maintainers slinging code. Instead we need to design our systems such that the vast majority of changes can be implemented in DSLs that are tuned to business needs, and do not require “programmers” to maintain.

The thing that Josuttis got completely wrong, in my mildly arrogant opinion, is the notion that low quality code is cheaper than high quality code. Low quality code is not cheaper; it is vastly more expensive, even in the short term. Bad code slows everyone down from the minute that it is written. It creates a continuous and copious drag on further progress. It requires armies of coders to overcome that drag; and those armies must grow exponentially to maintain constant velocity against that drag.

This strikes at the very heart of Josuttis’ argument. His claim that crappy code is inevitable is based on the notion that crappy code is cheaper than clean code, and that therefore businesses will demand the crap every time. But it has generally not been business that has demanded crappy code. Rather it has been developers who mistakenly thought that the business’ need for speed meant that they had to produce crappy code. Once we, as professional developers, realize that the only way to go fast is to create clean and well designed code, then we will see the business’ need for speed as a demand for high quality code.

My vision of the future is quite different from Josuttis’. I see software developers working together to create a discipline of craftsmanship, professionalism, and quality similar to the way that doctors, lawyers, architects, and many other professionals and artisans have done. I see a future where team velocities increase while development costs decrease because of the steadily increasing skill of the teams. I see a future where large software systems are engineered by relatively small teams of craftsmen, and are configured and customized by business people using DSLs tuned to their needs.

I see a future of Clean Code, Craftsmanship, Professionalism, and an overriding imperative for Code Quality.

Is the Supremacy of Object-Oriented Programming Over?

Posted by Dean Wampler Tue, 21 Apr 2009 02:45:00 GMT

I never expected to see this. When I started my career, Object-Oriented Programming (OOP) was going mainstream. For many problems, it was and still is a natural way to modularize an application. It grew to (mostly) rule the world. Now it seems that the supremacy of objects may be coming to an end, of sorts.

I say this because of recent trends in our industry and my hands-on experience with many enterprise and Internet applications, mostly at client sites. You might be thinking that I’m referring to the mainstream breakout of Functional Programming (FP), which is happening right now. The killer app for FP is concurrency. We’ve all heard that more and more applications must be concurrent these days (which doesn’t necessarily mean multithreaded). When we remove side effects from functions and disallow mutable variables, our concurrency issues largely go away. The success of the Actor model of concurrency, as used to great effect in Erlang, is one example of a functional-style approach. The rise of map-reduce computations is another example of a functional technique going mainstream. A related phenomenon, the emergence of key-value store databases like BigTable and CouchDB, is a reaction to the overhead of SQL databases when the performance cost of the Relational Model isn’t justified. These databases are typically managed with functional techniques, like map-reduce.

But actually, I’m thinking of something else. Hybrid languages like Scala, F#, and OCaml have demonstrated that OOP and FP can complement each other. In a given context, they let you use the idioms that make the most sense for your particular needs. For example, immutable “objects” and functional-style pattern matching are a killer combination.

What’s really got me thinking that objects are losing their supremacy is a very mundane problem. It’s a problem that isn’t new, but like concurrency, it just seems to grow worse and worse.

The problem is that there is never a stable, clear object model in applications any more. What constitutes a BankAccount or Customer or whatever is fluid. It changes with each iteration. It’s different from one subsystem to another even within the same iteration! I see a lot of misfit object models that try to be all things to all people, so they are bloated and the teams that own them can’t be agile. The other extreme is “balkanization”, where each subsystem has its own model. We tend to think the latter case is bad. However, is lean and mean but non-standard really worse than bloated but standardized?

The fact is, for a lot of these applications, it’s just data. The ceremony of object wrappers doesn’t carry its weight. Just put the data in a hash map (or a list if you don’t need the bits “labeled”) and then process the collection with your iterate, map, and reduce functions. This may sound heretical, but how much Java code could you delete today if you replaced it with a stored procedure?
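For a tiny illustration of the idea (sketched in C++ here purely for concreteness; the point is language-neutral), treat each record as plain keyed data and lean on generic iteration instead of a dedicated class per concept:

#include <map>
#include <string>
#include <vector>

typedef std::map<std::string, double> Record;    // "just data": labeled values, no wrapper class

double totalBalance(const std::vector<Record>& accounts) {
    double total = 0.0;                           // the "reduce" step: fold the balances into one number
    for (std::vector<Record>::const_iterator it = accounts.begin(); it != accounts.end(); ++it) {
        Record::const_iterator field = it->find("balance");
        if (field != it->end())
            total += field->second;
    }
    return total;
}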

These alternatives won’t work for all situations, of course. Sometimes polymorphism carries its weight. Unfortunately, it’s too tempting to use objects as if more is always better, like cowbell.

So what would replace objects for supremacy? Well, my point is really that there is no one true way. We’ve led ourselves down the wrong path. Or, to be more precise, we followed a single, very good path, but we didn’t know when to take a different path.

Increasingly, the best, most nimble designs I see use objects with a light touch: shallow hierarchies, small objects that try to obey the Single Responsibility Principle, composition rather than inheritance, etc. Coupled with a liberal use of functional idioms (like iterate, map, and reduce), these designs strike the right balance between the protection of data hiding and openness for easy processing. By the way, you can build these designs in almost any of our popular languages. Some languages make this easier than others, of course.

Despite the hype, I think Domain-Specific Languages (DSLs) are also very important and worth mentioning in this context. (Language-Oriented Programming – LOP – generalizes these ideas). It’s true that people drink the DSL Kool-Aid and create a mess. However, when used appropriately, DSLs reduce a program to its essential complexity, while hiding and modularizing the accidental complexity of the implementation. When it becomes easy to write a user story in code, we won’t obsess as much over the details of a BankAccount as they change from one story to another. We will embrace more flexible data persistence models, too.

Back to OOP and FP, I see the potential for their combination to lead to a rebirth of the old vision of software components, but that’s a topic for another blog post.

FitNesse.Slim Table Table Tutorial and a few minor features

Posted by Brett Schuchert Sun, 19 Apr 2009 04:45:00 GMT

Finally, the last in the table series is ready for prime time: http://schuchert.wikispaces.com/FitNesse.Tutorials.TableTables.

So what’s next? What do you want to see in additional tutorials?
  • New FitNesse.Slim features
  • Acceptance Test Staging
  • ...
During this past week I added a few things into FitNesse (very small things compared to what others are doing). When the next release happens (or if you build from source):
  • If you create a page whose name ends in “Examples”, FitNesse will set its type to Suite.
  • If you create a page whose name begins with or ends with “Example” (in addition to “Test”), FitNesse will set its type to Test.
  • The SetUp and TearDown links are back. They add a SetUp or TearDown page as a child of the page you are currently viewing. So if you wanted to add a SetUp/TearDown for a suite, go to the suite page and click SetUp or TearDown.
  • By default, when you build from source, ant starts FitNesse on port 8080 to run acceptance tests. This can be a problem if you, like me, typically keep FitNesse running on port 8080. You can set an environment variable, FITNESSE_TEST_PORT, and the ant build.xml will pick up that environment variable and use the port specified instead.

Enjoy!

Wormholes, FitNesse and the return of SetUp and TearDown links

Posted by Brett Schuchert Thu, 16 Apr 2009 18:33:00 GMT

Recently, I was working on adding back a feature to FitNesse that had been removed: links at the bottom of each page to add a setup or teardown page to the current page. After racking my brain and spending time in git with file histories, I discovered a point in time where the feature was there and the next commit where it was gone. It was not obvious to me what had changed to break the feature until I talked with Bob Martin (much of this has to do with my lack of experience using git). He mentioned a change in the handling of importing header and footer pages (related to my problem) and sure enough, when I took a look in the debugger, I found that the information I needed to reintroduce the feature had essentially been removed as a result of a bug fix.

This was, apparently, not a heavily used feature. In fact, I had not used it much until I recently started working on tutorials for FitNesse. And given the release criterion for FitNesse, removal of the feature did not break anything (no acceptance tests or unit tests failed).

Anyway, the point at which the information I needed was available and the point where I needed to use it were many steps away from each other, both in the call stack and in the instance hierarchy (think composite pattern). I did not want to significantly change the method call hierarchy, so I instead decided to hand-roll the wormhole pattern as it manifests itself in AspectJ (and not the wormhole anti-pattern).

For background on AspectJ and AOP in general, have a look at these self-study tutorials.

The wormhole pattern is well-documented in AspectJ in Action. It consists of two pointcuts that combine two different, separated parts of the system. It grabs information available at (in this case) the entry point to the system and makes that information available deeper in the system without passing it along as a parameter. That’s where the name comes from: the pattern bridges information across two otherwise unconnected points via the wormhole. It is my favorite pattern name (not favorite pattern, just name).

AspectJ happens to store certain runtime information in thread local storage, and the wormhole pattern exploits this fact. Here’s another example that’s close to the wormhole pattern. This is actually a common technique: if you’ve read up on the recommendations for using Hibernate in a JSE environment, either in “Hibernate in Action” or the more recent Java Persistence with Hibernate, one recommendation is to pass around sessions and such in thread local variables. Under the covers, JEE containers do the same thing.

Even though this is a “common” technique, I’m not a huge fan of using thread-locals:
  • They are thread-specific global variables.
  • You had better be sure there’s no threading between the two points.
In this case, the threading has already happened. Once a FitNesse responder gets hold of an HTTP request, the remaining processing is in a single thread.

On the other hand, if I did not use thread local storage, the change required to get the information I needed would have required either changing one of the objects already in the parameters being passed (very ugly) or changing method signatures all over the place (somewhat less ugly but a stronger violation of LSP). So in this case, I think thread local variables are the least ugly of the available options.

(As a side note, that’s my definition of design: Select the solution that sucks the least.)

If you’re like me, you don’t use thread locals very often. I wanted to make sure that the thread local information would get properly cleaned up and I wanted to hide all of the magic in one place, so I created a simple class called ThreadLocalUtil. I used TDD to create the class, and when I thought I was done, I wanted to make sure that I had written the clear() method correctly. I knew that for a single thread the clear() method worked as expected, but I wanted to make sure it did not affect other threads.

So my problem was I needed 2 threads and I wanted a particular ordering of events:
  • T1: Store value in thread-local storage.
  • T2: Clear its local storage.
  • T1: Read its local storage, value stored should still be available.
This really isn’t a hard test to write other than the ordering of events. To make that work, I used latches and had the threads manually signal each other. Here’s the test:
package fitnesse.threadlocal;

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNull;

import java.util.concurrent.CountDownLatch;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class ThreadLocalUtilTest {
  private String valueFound;

  // snip - several other tests removed to focus on this test

  CountDownLatch t1Latch = new CountDownLatch(1);
  CountDownLatch t2Latch = new CountDownLatch(1);

  class T1 implements Runnable {
    public void run() {
      try {
        ThreadLocalUtil.setValue("t1", "value");
        t2Latch.countDown();
        Thread.yield();
        t1Latch.await();
        valueFound = ThreadLocalUtil.getValue("t1");
      } catch (InterruptedException e) {
        e.printStackTrace();
      }
    }
  }

  class T2 implements Runnable {
    public void run() {
      try {
        t2Latch.await();
        ThreadLocalUtil.clear();
        t1Latch.countDown();
      } catch (InterruptedException e) {
        e.printStackTrace();
      }
    }
  }

  @Test
  public void assertThatClearInOneThreadDoesNotMessUpAnotherThread()
      throws InterruptedException {
    Thread t1 = new Thread(new T1());
    Thread t2 = new Thread(new T2());
    t1.start();
    t2.start();
    t1.join();
    t2.join();
    assertEquals("value", valueFound);
  }
}

This example illustrates using the java.util.concurrent.CountDownLatch class to signal between threads.

Main Test Method
  • Creates two threads and starts them.
  • Waits for both threads to complete.
T1.run(), T2.run()
  • If T2 starts running before T1, no problem; it waits on its countdown latch.
  • When T1 starts, it stores a value in a thread local using the util class.
  • T1 then counts down, releasing T2.
  • T1 then yields, it is done until its countdown latch is signaled.
  • T1 waits for a signal.
  • T2 is released from its countdown latch.
  • T2 sends clear to the thread local util class (there’s a test that verifies clear() works as expected in a single thread).
  • T2 then signals T1 to continue by calling its countdown latch.
  • T2 completes at some point later.
  • T1 starts running again, grabs the value out of thread local storage, and puts it in valueFound.
  • T1 completes.
Main Test Method
  • The completion of T1 and T2 causes the join calls in the test method to return, whereupon the test can complete.
  • Verifies that the variable valueFound, set in T1.run, stores the expected result.

This kind of hand-rolled synchronization is certainly error prone. This is where having a second pair of eyes can really help. However, this seemed to verify that the clear method as written worked in multiple threads as expected.

If you’re interested in seeing the ThreadLocalUtil class or ThreadLocalUtilTest class, grab a copy of FitNesse from github.

X Tests are not X Tests

Posted by Michael Feathers Mon, 13 Apr 2009 16:48:00 GMT

Testing is a slippery subject, and it’s reasonably hard to talk about for one simple reason: the nomenclature is chaotic. Years ago, I went to a summit with some testing gurus. I was one of the lone developers there and I asked about the taxonomy of testing. Cem Kaner, Bret Pettichord, Brian Marick, and James Bach went through it for us on a flipchart, and it was a nightmare. You can name tests after their scope (unit, component, system), their place in the development process (smoke, integration, acceptance, regression), their focus (performance, functional), their visibility (white box, black box), or the role of the people writing them (developer, customer). The list goes on. There are far more than I can remember.

Why is it so confusing? There are a couple of reasons. One is that different communities have developed different nomenclature over time. But, let’s face it, that’s true in most fields. The thing which makes testing nomenclature worse is that the tests themselves aren’t all that different, or at least, they are often not different enough for us to distinguish them without being told. Yes, we can tell the difference between a unit test and an acceptance test in most systems, but really there is no force which prevents tests of different types from bleeding through into each other. Often the “type” of a test is more like an attribute: “here I have a blackbox smoke test, written by a developer for component integration.” In the end, all we have are tests and each of them can serve purposes beyond the purpose we originally intended.

Earlier today, I read a blog by Stephen Walther: TDD Tests are not Unit Tests. In it, he draws some distinctions between various types of testing. It’s great that he wrote it because it’s nice for us to have mental categories for these things, but what we have to remember is that they really are just categories. We get to choose how distinct they will be. When I write code, most of my TDD tests end up being the same as my unit tests. I find value in forcing that overlap, and in general, I think overlapping test purposes are great to the degree that the purposes don’t conflict. You get more for less that way.

I don’t see any remedy for the muddle of test types. We will continue to make up terms to distinguish tests. We’ll just have to remember that the types are labels, not bins.

Twitter Does Not Allow For Nuance

Posted by Brett Schuchert Mon, 13 Apr 2009 03:26:00 GMT

If you have something deep to say, 140 characters is not going to cut it very often. And sometimes, when it does, it’s almost too opaque to be grasped by anybody who doesn’t already grok it.

Here’s one example:
Polymorphism – Same request, different response.

That’s the essence of polymorphism, which can help with the SRP and the OCP. That one happens to fit into a single bullet of < 140 characters. However, there’s a lot there. In fact, while that could be the definition, it has many ramifications and the context matters. So this probably falls in the opaque category.

A few days ago I made a pithy statement on twitter:
Test is Definition (TDD), therefore code w/o Test, not defined. Therefore, it is broken (or never wrong, take your pick).
I wanted to make that fit in a single tweet. I forgot to include to whom I was replying (and now I cannot remember [sorry]). Anyway, I got the following replies:
@dws – Test is a form of definition. It’s a very good form, but it’s not the only one.
@jamesmarcusbach – If test is definition, then * must be the same as +, because 2+2=4 & 2 * 2=4. No, a test is just an event.
@ecomba I love this statement!
I wanted to reply with a little more length, so I figured this was a good place to do it.
To @ecomba, thank you. I’m assuming your reply was in response to that statement, but if not, then thanks anyway!-)
To jamesmarcusbach – I do not agree. When I assess “Test is Definition (TDD)”, that suggests to me that there are many, many unit tests (and even acceptance tests, load tests, smoke tests, manual tests, exploratory tests, debugging, ...). The union of all of those tests forms the definition. You’ve picked one example and erroneously extrapolated from it to discount a tweet, and I don’t think you did it. (And to be clear, I strongly prefer certain forms of tests over others.)

I also do not agree with your interpretation of testing as an event. At the very least it is a process. Even more so, it is a continuous process that is only done when the project is done, which is when the customer stops paying. So I think you and I are using the same 4 letters (test) in very different ways. I suspect, however, that I’ve committed the same error in interpreting your use of the word event as you’ve interpreted my use of the word test.

And finally, I don’t agree with your example for many reasons, two of which are:
  • I mentioned TDD; I don’t check the compiler very often, so I won’t be testing 2+2 or 2*2.
  • However, if I were, I would not pick only your two examples. I’d have many, trying to capture all of the equivalence classes.

To @dws – Sure, test is a form of definition. And I also agree that it is not the only form. I never said it was the only form (I think that came from you). As for not including the word form in my tweet, I did include TDD. Does that not invoke a large context, part of which is that TDD can possibly be a form of definition? Of course, saying Test is Definition is actually a metaphor, right?
OK, having replied with > 140 characters, I’m going to restate the tweet. Since the restatement is longer than 140 characters it will have the luxury of being wrong in many more ways than the original.
Test is one form of Definition (TDD). If you do not have any other form [the context of the original tweet I believe, to which I was responding but mistakenly forgot to include the @..] (e.g., some requirements specification or a verbal agreement with some sales person), then the tests are one good definition of what is/is not correct. If we go with the definition of our system in terms of the tests, then where there are no tests, there is no definition and therefore any behavior is OK. Sure you can argue the system should not crash when a user enters a character into a field that expects numbers, but really, if that behavior is not defined, saying “it should not do that because of common sense” really is saying “Well I assumed it would not do that; you violated my assumption, therefore I will prove you wrong in a battle royal.” Even more, it’s great because when it happens, the user will be so mad that s/he will make sure to let you know your definition of the system is incorrect. You can respond by writing another test to improve the fidelity of your understanding of the system.
This last part is really a tip of the hat to Jerry Weinberg who said (and I paraphrase probably incorrectly):
If there are no requirements, any solution will do.
Of course, he was probably referring to Alice in Wonderland…
There’s a lot more to this subject. For example, I don’t believe in proving systems correct. Why? Even if you’ve proven that your system conforms 100% to your formal specification, there’s no reasonable way to prove:
  • Your formal description is complete (yes you can for simple data structures, BFD, don’t care about formally proving a simple data structure).
  • For any complex system described in terms of a formal language, the original inception of the system was in natural language. Prove the transformation, and then prove the natural language specification is correct/complete.
  • I’m a big fan of Gödel’s incompleteness theorem. It is related to this, at least loosely.

By logical extension, you can pick up from this that I’m not a big fan of using formal languages like UML to build my system “from the diagrams.”

So in conclusion I like my original statement. I understand that it is not literally true or “The Truth.” It’s a way of thinking about things. I think the feedback was valuable because communication is ambiguous.

I could have been clearer. Maybe I should not have even tried to express the idea on Twitter. By throwing something out there, though, it gets refined through feedback and there’s a better understanding to be had. At some point we can create some pithy statement that has all of the meaning and none of the meaning at the same time. When we’ve done that, we get to start all over again.

Maybe the statement is simply wrong. At the very least it has an agenda. Question is, does it help, hinder or simply represent a single drop in an infinite bucket?

Flame on. I deserve it.

FitNesse Scenario Tables

Posted by Brett Schuchert Fri, 10 Apr 2009 07:21:00 GMT

This is a slightly rougher draft than the previous tutorials. Bob and I have talked about writing a new FitNesse book and these tutorials are practice for one part of the book. By the time these examples make it into a book, they will be the 3rd or 4th major revision.

In any case, here’s a tutorial that picks up where Script tables left off: http://schuchert.wikispaces.com/FitNesse.Tutorials.ScenarioTables.

I’ll be updating it over the weekend and into next week. I also hope to get the table table example done so I’ll have a complete set. If you are chomping at the bit for a table table example, let me know.

If you have comments about any of the tutorials or a wish list of what you’d like to see in a book related to acceptance testing with FitNesse, please post a comment or email me directly. shoe at objectmentor dot com.

FitNesse Script Tables Tutorials + updates

Posted by Brett Schuchert Tue, 07 Apr 2009 16:18:00 GMT

There is now a 4th FitNesse tutorial at: http://schuchert.wikispaces.com/FitNesse.Tutorials.ScriptTables. As the URL suggests, this is about Script Tables.

Next up: Scenario Tables and then Table Tables.

One other change: each of the first 4 tutorials now has source available at github, along with tags:
  • FitNesse.Tutorials.0.Start
  • FitNesse.Tutorials.1.Start
  • FitNesse.Tutorials.2.Start
  • FitNesse.Tutorials.ScenarioTables.Start
  • FitNesse.Tutorials.ScriptTables.Start

So you can start at any of the tutorials rather than having to work your way through each one from the beginning. (Of course, that last tag is my starting point for the next tutorial, which will take a few days to update and add to this sequence of tutorials.)

Comments/suggestions/requests please.
