How to Guarantee That Your Software Will Suck

Posted by Uncle Bob Tue, 09 Dec 2008 01:35:17 GMT

This blog is a quick comment about Justin Etheredge’s blog of the same name.

I thought the blog was good. Really. No, I did. It’s a pretty good blog. Honestly.

My problem with it is that it points the finger outward. It’s as though software developers have no responsibility. The blog seems to suggest that projects fail because managers do dumb-ass things like not buying dual monitors, setting deadlines, and requiring documentation.

Reading a blog like Justin’s may make you feel like high-five-ing and doing a little touch-down jig. OK, fine. And, after all, there’s some truth to what Justin has to say. But there’s another side of the coin too. A pretty big side.

Your software will suck if you write it badly. Yes, you should have good tools. Yes, you should work under realistic schedules. Yes, you should have time for social interaction. But these aren’t the things that make software suck. YOU make your software suck.

Can you write good software with just one monitor? Of course you can. It might not be ideal, but what is?

Can you write good software if the deadlines are unreasonable? Of course you can! The definition of an unreasonable deadline is a deadline you won’t make, so you might as well make the code as good as it can be in the time you’ve got. If we’ve learned anything in the last 50 years it’s that rushing won’t get you to the deadline faster.

Can you write good software if you also have to write documentation? Can you write good software if your machine isn’t top-of-the-line? Can you write good software while standing on your head under water? (er, well, I’ll give you that might be tough, but for all the others:) Of course you can!

Don’t get me wrong. I think short-shrifting on tools, monitors, and schedules is stupid. I think Justin’s points are all valid. But the burden doesn’t fall solely upon management. We also have to do our jobs well.

The promise of Shalt.

Posted by Uncle Bob Sun, 07 Dec 2008 21:41:56 GMT

When writing requirements we often use the word “shall”. In RSpec, we use the word “should”. Some folks use the word “must”. These statements create the connotation of a law, or a command.

Let’s take a very old law: “Thou shalt not kill.” There are two ways to look at this law. We could look at it as a command[ment], or … we could look at it as a promise.

Here are the two different versions:

  • Thou shalt not kill!
  • Thou shalt not kill.

The difference is in the punctuation. And what a difference it makes. One is a stern order, a demand. The other is a promise… a confident guarantee… a statement of simple truth.

Now look at these RSpec statements from RubySlim. I started writing the specs this way because it was shorter, but I like the connotation of confidence and truth. “I can has cheezburger.”


describe StatementExecutor do
  before do
    @caller = StatementExecutor.new
  end

  it "can create an instance" do
    response = @caller.create("x", "TestModule::TestSlim",[])
    response.should == "OK" 
    x = @caller.instance("x")
    x.class.name.should == "TestModule::TestSlim" 
  end

  it "can create an instance with arguments" do
    response = @caller.create("x", "TestModule::TestSlimWithArguments", ["3"])
    response.should == "OK" 
    x = @caller.instance("x")
    x.arg.should == "3" 
  end

  it "can't create an instance with the wrong number of arguments" do
    result = @caller.create("x", "TestModule::TestSlim", ["noSuchArgument"])
    result.should include(Statement::EXCEPTION_TAG + "message:<<COULD_NOT_INVOKE_CONSTRUCTOR TestModule::TestSlim[1]>>")
  end

  it "can't create an instance if there is no class" do
    result = @caller.create("x", "TestModule::NoSuchClass", [])
    result.should include(Statement::EXCEPTION_TAG + "message:<<COULD_NOT_INVOKE_CONSTRUCTOR TestModule::NoSuchClass failed to find in []>>")
  end
end
This makes me want to change all the “should” statements into “will” statements, or “does” statements.

Automating the Saff Squeeze

Posted by Uncle Bob Sun, 30 Nov 2008 20:53:30 GMT

I want the JetBrains and Eclipse people to automate The Saff Squeeze.

Imagine that you have a Feathers Characterization test that fails. You click on the test and then select Tools->Saff Squeeze. The IDE then goes through an iteration of inlining the functions in the test and Jester-ing out any code that doesn’t change the result of the assertion.

I think that by using internal code coverage monitoring to see the paths being taken, and by intelligent Monte Carlo modification, you could automatically reduce the code down to the minimum needed to maintain the defect.

How cool would that be?

Marick's Law

Posted by Uncle Bob Sat, 29 Nov 2008 20:12:00 GMT

A month ago I was deep in the throes of shipping the current release of FitNesse. I just wanted to get it done. I was close to delivery when I spotted a subtle flaw. To fix this flaw I decided to insert identical if statements into each of 9 implementations of an abstract function.

My green wrist band was glowing a nasty shade of puke. I knew I was duplicating code. I knew that I should use the Template Method pattern. But that just seemed too hard. I was convinced that it would be faster to spew the duplicated code out into the derivatives, get the release done, and then clean it up later.

So this morning I was doing something else, and I spotted this duplicated code. I sighed, as I looked down at my green wrist band, and thought to myself that I’d better eat my own dog food and clean this mess up before it gets any worse. I was dreading it.

I made sure that every occurrence of the statement was identical. Then I went to the base class with the intention of refactoring a Template Method. When, what to my wondering eyes should appear, but a Template Method that was already there.

I sheepishly copied and pasted the if statement from one of the derivatives into the Template Method, and then deleted the other eight instances.

I ran the tests.

They all passed.

Damn.

My green wrist band is shouting: “I TOLD YOU SO!”

For my penance I did 20 recitations of Marick’s law. “When it comes to code it never pays to rush.”
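For what it’s worth, the shape of that refactoring (one shared if statement in a base-class Template Method, with the varying step deferred to the derivatives) looks roughly like this sketch in Ruby; the class and method names are hypothetical, not FitNesse’s:

```ruby
# Hypothetical sketch of the Template Method refactoring described
# above. The shared 'if' statement lives once in the base class;
# the derivatives supply only the step that varies.
class Responder
  def respond(request)
    return "redirect" if request[:stale]  # the duplicated check, now in one place
    make_response(request)                # deferred to each derivative
  end
end

class WikiPageResponder < Responder
  def make_response(request)
    "wiki page for #{request[:resource]}"
  end
end

class FileResponder < Responder
  def make_response(request)
    "file contents of #{request[:resource]}"
  end
end
```

The shared statement is written once; deleting the other eight copies from the derivatives is then mechanical.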

The Truth about BDD

Posted by Uncle Bob Thu, 27 Nov 2008 22:25:38 GMT

I really like the concept of BDD (Behavior Driven Development). I think Dan North is brilliant, and has done us all a great service by presenting the concept.

OK, you can “feel” the “but” coming, can’t you?

It’s not so much a “but” as an “aha!”. (The punch line is at the end of this article, so don’t give up in the middle.)

BDD is a variation on TDD. Whereas in TDD we drive the development of a module by “first” stating the requirements as unit tests, in BDD we drive that development by first stating the requirements as, well, requirements. The form of those requirements is fairly rigid, allowing them to be interpreted by a tool that can execute them in a manner that is similar to unit tests.

For example,

  GIVEN an employee named Bob making $12 per hour.
  WHEN Bob works 40 hours in one week;
  THEN Bob will be paid $480 on Friday evening.

The Given/When/Then convention is central to the notion of BDD. It connects the human concept of cause and effect, to the software concept of input/process/output. With enough formality, a tool can be (and has been) written that interprets the intent of the requirement and then drives the system under test (SUT) to ensure that the requirement works as stated.
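To make the cause-and-effect mapping concrete, here is a rough Ruby sketch of how such a scenario might drive a system under test. The Payroll class and its API are invented for illustration; this is not the code of any real BDD tool:

```ruby
# A sketch of the input/process/output behind the scenario above.
# Each Given/When/Then clause maps onto one interaction with the
# (hypothetical) system under test.
class Payroll
  def initialize
    @employees = {}
  end

  # GIVEN an employee named Bob making $12 per hour
  def add_hourly(name, rate)
    @employees[name] = { rate: rate, hours: 0 }
  end

  # WHEN Bob works 40 hours in one week
  def record_hours(name, hours)
    @employees[name][:hours] += hours
  end

  # THEN Bob will be paid $480
  def pay(name)
    e = @employees[name]
    e[:rate] * e[:hours]
  end
end
```

After the Given and When steps run, `pay("Bob")` returns 480, which is the Then clause a tool would check.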

The argued benefit is that the language you use affects the way you think (see this), and so if you use a language closer to the way humans think about problems, you’ll get better thought processes and therefore better results.

To say this differently, the Given/When/Then convention stimulates better thought processes than the AssertEquals(expected, actual); convention.

But enough of the overview. This isn’t what I wanted to talk about. What struck me the other day was this…

The Given/When/Then syntax of BDD seemed eerily familiar when I first heard about it several years ago. It’s been tickling at the back of my brain since then. Something about that triplet was trying to resonate with something else in my brain.

Then yesterday I realized that Given/When/Then is very similar to If/And/Then; a convention that I have used for the last 20+ years to read state transition tables.

Consider my old standard state transition table: The Subway Turnstile:

|Current State|Event|New State|Action  |
|LOCKED       |COIN |UNLOCKED |Unlock  |
|LOCKED       |PASS |LOCKED   |Alarm   |
|UNLOCKED     |COIN |UNLOCKED |Thankyou|
|UNLOCKED     |PASS |LOCKED   |Lock    |

I read this as a set of If/And/Then sentences as follows:
  • If we are in the LOCKED state, and we get a COIN event, then we go to the UNLOCKED state, and we invoke the Unlock action.
  • If we are in the LOCKED state, and we get a PASS event, then we stay in the LOCKED state, and we invoke the Alarm action.
  • etc.
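That reading translates almost mechanically into code. Here is a minimal sketch of the turnstile as a transition table in Ruby; the class and symbol names are mine, for illustration only:

```ruby
# The turnstile transition table, written directly as data. Each
# (state, event) pair maps to a (new state, action) pair.
TRANSITIONS = {
  [:locked,   :coin] => [:unlocked, :unlock],
  [:locked,   :pass] => [:locked,   :alarm],
  [:unlocked, :coin] => [:unlocked, :thankyou],
  [:unlocked, :pass] => [:locked,   :lock]
}

class Turnstile
  attr_reader :state, :last_action

  def initialize
    @state = :locked
  end

  # "If we are in @state, and we get event, then..."
  def handle(event)
    @state, @last_action = TRANSITIONS.fetch([@state, event])
  end
end
```

Each If/And/Then sentence above is one entry in the TRANSITIONS hash; the machine itself is just a lookup.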

This strange similarity caused me to realize that GIVEN/WHEN/THEN is simply a state transition, and that BDD is really just a way to describe a finite state machine. Clearly “GIVEN” is the current state of the system to be explored. “WHEN” describes an event or stimulus to that system. “THEN” describes the resulting state of the system. GIVEN/WHEN/THEN is nothing more than a description of a state transition, and the sum of all the GIVEN/WHEN/THEN statements is nothing more than a Finite State Machine.

Perhaps if I rephrase this you might see why I think this is a bit more than a ho-hum.

Some of the brightest minds in our industry, people like Dan North, Dave Astels, David Chelimsky, Aslak Hellesoy, and a host of others, have been pursuing the notion that if we use a better language to describe automated requirements, we will improve the way we think about those requirements, and therefore write better requirements. The better language that they have chosen and used for the last several years uses the Given/When/Then form which, as we have seen, is a description of a finite state machine. And so it all comes back to Turing. It all comes back to marks on a tape. We’ve decided that the best way to describe the requirements of a system is to describe it as a Turing machine.

OK, perhaps I overdid the irony there. Clearly we don’t need to resort to marks on a tape. But still there is a grand irony in all this. The massive churning of neurons struggling with this problem over years and decades has reconnected the circle to the thoughts of that brave pioneer from the 1940s.

But enough of irony. Is this useful? I think it may be. You see, one of the great benefits of describing a problem as a Finite State Machine (FSM) is that you can complete the logic of the problem. That is, if you can enumerate the states and the events, then you know that the number of paths through the system is no larger than S * E. Or, rather, there are no more than S*E transitions from one state to another. More importantly, enumerating them is simply a matter of creating a transition for every combination of state and event.

One of the more persistent problems in BDD (and TDD for that matter) is knowing when you are done. That is, how do you know that you have written enough scenarios (tests)? Perhaps there is some condition that you have forgotten to explore, some pathway through the system that you have not described.

This problem is precisely the kind of problem that FSMs are very good at resolving. If you can enumerate the states and events, then you know the number of paths through the system. So if Given/When/Then statements are truly nothing more than state transitions, all we need to do is enumerate the GIVENs and the WHENs. The number of scenarios will simply be the product of the two.
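The enumeration idea can be sketched in a few lines, using the turnstile’s states and events as illustrative GIVENs and WHENs; the lists here are invented for the example:

```ruby
# A sketch of the completeness check: if scenarios are state
# transitions, the full set is the cross product of the
# enumerated GIVENs (states) and WHENs (events).
states = [:locked, :unlocked]      # the enumerated GIVENs
events = [:coin, :pass]            # the enumerated WHENs

required = states.product(events)  # every (state, event) combination

# Suppose the requirements document only covers these scenarios:
written = [[:locked, :coin], [:unlocked, :coin], [:unlocked, :pass]]

missing = required - written       # the paths nobody described
# missing == [[:locked, :pass]]
```

A tool doing this over a real requirements document would flag the LOCKED/PASS path as the scenario we forgot to write.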

To my knowledge, (which is clearly inadequate) this is not something we’ve ever tried before at the level of a business requirements document. But even if we have, the BDD mindset may make it easier to apply. Indeed, if we can formally enumerate all the Givens and Whens, then a tool could determine whether our requirements document has executed every path, and could find those paths that we had missed.

So, in conclusion, TDD has led us on an interesting path. TDD was adopted as a way to help us phrase low level requirements and drive the development of software based on those requirements. BDD, a variation of TDD, was created to help us think better about higher level requirements, and drive the development of systems using a language better than unit tests. But BDD is really a variation of Finite State Machine specifications, and FSMs can be shown, mathematically, to be complete. Therefore, we may have a way to conclusively demonstrate that our requirements are complete and consistent. (Apologies to Gödel.)

In the end, the BDDers may have been right that language improves the way we think about things. Certainly in my silly case, it was the language of BDD that resonated with the language of FSM.

Slim Comparison Operators

Posted by Uncle Bob Sat, 22 Nov 2008 01:49:47 GMT

This video describes how to use the comparison operators in SLIM.

SLIM Tutorial

Posted by Uncle Bob Wed, 19 Nov 2008 01:51:00 GMT

This is a video overview of the new SLIM test system in the latest release of FitNesse.

Dirty Rotten ScrumDrels

Posted by Uncle Bob Sun, 16 Nov 2008 12:50:14 GMT

So we’ve finally found the answer. We know who’s to blame. It’s SCRUM! SCRUM is the reason that the agile movement is failing. SCRUM is the reason that agile teams are making a mess. SCRUM is the root cause behind all the problems and disasters. SCRUM is the cause of the “Decline and Fall of Agile.”

Yeah, and I’ve got a bridge to sell you.

Scrum is not the problem. Scrum never was the problem. Scrum never will be the problem. The problem, dear craftsmen, is our own laziness.

It makes no sense to blame Scrum for the fact that we don’t write tests and don’t keep our code clean. We can’t blame scrum for technical debt. Technical debt was around long before there was scrum, and it’ll be around long after. No it’s not scrum that’s to blame. The culprits remain the same as they ever were: us.

Of course it’s true that a two day certification course is neither necessary nor sufficient to create a good software leader. It’s also true that the certificate you get for attending a CSM course is good for little more than showing that you paid to attend a two day CSM course. It’s also true that scrum leaves a lot to be desired when it comes to engineering practices. But it is neither the purpose of scrum nor of CSMs to make engineers out of us, or to instill the disciplines of craftsmanship within us. That’s our job!

Some have implied that if all those scrum teams had adopted XP instead of scrum, they wouldn’t be having all the technical debt they are seeing. BALDERDASH!

Let me be more precise. ASININE, INANE ABSURDITY. BALONEY. DINGO’S KIDNEYS.

Let me tell you all, here, now, and forevermore: it is quite possible to do XP badly. It’s easy to build technical debt while going through the motions of TDD. It’s a no-brainer to create a wasteland of code with your pair partner. And, believe me, you can make such a Simple Design that You Ain’t Gonna Maintain It. And I’m not speaking metaphorically.

Do you want to know the real secret behind writing good software? Do you want to know the process that will keep your code clean? Do you want the magic bullet, the secret sauce, the once and for all one and only truth?

OK, here it is. Are you ready? The secret is…

The secret is…

Do a good job.

Oh, yeah, and stop blaming everything (and everybody) else for your own laziness.

Slim

Posted by Uncle Bob Thu, 02 Oct 2008 19:33:00 GMT

Those of you who have been following me on Twitter have heard me talk about Slim. Slim is a new testing front-end and back-end that I’m adding to FitNesse. Here’s what it’s all about.

FitNesse is a very popular open-source acceptance testing tool that allows non-programmers to write and execute tests. FitNesse is an authoring and execution wrapper around the testing engine Fit. Fit interprets HTML and uses a set of programmer supplied fixtures to invoke the system under test.

The problem is that Fit is big. There’s a lot of stuff inside there. And it all has to be ported to the language (or platform) of the system under test (SUT). Unfortunately, since Fit is all open source, the various ports don’t agree with each other. It also means that a change made to one does not match a change made to another. This is deeply frustrating.

To make matters worse, the effort required to maintain a Fit port is relatively high. The folks who maintain these ports do so out of the goodness of their own hearts, and have real jobs that often take effort away from Fit.

The end result is that the universe of Fit is uneven at best. Java programmers can use one set of features and fixtures, whereas C++ programmers must use a completely different set. The .NET version of the fixtures works in a very different way from the Java versions. New languages and platforms require a relatively large effort just to get started with Fit, etc.

Slim—The solution

Fit interprets HTML tables in order to make specific calls to the SUT. Fit does all this interpretation in the SUT. What if we moved all that table interpretation to FitNesse? FitNesse is written in Java and communicates with the SUT through a socket. Right now it passes all the HTML through that socket to Fit and accepts the colorized HTML back through the socket from Fit. But what if we changed the partitioning? What if we did all the table processing in FitNesse and then shipped a list of calls across the socket to the SUT?

That’s what Slim is. Slim is a very small module that runs in the SUT. It listens at a socket for very simple commands that instruct it to create instances of fixtures and to call methods on those instances. It passes the return values of those method calls back through the socket.
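In rough outline, the executor half of that idea might look like this Ruby sketch. The class name, method names, and the “OK” response are simplifications of mine; the real Slim wire protocol differs:

```ruby
# A hypothetical sketch of a Slim-style executor running in the SUT.
# It supports the two operations described above: create an instance
# of a fixture class under a name, and call a method on a previously
# created instance, returning the result.
class SlimExecutor
  def initialize
    @instances = {}
  end

  # make an instance of a fixture class and register it by name
  def make(instance_name, class_name)
    @instances[instance_name] = Object.const_get(class_name).new
    "OK"
  end

  # call a method on a registered instance and return its value
  def call(instance_name, method, *args)
    @instances[instance_name].send(method, *args)
  end
end
```

Everything else, including all the table interpretation, stays on the FitNesse side; the SUT only needs this thin reflective shim.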

Slim is very small. Porting it is a matter of a few hours work. Once ported, there’s virtually no other maintenance required. It seems unlikely that new Slim features will be needed very often.

What this means is that all the table processing is done on the FitNesse side, and will work exactly the same regardless of the platform of the SUT. C++, Ruby, Java, Python, C#? It doesn’t matter; your tables will work in exactly the same way.

And who said we needed tables anyway? Since all the processing is done on the FitNesse side, we can write new kinds of testing languages. We can write story runners like RSpec or JBehave. They will all use Slim to communicate to the SUT, and will all work identically with any platform that has a Slim port.

Slim Tables

If you are familiar with Fit, you know that there are different fixture types. There are ColumnFixtures, RowFixtures, ActionFixtures, DoFixtures, etc. Each of these require a library to be written and executed on the SUT.

In Slim we replace these with processors on the FitNesse side. Instead of ColumnFixtures we have DecisionTables. Instead of RowFixtures we have QueryTables. Instead of DoFixtures we have ScriptTables. New kinds of tables can be added using a simple plug-in mechanism.

Slim Fixtures

Slim fixtures are very similar to Fit Fixtures except that they don’t inherit from anything! There is no library on the SUT that you have to include.

Interestingly enough, if you already have a Fit ColumnFixture written for your SUT, it will very likely work with a Slim DecisionTable. Existing RowFixtures will likely work nicely with a Slim QueryTable, etc.

Migrating to Slim

To replace Fit with Slim for a test page, all you need to do is set the variable TEST_SYSTEM to slim. This will invoke the Slim table processor rather than the old Fit test runner.


!define TEST_SYSTEM {slim}

There are some minor differences in the table format. The biggest is that the table type is no longer known by the fixture. Therefore FitNesse needs to know what kind of table you are writing. So a column fixture that used to look like this:


|eg.Division|
|numerator|denominator|quotient|
|10       |2          |5       |

will now require a simple prefix as follows:

|DT:eg.Division|
|numerator|denominator|quotient|
|10       |2          |5       |

Did you see the DT:? That’s it. Otherwise the tables should be the same. And as you might guess there will be similar prefixes for QueryTable (QT), and ScriptTable (ST).

Indeed, this is the part of the Slim Tables that I like the least. If anybody has a better idea, or at least some better prefixes, I’m all ears (er, eyes).

((Update: For Decision Tables the “DT:” prefix is not necessary but can be supplied. Or you can use the prefix “Decision:” (see the FitNesse User Guide for more details). For Query tables the prefix is “Query:”. For Script tables the first cell is just the word “Script”.))

What if you like processing tables in fixtures?

Some of you might use TableFixture, or just do your own table processing in your Fit fixtures. You can still do that with Slim. One of the table types I plan to implement is a raw Table (Table prefix). The whole table gets shipped over the socket to the fixture as a List of Lists of Strings, and the fixture returns a table with the same geometry, loaded with “pass”, “fail”, or “neutral” to provide colorization hints.
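Such a fixture might look something like this sketch; the fixture name, the do_table method, and the example table are all hypothetical:

```ruby
# A hypothetical fixture for the raw Table type described above.
# It receives the table body as a list of lists of strings and
# returns a table of the same geometry filled with colorization
# hints ("pass", "fail", or "neutral").
class AdditionTableFixture
  # each row looks like ["2", "3", "5"]: two addends and a sum
  def do_table(rows)
    rows.map do |a, b, sum|
      ok = a.to_i + b.to_i == sum.to_i
      ["neutral", "neutral", ok ? "pass" : "fail"]
    end
  end
end
```

The returned geometry matches the input, so FitNesse can color each cell of the original table from the hints.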

Current Status

I currently have Slim working in Java, and Decision Tables being processed. What I don’t have so far are any of the other table types, suites, command line test runners, etc. So there’s plenty of work to be done.

There will be code

Posted by Uncle Bob Fri, 29 Aug 2008 02:02:45 GMT

During the last three decades, several things about software development have changed, and several other things have not. The things that have changed are startling. The things that have not are even more startling.

What has changed? Three decades have seen a 1000 fold increase in speed, another 1000 fold increase in memory. Yet another 1000 fold decrease in size (by volume), and yet another 1000 fold decrease in power consumption. Adding up all those zeros implies that the resources we have to play with have increased by twelve orders of magnitude. Even if I have overestimated by five orders of magnitude, the remaining seven are still an astounding increase.

I remember building RSX-11M on a PDP-11/60 from source. It took several hours. Nowadays I can build huge Java applications in a matter of seconds. I remember that compiling small C programs required dozens of minutes. Now much larger programs compile in an eyeblink. I remember painstakingly editing assembly language on punch cards. Now I use refactorings in huge Java programs without thinking about it. I remember when 10,000 lines of code was five boxes of cards that weighed 50 pounds. Now, such a program is considered trivial.

Nowadays we have tools! We have editors that compile our code while we type, and complete our thoughts for us. We have analyzers that will find code duplication in huge systems, and identify flaws and weaknesses. We have code coverage tools that will tell us each line of code that our unit tests fail to execute. We have refactoring browsers that allow us to manipulate our code with unprecedented power and convenience.

But in the face of all this massive change, this rampant growth, this almost unlimited wealth of resources, there is something that hasn’t changed much at all. Code.

Fortran, Algol, and Lisp are over fifty years old. These languages are the clear progenitors of the static and dynamic languages we use today. The roots of C++, Java, and C# clearly lie in Algol and Fortran. The connection between Lisp, and Ruby, Python, and Smalltalk may be less obvious, but only slightly so. Today’s modern languages may be rich with features and power, but they are not 12 orders of magnitude better than their ancestors. Indeed, it’s hard to say that they are even ONE order of magnitude better.

When it comes down to it, we still write programs made out of calculations, ‘if’ statements, and ‘for’ loops. We still assign values into variables and pass arguments into functions. Programmers from 30 years ago might be surprised that we use lower case letters in our programs, but little else would startle them about the code we write.

We are like carpenters who started out using hammers and saws, and have progressed to using air-hammers and power saws. These power tools help a lot; but in the end we are still cutting wood and nailing it together. And we probably will be for the next 30 years.

Looking back we see that what we do hasn’t changed all that much. What has changed are the tools and resources we can apply to the task. Looking forward I anticipate that the current trend will continue. The tools will get better, but the code will still be code. We may see some “minor” improvements in languages and frameworks, but we will still be slinging code.

Some folks have put a great deal of hope in technologies like MDA. I don’t. The reason is that I don’t see MDA as anything more than a different kind of computer language. To be effective it will still need ‘if’ and ‘for’ statements of some kind. And ‘programmers’ will still need to write programs in that language, because details will still need to be managed. There is no language that can eliminate the programming step, because the programming step is the translation from requirements to systems irrespective of language. MDA does not change this.

Some folks have speculated that we’ll have “intelligent agents” based on some kind of AI technology, and that these agents will be able to write portions of our programs for us. The problem with this is that we already have intelligent agents that write programs for us. They are called programmers. It’s difficult to imagine a program that is able to communicate to a customer and write a program better than a human programmer.

So, for the foreseeable future I think software will remain the art of crafting code to meet the requirements of our customers.

There is something else that needs to change, however. And I believe it is changing. Our professionalism.

In some ways our “profession” has paralleled that of medicine. 300 years ago there were a few thinkers, and far too many practitioners. There were no standards, no common rituals or behaviors, no common disciplines. If you got sick you might go to a barber, or a healer. Perhaps he’d let out some blood, or ask you to wear some garlic. Over time, however, the few thinkers gained knowledge, discipline, and skill. They adopted standards and rituals. They set up a system for policing and maintaining those standards, and for training and accepting new members.

THIS is the change that I hope the next thirty years holds for software development.
