The Quality of TDD

Posted by Uncle Bob Sun, 17 Feb 2008 02:23:44 GMT

Keith Braithwaite has made an interesting observation here. The basic idea is that code written with TDD has lower cyclomatic complexity per function than code that was not written with TDD. If this is true, it could imply that TDD leads to fewer defects.

Keith’s metric takes in the code for an entire project and boils it down to a single number. His hypothesis is that a system written with TDD will always measure above a certain threshold, indicating very low CC; whereas systems written without TDD may or may not measure above that threshold.

Keith has built a tool, which you can get here, that will generate this metric for most Java projects. He and others have used this tool to measure many different systems. So far the hypothesis seems to hold water.

The metric can’t tell you if TDD was used; but it might just be able to tell you that it wasn’t used.
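Keith’s tool is more sophisticated, but the core idea can be sketched in a few lines. The following is a hypothetical, deliberately naive approximation (not Keith’s tool): cyclomatic complexity is roughly the number of branch points in a method, plus one.

```java
// Naive sketch of cyclomatic complexity: count branch points in a method
// body and add one. A real tool parses the code; simple substring counting
// will miscount (e.g. "if" inside an identifier), but it shows the idea.
public class NaiveCC {
    private static final String[] BRANCH_TOKENS =
        { "if", "for", "while", "case", "catch", "&&", "||", "?" };

    public static int complexity(String methodBody) {
        int branches = 0;
        for (String token : BRANCH_TOKENS) {
            int idx = 0;
            while ((idx = methodBody.indexOf(token, idx)) != -1) {
                branches++;
                idx += token.length();
            }
        }
        return branches + 1;  // straight-line code has CC of 1
    }

    public static void main(String[] args) {
        System.out.println(complexity("int x = a + b; return x;"));          // 1
        System.out.println(complexity("if (a > 0) { while (b < 10) b++; }")); // 3
    }
}
```

Running a measure like this over every method in a codebase and looking at the distribution, rather than a single average, is the essence of the approach.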

Generated Tests and TDD

Posted by Uncle Bob Thu, 10 Jan 2008 19:59:30 GMT

TDD has become quite popular, and many companies are attempting to adopt it. However, some folks worry that it takes a long time to write all those unit tests and are looking to test-generation tools as a way to decrease that burden.

The burden is not insignificant. FitNesse, an application created using TDD, comprises 45,000 lines of Java code, 15,000 of which are unit tests. Simple math suggests that TDD increases the coding burden by a full third!

Of course this is a naive analysis. The benefits of using TDD are significant, and far outweigh the burden of writing the extra code. But that 33% still feels “extra” and tempts people to find ways to shrink it without losing any of the benefits.

Test Generators

Some folks have put their hope in tools that automatically generate tests by inspecting code. These tools are very clever. They generate random calls to methods and remember the results. They can automatically build mocks and stubs to break the dependencies between modules. They use remarkably clever algorithms to choose their random test data. They even provide ways for programmers to write plugins that adjust those algorithms to be a better fit for their applications.

The end result of running such a tool is a set of observations. The tool observes how the instance variables of a class change when calls are made to its methods with certain arguments. It notes the return values, changes to instance variables, and outgoing calls to stubs and mocks. And it presents these observations to the user.

The user must look through these observations and determine which are correct, which are irrelevant, and which are bugs. Once the bugs are fixed, these observations can be checked over and over again by re-running the tests. This is very similar to the record-playback model used by GUI testers. Once you have registered all the correct observations, you can play the tests back and make sure those observations are still being observed.

Some of the tools will even write the observations as JUnit tests, so that you can run them as a standard test suite. Just like TDD, right? Well, not so fast…

Make no mistake, tools like this can be very useful. If you have a wad of untested legacy code, then generating a suite of JUnit tests that verifies some portion of the behavior of that code can be a great boon!

The Periphery Problem

On the other hand, no matter how clever the test generator is, the tests it generates will always be more naive than the tests that a human can write. As a simple example of this, I have tried to generate tests for the bowling game program using two of the better known test generation tools. The interface to the Bowling Game looks like this:

  public class BowlingGame {
    public void roll(int pins) {...}
    public int score() {...}
  }

The idea is that you call roll each time the ball is rolled, and you call score at the end of the game to get the score for that game.

The test generators could not randomly generate valid games. It’s not hard to see why. A valid game is a sequence of between 12 and 21 rolls, all of which must be integers between 0 and 10. What’s more, within a given frame, the sum of rolls cannot exceed 10. These constraints are just too tight for a random generator to achieve within the current age of the universe.

I could have written a plugin that guided the generator to create valid games; but such an algorithm would embody much of the logic of the BowlingGame itself, so it’s not clear that the economics are advantageous.

To generalize this, the test generators have trouble getting inside algorithms that have any kind of protocol, calling sequence, or state semantics. They can generate tests around the periphery of the classes; but can’t get into the guts without help.
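By contrast, a human doing TDD bakes the game’s protocol directly into the test data. Here is a sketch; the original post shows only the interface, so the scoring logic below is the standard bowling-kata implementation, an assumption on my part.

```java
// Sketch: a human-written test encodes valid game sequences directly,
// something a random generator essentially cannot do. The scoring logic
// is the conventional bowling-kata implementation (assumed, not from the post).
public class BowlingGameDemo {
    static class BowlingGame {
        private final int[] rolls = new int[21];
        private int current = 0;

        public void roll(int pins) { rolls[current++] = pins; }

        public int score() {
            int score = 0, i = 0;
            for (int frame = 0; frame < 10; frame++) {
                if (rolls[i] == 10) {                        // strike: next two rolls are bonus
                    score += 10 + rolls[i + 1] + rolls[i + 2];
                    i += 1;
                } else if (rolls[i] + rolls[i + 1] == 10) {  // spare: next roll is bonus
                    score += 10 + rolls[i + 2];
                    i += 2;
                } else {
                    score += rolls[i] + rolls[i + 1];
                    i += 2;
                }
            }
            return score;
        }
    }

    static int scoreOf(int... pins) {
        BowlingGame game = new BowlingGame();
        for (int p : pins) game.roll(p);
        return game.score();
    }

    public static void main(String[] args) {
        // A perfect game: twelve strikes, one of the valid sequences a
        // random generator will essentially never stumble upon.
        System.out.println(scoreOf(10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10)); // 300
        System.out.println(scoreOf(new int[20]));                                    // gutter game: 0
    }
}
```

The test data itself carries the state semantics: 12 to 21 rolls, frame boundaries, bonus rules. That is exactly the knowledge the generator lacks.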

TDD?

The real question is whether or not such generated tests help you with Test Driven Development. TDD is the act of using tests as a way to drive the development of the system. You write unit test code first, and then you write the application code that makes that code pass. Clearly generating tests from existing code violates that simple rule. So in some philosophical sense, using test generators is not TDD. But who cares so long as the tests get written, right? Well, hang on…

One of the reasons that TDD works so well is that it is similar to the accounting practice of dual entry bookkeeping. Accountants make every entry twice; once on the credit side, and once on the debit side. These two entries follow separate mathematical pathways. In the end a magical subtraction yields a zero if all the entries were made correctly.

In TDD, programmers state their intent twice; once in the test code, and again in the production code. These two statements of intent verify each other. The tests test the intent of the code, and the code tests the intent of the tests. This works because it is a human that makes both entries! The human must state the intent twice, but in two complementary forms. This vastly reduces many kinds of errors, as well as providing significant insight into improved design.

Using a test generator breaks this concept because the generator writes the test using the production code as input. The generated test is not a human restatement, it is an automatic translation. The human states intent only once, and therefore does not gain insights from restatement, nor does the generated test check that the intent of the code was achieved. It is true that the human must verify the observations, but compared to TDD that is a far more passive action, providing far less insight into defects, design and intent.

I conclude from this that automated test generation is neither equivalent to TDD, nor is it a way to make TDD more efficient. What you gain by trying to generate the 33% test code, you lose in defect elimination, restatement of intent, and design insight. You also sacrifice depth of test coverage, because of the periphery problem.

This does not mean that test generators aren’t useful. As I said earlier, I think they can help to partially characterize a large base of legacy code. But these tools are not TDD tools. The tests they generate are not equivalent to tests written using TDD. And many of the benefits of TDD are not achieved through test generation.

Business software is Messy and Ugly.

Posted by Uncle Bob Thu, 13 Dec 2007 15:41:00 GMT

I was at a client recently. They are a successful startup who have gone through a huge growth spurt. Their software grew rapidly, through a significant hack-and-slash program. Now they have a mess, and it is slowing them way down. Defects are high. Unintended consequences of change are high. Productivity is low.

I spent two days advising them how to adopt TDD and Clean Code techniques to improve their code-base and their situation. We discussed strategies for gradual clean up, and the notion that big refactoring projects and big redesign projects have a high risk of failure. We talked about ways to clean things up over time, while incrementally insinuating tests into the existing code base.

During the sessions they told me of a software manager who is famed for having said:

“There’s a clean way to do this, and a quick-and-dirty way to do this. I want you to do it the quick-and-dirty way.”

The attitude engendered by this statement has spread throughout the company and has become a significant part of their culture. If hack-and-slash is what management wants, then that’s what they get! I spent a long time with these folks countering that attitude and trying to engender an attitude of craftsmanship and professionalism.

The developers responded to my message with enthusiasm. They want to do a good job (of course!). They just didn’t know they were authorized to do good work. They thought they had to make messes. But I told them that the only way to get things done quickly, and keep getting things done quickly, is to create the cleanest code they can, to work as well as possible, and keep the quality very high. I told them that quick-and-dirty is an oxymoron. Dirty always means slow.

On the last day of my visit the infamous manager (now the CTO) stopped into our conference room. We talked over the issues. He was constantly trying to find a quick way out. He was manipulative and cajoling. “What if we did this?” or “What if we did that?” He’d set up straw man after straw man, trying to convince his folks that there was a time and place for good code, but this was not it.

I wanted to hit him.

Then he made the dumbest, most profoundly irresponsible statement I’ve (all too often) heard come out of a CTO’s mouth. He said:

“Business software is messy and ugly.”

No, it’s not! The rules can be complicated, arbitrary, and ad-hoc; but the code does not need to be messy and ugly. Indeed, the more arbitrary, complex, and ad-hoc the business rules are, the cleaner the code needs to be. You cannot manage the mess of the rules if they are contained by another mess! The only way to get a handle on the messy rules is to express them in the cleanest and clearest code you can.

In the end, he backed down. At least while I was there. But I have no doubt he’ll continue his manipulations. I hope the developers have the will to resist.

One of the developers asked the question point blank: “What do you do when your managers tell you to make a mess?” I responded: “You don’t take it. Behave like a doctor whose hospital administrator has just told him that hand-washing is too expensive, and he should stop doing it.”

Thinking about an Appendix.

Posted by Uncle Bob Fri, 07 Dec 2007 15:27:04 GMT

No, not of Clean Code (look for it in the spring). This Monday evening (12/3) I got a stomach ache.

The stomach pain was a low-level annoyance that I was able to ignore for most of the evening. But it started to get pretty bad around midnight. By 2AM I threw up and then felt better. So I went back to bed and slept till morning.

Tuesday started well. I had some residual pain, but figured it was waning. I went to the office to get some work done there. By the time I got there the waning pain had waxed and I turned right around and went home to bed.

The pain continued to grow until 3PM, when I threw up again. This made me feel a little better, so I went back to bed. But by 6PM the pain had localized to my lower right quadrant and I had had enough. I asked my wife to take me to the immediate care facility.

They could tell by the grimace on my face that this was something to take care of quickly. They put me on a bed, and started an IV. They took blood and did an exam. Then they packed me into an ambulance and sent me to the ER for a CAT scan.

During the 20-minute ride the pain got really bad. I was giving it an 8.5 on a scale of 1 to 10. I have intimate knowledge of every pothole on the road they took.

At the ER they put me in a room and gave me a dose of Morphine. Morphine is a very nice drug. It had the effect of filing the pain away in a convenient subdirectory where I could access it if I needed it, but was otherwise out of the way.

By 11PM the CAT scan was complete and the doctor came and said: “The good news is we know what’s wrong. The bad news is it’s your appendix. We just happen to have an OR ready. Do you want to do it now?” How could I refuse an offer like that!

They rolled me into the OR and told me they were going to give me the juice.
DISCONTINUITY
I was in the recovery room with two smiling pretty nurses urging me to awaken to the news that my appendix had been removed.
DISCONTINUITY
I was being wheeled into a hospital room. It was 3AM. My exhausted wife was waiting for me there. She kissed me goodbye and I went to sleep.

Wednesday was my birthday. I spent it sleeping for the most part. They fed me clear foods. The pain was not horrible, but I did accept a Vicodin around noon. By 4PM the surgeon came in, looked me over, and said: “You look pretty good. You should go home.” I went home gladly. On the way home my wife and I picked up a pizza.

Thursday I was a bit sore, but not bad. I took a couple of Motrin and was “smart” about what muscles I used and when. I got a lot of work-reading done.

Today I woke up and the incision pain was like a mild sunburn. I can feel it, but I can ignore it. I’m not about to do 20 pushups, or run a marathon, but the day should be relatively normal.

Active Record vs Objects

Posted by Uncle Bob Fri, 02 Nov 2007 16:29:31 GMT

Active Record is a well known data persistence pattern. It has been adopted by Rails, Hibernate, and many other ORM tools. It has proven its usefulness over and over again. And yet I have a philosophical problem with it.

The Active Record pattern is a way to map database rows to objects. For example, let’s say we have an Employee object with name and address fields:

public class Employee extends ActiveRecord {
  private String name;
  private String address;
  ...
} 

We should be able to fetch a given employee from the database by using a call like:

Employee bob = Employee.findByName("Bob Martin");

We should also be able to modify that employee and save it as follows:

bob.setName("Robert C. Martin");
bob.save();

In short, every column of the Employee table becomes a field of the Employee class. There are static methods (or some magical reflection) on the ActiveRecord class that allow you to find instances. There are also methods that provide CRUD functions.

Even shorter: There is a 1:1 correspondence between tables and classes, columns and fields. (Or very nearly so).

It is this 1:1 correspondence that bothers me. Indeed, it bothers me about all ORM tools. Why? Because this mapping presumes that tables and objects are isomorphic.

The Difference between Objects and Data Structures

From the beginning of OO we learned that the data in an object should be hidden, and the public interface should be methods. In other words: objects export behavior, not data. An object has hidden data and exposed behavior.

Data structures, on the other hand, have exposed data, and no behavior. In languages like C++ and C# the struct keyword is used to describe a data structure with public fields. If there are any methods, they are typically navigational. They don’t contain business rules.

Thus, data structures and objects are diametrically opposed. They are virtual opposites. One exposes behavior and hides data, the other exposes data and has no behavior. But that’s not the only thing that is opposite about them.

Algorithms that deal with objects have the luxury of not needing to know the kind of object they are dealing with. The old example: shape.draw(); makes the point. The caller has no idea what kind of shape is being drawn. Indeed, if I add new types of shapes, the algorithms that call draw() are not aware of the change, and do not need to be rebuilt, retested, or redeployed. In short, algorithms that employ objects are immune to the addition of new types.

By the same token, if I add new methods to the shape class, then all derivatives of shape must be modified. So objects are not immune to the addition of new functions.
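A minimal sketch of the object style may help here. I use area() rather than draw() so the result is checkable, and the concrete types are illustrative:

```java
// Object style: callers depend only on the Shape interface, so adding a
// new type (say, Triangle) leaves them untouched -- no rebuild, no retest.
interface Shape {
    double area();
}

class Square implements Shape {
    private final double side;          // hidden data
    Square(double side) { this.side = side; }
    public double area() { return side * side; }
}

class Circle implements Shape {
    private final double radius;        // hidden data
    Circle(double radius) { this.radius = radius; }
    public double area() { return Math.PI * radius * radius; }
}

public class Shapes {
    // This algorithm knows nothing about the concrete types it handles.
    static double totalArea(Shape[] shapes) {
        double total = 0;
        for (Shape s : shapes) total += s.area();
        return total;
    }

    public static void main(String[] args) {
        System.out.println(totalArea(new Shape[]{ new Square(2), new Circle(1) }));
    }
}
```

Adding a new Shape implementation requires no change to totalArea; adding a new method to Shape requires a change to every implementation. That is the trade described above.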

Now consider an algorithm that uses a data structure.

switch(s.type) {
  case SQUARE: Shape.drawSquare((Square)s); break;
  case CIRCLE: Shape.drawCircle((Circle)s); break;
}

We usually sneer at code like this because it is not OO. But that disparagement might be a bit over-confident. Consider what happens if we add a new set of functions, such as Shape.eraseXXX(). None of the existing code is affected. Indeed, it does not need to be recompiled, retested, or redeployed. Algorithms that use data structures are immune to the addition of new functions.

By the same token if I add a new type of shape, I must find every algorithm and add the new shape to the corresponding switch statement. So algorithms that employ data structures are not immune to the addition of new types.

Again, note the almost diametrical opposition. Objects and Data structures convey nearly opposite immunities and vulnerabilities.

Good designers use this opposition to construct systems that are appropriately immune to the various forces that impinge upon them. Those portions of the system that are likely to be subject to new types should be oriented around objects. On the other hand, any part of the system that is likely to need new functions ought to be oriented around data structures. Indeed, much of good design is about how to mix and match the different vulnerabilities and immunities of the different styles.

Active Record Confusion

The problem I have with Active Record is that it creates confusion about these two very different styles of programming. A database table is a data structure. It has exposed data and no behavior. But an Active Record appears to be an object. It has “hidden” data, and exposed behavior. I put the word “hidden” in quotes because the data is, in fact, not hidden. Almost all ActiveRecord derivatives export the database columns through accessors and mutators. Indeed, the Active Record is meant to be used like a data structure.

On the other hand, many people put business rule methods in their Active Record classes; which makes them appear to be objects. This leads to a dilemma. On which side of the line does the Active Record really fall? Is it an object? Or is it a data structure?

This dilemma is the basis for the oft-cited impedance mismatch between relational databases and object oriented languages. Tables are data structures, not classes. Objects are encapsulated behavior, not database rows.

At this point you might be saying: “So what Uncle Bob? Active Record works great. So what’s the problem if I mix data structures and objects?” Good question.

Missed Opportunity

The problem is that Active Records are data structures. Putting business rule methods in them doesn’t turn them into true objects. In the end, the algorithms that employ Active Records are vulnerable to changes in schema, and changes in type. They are not immune to changes in type, the way algorithms that use objects are.

You can prove this to yourself by realizing how difficult it is to implement a polymorphic hierarchy in a relational database. It’s not impossible of course, but every trick for doing it is a hack. The end result is that few database schemas, and therefore few uses of Active Record, employ the kind of polymorphism that conveys immunity to changes in type.

So applications built around ActiveRecord are applications built around data structures. And applications that are built around data structures are procedural—they are not object oriented. The opportunity we miss when we structure our applications around Active Record is the opportunity to use object oriented design.

No, I haven’t gone off the deep end.

I am not recommending against the use of Active Record. As I said in the first part of this blog I think the pattern is very useful. What I am advocating is a separation between the application and Active Record.

Active Record belongs in the layer that separates the database from the application. It makes a very convenient halfway-house between the hard data structures of database tables, and the behavior exposing objects in the application.

Applications should be designed and structured around objects, not data structures. Those objects should expose business behaviors, and hide any vestige of the database. The fact that we have Employee tables in the database does not mean that we must have Employee classes in the application proper. We may have Active Records that hold Employee rows in the database interface layer, but by the time that information gets to the application, it may be in very different kinds of objects.
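A hypothetical sketch of that separation; all the names here are illustrative, not from any real framework:

```java
// Sketch of the boundary: EmployeeRecord is the Active Record style data
// structure at the database edge; Employee is the behavior-exposing domain
// object built from it. Names and fields are illustrative assumptions.
public class Boundary {
    static class EmployeeRecord {          // data structure: exposed data, no behavior
        String name;
        int salaryCents;
    }

    static class Employee {                // object: hidden data, exposed behavior
        private final String name;
        private int salaryCents;

        Employee(EmployeeRecord record) {  // the translation happens at the boundary
            this.name = record.name;
            this.salaryCents = record.salaryCents;
        }

        void grantRaise(int percent) {     // business rule lives here, not in the record
            salaryCents += salaryCents * percent / 100;
        }

        int salaryCents() { return salaryCents; }
        String name() { return name; }
    }

    public static void main(String[] args) {
        EmployeeRecord row = new EmployeeRecord();  // as if loaded by the ORM
        row.name = "Bob Martin";
        row.salaryCents = 100_000;

        Employee bob = new Employee(row);
        bob.grantRaise(10);
        System.out.println(bob.salaryCents());      // 110000
    }
}
```

The application sees only Employee and its behavior; the record, and the schema behind it, stay on the database side of the line.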

Conclusion

So, in the end, I am not against the use of Active Record. I just don’t want Active Record to be the organizing principle of the application. It makes a fine transport mechanism between the database and the application; but I don’t want the application knowing about Active Records. I want the application oriented around objects that expose behavior and hide data. I generally want the application immune to type changes; and I want to structure the application so that new features can be added by adding new types. (See: The Open Closed Principle)

A Rational Plan to Start using SOA

Posted by Uncle Bob Sun, 21 Oct 2007 10:00:09 GMT

Many companies are thinking about adopting SOA as a way to reduce the cost of enterprise development. The thought of diverse and reusable services, effortlessly communicating through an intelligent Enterprise Service Bus is a powerful siren song for executives who see their development and maintenance budgets spiraling upwards. What’s more, vendors are displaying savory looking wares that promise to ease the transition.

Caveat Emptor!

The reality is that cramming an old and rickety enterprise system into services that communicate over a complex bus is not an easy job. The long and short of the problem can be seen in the following catch-22:

The benefit of SOA comes from the decoupled nature of the services. But in order to create an SOA you have to decouple your services.

Decoupling your services is the issue. Once you have decoupled them, then all the wonderful tools might be useful in facilitating an SOA. On the other hand, if you have decoupled your services, you probably don’t need all those wonderful tools.

I have a client that has implemented SOA to good effect. They avoided the vast majority of the tools. For example, they are not using a big expensive bus. They aren’t using a BPEL engine, or message translators, or routing services. About the only thing they have decided to use is SOAP.

They are creating their services from scratch, and wiring the system together manually. Each service knows where each of its client services is. There is no registry service, no common dictionary, no fancy routing.

The system is simple, elegant, and it works. It was designed to work from the start. It is not an old crufty system crammed under a SOAP layer and called a service provider. Rather it is a nicely arrayed suite of services that were designed to work together from the get-go.

And that’s really the moral of the story. If you want an SOA, you are going to have to design that SOA. It is not very likely that you will be able to cram your old, tightly-coupled code below SOAP interfaces and hope to gain the benefits of services. What you’ll more than likely end up with is just a bigger mess.

Architecture is a Second Order Effect

Posted by Uncle Bob Sat, 20 Oct 2007 06:58:29 GMT

We often argue that in order to achieve flexibility, maintainability, and reusability, it is important to have a good software architecture. This is certainly true. Without a well ordered architecture, software systems become masses of tangled modules and code. However, the effect of architecture on the ‘ilities is secondary to the effect that a good, fast, suite of tests has.

Why don’t we clean our code? When we see an ugly mass of code that we know is going to cause problems, our first reaction is “This needs to be cleaned up.” Our second reaction is: “If I touch this code I’ll be spending the next two weeks trying to get it to work again.” We don’t clean code because we are afraid we’ll break it.

In this way bad code hooks itself into our systems and refuses to go away. Nothing stops clean code from going bad, but once it’s bad, we seldom have the time, energy, or nerve to clean it. In that sense, bad code is permanent.

What if you had a button? If you push this button a little light instantly turns red or green. Green means your system works. Red means it’s broken. If you had a button like this, you could make small changes to the system, and prove that they didn’t break anything. If you saw a batch of tangled messy code, you could start to clean it. You’d simply make a tiny improvement and then push the button. If the light was green you’d make the next tiny change, and the next, and the next.

I have a button like that! It’s called a test suite. I can run it any time I like, and within seconds it tells me, with very high certainty, that my system works.

Of course, to be effective, that test suite has to cover all the code in the system. And, indeed, that’s what I have. My code coverage tools tell me that 89% of the 45,000 lines in my system are covered by my test suite. (89% is a pretty good number given that my coverage tool counts un-executable lines like interfaces.)

Can I be absolutely sure that the green light means that the system works? Certainly not. But given the coverage of my test suite, I am reasonably sure that the changes I make are not introducing any bugs. And that reasonable surety is enough for me to be fearless about making changes.

This fearlessness is something that needs to be experienced to understand. I feel no reluctance at all about cleaning up the code in my system. I frequently take whole classes and restructure them. I change the names of classes, variables, and functions on a whim. I extract super-classes and derivatives any time I like.

In short, the test suite makes it easy to make changes to my code. It makes my code flexible and easy to maintain.

So does my architecture of course. The design and structure of my system is very easy to deal with, and allows me to make changes without undue impact. The reason my architecture is so friendly, is that I’ve been fearless about changing it. And that’s because I have a test suite that runs in seconds, and that I trust!

So the clean architecture of my system is a result of on-going efforts supported by the test suite. I can keep the architecture clean and relevant because I have tests. I can improve the architecture when I see a better approach because I have tests. It is the tests that enable architectural improvement.

Yes, architecture enables flexibility, maintainability, and reusability; but test suites enable architecture. Architecture is a second order effect.

TDD with Acceptance Tests and Unit Tests

Posted by Uncle Bob Wed, 17 Oct 2007 10:50:34 GMT

Test Driven Development is one of the most imperative tenets of Agile Software Development. It is difficult to claim that you are Agile, if you are not writing lots of automated test cases, and writing them before you write the code that makes them pass.

But there are two different kinds of automated tests recommended by the Agile disciplines. Unit tests, which are written by programmers, for programmers, in a programming language. And acceptance tests, which are written by business people (and QA), for business people, in a high level specification language (like FitNesse www.fitnesse.org).

The question is, how should developers treat these two streams of tests? What is the process? Should they write their unit tests and production code first, and then try to get the acceptance tests to pass? Or should they get the acceptance tests to pass and then backfill with unit tests?

And besides, why do we need two streams of tests? Isn’t all that testing awfully redundant?

It’s true that the two streams of tests test the same things. Indeed, that’s the point. Unit tests are written by programmers to ensure that the code does what they intend it to do. Acceptance tests are written by business people (and QA) to make sure the code does what they intend it to do. The two together make sure that the business people and programmers intend the same thing.

Of course there’s also a difference in level. Unit tests reach deep into the code and test independent units. Indeed, programmers must go to great lengths to decouple the components of the system in order to test them independently. Therefore unit tests seldom exercise large integrated chunks of the system.

Acceptance tests, on the other hand, operate on much larger integrated chunks of the system. They typically drive the system from its inputs (or a point very close to its inputs) and verify operation from its outputs (or again, very close to its outputs). So, though the acceptance tests may be testing the same things as the unit tests, the execution pathways are very different.

Process

Acceptance tests should be written at the start of each iteration. QA and Business analysts should take the stories chosen during the planning meeting, and turn them into automated acceptance tests written in FitNesse, or Selenium or some other appropriate automation tool.

The first few acceptance tests should arrive within a day of the planning meeting. More should arrive each day thereafter. They should all be complete by the midpoint of the iteration. If they aren’t, then some developers should change hats and help the business people finish writing the acceptance tests.

Using developers in this way is an automatic safety valve. If it happens too often, then we need to add more QA or BA resources. If it never happens, we may need to add more programmers.

Programmers use the acceptance tests as requirements. They read those tests to find out what their stories are really supposed to do.

Programmers start a story by executing the acceptance tests for that story, and noting what fails. Then they write unit tests that force them to write the code that will make some small portion of the acceptance tests pass. They keep running the acceptance tests to see how much of their story is working, and they keep adding unit tests and production code until all the acceptance tests pass.

At the end of the iteration all the acceptance tests (and unit tests) are passing. There is nothing left for QA to do. There is no hand-off to QA to make sure the system does what it is supposed to. The acceptance tests already prove that the system is working.

This does not mean that QA does not put their hands on the keyboards and their eyes on the screen. They do! But they don’t follow manual test scripts! Rather, they perform exploratory testing. They get creative. They do what QA people are really good at—they find new and interesting ways to break the system. They uncover unspecified, or under-specified areas of the system.

ASIDE: Manual testing is immoral. Not only is it high stress, tedious, and error prone; it’s just wrong to turn humans into machines. If you can write a script for a test procedure, then you can write a program to execute that procedure. That program will be cheaper, faster, and more accurate than a human, and will free the human to do what humans do best: create!

So, in short, the business specifies the system with automated acceptance tests. Programmers run those tests to see what unit tests need to be written. The unit tests force them to write production code that passes both tests. In the end, all the tests pass. In the middle of the iteration, QA changes from writing automated tests, to exploratory testing.

SOA cuts the Gordian Knot -- Not.

Posted by Uncle Bob Wed, 03 Oct 2007 18:31:27 GMT

In 333 B.C. Alexander the Great cut the Gordian Knot with his sword, breaking that symbol of ancient power and ushering in a new empire. Some IT managers feel that SOA will finally cut the great Gordian Knot of their tangled and tumultuous software systems. Will it?

No.

Let me restate that more clearly for those of you who may still have doubt.

No frickin way Jose!

Most software systems are a mess. SOA is not the cure for that mess. Indeed, SOA doesn’t even start to be feasible until the mess has been cleaned up.

Again, let me restate that.

SOA does not help you clean up the mess that your software is in. In order to adopt SOA you must first clean up the mess you have made. Once the mess is clean, then you can start to think about SOA.

Right now, at this very moment, some IT manager is reading about how the next release of some Enterprise Service Bus is going to solve all his woes. All he has to do is install this wonderful piece of software, tie it into the new whippy-diddle business process modeling tool. Get some clerks to write the XML files that describe their business processes, get some other clerks to write the message translators that drive that great interoperability Babel Fish in the sky. Get some business process modelers to draw pretty pictures of all the interconnections between the systems. Feed that all into the ESB, and shake vigorously with deadlines and threats. This will cause fuzzy bunnies to hop happily over the green fields of his IT systems.

Right.

The reality is that the first letter of SOA, namely “S”, stands for “Service”. Not to put too fine a point on it, SOA means an Architecture Oriented around Services. What is a service? A Service is a piece of software with the characteristics of a service. What are those characteristics? They are: low coupling, independent deployability, a stable interface, etc.

Most software professionals (if that term is not oxymoronic) will recognize this set of attributes as being identical with that of the “Components” of the 90s. Indeed, Services are the new Components. The words are synonyms.

This means that SOA is just as hard to do as COA. (Component Oriented Architecture for those of you who are acronym challenged). Indeed, SOA is COA!

Now, I know it’s been a long time. But does anybody remember what it took to do COA? No? Then let me tell you a little story to jog your memory.

In the mid ‘90s I had a client who had a 250MB executable written in C++. (Yes, I know this seems like a hello world program today, but back then 250MB was a big program). This client was vexed because every time he changed even a single line of code, he had to rebuild the entire executable, burn it onto a CD, and ship the CDs to his installed base. This was costing him lots of time and money.

What he wanted was a way to break his application into components and then ship only those components that changed. His hope was that he could ship them over the internet because they’d all be small enough to email or FTP. (This was back when people did not regularly download 4GB movies, or backup 100GB disks over the internet every night).

So he adopted COA, and the popular implementation platform for COA, COM+ (Otherwise known as void**: Nothing to the power of nothing.) He slaved away for months breaking his application up into components that could be dynamically loaded in DLLs. And in the end, he succeeded. His application was completely—componentized.

However, there was one little snag. He forgot to decouple the components from one another. Each component depended on several others. There were cycles in the dependency graph of components. (Aside to the architecturally challenged: “That’s bad.”) So in the end, when he changed a line of code in one component, he had to rebuild and redeploy all the components. He had to burn all those components onto a CD.
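The missing step was dependency inversion. As a sketch, with Billing and Reporting as invented component names: before, Billing called Reporting directly and Reporting called back into Billing, so neither could be rebuilt alone. After, Billing owns an interface that Reporting implements, so every dependency arrow points one way and the cycle is gone.

```java
public class CycleBreaker {
    // Lives in the Billing component; Billing depends only on this.
    interface ReportSink {
        void record(String line);
    }

    static class Billing {
        private final ReportSink sink;
        Billing(ReportSink sink) { this.sink = sink; }
        void charge(String customer, long cents) {
            sink.record(customer + " charged " + cents);
        }
    }

    // Lives in the Reporting component. It depends on Billing's
    // interface, not the other way around -- no cycle, so Reporting
    // can be rebuilt and redeployed without touching Billing.
    static class Reporting implements ReportSink {
        final StringBuilder log = new StringBuilder();
        public void record(String line) { log.append(line).append('\n'); }
    }

    public static void main(String[] args) {
        Reporting reporting = new Reporting();
        new Billing(reporting).charge("acme", 995);
        System.out.print(reporting.log);
    }
}
```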

In short, nothing had changed.

Long long ago, I watched a software developer write a complicated macro in his highly powerful text editor. It converted all tabs to spaces. I remember watching him start it. The code he was transforming was on the screen. He ran the macro, which took several minutes to run. It displayed a series of dots running across the screen. It was very impressive, and looked like it was doing something significant. And then when it finished, the code was redisplayed on the screen. I laughed, because after all that work of translating tabs into spaces, and putting dots on the screen, the displayed code looked exactly the same as when he started. It was as if nothing had happened.

My client’s reaction to his inability to independently deploy his components had nothing to do with laughter.

The point? SOA will not cut the Gordian Knot. Indeed, it will pull that Knot even tighter. The Knot can’t be cut. There is no magic that will make the mess go away, or make the mess less of an issue. If you want the flexibility of SOA, then you must clean up the mess you have and then slowly and gradually build services from those cleaned up systems.

The real point? TANSTAASB. (There ain’t no such thing as a Silver Bullet).

Schools of Thought 25

Posted by Uncle Bob Wed, 03 Oct 2007 17:37:21 GMT

In some sense we software developers are all too tolerant. We know there is no single “right” way to write software, and so we tolerate many different opinions and styles. In the industry at large, this is a good thing. But inside a project it’s chaos.

A project with 10 different developers cannot withstand 10 different styles of coding, 10 different architectural visions, 10 different philosophies of OO design, or 10 different software development methods. To prevent the chaos of anarchy, one style, one vision, one philosophy and one method must be adopted.

The problem is, we know that no one style, vision, philosophy, or method is “right”. This knowledge makes us timid. We tolerate other people’s styles and attitudes rather than insisting on a single style and vision for the project. We feel that we have no right to insist on one particular way of doing things, and so we adopt a politically correct viewpoint regarding software.

Politically correct projects are muddles. They find it difficult to get traction because the team members are all pulling in different directions. This muddling continues until someone puts a stake in the ground and says “We’re doing it this way”, and makes it stick by sheer force of will. Then the team moves on and makes progress.

Was his way right? The question is irrelevant. The stake in the ground provided focus. It eliminated the chaos of too much “rightness”, of too much political correctness.

There are many different schools of martial arts. Karate, Tae Kwon Do, Jiu Jitsu, Judo, etc. Which is right? Of course the question is absurd. Right and wrong are not adjectives that apply to individual martial arts. And yet, each martial art attracts dedicated adherents. The intensity of their devotion to their art is almost religious.

Students of Karate have no rational reason to prefer it to Tae Kwon Do. Indeed, the initial decision to study one vs the other was almost certainly arbitrary. Perhaps there was a Karate studio near their home. Perhaps a friend invited them to a session. Perhaps they saw something on TV, or in the paper. In most cases the initial decision was trivial.

But that’s where the triviality ends. From then on the new student gets immersed in a set of styles, disciplines, philosophies, and methods to the exclusion of all others. There is no political correctness at this level. Teachers tolerate no dissent from their teachings. It’s the Sensei’s way, or the highway. Within a dojo, there is a right and a wrong.

At a larger level, everyone knows that no one martial art is clearly better than all the others. There is no “right” school of martial art. And yet, students invest huge efforts into becoming masters of one particular arbitrary discipline.

Why would a student focus on one particular school of martial art to the exclusion of all others for years of his or her life? Simple. To become a master. You can’t become a master of all martial arts at the same time. You have to separate and focus. You have to dedicate yourself to one and learn it inside and out. You have to be politically incorrect. That’s how you gain mastery.

Once a degree of mastery is attained, it becomes beneficial to study other martial arts. A Karate black belt will find it beneficial to study Jiu Jitsu, or Aikido. The Judo black belt will gain insights into his own art by studying Karate. So, at the level of the master, political correctness and tolerance are re-established.

There are many diverse attitudes about software. For example there are many different kinds of agile methods. Indeed, there are methods that aren’t agile at all. There’s the American vs the European style of OO. There’s functional vs OO vs procedural programming. There are those who practice Hungarian notation. There are those who practice test driven development. There are those who believe in C and C++ and those who believe in Ruby. None of these attitudes and preferences are right. None of them are wrong. But without some kind of organizing principle, like a dojo, or a Sensei, they are chaos.

There has been a lot of talk about certification lately. Companies need to know if the people they hire are good developers or not. The problem is, there is no standard of certification. There is no right or wrong, and so there’s little to certify. We can certify that someone took a particular course, or that they have certain knowledge of C++ or Java. We can measure someone’s ability to read a book, or write a dumb example program. What we cannot do is certify whether someone is a good developer or not, because we don’t have a definition of “good”.

The different schools of martial arts have solved this problem of certification quite effectively. No school will certify that its student is a master martial artist. But it will certify that the student is a master of one particular discipline. You cannot be a general black-belt, but you can be a black-belt in one particular school of Karate.

The award of rank is not based on a casual test, or the taking of a two day course. It is based on months of exposure and participation in practice sessions. It is based on the teacher’s intimate knowledge of the student’s skill. What’s more, the teacher’s right to convey rank is based on his teacher’s authority. Thus, there is a chain of trust from teacher to teacher.

I don’t know if a martial-arts ranking system could ever be established in software. The problems seem significant. For example, how would teachers gain the kind of prolonged access they’d need to bestow ranks on students?

However, I do think it’s possible for experienced practitioners to put a stake in the ground and establish a school of software discipline. In effect the practitioner says: “I’ve done it my way, and it worked for me. I’ll teach you my way, perhaps it will work for you.”

Such a school would be precise and exclusive. Students would adopt a certain coding style, a certain philosophy of design, a certain architectural vision, and a certain set of methods and practices. They would adhere to those certain teachings to the exclusion of all others until they mastered them.
