Why you have time for TDD (but may not know it yet...)

Posted by Dean Wampler Mon, 01 Oct 2007 01:01:02 GMT

Note: Updated 9/30/2007 to improve the graphs and to clarify the content.

A common objection to TDD is this: “We don’t have time to write so many tests. We don’t even have enough time to write features!”

Here’s why people who say this probably already have enough time in the (real) schedule; they just don’t know it yet.

Let’s start with an idealized Scrum-style “burn-down chart” for a fictional project run in a “traditional” way (even though traditional projects don’t use burn-down charts…).

[Figure: ideal_timeline2.png, the idealized burn-down chart for the planned schedule]

We have time increasing on the x axis and the number of “features” remaining to implement on the y axis (it could also be hours or “story points” remaining). During a project, a nice feature of burn-down charts is that you can extend the line to see where it intersects the x axis, which is a rough indicator of when you’ll actually finish.
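
If you prefer code to graphs, here is a little sketch of that extrapolation (mine, not part of the original post): record the remaining work at each iteration, compute the average burn rate, and see how many more iterations it takes to reach zero. The data points below are made up.

    // A rough sketch of the burn-down extrapolation: extend the line at the
    // average rate of progress and see where it crosses the x axis.
    #include <iostream>
    #include <vector>

    double projectedIterationsRemaining(const std::vector<double>& remaining) {
        if (remaining.size() < 2) return -1.0;                   // not enough data to draw a line
        double last = remaining.back();
        double burnPerIteration =
            (remaining.front() - last) / (remaining.size() - 1); // average slope of the line
        if (burnPerIteration <= 0.0) return -1.0;                // no net progress, no projection
        return last / burnPerIteration;                          // iterations until we hit zero
    }

    int main() {
        std::vector<double> remaining;                           // features (or story points) left
        remaining.push_back(100); remaining.push_back(92);
        remaining.push_back(85);  remaining.push_back(80);
        std::cout << projectedIterationsRemaining(remaining)
                  << " more iterations at the current rate" << std::endl;
        return 0;
    }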

The optimistic planners for our fictional project plan to give the software to QA near the end of the project. They expect QA to find nothing serious, so the release will occur soon thereafter on date T0.

Of course, it never works out that way:

[Figure: actual_timeline2.png, the actual burn-down for the “traditional” project]

The red line is the actual effort for our fictional project. It’s quite natural for the planned list of features to change as the team reacts to market changes, etc. This is why the line sometimes goes up (in “good” projects, too!). Since this is a “traditional” project, I’m assuming that there are no automated tests that actually prove that a given feature is really done. We’re effectively running “open loop”, without the feedback of tests.

Inevitably, the project goes over budget and the planned QA drop comes late. Then things get ugly. Without our automated unit tests, there are lots of little bugs in the code. Without our automated integration tests, there are problems when the subsystems are run together. Without our acceptance tests, the implemented features don’t quite match the actual requirements for them.

Hence, a chaotic, end-of-project “birthing” period ensues, where QA reports a list of big and small problems, followed by a frantic effort (usually involving weekends…) by the developers to address the problems, followed by another QA drop, followed by…, and so forth.

Finally, out of exhaustion and because everyone else is angry at the painful schedule slip, the team declares “victory” and ships it, at time T1.

We’ve all lived through projects like this one.

Now, if you remember your calculus classes (sorry to bring up painful memories), you will recall that the area under a curve is the total quantity of whatever the curve represents. The actual total feature work required for our project therefore corresponds to the area under the red line, while the planned work corresponds to the area under the black line. So, we really did have more time than we originally thought.

Now consider a Test-Driven Development (TDD) project [1]:

[Figure: tdd_timeline2.png, the burn-down for the TDD project]

Here, the blue line is similar to the red line, at least early in the project. Now we have frequent “milestones” where we verify the state of the project with the three kinds of automated tests mentioned above. Each milestone is the end of an iteration (usually 1-4 weeks apart). Not shown are the 5-minute TDD cycles and the feedback from the continuous integration process that does our builds and runs all our tests after every block of commits to version control (many times a day).

The graph suggests that the total amount of effort will be higher than the expected effort without tests, which may be true [2]. However, because of the constant feedback during the whole life of the project, we really know where we are at any time. By measuring our progress in this way, we will know early whether or not we can meet the target date with the planned feature set. With early warnings, we can adjust accordingly, either dropping features or moving the target date, with relatively little pain. Without this feedback, by contrast, we really don’t know what’s done until something, e.g., the QA process, tells us. Hence, at time T0, just before the big QA drop, the traditional project has little certainty about which features are really complete.

So, we’ll experience less of the traditional end-of-project chaos, because there will be fewer surprises. Without the feedback from automated tests, QA finds lots of problems, causing the chaotic and painful end-of-project experience. Finding and trying to fix major problems late in the game can even kill a project.

So, TDD converts that unknown schedule time at the end into known time early in the project. You really do have time for automated tests and your tests will make your projects more predictable and less painful at the end.

Note: I appreciate the early comments and questions that helped me clarify this post.

[1] As one commenter remarked, this post doesn’t actually make the case for TDD itself vs. alternative “test-heavy” strategies, but I think it’s pretty clear that TDD is the best of the known test-heavy strategies, as argued elsewhere.

[2] There is some evidence that TDD and pair programming lead to smaller applications, because they help avoid unnecessary features. Also, they provide constant feedback to the team, including the stakeholders, on what the feature set should really be and which features are most important to complete first.

ANN: Talks at the Chicago Ruby Users Group (Oct. 1, Dec. 3)

Posted by Dean Wampler Sat, 29 Sep 2007 14:52:00 GMT

I’m speaking on Aquarium this Monday night (Oct. 1st). Details are here. David Chelimsky will also be talking about new developments in RSpec and RBehave.

At the Dec. 3 Chirb meeting, the two of us are reprising our Agile 2007 tutorial Ruby’s Secret Sauce: Metaprogramming.

Please join us!

ANN: OOPSLA Tutorial on "Principles of Aspect-Oriented Design in Java and AspectJ"

Posted by Dean Wampler Thu, 13 Sep 2007 16:34:29 GMT

I’m doing a tutorial on aspect-oriented design principles with examples in Java and AspectJ at OOPSLA this year (October 21st). You can find a description here. I believe Friday, 9/14, is the last day for early, discounted registration, so sign up now!

A short presentation on the same subject can be found here.

Why we write code and don't just draw diagrams

Posted by Dean Wampler Thu, 06 Sep 2007 15:45:59 GMT

Advocates of graphical notations have long hoped we would reach the point where we only draw diagrams and don’t write textual code. There have even been a few visual programming environments that have come and gone over the years.

If a picture is worth a thousand words, then why hasn’t this happened?

What that phrase really means is that we get the “gist” or the “gestalt” of a situation when we look at a picture, but nothing expresses the intricate details like text, the 1000 words. Since computers are literal-minded and don’t “do gist”, they require those details spelled out explicitly.

Well, couldn’t we still do that with a sufficiently expressive graphical notation? Certainly, but then we run into the pragmatic issue that typing textual details will always be faster than drawing them.

I came to this realization a few years ago when I worked for a Well Known Company developing UML-based tools for Java developers. The tool’s UI could have been more efficient, but there was no way to beat the speed of typing text.

It’s also true that some languages are rather verbose. This is one of the ways in which Domain-Specific Languages (DSLs) are going to be increasingly important. A well-designed DSL will let you express those high-level concepts succinctly.

I’m not claiming that there is no place for graphical representations. UML is great for those quick design sessions, when you’re strategizing at a high level. Also, the easiest way to find component dependency cycles is to see them graphically.

I’m also not discounting those scenarios where a diagram-driven approach actually works. I’ve heard of some success developing control systems that are predominantly well-defined, complex state machines.

Still, for the general case, code written in succinct languages with well-designed APIs and DSLs will trump a diagram-driven approach.

CJUG West 9/6/07: Aspect-Oriented Programming and Software Design

Posted by Dean Wampler Tue, 04 Sep 2007 22:55:47 GMT

I’m giving a talk at the Chicago Java User’s Group West meeting this Thursday at 6:30 PM. The topic is Aspect-Oriented Programming and Software Design in Java and AspectJ. I’ll briefly describe the problems that AOP addresses and how the principles of object-oriented design influence AOP and vice versa. If you’re in the area, I hope to see you there.

Announcement: Aquarium v0.1.0 - An Aspect-Oriented Programming Toolkit for Ruby

Posted by Dean Wampler Thu, 23 Aug 2007 22:26:00 GMT

I just released the first version of a new Aspect-Oriented Programming toolkit for Ruby called Aquarium. I blogged about the goals of the project here. Briefly, they are:
  • Create a powerful pointcut language, the most important part of an AOP toolkit.
  • Provide robust support for concurrent advice at the same join point.
  • Provide runtime addition and removal of aspects.
  • Provide a test bed for implementation ideas for DSLs.
There is extensive documentation at the Aquarium site. Please give it a try and let me know what you think!

Applications Should Use Several Languages

Posted by Dean Wampler Wed, 04 Jul 2007 16:38:31 GMT

Yesterday, I blogged about TDD in C++ and ended with a suggestion for the dilemma of needing optimal performance some of the time and optimal productivity the rest of the time. I suggested that you should use more than one language for your applications.

If you are developing web applications, you are already doing this, of course. Your web tier probably uses several “languages”, e.g., HTML, JavaScript, JSP/ASP, CSS, Java, etc.

However, most people use only one language for the business/mid tier. I think you should consider using several: a high-productivity language environment for most of your work, with the occasional critical functionality implemented in C or C++ to optimize performance, but only after actually measuring where the bottlenecks are located.

This approach is much too rare, but it has historical precedents. One of the most successful and long-lived software projects of all time is Emacs. It consists of a core C-based runtime with most of the functionality implemented in Emacs Lisp “components”. The relative ease of extending Emacs using Lisp has resulted in a rich assortment of support tools for various operating systems, languages, build tools, etc. Even modern IDEs and other graphical editors have not completely displaced Emacs.

Java has embraced the mixed-language philosophy somewhat reluctantly. JNI is the official API for invoking “native” code, but it is somewhat hard to use, so relatively few people use it. In contrast, the Ruby world has always embraced this approach. Ruby has an easy-to-use API for invoking native C code, and good alternatives exist for invoking code in other languages. As a result, many of the 3rd-party Ruby libraries (or gems) contain both Ruby and native C code. The latter is built on the fly when you install the gem. Hence, there are many high-performance Ruby applications. This is not a contradiction in terms, because the performance-critical sections run natively, even though interpreted Ruby is relatively slow.
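
To make the Ruby-to-C boundary concrete, here is a rough sketch of what such a native extension can look like. It is illustrative only: the FastGeometry module, the distance function, and the file name are my inventions, and it assumes the standard ruby.h extension API, built with the usual extconf.rb/mkmf machinery.

    // distance.cpp: a hypothetical hot spot rewritten natively and exposed to Ruby.
    #include <ruby.h>
    #include <math.h>

    // The performance-critical bit: Euclidean distance between two points.
    static VALUE distance(VALUE self, VALUE x1, VALUE y1, VALUE x2, VALUE y2) {
        double dx = NUM2DBL(x2) - NUM2DBL(x1);
        double dy = NUM2DBL(y2) - NUM2DBL(y1);
        return rb_float_new(sqrt(dx * dx + dy * dy));
    }

    // Ruby calls Init_<name> when the compiled extension is required.
    extern "C" void Init_distance() {
        VALUE mod = rb_define_module("FastGeometry");
        rb_define_module_function(mod, "distance", RUBY_METHOD_FUNC(distance), 4);
    }

After building and requiring the extension, the Ruby side is just FastGeometry.distance(0, 0, 3, 4), which returns 5.0; the calling code never knows it left Ruby.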

Of course, you have to be judicious in how you use mixed-language programming. Crossing the language boundary is often somewhat heavyweight, so you should avoid doing such invocations inside tight loops, for example.

So, I think the solution to the dilemma of needing high performance sometimes and high productivity the rest of the time is to pick the right tools for each circumstance and make them interoperate. Even constrained embedded devices like cell phones would be easier to implement if most of the code were written in a language like Ruby, Python, Smalltalk, or Java and performance-critical components were written in C or C++.

If I were starting such a greenfield project, I would assume that time-to-money is the top priority and write most of my code in Ruby (my personal current favorite), using TDD of course. I would profile it constantly, as part of the nightly or continuous-integration build. When bottlenecks emerge, I would first determine if a refactoring is sufficient to fix them and if not, I would rewrite the critical sections in C. If the project were for an embedded device, I would also watch the resource usage carefully.

For my embedded device, I would test from the beginning whether or not the overhead of the interpreter/VM and the overall performance are acceptable. I would also be sure that I have adequate tool support for the inevitable remote debugging and diagnostics I’ll have to do. If I made the wrong tool choices after all, I would know early on, when it’s still relatively painless to retool.

If you’re an IT or web-site developer, you have fewer performance limitations and more options. You might decide to make the cross-language boundary a cross-process boundary, e.g., by communicating through some sort of lightweight web services. This is one way to leverage legacy C/C++ code while developing new functionality in a more productive language.

Observations on TDD in C++ (long)

Posted by Dean Wampler Wed, 04 Jul 2007 04:15:09 GMT

I spent all of June mentoring teams on TDD in C++ with some Java. While C++ was my language of choice through most of the 90’s, I think far too many teams are using it today when there are better options for their particular needs.

During the month, I took notes on all the ways that C++ development is less productive than development in languages like Java, particularly if you try to practice TDD. I’m not trying to start a language flame war. There are times when C++ is the appropriate tool, as we’ll see.

Most of the points below have been discussed before, but it is useful to list them in one place and to highlight a few particular observations.

Based on my observations last month, as well as previous experience, I’ve come to the conclusion that TDD in C++ is about an order of magnitude slower than TDD in Java. Mostly, this is due to poor or non-existent tool support for automated refactorings, no error detection as you type, and the requirement to compile and link an executable test.

So, here is my list of impediments that I encountered last month. I’ll mostly use Java as the comparison language, but the arguments are more or less the same for C# and the popular dynamic languages, like Ruby, Python, and Smalltalk. Note that the dynamic languages tend to have less complete tool support, but they make up for it in other ways (off-topic for this blog).

Getting Started

There is more setup effort involved in configuring your build environment to use your chosen unit-testing framework (e.g., CppUnit) and to create many small test executables, each covering one or a few tests, rather than one big test program (e.g., a variant of the actual application). Keeping the tests small and separate is important for minimizing the TDD cycle.
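
To illustrate the "many small executables" idea, here is roughly what one focused test program might look like with CppUnit. It is only a sketch: I'm using std::stack as a stand-in for the class under test, and the test names are invented.

    // stack_test.cpp: one small, focused test executable, rebuilt and rerun on
    // every TDD cycle. A non-zero exit code fails the build.
    #include <cppunit/TestFixture.h>
    #include <cppunit/extensions/HelperMacros.h>
    #include <cppunit/extensions/TestFactoryRegistry.h>
    #include <cppunit/ui/text/TestRunner.h>
    #include <stack>

    class StackTest : public CppUnit::TestFixture {
        CPPUNIT_TEST_SUITE(StackTest);
        CPPUNIT_TEST(testNewStackIsEmpty);
        CPPUNIT_TEST(testPushThenTopReturnsValue);
        CPPUNIT_TEST_SUITE_END();
    public:
        void testNewStackIsEmpty() {
            std::stack<int> s;
            CPPUNIT_ASSERT(s.empty());
        }
        void testPushThenTopReturnsValue() {
            std::stack<int> s;
            s.push(42);
            CPPUNIT_ASSERT_EQUAL(42, s.top());
        }
    };
    CPPUNIT_TEST_SUITE_REGISTRATION(StackTest);

    int main() {
        CppUnit::TextUi::TestRunner runner;
        runner.addTest(CppUnit::TestFactoryRegistry::getRegistry().makeTest());
        return runner.run() ? 0 : 1;
    }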

Fortunately, this setup is a one-time “charge”. The harder part, if you have legacy code, is refactoring it to break hard dependencies so you can write unit tests. This is true for legacy code in any language, of course.

Complex Syntax

C++ has a very complex syntax. This makes it hard to parse, limiting the capabilities of automated tools and slowing build times (more below).

The syntax also makes it harder to program in the language and not just for novices. Even for experts, the visual noise of pointer and reference syntax obscures the story the code is trying to tell. That is, C++ code is inherently less clean than code in most other languages in widespread use.

Also, the need for the developer to remember whether each variable is a pointer, a reference, or a “value”, and how to manage its life-cycle, requires mental effort that could be applied to the logic of the code instead.

Obsolete Tool Support

No editor or IDE supports non-trivial, automated refactorings. (Some do simple refactorings like “rename”.) This means you have to resort to tedious, slow, and error-prone manual refactorings. Extract Method is made worse by the fact that you usually have to edit two files, an implementation and a header file.

There are no widely-used tools that provide on-the-fly parsing and error indications. This alone increases the time between typing an error and learning about it by an order of magnitude. Since a build is usually required, you tend to type a lot between builds, thereby learning about many errors at once. Working through them takes time. (There may be some commercial tools with limited support for on-the-fly parsing, but they are not widely used.)

Similarly, none of the common development tools support incremental loading of object code that could be used for faster unit testing and hence a faster TDD cycle. Most teams just build executables. Even when they structure the build process to generate small, focused executables for unit tests, the TDD cycle times remain much longer than for Java.

Finally, while there is at least one mocking framework available for C++, it is much harder to use than comparable frameworks in newer languages.

Manual Memory Management

We all know that manual memory management leads to time spent finding and fixing memory errors and leaks. Avoiding these problems in the first place also consumes a lot of thought and design effort. In Java, you just spend far less time thinking about “who owns this object and is therefore responsible for managing its life-cycle”.

Dependency Management

Intelligent handling of include directives is entirely up to the developer. We have all used the following “guard” idiom:

    #ifndef MY_CLASS_H
    #define MY_CLASS_H
    ...
    #endif

Unfortunately, this isn’t good enough. The file will still get opened and read in its entirety every time it is included. You could also put the guard directives around the include statement:

    #ifndef MY_CLASS_H
    #include "myclass.h"
    #endif

This is tedious and few people do it, but it does avoid the wasted file I/O.

Finally, too few people simply declare a required class with no body:

    class MyClass;

This is sufficient when one header references another class as a pointer or reference. In our experience with clients, we have often seen build times improve significantly when teams cleaned up their header file usage and dependencies, in general. Still, why is all this necessary in the 21st century?

This problem is made worse by the unfortunate inclusion of private and protected declarations in the same header file included by clients of the class. This creates phantom dependencies from the clients to class details that they can’t access directly.
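
Here is a small illustration of both points (my example, with hypothetical Logger and logger.h names). The first header drags a private logging dependency into every client; the second hides it behind a forward declaration and a pointer, so only the corresponding .cpp file needs the real header.

    // widget_bad.h: every client recompiles whenever logger.h changes, even
    // though the Logger member is private and invisible to them.
    #include "logger.h"

    class WidgetBad {
    public:
        void draw();
    private:
        Logger log_;       // a value member forces the #include above
    };

    // widget_good.h: a forward declaration is enough for a pointer member, so
    // logger.h is included only in widget_good.cpp.
    class Logger;          // declaration with no body

    class WidgetGood {
    public:
        void draw();
    private:
        Logger* log_;      // pointer member; the full definition isn't needed here
    };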

Other Debugging Issues

Limited or non-existent context information when an exception is thrown makes the origin of the exception harder to find. To fill the gap, you tend to spend more time adding this information manually through logging statements in catch blocks, etc.

The std::exception class doesn’t appear to have a std::string or const char* argument in a constructor for a message. You could just throw a string, but that precludes using an exception class with a meaningful name.
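
The usual workaround, sketched below, is an exception class with a meaningful name that carries its own message, for example by deriving from std::runtime_error, which does accept a string. The ConfigurationError class and its message are just illustrations.

    // An exception with a meaningful name that still carries a message.
    // std::runtime_error supplies the string constructor and what() for us.
    #include <iostream>
    #include <stdexcept>
    #include <string>

    class ConfigurationError : public std::runtime_error {
    public:
        explicit ConfigurationError(const std::string& message)
            : std::runtime_error(message) {}
    };

    int main() {
        try {
            throw ConfigurationError("missing required setting: db.host");
        } catch (const std::exception& e) {
            std::cerr << "caught: " << e.what() << std::endl;  // the message survives
        }
        return 0;
    }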

Compiler error messages are hard to read and often misleading. In part, this is due to the complexity of the syntax and the parsing problem mentioned previously. Errors involving template usage are particularly hard to debug.

Reflection and Metaprogramming

Many of the productivity gains from using dynamic languages and (to a lesser extent) Java and C# are due to their reflection and metaprogramming facilities. C++ relies more on template metaprogramming, rather than APIs or other built-in language features that are easier to use and more full-featured. Preprocessor hacks are also used frequently. Better reflection and metaprogramming support would permit more robust proxy or aspect solutions to be used. (However, to be fair, sometimes a preprocessor hack has the virtue of being “the simplest thing that could possibly work.”)
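
For readers who haven't seen it, the canonical (and tiny) example of the template metaprogramming style in question looks like this; the entire computation happens at compile time:

    // Compile-time factorial via template metaprogramming: recursion is expressed
    // as template instantiation, and the base case is a full specialization.
    template <unsigned N>
    struct Factorial {
        static const unsigned value = N * Factorial<N - 1>::value;
    };

    template <>
    struct Factorial<0> {
        static const unsigned value = 1;
    };

    // Factorial<5>::value is 120, computed by the compiler rather than at runtime.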

Library Issues

Speaking of std::string and char*, it is hard to avoid writing two versions of methods, one which takes const std::string& arguments and one which takes const char* arguments. It doesn’t matter that one method can usually delegate to the other one; this is wasted effort.
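
The duplication in question looks something like this (logMessage is a hypothetical example), with the second overload doing nothing but delegating:

    #include <iostream>
    #include <string>

    // The "real" implementation.
    void logMessage(const std::string& message) {
        std::cerr << message << std::endl;
    }

    // The overload for C strings and literals just delegates: the kind of
    // duplicated boilerplate described above.
    void logMessage(const char* message) {
        logMessage(std::string(message));
    }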

Discussion

So, C++ makes it hard for me to work the way that I want to work today, which is test-driven, creating clean code that works. That’s why I rarely choose it for a project.

However, to be fair, there are legitimate reasons for almost all of the perceived “deficiencies” listed above. C++ emphasizes performance and backwards compatibility with C over all other considerations, but those priorities come at the expense of other interests, like effective TDD.

It is a good thing that we have languages designed with performance as the top priority, because there are circumstances where performance is the number one requirement. However, most teams that use C++ as their primary language are making an optimal choice for, say, 10% of their code, but a suboptimal one for the other 90%. Your numbers will vary; I picked 10% vs. 90% because performance bottlenecks are usually localized, and they should be found by actual measurements, not guesses!

Workarounds

If it’s true that TDD is an order of magnitude slower for C++, then what do we do? No doubt really good C++ developers have optimized their processes as best they can, but in the end, you will just have to live with longer TDD cycles. Instead of “write just enough test to fail, make it pass, refactor,” the cycle will be more like “write a complete test, write the implementation, build it, fix the compilation errors, run it, fix the logic errors to make the test pass, and then refactor.”

A Real Resolution?

You could consider switching to the D language, which is link-compatible with C and appears to avoid many of the problems described above.

There is another way out of the dilemma of needing optimal performance some of the time and optimal productivity the rest of the time: use more than one language. I’ll discuss this idea in my next blog post.
