A Few C++ TDD videos

Posted by Brett Schuchert Mon, 12 Jul 2010 11:30:00 GMT

Using CppUTest, gcc 4.4 and the Eclipse CDT.

Rough, as usual.

The Video Album

Might redo first one with increased font size. Considering redoing whole series at 800×600.

Software Calculus - The Missing Abstraction.

Posted by Uncle Bob Tue, 06 Jul 2010 01:26:15 GMT

The problem of infinity plagued mathematicians for millennia. Consider Zeno’s paradox; the one with Achilles and the tortoise. While it was intuitively clear that Achilles would pass the tortoise quickly, the algebra and logic of the day seemed to suggest that the tortoise would win every race given a head start. Every time Achilles got to where the tortoise was, the tortoise would have moved on. The ancients had no way to see that an infinite sum could be finite.

Then came Leibniz and everything changed. Suddenly infinity was tractable. Suddenly you could sum infinite series and write the equations that showed Achilles passing the tortoise. Suddenly a whole range of calculations that had either been impossible or intractable became trivial.

Calculus was a watershed invention for mathematics. It opened up vistas that we have yet to fully plumb. It made possible things like Newtonian mechanics, Maxwell’s equations, special and general relativity and quantum mechanics. It supports the entire framework of our modern world. We need a watershed like that for software.

If you listen to my keynote: Twenty-Five Zeros you’ll hear me go on and on about how even though software has changed a lot in form over the last fifty years, it has changed little in substance. Software is still the organization of sequence, selection, and iteration.

For fifty years we have been inventing new languages, notations, and formulations to manage Sequence, Selection, and Iteration (SSI). Structured Programming is simply a way to organize SSI. Objects are another way to organize SSI. Functional is still another. Indeed, almost all of our software technologies are just different ways of organizing Sequence, Selection, and Iteration.

This is similar to Algebra in the days before calculus. We knew how to solve linear and polynomial equations. We knew how to complete squares and find roots. But in the end it was all just different ways to organize adding. That may sound simplistic, but it’s not. Subtracting is just adding in reverse. Multiplying is just adding repeatedly. Division is just multiplication in reverse. In short, Algebra is an organizational strategy for adding.

Algebra went through many different languages and notations too, just like software has. Think about Roman and Greek numerals. Think how long it took to invent the concept of zero, or the positional exponential notation we use today.

And then one day Newton saw an apple fall, and he changed the way we thought about mathematics. Suddenly it wasn’t about adding anymore. Suddenly it was about infinities and differentials. Mathematical reasoning was raised to a new order of abstraction.

Where is that apple for software (pun intended)? Where is the Newton or Leibniz who will transform everything about the way we think about software? Where is that long-sought new level of abstraction?

For a while we thought it would be MDA. Bzzzzt, wrong. We thought it would be logic programming like Prolog[1]. Bzzzt. We thought it would be database scripts and 4GLs. Bzzzt. None of those did it. None of those can do it. They are still just various ways of organizing sequence, selection, and iteration.

Some people have set their sights on quantum computing. While I’ll grant you that computation with bits that can be in both states simultaneously is interesting, in the end I think this is just another hardware trick to increase throughput, as opposed to a whole new way to think about software.

This software transformation, whatever it is, is coming. It must come; because we simply cannot keep piling complexity upon complexity. We need some new organizing principle that revamps the very foundations of the way we think about software and opens up vistas that will take us into the 22nd century.

[1] Prolog comes closest to being something more than a simple reorganization of sequence, selection, and iteration. At first look, logic programming seems very different. In the end, however, an algorithm expressed in Prolog can be translated into any of the other languages, demonstrating their eventual equivalence.

C++ Algorithms, Boost and function currying

Posted by Brett Schuchert Sun, 13 Jun 2010 04:41:00 GMT

I’ve been experimenting with C++ using the Eclipse CDT and gcc 4.4. Since I’m a fan of Boost, I’ve been using that as well. I finally got into a realistic use of boost::bind.

I converted this:
int Dice::total() const {
  int total = 0;

  for(const_iterator current = dice.begin();
      current != dice.end();
      ++current)
    total += (*current)->faceValue();

  return total;
}
Into this:
int Dice::total() const {
  return std::accumulate(
      dice.begin(),
      dice.end(),
      0,
      bind(std::plus<int>(), _1, bind(&Die::faceValue, _2))
  );
}

To see how to go from the first version to the final version with lots of steps in between: http://schuchert.wikispaces.com/cpptraining.SummingAVector.
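For comparison, the same fold can be written with a C++11 lambda instead of the nested boost::bind expression. This is only a minimal, self-contained sketch: the Die and Dice classes below are hypothetical stand-ins for the post’s real classes, and full lambda support wasn’t yet available in the gcc 4.4 toolchain mentioned above.

```cpp
#include <numeric>
#include <vector>
#include <cassert>

// Hypothetical stand-ins for the post's Die and Dice classes.
class Die {
public:
    explicit Die(int value) : value_(value) {}
    int faceValue() const { return value_; }
private:
    int value_;
};

class Dice {
public:
    void add(Die* d) { dice.push_back(d); }

    int total() const {
        // The lambda plays the role of the nested boost::bind expression:
        // it adds each die's face value to the running sum.
        return std::accumulate(dice.begin(), dice.end(), 0,
            [](int sum, const Die* d) { return sum + d->faceValue(); });
    }
private:
    std::vector<Die*> dice;
};
```

The lambda says directly what the bind expression says indirectly, which is one reason later C++ standards made lambdas the idiomatic choice for this kind of accumulator.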

This is a first draft. I’ll be cleaning it up over the next few days. If you see typos, or if anything is not clear from the code, please let me know where. Also, if my interpretation of what boost is doing under the covers (there’s not much of that) is wrong, please correct me.

Thanks!

TDD in Clojure

Posted by Uncle Bob Thu, 03 Jun 2010 17:33:00 GMT

OO is a tell-don’t-ask paradigm. Yes, I know people don’t always use it that way, but one of Kay’s original concepts was that objects were like cells in a living creature. The cells in a living creature do not ask any questions. They simply tell each other what to do. Neurons are tellers, not askers. Hormones are tellers, not askers. In biological systems (and in Kay’s original concept for OO), communication was half-duplex.

Clojure is a functional language. Functional languages are ask-don’t-tell. Indeed, the whole notion of a “tell” is to change the state of the system. In a functional program there is no state to change. So “telling” makes little sense.

When we use TDD to develop a tell-don’t-ask system, we start at the high level and write tests using mocks to make sure we are issuing the correct “tells”. We proceed from the top of the system to the bottom of the system. The last tests we write are for the utilities at the very bottom.

In an ask-don’t-tell system, data starts at the bottom and flows upwards. The operation of each function depends on the data fed to it by the lower level functions. There is no mocking framework. So we write tests that start at the bottom, and we work our way up to the top.

Therein lies the rub.
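The contrast might be sketched like this in C++ (the blog’s other language); Account and balanceOf are hypothetical names invented for the illustration, not anything from the post:

```cpp
#include <numeric>
#include <vector>
#include <cassert>

// Tell-don't-ask: the caller tells the object what to do,
// and the state change happens inside the object.
class Account {
public:
    void deposit(int amount) { balance_ += amount; }  // a "tell"
    int balance() const { return balance_; }
private:
    int balance_ = 0;
};

// Ask-don't-tell: there is no state to change; the answer is
// computed from the data fed in by lower-level functions.
int balanceOf(const std::vector<int>& deposits) {
    return std::accumulate(deposits.begin(), deposits.end(), 0);
}
```

In the first style a mock Account could verify that it was told the right things; in the second there is nothing to mock, only inputs and outputs to check.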

Orbit in Clojure

Posted by Uncle Bob Wed, 02 Jun 2010 22:08:56 GMT

I spent the last two days (in between the usual BS) writing a simple orbital simulator in Clojure using Java interop with Swing. This was a very pleasant experience, and I like the way the code turned out – even the Swing code!

You can see the source code here.

Those of you who are experienced with Clojure, I’d like your opinion on my use of namespaces and modules and other issues of style.

Those of you who are not experienced with Clojure should start. You might want to use this application as a tutorial.

And just have fun watching the simulation of the coalescence of an accretion disk around a newly formed star.

A Coverage Metric That Matters

Posted by Michael Feathers Fri, 28 May 2010 10:39:00 GMT

How much test coverage should your code have? 80%? 90%? If you’ve been writing tests from the beginning of your project, you probably have a percentage that hovers around 90%, but what about the typical project? The project which was started years ago, and contains hundreds of thousands of lines of code? Or millions of lines of code? What can we expect from it?

One of the things that I know is that in these code bases, one could spend one’s entire working life writing tests without doing anything else. There’s simply that much untested code. It’s better to write tests for the new code that you write and write tests for existing code you have to change, at the time you have to change it. Over time, you get more coverage, but your coverage percentage isn’t a goal. The goal is to make your changes safely. In a large existing code base, you may never get more than 20% coverage over its lifetime.

Changes occur in clusters in applications. There’s some code that you will simply never change, and there are other areas of code which change quite often. The other day it occurred to me that we could use that fact to arrive at a better metric, one that is a bit less disheartening and also gives us a sense of our true progress.

The metric I’m thinking about is the percentage of commits on files which are covered by tests, relative to all commits (on covered and uncovered files together).

In the beginning, you can expect to have a very low percentage, but as you start to get tests in place for changes that you make, your percentage will rise rapidly. If you write tests for all of your changes, it will continue to rise. At a certain point, you may want to track only a window of commits, say, the commits which have happened only in the last year. When you do this, you can end up very close to 100%.
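As a sketch of how the arithmetic might work, assuming we have already mined per-file commit counts and a has-tests flag out of version control (FileStats and changeCoverage are hypothetical names, not from the post):

```cpp
#include <map>
#include <string>
#include <cassert>

// Per-file data assumed to come from the version-control history
// plus a check for an associated test file.
struct FileStats {
    int commits;
    bool hasTests;
};

// Percentage of commits that touched files covered by tests,
// out of all commits in the window being measured.
double changeCoverage(const std::map<std::string, FileStats>& files) {
    int covered = 0;
    int total = 0;
    for (const auto& entry : files) {
        total += entry.second.commits;
        if (entry.second.hasTests) {
            covered += entry.second.commits;
        }
    }
    return total == 0 ? 0.0 : 100.0 * covered / total;
}
```

Restricting the input to commits from the last N months would give the moving-window variant.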

If you think this through, it might seem a bit dodgy on a couple of fronts. The first is that having tests for code within a file does not mean that that code is completely covered by those tests. But, I often find that the hardest part of getting started with unit testing is getting classes isolated enough from their dependencies to be testable in a harness at all. Another dodgy bit is the fact that once you get some tests in place for a file, all of the commits you’ve ever done for that file count in the percentage. Again, that’s okay for fundamentally the same reason. Once you start getting coverage, you are in a good position with that particular code.

What about the moving window? If you track this metric over, say, the last N months of commits, you’ll progressively lose information about the code that you just aren’t changing. To me, that’s fine. Coverage matters for the code that we are changing.

The metric I’m considering (maybe we can call it ‘change coverage’) gives us information about how tests are really impacting our day to day work. Moreover, it’s likely that it would be a good motivational tool, and really, that’s one of the things that a good metric should be.

Hello World Revisited

Posted by Brett Schuchert Thu, 20 May 2010 15:12:00 GMT

Surprising revelations while taking a TDD approach to writing hello world.

Here it is, nearly 21 years since I started writing in C++ (and more for C), and I realize I’ve been blindly writing main functions that actually do something.

This insanity must stop!

What am I talking about? Read it here.

Clojure Prime Factors

Posted by Uncle Bob Sun, 16 May 2010 00:09:59 GMT

Can anyone create a simpler version of prime factors in Clojure?

(ns primeFactors)

(defn of
  ([n]
    (of [] n 2))
  ([factors n candidate]
    (cond
      (= n 1) factors
      (= 0 (rem n candidate)) (recur (conj factors candidate)
                                     (quot n candidate)
                                     candidate)
      (> candidate (Math/sqrt n)) (conj factors n)
      :else (recur factors n (inc candidate)))))
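For comparison, here is the same trial-division algorithm sketched in C++ (the blog’s other language); primeFactorsOf is a hypothetical name, not from the post:

```cpp
#include <cmath>
#include <vector>
#include <cassert>

// Mirrors the Clojure version: divide out each candidate factor
// repeatedly, and stop once candidate exceeds sqrt(n), at which
// point any remaining n must itself be prime.
std::vector<int> primeFactorsOf(int n) {
    std::vector<int> factors;
    int candidate = 2;
    while (n > 1) {
        if (n % candidate == 0) {
            factors.push_back(candidate);
            n /= candidate;
        } else if (candidate > std::sqrt(n)) {
            factors.push_back(n);  // remaining n is prime
            break;
        } else {
            ++candidate;
        }
    }
    return factors;
}
```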

TDD is wasting time if...

Posted by Brett Schuchert Sat, 08 May 2010 15:36:00 GMT

You have no design sense.

OK, discuss.

Sufficient Design means Damned Good Design.

Posted by Uncle Bob Wed, 28 Apr 2010 23:02:10 GMT

@JoshuaKerievsky wrote a blog entitled “Sufficient Design”.

Josh makes this point:

‘Yet some programmers argue that the software design quality of every last piece of code ought to be as high as possible. “Produce clean code or you are not a software craftsman!”’

He goes on to say:

“Yet ultimately the craftsmanship advice fails to consider simple economics: If you take the time to craft code of little or moderate value to users, you’re wasting time and money.”

Now this sounds like heresy, and I can imagine that software craftsmanship supporters (like me) are ready to storm the halls of Industrial Logic and string the blighter up by his toenails!

But hold on there craftsmen, don’t get the pitchforks out yet. Look at the scenario that Josh describes. It’s quite revealing.

Josh’s example of “not being a craftsman” is his niggling worry over a function that returns a string but in one derivative case ought to return void.

Horrors! Horrors! He left a bit of evolved cruft in the design! Revoke his craftsman license and sic the dogs on him!

Ah, but wait. He also says that he spent a half-day trying to refactor this into a better shape but eventually had to give up.

The fool! The nincompoop! The anti-craftsman! A pox on him and all his ilk!


OK, enough hyperbole. It seems to me that Josh was behaving exactly as a craftsman should. He was worrying about exactly the kinds of things a craftsman ought to worry about. He was making the kinds of pragmatic decisions a craftsman ought to make. He was not leaving a huge mess in the system and rushing on to the next feature to implement. Instead he was taking care of his code. The fact that he put so much energy, time, and thought into such a small issue as an inconsistent return type speaks volumes for Josh’s integrity as a software craftsman.

So, as far as Josh Kerievsky is concerned “Sufficient Design” is “Pretty Damned Good Design”.

Look, all our software will have little tiny warts. There’s no such thing as a perfect system. Craftsmen like Josh work hard to keep those warts as small as possible, and to sequester them into parts of the system where they can do the least harm.

So the only aspect of Josh’s post that I disagree with is his contention that the “craftsman” message is one of unrelenting perfection. Craftsmen are first and foremost pragmatists. They seek very high quality code; but they are not so consumed with perfection that they make foolish economic tradeoffs.
