New century, old mistakes
In the early ’90s, I taught classes in Object-Oriented Analysis and Design, as well as Object-Oriented Design and Programming in C++ and Smalltalk (Java didn’t exist yet). I got burned out teaching both kinds of classes, for different reasons.
The language classes were tough – more C++ than Smalltalk, because there was so much detail and I wanted to cover too much – notice the classic error of assuming that what the instructor thinks is important is what matters for learning. Both classes were hard, though, because getting the ideas of Object Orientation across, in addition to teaching a language and (in the case of Smalltalk) an environment, was a lot. That, coupled with the lack of discipline that comes from not using some kind of unit testing framework, made it tough. (This last point comes from comparing how I teach similar language classes today with how I taught them last century.)
I have fewer problems with that these days: First, I don’t try to cover it all; I just try to cover some of the language’s highlights. More importantly, using a unit test framework makes it easy to experiment.
On the OOAD front, the problem I used to have was trying to “help” the students by pointing out mistakes ahead of time, so they wouldn’t suffer through the same mistakes I had made. That was fundamentally wrong on my part: well-intentioned, but just flat-out wrong.
People need to see mistakes, and even make mistakes, to move forward. I’ve personally learned most of what I know by either making mistakes or observing others’ mistakes – ones I would have committed myself if my customers had not already done so for me. Thank you for that.
I’ve recently caught myself making this same mistake again. In this case it has to do with enforcing code coverage to get people to write tests. I used to think this was a bad idea, and maybe it is, but even bad ideas can have utility – context, context, context.
How can this possibly be a bad idea? Well, when coverage is enforced, developers tend to write tests with few checks that are heavily implementation-oriented. If the production code is written first, which is more common, it often has high path (cyclomatic) complexity. What ends up happening is that developers write complex production code for many “just in case” scenarios, the code has many paths, they are told to get 80% coverage, and so they write tests to verify that all the paths of their code have been executed.
Of course, how many of those paths are essential versus incidental isn’t typically considered. Since the tests are written to drive coverage, and the underlying code is probably overly complex, the tests tend to be heavily implementation-oriented rather than intention-oriented (or scenario-based). The tests are hard to write, harder to read, and even worse to maintain.
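To make the contrast concrete, here is a small, invented sketch of the two styles, in Java with JUnit 4. The Order and DiscountCalculator classes and the discount scenario are hypothetical illustrations of mine, not code from either customer.

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class DiscountExampleTest {

        // A tiny, invented order type, just enough to drive the example.
        static class Order {
            final double total;
            Order(double total) { this.total = total; }
        }

        // Deliberately branchy, "just in case" production code of the kind
        // described above: several paths, most of them incidental.
        static class DiscountCalculator {
            double discountFor(Order order) {
                if (order == null) return 0.0;     // just in case
                if (order.total <= 0) return 0.0;  // just in case
                if (order.total < 100) return 0.0; // below the discount threshold
                return order.total * 0.10;         // 10% off larger orders
            }
        }

        // Coverage-driven, implementation-oriented style: every branch is
        // executed, almost nothing is checked. It earns coverage, not
        // confidence; it passes as long as nothing throws.
        @Test
        public void exercisesAllBranches() {
            DiscountCalculator calc = new DiscountCalculator();
            calc.discountFor(null);
            calc.discountFor(new Order(0));
            calc.discountFor(new Order(50));
            calc.discountFor(new Order(500));
            // no assertions
        }

        // Intention-oriented (scenario-based) style: each test names a
        // business scenario and verifies the behavior that scenario promises.
        @Test
        public void ordersOfOneHundredOrMoreGetTenPercentOff() {
            assertEquals(50.0, new DiscountCalculator().discountFor(new Order(500)), 0.001);
        }

        @Test
        public void smallOrdersGetNoDiscount() {
            assertEquals(0.0, new DiscountCalculator().discountFor(new Order(50)), 0.001);
        }
    }

Both styles can produce essentially the same coverage number; only the second says anything about what the code is supposed to do, which is why it survives reading and maintenance so much better.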
So up until a few months ago, my advice was not to enforce coverage. But a great theory is often destroyed by data. I’ve recently had two customers enforce code coverage on new development. Both of them experienced something like what I expected would happen: they implemented coverage standards, implementation-oriented tests got written, and developers had problems writing unit tests. However, something else, more important, happened. Even though the tests were hard to maintain, many of the developers started to get test-infected. That is, they became convinced that writing unit tests was valuable, and now they needed to know how to do it more effectively; rather than this being the destination, it was a step in an ongoing journey. (Another example of a mistake I make: confusing a process with one of its events.)
To me, someone who is writing tests, thinks they are useful, and is now ready to learn how to write them more effectively is easier to work with than someone who has not written tests, doesn’t think they add value, and certainly doesn’t consider them their responsibility.
So it appears this “bad practice” has merit as an intermediate step in a longer learning arc; one possible path through the murky testing waters. Did I make these kinds of mistakes? I’ve never been in a situation where I was told to hit some percentage of coverage, so not exactly. However, have I written implementation-oriented tests? Yes. Have I written tests with few or no checks? Yes. I still do sometimes, but much less than in the past; the difference is that now I do so consciously rather than for lack of options.
As I see it, I repeated several mistakes I’ve made before. First, trying to “help” people by giving advice that would let them avoid the mistakes I’ve made myself. Maybe those mistakes are a necessary rite of passage for some people. Another mistake was thinking of the learning experience as an event rather than a process. Finally, while I might be tangentially involved, ultimately someone else’s learning is their learning, not mine. They need to do what they need to do. I need to be available if necessary, but otherwise keep my agenda out of it!
Taken in the context of a class, learning really doesn’t stop when a TDD class, or a coaching engagement, ends. Some might argue that learning doesn’t start UNTIL the TDD class is over – I could make that argument myself. In any case, I was looking at my involvement as “the learning event”. As I write this, I realize just how insane that really is. I wasn’t aware of my mental model until recently; now that I am, hopefully I can fundamentally change how I approach this situation going forward.
What are some other “don’t do’s” you’re aware of that make sense in the context of a process, but not as an end or goal?