You Don't Know What You Don't Know Until You Take the Next Step

Posted by Bob Koss Tue, 13 Nov 2007 16:11:00 GMT

I was recently teaching our brand-new Principles, Patterns, and Practices course (https://objectmentor.com/omTraining/course_ood_java_programmers.html) and was starting the section on The Single Responsibility Principle.

I had this UML class diagram projected on the screen:

Employee
+ calculatePay()
+ save()
+ printReport()

I asked the class, “How many responsibilities does this class have?” Those students who had the courage to answer a question out loud (sadly, a rare subset of students) all mumbled, “Three.” I guess that makes sense: three methods, three responsibilities. My friend and colleague Dean Wampler would call this a first-order answer (he’s a physicist and that’s how he talks ;-) ). The number increases as we dig into the details. I held one finger in the air and said, “It knows the business rules for how to calculate its pay.” I put up a second finger and said, “It knows how to save its fields to some persistence store.”

“I happen to know that you folks use JDBC to talk to your Oracle database.” Another finger for, “It knows the SQL to save its data.” Another finger for, “It knows how to establish a database connection.” My entire hand was up for, “It knows how to map its fields to SQLStatement parameters.” I started working on my other hand with, “It knows the content of the report.” Another finger for, “It knows the formatting of the report.” If this example were a real class from this company’s code base, I knew I’d be taking off my shoes and socks.
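To make those fingers concrete, here is a sketch of what such a class might look like in Java, given the JDBC/Oracle setup mentioned above. The fields, SQL, and connection details are invented for illustration; the point is how many reasons to change end up living in one class:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    // Hypothetical expansion of the Employee class from the diagram.
    public class Employee {
        private String id;
        private String name;
        private double hoursWorked;
        private double hourlyRate;

        // Finger one: the business rules for calculating pay.
        public double calculatePay() {
            double overtime = Math.max(0, hoursWorked - 40) * hourlyRate * 0.5;
            return hoursWorked * hourlyRate + overtime;
        }

        // Fingers two through five: establishing a connection, knowing
        // the SQL, and mapping fields to statement parameters.
        public void save() throws SQLException {
            Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@dbhost:1521:HR", "scott", "tiger");
            try {
                PreparedStatement stmt = conn.prepareStatement(
                        "UPDATE employee SET name = ?, hours = ?, rate = ? WHERE id = ?");
                stmt.setString(1, name);
                stmt.setDouble(2, hoursWorked);
                stmt.setDouble(3, hourlyRate);
                stmt.setString(4, id);
                stmt.executeUpdate();
            } finally {
                conn.close();
            }
        }

        // The other hand: the content and the formatting of the report.
        public void printReport() {
            System.out.printf("Employee: %-20s Pay: $%,.2f%n", name, calculatePay());
        }
    }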

Not that my answer was any better than the students’; given the information at hand there couldn’t be a right answer, because I hadn’t provided any context for my question. I found our different answers interesting, though. This particular company would have a design phase for a project in which UML diagrams were produced and discussed. How can any reviewer know whether a class diagram is “right”?

I have this belief (hang-up?) that you don’t really know what you don’t know until you take the next step and actually use what you currently have. Looking at UML alone, we can’t really say that a design is going to work, or that it satisfies the Principles of Object Oriented Design, until we take the next step, i.e., write some code and see whether it works or not. The devil is in the details, and those devilish dirty details just can’t be seen in a picture.

Let’s take a step back. Before there is UML specifying a design, there must have been requirements stating a problem that the design addresses. Have we captured all of the requirements? Are they complete? Are they accurate? Are they unambiguous? How do you know? I believe that you don’t know, and worse, you don’t even know what you don’t know. You don’t know until you take that next step and try to design a solution. It’s only during design that phrases like, “What’s supposed to happen here?” or, “That doesn’t seem to have been addressed in the spec,” are heard. You don’t know what you don’t know until you take that next step.

It is very easy for everybody on a project to believe that they are doing good work and that the project is going according to plan. If we don’t know what we don’t know, it’s hard to know if we’re on the right track. During requirements gathering, business analysts can crank out user stories, use cases, functional requirements, or whatever artifacts the process du jour dictates they produce. Meetings can be scheduled and documents can be approved once every last detail has been captured, discussed to death, and revised to everybody’s liking. Unfortunately, until solutions for these requirements are designed, they are but a dream. There is no way to predict how long implementation will take, so project plans are really interpretations of dreams.

The same danger exists during design. Architects can be cranking out UML class diagrams, sequence diagrams, and state transition diagrams. Abstractions are captured, Design Patterns are applied, and the size of the project documentation archive grows. Good work must be happening. But are the abstractions “right”? Can the code be made to do what our diagrams require? Is the design flexible, maintainable, extensible, testable (add a few more of your favorite -able’s)? You just don’t know.

The problem with Waterfall, or at least the problem with the way most companies implement it, is that there either isn’t a feedback mechanism or the feedback loop is way too long. Projects are divided into phases and people feel that they aren’t allowed to take that crucial next step because the next step isn’t scheduled until next quarter on somebody’s PERT chart. If you don’t know what you don’t know until you use what you currently have, and the process doesn’t allow you to take the next step (more likely, somebody’s interpretation of the process doesn’t let you take it), then you don’t have a very efficient process in place.

In order not to delude myself that I am on the right track when in reality I’m heading down a blind alley, I would like to know the error of my ways as quickly as possible. Rapid feedback is a characteristic of all of the Agile methodologies. By learning what’s missing or wrong with what I’m currently doing, I can make corrections before too much time is wasted going in the wrong direction. A short feedback loop minimizes the amount of work that I must throw away and do again.

I’m currently working with a client who wants to adopt Extreme Programming (XP) as their development methodology. What makes this difficult is that they are enhancing legacy code and the project members are geographically distributed. The legacy-code aspect means that we have to figure out how we’re going to write tests and hook them into the existing code. The fact that we’re not all sitting in the same room means that we need more written documentation. We don’t know the nature of the test points in the system, nor do we know what to document or how much detail to provide. We can’t rely mostly on verbal communication, but we don’t want to go back to writing detailed functional specs either. There are many unknowns, and what makes this relevant to this discussion is that we don’t even know all of the unknowns. Rapid feedback has to be our modus operandi.
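On the legacy-code front, one way to start probing for test points is a characterization test: a test that pins down what the code does today, whatever that happens to be, so that later changes can be detected. A minimal sketch in JUnit; LegacyPayroll and its calculate() method are invented stand-ins for the client’s actual code:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // Characterization test: record the system's current behavior.
    // LegacyPayroll is a hypothetical stand-in for a real legacy class.
    public class LegacyPayrollCharacterizationTest {
        @Test
        public void pinsDownWhatCalculateDoesToday() {
            LegacyPayroll payroll = new LegacyPayroll();
            // The expected value is whatever the code produces right now,
            // not necessarily what the spec says it should produce.
            assertEquals(1523.50, payroll.calculate("E-42"), 0.01);
        }
    }

A handful of tests like this won’t prove the code is right, but they give us a place to hook in and a tripwire for the changes we make next.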

An XP project is driven by User Stories developed by the Customer Team, composed of Business Analysts and QA people. A User Story, by definition, must be sized so that the developers can complete it within an iteration. I have a sense of how much work I personally can do in a two-week iteration, but I’m not the one doing the work, and I don’t know how much of an obstacle the existing code is going to be. The Customer Team could go off and blindly write stories, but there would be a high probability that we’d have to rewrite, rework, split, and combine them once the developers saw the stories and gave us their estimates. To minimize the amount of rework, I suggested that the Customer Team write twenty or so stories and then meet with the developers to go over them and get estimates.

My plan to get a feedback loop going on user story development worked quite well. The Customer Team actually erred on the side of stories that were too small. The developers wanted to put estimates of 0.5 on some of the stories (I have my teams estimate on a scale of 1–5; I’ll write up a blog entry on the estimation process I’ve been using), so we combined a few of them and rewrote others to tackle a larger scope. We took the next step in our process, learned what we didn’t know, took corrective action, and moved forward.

Writing Customer Acceptance Tests didn’t go quite as smoothly, but it is yet another example of learning what didn’t work and making corrections. I advise my clients to write tests for their business logic and specifically not to write business tests through the user interface. Well, guess where a lot of the business logic resided – yep, in the user interface. We ran into the situation where the acceptance tests passed, but when the application was actually used through the user interface, it failed to satisfy the requirements. I’m very happy that we had a short feedback loop in place that revealed our tests weren’t really testing anything interesting, before the Customer Team had written too many FitNesse tests and the Development Team too many test fixtures.
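A fixture that follows this advice calls the business logic directly and never touches the UI. Here is a minimal sketch; the pay calculation is inlined so the example stands alone, where a real fixture would delegate to the team’s business object:

    import fit.ColumnFixture;

    // Hypothetical FitNesse fixture: the test exercises business logic
    // directly, bypassing the user interface entirely. Backed by a wiki
    // table such as:
    //
    //   |PayFixture|
    //   |hours worked|hourly rate|gross pay()|
    //   |40|10.00|400.00|
    //   |45|10.00|475.00|
    public class PayFixture extends ColumnFixture {
        public double hoursWorked;  // input column
        public double hourlyRate;   // input column

        // Output column: FIT compares the return value to the table.
        public double grossPay() {
            // A real fixture would delegate to the business object;
            // inlined here so the sketch stands alone.
            double overtime = Math.max(0, hoursWorked - 40) * hourlyRate * 0.5;
            return hoursWorked * hourlyRate + overtime;
        }
    }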

Feedback is good. Rapid feedback is better. Seek it whenever you can. Learn what you don’t know, make corrections, and proceed.