Which came First?

Posted by Brett Schuchert Thu, 02 Aug 2007 21:43:00 GMT

I like mnemonics. Many years ago, a colleague of mine, James, gave me a way to remember something. First I’ll give you the letters:
  • CCCCDPIPE
These letters are a way to remember Craig Larman’s GRASP patterns (version 1). That, plus knowing that in the second version of the GRASP patterns Craig replaced the D with a P (D: Don’t talk to strangers, i.e. the Law of Demeter; P: Protected Variation). By the way, here are all of them:
  • Coupling (low)
  • Cohesion (high)
  • Creator
  • Controller
  • Don’t talk to strangers (mentioned above and replaced with Protected Variation)
  • Polymorphism
  • Indirection
  • Pure Fabrication
  • Expert
How do I remember the letters? Well, walk through this bad pun with me:
  • CCCC (4 c’s, foresees)
  • D (the)
  • PIPE (pipe)

So who foresees the pipe? The Psychic Plumber.

The Psychic Plumber??? I know, it’s awful. However, I heard it once in something like 1999 and I’ve never forgotten it.

That leads me to some other oldies but goodies: SOLID, INVEST, SMART, and a relative newcomer: FIRST. While these are actually acronyms (not just abbreviations but real, pronounceable words you can find in a dictionary), they are also mnemonics.

You might be thinking otherwise. Typically what people call acronyms are actually just abbreviations. And in any case, they tend to obfuscate rather than elucidate. However, if you’ll lower your guard for just a few more pages, you might find some of these helpful.

Your software should be SOLID:
  • Single Responsibility
  • Open/Closed Principle
  • Liskov Substitution Principle
  • Interface Segregation
  • Dependency Inversion Principle (not to be confused with Dependency Injection)
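To make one of these concrete, here is a minimal Dependency Inversion sketch. All the names in it (Notifier, EmailNotifier, OrderService) are invented for illustration, not from the post:

```java
// A minimal Dependency Inversion sketch. All names here (Notifier,
// EmailNotifier, OrderService) are hypothetical.
interface Notifier {
    String send(String message);
}

// A low-level detail that implements the abstraction.
class EmailNotifier implements Notifier {
    public String send(String message) {
        return "email: " + message;
    }
}

// The high-level policy depends only on the Notifier abstraction,
// so it never changes when we swap email for SMS, logging, etc.
class OrderService {
    private final Notifier notifier;

    OrderService(Notifier notifier) {
        this.notifier = notifier;
    }

    String placeOrder(String item) {
        return notifier.send("ordered " + item);
    }
}

class DipDemo {
    public static void main(String[] args) {
        OrderService service = new OrderService(new EmailNotifier());
        System.out.println(service.placeOrder("book")); // prints "email: ordered book"
    }
}
```

The point is the direction of the dependency: OrderService knows only the interface, so the concrete sender can vary without touching the high-level code.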

I think we should change the spelling to SOLIDD and tack on a D for “Demeter, the Law of.” But that’s just me. Of course, if we do this, then it is no longer technically an acronym. That’s OK, because my preference is for mnemonics, not acronyms.

When you’re working on your user stories, make sure to INVEST in them:
  • Independent
  • Negotiable
  • Valuable
  • Estimable
  • Small
  • Testable
And if you’re working on tasks, make sure to keep them SMART:
  • Specific
  • Measurable
  • Achievable
  • Relevant
  • Time-boxed
But what is FIRST? Well, have you ever heard of Test-Driven Development (TDD)? Some people call it by an older name, Test-First Programming (TFP). I don’t think of these as the same thing, but that’s neither here nor there. What if the F in TFP really stands for FIRST (notice how I sneaked in the capitals)? If so, then here’s one possible interpretation:
  • Fast – tests should run fast. We should be able to run all of the tests in seconds or minutes. Running the tests should never feel like a burden. If a developer ever hesitates to execute tests because of time, then the tests take too long.
  • Isolated – A test is a sealed environment. Tests should not depend on the results of other tests, nor should they depend on external resources such as databases.
  • Repeatable – when a test fails, it should fail because the production code is broken, not because some external dependency failed (e.g. database unavailable, network problems, etc.)
  • Self-Validating – manual interpretation of results does not scale. A test should itself verify that it passed or failed. Going one step further, a test should report nothing but success or failure.
  • Timely – tests should be written concurrently with (and preferably before) the production code.
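As a sketch of what these properties buy you, here is a plain-Java test of a hypothetical discountedPrice function (no test framework, so it runs standalone; the function and its numbers are invented). Because the function is pure, the test is fast, isolated, and repeatable; the assertion makes it self-validating:

```java
// A plain-Java sketch of a FIRST-style test. discountedPrice is a
// hypothetical function; no framework is needed to run this.
class FirstExample {
    // Pure function: no database, no network, no shared state.
    // That alone buys Fast, Isolated, and Repeatable.
    static int discountedPrice(int price, int percentOff) {
        return price - (price * percentOff / 100);
    }

    public static void main(String[] args) {
        // Self-Validating: the test decides pass/fail by itself and
        // reports nothing but the verdict.
        if (discountedPrice(200, 10) != 180) {
            throw new AssertionError("expected 180, got "
                    + discountedPrice(200, 10));
        }
        System.out.println("PASS");
    }
}
```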

So where does this acronym come from? A while back, a colleague of mine, Tim Ottinger, and I were working on some course notes. I had a list of four out of five of these ideas. We were working on the characteristics of a good unit test. At one point, Tim said to me “Add a T.”

I can be pretty dense fairly often. I didn’t even understand what he was telling me to do. He had to repeat himself a few times. I understood the words, but not the meaning (luckily that doesn’t happen to other people or we’d have problems writing software). Anyway, I finally typed a “T”. And then I asked him “Why?” I didn’t see the word. Apparently you don’t want me on your unscramble team either.

Well eventually he led me to see the word FIRST and it just seemed to fit (not sure if that pun was intended).

Of course, you add all of these together and what do you get? The best I can come up with is: SFP-IS. I was hoping I could come up with a Roman numeral or something, because then I could say developers should always wear SPF IS – which is true because we stay out of the sun and burn easily. Unfortunately, that did not work. If you look at your phone, you can convert this to the number: 73747.

If there are any numerologists out there, maybe you can make some sense of it.

In any case, consider remembering some of these mnemonics. If you actually do more than remember them and start practicing them, I believe you’ll become a better developer.

What's your unit of measure?

Posted by Brett Schuchert Fri, 20 Jul 2007 14:27:00 GMT

A few weeks back, while teaching a class, a student asked about refactoring. He was skeptical of the whole idea. “We don’t have time to write new functionality, let alone fix code that ‘isn’t broken’.”

After some back-and-forth I asked him this question: What’s your unit of measure for estimating how long a refactoring is going to take?

In retrospect, this is nearly a nonsensical question.

He hemmed and hawed explaining why he could not refactor, why the code was the way it was, etc. After listening to this I then asked the question again: What’s your unit of measure? Is it Weeks, Days, Hours, Minutes?

It took several more rounds of this but I finally got an answer. He said “Days.”

I said something like “Ah-ha! That’s why you cannot refactor, you’re trying to do way too much.”

There are refactorings, and then there are Refactorings (notice the capital R for emphasis). As a matter of course, if I notice:
  • something that’s poorly named
  • a method that’s too long
  • obviously repeated code
  • etc.

I’ll fix it. The work takes a few seconds to minutes and then I’m off. Notice that the unit of measure for estimating isn’t relevant since it takes nearly no time.

Think of it this way. You’re on your way to the kitchen. You haven’t thoroughly cleaned your living room and there’s clutter on the coffee table. On your way to the kitchen, you pick up a few empty glasses and put them in the sink. You’ve just refactored your living room. If you do this every time you move, over time you’ll notice that your living room looks a whole lot better.

If you do this as a habit, your living room will stay cleaner longer – or get dirty slower.

On the other hand, there are those “big refactorings” that get placed on the back burner (backlog) and do not get done until it is impossible to move forward. That’s a whole other level of refactoring and maybe there should be another term for that kind of activity.

This is akin to dusting, vacuuming, mopping, the stuff some people only do when people are coming over. If you do the small stuff out of habit, you’ll be in a better place to do the big stuff when your parents call and say they’re coming over.

Refactoring, cleaning up as you go, is something that should not seem like a lot of work. It should be something to do out of habit. It’s not a separate activity from coding, it is coding – restructuring without changing functionality – organizing.

Think about it. As programmers, all we really do is organize stuff. We write code that stores information, processes it, allows users to type in stuff and get something done but ultimately we organize stuff.

So the next time you notice a badly-named method and you have the refactoring tools to support you, try a 30-second refactor. It will not seem like much, but if you do it a few times every day, and then get your team to start doing it too, over time you’ll notice big improvements.
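Here is a hypothetical before-and-after of such a 30-second refactor (the invoice/tax example is invented, not from the post). The behavior is identical; only the names now say what the method does:

```java
// A hypothetical 30-second rename refactoring. Behavior is unchanged;
// only the names improve. The tax example is invented.
class InvoiceBefore {
    // Before: "proc2" and "amt" force the reader to guess.
    double proc2(double amt) {
        return amt * 1.08;
    }
}

class InvoiceAfter {
    // After: the method name carries the intent.
    double totalWithSalesTax(double subtotal) {
        return subtotal * 1.08;
    }
}
```

With a refactoring tool, the rename is a single automated operation, which is exactly why it costs seconds rather than days.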

Just remember this:
  • Nothing + Nothing + Nothing eventually equals something…

Creationism versus Natural Selection

Posted by Brett Schuchert Wed, 20 Jun 2007 20:34:00 GMT

In the beginning the Product Owner created the Product Feature Request (PFR). And the Product Feature Request was without form, and void; and darkness was upon the face of the deep. And the Spirit of the Requirements Expert (RE) moved upon the face of the PFR. And the RE said, let there be requirements: and there were requirements. And the RE saw the requirements, that they were good: and the RE divided the light from the darkness. The RE called the light the functional requirements and the darkness was called quality attributes. And the functional requirements and the quality attributes were the first four months.

The Architect said, let there be a Software Specification Document (SSD) in the midst of the deep, and let it divide this software from that software. And the Architect made the SSD, and divided the software that already existed from the software that was to be, and it was so.

You can probably guess where this is going…

On the Seventh day, the software shipped and all was good.

Did you ever notice that there are a whole lot of words in Genesis that assume a whole lot of understanding? Like Waters, Heaven, Earth. Obviously these are understood a priori.

If you tried to write some code for Genesis, you would have some gaping holes. Turns out, Genesis does not include a Data Dictionary. Everything was created in its perfect form in those mythical 7 days. No chance to go back and change anything.

Sometimes it seems like this is how we’re expected to develop software. Even though we keep trying to do this, on average, this approach just doesn’t hack it.

Let’s call this the Creationism approach to software development. Everything is known up front and created in its perfect form.

If only it were so simple or even possible.

This is how the coding genesis chapter would have been written if we were using TDD:

In the beginning, the Product Owner identified a Need. And the Need was without form, and void; and darkness was upon the face of the deep. And the Spirit of the Team (Team) moved upon the face of the Need and asked some questions.

And the Product Owner added clarity to the need and the Team made the List and it was OK. The team worked on the List by first expressing What They Wanted (WTW), and they built the Software to make WTW be true. Of course the List was not perfect, so the team maintained it. When they thought they were done with the List, they showed what they had to the Product Owner.

Repeat.

In this approach, which we’ll call the Natural Selection approach, we try things. Unlike an omniscient Deity, the Product Owner generally does not know exactly what is needed; they might not even know what some words mean. The Product Owner and Team work together to develop a solution. As the team works with the Product Owner, I’d hope they develop a suite of automated tests, preferably taking a Test-First approach.

In a Creationism-based system, we clearly know what we’re doing and we even know what all the words mean. Therefore, it is OK to wait until the end to test. The level of clarity is such (again, we have omniscience working for us) that the only reason we need to test is because mere humans are solving a problem. And as we know, while an omniscient Deity does not make mistakes, we poor humans do.

Not so when using the Natural Selection approach. Whereas Creationism uses a priori knowledge to build the system – knowledge which, by its very nature, is infallible – Natural Selection is all about trying things out. We build our understanding and express it in tests. As the system grows, we make sure that we do not break the tests (these are both the natural resources and the environment). In some cases, the tests no longer work (the natural resources dry up or the environment changes), and we need to change them to accommodate a better understanding of our system. The production code (that which is being selected) either survives under the tests or it gets changed. Sometimes new forms of the production code can exist in the same environment (refactoring).

So if you had your druthers, would you work on a Creationism-based project or a Natural-Selection-based project?

Accepting what you're given

Posted by Brett Schuchert Thu, 31 May 2007 05:51:00 GMT

I’m working with Bob Koss this week and, being the new kid on the block, ran to get lunch. You know what Bob did? He created a user story for me on an index card. Really. Here’s what the card said:
  • 6” turkey on Wheat

Sure it could have said something like “As an instructor, I want a sandwich I can eat so that I won’t pass out in the afternoon from either lack of eating or too many carbs.” He didn’t use the “right” form. On the other hand, he got the idea across. However, he wasn’t done just yet.

He added a few acceptance criteria (or as Mike Cohn is now saying, “Conditions of Satisfaction” [I like that better]).
  • Extra tomatoes
  • Extra jalapenos
  • Nothing else – written but not recorded…
You might argue that the following things were actually conditions of satisfaction as well:
  • 6”
  • Turkey
  • Wheat bread

I’d argue back: I satisfied the customer, so bug off. Besides, that argument misses the point of a card as a placeholder for a conversation – which we had.

Even with all of these written details, the request could seem a bit ambiguous, right? Was this in addition to other stuff? Is there a “regular or default” amount of stuff to which we’re adding additional constraints? Since we had a conversation, it was clear to the implementer at the time (me). And since I was able to implement the user story before I lost the context, nothing more was required.

So off I went. When I finally found the place, I had some time to wait in line. I didn’t notice any jalapenos and I started to wonder what I should do. I then thought, well, wouldn’t he like some other stuff as well? I mean, I like a lot of stuff on my sandwiches, doesn’t he? Shouldn’t every sandwich have ranch dressing? What could I substitute for the missing jalapenos? Maybe I could give him a call. Of course, did I mention I’m in Houston? In the basement of a skyscraper, ordering food? Much like in the North, where buildings are connected to keep out the cold, the buildings downtown are connected too – here it gets hot, but more importantly, sticky. Turns out I didn’t have cell service.

Anyway, what was I doing in line? I was gold-plating his lunch. The only time you should consume gold is when you’re drinking Goldschlager. When is that? Seldom – maybe during college.

What do you do in this situation? You might be the kind of person who, like me, likes to leave things open until well after it’s time to make a decision. My motto is “No matter how important it is today, it’ll be more important tomorrow. Until it isn’t.” You might be the kind of person who tends to prefer things be resolved. (If you’re familiar with the MBTI, that’s Perceiving versus Judging – but be careful of those words, their definitions in that context are idiosyncratic.)

Well, given our conversation, the card in my back pocket, and the confirmation in the form of mostly written conditions of satisfaction, I just went with it. Besides, I was pretty sure they had jalapenos. I’m in Texas; three things are required:
  • Cowboy hats
  • Bolo ties
  • Jalapenos

How often have you been tempted to add ranch dressing, oregano, black olives and bell peppers to your code?

Uncovered not always bad

Posted by Brett Schuchert Thu, 31 May 2007 05:10:00 GMT

Let’s say for the sake of argument that you are:
  • You are using a code coverage tool such as Cobertura or Emma
  • You are actually looking at the coverage of your tests (say you’re looking for stale code in your unit tests – [and you’re thinking, gosh, I wish I had some unit tests so I could have some unused code in them])
  • You are testing that the unit under test generates an exception
In JUnit 4 we might write the following:
@Test(expected=RuntimeException.class)
public void methodThatWeExpectAnException() {
    throw new RuntimeException();
}

This test will pass. Yes it’s trivial, of course it would pass. (In reality the single line of code would instead send a message to some object that ultimately would need to generate a RuntimeException for a “real” test to pass.) Fine. That’s not the point.

So what’s the problem with this? Nothing, except that some coverage tools will report the last “line” (the close curly-brace) as not being covered since we did not exit the method cleanly.

Here’s a way to rewrite the above test so that you can assure coverage:
@Test
public void methodThatWeExpectWillThrowAnException() {
    boolean expectedThrown = false;

    try {
        throw new RuntimeException();
    } catch (RuntimeException e) {
        expectedThrown = true;
    }

    assertTrue(expectedThrown);
}

This version is a bit longer, isn’t it?

Here are some comments I’d like to hear from you:
  • Is it any better?
  • Does it express our intent any better?
  • Isn’t it just silly to run coverage tools on your test code?
  • Is anybody having Pascal flashbacks? (If you don’t get this question…you poor &*$^@~).

Pack rats are running the asylum

Posted by Brett Schuchert Wed, 16 May 2007 15:58:00 GMT

Legacy systems suffer as much from Requirements Debt as design debt. People tend to not want to lose anything so “new” code meant to replace old code has the same stuff that nobody understands. Why? Because pack rats don’t want to lose anything.

I’ve done a bit of work with legacy systems. My most recent work over the past four years involved working with a group of COBOL developers and mentoring them (sometimes gently – mostly not so much) into J2EE development, which later became Spring-Hibernate based. The foreign element driving this transformation was the end of life for a mainframe. That and the difficulty of finding COBOL programmers led to what has been called a rewrite in Java.

Four+ years later, many of the systems are rewritten and put into production. What has been one of our biggest challenges? Dealing with “requirements” that simply were not understood but maintained. Much of the legacy code ended up being ported instead of rewritten. Porting to Java you say? Yes, ported.

In my mind, taking a big block of conditional logic written in COBOL, “reverse-engineering” it to “understand” it, and then writing the same Java code is porting, not rewriting. Why all of this work? Because people did not know what the old code did but they sure didn’t want to “lose anything.”
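As a hypothetical illustration of the difference (the shipping rule and every name below are invented), the “ported” version mechanically reproduces COBOL-style nested conditionals, while the “rewritten” version states the rule the old code was enforcing. Both compute the same answers:

```java
// Hypothetical example of porting versus rewriting. The rule and the
// names are invented; both versions return the same result.
class ShippingPorted {
    // A mechanical port of COBOL-style conditional logic: the shape of
    // the old code survives, but not its intent.
    static int rate(int wt, int cd) {
        int r = 0;
        if (cd == 1) {
            if (wt > 50) { r = 20; } else { r = 10; }
        } else {
            if (wt > 50) { r = 40; } else { r = 25; }
        }
        return r;
    }
}

class ShippingRewritten {
    static final int DOMESTIC = 1;
    static final int HEAVY_THRESHOLD_LBS = 50;

    // A rewrite states the "what": domestic parcels ship cheaper, and
    // heavy parcels cost more. Same numbers, recovered meaning.
    static int rate(int weightLbs, int regionCode) {
        boolean heavy = weightLbs > HEAVY_THRESHOLD_LBS;
        if (regionCode == DOMESTIC) {
            return heavy ? 20 : 10;
        }
        return heavy ? 40 : 25;
    }
}

class PortDemo {
    public static void main(String[] args) {
        // The two versions agree; only one of them explains itself.
        System.out.println(ShippingPorted.rate(60, 1) == ShippingRewritten.rate(60, 1)); // prints "true"
    }
}
```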

Why didn’t we just take the time to understand what the code really meant or ask “business experts” to tell us what to do? Well, because they don’t exist, really. All the people who should be business experts know the business, but know it so well that they understand the implementation to some level of detail. Since their understanding of what the system should do is in terms of how the system currently does things, writing a new system becomes a port.

Not wanting to miss anything because it might be important (and it probably is), and not having time to figure it out, resulted in keeping a lot of code that nobody really understood. By “kept” I mean “rewrote the same logic in Java.”

So maybe Requirements Debt is a bit of a misnomer. Maybe it’s more “solution-oriented understanding” or some such thing. The bottom line, however, is that I’ve seen this again and again. It’s time to rewrite the code. Nobody really understands it or its original intent. Reverse-engineer the “meaning” from the code and infer the “what” from the “how”.

Unfortunately, there are typically many more hows than whats, and the mapping certainly isn’t direct in any sense. So working backwards from a solution to understand what the system was supposed to do is like trying to move from a phenomenological view to an ontological view. That’s not something that can be solved by hard work.

Is it all doom and gloom? Of course not. But that’s for another time.
