Some Rough Draft TDD Demonstration Videos

Posted by Brett Schuchert Wed, 31 Mar 2010 02:01:00 GMT

I’m doing a series of videos on TDD. The ultimate result will be a much more polished version with embedded slides and such. But as part of the development process, I’m creating scratch videos.

Much of what you see in these videos will be in the final versions, but those are far in the future relative to this work.

Hope you find them interesting.

Comments welcome.

Here is what is already available:
  • Getting started
  • Adding Operators
Next up:
  • Removing violation of Open/Closed principle
  • Removing duplication in operations with a combination of the Strategy pattern and the Template Method pattern
  • Adding new operators after the removal of duplication.
  • Reducing coupling by using the Abstract Factory pattern, Dependency Inversion and Dependency Injection
  • Adding a few more operations
  • Allowing the creation of complex “programs” or “macros” by using the Composite pattern – and avoiding the Liskov Substitution Principle violation inherent in the GoF version of the pattern
  • Driving the calculator via FitNesse + Slim

Anyway, that’s the plan. I’ll try to add each of these videos over the next few weeks.
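
For instance, the Strategy + Template Method combination mentioned in the list might look roughly like this. This is only a sketch of the direction; the class names (Operation, BinaryOperation, Plus, Minus) are my invention, not necessarily what the videos will use. Binary operators share their operand handling in a template method and supply only the arithmetic:

```java
import java.math.BigDecimal;
import java.util.Deque;

// Sketch of the planned Strategy + Template Method combination.
// All class names here are invented for illustration.
abstract class Operation {
    abstract void perform(Deque<BigDecimal> stack);
}

// Template Method: popping operands and pushing the result is shared;
// subclasses supply only the actual arithmetic.
abstract class BinaryOperation extends Operation {
    @Override
    void perform(Deque<BigDecimal> stack) {
        BigDecimal rhs = stack.pop();
        BigDecimal lhs = stack.pop();
        stack.push(apply(lhs, rhs));
    }

    abstract BigDecimal apply(BigDecimal lhs, BigDecimal rhs);
}

class Plus extends BinaryOperation {
    @Override
    BigDecimal apply(BigDecimal lhs, BigDecimal rhs) { return lhs.add(rhs); }
}

class Minus extends BinaryOperation {
    @Override
    BigDecimal apply(BigDecimal lhs, BigDecimal rhs) { return lhs.subtract(rhs); }
}
```

Adding a new operator then means adding a new subclass, leaving the existing classes untouched.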

Another Refactoring Exercise: Design Patterns Recommended!-)

Posted by Brett Schuchert Wed, 10 Jun 2009 01:57:00 GMT

Well, the previous exercise was fun, so here’s another one. The following code is taken from the same problem, an RPN calculator. Originally, the interface of the calculator was “wide”: there was a method for each operator, e.g., plus(), minus(), factorial(). In an effort to fix this, a new method, perform(String operatorName), was added, and the interface was gradually fixed to remove those methods.

Changing the calculator API in this way is an example of applying the open/closed principle. However, the resulting code is just a touch ugly (I made it a little extra ugly just for the hack [sic] of it). This code as written does pass all of my unit tests.

Before the code, however, let me give you a little additional information:
  • I changed the calculator to use BigDecimal instead of int
  • Right now the calculator has three operators: +, -, and !
  • Eventually, there will be many operators (50 ish)
  • Right now there are only binary and unary operators; however, there will be other kinds: ternary, quaternary, and others, such as “sum the stack and replace it with just the sum” or “calculate the prime factors of the top of the stack” (taking one value but pushing many values)

So have a look at the following code and then either suggest changes or provide something better. There’s a lot that can be done to this code to make it clearer and make the system easier to extend.

The Perform Method

   public void perform(String operatorName) {
      BigDecimal op1 = stack.pop();

      if ("+".equals(operatorName)) {
         BigDecimal op2 = stack.pop();
         stack.push(op1.add(op2));
         currentMode = Mode.inserting;
      } else if ("-".equals(operatorName)) {
         BigDecimal op2 = stack.pop();
         stack.push(op2.subtract(op1));
         currentMode = Mode.inserting;
      } else if ("!".equals(operatorName)) {
         op1 = op1.round(MathContext.UNLIMITED);
         BigDecimal result = BigDecimal.ONE;
         while (op1.compareTo(BigDecimal.ONE) > 0) {
            result = result.multiply(op1);
            op1 = op1.subtract(BigDecimal.ONE);
         }
         stack.push(result);
      } else {
         throw new MathOperatorNotFoundException();
      }
   }
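
One direction this could go (my sketch, not necessarily the “right” answer): hide each operator behind a common interface and look it up by name, so perform() shrinks to a lookup and adding the remaining ~47 operators never touches an if/else chain. MathOperator and OperatorRegistry are invented names:

```java
import java.math.BigDecimal;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Sketch: each operator becomes a strategy registered by name.
interface MathOperator {
    void apply(Deque<BigDecimal> stack);
}

class OperatorRegistry {
    private final Map<String, MathOperator> operators = new HashMap<>();

    OperatorRegistry() {
        // Same semantics as the original if/else arms.
        operators.put("+", stack -> {
            BigDecimal op1 = stack.pop();
            stack.push(stack.pop().add(op1));
        });
        operators.put("-", stack -> {
            BigDecimal op1 = stack.pop();
            stack.push(stack.pop().subtract(op1));
        });
    }

    MathOperator find(String name) {
        MathOperator operator = operators.get(name);
        if (operator == null)
            throw new IllegalArgumentException(name); // stands in for MathOperatorNotFoundException
        return operator;
    }
}
```

perform() would then reduce to find(operatorName).apply(stack), plus whatever mode handling remains.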

Unlike the last example, I’ll provide the entire class. Feel free to make changes to this class as well. However, for now focus on the perform(...) method.

One note, Philip Schwarz recommended a change to what I proposed to avoid the command/query separation violation. I applied his recommendation before posting this updated version.

The Whole Class

package com.scrippsnetworks.calculator;

import java.math.BigDecimal;
import java.math.MathContext;

public class RpnCalculator {
   private OperandStack stack = new OperandStack();
   private Mode currentMode = Mode.accumulating;

   enum Mode {
      accumulating, replacing, inserting
   };

   public RpnCalculator() {
   }

   public void take(BigDecimal value) {
      if (currentMode == Mode.accumulating)
         value = determineNewTop(stack.pop(), value);

      if (currentMode == Mode.replacing)
         stack.pop();

      stack.push(value);
      currentMode = Mode.accumulating;
   }

   private BigDecimal determineNewTop(BigDecimal currentTop, BigDecimal value) {
      BigDecimal newTopValue = currentTop;
      String digits = value.toString();
      while (digits.length() > 0) {
         newTopValue = newTopValue.multiply(BigDecimal.TEN);
         newTopValue = newTopValue.add(new BigDecimal(Integer.parseInt(digits
               .substring(0, 1))));
         digits = digits.substring(1);
      }

      return newTopValue;
   }

   public void enter() {
      stack.dup();
      currentMode = Mode.replacing;
   }

   public void perform(String operatorName) {
      BigDecimal op1 = stack.pop();

      if ("+".equals(operatorName)) {
         BigDecimal op2 = stack.pop();
         stack.push(op1.add(op2));
         currentMode = Mode.inserting;
      } else if ("-".equals(operatorName)) {
         BigDecimal op2 = stack.pop();
         stack.push(op2.subtract(op1));
         currentMode = Mode.inserting;
      } else if ("!".equals(operatorName)) {
         op1 = op1.round(MathContext.UNLIMITED);

         BigDecimal result = BigDecimal.ONE;
         while (op1.compareTo(BigDecimal.ONE) > 0) {
            result = result.multiply(op1);
            op1 = op1.subtract(BigDecimal.ONE);
         }
         stack.push(result);
      } else {
         throw new MathOperatorNotFoundException();
      }
   }

   public BigDecimal getX() {
      return stack.x();
   }

   public BigDecimal getY() {
      return stack.y();
   }

   public BigDecimal getZ() {
      return stack.z();
   }

   public BigDecimal getT() {
      return stack.t();
   }
}

Strict Mocks and Characterization Tests

Posted by Brett Schuchert Sat, 23 May 2009 01:34:00 GMT

This week I worked with a great group in Canada. This group of people had me using Moq for the first time and I found it to be a fine mocking tool. In fact, it reminded me of why I think the Java language is now far outclassed by C# and only getting more behind (luckily the JVM has many languages to offer).

One issue this group is struggling with is a legacy base with several services written with static APIs. These classes are somewhat large and unwieldy, and would be much improved with some of the following refactorings:
  • Replace switch with polymorphism
  • Replace type code with strategy/state
  • Introduce Instance Delegator
  • Use a combination of template method pattern + strategy and also strategy + composite

This is actually pretty standard stuff and this group understands the way forward. But what of their existing technical debt?

Today we picked one method in particular and attempted to work our way through it. This method was a classic legacy method (no unit tests). It also had a switch on type and then it also did one or more things based on a set of options. All of this was in one method.

If you read Fowler’s Refactoring Book, it mentions a combination of encapsulating the type code followed by replacing switch with polymorphism for the first problem in this method (the switch). We were able to skip encapsulating the type code since we wanted to keep the external API unchanged (legacy code).

So we first created a base strategy for the switch and then several empty derived classes, one for each of the enums. This is a safe legacy refactoring because it only involved adding new code.

Next, we created a factory to create the correct strategy based on the type code and added that to the existing method (we also added a few virtual methods). Again, a safe refactoring since it only involved adding effectively unused code (we did create the factory using nearly strict TDD). Finally, we delegated from the original method to the strategy returned from the factory. Safe again, since we had tested the factory.
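
Their code is C#, but the shape of that refactoring is the same in any OO language. Here is a minimal Java rendering, with all names invented: one strategy per type code, created by a factory, with the original method left free to delegate.

```java
// Invented stand-ins for their type code and behavior.
enum JobType { SIMPLE, BATCH }

abstract class JobHandler {
    abstract String handle();
}

// Started life as empty derived classes; behavior migrates in later.
class SimpleJobHandler extends JobHandler {
    @Override
    String handle() { return "simple"; }
}

class BatchJobHandler extends JobHandler {
    @Override
    String handle() { return "batch"; }
}

class JobHandlerFactory {
    // The only switch on the type code left in the system lives here.
    static JobHandler handlerFor(JobType type) {
        switch (type) {
            case SIMPLE: return new SimpleJobHandler();
            case BATCH:  return new BatchJobHandler();
            default: throw new IllegalArgumentException(type.toString());
        }
    }
}
```

Because all of this is new, effectively unused code until the final delegation step, each step is safe to take in a legacy base.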

So far, so good. But next, we wanted to push the method down to each of the subclasses and remove the parts of the logic that did not apply to each given type. We did a spike refactoring to see what that’d be like and it was at least a little dicey. We finally decided to get the original method under test so that as we refactored, we had the safety net necessary to refactor with confidence.

We started by simply calling the method with nulls and 0 values. We worked our way through the method, adding hand-rolled test doubles until we came across our first static class.

Their current system has DAOs with fully static interfaces. This is something that is tough to fake (well, we were not using an AOP framework, so …). Anyway, this is where we introduced the instance delegator. We:
  • Added an instance of the class as a static member (essentially creating a singleton).
  • Added a property setter and getter (making it an overridable singleton).
  • We then copied the body of the static method into an instance method, which we made virtual.
  • We then delegated the static method to the virtual method.
  • Then, in the unit test, we set the singleton to a hand-coded test double in the setup and reset the singleton in the tear down method.
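
The group’s code is C# with Moq, but the five steps above translate directly. Here is a minimal Java sketch, with an invented CustomerDao class standing in for one of their static DAOs:

```java
// Sketch of Introduce Instance Delegator; CustomerDao is an invented name.
class CustomerDao {
    // Step 1: an instance as a static member (essentially a singleton).
    private static CustomerDao instance = new CustomerDao();

    // Step 2: getter and setter, making it an overridable singleton.
    static CustomerDao getInstance() { return instance; }
    static void setInstance(CustomerDao replacement) { instance = replacement; }

    // Step 4: the original static entry point, kept for existing callers,
    // now delegating to the instance method.
    static String findName(int id) {
        return getInstance().findNameImpl(id);
    }

    // Step 3: the body of the static method, copied into an overridable
    // instance method.
    protected String findNameImpl(int id) {
        return "from the real database: " + id; // imagine a real query here
    }
}
```

Step 5 happens in the test: set the singleton to a test double in setup and reset it in teardown, and the static API never changes.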

We had to do this several times and on the third time (I think it was the third time), the hand-rolled test double would have had to implement several (17ish) methods and it became clear that we were ready to use a mocking framework. They are using Moq so we started using Moq to accomplish the remainder of the mocking.

After some time, we managed to get a test that essentially sent a tracer bullet through one path of the method we wanted to get under test. When the test turned green there was much rejoicing.

However, we had to ask the question: “So what are we testing?” After some discussion, we came up with a few things:
  • This method currently makes calls to the service layers and those calls depend on both an enumeration (replaced with a shallow and wide hierarchy of strategies) and options (to be replaced with a composition of strategies).
  • It also changes some values in an underlying domain object.

So that’s what we needed to characterize.

We discussed this as a group. We wanted a way to report on the actual method calls so we could then assert (or, in Moq parlance, Verify). We looked at using Moq’s callbacks, but it appears those are registered on a per-method basis. We briefly toyed with the idea of using an AOP tool to introduce tracing, but that’s for another time (I’m thinking of looking into it out of curiosity). Instead, we decided that we could do the following:
  • Begin as we already had, get through the method with a tracer.
  • Determine the paths we want to get under test.
  • For each path:
    • Create a test using strict mocks (which fail as soon as an unexpected method is called)
    • Use a Setup to document this call as expected – this is essentially one of the assertions for the characterization test.
    • Continue until we have all the Setups required to get through the test.
    • Add any final assertions based on state-based checks and call VerifyAll on the Moq-based mock object.

This would be a way we could work through the method and characterize it before we start refactoring it in earnest.
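
Moq gives you strict mocks out of the box (MockBehavior.Strict). To show the idea without C#, here is a hand-rolled Java analogue of the two behaviors we relied on: an unexpected call fails immediately, and anything expected but never called fails verification.

```java
import java.util.ArrayList;
import java.util.List;

// Illustration only: the essence of a strict mock, hand-rolled.
// A real test double would forward record() calls from its methods.
class StrictCallRecorder {
    private final List<String> expected = new ArrayList<>();
    private final List<String> actual = new ArrayList<>();

    // Analogous to a Moq Setup: document a call as expected.
    void expect(String call) { expected.add(call); }

    // Called by the test double; fails fast on anything unexpected.
    void record(String call) {
        if (!expected.contains(call))
            throw new AssertionError("unexpected call: " + call);
        actual.add(call);
    }

    // Analogous to VerifyAll: every expectation must have been met.
    void verifyAll() {
        for (String call : expected)
            if (!actual.contains(call))
                throw new AssertionError("expected call never made: " + call);
    }
}
```

Each expectation doubles as one of the assertions of the characterization test, which is exactly how we used the Setups.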

This might sound like a lot of work and it certainly is no cake walk, but all of this work was done by one of the attendees and as a group they certainly have the expertise to do this work. And in reality, it did not take too long. As they practice and get some of the preliminary work finished, this will be much easier.

Overall, it was a fun week. We:
  • Spent time on one project practicing TDD and refactoring to patterns (they implemented 5 of the GoF patterns).
  • Spent time practicing some of Fowler’s refactorings and Feather’s legacy refactorings.
  • Spent a day practicing TDD using mocks for everything but the unit under test. At the end they had a test class, one production class and several interfaces.

In retrospect, the work they did in the first three days was nearly exactly what they needed to practice for the final day of effort. When we started tackling their legacy code, they had already practiced everything they used in getting the method under test.

So overall great week with a fun group of guys in Canada.

I love the 90’s: The Fusion Episode

Posted by Brett Schuchert Wed, 02 Jul 2008 20:55:00 GMT

A few weeks back I was working with a team on the East Coast. They wanted to develop a simulator to assist in testing other software components. The system to be simulated is well described in a specification using diagrams close to the sequence diagrams described in the UML.

In fact, these diagrams were of a variety I’d call “system” sequence diagrams. They described the interaction between outside entities (actors – in this case another system) and the system to be simulated.

This brought me back to 1993, when I was introduced to The Fusion Method by Coleman et al. Before that I had read Booch (1 and 2) and Rumbaugh (OMT), and I honestly didn’t follow much of their material – I had book knowledge but I really didn’t practice it. I always thought that Booch was especially strong in design ideas and notation but weak in analysis. I thought the opposite of Rumbaugh, so the two together + Jacobson with Use Cases and Business Modeling really formed a great team in terms of covering the kinds of things you need to cover in a thorough software process (UP + UML).

But before all that was Fusion.

Several colleagues and I really grokked Fusion. It started with system sequence diagrams showing interactions much like the specification I mentioned above. It also described a difference between analysis and design (and if Uncle Bob reads this, he’ll probably have some strong words about so-called object-oriented analysis; well, this was 15 years ago… though I still think there is some value to be found there). Anyway, this is mostly about system sequence diagrams, so I won’t say much more about that excellent process.

Each system sequence diagram represented a scenario. To represent many scenarios, Fusion offered a BNF-based syntax to express the various scenarios. (I understand this was also because, for political reasons within HP, they were not allowed to include a state model, but I don’t know whether that is true.) For several years I practiced Fusion, and I still often revert back to it if I’m not trying to do anything in particular.

Spending a little time up front thinking a little about the logical interaction between the system and its various actors helps me get a big picture view of the system boundary and its general flow. I have also found it helps others as well, but your mileage may vary.

So when I viewed their protocol specification, it really brought back some good memories. And in fact, that’s how we decided to look at the problem.

(What follows is highly-idealized)

We reviewed their specification and decided we’d try to work through the initialization sequence and then work through one sequence that involved “completing a simple job.” I need to keep this high level to keep the identity of the company a secret.

There was prior work and we kept that in mind, but really we started from scratch. In the very first attempt, some work had been done along the lines of using the Command pattern, so we started there. Of course, once we did our first command, we backed off and went with a more basic design that seemed to fit the complexity a bit better (starting with the Command pattern at the beginning is an example of solution-problemming, to use a Weinberg term – and one of the reasons I’m sometimes skeptical when people start talking in patterns).

We continued working from the request coming into the system and working its way through the system. Along the way, we wrote unit tests, driven by our end goal of trying to complete a simple job and guided by the single responsibility principle. As we thought about the system, there were several logical steps:
  • Receive a message from the outside as some array of bytes
  • Determine the “command” represented by the bytes
  • Process the parameters within the command
  • Issue a message to the simulator
  • Create a logical response
  • Format the logical response into the underlying protocol
  • Send the response back
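
The steps above can be sketched as a tiny pipeline. This is a toy with invented names and a made-up “INIT” command (the real protocol stays undisclosed): bytes come in, a command is identified, a logical response is produced, and it is formatted back into bytes.

```java
import java.nio.charset.StandardCharsets;

// Toy version of the logical steps; every name here is invented.
class SimulatorPipeline {
    byte[] handle(byte[] request) {
        String command = decode(request);   // determine the "command" in the bytes
        String response = process(command); // create a logical response
        return encode(response);            // format it for the underlying protocol
    }

    private String decode(byte[] request) {
        return new String(request, StandardCharsets.US_ASCII).trim();
    }

    private String process(String command) {
        if (command.equals("INIT"))
            return "READY";
        return "UNKNOWN";
    }

    private byte[] encode(String response) {
        return response.getBytes(StandardCharsets.US_ASCII);
    }
}
```

Splitting the steps this way is what let us write unit tests per responsibility, guided by the single responsibility principle.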

At the time, they were considering using JNI, so we spent just over a day validating that we could communicate bi-directionally, maintaining a single process space.

Along the way we moved from using hand-rolled test doubles to using JMock 2 to create mock objects. I mentioned this to a friend of mine, who lamented that there are several issues with a mock-based approach:
  • It is easy to end up with a bunch of tested objects but no fully-connected system
  • Sharing setup between various mocks is difficult and often not done so there’s a lot of violation of DRY
  • You have to learn a new syntax

We accepted learning a new syntax because it was deemed less painful than maintaining the existing hand-rolled test doubles (though there are several reasonable solutions for that; ask if you want to know what they are). There is the issue of sharing setup on mocks, but we did not have enough work yet to really notice that as a problem. However, they were at least aware of it, and we briefly discussed how to share common expectation-setting (it’s well supported).

Finally, there’s the issue of not having a fully connected system. We knew this was an issue so we started by writing an integration test using JUnit. We needed to design a system that:
  • Could be tested up to but excluding the JNI stuff
  • Could be configured to stub out JNI or use real JNI
  • Was easily configurable
  • Was automatically configured by C++ (since it was a C++ process that was started to get the whole system in place)

We designed that (a 15-minute white-board session), coded it, and ended up with a few integration tests. Along the way, we built a simple factory for creating the system fully connected. That factory was used both in tests and by the JNI-based classes to make sure we had a fully-connected system when it was finally started by C++.

Near the end, we decided we wanted to demonstrate asynchronous computation, which we did using tests. I stumbled a bit but we got it done in a few hours. We demonstrated that the system receiving messages from the outside world basically queued up requests rather than making the sender wait synchronously (we demonstrated this indirectly – that might be a later blog post – let me know if you’re interested).
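
Here is a sketch of how that queued-up behavior can be shown indirectly (not their actual test; names invented): the receiving side only enqueues, so receive() returns immediately, and a worker drains the queue on its own schedule.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Illustration of the asynchronous receive-then-process split.
class RequestReceiver {
    private final BlockingQueue<String> pending = new LinkedBlockingQueue<>();

    // Called from the outside world; must not block on processing,
    // so it only enqueues the request.
    void receive(String request) {
        pending.add(request);
    }

    // Run separately (e.g. on a worker thread) to do the real work.
    List<String> drain() {
        List<String> processed = new ArrayList<>();
        pending.drainTo(processed);
        return processed;
    }
}
```

The indirect evidence is that the sender’s call completes before any processing happens, while everything it sent shows up later when the queue is drained.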

By the way, that was the first week. These guys were good and I had a great time.

There was still a little work to be done on the C++ side and I only had a week, so I asked them to keep me posted. The following Tuesday they had the first end-to-end interaction, system initialization.

By Wednesday (so 3 business days later), they had a complete demonstration of end-to-end interaction with a single, simple job finishing. Not long after that they demonstrated several simple jobs finishing. The next thing on their list? Completing more complex jobs, system configuration, etc.

However, it all goes back to having a well-defined protocol. After we had one system interaction described end-to-end, doing the next thing was easier:
  • Select a system interaction
  • List all of the steps it needs to accomplish (some requests required a response, some did not)
  • Write unit tests for each “arm” of the interaction
So they had a very natural way to form the backlog:
  • Select a set of end-to-end interactions that add value to the user of the system
They also had an easy way to create a sprint backlog:
  • For each system-level interaction, enumerate all of its steps and then add implementing those steps as individual backlog items

Now some of those individual steps will end up being small (less than an hour) but some will be quite large when they start working with variable parameters and commands that need to operate at a higher priority.

But they are well on their way and I was reminded of just how much I really enjoyed using Fusion.