I love the 90's: The Fusion Episode

Posted by Brett Schuchert Wed, 02 Jul 2008 20:55:00 GMT

A few weeks back I was working with a team on the East Coast. They wanted to develop a simulator to assist in testing other software components. The system to be simulated was well described in a specification using diagrams close to the sequence diagrams described in the UML.

In fact, these diagrams were of a variety I’d call “system” sequence diagrams. They described the interaction between outside entities (actors – in this case another system) and the system to be simulated.

This brought me back to 1993, when I was introduced to The Fusion Method by Coleman et al. Before that I had read Booch (1 and 2) and Rumbaugh (OMT), and I honestly didn’t follow much of their material – I had book knowledge but I really didn’t practice it. I always thought that Booch was especially strong in design ideas and notation but weak in analysis. I thought the opposite of Rumbaugh, so the two together + Jacobson with Use Cases and Business Modeling really formed a great team in terms of covering the kinds of things you need to cover in a thorough software process (UP + UML).

But before all that was Fusion.

Several colleagues and I really grokked Fusion. It started with system sequence diagrams showing interactions much like the specification I mentioned above. It also described a difference between analysis and design (and if Uncle Bob reads this, he’ll probably have some strong words about so-called object-oriented analysis; well, this was 15 years ago… though I still think there is some value to be found there). Anyway, this post is mostly about system sequence diagrams, so I won’t say much more about that excellent process.

Each system sequence diagram represented a scenario. To represent many scenarios, Fusion offered a BNF-based syntax to express those various scenarios. (I have heard this was partly because, for political reasons within HP, they were not allowed to include a state model, but I don’t know whether that is true.) For several years I practiced Fusion, and I still often revert to it when I’m not trying to follow any particular method.
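To give the flavor – and this is from memory and only approximate, not an excerpt from the Fusion book – a life-cycle expression composed named scenarios with regular-expression-like operators, something along these lines:

```
lifecycle Simulator :
    Initialize . ( SimpleJob | ComplexJob )* . Shutdown
```

Here `.` means sequencing, `|` alternation, and `*` repetition, so one compact expression covers the whole family of allowed scenario orderings that a pile of individual sequence diagrams only samples.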

Spending a little time up front thinking about the logical interaction between the system and its various actors helps me get a big-picture view of the system boundary and its general flow. I have found it helps others as well, but your mileage may vary.

So when I viewed their protocol specification, it really brought back some good memories. And in fact, that’s how we decided to look at the problem.

(What follows is highly-idealized)

We reviewed their specification and decided we’d try to work through the initialization sequence and then work through one sequence that involved “completing a simple job.” I need to keep this high level to keep the identity of the company a secret.

There was prior work, which we kept in mind, but we really started from scratch. That earlier attempt had included some work along the lines of the Command pattern, so we started there. Of course, once we did our first command, we backed off and went with a more basic design that seemed to fit the complexity a bit better (starting with the Command pattern at the beginning is an example of solution-problemming, to use a Weinberg term – and one of the reasons I’m sometimes skeptical when people start talking in patterns).

We continued from the request coming into the system, following it as it worked its way through. Along the way, we wrote unit tests, driven by our end goal of trying to complete a simple job and guided by the single responsibility principle. As we thought about the system, there were several logical steps:
  • Receive a message from the outside as some array of bytes
  • Determine the “command” represented by the bytes
  • Process the parameters within the command
  • Issue a message to the simulator
  • Create a logical response
  • Format the logical response into the underlying protocol
  • Send the response back
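The steps above can be sketched as a chain of small collaborators, each owning one responsibility. The class names, the `INIT` command, and the response strings here are all invented for illustration – they are not the team’s actual design:

```java
import java.nio.charset.StandardCharsets;

// A command knows how to produce its logical response.
interface Command {
    byte[] execute();
}

// Determines the "command" represented by the raw bytes from the outside.
class CommandParser {
    Command parse(byte[] rawMessage) {
        String text = new String(rawMessage, StandardCharsets.US_ASCII);
        if (text.startsWith("INIT")) {
            // Respond to an initialization request.
            return () -> "ACK INIT".getBytes(StandardCharsets.US_ASCII);
        }
        // Unknown command: negative acknowledgment.
        return () -> "NAK".getBytes(StandardCharsets.US_ASCII);
    }
}

// Receives bytes, finds the command, runs it, and returns the formatted response.
class MessageProcessor {
    private final CommandParser parser = new CommandParser();

    byte[] handle(byte[] rawMessage) {
        Command command = parser.parse(rawMessage);
        return command.execute();
    }
}
```

Each step in the bulleted list gets a home of its own, which is what made it natural to unit-test them one at a time.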

At the time, they were considering using JNI, so we spent just over a day validating that we could communicate bi-directionally, maintaining a single process space.

Along the way we moved from using hand-rolled test doubles to using JMock 2 to create mock objects. I mentioned this to a friend of mine, who lamented that there are several issues with a mock-based approach:
  • It is easy to end up with a bunch of tested objects but no fully-connected system
  • Sharing setup between various mocks is difficult and often not done, so there’s a lot of violation of DRY
  • You have to learn a new syntax

We accepted learning a new syntax because it was deemed less painful than maintaining the existing hand-rolled test doubles (though there are several reasonable solutions for that; ask if you want to know what they are). There is the issue of sharing setup on mocks, but we did not yet have enough work to really notice that as a problem. They were at least aware of it, however, and we briefly discussed how to share common expectation-setting (it’s well supported).
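For contrast, a hand-rolled test double is just a plain implementation of a collaborator’s interface that records what happened. The `Transport` interface here is invented for illustration, not taken from the team’s code:

```java
// Hypothetical collaborator the system under test talks to.
interface Transport {
    void send(byte[] response);
}

// Hand-rolled test double: records what was sent so a test can assert on it.
class RecordingTransport implements Transport {
    byte[] lastSent;
    int sendCount;

    @Override
    public void send(byte[] response) {
        lastSent = response;
        sendCount++;
    }
}
```

The pain my friend alluded to shows up as these doubles multiply: every change to a collaborator’s interface ripples through each hand-rolled implementation, which is the maintenance cost we traded for JMock 2’s new syntax.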

Finally, there’s the issue of not having a fully connected system. We knew this was an issue so we started by writing an integration test using JUnit. We needed to design a system that:
  • Could be tested up to but excluding the JNI stuff
  • Could be configured to stub out JNI or use real JNI
  • Was easily configurable
  • Was automatically configured by C++ (since it was a C++ process that was started to get the whole system in place)

We designed that (a 15-minute whiteboard session), coded it, and ended up with a few integration tests. Along the way, we built a simple factory for creating the system fully connected. That factory was used both in tests and by the JNI-based classes to make sure we had a fully-connected system when it was finally started by C++.
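A factory along those lines might look like the following sketch; `SimulatorGateway`, `StubGateway`, and the other names are my assumptions for illustration, not the team’s actual classes:

```java
import java.util.ArrayList;
import java.util.List;

// The seam at the JNI boundary: the real implementation would call into C++.
interface SimulatorGateway {
    void issue(String message);
}

// Stub used when running tests without the JNI/C++ side present.
class StubGateway implements SimulatorGateway {
    final List<String> issued = new ArrayList<>();

    @Override
    public void issue(String message) {
        issued.add(message);
    }
}

// The system under test; everything behind the JNI seam is injected.
class MessageSystem {
    private final SimulatorGateway gateway;

    MessageSystem(SimulatorGateway gateway) {
        this.gateway = gateway;
    }

    void handle(String request) {
        gateway.issue("SIM:" + request);
    }
}

// Single place that wires the system fully connected. Used both by the
// integration tests (with a stub) and by the JNI-based startup code
// (with the real gateway), so there is exactly one wiring to trust.
class SystemFactory {
    static MessageSystem create(SimulatorGateway gateway) {
        return new MessageSystem(gateway);
    }
}
```

Because the tests and the C++-driven startup both go through `SystemFactory`, a fully-connected system in production is the same wiring the integration tests exercised, minus the real JNI gateway.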

Near the end, we decided we wanted to demonstrate asynchronous computation, which we did using tests. I stumbled a bit but we got it done in a few hours. We demonstrated that the system receiving messages from the outside world basically queued up requests rather than making the sender wait synchronously (we demonstrated this indirectly – that might be a later blog post – let me know if you’re interested).
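One common way to get that queue-rather-than-block behavior – a sketch of the general technique, not the team’s actual code – is a `BlockingQueue` drained by a worker thread:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// The receiver enqueues requests immediately; a dedicated worker thread
// drains the queue, so the outside sender never waits for processing.
class AsyncReceiver {
    private final BlockingQueue<byte[]> requests = new LinkedBlockingQueue<>();

    // Called from the outside world: returns as soon as the request is queued.
    void receive(byte[] request) {
        requests.offer(request);
    }

    // Run on the worker thread; blocks until a request is available.
    void processNext() throws InterruptedException {
        byte[] request = requests.take();
        // ... hand the request to the rest of the pipeline ...
    }

    int pending() {
        return requests.size();
    }
}
```

Demonstrating this in a test is necessarily indirect – you assert that `receive` returns immediately and that the work shows up as pending, rather than watching the worker thread directly – which is the part I stumbled over.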

By the way, that was the first week. These guys were good and I had a great time.

There was still a little work to be done on the C++ side and I only had a week, so I asked them to keep me posted. The following Tuesday they had the first end-to-end interaction, system initialization.

By Wednesday (so 3 business days later), they had a complete demonstration of end-to-end interaction with a single, simple job finishing. Not long after that they demonstrated several simple jobs finishing. The next thing on their list? Completing more complex jobs, system configuration, etc.

However, it all goes back to having a well-defined protocol. After we had one system interaction described end-to-end, doing the next thing was easier:
  • Select a system interaction
  • List all of the steps it needs to accomplish (some requests required a response, some did not)
  • Write unit tests for each “arm” of the interaction
So they had a very natural way to form the backlog:
  • Select a set of end-to-end interactions that add value to the user of the system
They also had an easy way to create a sprint backlog:
  • For each system-level interaction, enumerate all of its steps and then add implementing those steps as individual backlog items

Now some of those individual steps will end up being small (less than an hour) but some will be quite large when they start working with variable parameters and commands that need to operate at a higher priority.

But they are well on their way and I was reminded of just how much I really enjoyed using Fusion.

Building Magic Funnels, Part 2: Pragmatic Pedantry

Posted by tottinger Fri, 18 Apr 2008 03:00:00 GMT

The middle of the funnel we started on needed work. While the idea is simple – the single most important thing in the funnel is the first to emerge from the bottom – managing it for real is a multi-flavored affair.

My first strategy was to get the right people in a room and have them fight it out. I think that the prioritization process should be a lot like local government (maybe a school board?) in that people should argue and complain and push and eventually settle on compromises and deals. Ultimately, I believe that people who have a strong interest in the company can make the right decisions. At least they can be right enough for the next 5 days. When you have one-week iterations, the next chance to change the agenda is never far away.

This first strategy didn’t work out, so we went to a backup plan. A C-level manager said he knew what we should do, so we scheduled an hour or so with him, our priority manager (“funnel guy”), the senior technical developer, and me.

Our guys used sticky 3×5 post-it cards and papered the CIO’s windows and whiteboards. They listed the various categories of work from the various stakeholders and pasted them up in priority order.

I asked my first pedantic question: “What is the single most important thing we can work on? If you had only one story that you knew for sure would be finished this week, which would it be?” That led to a nice discussion.

When they placed the card on the table, signifying that it was definitely in the build, I asked again if there was any one card anywhere else in the room more important. I asked if that was really the single most important one.

When the answer was “yes, absolutely,” I was ready for my second pedantic question: “Now that this card is off the board, what is the single most important card left for us to do, if only two stories were guaranteed to be done?” Now the pedantry was fully exposed, but the idea had carried. The team collected all the most important stories and placed them in order on the table.

Now I was ready for my next round of pedantry.

Building Magic Funnels, Part 1

Posted by tottinger Fri, 18 Apr 2008 02:38:00 GMT

The idea of a magic funnel, as you may remember, is that there is some kind of organizational structure where many ideas and proposals and issues go into the top. Through some magic, the item that emerges from the bottom of the funnel (to go to the development team) is the single most important thing they could work on next.

This is all just backlog management and prioritization, of course. But I think it can be simpler than I’ve seen it in the past, and that real people working without magic can approximate it.

Recently, I’ve been working to establish another magic funnel. One of the first things we did was to find the person who formerly handed out work to the developers and made him a single point of contact. In our case this is not the scrum master, but is another trusted line-manager type. He has been given a number of people to work with on the Customer side of the house and also works with technical people.

We have tried to establish story feeds from all the various stakeholders. Developers, operations people, technical writers, customers, sales people, marketing people, inside security consultants, and others have been feeding their ideas in to our point-of-contact man. This part of the funnel is working fairly well.

The next thing we have tried to do is to match the feed of work to the rate at which work can be done. This has created a fair amount of back-pressure on the stakeholders (which I believe to be healthy).

We have also worked on making the bottom of the funnel narrow, meaning that our guy doesn’t scatter new work to the team, but feeds it through the scrum master who protects the team’s productivity and keeps the work flow and tasks visible on the status board. He makes sure that the team is not expected to “absorb” changes, but that adding two points of work to the iteration results in two points of unstarted work coming off. This also creates a healthy back-pressure.

As a pun on “in-flight”, I named another area of the board “flying standby”. This is for stories that aren’t important enough to swap a story off the board (or for stories that are displaced by more important ones). If the developers finish more work than they expected, there are stories that can be picked up even though they’re not a scheduled part of the iteration. Stakeholders are told that there is no guarantee of these stories being picked up at all in this iteration, but there is some small chance that it could happen if the team discovers that it has overestimated some other stories.

The bottom of the funnel is working pretty well.

What’s missing is the “magic” bit.