Open Spaces at conferences, what’s your take?

Posted by Brett Schuchert Wed, 07 Apr 2010 15:39:00 GMT

I’m the host for an open space this year at Agile Testing Days 2010 in Berlin (October 4th – 7th, with the open space on the 7th). I’ve attended a few open spaces, and I have some ideas about what the role of host might entail.

But I know I’m better at collecting requirements than sourcing them, so what have you experienced that worked? What has not worked? Any advice you want to give me?

Please help me better understand what I can do to facilitate a successful open space.

What questions should I be asking?

Do you think having a few things scheduled up front in one room, with several unscheduled rooms left to be determined October 4th – 6th (a mix, if you will), is an OK thing to do, or should what happens be left completely open?

Or, leave everything completely open until the 7th, then start with the “traditional” introductions and let the agenda form then and there?

I’m aware of a few models of open spaces. What I really want to know is: what works for you?

User Stories for Cross-Component Teams

Posted by Dean Wampler Sat, 20 Sep 2008 00:53:00 GMT

I’m working on an Agile Transition for a large organization. They are organized into component teams. They implement features by forming temporary feature teams with representatives from each of the relevant components, usually one developer per component.

Doing User Stories for such cross-component features can be challenging.

Now, it would be nice if the developers just pair-programmed with each other, ignoring their assigned component boundaries, but we’re not quite there yet. Also, there are other issues we are addressing, such as the granularity of feature definitions, etc., etc. Becoming truly agile will take time.

Given where we are, it’s just not feasible to estimate a single story point value for each cross-component user story, because the work for each component varies considerably. A particular story might be the equivalent of 1 point for the UI part, 8 points for the middle-tier part, 2 points for the database part, etc.

So, what we’re doing is treating the user story as an “umbrella”, with individual component stories underneath. We’re estimating and tracking the points for each component story. The total points for the user story is the sum of the component story points, plus any extra we decide is needed for the integration and final acceptance testing work.
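A minimal sketch of that roll-up arithmetic (in Java, with hypothetical names; not code from our project) might look like this:

    import java.util.ArrayList;
    import java.util.List;

    // A component-level story with its own estimate.
    class ComponentStory {
        final String component;
        final int points;

        ComponentStory(String component, int points) {
            this.component = component;
            this.points = points;
        }
    }

    // An "umbrella" user story whose total is the sum of its component
    // stories plus an allowance for integration and final acceptance testing.
    class UmbrellaStory {
        private final List<ComponentStory> components = new ArrayList<ComponentStory>();
        private final int integrationPoints;

        UmbrellaStory(int integrationPoints) {
            this.integrationPoints = integrationPoints;
        }

        void add(String component, int points) {
            components.add(new ComponentStory(component, points));
        }

        int totalPoints() {
            int sum = 0;
            for (ComponentStory story : components) {
                sum += story.points;
            }
            return sum + integrationPoints;
        }
    }

With the UI/middle-tier/database split above (1 + 8 + 2) and, say, 3 points for the integration work, totalPoints() would report 14.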

This model allows us to track the work more closely, as long as we remember that component points mean nothing from the point of view of delivering customer value!

I prefer this approach to documenting tasks, because it keeps the focus on delivering value to the client of each story. For the component stories, the client will be another component.

Automated Acceptance Tests for Component Stories

Just as for user stories, we are defining automated acceptance tests for each component. We’re using JUnit for them, since we don’t need a customer-friendly specification format, like FitNesse or RSpec.

This is also a (sneaky…) way to get the developers from different components to pair together. Say, for example, that we have a component story for the middle tier where the UI is the client. The UI developer and the middle-tier developer pair to produce the acceptance criteria for the story.

For each component story, the pair of programmers produce the following:

  1. JUnit tests that define the acceptance criteria for the component story.
  2. One or more interfaces that will be used by the client of the component. They will also be implemented by concrete classes in the component.
  3. A test double that passes the JUnit tests and allows the client to move forward while the component feature is being implemented.

In a sense, the “contract” of the component story is the interfaces, which specify the static structure, and the JUnit tests, which specify the dynamic behavior of the feature.
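As a sketch of what those three deliverables might look like for a middle-tier story whose client is the UI (all names hypothetical; JUnit 4 assumed):

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // 2. The interface the client (here, the UI) programs against; the
    //    component team will implement it with concrete classes later.
    interface AccountService {
        void deposit(String accountId, long amountInCents);
        long balanceInCents(String accountId);
    }

    // 3. A test double that passes the acceptance tests, letting the UI
    //    team move forward while the real middle tier is being implemented.
    class InMemoryAccountService implements AccountService {
        private final java.util.Map<String, Long> balances =
                new java.util.HashMap<String, Long>();

        public void deposit(String accountId, long amountInCents) {
            long current = balances.containsKey(accountId) ? balances.get(accountId) : 0L;
            balances.put(accountId, current + amountInCents);
        }

        public long balanceInCents(String accountId) {
            return balances.containsKey(accountId) ? balances.get(accountId) : 0L;
        }
    }

    // 1. JUnit tests that define the acceptance criteria. The same tests
    //    run unchanged against the real implementation when it is ready.
    public class AccountServiceAcceptanceTest {
        private final AccountService service = new InMemoryAccountService();

        @Test
        public void depositIncreasesTheBalance() {
            service.deposit("acct-1", 500);
            assertEquals(500L, service.balanceInCents("acct-1"));
        }
    }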

This model of pair-programming the component interface should solve the common, inefficient communication problems that arise when component interactions need to change. You know the scenario: a client-component developer or manager tells a server-component developer or manager that a change is needed. A developer (probably a different one…) on the server-component team makes up an interface, checks it into version control, and waits for feedback from the client team. Meanwhile, the server-component developer starts implementing the changes.

A few days before the big drop to QA for final integration testing, the server component developer realizes that the interface is missing some essential features. At the same time, the client component developer finally gets around to using the new interface and discovers a different set of missing essential features. Hilarity ensues…

We’re just getting started with this approach, but so far it is proving to be an effective way to organize our work and to be more efficient.

Contracts and Integration Tests for Component Interfaces

Posted by Dean Wampler Mon, 30 Jun 2008 02:54:00 GMT

I am mentoring a team that is transitioning to XP, the first team in a planned, corporate-wide transition. Recently we ran into miscommunication problems about an interface we are providing to another team.

The problems didn’t surface until a “big-bang” integration right before a major release, when it was too late to fix them. As a result, the feature was backed out of the release.

There are several lessons to take away from this experience and a few techniques for preventing these problems in the first place.

End-to-end automated integration tests are a well-established way of catching these problems early on. The team I’m mentoring has set up its own continuous-integration (CI) server and the team is getting pretty good at writing acceptance tests using FitNesse. However, these tests only cover the components provided by the team, not the true end-to-end user stories. So, they are imperfect as both acceptance tests and integration tests. Our longer-term goal is to automate true end-to-end acceptance and integration tests, across all components and services.

In this particular case, the other team is following a waterfall-style of development, with big design up front. Therefore, my team needed to give them an interface to design against, before we were ready to actually implement the service.

There are a couple of problems with this approach. First, the two teams should really “pair” to work out the interface and behavior across their components. As I said, we’re just starting to go Agile, but my goal is to have virtual feature teams, where members of the required component teams come together as needed to implement features. This would help prevent the miscommunication of one team defining an interface and sharing it with another team through documentation, etc. Getting people to communicate face-to-face and to write code together would minimize miscommunication.

Second, defining a service interface without the implementation is risky, because it’s very likely you will miss important details. The best way to work out the details of the interface is to test drive it in some way.

This suggests another technique I want to introduce to the team. When defining an interface for external consumption, don’t just deliver the “static” interface (source files, documentation, etc.), also deliver working Mock Objects that the other team can test against. You should develop these mocks as you test drive the interface, even if you aren’t yet working on the full implementation (for schedule or other reasons).

The mocks encapsulate and enforce the behavioral contract of the interface. Design by Contract is a very effective way of thinking about interface design and of implementing automated enforcement of it. Test-driven development mostly serves the same practical function, but thinking in “contractual” terms brings a clarity that is often missing from the tests I see.
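A sketch of the idea (hypothetical names; hand-rolled rather than generated with a mocking framework): the mock returns canned answers but explicitly checks the contract’s preconditions and postconditions.

    // The interface delivered to the consuming team.
    interface RateLookup {
        // Contract: pair is a code like "USD/EUR"; the result is always positive.
        double currentRate(String pair);
    }

    // The mock shipped alongside the interface. It returns canned data but
    // enforces the behavioral contract, so the client team hears about
    // misuse immediately instead of at big-bang integration time.
    class RateLookupMock implements RateLookup {
        public double currentRate(String pair) {
            // Precondition: reject malformed currency pairs up front.
            if (pair == null || !pair.matches("[A-Z]{3}/[A-Z]{3}")) {
                throw new IllegalArgumentException(
                        "Contract violation: expected a pair like \"USD/EUR\", got: " + pair);
            }
            double rate = 1.25; // canned value; the real service will look this up

            // Postcondition: the contract promises a positive rate.
            if (rate <= 0) {
                throw new IllegalStateException("Contract violation: rate must be positive");
            }
            return rate;
        }
    }

The client team can code and test against this mock today; later, the real implementation slides in behind the same interface, and the same contract checks can migrate into it or its tests.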

Many developers already use mocks for components that don’t exist yet and find that the mocks help them design the interfaces to those components, even while the mocks are being used to test clients of the components.

Of course, there is no guarantee that the mocks faithfully represent the actual behavior, but they will minimize surprises. Whether you have mocks or not, there is no substitute for running automated integration tests on real components as soon as possible.

Building Magic Funnels, Part 1

Posted by tottinger Fri, 18 Apr 2008 02:38:00 GMT

The idea of a magic funnel, as you may remember, is that there is some kind of organizational structure where many ideas and proposals and issues go into the top. Through some magic, the item that emerges from the bottom of the funnel (to go to the development team) is the single most important thing they could work on next.

This is all just backlog management and prioritization, of course. But I think it can be simpler than I’ve seen it in the past, and that real people working without magic can approximate it.

Recently, I’ve been working to establish another magic funnel. One of the first things we did was to find the person who formerly handed out work to the developers and made him a single point of contact. In our case this is not the scrum master, but is another trusted line-manager type. He has been given a number of people to work with on the Customer side of the house and also works with technical people.

We have tried to establish story feeds from all the various stakeholders. Developers, operations people, technical writers, customers, sales people, marketing people, inside security consultants, and others have been feeding their ideas in to our point-of-contact man. This part of the funnel is working fairly well.

The next thing we have tried to do is to match the feed of work to the rate at which work can be done. This has created a fair amount of back-pressure on the stakeholders (which I believe to be healthy).

We have also worked on making the bottom of the funnel narrow, meaning that our guy doesn’t scatter new work to the team, but feeds it through the scrum master, who protects the team’s productivity and keeps the flow of work and the tasks visible on the status board. He makes sure that the team is not expected to “absorb” changes, but that adding two points of work to the iteration results in two points of unstarted work coming off. This also creates a healthy back-pressure.

As a pun on “in-flight”, I named another area of the board “flying standby”. This is for stories that aren’t important enough to swap a story off the board (or for stories that are displaced by more important ones). If the developers finish more work than they expected, there are stories that can be picked up even though they’re not a scheduled part of the iteration. Stakeholders are told that there is no guarantee of these stories being picked up at all in this iteration, but there is some small chance that it could happen if the team discovers that it has overestimated some other stories.

The bottom of the funnel is working pretty well.

What’s missing is the “magic” bit.

Turn Back The Dial

Posted by tottinger Thu, 06 Mar 2008 01:16:00 GMT

The coolest thing just happened! I broke the glass cover off of my watch. At first I thought it was awful, but then I realized that I could turn the hands.

Imagine my joy as I realized that I could make it 11:30 again, and go enjoy another lunch. Meeting at 3:30? No problem, just turn the hour hand up to 6:00 and go home! I can sleep as long as I want as long as I turn it back to 8:00 when I get to the office. All my work estimates are now “five minutes”, and I complete them every time.

My coworkers have no idea of the awesome power I’ve gained with this one happy accident. They ask “what time is it?” and I say “what time would you like it to be?”

Of course the above is a total fabrication. Pretending it’s 6:00pm when it’s 8:00am isn’t going to do anybody any good at all, and is likely to make a mess of things for me.

But people still try to mandate velocity.

Agile Snow Shoveling Plan

Posted by James Grenning Wed, 19 Dec 2007 20:51:00 GMT

My wife and I have evening plans. The driveway has a nice 10-inch layer of snow. To not keep our friends waiting, we must leave the house by 6:30. We have a deadline. Working backwards, I need to be in the shower at 6:00. My requirements are to plow off the whole driveway and leave the house, showered and dressed, by 6:30.

It’s 5:00. I have one hour. With the kids gone and the Cub Cadet (with plow) still in need of repair, I wonder: can I meet my requirements and finish the job by 5:55, so I can dust off, open a beer, and get in the shower by 6? The schedule looks tight. I could do more analysis, but starting the job will tell me a lot and help me get it done, too.

I get my back-friendly shovel and get to work, shoveling behind the car we plan on taking. After fifteen minutes I have a realization: I am not going to make it. The plan is not doable. A quick estimate shows I have cleared about one eighth of the driveway and used one quarter of my time. Sound familiar? My back was not going to let me move snow any more quickly. Wishful thinking would mean not getting to the shower on time and possibly being blocked in the driveway. Thirty minutes from shower to out the door is a metric established long ago. I needed to adjust the plan.

I got a committee together to discuss our options. Oh wait a second, that was a different plan.

I cleared about one eighth of the drive in fifteen minutes. Using my velocity, the whole drive looks like a two-hour job (eight eighths at fifteen minutes each is 120 minutes). So it’s likely I will only have time to move half of the snow that’s preventing access to and from our house. I’d better focus on the “critical path”. The other car would be buried for another day, and the front walk would have to wait.

A good snow drift could have destroyed my plan, but surprisingly there were none on the critical path. Shoveling this “software” is predictable, and my velocity was stable. I did not set any stretch goals, because I wanted to be able to shovel another day. Keeping to my sustainable pace, I delivered the critical path by 5:55. The Heine tasted good, and we made the next milestone: dinner with our friends.

Short Reach

Posted by tottinger Mon, 23 Apr 2007 20:33:00 GMT

I’m always trying to find newer, better, shorter, more powerful ways to explain what Agile is about. I suppose I’m some kind of obsessive about expressive power and economy.

Finally I decided that Agile, as I understand it today, is about the short reach.

It seems to me that all of the agile practices are about shortening our reach, the distance in time-and-space that one leaves an assumption, decision, or line of code untested and unconfirmed. All the practices seem to follow this one rule.

  • The customer/analyst is kept in the same room, within the same short reach.
  • We feed the results of each iteration back to the customer/analyst so that his every decision has a shorter reach.
  • We do iterations to ensure our planning has short reach.
  • We keep our teammates very close, in the same room, so that it’s a shorter reach to them.
  • The test is written first, so that implementation has shorter feedback on correctness.
  • We compile/test frequently because our coding time should have a short reach.
  • We pair so that our code is instantly vetted through a peer. We don’t pile it up and review it after the tests pass.
  • Our planning is based on “yesterday’s weather”, data collected a very short time ago.
  • We don’t plan the team structure and the assignments, we self-organize so that tasks are waiting for the shortest time possible.
  • We talk face-to-face, not across chat and email and official company documents.
  • Tell-don’t-ask and the Law of Demeter guide us in keeping the reach of our objects very short (see the sketch after this list).
  • We use unit tests to exercise a class directly, and we isolate with mocks to reduce the reach of our tests through the system.
  • Shared code ownership means that the guy sitting behind your keyboard has all the permission he needs to do excellent work, even if it impacts existing design.
  • Test-first development means that the guy who makes a change knows very quickly whether his change is safe or not. He doesn’t have to wait until the week before integration when the “real” tests are run.
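To make the object-level bullet concrete, here is a small, hypothetical contrast between a Demeter-violating chain of getters and a single tell-don’t-ask message:

    // A long reach (Law of Demeter violation): the caller navigates through
    // the customer's internals, coupling itself to structure it doesn't own.
    //   double cash = customer.getWallet().getCash().getAmount();
    //   if (cash >= amountDue) { ... }

    // Tell-don't-ask: the caller sends one short message and lets the
    // object coordinate its own collaborators.
    class Wallet {
        private double cash;

        Wallet(double cash) {
            this.cash = cash;
        }

        boolean withdraw(double amount) {
            if (cash < amount) {
                return false;
            }
            cash -= amount;
            return true;
        }
    }

    class Customer {
        private final Wallet wallet;

        Customer(Wallet wallet) {
            this.wallet = wallet;
        }

        // The reach is a single message; the wallet's structure stays hidden.
        boolean pay(double amountDue) {
            return wallet.withdraw(amountDue);
        }
    }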

Where does Agile run into logistical or operational difficulties? Wherever a long reach is required or imposed. Where an organization chooses to continue with waterfall-style management, where the team is distributed among managers with appointed “points of contact” and “official channels”, and where developers are not placed in a common work area, agility is very difficult.

I’m not saying that agile techniques can’t work for large companies, but that is an area where a lot of experts are trying (and maybe succeeding) to extend the agile techniques, and where the average “agilist” finds challenges. When it works, it will almost certainly be because someone has found a way to shorten the teams’ reach so that everything they need to know is never more than a few seconds or minutes away.

At least that’s my half-baked observation of the day. Let me know if I’m wrong here. Or if I’m more right than I think I am.