Different Test Examples in C++ Using CppUTest

Posted by Brett Schuchert Thu, 26 Mar 2009 23:08:00 GMT

Here are several versions of the same unit tests written in different styles using CppUTest: C++ Unit Test Examples. If you look near the bottom, you’ll see what looks like story tests in C++. Shocking!

These are all based on Bob’s Prime Factors Kata.

User Stories for Cross-Component Teams

Posted by Dean Wampler Sat, 20 Sep 2008 00:53:00 GMT

I’m working on an Agile Transition for a large organization. They are organized into component teams. They implement features by forming temporary feature teams with representatives from each of the relevant components, usually one developer per component.

Doing User Stories for such cross-component features can be challenging.

Now, it would be nice if the developers just pair-programmed with each other, ignoring their assigned component boundaries, but we’re not quite there yet. Also, there are other issues we are addressing, such as the granularity of feature definitions, etc., etc. Becoming truly agile will take time.

Given where we are, it’s just not feasible to estimate a single story point value for each cross-component user story, because the work for each component varies considerably. A particular story might be the equivalent of 1 point for the UI part, 8 points for the middle-tier part, 2 points for the database part, etc.

So, what we’re doing is treating the user story as an “umbrella”, with individual component stories underneath. We’re estimating and tracking the points for each component story. The total points for the user story is the sum of the component story points, plus any extra we decide is needed for the integration and final acceptance testing work.

This model allows us to track the work more closely, as long as we remember that component points mean nothing from the point of view of delivering customer value!

I prefer this approach to documenting tasks, because it keeps the focus on delivering value to the client of each story. For the component stories, the client will be another component.

Automated Acceptance Tests for Component Stories

Just as for user stories, we are defining automated acceptance tests for each component. We’re using JUnit for them, since we don’t need a customer-friendly specification format, like FitNesse or RSpec.

This is also a (sneaky…) way to get the developers from different components to pair together. Say, for example, that we have a component story for the midtier, where the UI is the client. The UI developer and the midtier developer pair to produce the acceptance criteria for the story.

For each component story, the pair of programmers produce the following:

  1. JUnit tests that define the acceptance criteria for the component story.
  2. One or more interfaces that will be used by the client of the component. They will also be implemented by concrete classes in the component.
  3. A test double that passes the JUnit tests and allows the client to move forward while the component feature is being implemented.

In a sense, the “contract” of the component story is the interfaces, which specify the static structure, and the JUnit tests, which specify the dynamic behavior of the feature.
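
To make this concrete, here is a minimal sketch of what one such contract might look like, assuming a hypothetical midtier OrderService with the UI as its client. All of the names here are invented for illustration; the real interfaces and acceptance criteria will of course depend on the components involved.

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    // 2. The interface the client will use: the static structure of the
    //    contract. The midtier will eventually implement it with concrete
    //    classes.
    interface OrderService {
        double totalFor(String customerId);
    }

    // 3. A test double that satisfies the contract well enough for the UI
    //    team to keep moving while the midtier team builds the real thing.
    class OrderServiceStub implements OrderService {
        public double totalFor(String customerId) {
            return "42".equals(customerId) ? 100.0 : 0.0; // canned answers only
        }
    }

    // 1. JUnit tests that define the acceptance criteria: the dynamic
    //    behavior of the contract. The stub passes them now; the real
    //    midtier implementation must pass the very same tests later.
    public class OrderServiceContractTest {
        private final OrderService service = new OrderServiceStub();

        @Test
        public void totalIsReportedForAKnownCustomer() {
            assertEquals(100.0, service.totalFor("42"), 0.001);
        }

        @Test
        public void unknownCustomersHaveAZeroTotal() {
            assertEquals(0.0, service.totalFor("unknown"), 0.001);
        }
    }

When the real implementation arrives, pointing the same tests at it instead of the stub tells both teams immediately whether the contract still holds.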

This model of pair-programming the component interface should solve the common, inefficient communication problems that arise when component interactions need to change. You know the scenario: a client component developer or manager tells a server component developer or manager that a change is needed. A developer (probably a different one…) on the server component team makes up an interface, checks it into version control, and waits for feedback from the client team. Meanwhile, the server component developer starts implementing the changes.

A few days before the big drop to QA for final integration testing, the server component developer realizes that the interface is missing some essential features. At the same time, the client component developer finally gets around to using the new interface and discovers a different set of missing essential features. Hilarity ensues…

We’re just getting started with this approach, but so far it is proving to be an effective way to organize our work and to be more efficient.

Configuration Management Systems, Automated Tests, CI, and Complexity

Posted by Dean Wampler Mon, 08 Sep 2008 21:53:00 GMT

I’m working with a client that has a very complex branching structure in their commercial CM system, which will remain nameless. Why is it so complex? Because everyone is afraid to merge to the integration branches.

This is a common symptom in teams that don’t have good automated test coverage and don’t use continuous integration (CI). Fear is their lot in life. They’ll keep lots of little branches and only merge to integration when they’re ready for the “big bang” integration.

I spoke with a manager at the client site today who expressed frustration that no one really knows which branch they should be committing to and when they should merge to the integration branches.

In contrast, projects with good test coverage and CI do almost all their work on the integration branch. They have little fear, because any problems will get caught and fixed quickly. So, the first point of this post is:

Automated test coverage and CI drastically simplify your use of configuration management.

Here’s something else I’ve noticed: open-source CM systems seem to focus on different priorities than commercial systems do. Disclaimer: I know I’m generalizing a bit here.

The commercial systems tend to be really good at managing complex branching, with fancy GUI tools to help manage the big trees of branches, facilitate merges, etc. That seems to be their biggest selling feature: GUIs to manage the complexity.

In contrast, most of the open-source CM systems have lower-tech GUIs, if any, but the teams using them don’t seem to care that much. Usually, this is because these teams are also practicing TDD and CI, so they just don’t need the wizardry as much.

The open-source CM systems seem to be better at scalability and performance. Some are pioneering distributed CM, e.g., Git and Mercurial. Git, for example, was designed to manage a massive project called Linux. Maybe you’ve heard of it.

Distributed CM is not easy, but it’s a lot easier to do if you don’t need to worry as much about complex branch hierarchies.

Most of the commercial tools I’ve seen don’t scale well, and some require way too much administration. My client is apparently the biggest user of their particular tool, and the developers complain all the time about performance. This tool is not designed to scale horizontally; the only hope is to use faster hardware. In this case, the vendor has focused on managing complexity. To be frank, even their GUI tool is an uninspired and slow Java fat client.

So the second point of this post is:

Avoid CM tools that encourage complexity. Pick the ones that scale.

Quality, Timing, and a Clash of Values

Posted by tottinger Fri, 23 May 2008 03:31:00 GMT

I came across a quote in something I was reading, and I can’t recall if it was a blog, a cartoon, a book, or what. A young woman was talking to a politician and asked if he had to make a lot of compromises. When he admitted that he had to compromise a few times, she shut him down by saying “you can only compromise your values once, and after that you don’t have any.”

I had a talk the other day with a fellow mentor, and brought up that line only to have it returned to me (with top spin!) when we were talking about the right time to turn on an older suite of tests in CI.

I finally had a chance to examine the decision point. Of course, it’s always the right time to turn on more tests. When do you not want to know if your code stinks? On the other hand, I knew:
  1. these forgotten tests were not used by developers, and so were of dubious quality
  2. the team was in the middle of a big stabilization crunch
  3. the team was in its final coding week before release.

While they needed to know their code and test quality soonest (preferably before release), there was emotional foot-dragging on my part. I knew that they didn’t have the headroom to even triage the old tests to see if they should pass.

The problem was not one of compromising my principles so much as choosing which of my principles would hold sway. On one hand, the quality driver knows that we need more tests sooner. On the other hand, I knew how close the team was to being overwhelmed to the point of shutting down. My nurturing, coaching “spider-sense” kicked in when I wanted to turn on the tests, and my XP quality-driving “spider-sense” kicked in when I thought about not turning them on.

I decided to wait only a week, and then get the tests turned on. Was it the right decision? I can’t tell. I am truly ambivalent, both drawn to and repelled by the decision. Sometimes the good is the enemy of the best, and it’s hard to see which is which.

I chose to be the kind of guy who would err on the side of compassion. It’s the more personal and superior of the two values. Having decided, I move on.

But now it’s time to get those tests running, and sooner is better.

Which came First?

Posted by Brett Schuchert Thu, 02 Aug 2007 21:43:00 GMT

I like mnemonics. Many years ago, a colleague of mine, James, gave me a way to remember something. First I’ll give you the letters:
  • CCCCDPIPE
These letters are a way to remember Craig Larman’s GRASP patterns (version 1). It also helps to know that in the second version of the GRASP patterns, Craig replaced the D with a P (D – Don’t Talk to Strangers, a.k.a. the Law of Demeter; P – Protected Variation). By the way, here are all of them:
  • Coupling (low)
  • Cohesion (high)
  • Creator
  • Controller
  • Don’t talk to strangers (mentioned above and replaced with Protected Variation)
  • Polymorphism
  • Indirection
  • Pure Fabrication
  • Expert
How do I remember the letters? Well, walk through this bad pun with me:
  • CCCC (4 c’s, foresees)
  • D (the)
  • PIPE (pipe)

So who foresees the pipe? The Psychic Plumber.

The Psychic Plumber??? I know, it’s awful. However, I heard it once in something like 1999 and I’ve never forgotten it.

That leads me to some other oldies but goodies: SOLID, INVEST, SMART, and a relative newcomer: FIRST. While these are actually acronyms (not just abbreviations but real, dictionary-defined acronyms), they are also mnemonics.

You might be thinking otherwise. Typically what people call acronyms are actually just abbreviations. And in any case, they tend to obfuscate rather than elucidate. However, if you’ll lower your guard for just a few more pages, you might find some of these helpful.

Your software should be SOLID:
  • Single Responsibility
  • Open/Closed Principle
  • Liskov Substitution Principle
  • Interface Segregation
  • Dependency Inversion Principle (not to be confused with Dependency Injection)

I think we should change the spelling to SOLIDD and tack on “Demeter – the Law Of.” But that’s just me. Of course, if we do this, then it is no longer technically an acronym. That’s OK, because my preference is for mnemonics, not acronyms.
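
Since that last D is the one people most often tangle up, here is a quick sketch of the distinction, with invented class names: dependency inversion means the high-level policy depends on an abstraction rather than on a low-level detail, while dependency injection is merely one way of handing the concrete detail in from outside.

    // Dependency Inversion: ReportGenerator (high-level policy) depends on
    // the Printer abstraction, never on a concrete printer class.
    interface Printer {
        void print(String text);
    }

    // A low-level detail that satisfies the abstraction.
    class ConsolePrinter implements Printer {
        public void print(String text) {
            System.out.println(text);
        }
    }

    class ReportGenerator {
        private final Printer printer;

        // Dependency Injection: the concrete Printer is handed in from
        // outside rather than constructed in here with `new`.
        ReportGenerator(Printer printer) {
            this.printer = printer;
        }

        void generate() {
            printer.print("quarterly report");
        }
    }

    class Report {
        public static void main(String[] args) {
            new ReportGenerator(new ConsolePrinter()).generate();
        }
    }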

When you’re working on your user stories, make sure to INVEST in them:
  • Independent
  • Negotiable
  • Valuable
  • Estimable
  • Small
  • Testable
And if you’re working on tasks, make sure to keep them SMART:
  • Specific
  • Measurable
  • Achievable
  • Relevant
  • Time-boxed
But what is FIRST? Well, have you ever heard of Test-Driven Development (TDD)? Some people call it by an older name, Test-First Programming (TFP). I don’t think of these as the same thing, but that’s neither here nor there. What if the F in TFP really stands for FIRST (notice how I sneaked in the capitals)? If so, then here’s one possible interpretation, with a small sketch after the list:
  • Fast – tests should run fast. We should be able to run all of the tests in seconds or minutes. Running the tests should never feel like a burden. If a developer ever hesitates to execute tests because of time, then the tests take too long.
  • Isolated – A test is a sealed environment. Tests should not depend on the results of other tests, nor should they depend on external resources such as databases.
  • Repeatable – when a test fails, it should fail because the production code is broken, not because some external dependency failed (e.g. database unavailable, network problems, etc.)
  • Self-Validating – Manual interpretation of results does not scale. A test should verify that it passed or failed. Going one step further, a test should report nothing but success or failure.
  • Timely – tests should be written concurrently with (and preferably before) the production code.
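
For a concrete picture, here is a small JUnit sketch that exhibits all five properties. It borrows the Prime Factors Kata mentioned earlier as its subject; the primeFactorsOf method is a stand-in implementation written here just so the example is self-contained.

    import static org.junit.Assert.assertEquals;

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    import org.junit.Test;

    public class PrimeFactorsTest {

        // Fast: pure in-memory computation; runs in milliseconds.
        // Isolated: no database, no network, no dependence on other tests.
        // Repeatable: same input, same output, every single run.
        // Self-Validating: the assertion reports pass or fail; nothing to eyeball.
        // Timely: written alongside (ideally just ahead of) primeFactorsOf itself.
        @Test
        public void factorsOfTwelve() {
            assertEquals(Arrays.asList(2, 2, 3), primeFactorsOf(12));
        }

        // Stand-in implementation so the sketch compiles and runs on its own.
        private static List<Integer> primeFactorsOf(int n) {
            List<Integer> factors = new ArrayList<Integer>();
            for (int candidate = 2; n > 1; candidate++) {
                while (n % candidate == 0) {
                    factors.add(candidate);
                    n /= candidate;
                }
            }
            return factors;
        }
    }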

So where does this acronym come from? A while back, a colleague of mine, Tim Ottinger, and I were working on some course notes. I had a list of four out of five of these ideas. We were working on the characteristics of a good unit test. At one point, Tim said to me “Add a T.”

I can be pretty dense fairly often. I didn’t even understand what he was telling me to do. He had to repeat himself a few times. I understood the words, but not the meaning (luckily that doesn’t happen to other people or we’d have problems writing software). Anyway, I finally typed a “T”. And then I asked him “Why?” I didn’t see the word. Apparently you don’t want me on your unscramble team either.

Well, eventually he led me to see the word FIRST, and it just seemed to fit (not sure if that pun was intended).

Of course, you add all of these together and what do you get? The best I can come up with is: SFP-IS. I was hoping I could come up with a Roman numeral or something, because then I could say developers should always wear SPF IS – which is true because we stay out of the sun and burn easily. Unfortunately, that did not work. If you look at your phone, you can convert this to the number 73747.

If there are any numerologists out there, maybe you can make some sense of it.

In any case, consider remembering some of these mnemonics. If you actually do more than remember them and start practicing them, I believe you’ll become a better developer.