Installing DbFit for SQL Server

NOTE: I originally posted this at my old company's website in 2011,
but as that blog has been taken down, I thought I'd resurrect this
post here.

I’ve been using DbFit on several projects recently, and I’m really pleased with the results. If you’re not familiar with DbFit, it’s a set of Fit fixtures that allow Fitnesse tests to execute directly against a database, without you having to build a separate connector. This post shows you how to get started with DbFit with the least amount of pain.

I practice Test Driven Development, and while TDD is a pretty mature discipline in most modern programming languages, it’s still not widely practiced in the database world. We’ve had some success in the past using TSQLUnit, but I prefer DbFit because you can use it to write unit tests as well as acceptance tests. In addition, Fitnesse allows you to add colour in the form of documentation, truly turning your acceptance tests into an executable specification.

One thing that isn’t clear (at least to me) from the documentation is that to install DbFit to test a Microsoft SQL Server database, you can simply install Fitnesse and then install FitSharp. The advantage of doing this, compared to installing DbFit from SourceForge, is that you will be working with the latest and greatest versions of Fitnesse and FitSharp. The current DbFit distribution on SourceForge uses a Fitnesse build from 2008.

Here is a high-level overview of the installation steps (more detailed information, including troubleshooting steps, is available at each of the links below):

  1. If necessary, install Java 6 from here:
  2. Install Fitnesse from here: As it mentions on the downloads page, just download fitnesse.jar into the folder in which you’d like to install it, and type java -jar fitnesse.jar -p 8080 at the command line (where 8080 is the port number you’d like to run Fitnesse on – port 80 is usually taken by other software on your machine). Fitnesse will unpack and install itself.
  3. If necessary, install .NET framework 3.5 or 4.0 from here:
  4. Install FitSharp from here (choose the correct version for your version of the .NET framework): To install FitSharp, create a folder under the folder into which you installed Fitnesse (I name mine FitSharp) and unpack the installation files into it.
  5. Once you’re up and running, follow the examples in the DbFit Reference to see how the whole thing works.
  6. One additional tip: if you’re following the Hello World example in the DbFit Reference, use the following text instead of the text documented in Step 2: Setting Up the Environment. This will correctly point Fitnesse to the location of your FitSharp installation files and use the Microsoft SQL Server flavour of DbFit:
!define COMMAND_PATTERN {%m -r fitnesse.fitserver.FitServer,"FitSharp\fit.dll" %p}
!define TEST_RUNNER {FitSharp\Runner.exe}
!define PATH_SEPARATOR {;}
!path FitSharp\dbfit.sqlserver.dll
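
With those definitions in place, a minimal first test page (modelled on the Hello World example in the DbFit Reference) looks something like the wiki markup below. The server name, login, password and database are placeholders – substitute your own connection details:

```
!|dbfit.SqlServerTest|

!|Connect|localhost|testuser|testpassword|TestDb|

!|Query| select 'test' as x|
|x|
|test|
```

The Query table runs the SELECT statement and compares the result set against the rows that follow it, so a successful run turns the expected cells green.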

I’ll be posting further information on using DbFit as an executable specification that includes acceptance tests, on unit testing and performance testing, and on some of the ways we’ve organized our test suites to make them more maintainable and easier to understand. Happy Database Testing!


Heilmeier’s Catechism – a checklist for software projects

NOTE: I originally posted this at my old company's website in 2010,
but as that blog has been taken down, I thought I'd resurrect this
post here.

Until recently, I am ashamed to say that I had never heard of George Harry Heilmeier. A recent retweet by Roy Osherove on Twitter soon had me digging for more information.

It turns out that not only was Mr. Heilmeier a pioneering contributor to liquid crystal displays, he was also a Vice President (and later CTO) of Texas Instruments during the time they produced the mighty Speak and Spell.

Mr. Heilmeier’s Wikipedia page lists an amazing number of awards, including the National Medal of Science and the IEEE Medal of Honor, but that’s not what sparked my curiosity.

What was interesting to me about Mr. Heilmeier was a series of questions that anyone proposing a research project or product development effort should be able to answer. These questions are known as Heilmeier’s Catechism.

Here is Heilmeier’s original list of questions:

Heilmeier’s Catechism

  • What are you trying to do? Articulate your objectives using absolutely no jargon.
  • How is it done today, and what are the limits of current practice?
  • What’s new in your approach and why do you think it will be successful?
  • Who cares?
  • If you’re successful, what difference will it make?
  • What are the risks and the payoffs?
  • How much will it cost?
  • How long will it take?
  • What are the midterm and final “exams” to check for success?

When I read this list, it struck me that these questions could easily be adapted as a software project checklist.

With some small tweaks in language, this list becomes a standard project checklist that any consulting organization should work on with their customers to answer when deciding whether or not to go ahead with a project:

Project Checklist

  • What is the underlying business problem we are trying to solve with this project?
  • What happens today? Is this problem worked around with manual processes?
  • What’s new in this approach and why do we think it will be successful?
  • Who are the project stakeholders?
  • If we’re successful, what difference will it make?
  • What are the risks and the payoffs? How can the risks be mitigated?
  • How much will it cost?
  • How long will it take?
  • How will we measure progress on the project? How do we know we’ve been successful?

What about your organization’s project approval process? Does your company use Heilmeier’s Catechism to decide whether to give a project a green light? What other questions should be asked before starting a project?

Is your consulting code an asset or a liability?

NOTE: I originally posted this at my old company's website in 2010,
but as that blog has been taken down, I thought I'd resurrect this
post here.


When you hire consultants to build software for you, how do you know if the code is worth the money you pay them?

Consulting code (indeed, any custom code) should be an asset to your business, but unfortunately, many times it’s quite the opposite.

Consulting code can become a liability. A lack of tests and poor-quality, buggy code can leave you, your code and your pocketbook in worse shape than before the consultants arrived (and usually with little to no recourse).

Untested code has no business value

Years ago, at the Agile 2006 conference, one of the sessions was "Delivering Flawless Tested Software Every Iteration", presented by Alex Pukinskis. The catchy name attracted a big audience, and it was standing room only.

During this presentation, Alex made the following statement (I'm not sure whether Alex was quoting someone, but googling that precise phrase returned zero results, so I'm attributing it to him):

Untested code has no business value

This struck a chord with me, because at the time I was working for a company that was trying to transition to agile, but many of our practices still hadn't changed. We were (okay, I was) still breaking the build and still expecting the QA team to find our bugs for us. We weren't consistently doing TDD, we had no automated build or acceptance testing, and we were not practicing continuous integration. I could go on, but I'm sure you get the idea.

The "aha" moment for me was the concept that the quality of the code and the absence of defects were my responsibility as a professional developer. It was my responsibility to ensure that no defects were passed to QA. Of course, I know that software cannot be perfect and there will be defects, but my attitude towards defects changed after that talk. Defects should be unexpected. Defects should be unusual. Defects should be prevented, not found.

No Bugs

The attitude that defects should be prevented, not found, can be summarized in the "No Bugs" philosophy.

James Shore recently published the full text of the No Bugs section from his excellent book The Art of Agile Development. He summarizes the text in 99 words:

Rather than fixing bugs, agile methods strive to prevent them.

Test-driven development structures work into easily-verifiable steps. Pair programming provides instant peer review, enhances brainpower, and maintains self-discipline. Energized work reduces silly mistakes. Coding standards and a "done done" checklist catch common errors.

On-site customers clarify requirements and discover misunderstandings. Customer tests communicate complicated domain rules. Iteration demos allow stakeholders to correct the team's course.

Simple design, refactoring, slack, collective code ownership, and fixing bugs early eliminates bug breeding grounds. Exploratory testing discovers teams' blind spots, and root-cause analysis allows teams to eliminate them.

Asking the right questions

Now, you are probably expecting a sales pitch from me at this point, telling you to hire us because we're great. Well, that's not the point of this post (although you should, and we are). The point of this post is that the next time you are talking to a consulting firm or hiring a developer, ask them about their definition of code quality and how they ensure it. By simply asking a couple of questions, you should be able to determine whether they are all "smoke and mirrors" or whether they will add value to your business. For example, you might ask:

  1. What is your definition of quality code?
  2. How do you ensure your code is bug free?

The answer to the first question should mention practices like the Law of Demeter, coding standards, refactoring and adherence to the SOLID principles.

Although it may be unsettling, it is okay if the answer to the second question is “I‘m never 100% sure my code is bug free” (particularly if they mention Gödel's proofs). However, they should quickly follow up with, “I make bugs less likely by practicing test driven development, peer reviews (or pair programming), automated acceptance testing, and adherence to coding standards and good design principles.”

If they can answer those two questions to your satisfaction (and follow through by demonstrating these practices when building the code) then they might know what they are talking about, and actually give you value for your money.

Show them the numbers – it’s results that matter

NOTE: I originally posted this at my old company's website in 2011,
but as that blog has been taken down, I thought I'd resurrect this
post here.

On a recent project one of our customers had some questions about the project’s progress. The project was a mid-size systems integration between Oracle Financials and Onyx CRM for a Fortune 500 company.

As with many large companies, there were several business units and people with an interest in the project; and, also as with many large companies, people had several other projects to deal with and were not attending the daily standups or weekly progress meetings.

I began to hear concerns about the project’s progress from some of the management team, including a worry that we were spending too much time writing tests instead of code.

This came as a big surprise to me: from the perspective of the team working on the project on a daily basis (both customer and ext.IT), we were sailing along with no problems. We were providing the usual artifacts, such as status reports showing how we were tracking against estimated hours and when our code-complete date would be, and all of this was tracking well against our original estimates, so to hear that people were unhappy was a shock.

I called a meeting with all of the stakeholders, and as soon as I sent the invitations I realized two things:

  1. This meeting would be the first time in the whole project that all of the stakeholders had been in a room together.
  2. While we had discussed our approach during the initial engagement, we hadn’t held a project kickoff with all of the stakeholders to discuss our approach to building software.

Both of these items were my fault, but here I was, several months into the project with an unhappy customer, so I decided to address both in this meeting.

I put together a short presentation discussing our approach to building software, including brief descriptions of how we perform iterations, test driven development, automated acceptance testing and retrospectives, along with the other benefits of the way we build software and why we considered our approach to be best practice.

Almost as an afterthought, I ended the presentation with a couple of slides showing our project status. This was an afterthought because everyone at the meeting was a recipient of the weekly status report containing the same information. One slide compared our original project estimates (before we had detailed requirements) to our detailed estimates (once requirements were fleshed out) and to our actual hours so far on the completed features. Another slide showed our defect count and compared it to another of the customer’s internal projects that they had built without the benefit of automated tests.

Once the meeting started, it was clear that I didn’t need any of the slides about our approach. We spent a few minutes discussing the original estimates against the actual time taken for completed features: everyone was happy with the job we had done in estimating against the original high-level requirements. We also discussed how we were tracking on the hours left: again, everyone was satisfied with the results. Finally, we spent five minutes comparing the low defect count on our project (with a high level of test automation), compared to some of the customer’s other projects (without unit tests), and everyone was satisfied with this too.

There wasn’t a single question about our approach; ultimately, no one in the higher echelons of the customer’s business cared about the minutiae of our approach, nor were they interested in why we thought it was best practice.

They simply wanted to see the results of our approach.

In retrospect, this might seem obvious, but the best way to convince anyone that your approach to doing something is practical and pragmatic, is to do it and measure it against other approaches. People don’t care about dogma, or why you think something should be the best way. Only by showing people the numbers and measuring the effectiveness of your approach can you truly show that your practices are effective.

To conclude: After this meeting, the project was delivered successfully without any more concerns from the customer. We were one of several projects integrating with Oracle and we were the only project to deliver on time and under budget, with a very happy customer!

The Waterfall approach to reading

NOTE: I originally posted this at my old company's website in 2010,
but as that blog has been taken down, I thought I'd resurrect
this post here.

I was chatting to my wife the other evening and started to discuss reading lists. I wondered aloud how many years I had to live (30? 40? 50?), and was thinking about how many more books I would be able to read in that time.  A book a week for 40 years would mean I could read about 2,000 more books in my lifetime.

A quick search netted this wonderful article from the Guardian Online: 1000 novels everyone must read, so that was a great starting point.

I thought about what building a reading list would look like if I approached the task like a software project.

The Waterfall Approach

  1. Use the Guardian article, Amazon reviews and recommendations to create the complete list of 2,000 books I am going to read.
  2. Prioritize the list.
  3. Buy all of the books in the list.
  4. Work through the list, reading the books in order.

Now deciding ahead of time what you are going to read for the next forty years is clearly ludicrous, as so much will change in that time.  My interests and priorities will change. The world will change. The available books will change. Even the cost and technologies for delivering books will change. All of this means that making decisions now about events far in the future is a pointless and expensive exercise, as most of the books in the list would go unread.

The Agile Approach

  1. Decide how far ahead you want to plan – far enough to avoid running out of books to read, but not so far as to make change costly.
  2. Make a much shorter list of books to read.
  3. Prioritize the list.
  4. Buy the first few books in the list.
  5. Keep updating this shortlist as your interests change and new books are published.

This approach has less overhead to maintain (a shortlist instead of 2,000 books), is cheaper (less cost and inventory up front) and offers more flexibility (the list can change more easily over time).

Don’t plan in detail too far ahead, but be prepared for change

On many software projects, we are asked to plan iterations and projects months and even years into the future. This is as foolish as planning your reading list far in advance.

Just as most of the 2,000 books in our waterfall reading list would probably go unread, by the time we get to the end of a long software project we know so much more about what we need to do, and so much has changed, that a lot of the later detailed planning has to be completely redone because it is worthless and out of date.

I’m not saying it’s worthless to have high-level plans – in fact, this is really important. We need to know if certain things have to be lined up ahead of time (for example, communicating release plans and outages, getting users trained, making sure other teams are ready for the downstream effects of our changes), and we need to know when the last responsible moment to make those decisions will be.

We also often need to provide our customers with a ballpark idea of the costs and schedule of a project (is it a week? a month? a year?) before the project begins, but we shouldn’t treat this as being cast in stone. The plan has to be flexible enough to change with the landscape. The cost and schedule estimates need to be improved and updated as time goes on, as uncertainties about the project are reduced (I’ll write more about this in another blog post).

While a high level plan is a good thing, detailed design, task planning, assignment and estimating for items far in the future of a project is foolish. While this may give the illusion of control and predictability, it is just that: an illusion. In fact, it’s harmful to have that level of detail, because the more design artifacts and task details that need to change in a project, the higher the overhead of maintaining them and the more rigid the project will become.

Just like keeping your reading list short and easy to change, you should keep your iterations and detailed project plan short. This will give you a flexible plan that can adapt as the project progresses.

For more ideas about shrinking your iterations, see my recent blog post on the topic.