Category Archives: Software Development

FIT for Testing

Over at Martin Fowler’s blog, I found a link to Framework for Integrated Test (FIT) from Ward Cunningham. I’m just starting to read it now, but I wanted to pass it along because it looks pretty good. Fowler writes:

But I wonder if a language designed for programming is really the right language for writing tests. The point about tests is that they operate by example. They don’t try to cover how to handle any value, instead they describe specific scenarios and responses. I wonder if this implies a different kind of programming language is required. Perhaps this is the truly startling innovation in FIT.

More on this later.

Fluid Communication

Sharing information in your organization is much easier if you use some kind of knowledge base software. For large enterprises, there are plenty of choices, most aimed at the help desk or call center, but they might be overkill and expensive if all you are trying to do is make a central repository of the bits of information in the collective mind of your company. Here are three low-cost ideas for implementing a simple knowledge repository.

  • News Group Software — There are plenty of newsgroup applications that are easy to install and use. I’ve had good experiences with Snitz, which is free and full-featured.
    • Advantages: Familiar interface, every edit is marked with author, support for alerting via e-mail is common
    • Disadvantages: Knowledge is in discussion/serial format, hard to edit old entries, linking to other entries can be hard, requires ability to install software on the server, may not support attachments

  • Wiki Software — A Wiki is a website where every page is editable by the reader. The best-known public example is Wikipedia, but the concept started at the Portland Pattern Repository. It’s a powerful idea, but depending on the implementation, it can be hard for some people to use. Here’s a list of wiki implementations.
    • Advantages: Everything is editable, linking is easy, free implementations are available, pro versions track users and edits
    • Disadvantages: Can be hard to use, requires ability to install software on the server, may not support attachments

  • Content Management Software — CMS tools can be as expensive as knowledge base tools, but for ease of use and quality of the resulting site, they cannot be beat. I use CityDesk for this site and others (note: as of 2007, I use RapidWeaver). It averages about $100 per user for contributors and $300 for the site designer, but a free version is available for small sites.
    • Advantages: Complete control of resulting site, linking is easy, everything is editable, very easy to use, attachments usually supported
    • Disadvantages: Must set up templates, edits not usually logged

For some knowledge bases, a combination of these ideas can work very well—a news group for requests and a Wiki or CMS for official information, or a Wiki for internal use and a CMS for customer facing pages that need to look pretty.

Book Review: VB for Testers

I was somewhat skeptical about Visual Basic for Testers [amazon] because I thought it was going to be focused on automated GUI testing. I have no interest in reinventing WinRunner or TestComplete as a giant list of SendKeys calls. Luckily, neither did Mary Romero Sweeney. She concludes the first chapter with the advice that “Visual Basic should not be considered a substitute for the major test tools but simply a powerful adjunct to them”. Earlier in the chapter, in a section titled Automated Software Testing in the Real World (page 8), she justifies the use of VB and other general-purpose languages for testing with this:

Using Visual Basic and other programming languages on a test project are some of the real-world techniques we end up using when our high-priced automated testing tools just can’t get to that important information. This is not meant as a criticism of the automated test tools on the market today […] However, by default, these tools can’t possibly keep up with every need on every test project. Most testing organizations find that at some point they need to resort to using programming-experienced personnel to write code to supplement their testing.

I have to confess that had she not been clear about this goal, I might have abandoned the book here. Confident that this book was going to offer something to add to my arsenal of testing techniques, I read on.

The next few chapters are an introduction to VB focusing on the features a tester would be interested in (getting data from a database, automating a COM object, manipulating the registry). Experienced VB programmers will likely skip over these chapters. If you choose to skim, I recommend hunting down the asides marked “Tester’s Tip” and those set off with dotted lines and a bold centered title. Some of the latter are good software development process tips. Ms. Sweeney rightly realized that she was addressing beginning programmers and sought to instill good practice in them from the start. She offers tips on accessibility (p. 52), tab order (p. 54), setting up a directory structure (p. 62), and naming conventions (p. 77). These are all important concepts, and it’s never too early to learn them.

It’s obvious that Ms. Sweeney actually uses VB for testing, because the examples are suited to VB’s strengths, not just a hodgepodge of VB features. She spends most of her time on database, COM, registry, file I/O, and other Windows API features, revisiting them in later chapters. These features fall in the gap between unit testing (which is best written in the same language as the application) and automated GUI testing (written using off-the-shelf tools). They are hard to test with a recorder and often best tested in the language you expect your customers to use, which in many cases is VB. If your application exposes a COM interface, for instance, it would be foolish not to use VB.

Chapter 10, Testing the Web with Visual Basic, begins with an explanation that there are tools (some free) for testing websites, but also that there is more you can do with VB. One useful example in this chapter is a testing web browser that exposes the internals of the site. I could see this being useful, for example, for verifying that specific headers are present without constantly viewing source. And since you use the IE control, you can be assured that the page will be rendered exactly as it would in IE. Taking the idea further, the browser could be a flight recorder for functional testing—logging exactly what you’ve done on a site, so that if you see a bug, it would be easy to reproduce.
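The header-verification idea is simple enough to sketch. Here is the shape of it in Java rather than VB, purely for illustration; the parsing is deliberately naive and the header names are just examples:

```java
import java.util.HashMap;
import java.util.Map;

public class HeaderCheck {
    // Parse the header lines of a raw HTTP response into a name/value map.
    static Map<String, String> parseHeaders(String rawResponse) {
        Map<String, String> headers = new HashMap<>();
        for (String line : rawResponse.split("\r\n")) {
            if (line.isEmpty()) break;          // blank line ends the headers
            int colon = line.indexOf(':');
            if (colon > 0) {
                headers.put(line.substring(0, colon).trim().toLowerCase(),
                            line.substring(colon + 1).trim());
            }
        }
        return headers;
    }

    // Report only when a required header is missing; success is silent.
    static boolean requireHeader(Map<String, String> headers, String name) {
        if (!headers.containsKey(name.toLowerCase())) {
            System.err.println("MISSING HEADER: " + name);
            return false;
        }
        return true;
    }

    public static void main(String[] args) {
        String response = "HTTP/1.1 200 OK\r\n"
                        + "Content-Type: text/html\r\n"
                        + "Cache-Control: no-cache\r\n"
                        + "\r\n"
                        + "<html>...</html>";
        Map<String, String> h = parseHeaders(response);
        requireHeader(h, "Content-Type");
        requireHeader(h, "Cache-Control");
    }
}
```

A real version built on the IE control would read the headers from the response object instead of parsing raw text, but the check-and-report-only-on-failure pattern is the same.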

The one critique I have of the book is that while the examples are great for learning the features of VB, they are not really testing scripts. Real testing scripts don’t rely on visual confirmation—they run best without a GUI or user intervention, logging information only when something is wrong. The examples are visual because of the visual nature of VB development, and because when you’re learning a new language, it’s easier to understand if you can see what’s going on. I would have liked to see the idea of self-verification explored more. That being said, Ms. Sweeney says in the introduction that this is not a software testing automation book or a VB manual—it is enough of both to get started on using VB for automation, and readers are expected to be somewhat familiar with automation practice. She recommends Software Test Automation by Fewster and Graham [amazon] and Automated Software Testing: Introduction, Management, and Performance by Dustin, et al. [amazon] for learning automation practice.
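To make the self-verification idea concrete, here is a minimal sketch (again in Java rather than VB, purely for illustration) of a script that compares actual results against expected ones and says nothing unless a check fails:

```java
import java.util.ArrayList;
import java.util.List;

public class SelfVerifyingCheck {
    static List<String> failures = new ArrayList<>();

    // Record a discrepancy instead of displaying anything; success is silent.
    static void verify(String checkName, Object expected, Object actual) {
        if (!expected.equals(actual)) {
            failures.add(checkName + ": expected " + expected + ", got " + actual);
        }
    }

    public static void main(String[] args) {
        verify("upper-casing", "ABC", "abc".toUpperCase());
        verify("string length", 3, "abc".length());

        // Only speak up (and fail the run) if something went wrong.
        if (!failures.isEmpty()) {
            for (String f : failures) System.err.println(f);
            System.exit(1);
        }
    }
}
```

Run unattended, a script shaped like this produces no output on success and a non-zero exit code with a log of discrepancies on failure, which is exactly what a scheduler or build machine needs.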

Also, in an apparent nod to .NET, the book tacks on two chapters from a VB.NET book. They are not specifically about testing and serve to introduce a VB programmer to the many differences between VB and VB.NET. It is somewhat of an afterthought, and might be useful for getting your feet wet, but I would have liked to see some of the “Tester’s Tip” or other asides from the earlier chapters.

The book ends with advice directly from some professional test automators and genuinely useful appendices. Appendix D collects some interesting essays for further reading.

If you are in test automation, and running up against the limitations of the available tools, this book is great for learning how to fill that gap. Also, any tester who is interested in learning how to program will find the advice invaluable and the examples relevant to their work. The fact that Ms. Sweeney and her contributors are professional test automators imparting hard-won advice makes this book all the more useful.

Using jUnit for Monitoring

Since jUnit (and the xUnit family) is just a verification mechanism, it’s a natural fit for service monitoring. This article from JDJ describes using jUnit and JMX to build a service monitoring framework:

This article walks you through the process of setting up a basic service monitor and event handler for a common J2EE n-tier system. Developers of J2EE systems will be able to use JMX4ODP to create testing suites to help them develop more reliable systems. J2EE application administrators will be able to use JMX4ODP to simplify and regulate the management of deployed systems.

Why jUnit?

JUnit bills itself as a regression-testing suite for code objects, but it’s not much of a leap to see it as a tool for distributed system diagnostics. JUnit runs tests by instantiating objects, invoking their methods with known inputs, and checking the output against expected returns.

The article is very Java and J2EE focused, but the concepts are applicable to any service monitoring project.
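Setting the JMX4ODP specifics aside, the shape of such a monitor is just an xUnit-style check pointed at a live service. A minimal sketch, with a stubbed-in EchoService standing in for a real remote client (the class and its method are hypothetical):

```java
public class ServiceMonitorSketch {
    // Stand-in for a real service client (an EJB or JMX proxy, say);
    // in a real monitor this would call out to the deployed system.
    static class EchoService {
        String echo(String input) { return input; }
    }

    // The monitoring "test": known input, expected output, compare.
    static boolean checkEchoService() {
        EchoService service = new EchoService();  // instantiate the object under test
        String result = service.echo("ping");     // invoke with a known input
        return "ping".equals(result);             // check against the expected return
    }

    public static void main(String[] args) {
        if (!checkEchoService()) {
            // A scheduler or event handler would raise an alert here.
            System.err.println("ALERT: echo service check failed");
            System.exit(1);
        }
    }
}
```

Wrap checks like this in a scheduler and an event handler and you have the skeleton of a service monitor.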

Source Control for One

Eric Sink, whose company, SourceGear, publishes a source control product called SourceGear Vault, announced that they are now offering a single user license for $49.

He has an excellent article about why to use source control on projects with one developer, and a follow-up.

To his list of reasons, I’d add these:

  • It enables Daily Builds. You can’t build on a clean machine from the source on your drive. You need an official, working version of the source.
  • It enables bug fixes to released versions. Using labeling and branching features, you can ship a fix for a bug in a released version without also giving the user every change you’ve made since then. This is important no matter how many developers you have.
  • It enables binary search bug fixing. Outlined in this Joel On Software article, it’s a way of finding a bug by testing archived builds, halving the range of candidates each time, until you find the build that introduced the bug. Then you check the log entries to see what you did.
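Mechanically, the technique is an ordinary binary search over build numbers. A sketch, where the BuildTester hook is a hypothetical stand-in for “install archived build N and re-run the repro steps”:

```java
public class BuildBisect {
    // Hypothetical hook: install archived build N and run the repro case.
    interface BuildTester { boolean buildIsGood(int buildNumber); }

    // Classic binary search: lastGood is a known-good build, firstBad a
    // known-bad one. Returns the build that introduced the bug.
    static int findBreakingBuild(int lastGood, int firstBad, BuildTester tester) {
        while (firstBad - lastGood > 1) {
            int mid = (lastGood + firstBad) / 2;
            if (tester.buildIsGood(mid)) {
                lastGood = mid;   // bug appeared after this build
            } else {
                firstBad = mid;   // bug already present here
            }
        }
        return firstBad;
    }

    public static void main(String[] args) {
        // Pretend the bug crept in at build 137: builds below 137 are good.
        int culprit = findBreakingBuild(100, 200, n -> n < 137);
        System.out.println("Bug introduced in build " + culprit); // prints 137
    }
}
```

With a hundred candidate builds, that’s about seven install-and-test cycles instead of a hundred.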

Eric sells the benefits a little short, I think, but any company with one developer is unlikely to shell out a lot for source control, so the price is probably right, and there’s always that hope for a second developer.

Unit Testing with NUnit

I’m preparing a presentation on unit testing in .NET (using NUnit). Unit testing is largely independent of implementation language: the original Smalltalk framework and the popular jUnit framework for Java have nearly identical designs. The same is true for CppUnit (for C++) and every other unit testing framework I’ve looked at. Since .NET languages support all of the language features necessary to implement this design, I assumed that NUnit would be the same—and for NUnit 1.0, that was the case.

NUnit 2.0 takes it further. The developers correctly realized that .NET offers some features that can automate some of the tasks in unit testing, and took advantage of them. This “breaks” consistency with other unit testing frameworks, but for a good reason, and it’s close enough to the others to not be a big problem.

The basic architecture of the xUnit frameworks provides a base class called TestCase, which you extend to contain your test methods. The framework requires you to write another class that collects these test classes into a TestSuite. In jUnit, this class typically looks like this:

import junit.framework.*;

public class AllTests
{
  public static Test suite() {
    TestSuite suite =
      new TestSuite("All tests");

    suite.addTest(
      new TestSuite(ReaderTest.class)
    );

    suite.addTest(
      new TestSuite(CommandTest.class)
    );

    return suite;
  }
}

Note that the method AllTests.suite() must be edited each time a test class is added. If you forget, you will get a false test success. If you’re writing tests first, you will notice this immediately—but it would still be nice not to have to write or maintain this class. NUnit 2.0 uses custom attributes so that the framework can discover all test classes and run them automatically, eliminating this manual step. In NUnit, instead of extending TestCase, you use a custom attribute to mark the class as a TestFixture and add test methods to it (either marked with the attribute Test or named with the prefix “test”). It looks like this:

namespace NUnit.Tests
{
  using System;
  using NUnit.Framework;

  [TestFixture]
  public class SuccessTests
  {
    [Test] public void Add()
    { /* … */ }

    public void TestSubtract()
    { /* … */ }
  }
}

There is no need to write an AllTests class, as .NET has features that allow the framework to discover this class automatically and run it. Kent Beck calls this an “idiomatic design”: one that takes advantage of language features instead of porting the design from a language with fewer features. That’s a good lesson for any kind of language port.

Getting Started with Extreme Programming

Extreme Programming combines several practices into a well-defined software process. The goal of XP is to deliver features to the end user as fast as possible. Some of the practices are focused on quality, some on maintaining releasability, some on development speed and some on planning.

XP may seem daunting at first because it’s probably very different from how you work now. It’s also common for development teams to doubt the benefits of some of the practices and resist adopting them. Therefore, when getting started with XP, you will find it easier to migrate in small steps, each one building upon the last, gaining confidence and trust in the process along the way. If some of the practices aren’t right for your organization, you can still benefit from the others.

Three XP practices are easy to adopt, provide immediate benefit, and don’t rely on other practices.

The first, Coding Standard, simply states that you should have a consistent naming and formatting convention for your code. Many organizations already do this and some languages (Java, Eiffel, C#) already have recommended coding standards. Your coding standard should not only be internally consistent, but also consistent with industry norms. This practice enables the Metaphor and Common Code Ownership practices.

The next, Unit Testing, is far less common in non-XP shops, but no other XP practice is as easy to adopt or delivers as much immediate benefit. Even if you are skeptical of XP, I highly recommend that you try unit testing, and I will be spending a lot of time in this blog discussing why. Unit Testing enables the Refactoring, Simple Design, Common Code Ownership, and Continuous Integration practices, among others.

Lastly, instituting Daily Builds will make your functional testing and deployment more consistent. This is also easy to adopt and has many obvious benefits. This practice enables the Continuous Integration, Small Releases, and Customer Tests practices.

XP practices work best when combined, but it’s better to be successful at using some of them than fail when trying to use them all at once.

Once you’ve mastered these, pick among the enabled practices to take your process further. Practices focused on getting code to users will be most beneficial (Continuous Integration, Simple Design and Small Releases).

Automated Software Process Checklist

Here is a list of software processes that should be automated:

1. Source control
2. Bug tracking
3. Unit Testing
4. Daily Build and Test
5. Deployment
6. Functional Testing (Black box testing)
7. Source Code Generation (from other sources — e.g. Databases)

I’ll be talking more about these soon. The order is determined by what will give you the most benefit for the least additional work.

For some projects, it’s simply inexcusable not to have automated deployment, and it will make sense to tackle that first. For most projects, it’s the development processes that are manual or semi-manual and need to be dealt with first.