I have now worked on two large projects that had each grown a large, complex testing framework. When I join a project, I like to see a strong test suite; it gives me confidence that changes I make to the code base won't cause regressions. That matters because in many legacy code bases, features become tangled and disorganized under deadlines and scope creep. We've all been there, so we know how such things come to be.
At first glance, the test suites looked pretty good. Coverage was not well documented, but I could tell the tests did a good job of covering the most important cases and a lot of edge cases. As I started to dig in, however, I noticed both projects shared a common problem: the complexity of each test suite had grown in a different direction from the intent of the application it was meant to test. I get frustrated when this happens, because such suites leave me confused about things like the true life cycle of an application.
Consider an application meant to administer surveys, for example. The life cycle of such an application probably looks something like this:
- Administrator creates a new survey.
- Administrator sends the survey to potential takers.
- Some takers complete the survey.
- Administrator reviews survey answers as more completions arrive.
- Administrator closes the survey.
- Administrator generates summary reports for the survey.
Test suites that suffer from the problem I'm describing might have a test that runs something like this (a rough code sketch follows the list):
- Generate a survey.
- Generate survey answers.
- Generate summary report for the survey.
- Verify summary is correct.
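Here is a minimal sketch of what such a test tends to look like. Every application class and helper in it (Survey, SurveyTestData, ReportGenerator, SummaryReport) is a hypothetical name invented for illustration, not part of any real framework; the point is that the survey and its answers are conjured directly, with no administrator and no survey taker anywhere in the test.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class SurveySummaryShortcutTest {

    @Test
    void summaryReportCountsCompletedResponses() {
        // "Generate a survey": inserted straight through a test helper,
        // bypassing the rule that only an administrator may create one.
        Survey survey = SurveyTestData.createSurvey("Customer Satisfaction");

        // "Generate survey answers": again fabricated directly; nobody was
        // sent the survey and nobody actually took it.
        SurveyTestData.addCompletedResponses(survey, 25);

        // Generate the summary report and verify it.
        SummaryReport report = ReportGenerator.generate(survey);
        assertEquals(25, report.completedResponseCount());
    }
}
```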
This simple example highlights something important: tests, especially automation tests, have a bad habit of skipping steps in the application life cycle. In this example, no administrator created the survey. That kind of shortcut confuses developers new to a project. A new developer probably doesn't know that an administrator is supposed to create surveys; the test tools just... make a survey, and the conditions for making it are never made clear in the test. What's worse, if we push the validation that requires an administrator to create a survey down a layer, we might suddenly cause large sections of the test suite to fail at step one, because every test gets an error saying "only an administrator can create a survey" or something like that.
This kind of stuff leaves me stumped. If someone went to all the trouble of creating an automation test framework, why not have the automation use the public APIs and honor the application lifecycle? Granted, there will always be situations where some data-manipulation magic is necessary (a rule that says "x happens after a week" can't be tested by waiting a week), but as a general principle, I think it makes far more sense to build a test framework that both tests and teaches the application lifecycle correctly. This doesn't require choosing between Spock, Cucumber, or JUnit (and most of what I am saying here does not apply to unit tests)... all it requires is a commitment to integration- or system-level test tools and frameworks that work hard to express the lifecycle.
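For contrast, here is a sketch of the same check written to honor the lifecycle. Again, every application class here (AdminSession, SurveyTaker, and so on) is hypothetical and exists only for illustration; what matters is that the test walks the same public path a real administrator and real takers would, so the "only an administrator can create a survey" rule is exercised rather than bypassed.

```java
import java.util.List;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class SurveySummaryLifecycleTest {

    @Test
    void summaryReportCountsCompletedResponses() {
        // An administrator creates the survey through the public API, so the
        // administrator-only creation rule is part of the test's first step.
        AdminSession admin = AdminSession.logIn("admin@example.com");
        Survey survey = admin.createSurvey("Customer Satisfaction");

        // The survey goes out to takers, and the takers complete it,
        // following the same steps a real survey would go through.
        List<SurveyTaker> takers = SurveyTakers.sample(25);
        admin.sendSurvey(survey, takers);
        takers.forEach(taker -> taker.complete(survey));

        // The administrator closes the survey and generates the report.
        admin.closeSurvey(survey);
        SummaryReport report = admin.generateSummaryReport(survey);
        assertEquals(25, report.completedResponseCount());
    }
}
```

A test written this way doubles as documentation: a new developer reading it can see who creates a survey, who answers it, and when reports become available.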