Automated testing is a cornerstone of agile development. An effective testing strategy will deliver new functionality more aggressively, accelerate user feedback, and improve quality. However, for many developers, creating effective automated tests is a unique and unfamiliar challenge.

xUnit Test Patterns is the definitive guide to writing automated tests using xUnit, the most popular unit testing framework in use today. Agile coach and test automation expert Gerard Meszaros describes 68 proven patterns for making tests easier to write, understand, and maintain. He then shows you how to make them more robust and repeatable, and far more cost-effective.

Loaded with information, this book feels like three books in one. The first part is a detailed tutorial on test automation that covers everything from test strategy to in-depth test coding. The second part, a catalog of 18 frequently encountered "test smells," provides troubleshooting guidelines to help you determine the root cause of problems and the most applicable patterns. The third part contains detailed descriptions of each pattern, including refactoring instructions illustrated by extensive code samples in multiple programming languages.

Topics covered include:

- Writing better tests, and writing them faster
- The four phases of automated tests: fixture setup, exercising the system under test, result verification, and fixture teardown
- Improving test coverage by isolating software from its environment using Test Stubs and Mock Objects
- Designing software for greater testability
- Using test "smells" (including code smells, behavior smells, and project smells) to spot problems and know when and how to eliminate them
- Refactoring tests for greater simplicity, robustness, and execution speed

This book will benefit developers, managers, and testers working with any agile or conventional development process, whether doing test-driven development or writing the tests last.
While the patterns and smells are especially applicable to all members of the xUnit family, they also apply to next-generation behavior-driven development frameworks such as RSpec and JBehave, and to other kinds of test automation tools, including recorded test tools and data-driven test tools such as Fit and FitNesse.

Visual Summary of the Pattern Language
Foreword
Preface
Acknowledgments
Introduction
Refactoring a Test
PART I: The Narratives
Chapter 1 A Brief Tour
Chapter 2 Test Smells
Chapter 3 Goals of Test Automation
Chapter 4 Philosophy of Test Automation
Chapter 5 Principles of Test Automation
Chapter 6 Test Automation Strategy
Chapter 7 xUnit Basics
Chapter 8 Transient Fixture Management
Chapter 9 Persistent Fixture Management
Chapter 10 Result Verification
Chapter 11 Using Test Doubles
Chapter 12 Organizing Our Tests
Chapter 13 Testing with Databases
Chapter 14 A Roadmap to Effective Test Automation
PART II: The Test Smells
Chapter 15 Code Smells
Chapter 16 Behavior Smells
Chapter 17 Project Smells
PART III: The Patterns
Chapter 18 Test Strategy Patterns
Chapter 19 xUnit Basics Patterns
Chapter 20 Fixture Setup Patterns
Chapter 21 Result Verification Patterns
Chapter 22 Fixture Teardown Patterns
Chapter 23 Test Double Patterns
Chapter 24 Test Organization Patterns
Chapter 25 Database Patterns
Chapter 26 Design-for-Testability Patterns
Chapter 27 Value Patterns
PART IV: Appendixes
Appendix A Test Refactorings
Appendix B xUnit Terminology
Appendix C xUnit Family Members
Appendix D Tools
Appendix E Goals and Principles
Appendix F Smells, Aliases, and Causes
Appendix G Patterns, Aliases, and Variations
Glossary
References
Index
The Value of Self-Testing Code

In Chapter 4 of Refactoring [Ref], Martin Fowler writes:

If you look at how most programmers spend their time, you'll find that writing code is actually a small fraction. Some time is spent figuring out what ought to be going on, some time is spent designing, but most time is spent debugging. I'm sure every reader can remember long hours of debugging, often long into the night. Every programmer can tell a story of a bug that took a whole day (or more) to find. Fixing the bug is usually pretty quick, but finding it is a nightmare. And then when you do fix a bug, there's always a chance that another one will appear and that you might not even notice it until much later. Then you spend ages finding that bug.

Some software is very difficult to test manually. In these cases, we are often forced into writing test programs.

I recall a project I was working on in 1996. My task was to build an event framework that would let client software register for an event and be notified when some other software raised that event (the Observer [GOF] pattern). I could not think of a way to test this framework without writing some sample client software. I had about 20 different scenarios I needed to test, so I coded up each scenario with the requisite number of observers, events, and event raisers. At first, I logged what was occurring in the console and scanned it manually. This scanning became very tedious very quickly.

Being quite lazy, I naturally looked for an easier way to perform this testing. For each test, I populated a Dictionary indexed by the expected event and its expected receiver, with the name of the receiver as the value. When a particular receiver was notified of an event, it looked in the Dictionary for the entry indexed by itself and the event it had just received. If this entry existed, the receiver removed the entry.
If it didn't, the receiver added the entry with an error message saying it was an unexpected event notification.

After running all the tests, the test program merely looked in the Dictionary and printed out its contents if it was not empty. As a result, running all of my tests had a nearly zero cost. The tests either passed quietly or spewed a list of test failures. I had unwittingly discovered the concept of a Mock Object (page 544) and a Test Automation Framework (page 298) out of necessity!

My First XP Project

In late 1999, I attended the OOPSLA conference, where I picked up a copy of Kent Beck's new book, eXtreme Programming Explained [XPE]. I was used to doing iterative and incremental development and already believed in the value of automated unit testing, although I had not tried to apply it universally. I had a lot of respect for Kent, whom I had known since the first PLoP[1] conference in 1994. For all these reasons, I decided that it was worth trying to apply eXtreme Programming on a ClearStream Consulting project. Shortly after OOPSLA, I was fortunate to come across a suitable project for trying out this development approach: an add-on application that interacted with an existing database but had no user interface. The client was open to developing software in a different way.

We started doing eXtreme Programming "by the book," using pretty much all of the practices it recommended, including pair programming, collective ownership, and test-driven development. Of course, we encountered a few challenges in figuring out how to test some aspects of the behavior of the application, but we still managed to write tests for most of the code.
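The self-checking Dictionary from the 1996 event-framework story can be sketched roughly as follows. This is a reconstruction in Python, not the original code; the class and method names (EventRecorder, expect, notify, failures) are invented for illustration. Expected notifications are registered up front, arrivals cancel them out, and anything left over at the end is a failure:

```python
class EventRecorder:
    """Tracks expected event notifications; whatever remains at the end is a failure."""

    def __init__(self):
        # (receiver, event) -> failure description. Expected entries are removed
        # when the notification arrives; unexpected arrivals add error entries.
        self.outstanding = {}

    def expect(self, receiver, event):
        self.outstanding[(receiver, event)] = f"{receiver} never received {event}"

    def notify(self, receiver, event):
        key = (receiver, event)
        if key in self.outstanding:
            del self.outstanding[key]      # an expected notification arrived
        else:
            self.outstanding[key] = f"unexpected event {event} at {receiver}"

    def failures(self):
        # An empty dictionary means every test passed quietly.
        return sorted(self.outstanding.values())


recorder = EventRecorder()
recorder.expect("observerA", "saved")
recorder.expect("observerB", "saved")
recorder.notify("observerA", "saved")      # expected: entry removed
recorder.notify("observerC", "deleted")    # unexpected: error entry added
print(recorder.failures())
```

Running the tests then costs nearly nothing: either the failure list is empty, or it spells out exactly which notifications were missing or unexpected, which is the essence of what a Mock Object verifies.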
Then, as the project progressed, I started to notice a disturbing trend: It was taking longer and longer to implement seemingly similar tasks.

I explained the problem to the developers and asked them to record on each task card how much time had been spent writing new tests, modifying existing tests, and writing the production code. Very quickly, a trend emerged. While the time spent writing new tests and writing the production code seemed to be staying more or less constant, the amount of time spent modifying existing tests was increasing, and the developers' estimates were going up as a result. When a developer asked me to pair on a task and we spent 90% of the time modifying existing tests to accommodate a relatively minor change, I knew we had to change something, and soon!

When we analyzed the kinds of compile errors and test failures we were experiencing as we introduced new functionality, we discovered that many of the tests were affected by changes to the methods of the system under test (SUT). This came as no surprise, of course. What was surprising was that most of the impact was felt during the fixture setup part of the test and that the changes were not affecting the core logic of the tests.

This revelation was an important discovery because it showed us that the knowledge of how to create the objects of the SUT was scattered across most of the tests. In other words, the tests knew too much about nonessential parts of the behavior of the SUT. I say "nonessential" because most of the affected tests did not care about how the objects in the fixture were created; they were interested in ensuring that those objects were in the correct state. Upon further examination, we found that many of the tests were creating identical or nearly identical objects in their test fixtures.

The obvious solution to this problem was to factor out this logic into a small set of Test Utility Methods (page 599).
There were several variations:

- When we had a bunch of tests that needed identical objects, we simply created a method that returned that kind of object ready to use. We now call these Creation Methods (page 415).
- Some tests needed to specify different values for some attribute of the object. In these cases, we passed that attribute as a parameter to the Parameterized Creation Method (see Creation Method).
- Some tests wanted to create a malformed object to ensure that the SUT would reject it. Writing a separate Parameterized Creation Method for each attribute cluttered the signature of our Test Helper (page 643), so we created a valid object and then replaced the value of the One Bad Attribute (see Derived Value on page 718).

We had discovered what would become[2] our first test automation patterns.

Later, when tests started failing because the database did not like us inserting another object with the same value of a uniquely constrained key, we added code to generate the unique key programmatically. We called this variant an Anonymous Creation Method (see Creation Method) to indicate the presence of this added behavior.

Identifying the problem that we now call a Fragile Test (page 239) was an important event on this project, and the subsequent definition of its solution patterns saved this project from possible failure. Without this discovery, we would, at best, have abandoned the automated unit tests that we had already built. At worst, the tests would have reduced our productivity so much that we would have been unable to deliver on our commitments to the client. As it turned out, we were able to deliver what we had promised, and with very good quality. Yes, the testers[3] still found bugs in our code, because we were definitely missing some tests. Once we had figured out what the missing tests needed to look like, however, introducing the changes needed to fix those bugs was a relatively straightforward process.

We were hooked.
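As a rough illustration, the variations described above might look like the following Python sketch. The Customer class and its attributes are invented for this example and do not come from the book's code samples:

```python
import itertools

# Source of unique keys for the Anonymous Creation Method variant.
_next_key = itertools.count(1)

class Customer:
    def __init__(self, cust_id, name, credit_limit):
        self.cust_id = cust_id
        self.name = name
        self.credit_limit = credit_limit

# Creation Method: returns a valid, ready-to-use object for all the tests
# that need an identical one.
def create_customer():
    return Customer(cust_id=99, name="A Customer", credit_limit=100)

# Parameterized Creation Method: the one attribute the test cares about is
# passed in; everything else stays at its default.
def create_customer_with_limit(credit_limit):
    customer = create_customer()
    customer.credit_limit = credit_limit
    return customer

# One Bad Attribute: start from a valid object and corrupt a single field so
# the test can verify that the SUT rejects it.
def create_customer_with_bad_limit():
    customer = create_customer()
    customer.credit_limit = -1   # the one invalid attribute
    return customer

# Anonymous Creation Method: generates the unique key programmatically so
# repeated inserts never collide on a unique constraint.
def create_anonymous_customer():
    customer = create_customer()
    customer.cust_id = next(_next_key)
    return customer
```

The point of each helper is that the tests no longer encode how a fixture object is built, only which of its attributes matter to the test at hand.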
Automated unit testing and test-driven development really did work, and we have been using them consistently ever since.

As we applied the practices and patterns on subsequent projects, we have run into new problems and challenges. In each case, we have "peeled the onion" to find the root cause and come up with ways to address it. As these techniques have matured, we have added them to our repertoire of techniques for automated unit testing.

We first described some of these patterns in a paper presented at XP2001. In discussions with other participants at that and subsequent conferences, we discovered that many of our peers were using the same or similar techniques. That elevated our methods from "practice" to "pattern" (a recurring solution to a recurring problem in a context). The first paper on test smells [RTC] was presented at the same conference, building on the concept of code smells first described in [Ref].

My Motivation

I am a great believer in the value of automated unit testing. I practiced software development without it for the better part of two decades, and I know that my professional life is much better with it than without it. I believe that the xUnit framework and the automated tests it enables are among the truly great advances in software development. I find it very frustrating when I see companies trying to adopt automated unit testing but being unsuccessful because of a lack of key information and skills.

As a software development consultant with ClearStream Consulting, I see a lot of projects. Sometimes I am called in early on a project to help clients make sure they "do things right." More often than not, however, I am called in when things are already off the rails. As a result, I see a lot of "worst practices" that result in test smells. If I am lucky and I am called early enough, I can help the client recover from the mistakes.
If not, the client will likely muddle through, less than satisfied with how TDD and automated unit testing worked, and the word goes out that automated unit testing is a waste of time.

In hindsight, most of these mistakes are easily avoidable given the right knowledge at the right time. But how do you obtain that knowledge without making the mistakes for yourself? At the risk of sounding self-serving, hiring someone who has the knowledge is the most time-efficient way of learning any new practice or technology. According to Gerry Weinberg's "Law of Raspberry Jam" [SoC],[4] taking a course or reading a book is a much less effective (though less expensive) alternative. I hope that by writing down a lot of these mistakes and suggesting ways to avoid them, I can save you a lot of grief on your project, whether it is fully agile or just more agile than it has been in the past, the "Law of Raspberry Jam" notwithstanding.

Who This Book Is For

I have written this book primarily for software developers (programmers, designers, and architects) who want to write better tests, and for the managers and coaches who need to understand what the developers are doing and why the developers need to be cut enough slack so they can learn to do it even better! The focus here is on developer tests and customer tests that are automated using xUnit. In addition, some of the higher-level patterns apply to tests that are automated using technologies other than xUnit. Rick Mugridge and Ward Cunningham have written an excellent book on Fit [FitB], and they advocate many of the same practices.

Developers will likely want to read the book from cover to cover, but they should focus on skimming the reference chapters rather than trying to read them word for word. The emphasis should be on getting an overall idea of which patterns exist and how they work. Developers can then return to a particular pattern when the need for it arises.
The first few elements (up to and including the "When to Use It" section) of each pattern should provide this overview.

Managers and coaches might prefer to focus on reading Part I, The Narratives, and perhaps Part II, The Test Smells. They might also want to read Chapter 18, Test Strategy Patterns, because it describes decisions they will need to understand so they can support the developers as they work their way through these patterns. At a minimum, managers should read Chapter 3, Goals of Test Automation.

[1] The Pattern Languages of Programs conference.
[2] Technically, they are not truly patterns until they have been discovered by three independent project teams.
[3] The testing function is sometimes referred to as "Quality Assurance." This usage is, strictly speaking, incorrect.
[4] The Law of Raspberry Jam: "The wider you spread it, the thinner it gets."
Visual Summary of the Pattern Language xvii
Foreword xix
Preface xxi
Acknowledgments xxvii
Introduction xxix
Refactoring a Test xlv

PART I: The Narratives 1

Chapter 1 A Brief Tour 3
  About This Chapter 3
  The Simplest Test Automation Strategy That Could Possibly Work 3
  Development Process 4
  Customer Tests 5
  Unit Tests 6
  Design for Testability 7
  Test Organization 7
  What's Next? 8

Chapter 2 Test Smells 9
  About This Chapter 9
  An Introduction to Test Smells 9
  What's a Test Smell? 10
  Kinds of Test Smells 10
  What to Do about Smells? 11
  A Catalog of Smells 12
  The Project Smells 12
  The Behavior Smells 13
  The Code Smells 16
  What's Next? 17

Chapter 3 Goals of Test Automation 19
  About This Chapter 19
  Why Test? 19
  Economics of Test Automation 20
  Goals of Test Automation 21
  Tests Should Help Us Improve Quality 22
  Tests Should Help Us Understand the SUT 23
  Tests Should Reduce (and Not Introduce) Risk 23
  Tests Should Be Easy to Run 25
  Tests Should Be Easy to Write and Maintain 27
  Tests Should Require Minimal Maintenance as the System Evolves Around Them 29
  What's Next? 29

Chapter 4 Philosophy of Test Automation 31
  About This Chapter 31
  Why Is Philosophy Important? 31
  Some Philosophical Differences 32
  Test First or Last? 32
  Tests or Examples? 33
  Test-by-Test or Test All-at-Once? 33
  Outside-In or Inside-Out? 34
  State or Behavior Verification? 36
  Fixture Design Upfront or Test-by-Test? 36
  When Philosophies Differ 37
  My Philosophy 37
  What's Next? 37

Chapter 5 Principles of Test Automation 39
  About This Chapter 39
  The Principles 39
  What's Next? 48

Chapter 6 Test Automation Strategy 49
  About This Chapter 49
  What's Strategic? 49
  Which Kinds of Tests Should We Automate? 50
  Per-Functionality Tests 50
  Cross-Functional Tests 52
  Which Tools Do We Use to Automate Which Tests? 53
  Test Automation Ways and Means 54
  Introducing xUnit 56
  The xUnit Sweet Spot 58
  Which Test Fixture Strategy Do We Use? 58
  What Is a Fixture? 59
  Major Fixture Strategies 60
  Transient Fresh Fixtures 61
  Persistent Fresh Fixtures 62
  Shared Fixture Strategies 63
  How Do We Ensure Testability? 65
  Test Last-at Your Peril 65
  Design for Testability-Upfront 65
  Test-Driven Testability 66
  Control Points and Observation Points 66
  Interaction Styles and Testability Patterns 67
  Divide and Test 71
  What's Next? 73

Chapter 7 xUnit Basics 75
  About This Chapter 75
  An Introduction to xUnit 75
  Common Features 76
  The Bare Minimum 76
  Defining Tests 76
  What's a Fixture? 78
  Defining Suites of Tests 78
  Running Tests 79
  Test Results 79
  Under the xUnit Covers 81
  Test Commands 82
  Test Suite Objects 82
  xUnit in the Procedural World 82
  What's Next? 83

Chapter 8 Transient Fixture Management 85
  About This Chapter 85
  Test Fixture Terminology 86
  What Is a Fixture? 86
  What Is a Fresh Fixture? 87
  What Is a Transient Fresh Fixture? 87
  Building Fresh Fixtures 88
  In-line Fixture Setup 88
  Delegated Fixture Setup 89
  Implicit Fixture Setup 91
  Hybrid Fixture Setup 93
  Tearing Down Transient Fresh Fixtures 93
  What's Next? 94

Chapter 9 Persistent Fixture Management 95
  About This Chapter 95
  Managing Persistent Fresh Fixtures 95
  What Makes Fixtures Persistent? 95
  Issues Caused by Persistent Fresh Fixtures 96
  Tearing Down Persistent Fresh Fixtures 97
  Avoiding the Need for Teardown 100
  Dealing with Slow Tests 102
  Managing Shared Fixtures 103
  Accessing Shared Fixtures 103
  Triggering Shared Fixture Construction 104
  What's Next? 106

Chapter 10 Result Verification 107
  About This Chapter 107
  Making Tests Self-Checking 107
  Verify State or Behavior? 108
  State Verification 109
  Using Built-in Assertions 110
  Delta Assertions 111
  External Result Verification 111
  Verifying Behavior 112
  Procedural Behavior Verification 113
  Expected Behavior Specification 113
  Reducing Test Code Duplication 114
  Expected Objects 115
  Custom Assertions 116
  Outcome-Describing Verification Method 117
  Parameterized and Data-Driven Tests 118
  Avoiding Conditional Test Logic 119
  Eliminating "if" Statements 120
  Eliminating Loops 121
  Other Techniques 121
  Working Backward, Outside-In 121
  Using Test-Driven Development to Write Test Utility Methods 122
  Where to Put Reusable Verification Logic? 122
  What's Next? 123

Chapter 11 Using Test Doubles 125
  About This Chapter 125
  What Are Indirect Inputs and Outputs? 125
  Why Do We Care about Indirect Inputs? 126
  Why Do We Care about Indirect Outputs? 126
  How Do We Control Indirect Inputs? 128
  How Do We Verify Indirect Outputs? 130
  Testing with Doubles 133
  Types of Test Doubles 133
  Providing the Test Double 140
  Configuring the Test Double 141
  Installing the Test Double 143
  Other Uses of Test Doubles 148
  Endoscopic Testing 149
  Need-Driven Development 149
  Speeding Up Fixture Setup 149
  Speeding Up Test Execution 150
  Other Considerations 150
  What's Next? 151

Chapter 12 Organizing Our Tests 153
  About This Chapter 153
  Basic xUnit Mechanisms 153
  Right-Sizing Test Methods 154
  Test Methods and Testcase Classes 155
  Testcase Class per Class 155
  Testcase Class per Feature 156
  Testcase Class per Fixture 156
  Choosing a Test Method Organization Strategy 158
  Test Naming Conventions 158
  Organizing Test Suites 160
  Running Groups of Tests 160
  Running a Single Test 161
  Test Code Reuse 162
  Test Utility Method Locations 163
  TestCase Inheritance and Reuse 163
  Test File Organization 164
  Built-in Self-Test 164
  Test Packages 164
  Test Dependencies 165
  What's Next? 165

Chapter 13 Testing with Databases 167
  About This Chapter 167
  Testing with Databases 167
  Why Test with Databases? 168
  Issues with Databases 168
  Testing without Databases 169
  Testing the Database 171
  Testing Stored Procedures 172
  Testing the Data Access Layer 172
  Ensuring Developer Independence 173
  Testing with Databases (Again!) 173
  What's Next? 174

Chapter 14 A Roadmap to Effective Test Automation 175
  About This Chapter 175
  Test Automation Difficulty 175
  Roadmap to Highly Maintainable Automated Tests 176
  Exercise the Happy Path Code 177
  Verify Direct Outputs of the Happy Path 178
  Verify Alternative Paths 178
  Verify Indirect Output Behavior 179
  Optimize Test Execution and Maintenance 180
  What's Next? 181

PART II: The Test Smells 183

Chapter 15 Code Smells 185
  Obscure Test 186
  Conditional Test Logic 200
  Hard-to-Test Code 209
  Test Code Duplication 213
  Test Logic in Production 217

Chapter 16 Behavior Smells 223
  Assertion Roulette 224
  Erratic Test 228
  Fragile Test 239
  Frequent Debugging 248
  Manual Intervention 250
  Slow Tests 253

Chapter 17 Project Smells 259
  Buggy Tests 260
  Developers Not Writing Tests 263
  High Test Maintenance Cost 265
  Production Bugs 268

PART III: The Patterns 275

Chapter 18 Test Strategy Patterns 277
  Recorded Test 278
  Scripted Test 285
  Data-Driven Test 288
  Test Automation Framework 298
  Minimal Fixture 302
  Standard Fixture 305
  Fresh Fixture 311
  Shared Fixture 317
  Back Door Manipulation 327
  Layer Test 337

Chapter 19 xUnit Basics Patterns 347
  Test Method 348
  Four-Phase Test 358
  Assertion Method 362
  Assertion Message 370
  Testcase Class 373
  Test Runner 377
  Testcase Object 382
  Test Suite Object 387
  Test Discovery 393
  Test Enumeration 399
  Test Selection 403

Chapter 20 Fixture Setup Patterns 407
  In-line Setup 408
  Delegated Setup 411
  Creation Method 415
  Implicit Setup 424
  Prebuilt Fixture 429
  Suite Fixture Setup 441
  Setup Decorator 447
  Chained Tests 454

Chapter 21 Result Verification Patterns 461
  State Verification 462
  Behavior Verification 468
  Custom Assertion 474
  Delta Assertion 485
  Guard Assertion 490
  Unfinished Test Assertion 494

Chapter 22 Fixture Teardown Patterns 499
  Garbage-Collected Teardown 500
  Automated Teardown 503
  In-line Teardown 509
  Implicit Teardown 516

Chapter 23 Test Double Patterns 521
  Test Double 522
  Test Stub 529
  Test Spy 538
  Mock Object 544
  Fake Object 551
  Configurable Test Double 558
  Hard-Coded Test Double 568
  Test-Specific Subclass 579

Chapter 24 Test Organization Patterns 591
  Named Test Suite 592
  Test Utility Method 599
  Parameterized Test 607
  Testcase Class per Class 617
  Testcase Class per Feature 624
  Testcase Class per Fixture 631
  Testcase Superclass 638
  Test Helper 643

Chapter 25 Database Patterns 649
  Database Sandbox 650
  Stored Procedure Test 654
  Table Truncation Teardown 661
  Transaction Rollback Teardown 668

Chapter 26 Design-for-Testability Patterns 677
  Dependency Injection 678
  Dependency Lookup 686
  Humble Object 695
  Test Hook 709

Chapter 27 Value Patterns 713
  Literal Value 714
  Derived Value 718
  Generated Value 723
  Dummy Object 728

PART IV: Appendixes 733

Appendix A Test Refactorings 735
Appendix B xUnit Terminology 741
Appendix C xUnit Family Members 747
Appendix D Tools 753
Appendix E Goals and Principles 757
Appendix F Smells, Aliases, and Causes 761
Appendix G Patterns, Aliases, and Variations 767

Glossary 785
References 819
Index 835