Mocks and Dependency Injection

This is the third and final part of a series about mocking and TDD. In part 2, I created a github boundary object. To use it, I relied on dependency injection that looked like this: User.get('username', github)

This isn’t great. Every time I need a user I would have to take two steps: create or import the github boundary, and then get the user. That’s not only annoying, but could result in Github() calls being sprinkled throughout my code, making it very hard to change the way I initialize it (I can easily picture needing to pass in some config options down the road).

One way to fix this would be to make the github parameter optional. It would look something like this:

# in User model:
@classmethod
def get(cls, username, github=None):
    github = github or Github()
    user_data = github.get_user(username)
    return cls(**user_data)

That would let me instantiate users wherever I want using just the username, and still let me inject a mocked github boundary in my tests. But it would reek of code that only exists to make something easier to test. That’s a smell (a signal that a design may be bad) that is very common when testing with mocks. Your tests are more effective when they exercise code the same way it’s used in production, without special hooks or hacks to poke around inside.

Making the parameter optional also risks accidental integration. Have you ever found that your unit test suite fails when the network is down, even though you thought you were being so careful? Allowing implicit communication with boundaries can lead to pain like that.

Whenever I reach the point in my test where I want to inject a mocked boundary, I always leave it as a required parameter when implementing the production code. I do this because boundaries are volatile and out of my control, so I want all interaction with them to be explicit.

Since I want the github parameter to be required, but I don’t want to pass it in every time I need a user, I need a new thing that knows about my boundary and can pass it in for me. When I need a user from github, I can call that thing. One way to do that is with a full-blown inversion of control (IoC) container: an object that knows how to build your objects with all their dependencies/wiring. Containers have their own set of downsides, and I try to avoid getting to the point where I need something that heavy.

Instead, I’ll add a new class method right on the User model that looks like this: User.from_github('username'). Then I can get users with a single call, and my interaction with github will still be explicit. There’s no risk of accidental integration: I’ll either be passing in a boundary, or calling a method that mentions it.

How do I implement this? First, I realize that I’m in the “refactor” phase of my red/green/refactor TDD cycle. Since I’m refactoring, I want to change the structure of my code, without changing its behavior. So my goal is to clean this up without breaking (and ideally without changing) any of my tests that exercise that behavior. My view currently looks like this:

# somewhere in my view:
github = Github()
user = User.get(request.data['username'], github)

It’s instantiating the github boundary and then using it to get a user matching a username. This is pretty much the exact behavior I want in my from_github method, so my plan is to do an extract method refactoring.

I start by literally copying the lines to a class method on my User model:

# in my User model:
@classmethod
def from_github(cls):
    github = Github()
    user = User.get(request.data['username'], github)

I can’t rely on a global request object, so I change it to a parameter (and remember to return the user):

# in my User model:
@classmethod
def from_github(cls, username):
    github = Github()
    return User.get(username, github)

and update my view:

# somewhere in my view:
user = User.from_github(request.data['username'])

Still green! Successful refactoring.

While in my user model I notice something:

# in User:
@classmethod
def get(cls, username, github):
    user_data = github.get_user(username)
    return cls(**user_data)

Now that I have a method that explicitly mentions github, it feels weird that this method – which is not specific to github – has a parameter called github. I change it to be generic:

# in User:
@classmethod
def get(cls, username, repo):
    user_data = repo.get_user(username)
    return cls(**user_data)

Much better. And now the door is open to getting users from other APIs. For example, you can probably imagine what a from_twitter method might look like.
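Something like this, perhaps — a sketch that assumes a Twitter boundary object with a get_user method shaped like Github.get_user:

# in my User model (a sketch, assuming a Twitter boundary exists):
@classmethod
def from_twitter(cls, username):
    twitter = Twitter()
    return User.get(username, twitter)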

Notice that I never explicitly wrote a test for User.from_github. There are a few reasons: In my refactoring step, I rarely write new tests, since I don’t want to change behavior. And this method is actually another boundary, which I don’t unit test, and is already covered by the system test that hits my view.

In the end, I now have several ways to create users:

  • Instantiate with data from anywhere: User(**attributes)
  • Instantiate with data from a boundary that conforms to my expected interface: User.get(username, boundary)
  • Instantiate with data from a specific, named boundary: User.from_github(username)

I’m happy with this design. These are all clear, well defined factory methods (methods for creating objects) each with a specific purpose. And it turns out this is the pattern I usually end up with when dealing with models and boundaries. More explicitly, the pattern looks like this:

  • Ignore the boundary at first and write an init method that accepts pure data.
  • Drill down until I need to test that data is coming from a boundary, design the boundary using a mock, and inject it via a required parameter on a new method (see part 2).
  • Clean things up by adding a new method that can handle the boundary wiring for me.

I’ve found that using mocks, dependency injection, and factory methods in this way has made my code easier to maintain. The methods are small, all interaction is clear, and refactoring is safe and fun.

Mocks and External Dependencies

This is Part 2 in a series about mocking. In part 1, I said it’s best to use mocks as a design tool, and not as a convenience tool for tests that touch external dependencies. But what does it look like when you do have an external dependency, like a third-party library? Do you wrap it in your own code and mock the wrapper? I don’t think so. That puts the emphasis on the dependency, and I want dependencies to be details. Instead, I think in terms of boundaries. I let my tests help me decide where those boundaries are, and then by mocking them, figure out what they should look like. Then I may implement that boundary using an external dependency, which I do not mock in the tests. I’ll show what I mean with an example.

My imaginary example app will tell you if a particular github user is famous or not. My business logic determines famousness based on the number of followers. If they have 100 or more, they are famous.

Now pretend that I’ve drilled down to the point where I need a User model with an is_famous() method.

I don’t start by looking for (or writing) a github API library. I don’t like to interact with an API in my code until I’m absolutely forced to. I do take a look at the API to get an idea of where I’m headed before starting, but when I do, I’m careful not to let that influence my design in a way that would couple it tightly to the external API.

So I have some idea of what the github data looks like, but since I don’t want to hit the API yet, I start by assuming I can init my user with data from anywhere. This lets me test my logic without worrying about anything external. My tests look something like this:

def test_user_is_famous_if_more_than_100_followers(self):
    user = User(followers=101)
    self.assertTrue(user.is_famous())

def test_user_is_famous_if_100_followers(self):
    user = User(followers=100)
    self.assertTrue(user.is_famous())

def test_user_is_not_famous_if_less_than_100_followers(self):
    user = User(followers=99)
    self.assertFalse(user.is_famous())

It’s easy enough to make these pass, and since I’m not concerned with the github api yet, the tests are very easy to read and understand. No mocking noise!
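The implementation that makes them pass can be as simple as this (a sketch — the post doesn’t show it, so the details are my assumption):

class User(object):
    def __init__(self, id=None, followers=0):
        self.id = id
        self.followers = followers

    def is_famous(self):
        # the business rule: 100 or more followers means famous
        return self.followers >= 100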

But in the real world, I won’t be hard-coding the data passed to User’s constructor. So I want a new method that can initialize a user with data from the service where that data lives. I take a moment to think about what that might look like:

# in my imagination (or maybe a scratch buffer...)
@classmethod
def get(cls, username):
    # get user_data from github...
    return cls(**user_data)

I’m happy with that. So in a real file, I start with a test:

def test_gets_user_data_from_github(self):
    user = User.get('blaix')
    self.assertEqual(user.id, 420)
    self.assertEqual(user.followers, 69)

How do I make this pass? Time to start looking for a github client library? Not yet. I can defer that decision a bit longer. For now, I only want to do the simplest thing that makes the test pass, so I cheat:

# in User:
@classmethod
def get(cls, username):
    return cls(id=420, followers=69)

I haven’t shown it, but I have a view that is initializing a user, and a system test that exercises that view. So I update my view to call this new method, and check that my system test still passes.
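That view update is a one-liner (shown here as a sketch, since the view isn’t part of the article):

# somewhere in my view:
user = User.get(request.data['username'])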

I’m all green. But not every github user has an id and follower count this cool and nice. I need my code to handle the general case. To make my code more general, I need to make my tests more specific. So I need my tests to explicitly verify that I’m getting the data from github. How do I do that?

First, I recognize that I’ve finally reached a boundary: my app code needs data from the outside world – in this case, the github API. At boundaries like this, I want an explicit object (a function, method, instance, or class) with a single purpose: handle that external communication.

I keep my boundaries in explicit objects to protect me from things that are volatile. The github API, or even the library I’d use to access the API, could change for reasons completely independent from my business logic. When I keep my interaction with it isolated, I can respond to those changes safer and faster, since the interaction won’t be scattered around and mixed with my app code. As a side-effect, it also provides a nice injection point to stick a test double that will help me move forward here, as well as protect my unit tests from unreliable and non-deterministic network calls.

Now back to that test: how do I assert that those numbers came from github? Since I’ve decided to use a boundary object, I can verify that my new method is using the boundary object to get the data. How do I verify that something I’m testing is interacting with another object correctly? This is exactly the right job for a mock.

This is a pattern I’ve been recognizing in my code lately. At my boundaries, I want the ability to inject a boundary object, and I want my tests to verify the interactions by injecting a test double to stand in for that boundary.

Since I’m using a test double as a stand-in for my boundary object, I get to design it from the point of view of the caller without worrying about the details of the implementation. So I decide that the cleanest way to get a user from my boundary object is to call a get_user method. Here’s my updated test:

def test_gets_user_data_from_github(self):
    github = Stub('github')
    
    calling(github.get_user).passing('blaix').returns({
        'id': 420,
        'followers': 69,
    })

    calling(github.get_user).passing('notblaix').returns({
        'id': 421,
        'followers': 70,
    })
    
    user = User.get('blaix', github)
    self.assertEqual(user.id, 420)
    self.assertEqual(user.followers, 69)

    user = User.get('notblaix', github)
    self.assertEqual(user.id, 421)
    self.assertEqual(user.followers, 70)

Note: I’m using tdubs. I’m not using an explicit Mock object here, but the way I’m using Stub + calling provides the same functionality: it verifies that I’m calling the collaborator correctly (since I’d only get the expected values when calling the method with those parameters).

Notice I had to add a new parameter to inject the github object. That’s yucky, but I don’t want to split my thinking yet. So I make a note to refactor this when I’m green again. First I’ll make this test pass by writing:

# in User:
@classmethod
def get(cls, username, github):
    user_data = github.get_user(username)
    return cls(**user_data)

That makes the unit test pass, but now my system test is failing because I’m not passing the github parameter. So I update my view:

# somewhere in my view:
github = Github() # I know this doesn't exist yet, it's fine.
user = User.get(request.data['username'], github)

It’s still failing, but for a different reason. Progress! Now it’s failing because Github doesn’t exist. So I create that class but leave it empty. I like to wait for my test failures to move me forward. Now it’s failing because get_user doesn’t exist, so I create that too, leaving it empty as well. Finally I get a failure that isn’t about basic scaffolding: I’m returning None and my code expects a dictionary. That’s going to require adding real logic, and I don’t want to do that without an explicit test for that logic, so for now, I silence this failure by returning a faked dictionary.
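At that point my scaffolding looks roughly like this (a sketch, not code from the original post — the returned values are placeholders just to silence the failure):

class Github(object):
    def get_user(self, username):
        # faked response just to get past this failure; no real logic yet
        return {'id': 420, 'followers': 69}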

Time to force my really for real github logic. As usual, I want to start with a test. Does that mean a unit test? Well, imagine what I’d need to do to unit test Github.get_user (meaning: test it without interacting with the outside world). I’d end up mocking third-party or even standard libraries. I don’t control those interfaces, so I wouldn’t get the full benefit of mocks, but I’d still get all the costs. So to optimize my rewards, I decide to fully integration test this boundary method. I expect to hit the real github API, and assert against my real user id. I’m only asserting against my id and not my follower count because the latter is likely to change, and I’m confident enough in my system tests that my bases are covered there.

def test_get_user_from_github(self):
    user = Github().get_user('blaix')
    self.assertEqual(user['id'], 664)

This is another pattern in my code: I always integration test my boundary objects. By doing this, I get two benefits: the tests are simple and provide high confidence, and since I don’t want to write a lot of tests like this, there is pressure to keep minimal logic in my boundary objects, which makes them easier to understand and maintain – something that’s very important for code that bumps up against things that are unreliable and could change outside your control.

Time to make it pass by filling my empty method with real guts:

import requests

class Github(object):
    def get_user(self, username):
        url = 'https://api.github.com/users/{}'.format(username)
        return requests.get(url).json()

I decided I didn’t need a full github client library. The simplest way to make my tests pass was to use the requests package (a ubiquitous package in python land for making HTTP requests).

But even though I’m using a third-party package, at no point did I need to patch an import. I’m not wrapping a third-party package to have something to mock in my tests, I used a mock to design an interface and then implemented that interface with a third-party package. I’m now free to swap out that package if I need to as the requirements grow more complex, and as long as I keep returning github user data from Github.get_user, I won’t have to change any of my other production code, or any of my tests. Imagine that: a complete refactoring of the internals of a class, with a test suite that acts only as a safety net and not handcuffs. Tests (with mocks!) that make refactoring third-party integrations easier, not harder. It’s possible when you follow these guidelines:

  • Work from the outside in. I started with a system test (not shown in the article), and that provided the safety net to start and keep the ball rolling. Then I worked my way in, one layer at a time, designing the code I wanted to have at the next layer down as I wrote my tests.
  • Defer decisions on third-party integrations as long as possible. It would have been tempting to start by using a third-party github library right in my view, but instead, I worked in layers, drilling down until I absolutely needed a single object with the sole purpose of communicating with github.
  • Prefer injectable boundary objects. When I reached the point where I wanted a test to assert that certain data came from github, I did that by injecting a test double, and this made it very easy to design the API of an explicit object to communicate with github.
  • Only integration test boundary objects. When I reached my boundary object, it was something that needed to communicate with the outside world. I could have tested it in isolation by mocking a third-party dependency, but that would leave me tightly coupled to an API I don’t have control over. So I fully integration test it, which puts pressure on me to keep my boundary object thin and free of logic, which is a good design for an object that interacts with volatile things like third-party dependencies and external HTTP APIs.

But wait! Remember this?

user = User.get('blaix', github)

This is gross. Passing an instance of my boundary object every time I need a user is going to be annoying. I punted on that earlier, but now that I’ve implemented everything and my tests are green, I’m free to refactor. This will require some discussion about mocks and dependency injection, and will be the subject of part 3 in this series.

Mocks as a Design Tool

Many people see mocks as a necessary evil to isolate their test code from third party dependencies and the outside world (the database, network, filesystem, etc). But in the paper “Mock Roles, Not Objects”, some of the people who first described mocks present them as a tool used in TDD to discover good interactions between your objects (i.e. design good types). They are much more powerful, and their costs are more reasonable, when they are used as a design tool, and not just a convenience tool for isolating your tests.

Note: “Mock” is a loaded word often used to describe any type of test double, but this article will be speaking about mocks in the strict sense. If you don’t know what that means, first read The Little Mocker by Uncle Bob. It’s the best explanation I’ve seen of the different types of test doubles. Further note: all of this will also apply to some implementations of spies.

To understand how mock objects can be used as a design tool, it helps to think about object-oriented programming as being all about messaging. In OOP, we don’t just have procedures that we can call, we have objects that we can ask questions or give commands to. Those questions and commands are messages that we send to the object. When you write my_model.save(), try thinking of it as telling the my_model object to save itself.

So if OOP is about messages, what are mock objects used for? Verifying messages! You should use a mock when you are testing something that interacts with another object, and you want to verify that you have told that other object to do something – i.e. assert that you sent it a particular message. And when you are writing your test first, you literally get to make up what that message looks like.

This is how mocks are used as a design tool in TDD. You work outside-in: start at a high level, and delegate details to lower levels. Mock those lower levels because right now you only care about telling them what to do. You’ll worry about how they do it later, when you’re ready to test that level.

In other words, you design your messages from the perspective of the message sender, the perspective that cares most about what you want that object to do, and least about how that object does it. This leads to messages that are simple and communicate well. And that leads to an object API that is simple and communicates well.

When done right, it feels like cheating. Your high-level tests almost feel like they aren’t testing anything. That’s good. These high level tests aren’t about verifying algorithms or reducing bugs. They are about designing your messages. It’s part of a TDD process to design code that is easy to understand and maintain. This high level code is easy to test because it’s easy to understand. It also helps lead to low-level code that is easy to test and understand because you’ve shaken out all the object collaboration in the higher levels, leaving simple procedures that can be tested without mocks.

But you only get these design benefits if you own the API of the object you’re mocking. You may have heard that you should not mock what you don’t own. Some libraries even strictly enforce this rule. But what does that mean? Why is it important?

When you “mock something you don’t own”, like a third-party dependency or something in stdlib, you can’t let your tests help you decide what the messages should be, because those choices have already been made. So if you only use mocks in this way, you are only getting what should be a side-effect of mocking, with none of the design benefits. And that leads to pain, because mocks have high costs. They give you plenty of rope to hang yourself with: increased coupling between test and implementation, potential for “false positives”, and increased setup costs. Many people don’t like mocks for these reasons, and if you aren’t using them primarily to design messages, I agree, they aren’t worth it.

So how do you mitigate those costs? What exactly should you do when you have an external dependency? What does this all look like in practice? I’m still writing about those topics and more, and planning to release it as a series about mocking and TDD. Part 2 covers mocks and external dependencies. If you’d like to be emailed when it is complete, subscribe to my newsletter. In the meantime, try using mocks to design the interactions between your objects. Used in this way, they can become a powerful part of your TDD tool belt.

Doctestability

Some languages let you use inline documentation to write example code that can be used as unit tests. In Python, these are called doctests. They look something like this:

def adder(a, b):
    """Adds two numbers for example purposes.

    >>> adder(1, 2)
    3

    >>> adder(5, 2)
    7

    """
    return a + b

I’m becoming a big fan of this feature, because I’ve noticed that the ability to effectively doctest something is usually an indicator of good design.

What is an “effective doctest”? I mean a doctest that:

  • Is easy to understand
  • Is focused: doesn’t require a lot of setup
  • Is safe: no side effects
  • Communicates: It’s documentation first and a test second

These are also things you can say about code that is well designed: it’s easy to understand, focused, safe, and communicates intent.

A black-box, purely functional object meets all of these criteria. You pass some data in, you get some data out. Passing the same data in always gives you the same data out. This is the perfect candidate for a doctest, so let your desire to doctest force you to write more functions like this.

But what about situations where you must have side effects?

Recently I needed an object to route background tasks. For example, when background task A was finished, it should fire off task B and C in parallel, and when B was finished, it should fire off D. Upon task completion, the task router should be triggered again with a message saying the task was completed so we can fire off the next task(s).

We were going to do this in python using celery. An implementation could have looked like this:

from myproj.celery import app, tasks

@app.task
def router(data, task, message):
    """Route background tasks.

    When task A is complete, run B and C.
    When task B is complete, run D.
    Start the chain by calling:

        router('data', 'task_a', 'start')

    """
    # chain each task to this router so it re-enters with the result
    if task == 'task_a':
        if message == 'start':
            (tasks.task_a.s(data) | router.s('task_a', 'complete')).delay()
        if message == 'complete':
            (tasks.task_b.s(data) | router.s('task_b', 'complete')).delay()
            (tasks.task_c.s(data) | router.s('task_c', 'complete')).delay()
    elif task == 'task_b':
        if message == 'complete':
            (tasks.task_d.s(data) | router.s('task_d', 'complete')).delay()
    else:
        # all done
        return data

Let’s look past the nested conditionals I used to keep the example compact and see what else is wrong with this function: My business logic – what tasks get triggered when – is tightly coupled to a third-party implementation: celery.

@app.task, .s(), .delay(), and chaining calls with a pipe are all celery-specific. This doesn’t seem too bad now, but this logic is likely to grow more complex, making the coupling even tighter, cluttering the logic, and making it even harder to test. And what happens when we outgrow our celery implementation and want to move to something like Amazon Simple Workflow Service?

Instead, since I approached this code with a desire to doctest, it ended up looking more like this:

class Router:
    """Route tasks.

    When task A is complete, run B and C.
    When task B is complete, run D.

    Init with a task runner: a callable that accepts the name of a
    task, some data, and a callback (which will be this router's
    route method). When a task completes, the runner should call the
    callback with the task's name, its result data, and a 'complete'
    message.

    Example Usage (the fake runner is synchronous, so task D runs as
    soon as task B's callback fires, before task C):

    >>> def fake_runner(task, data, callback):
    ...     print('Running %s with %s' % (task, repr(data)))
    ...     callback(task, '%s results' % task, 'complete')
    ...
    >>> router = Router(fake_runner)
    >>> router.route('task_a', 'data', 'start')
    Running task_a with 'data'
    Running task_b with 'task_a results'
    Running task_d with 'task_b results'
    Running task_c with 'task_a results'

    """
    def __init__(self, runner):
        self.runner = runner

    def route(self, task, data, message):
        if task == 'task_a':
            if message == 'start':
                self.runner('task_a', data, callback=self.route)
            if message == 'complete':
                self.runner('task_b', data, callback=self.route)
                self.runner('task_c', data, callback=self.route)
        elif task == 'task_b':
            if message == 'complete':
                self.runner('task_d', data, callback=self.route)
        else:
            # all done
            return data
To make it doctestable, I introduced a seam between my business logic and celery: a task runner (I’ll leave the celery runner implementation to your imagination). And that seam was simple enough that I could include a fake implementation right in the doctest without hurting its readability. In fact, it improves the communication by documenting how to implement the seam’s interface.

So the documentation is better, but is the code better?

My celery usage (the mechanics of running background tasks) and my business logic (what tasks run when) are now decoupled. Since they need to change for different reasons, my code now follows the Single Responsibility Principle. That’s a good sign that this is a better design. I can expand the logic without celery details increasing the complexity, and I can move to a new third-party task runner by writing a new implementation of the runner interface without touching my business logic at all.

Notice my router no longer depends on celery. In fact, I no longer need to import anything. Instead, it depends on an interface (the runner). So it’s also following the Dependency Inversion Principle. As a side effect, I can now unit test this by injecting a mock runner and making assertions on its calls. These are also good signs that it’s a better design.
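Such a unit test might look something like this (a sketch using unittest.mock; the test name and structure are my own, not from the post):

from unittest.mock import Mock, call

def test_completing_task_a_runs_b_and_c(self):
    runner = Mock()
    router = Router(runner)
    router.route('task_a', 'task_a results', 'complete')
    # the router should have told the runner to start B and C,
    # passing its own route method as the completion callback
    runner.assert_has_calls([
        call('task_b', 'task_a results', callback=router.route),
        call('task_c', 'task_a results', callback=router.route),
    ])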

But! You may be asking, aren’t these the same benefits you get from normal unit testing?

Yes, but there is one big additional constraint with doctests that you don’t have in unit tests: You don’t want to use a mocking library. It would reduce the effectiveness of the doctest by cluttering it with mock stuff, which reduces its focus and ability to communicate. If I had a mocking library available, I may have decided to just patch celery and tasks. Instead, I was forced to construct a seam with an interface that was simple enough to fake right in the documentation for my object.

I love the ability to mock. But it’s a design tool, and reducing your need for mocks is usually an indicator of good design. So get into the habit of writing doctests as you write your code. You will be happy with where it leads you.

Does TDD slow you down?

When you first start out with TDD, development will be much harder and much slower. It will practically grind to a halt. This is because you are learning. I’m not as interested in this part of the discussion. Any time you are learning something new, you will go slower. The more interesting question is, is it worth learning? Does it still slow you down once you become competent?

The truth is, you may never be as fast with TDD as you were without it. That’s a sign that you were going too fast. You weren’t finishing your work. You were writing code to get that specific feature working, and then moving on. You didn’t have to worry if the code you wrote was tightly coupled or had a poor interface, because you only had to call it once and that work is done. You definitely didn’t do much refactoring, because there was no safety net in place to alleviate the fear.

That pace is super fast and very addicting. But it is not sustainable. You can get things built quickly, but eventually maintenance becomes a nightmare, and your progress grinds to a halt. That’s because the same qualities that make code hard to test make it hard to change. Building code is easy, maintaining (i.e. changing) code is the hard part. TDD forces you to start feeling that pain early, so the cost gets spread out over the life of the codebase, instead of pushed back and back until you’re forced to deal with it (technical debt!).

So it’s about trade-offs. If you are working on a quick prototype, don’t write tests. They will slow you down, and it’s OK to admit that, because tests for a prototype won’t provide value. But if you are building something to last, write tests. It may slow down your initial velocity, but it will even out over the long-term life of the project.

Have you been trying to do TDD and it still feels like it’s slowing you down too much? Does it seem like your tests are doing more harm than good? Are you still waiting to see all these supposed “benefits” of TDD? I’m working on some materials to help you level up your TDD skills so you can start loving your tests instead of hating them. And I started a newsletter so we can have a conversation about the pain you’re feeling, and I can let you know as soon my TDD materials become available.

How to test your tests

One of the benefits of writing your tests first is that you will always see the test fail. This is important, because you can’t trust a test you haven’t seen fail. Think about a line of code like this: assert a = 3

Of course, you meant to write a == 3. In many languages that assignment is itself a valid expression, so the assertion passes no matter what, and you may not realize it if it’s in a test that you wrote to verify already-working code. It would pass, and you’d assume it passed because the code it’s testing really did set a to 3. But if you wrote and ran the test first (or commented out the working code to see a test failure), you’d notice that the test was passing when it shouldn’t, and fix the bug.

Watching a test fail is one way to test your tests.

But don’t just see a red/failing test and run off to make it pass. Pay attention to the failure message. You could have a different bug in your test that’s causing the wrong failure. Maybe due to a syntax or logic error. So if you don’t get the failure you expect, that’s another sign that your test may have a bug.

Now when you see a failure you expect, only write just enough code to fix that specific failure. Is your “makes a equal to 3” test failing because the module is missing? Don’t implement the entire module, just create it. Then watch it fail because the function is missing from the module. Now don’t implement the entire function, just declare it. And so on. Keep fixing only the immediate failure until you hit green. Does the implementation feel complete? If not, you need another test.
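To make those micro steps concrete, here’s roughly how the progression can look in Python (hypothetical module and function names, not from the original post):

# test_things.py (hypothetical names)
from things import make_a
# 1st run: no module named 'things'    -> create an empty things.py
# 2nd run: cannot import name 'make_a' -> add `def make_a(): pass`
# 3rd run: AssertionError (None != 3)  -> return 3, and we're green

def test_makes_a_equal_to_3():
    assert make_a() == 3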

It may feel silly at first, but if you train yourself to always take these micro steps, you can be sure that every line of your code is actually being tested. If you take large steps, the chances increase that untested – or even superfluous – code sneaks into your system. Fixing only the current failure tests your tests for completeness.

So while you’re in the “red” step of “red, green, refactor”, remember to keep an eye out that you’re red for the right reason. Don’t try to jump straight to green; just fix whatever is making you red right now. Eventually you’ll get there, and you’ll feel super confident in your code.

Do I have to write the test first?

To many, writing the test first is a requirement of TDD. It’s how I prefer to do it, but I don’t believe it’s a requirement, especially when starting out.

But that doesn’t mean I’m suggesting you go ahead and code away willy nilly and then write all the tests when you’re done. You still need a tight feedback loop. So how do you get that if you aren’t writing the tests first?

Using small steps: write one slice of code. Does it work? Good, now comment it out! Then write a test that will only pass with the code you just wrote.

Now run your test and watch it fail. This is an important step. If you haven’t seen a test fail, you can’t trust that you’re actually testing what you think you’re testing.

Now uncomment your code. Does the test pass? Good. Now you can refactor. Does the test still pass? Good! Now commit and repeat. I sometimes call this comment-driven development. I’m sure I’m the first person to think of it. I’m very clever.

If you stick to this style, eventually you will start to anticipate how to design the code you’re writing so you can easily test it. Then you may decide it’s easier to just go ahead and write the test first. Welcome to the club.

TDD, Micro Steps, and Backing Up

TDD is a way to think through your requirements incrementally by writing a test for one small piece, writing just enough code to get that test to pass, refactoring, and then moving on to the next small piece.

As you’re growing your code in this way, you should be zoomed way in. Taking tiny, micro steps. The time from failing to passing test should be measured in seconds, not minutes. It’s a feedback loop and it should be tight. You don’t want to waste time poking around in the dark.

This puts excellent design pressure on your system. The only way to keep that tight feedback loop moving is by writing code that is loosely coupled with a single responsibility. Otherwise it will be too hard to test.

This is why TDD is more about design than it is about verifying working code. But that doesn’t mean you can ignore design completely and let your tests lead you blindly somewhere without thinking. You will need to zoom back out every once in a while. You will need to put your knowledge of good design principles into practice and think critically about your code beyond what’s only easy to test.

I’ve been in many positions where my tests painted me into a corner, where my feedback loop started slowing down as complexity spiked, or where the easiest way to test something would have resulted in an obvious code smell. When that happens, I back up.

The ability to back up is another reason to keep the feedback loop tight. You should always be able to easily jump back any number of steps to working code and try a new path.

Yes, you still have to choose your path when you TDD. It’s called test-driven development, but you’re still the driver, not the tests. They are a tool you use to drive out some desired behavior, and there’s usually going to be multiple ways to write tests to get there. Use your design sense to make the best choice. If you don’t like where you ended up, back up. And keep your feedback loop short so backing up is no big deal.

tdubs: better test doubles for python

A couple things have been bothering me about python’s unittest.mock:

Problem 1: Stubs aren’t Mocks

Here’s a function (that is stupid and dumb because this is an example):

def get_next_page(repo, current_page):
    return repo.get_page(current_page + 1)

If I want to test this with unittest.mock, it would look like this:

def test_it_gets_next_page_from_repo(self):
    repo = Mock()
    next_page = get_next_page(repo, current_page=1)
    self.assertEqual(next_page, repo.get_page.return_value)
    repo.get_page.assert_called_with(2)

What bothers me is that I’m forced to use a mock when what I really want is a stub. What’s the difference? A stub is a test double that provides canned responses to calls. A mock is a test double that can verify what calls are made.

Look at the implementation of get_next_page. To test this, all I really need is a canned response to repo.get_page(2). But with unittest.mock, I can only give a canned response for any call to repo.get_page. That’s why I need the last line of my test to verify that I called the method with a 2. It’s that last line that bothers me.

If I’m writing tests that explicitly assert that specific calls were made, I prefer those to be verifying commands, not queries. For example, imagine I have some code that looks like this:

# ...
article.publish()
# ...

with tests like this:

def test_it_publishes_the_article(self):
    article.publish.assert_called_once_with()

Now the assertion in my test feels right. I’m telling the article to publish, so my test verifies that I sent the publish message to the article. My tests are verifying that I sent a command, I triggered some behavior that’s implemented elsewhere. Feels good. But wait…

Problem 2: Public API conflicts

Here’s the other problem. Imagine I had a typo in my test:

def test_it_publishes_the_article(self):
    article.publish.assertt_called_once_with()

Notice the extra “t” in “assert”? I hope so, because this test will pass even if article.publish is never called. That’s because every method called on a unittest.mock.Mock instance returns another Mock instance.

The problem here is that python’s mocks have their own public api, but they are supposed to be stand-ins for other objects that themselves have a public api. This causes conflicts. Have you ever tried to mock an object that has a name attribute? Then you’ve felt this pain (passing name as a Mock kwarg doesn’t stub a name attribute like you think it would; instead it names the mock).

Doesn’t autospec fix this problem?

autospec is an annoying bandage over this problem. It doesn’t fit into my normal TDD flow where I use the tests to tease out a collaborator’s public API before actually writing it.

Solution: tdubs

I decided to write my own test double library to fix these problems, and I am very happy with the result. I called it tdubs. See the README for installation and usage instructions. In this post I’m only going to explain the parts that solve the problems I described above.

In tdubs, stubs and mocks are explicit. If you want to give canned responses to queries, use a Stub. If you want to verify commands, use a Mock. (you want to do both? rethink your design [though it’s technically possible with a Mock])

A Stub can provide responses that are specific to the arguments passed in. This lets you create true stubs. In the example above, using tdubs I could have stubbed my repo like this:

repo = Stub('repo')
calling(repo.get_page).passing(2).returns(next_page)

and I would not need to verify my call to repo.get_page, because I would only get my expected next page object if I pass 2 to the method.
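Put together, the whole test from earlier reads like this with tdubs (a sketch using only the API shown above):

def test_it_gets_next_page_from_repo(self):
    next_page = object()
    repo = Stub('repo')
    calling(repo.get_page).passing(2).returns(next_page)
    # no call verification needed: the canned response only comes back
    # when get_page is called with 2
    self.assertEqual(get_next_page(repo, current_page=1), next_page)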

With tdubs, there’s no chance of false positives due to typos or API conflicts, because tdubs doubles have no public attributes. For example, you don’t verify commands directly on a tdubs Mock, you use a verification object:

verify(my_mock).called_with(123)

After hammering out the initial implementation to solve these specific problems, I ended up really liking the way my tests read and the type of TDD flow that tdubs enabled. I’ve been using it for my own projects since then and I think it’s ready to be used by others. So if you’re interested, visit the readme and try it out. I’d love some feedback.

TDD Rules!

These are some rules I like to follow when doing TDD. You can follow them too! Rules are fun!

  • Write your tests first. If you can’t, spike a solution, throw it away, and try again.
  • Test units in isolation. Use mocks to verify interaction between units. If this makes your tests brittle, refactor.
  • You don’t need to isolate your unit from simple value objects. So use more value objects.
  • If you feel like you can’t keep everything in your head, ask yourself if you really need to keep it all in your head. If you do, you need to refactor.
  • Each branch of logic should be covered by a unit test. If that makes you feel like you have too many tests, your logic is too complicated. Refactor.
  • If you ever feel the need to only run part of the unit test suite, it’s too slow and refactoring is needed.
  • Unit tests should be written as if they are a set of requirements – or “specs” – for the unit being tested.
  • Each test should test one and only one concept. That doesn’t always mean only one assertion.
  • When fixing bugs, make sure there is a test that fails without your fix, and passes with it.
  • Never push commits that contain failing tests. This makes it harder to revert, cherry-pick, and debug problems (e.g. with git bisect).