Unit testing in Python 3

The necessity of unit testing in Python

As you may know, Python is a dynamically typed language. Unlike some functional languages like Haskell or F#, which have this beautiful thing called Hindley-Milner type inference, Python has: duck typing.

If it flies like a duck, quacks like a duck, swims like a duck, then it probably is a duck.

In practice, this pretty much means “yeah, we’ll sort this typing-mess at runtime. If the object does not have the quack method we’re trying to call, we’ll just throw an exception”.
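In code, the runtime surprise looks like this (a toy sketch; the class and function names are made up):

```python
class Duck:
    def quack(self):
        return "quack"

class Rock:
    pass  # no quack() method at all

def make_it_quack(thing):
    # duck typing: we just call quack() and hope the object has it
    return thing.quack()

print(make_it_quack(Duck()))  # quack

try:
    make_it_quack(Rock())  # only fails at runtime, when the call happens
except AttributeError as e:
    print(e)  # 'Rock' object has no attribute 'quack'
```

Nothing warns you at "compile time" that `Rock` is not a valid argument; the exception only shows up when that code path actually runs.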

What could possibly go wrong?

Well, first, not knowing what kind of argument you need to pass.

Was it “3” or 3? Because “3”*3 is “333” and 3*3 is 9. That’s not exactly the same result. Now you need to look back at previous code to be sure.

Then you have Refactoring Hell: you changed the parameter order, or their names, and now you have broken calls to your API.

Of course, you don’t know that yet. You’ll discover it the next time you trigger the broken code path. Maybe that’s a month after deployment. Too bad.

This dynamism makes unit testing in Python not a mere addition, but a requirement for your sanity.

A friendly reminder about unit tests

What were those exactly?

Remember this? Yay, the good ol’ V-model.

You’re wondering why I put this here. Nobody uses the V-model anymore, it’s tedious, Agile, yada yada… Well, I agree. I hate the V-model, but it bears a very important reminder:

In the V-model, unit testing validates that your code fits the Low-Level Specification (a 400-page Word document that nobody reads, except a traceability program that tells your manager that 93% of the High-Level requirements are linked to a Low-Level one). But I digress.

In Agile? Well, since you probably don’t have a spec, unit tests WILL be your spec, your guarantee that:

  • even after that refactor, all your calls are still correct.
  • every branch of the function works as expected, not just the main one.
  • the painful merge you applied did not bring back a regression from the dead.

Issues with Unit Tests

It’s not Functional testing

Well, duh. Unit tests are not a silver bullet; they won’t test your software “globally”. That’s what functional testing is for. You could probably automate that a bit, or just hire very patient people who will take care of doing it. Again. And again. And again and again and again…

There are some drawbacks

  • you spend time writing them, sometimes more than you spent coding the feature.
  • you won’t see the need for them until they detect something broke and save your ass.
  • rewriting dozens of tests just because you did a little refactor that touched lots of classes can be a pain.

Now, on to the practice.

First, here’s the code we’re going to test. As you can see, it includes a few things to test:

  • call of an external function (subprocess.call)
  • use of a builtin function (open/read)
  • call of an internal function

import yaml
import subprocess

class MyClass:
    def __init__(self, conf_file):
        self._conf_file = conf_file
        self._config_keys = ["key1", "key2", "key3"]

    def get_conf(self):
        """ parse config file using yaml """
        with open(self._conf_file, "r") as f:
            return yaml.safe_load(f)

    def check_conf(self):
        """ check conf file contains all the config keys """
        config = self.get_conf()
        for key in self._config_keys:
            if key not in config:
                raise Exception("missing key : {}".format(key))
        return True

    def execute_key1(self, config):
        subprocess.call([config["key1"], "--some-arg", config["key2"]])


The skeleton for a test is always the same:

The setUp and tearDown methods will be called at the beginning and end of each test, regardless of what happens in the test (success, failure, exception, …).

Then you have a bunch of test* methods that will be called one after the other. Each of those is a unit test.

Unless you are using a specific runner like nosetests (which I personally don’t find very useful), you will need a line that calls unittest.main(). This will take care of running the tests in the file.

import unittest
import MyClass

class TestMyClass(unittest.TestCase):
    def setUp(self):
        """ Executed before each test """

    def tearDown(self):
        """ Executed after each test """

    def test0000_something(self):
        """ Each test* method is a unit test """

# execute the tests if called directly
if __name__ == "__main__":
    unittest.main()


Quite often, when you try to write a test, you run into the issue of calling code from other objects. At that point, you’re not sure what you are testing anymore: the calling code or the called code?

Unit tests are just that: their only scope is the object you are testing (and often even smaller: a single function). So you need to be sure that this object is correct, not the objects it uses. It is, in fact, easier to simulate the behavior of those other objects case by case. This is extremely easy in Python 3, but the documentation does not reflect that.

Mocks have a few interesting properties :

  • They only live for the duration of your test (as a decorator), or even less (using a with statement)
  • They can replace functions, methods, or complete objects
  • They are inexpensive to create
  • They can be used to verify that some code was called

Basic Mocking

Returning a value

@patch('method_to_replace', return_value=3)

Raising an exception

@patch('method_to_replace', side_effect=Exception("awe"))

Returning different values at each call

@patch('method_to_replace', side_effect=["first call return value", "second call return value"])
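These three patterns can be tried standalone with plain Mock objects, without patching anything real (a minimal sketch):

```python
from unittest.mock import Mock

# return_value: the mock returns the same thing on every call
m = Mock(return_value=3)
print(m())  # 3

# side_effect with an exception: the mock raises when called
m = Mock(side_effect=Exception("awe"))
try:
    m()
except Exception as e:
    print(e)  # awe

# side_effect with a list: one value per call, then StopIteration
m = Mock(side_effect=["first call return value", "second call return value"])
print(m())  # first call return value
print(m())  # second call return value
```

patch() accepts the same return_value and side_effect keyword arguments and forwards them to the Mock it creates.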

Mocking object methods


import unittest
from unittest.mock import patch
from unittest.mock import mock_open

from my_class import MyClass

class TestMyClass(unittest.TestCase):
    def setUp(self):
        self.obj = MyClass("/tmp/test_file.yaml")
        self.working_conf = {"key1": 1, "key2": 2, "key3": 3}

    def test_000_check_conf_works_with_all_keys(self):
        """ Check that our function works with a correct conf

        Notable things here:
        - we use a with statement, so get_conf is only mocked inside it
        """
        with patch.object(MyClass, "get_conf", return_value=self.working_conf):
            self.assertTrue(self.obj.check_conf())

    @patch.object(MyClass, "get_conf")
    def test_001_check_conf_raises_exception_on_missing_key(self, get_conf_method):
        """ Check that for each key, we raise an exception if that key is missing

        Notable things here:
        - we use a decorator this time
        - we use subTest to regroup tests that are similar (new in Python 3.4).
          This ensures that all iterations are run even if the first fails.
          We also get debug information if the subtest fails
        - we re-assign the output of our mock method for each iteration
        """
        source = {"key1": 1, "key2": 2, "key3": 3}
        for key in self.obj._config_keys:
            test_conf = source.copy()
            del test_conf[key]  # drop the key we want reported as missing
            get_conf_method.return_value = test_conf
            with self.subTest(conf=test_conf):
                with self.assertRaises(Exception):
                    self.obj.check_conf()

if __name__ == '__main__':
    unittest.main()

Mocking file I/O with mock_open()

Very often, you’ll find you need to test code that reads/writes from a file on disk. The most instinctive way is to use setUp()/tearDown() to create a file (probably in /tmp, even better if you use tempfile.NamedTemporaryFile), write the data to it, then delete the file in tearDown().
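For reference, that instinctive approach looks roughly like this (a sketch; the class name, test name, and data are made up):

```python
import os
import tempfile
import unittest

class TestWithRealFile(unittest.TestCase):
    def setUp(self):
        # create a real temporary file holding our test data
        self.tmp = tempfile.NamedTemporaryFile(
            mode="w", suffix=".yaml", delete=False)
        self.tmp.write("key1: 1\nkey2: 2\nkey3: 3\n")
        self.tmp.close()

    def tearDown(self):
        # always clean up, even if the test failed
        os.unlink(self.tmp.name)

    def test_file_contains_data(self):
        with open(self.tmp.name) as f:
            self.assertIn("key1", f.read())
```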

Then you realize you need to do 3, maybe 5 tests with different sets of data, and all your motivation goes to shambles.

Fear not: you can just mock open() and read() in one line (one that is incredibly hard to find on the net, unfortunately).

    def test_002_get_conf_returns_decoded_yaml_data(self):
        """ Check that we decode yaml and return it directly

        Notable things here:
        - mock_open is used to patch open() to avoid failing to open a
          real file, but also to return special data upon read()!
        - we replace my_class.open, which means open is replaced only in the
          scope of the "my_class" module (from which we imported MyClass)
        - we provide create=True, because open() is a builtin function (not
          imported). This is not needed anymore as of Python 3.5
        """
        with patch('my_class.open', mock_open(read_data='["qwe"]'), create=True):
            self.assertEqual(self.obj.get_conf(), ["qwe"])

Mocking File as an iterator

Sometimes your code uses a file descriptor as an iterator:

with open("file") as f:
    for line in f:
        ...

mock_open did not support this behavior at the time of writing (newer Python 3 releases added native iteration support), but you can implement it yourself with two lines:

m_open = mock_open(read_data='some data \n new lines \n')
m_open.return_value.__iter__ = lambda self: self
m_open.return_value.__next__ = lambda self: self.readline()
with patch('my_class.open', m_open, create=True):
    # run the code that iterates over the file here
    ...

Mocking file writes

We saw how easy it is to mock reading a file, but you’re probably wondering how to verify that data has been written to a file. To be fair, it works a bit differently:

By calling the fake open() function returned by mock_open(), you retrieve the same mock file descriptor that your object used.

This (mock) file object has the classic file methods, like write()… which are also mocks (yeah, mocks all the way down!). On these, you can call assert_called_with, assert_has_calls, … to be sure the data you want has been written (see the section “Mocking an external function (and checking its values)” for more information).

m_open = mock_open(read_data='some data \n new lines \n')
with patch('my_class.open', m_open, create=True):
    # run the code that writes to the file here
    ...

# verify write has been called with argument
file_desc = m_open()
file_desc.write.assert_called_once_with("data we wrote")
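Standalone, the whole round trip looks like this (patching builtins.open directly; save_data is a made-up function under test):

```python
from unittest.mock import mock_open, patch

def save_data(path):
    # hypothetical code under test: writes fixed data to a file
    with open(path, "w") as f:
        f.write("data we wrote")

m_open = mock_open()
with patch("builtins.open", m_open):
    save_data("/tmp/fake_file")

# m_open() returns the same mock file handle that save_data used
m_open().write.assert_called_once_with("data we wrote")
```

Patching "builtins.open" replaces open() everywhere during the with block, whereas patching "my_class.open" only affects the my_class module.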

Mocking an external function (and checking its values)

Here’s a use case: you don’t want your method to really call a function. Maybe the library is not installed on the system that runs the tests, or maybe you’re executing a subprocess whose executable is not on that system.

In that case, you want to mock the call to that function, but you also want to know if the parameters correspond to what you expect.

Good news : Mocks remember when they are called and how!

    @patch('my_class.subprocess.call')
    def test_003_execute_key1_executes_correct_command(self, sp_call):
        """ Check that subprocess.call is called, with the expected arguments

        Notable things here:
        - we use patch('my_class.subprocess.call') to patch subprocess.call
          only inside the my_class module; objects outside of that module
          will not be affected by our mock. We could patch 'subprocess.call'
          to mock all usages instead
        - we check the mock is called with specific arguments
        """
        self.obj.execute_key1({"key1": "command", "key2": "argument"})
        sp_call.assert_called_with(["command", "--some-arg", "argument"])

Other functions are at your disposal to check how the mock has been called, like assert_has_calls, which takes a list of unittest.mock.call(arguments).
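A quick standalone illustration of assert_has_calls (the mock and its arguments are made up):

```python
from unittest.mock import Mock, call

m = Mock()
m("first", arg=1)
m("second")

# verify both calls happened, in this order
m.assert_has_calls([call("first", arg=1), call("second")])
print(m.call_count)  # 2
```

assert_called_once_with would fail here, since the mock was called twice; assert_has_calls checks a sequence of calls instead.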

Mocking a complete object

Sometimes you want to mock more than a method and emulate a full object. Here’s how you can do it:

The first step is to create a fake class that does whatever you want. You can either do it manually, or use Mock/MagicMock, which work the same way as patch. For example, this one-liner will create an object with:

  • a method method() that returns 5
  • another method method2() that returns a different integer each time it is called
  • an attribute attr with a value of 5

m = Mock(**{'method.return_value': 5, 'attr': 5, 'method2.side_effect': [1, 2, 3]})

Next, you just need to set this mock as the return_value of the class (think of it this way: when you “call” a class, it returns an instance).

with patch("namespace.MyClass", return_value=m):
    # any code instantiating namespace.MyClass here gets our mock instead
    ...
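You can check the fake object behaves as advertised before wiring it into a patch (a minimal sketch; "namespace" above is a placeholder module path):

```python
from unittest.mock import Mock

m = Mock(**{'method.return_value': 5, 'attr': 5,
            'method2.side_effect': [1, 2, 3]})

print(m.method())   # 5
print(m.attr)       # 5
print(m.method2())  # 1
print(m.method2())  # 2
print(m.method2())  # 3, then a fourth call raises StopIteration
```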
