Is it possible to change PyTest's assert statement behaviour in Python

You are using pytest, which gives you ample options to interact with failing tests. It gives you command line options and several hooks to make this possible. I'll explain how to use each, and where you could make customisations to fit your specific debugging needs.

I'll also go into more exotic options that would allow you to skip specific assertions entirely, if you really feel you must.

Handle exceptions, not assert

Note that a failing test doesn’t normally stop pytest; it only exits early if you explicitly tell it to do so after a certain number of failures. Also, tests fail because an exception is raised; assert raises AssertionError, but that’s not the only exception that’ll cause a test to fail! You want to control how exceptions are handled, not alter assert.

However, a failing assert will end the individual test. That's because once an exception is raised outside of a try...except block, Python unwinds the current function frame, and there is no going back on that.
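As a small stand-alone illustration of that one-way unwinding (the function and variable names here are just for demonstration): once the assert fails, control leaves the function for good, and only an enclosing try...except can observe the exception.

```python
def check():
    value = 42
    assert value == 17, "value mismatch"
    print("never reached")  # execution never resumes after the failed assert

caught = None
try:
    check()
except AssertionError as exc:
    # by the time we get here, check()'s frame has already been unwound
    caught = exc
    print("caught:", exc)
```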

I don't think that that's what you want, judging by your description of your _assertCustom() attempts to re-run the assertion, but I'll discuss your options further down nonetheless.

Post-mortem debugging in pytest with pdb

For the various options to handle failures in a debugger, I'll start with the --pdb command-line switch, which opens the standard debugging prompt when a test fails (output elided for brevity):

$ mkdir demo
$ touch demo/
$ cat << EOF > demo/
> def test_ham():
>     assert 42 == 17
> def test_spam():
>     int("Vikings")
> EOF
$ pytest demo/ --pdb
[ ... ] AssertionError
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> entering PDB >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> /.../demo/
-> assert 42 == 17
(Pdb) q
Exit: Quitting debugger
[ ... ]

With this switch, when a test fails pytest starts a post-mortem debugging session. This is essentially what you wanted: to stop the code at the point of a failed test and open the debugger to take a look at the state of your test. You can interact with the local variables of the test, the globals, and the locals and globals of every frame in the stack.

Here pytest gives you full control over whether or not to exit after this point: if you use the q quit command, pytest exits the run too; using c for continue returns control to pytest and the next test is executed.

Using an alternative debugger

You are not bound to the pdb debugger for this; you can set a different debugger with the --pdbcls switch. Any pdb.Pdb() compatible implementation would work, including the IPython debugger implementation, or most other Python debuggers (the pudb debugger requires that the -s switch is used, or a special plugin). The switch takes a module and class, e.g. to use pudb you could use:

$ pytest -s --pdb --pdbcls=pudb.debugger:Debugger

You could use this feature to write your own wrapper class around Pdb that simply returns immediately if the specific failure is not something you are interested in. pytest uses Pdb() exactly like pdb.post_mortem() does:

p = Pdb()
p.interaction(None, t)

Here, t is a traceback object. When p.interaction(None, t) returns, pytest continues with the next test, unless p.quitting is set to True (at which point pytest then exits).
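To make the t in that snippet concrete, here is a minimal sketch (the function name is illustrative) of how such a traceback object is obtained and what it points at; this is the same kind of object a wrapper's interaction() method would receive:

```python
import sys

def failing_test():
    assert 42 == 17

try:
    failing_test()
except AssertionError:
    t = sys.exc_info()[2]  # the traceback object handed to Pdb.interaction()

# walk to the innermost frame: the exact point of failure
while t.tb_next is not None:
    t = t.tb_next
print(t.tb_frame.f_code.co_name)  # -> failing_test
```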

Here is an example implementation that prints out that we are declining to debug and returns immediately, unless the test raised ValueError, saved as demo/

import pdb, sys

class CustomPdb(pdb.Pdb):
    def interaction(self, frame, traceback):
        if sys.last_type is not None and not issubclass(sys.last_type, ValueError):
            print("Sorry, not interested in this failure")
            return
        return super().interaction(frame, traceback)

When I use this with the above demo, this is output (again, elided for brevity):

$ pytest -s --pdb --pdbcls=demo.custom_pdb:CustomPdb
[ ... ]
    def test_ham():
>       assert 42 == 17
E       assert 42 == 17
[ ... ] AssertionError
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> entering PDB >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Sorry, not interested in this failure
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> traceback >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

    def test_spam():
>       int("Vikings")
E       ValueError: invalid literal for int() with base 10: 'Vikings'
[ ... ] ValueError
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> entering PDB >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> /.../
-> int("Vikings")

The above introspects sys.last_type to determine if the failure is 'interesting'.

However, I can't really recommend this option unless you want to write your own debugger using tkinter or something similar. Note that that would be a big undertaking.

Filtering failures; pick and choose when to open the debugger

The next level up is the pytest debugging and interaction hooks; these are hook points for behaviour customisations, letting you replace or enhance how pytest normally handles things like exceptions or entering the debugger via pdb.set_trace() or breakpoint() (Python 3.7 or newer).

The internal implementation of this hook is responsible for printing the >>> entering PDB >>> banner above as well, so using this hook to prevent the debugger from running means you won't see this output at all. You can have your own hook delegate to the original hook when a test failure is 'interesting', and so filter test failures independently of the debugger you are using! You can access the internal implementation by name; the internal hook plugin for this is named pdbinvoke. To prevent it from running you need to unregister it, but save a reference so we can call it directly as needed.

Here is a sample implementation of such a hook; you can put this in any of the locations plugins are loaded from; I put it in demo/

import pytest

@pytest.hookimpl(trylast=True)
def pytest_configure(config):
    # unregister returns the unregistered plugin
    pdbinvoke = config.pluginmanager.unregister(name="pdbinvoke")
    if pdbinvoke is None:
        # no --pdb switch used, no debugging requested
        return
    # get the terminalreporter too, to write to the console
    tr = config.pluginmanager.getplugin("terminalreporter")
    # create our own plugin
    plugin = ExceptionFilter(pdbinvoke, tr)

    # register our plugin; pytest will then start calling our plugin hooks
    config.pluginmanager.register(plugin, "exception_filter")

class ExceptionFilter:
    def __init__(self, pdbinvoke, terminalreporter):
        # provide the same functionality as pdbinvoke
        self.pytest_internalerror = pdbinvoke.pytest_internalerror
        self.orig_exception_interact = pdbinvoke.pytest_exception_interact
        self.tr = terminalreporter

    def pytest_exception_interact(self, node, call, report):
        if not call.excinfo.errisinstance(ValueError):
            self.tr.write_line("Sorry, not interested!")
            return
        return self.orig_exception_interact(node, call, report)

The above plugin uses the internal TerminalReporter plugin to write out lines to the terminal; this makes the output cleaner when using the default compact test status format, and lets you write things to the terminal even with output capturing enabled.

The example registers the plugin object, with its pytest_exception_interact hook, via another hook, pytest_configure(), making sure it runs late enough (using @pytest.hookimpl(trylast=True)) to be able to un-register the internal pdbinvoke plugin. When the hook is called, the example tests against the call.excinfo object; you can also check the node or the report.

With the above sample code in place in demo/, the test_ham test failure is ignored, only the test_spam test failure, which raises ValueError, results in the debug prompt opening:

$ pytest demo/ --pdb
[ ... ]
demo/ F
Sorry, not interested!

demo/ F
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> traceback >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

    def test_spam():
>       int("Vikings")
E       ValueError: invalid literal for int() with base 10: 'Vikings'

demo/ ValueError
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> entering PDB >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> /.../demo/
-> int("Vikings")

To reiterate, the above approach has the added advantage that you can combine it with any debugger that works with pytest, including pudb or the IPython debugger:

$ pytest demo/ --pdb --pdbcls=IPython.core.debugger:Pdb
[ ... ]
demo/ F
Sorry, not interested!

demo/ F
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> traceback >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

    def test_spam():
>       int("Vikings")
E       ValueError: invalid literal for int() with base 10: 'Vikings'

demo/ ValueError
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> entering PDB >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> /.../demo/
      1 def test_ham():
      2     assert 42 == 17
      3 def test_spam():
----> 4     int("Vikings")


This approach also gives you much more context about what test was being run (via the node argument) and direct access to the exception raised (via the call.excinfo ExceptionInfo instance).

Note that specific pytest debugger plugins (such as pytest-pudb or pytest-pycharm) register their own pytest_exception_interact hooks. A more complete implementation would have to loop over all plugins in the plugin manager to override arbitrary plugins automatically, using config.pluginmanager.list_name_plugin and hasattr() to test each plugin.

Making failures go away altogether

While this gives you full control over failed-test debugging, it still leaves the test marked as failed even if you opted not to open the debugger for it. If you want to make failures go away altogether, you can make use of a different hook: pytest_runtest_call().

When pytest runs tests, it'll run the test via the above hook, which is expected to return None or raise an exception. From this a report is created, optionally a log entry is made, and if the test failed, the aforementioned pytest_exception_interact() hook is called. So all you need to do is change the result that this hook produces; instead of raising an exception it should just not return anything at all.

The best way to do that is to use a hook wrapper. Hook wrappers don't have to do the actual work, but instead are given a chance to alter what happens to the result of a hook. All you have to do is add the line:

outcome = yield

in your hook wrapper implementation and you get access to the hook result, including the test exception via outcome.excinfo. This attribute is set to a tuple of (type, instance, traceback) if an exception was raised in the test. Alternatively, you could call outcome.get_result() and use standard try...except handling.

So how do you make a failed test pass? You have 3 basic options:

  • You could mark the test as an expected failure, by calling pytest.xfail() in the wrapper.
  • You could mark the item as skipped, which pretends that the test was never run in the first place, by calling pytest.skip().
  • You could remove the exception, by using the outcome.force_result() method; set the result to an empty list here (meaning: the registered hook produced nothing but None), and the exception is cleared entirely.

What you use is up to you. Do make sure to check the result for skipped and expected-failure tests first as you don't need to handle those cases as if the test failed. You can access the special exceptions these options raise via pytest.skip.Exception and pytest.xfail.Exception.

Here's an example implementation which marks failed tests that don't raise ValueError, as skipped:

import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_call(item):
    outcome = yield
    try:
        outcome.get_result()
    except (pytest.xfail.Exception, pytest.skip.Exception, pytest.exit.Exception):
        raise  # already xfailed, skipped or explicit exit
    except ValueError:
        raise  # not ignoring
    except Exception:
        # turn everything else into a skip
        pytest.skip("[NOTRUN] ignoring everything but ValueError")

When this is put in place, the output becomes:

$ pytest -r a demo/
============================= test session starts =============================
platform darwin -- Python 3.8.0, pytest-3.10.0, py-1.7.0, pluggy-0.8.0
rootdir: ..., inifile:
collected 2 items

demo/ sF                                                      [100%]

=================================== FAILURES ===================================
__________________________________ test_spam ___________________________________

    def test_spam():
>       int("Vikings")
E       ValueError: invalid literal for int() with base 10: 'Vikings'

demo/ ValueError
=========================== short test summary info ============================
FAIL demo/
SKIP [1] .../demo/ [NOTRUN] ignoring everything but ValueError
===================== 1 failed, 1 skipped in 0.07 seconds ======================

I used the -r a flag to make it clearer that test_ham was skipped now.

If you replace the pytest.skip() call with pytest.xfail("[XFAIL] ignoring everything but ValueError"), the test is marked as an expected failure:

[ ... ]
XFAIL demo/
  reason: [XFAIL] ignoring everything but ValueError
[ ... ]

and using outcome.force_result([]) marks it as passed:

$ pytest -v demo/  # verbose to see individual PASSED entries
[ ... ]
demo/ PASSED                                        [ 50%]

It's up to you which one you feel fits your use case best. For skip() and xfail() I mimicked the standard message format (prefixed with [NOTRUN] or [XFAIL]) but you are free to use any other message format you want.

In all three cases pytest will not open the debugger for tests whose outcome you altered using this method.

Altering individual assert statements

If you want to alter assert tests within a test, then you are setting yourself up for a whole lot more work. Yes, this is technically possible, but only by rewriting the very code that Python is going to execute at compile time.

When you use pytest, this is actually already being done. Pytest rewrites assert statements to give you more context when your asserts fail; see this blog post for a good overview of exactly what is being done, as well as the _pytest/assertion/ source code. Note that that module is over 1k lines long, and requires that you understand how Python's abstract syntax trees work. If you do, you could monkeypatch that module to add your own modifications there, including surrounding the assert with a try...except AssertionError: handler.

However, you can't just disable or ignore asserts selectively, because subsequent statements could easily depend on state (specific object arrangements, variables set, etc.) that a skipped assert was meant to guard against. If an assert tests that foo is not None, and a later statement relies on an attribute of foo to exist, then you'll simply run into an AttributeError there instead. Do stick to re-raising the exception if you need to go this route.
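A minimal sketch of that failure mode, using hypothetical names: with the guarding assert commented out (as if it had been skipped), the very next attribute access blows up instead.

```python
def load_foo():
    # hypothetical helper: simulates the bad state the assert was guarding against
    return None

foo = load_foo()
try:
    # assert foo is not None, "foo failed to load"   # <- the guard we pretend was skipped
    result = foo.bar  # later code that relied on the guard
except AttributeError as exc:
    result = type(exc).__name__
print(result)  # -> AttributeError
```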

I'm not going to go into further detail on rewriting asserts here, as I don't think this is worth pursuing, not given the amount of work involved, and with post-mortem debugging giving you access to the state of the test at the point of assertion failure anyway.

Note that if you do want to do this, you don't need to use eval() (which wouldn't work anyway; assert is a statement, so you'd need to use exec() instead), nor would you have to run the assertion twice (which can lead to issues if the expression used in the assertion altered state). You would instead embed the ast.Assert node inside an ast.Try node, and attach an except handler that uses an empty ast.Raise node to re-raise the exception that was caught.
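As a rough stand-alone sketch of that transformation (this is not pytest's actual rewriter, just a demonstration with the ast module): wrap the parsed ast.Assert in an ast.Try whose handler ends in a bare ast.Raise, so the assertion runs exactly once and the failure still propagates.

```python
import ast

source = "assert 1 == 2, 'boom'"
tree = ast.parse(source)

assert_node = tree.body[0]  # the ast.Assert statement
handler = ast.ExceptHandler(
    type=ast.Name(id="AssertionError", ctx=ast.Load()),
    name=None,
    body=[
        # a hook point: here you could log, open a debugger, etc.
        ast.Expr(ast.Call(
            func=ast.Name(id="print", ctx=ast.Load()),
            args=[ast.Constant("assert failed, re-raising")],
            keywords=[],
        )),
        ast.Raise(exc=None, cause=None),  # bare `raise`: re-raise what was caught
    ],
)
tree.body[0] = ast.Try(body=[assert_node], handlers=[handler],
                       orelse=[], finalbody=[])
ast.fix_missing_locations(tree)

failed_msg = None
try:
    exec(compile(tree, "<demo>", "exec"))
except AssertionError as exc:
    failed_msg = str(exc)
    print("still failed:", exc)
```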

Using the debugger to skip assertion statements

The Python debugger actually lets you skip statements, using the j / jump command. If you know up front that a specific assertion will fail, you can use this to bypass it. You could run your tests with --trace, which opens the debugger at the start of every test, then issue a j <line after assert> to skip it when the debugger is paused just before the assert.

You can even automate this. Using the above techniques you can build a custom debugger plugin that

  • uses the pytest_runtest_call() hook to catch the AssertionError exception
  • extracts the 'offending' line number from the traceback, and perhaps with some source code analysis determines the line numbers before and after the assertion required to execute a successful jump
  • runs the test again, but this time using a Pdb subclass that sets a breakpoint on the line before the assert, automatically executes a jump to the line after it when the breakpoint is hit, and follows that with a c continue.

Or, instead of waiting for an assertion to fail, you could automate setting breakpoints for each assert found in a test (again using source code analysis; you can trivially extract line numbers for ast.Assert nodes in an AST of the test), execute the test using scripted debugger commands, and use the jump command to skip the assertion itself. You'd have to make a tradeoff: run all tests under a debugger (which is slow, as the interpreter has to call a trace function for every statement), or only apply this to failing tests and pay the price of re-running those tests from scratch.
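Extracting those assert line numbers really is trivial; a minimal example with the ast module (the test source here is made up):

```python
import ast

source = """\
def test_ham():
    x = 42
    assert x == 17
    assert x > 0
"""

tree = ast.parse(source)
# collect the line number of every assert statement in the test source
assert_lines = [node.lineno for node in ast.walk(tree)
                if isinstance(node, ast.Assert)]
print(assert_lines)  # -> [3, 4]
```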

Such a plugin would be a lot of work to create, and I'm not going to write an example here, partly because it wouldn't fit in an answer anyway, and partly because I don't think it is worth the time. I'd just open up the debugger and make the jump manually. A failing assert indicates a bug in either the test itself or the code-under-test, so you may as well just focus on debugging the problem.

You can achieve exactly what you want without any code modification at all with pytest --pdb.

With your example:

import pytest
def test_abc():
    a = 9
    assert a == 10, "some error message"

Run with --pdb:

py.test --pdb
collected 1 item
F
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> traceback >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

    def test_abc():
        a = 9
>       assert a == 10, "some error message"
E       AssertionError: some error message
E       assert 9 == 10
[ ... ] AssertionError
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> entering PDB >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> /private/tmp/a/
-> assert a == 10, "some error message"
(Pdb) p a

As soon as a test fails, you can debug it with the built-in Python debugger. When you're done debugging, you can continue with the rest of the tests.

If you're using PyCharm then you can add an Exception Breakpoint to pause execution whenever an assert fails. Select View Breakpoints (CTRL-SHIFT-F8) and add an on-raise exception handler for AssertionError. Note that this may slow down the execution of the tests.

Otherwise, if you don't mind pausing at the end of each failing test (just before it errors) rather than at the point the assertion fails, then you have a few options. Note however that by this point various cleanup code, such as closing files that were opened in the test, might have already been run. Possible options are:

  1. You can tell pytest to drop you into the debugger on errors using the --pdb option.

  2. You can define the following decorator and decorate each relevant test function with it. (Apart from logging a message, you could also start a pdb.post_mortem at this point, or even an interactive code.interact with the locals of the frame where the exception originated, as described in this answer.)

import pdb
from functools import wraps

def pause_on_assert(test_func):
    @wraps(test_func)
    def test_wrapper(*args, **kwargs):
        try:
            test_func(*args, **kwargs)
        except AssertionError:
            pdb.post_mortem()  # or log a message, as described above
            raise  # re-raise exception to make the test fail
    return test_wrapper

@pause_on_assert
def test_abc():
    a = 10
    assert a == 2, "some error message"

  3. If you don't want to manually decorate every test function, you can instead define an autouse fixture that inspects sys.last_value:
import sys

@pytest.fixture(scope="function", autouse=True)
def pause_on_assert():
    yield  # let the test run first, then inspect the outcome at teardown
    if hasattr(sys, 'last_value') and isinstance(sys.last_value, AssertionError):