Unit testing of stored procedures

We've been doing this for almost five years, and we think that explicitly testing modifications is definitely doable, but it is quite slow. Besides, we cannot easily run such tests concurrently from several connections unless we use separate databases. Instead, we test modifications implicitly: we use them to build up at least some of the test data, and then verify that our selects return the expected results.
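For instance, a single test can call an insert procedure purely to build its data and then assert on what a select returns. A minimal T-SQL sketch of that shape (the names dbo.AddOrder and dbo.GetOrderTotal are made up for illustration):

    -- Sketch only: dbo.AddOrder and dbo.GetOrderTotal are hypothetical names.
    BEGIN TRANSACTION;

    -- Use the modification procedure to build up the test data (testing it implicitly).
    EXEC dbo.AddOrder @CustomerId = 42, @Amount = 100.00;
    EXEC dbo.AddOrder @CustomerId = 42, @Amount = 250.00;

    -- Verify that the select returns the expected result.
    DECLARE @Total MONEY;
    SELECT @Total = dbo.GetOrderTotal(42);

    IF @Total = 350.00
        PRINT 'PASS: GetOrderTotal';
    ELSE
        PRINT 'FAIL: GetOrderTotal, expected 350.00, got '
            + ISNULL(CAST(@Total AS VARCHAR(20)), 'NULL');

    -- Roll back so the test leaves no trace and can be rerun.
    ROLLBACK TRANSACTION;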

I've written an article entitled Close Those Loopholes: Lessons learned from Unit Testing T-SQL, as well as some blog posts on the subject.

Regarding your question "Is there a threshold in complexity where it gets completely hopeless?": complex modules need tests much more than simple ones.

To simplify maintenance, we generate the expected results and store them in separate files; that makes a huge difference.
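One rough sketch of the idea: load the expected rows from a file into a staging table and diff them against the actual output with EXCEPT. The file path, staging table, and view names below are invented for the example, not the harness described in the article:

    -- Sketch: #ExpectedOrderTotals is a hypothetical staging table loaded from a file.
    CREATE TABLE #ExpectedOrderTotals (CustomerId INT, Total MONEY);

    BULK INSERT #ExpectedOrderTotals FROM 'C:\tests\expected\order_totals.txt'
        WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n');

    -- Rows in actual but not expected, plus rows in expected but not actual.
    SELECT 'unexpected' AS diff, * FROM
        (SELECT CustomerId, Total FROM dbo.OrderTotals
         EXCEPT
         SELECT CustomerId, Total FROM #ExpectedOrderTotals) AS extra
    UNION ALL
    SELECT 'missing' AS diff, * FROM
        (SELECT CustomerId, Total FROM #ExpectedOrderTotals
         EXCEPT
         SELECT CustomerId, Total FROM dbo.OrderTotals) AS missing;
    -- An empty result set means the test passes.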


Yes, you should test the whole chain of events as a unit. So, in your example of a procedure that inserts into a table and causes several triggers to fire, you should write unit tests that evaluate the procedure for various inputs. Each unit test should pass or fail depending on whether it returns the correct values, changes the state of the tables correctly, creates the correct email, and even sends the correct network packets if it is designed to do such a thing. In short, every effect the unit has should be verified.
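As a hedged sketch, a test for a procedure whose insert also fires an audit trigger and queues an email might check each effect separately; every object name here is hypothetical:

    -- Sketch: dbo.AddOrder, dbo.Orders, dbo.OrdersAudit, dbo.PendingEmails are made-up names.
    BEGIN TRANSACTION;

    EXEC dbo.AddOrder @CustomerId = 42, @Amount = 100.00;

    -- Effect 1: the row itself was inserted.
    IF NOT EXISTS (SELECT 1 FROM dbo.Orders WHERE CustomerId = 42 AND Amount = 100.00)
        PRINT 'FAIL: order row missing';

    -- Effect 2: the audit trigger fired and recorded the change.
    IF NOT EXISTS (SELECT 1 FROM dbo.OrdersAudit WHERE CustomerId = 42 AND Action = 'INSERT')
        PRINT 'FAIL: audit row missing';

    -- Effect 3: the email the trigger is supposed to queue exists.
    IF NOT EXISTS (SELECT 1 FROM dbo.PendingEmails WHERE CustomerId = 42)
        PRINT 'FAIL: email not queued';

    ROLLBACK TRANSACTION;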

You are correct that designing unit tests takes some work, but most of that work has to be done anyway to test the unit manually. By capturing it in a unit test you save that effort, so that when a change is made in the future the testing can be just as thorough and significantly easier.

Changing data does make testing more difficult, but it doesn’t make testing less important; in fact it increases the value of unit testing, because most of the difficulties only have to be thought through once rather than every time a change is made to the unit. Saved datasets, inserts/updates/deletes that are part of the setup/teardown, and narrowly scoped operations can all be used to make this easier. Since the question isn’t database specific, the details will vary.
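One minimal sketch of the setup/exercise/assert/teardown shape, with made-up table and procedure names:

    -- Sketch: setup inserts a known dataset, teardown removes it; all names are hypothetical.
    -- Setup
    INSERT INTO dbo.Customers (CustomerId, Name) VALUES (9001, 'Test Customer');
    INSERT INTO dbo.Orders (OrderId, CustomerId, Amount) VALUES (8001, 9001, 50.00);

    -- Exercise the unit under test
    EXEC dbo.ApplyDiscount @CustomerId = 9001, @Percent = 10;

    -- Assert
    IF NOT EXISTS (SELECT 1 FROM dbo.Orders WHERE OrderId = 8001 AND Amount = 45.00)
        PRINT 'FAIL: discount not applied';

    -- Teardown
    DELETE FROM dbo.Orders    WHERE OrderId = 8001;
    DELETE FROM dbo.Customers WHERE CustomerId = 9001;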

There is no complexity threshold on the high or low end that should stop you from testing or unit testing. Consider these questions:

  1. Do you always write bug free code?
  2. Are small units always bug free?
  3. Is it OK for a large unit to have a bug?
  4. How many bugs does it take to cause a disaster?

Suppose you start a new job and are tasked with optimizing a small function used in many places. The entire application was written and maintained by an employee whom no one even remembers. The units have documentation describing normal expected behavior, but little else. Which of these would you rather find?

  • No unit tests anywhere in the application. After making the change you can do some manual testing against the unit itself to make sure it still returns the expected values from the documentation. You can then roll it out to production, cross your fingers and hope that it works (after all, you always write bug-free code and an optimization in one unit could never affect another), or spend a massive amount of time learning how the entire application works so you can manually test every unit directly or indirectly affected.
  • Unit tests throughout the application that run automatically daily or on demand. They check not just normal input values and their expected responses, but also abnormal values and the exceptions they are expected to raise. You make your change and run the unit test suite for the application, immediately seeing that three other units no longer return the expected results. Two of the differences are benign, so you tweak those unit tests to account for them. The third requires another slight tweak and a small new unit test. After making the changes the entire suite of tests succeeds, and you roll out the change with confidence.

For PostgreSQL, check out pgTAP:

pgTAP is a suite of database functions that make it easy to write TAP-emitting unit tests in psql scripts or xUnit-style test functions.
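A small pgTAP test script might look like the sketch below, assuming the pgTAP extension is installed. The orders table and order_totals() function are invented for the example; plan(), has_table(), results_eq(), and finish() are real pgTAP functions:

    -- Sketch of a pgTAP test file run via psql or pg_prove; order_totals() is hypothetical.
    BEGIN;
    SELECT plan(2);

    -- The table the function depends on exists.
    SELECT has_table('orders');

    -- The function under test returns the expected rows.
    SELECT results_eq(
        'SELECT total FROM order_totals(42)',
        ARRAY[350.00::numeric],
        'order_totals(42) returns the expected total'
    );

    SELECT * FROM finish();
    ROLLBACK;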