What is the effect of having an opened transaction in MSSQL for too long?

Having an open transaction by itself will have almost no consequence. A simple

BEGIN TRANSACTION
-- wait for a while, doing nothing
-- wait a bit longer
COMMIT

will, at worst, hold a few bytes of status values. No big deal.

Most programs will do actual work within the transaction, and this is another matter. The point of a transaction is to let you be sure that several facts within the database are true simultaneously, despite other users writing to the same database concurrently.

Take the canonical example of transferring money between bank accounts. The system must ensure that the source account exists and has sufficient funds, that the destination account exists, and that both the debit and the credit happen or neither happens. It must guarantee this while other transactions are running, perhaps even against these same two accounts. The system ensures this by taking locks on the tables concerned. Which locks are taken, and how much of other people's work you see, is controlled by the transaction isolation level.
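
For illustration, a minimal T-SQL sketch of such a transfer, assuming a hypothetical Accounts(AccountId, Balance) table; error handling is cut to the bare minimum:

DECLARE @Source int = 1, @Dest int = 2, @Amount money = 100;

BEGIN TRANSACTION;

-- Debit the source only if it exists and has sufficient funds
UPDATE Accounts
SET Balance = Balance - @Amount
WHERE AccountId = @Source
  AND Balance >= @Amount;

IF @@ROWCOUNT <> 1
BEGIN
    ROLLBACK TRANSACTION;  -- no such account, or not enough money: nothing happens
    RETURN;
END;

-- Credit the destination
UPDATE Accounts
SET Balance = Balance + @Amount
WHERE AccountId = @Dest;

IF @@ROWCOUNT <> 1
BEGIN
    ROLLBACK TRANSACTION;  -- no such destination: the debit is undone too
    RETURN;
END;

COMMIT TRANSACTION;  -- both changes become visible together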

So if you do a lot of work there is a good chance other transactions will be queued waiting for the objects on which you hold locks. This reduces the system's overall throughput. Eventually they will hit their timeout limits and fail, which is a problem for overall system behaviour. If you use an optimistic isolation level such as SNAPSHOT, your transaction may instead fail at commit time because of others' conflicting work.
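
A session can also cap how long it is prepared to wait, so blocked statements fail fast rather than queueing indefinitely; a sketch, reusing the hypothetical Accounts table from above:

SET LOCK_TIMEOUT 5000;  -- wait at most 5 seconds for any lock

-- If another transaction holds a conflicting lock for longer than that,
-- this statement fails with error 1222 instead of waiting indefinitely.
UPDATE Accounts SET Balance = Balance WHERE AccountId = 1;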

Holding locks consumes system resources. This is memory which the system can then not use to process other requests, further reducing throughput.
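
You can get a feel for how many locks a transaction is holding with the sys.dm_tran_locks DMV; one possible summary query:

-- Count the locks currently held, grouped by session and lock type
SELECT request_session_id,
       resource_type,
       request_mode,
       request_status,
       COUNT(*) AS locks_held
FROM sys.dm_tran_locks
GROUP BY request_session_id, resource_type, request_mode, request_status
ORDER BY locks_held DESC;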

If a lot of work has been performed the system may choose to perform lock escalation: instead of locking individual rows, the entire table is locked. More concurrent users are then affected, system throughput drops further, and the application impact is greater.
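
If escalation on one hot table proves particularly painful, it can be disabled per table, at the price of more individual locks and more lock memory; a sketch against the hypothetical Accounts table:

-- Default is TABLE; AUTO and DISABLE are the alternatives
ALTER TABLE dbo.Accounts SET (LOCK_ESCALATION = DISABLE);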

Data changes are written to the log file, as are the locks which protect them. These cannot be cleared from the log until the transaction commits. Hence a very long transaction may cause log file bloat, with its associated problems.
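
You can check whether an open transaction is what is pinning the log, for example:

-- ACTIVE_TRANSACTION here means the log cannot truncate past an open transaction
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = DB_NAME();

-- Report the oldest active transaction in the current database
DBCC OPENTRAN;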

If the current work uses tempdb, which is likely for large workloads, the resources there may be tied up until the end of the transaction. In extreme cases this can cause other tasks to fail because there is no longer enough room for them. I have had cases where a poorly coded UPDATE filled tempdb so there was insufficient disk left for a report's SORT and the report failed.
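
One way to see which sessions are consuming tempdb is the sys.dm_db_session_space_usage DMV (run it against tempdb; the counts are 8 KB pages):

SELECT session_id,
       user_objects_alloc_page_count,
       internal_objects_alloc_page_count
FROM sys.dm_db_session_space_usage
ORDER BY user_objects_alloc_page_count + internal_objects_alloc_page_count DESC;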

If you choose to ROLLBACK the transaction, or the system fails and recovers, the time taken for the system to become available again depends on how much work was performed. Simply having a transaction open does not affect the recovery time; what matters is how much work was done. If the transaction was open but idle for an hour, recovery will be almost instantaneous. If it was writing constantly for that hour, the rule of thumb is that recovery will also take about an hour.

As you can see, long transactions can be problematic. For OLTP systems, best practice is to have one database transaction per business transaction. For batch work, process the input in blocks, with frequent commits and restart logic coded in. Typically several thousand records can be processed inside a single DB transaction, but this should be tested for concurrency and resource consumption.
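
A sketch of that batch pattern, assuming a hypothetical dbo.Staging table whose IsProcessed flag doubles as the restart marker; the batch size is something to tune and test:

DECLARE @BatchSize int = 5000;

WHILE 1 = 1
BEGIN
    BEGIN TRANSACTION;

    -- TOP bounds the rows, locks and log held by any one transaction
    UPDATE TOP (@BatchSize) dbo.Staging
    SET IsProcessed = 1
    WHERE IsProcessed = 0;

    IF @@ROWCOUNT = 0
    BEGIN
        COMMIT TRANSACTION;
        BREAK;  -- nothing left to process
    END;

    COMMIT TRANSACTION;  -- frequent commits release locks and let the log truncate
END;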

Do not be tempted to go to the other extreme and avoid transactions and locks entirely. If you need to maintain consistency within your data (and why else would you be using a database?) isolation levels and transactions serve a very important purpose. Learn about your options and decide what balance of concurrency and correctness you are prepared to live with for each part of your application.


Your largest consequence will be blocking of the objects used in the transaction. If your users are inserting data, for example, and that long-running transaction includes SELECT statements against commonly used tables, your users' statements may not be able to acquire the locks they need to complete their updates or inserts.
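
You can see who is blocked and by whom with a query along these lines:

SELECT session_id,
       blocking_session_id,
       wait_type,
       wait_time,
       command
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;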

A secondary consequence is log file activity. If, say, you were updating a large dataset, the portion of the log that the transaction is using is held active for the duration of that transaction and cannot be reused until the transaction is committed or rolled back. On a heavily active OLTP system this can cause your log file to grow quickly, filling up your storage device.
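
A quick way to keep an eye on that, for example:

DBCC SQLPERF (LOGSPACE);  -- current log size and percent used, per database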


An incomplete transaction may hold a large number of locks and cause blocking

When a transaction is not completed, either because a query times out or because the batch is cancelled in the middle of a transaction without a COMMIT or ROLLBACK statement being issued, the transaction is left open and all the locks acquired during it continue to be held. Subsequent transactions executed on the same connection are treated as nested transactions, so the locks acquired by those completed transactions are not released either. This repeats with every transaction executed from the same connection until a ROLLBACK is issued. As a result, a large number of locks are held, users are blocked, and transactions are lost, which results in data that is different from what you expect.
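
To spot this situation you can check for transactions that are still open; a couple of quick checks:

-- On the suspect connection itself: anything still open here?
SELECT @@TRANCOUNT AS open_transactions;

-- Across the server: which sessions have open transactions, and since when?
SELECT s.session_id,
       a.transaction_id,
       a.transaction_begin_time
FROM sys.dm_tran_session_transactions AS st
JOIN sys.dm_exec_sessions AS s
    ON s.session_id = st.session_id
JOIN sys.dm_tran_active_transactions AS a
    ON a.transaction_id = st.transaction_id;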

source