Does an ALTER COLUMN data type change use the tempdb version store?

Assuming we're talking about a plain ALTER TABLE, not an operation done through some wizard that does temp table and rename tricks or anything similar.

A straight column data type change may result in an update of every row in the table; it depends on the types involved. Consider this: the row image in the table is just a bag of bytes until it is somehow interpreted by the engine. The engine follows the table metadata description to make sense of what those bytes are. The metadata will say 'from offset 12 to offset 16 is an int, column number 4; from offset 17 to offset 19 is a short, column number 5' and so on and so forth.

Now, when you do an ALTER that changes a column type, the metadata changes. And here is the $1M question: will the new metadata describe the same bag of bytes correctly? If yes, then the ALTER is a metadata-only change, it is instantaneous, and all the rows in the table remain unchanged. If not, then every row has to be updated. A huge table will lead to a huge update, all in one transaction, which can exhaust your resources (e.g. run out of log space). Shanky is right that this 'update' may take the form of a temp table and an INSERT, but that's an implementation detail.
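
To make this concrete, here is a minimal sketch (the table and column names are made up for illustration) contrasting the two cases on an uncompressed rowstore table:

    -- Hypothetical demo table:
    CREATE TABLE dbo.AlterDemo (c1 varchar(10) NOT NULL, c2 int NOT NULL);
    INSERT INTO dbo.AlterDemo VALUES ('abc', 1);

    -- Metadata-only: the existing bytes are still a valid varchar(20),
    -- so no row is touched and the statement completes instantly.
    ALTER TABLE dbo.AlterDemo ALTER COLUMN c1 varchar(20) NOT NULL;

    -- Size-of-data: a 4-byte int cannot be reinterpreted in place as an
    -- 8-byte bigint, so every row must be rewritten in one transaction.
    ALTER TABLE dbo.AlterDemo ALTER COLUMN c2 bigint NOT NULL;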

I wrote some time ago about SQL Server table columns under the hood, showing how some of these operations occur and how they leave evidence in the actual physical rowset metadata.

Now, you say that you are running out of version store space when this operation runs, even though snapshot isolation is not enabled, right? The version store is used by more than just snapshot isolation. To give a well-known example, the DELETED pseudo-table in triggers is powered by the version store, which means the version store may be used even if you never enabled snapshot isolation. See Managing TempDB in SQL Server: TempDB Basics (Version Store: Why do we need it?), where Sunil lists three non-snapshot uses of the version store: triggers, MARS and online index build.
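
If you want to see what is actually sitting in the version store while the operation runs, here is a rough sketch using the documented DMVs (run it from a second session; note that scanning sys.dm_tran_version_store can itself be expensive on a busy system):

    -- Total size of the version store right now:
    SELECT SUM(record_length_first_part_in_bytes
             + record_length_second_part_in_bytes) / 1048576.0 AS version_store_mb
    FROM sys.dm_tran_version_store;

    -- Transactions currently generating or reading row versions:
    SELECT session_id, transaction_id, elapsed_time_seconds
    FROM sys.dm_tran_active_snapshot_database_transactions
    ORDER BY elapsed_time_seconds DESC;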

SQL 2014 does not support online ALTER COLUMN, and as a DDL operation it should not require row versioning for triggers nor for MARS. So IMHO it should not grow the version store.

My main suspect, in this case, would be a readable secondary (it's a shot in the dark here...). The fact that readable secondaries map all reads to snapshot isolation is well documented. I don't remember the details exactly and I may be wrong, but I think that in order for the secondary database to support snapshot isolation, the primary has to use the version store as well. Think about the fact that the secondary is a physical copy of the primary, so the row size has to match.
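
One quick thing to check, assuming the database is in an availability group, is whether any replica is configured to allow reads in the secondary role:

    -- Readable secondaries show up as READ_ONLY or ALL here:
    SELECT replica_server_name, secondary_role_allow_connections_desc
    FROM sys.availability_replicas;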

Perhaps not an answer, but too long for a comment.


I believe the db engine is creating a new temporary table and filling it. Looking at your execution plan might provide clues. You can see the quote in the article below, but after searching I did not find anything official in the MS documentation.

Also, if you are running this through the SSMS designer, it may behave differently from executing the ALTER TABLE query directly. If you are using the designer, instead of hitting Save you can script out the change to see exactly what it will run.

https://www.mssqltips.com/sqlservertip/1903/best-practices-for-sql-server-database-alter-table-operations/

Most of the ALTER TABLE operations if performed using SSMS causes a drop and recreation of the table
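
If you want to verify what is actually submitted in your case, a sketch along these lines (an Extended Events session; the session_id value is a placeholder for your own) would capture the statements as they run:

    CREATE EVENT SESSION alter_trace ON SERVER
    ADD EVENT sqlserver.sql_statement_completed
        (ACTION (sqlserver.sql_text)
         WHERE (sqlserver.session_id = 53))  -- substitute your @@SPID
    ADD TARGET package0.ring_buffer;

    ALTER EVENT SESSION alter_trace ON SERVER STATE = START;
    -- ...run the ALTER, inspect the ring_buffer target, then stop the session.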

If necessary, I would work it out so that I could create a new table myself and pump the data into it. If you have to leave the table open to writes during the process, it gets more complicated.
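
A rough sketch of that approach, assuming a short outage is acceptable for the final rename (all object names here are made up):

    -- New table with the target data type:
    CREATE TABLE dbo.BigTable_New (id int NOT NULL PRIMARY KEY, c1 bigint NOT NULL);

    -- Copy in batches so each transaction (and its log usage) stays small:
    DECLARE @rows int = 1;
    WHILE @rows > 0
    BEGIN
        INSERT INTO dbo.BigTable_New (id, c1)
        SELECT TOP (50000) s.id, CONVERT(bigint, s.c1)
        FROM dbo.BigTable AS s
        WHERE NOT EXISTS (SELECT 1 FROM dbo.BigTable_New AS n WHERE n.id = s.id);
        SET @rows = @@ROWCOUNT;
    END;

    -- Swap the names during a quiet moment:
    EXEC sp_rename 'dbo.BigTable', 'BigTable_Old';
    EXEC sp_rename 'dbo.BigTable_New', 'BigTable';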

Also, you could temporarily give TempDB an extra 500 GB while the operation is running. A 250 GB table should not need more than that, but I have seen stranger things with TempDB.
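
Pre-growing it is a single ALTER DATABASE per file; the logical file name below is the default primary data file, so check sys.master_files for your actual names:

    -- Pre-size tempdb so the operation doesn't stall on autogrow events:
    ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, SIZE = 500GB);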


I am quoting a paragraph from Appendix A: Using Management Studio to Change Data Types, in the SQL Server 2005 document on the impact of changing collation and data types.

When you change the data type of a column, the following things happen:

  1. Internally, SQL Server creates a temporary table that has the same column properties as the target column.

  2. Next, an INSERT INTO statement is executed to copy the data into the temporary table.

When they traced the change with Profiler, the following query was seen:

IF EXISTS(SELECT * FROM dbo.t1) -- t1 is the table whose column c1 had its data type changed
   EXEC('INSERT INTO dbo.Tmp_t1 (c1)
      SELECT CONVERT(nchar(10), c1) FROM dbo.t1 WITH (HOLDLOCK TABLOCKX)')
  3. Afterwards, the indexes are added back if they existed on the original table, and the temporary table is renamed to the original table name.
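
Putting the three steps together, a condensed sketch of the designer-generated script (reusing the t1/c1 names from the trace above) looks roughly like this:

    BEGIN TRANSACTION;

    CREATE TABLE dbo.Tmp_t1 (c1 nchar(10) NULL);

    IF EXISTS (SELECT * FROM dbo.t1)
        EXEC('INSERT INTO dbo.Tmp_t1 (c1)
              SELECT CONVERT(nchar(10), c1) FROM dbo.t1 WITH (HOLDLOCK TABLOCKX)');

    DROP TABLE dbo.t1;
    EXECUTE sp_rename N'dbo.Tmp_t1', N't1', 'OBJECT';
    -- ...recreate indexes, constraints and triggers here...

    COMMIT;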

Now, considering that you have a very large table, the temporary table was also large; Tempdb was not able to accommodate it and hence became full.