Truncate a table with 17 billion rows in an AG

Logging Extents

The amount of log data generated (and thus sent over the network to your other AG nodes) depends on how big the rows in your 17-billion-row table are. A TRUNCATE will definitely generate a tiny amount of log compared to a DELETE, but it could still be significant, depending on your infrastructure and expectations.

Consider the dbo.Votes table in the Stack Overflow sample database:

[Screenshot: SSMS showing the columns in the dbo.Votes table and their data types]

Each row is 28 bytes. A page in SQL Server is 8 KB (8,192 bytes), so you can fit around 292 rows on a page. This isn't exactly correct, since there is overhead for both pages and rows, but it's a decent approximation for this example.

That means it takes about 58,219,178 pages to hold all 17 billion rows. When doing a DROP or TRUNCATE, a background task deallocates extents (groups of 8 pages). Each of these deallocations is logged. This means about 7,277,397 log records will be created by truncating this table.
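
If it helps to sanity-check that math, here's a minimal T-SQL sketch of the same back-of-the-envelope calculation (the 292 rows-per-page figure is just the approximation from above):

-- Rough estimate: ~292 rows per 8 KB page, 8 pages per extent
SELECT
    CAST(17000000000 AS bigint) / 292     AS estimated_pages,
    CAST(17000000000 AS bigint) / 292 / 8 AS estimated_extents;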

Testing dbo.Votes

I tried this out on my copy of that sample database, after setting the recovery model to full, and taking full and log backups to initialize the backup chain. The dbo.Votes table has 10,146,802 rows. Based on our previous calculations, this should be around 34,749 pages, or 4,343 extents.

In reality, that table has 47,721 pages allocated to it (because of the overhead mentioned before), which works out to 5,965.125 extents.
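
For reference, here's roughly how to check a table's actual allocation - a sketch using sys.dm_db_partition_stats (the exact counters you sum can shift the numbers slightly):

-- Pages allocated to the table itself (heap or clustered index)
SELECT
    SUM(used_page_count)       AS used_pages,
    SUM(used_page_count) / 8.0 AS approx_extents
FROM sys.dm_db_partition_stats
WHERE object_id = OBJECT_ID(N'dbo.Votes')
  AND index_id IN (0, 1);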

Now I'll TRUNCATE the table:

TRUNCATE TABLE dbo.Votes;

This completes instantly, but I ended up with 17,605 log records. It looks like there are really about 3 log records per extent (two for updating the IAM and GAM pages, and one for updating the PFS page to deallocate the data pages).

Those log records only totaled about 1.28 MB of log file usage. But your real table has roughly 1,600 times as many rows as this one, and your row sizes might be bigger. That alone could mean around 2 GB of log data generated and sent to each replica over the network.
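
If you want to reproduce those measurements yourself, one way (a sketch using the undocumented fn_dblog function - so test systems only) is to count the log records and sum their sizes right after the truncate, in an otherwise idle database:

-- Count and total size of log records since the log last cleared
SELECT
    COUNT(*)                 AS log_records,
    SUM([Log Record Length]) AS log_bytes
FROM fn_dblog(NULL, NULL);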

The amount of data grows further if you have nonclustered indexes on this table, which are logged in the same way.

Row Size Makes a Big Difference

Another case study is the dbo.Comments table. It has 3,907,472 rows, but each row can be up to 1,424 bytes long (the Text column is nvarchar(700), so row sizes vary).

Despite having significantly fewer rows than dbo.Votes, this table has 176,722 pages allocated to it. Truncating dbo.Comments results in 63,792 log records and 4.86 MB of log data.

If your real row size is more in this ballpark, the numbers scale up quickly: 17 billion rows is roughly 4,350 times the row count of dbo.Comments, which could mean more than 20 GB of log data.

What to Do

Maybe your infrastructure and log files can handle several GB of data easily - if you have a 17-billion-row table, it seems like they should! But I thought it was worth mentioning that the amount of traffic is not necessarily insignificant, since the existing answers didn't bring this up.

Test in a non-prod environment if you can. Measure the log file usage before and after, and make sure your prod infrastructure is set up to handle that amount of data. Make certain that the log file has been presized to handle this truncate - having an autogrowth occur in the middle of this operation will slow things down a lot, and cause blocking.
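
A simple way to do that before-and-after measurement (on SQL Server 2012 and later) is to capture sys.dm_db_log_space_usage around the operation:

-- Run in the target database, once before and once after the TRUNCATE
SELECT
    total_log_size_in_bytes / 1048576.0 AS total_log_mb,
    used_log_space_in_bytes / 1048576.0 AS used_log_mb,
    used_log_space_in_percent
FROM sys.dm_db_log_space_usage;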

If you can't test, do your best to estimate what the impact will be. Use a query like this one to get the number of pages in the table. Then divide that by 8 (to get the number of extents) and multiply by 3 to get the approximate number of log records.

In my testing, the average log record size was around 70 bytes, though I don't know if that's typical. You could multiply the approximate number of log records by 70 to estimate the log bytes the truncate will produce.
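
Putting those pieces together, a rough pre-flight estimate might look like this (dbo.YourBigTable is a placeholder, and the 3-records-per-extent and 70-bytes-per-record figures are just the numbers from my testing - yours may differ):

-- No index_id filter, so nonclustered indexes are included in the estimate
SELECT
    SUM(used_page_count)                          AS pages,
    SUM(used_page_count) / 8                      AS extents,
    SUM(used_page_count) / 8 * 3                  AS approx_log_records,
    SUM(used_page_count) / 8 * 3 * 70 / 1048576.0 AS approx_log_mb
FROM sys.dm_db_partition_stats
WHERE object_id = OBJECT_ID(N'dbo.YourBigTable');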

With the AG in the mix, you could also play around with log stream compression. I haven't used the trace flags that control it, so I'm really just mentioning that it's another knob you can tune.


If you find that the TRUNCATE approach is too unpredictable, or too much for your systems, you could always use a normal DELETE in batches. This uses more log in total, but you could spread it out over whatever period of time you like. However, make sure you Take Care When Scripting Batches if you go that route.
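
For what it's worth, the simplest shape of that batching pattern is something like the sketch below (table name, batch size, and delay are placeholders to tune; the post linked above covers the pitfalls):

-- Delete in small batches so each transaction stays short and the log can clear
DECLARE @BatchSize int = 100000;

WHILE 1 = 1
BEGIN
    DELETE TOP (@BatchSize) FROM dbo.YourBigTable;

    IF @@ROWCOUNT = 0 BREAK;

    -- Brief pause so log backups (and the AG) can keep up
    WAITFOR DELAY '00:00:01';
END;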


The TRUNCATE TABLE command removes the rows instantly and does not write the deleted rows to the transaction log file.

Usually TRUNCATE TABLE executes instantly, and compared to a DELETE FROM there is no noticeable network traffic between the replicas in the AG, no noticeably larger log backups as a consequence, and so on. However, there can still be noticeable traffic and noticeably larger log backups here, because 17 billion rows is a lot.

P.S. Consider backing up the database and saving the backup to an archive before doing the truncate, so you can restore the 17-billion-row table later if needed.