Is there a limit to how many backups can be appended to a single file?

TL;DR: It is possible to put 32,000+ backups in a single file. Whether this is a good thing, or whether you can recover from a backup in this file, is not addressed here.


I started taking tlog backups last night on an existing database (231682) with no activity. I used a WHILE loop and a counter so I could keep a running total.

DECLARE @counter int
SET @counter = 1
WHILE 1 = 1
BEGIN
    BACKUP LOG [231682] TO
    DISK = N'G:\SQLBackups\Test_Tlog.trn' WITH NOFORMAT, NOINIT,
    NAME = N'231682-Log Database Backup', SKIP, NOREWIND, NOUNLOAD

    SET @counter = @counter + 1
    PRINT @counter
END
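One wrinkle with the loop above: PRINT output is buffered by the client, so the running count can lag well behind the actual backups. A hedged variant (same backup statement, same assumed file path) that uses RAISERROR ... WITH NOWAIT to flush each message immediately:

-- Variant of the counter output: severity 0 RAISERROR with NOWAIT sends
-- the message to the client right away instead of waiting for the buffer.
DECLARE @counter int = 1;
WHILE 1 = 1
BEGIN
    BACKUP LOG [231682] TO
    DISK = N'G:\SQLBackups\Test_Tlog.trn' WITH NOFORMAT, NOINIT,
    NAME = N'231682-Log Database Backup', SKIP, NOREWIND, NOUNLOAD;

    SET @counter = @counter + 1;
    RAISERROR(N'Count equals: %d', 0, 1, @counter) WITH NOWAIT;
END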
  • 16 hours later the count is at 5,967 with a file size of 356MB. This might take a while. At the start it was completing 24 log backups per second.
  • 1 day 16 hours: count is 8,401 with a file size of 700MB. sp_whoisactive shows wait info (2029ms)BACKUPTHREAD.
  • 4 days 18 hours: count is 12,834 with a file size of 1.6GB. sp_whoisactive shows wait info (52113ms)BACKUPTHREAD. Time between backups has slowed to about 70 seconds per log backup.
  • 5 days and some hours: an external event (a server reboot) caused the tlog backups to halt. Count of completed backups is 13,717, file size is 1.8GB, and time between backups is about 80 seconds. Validated with RESTORE HEADERONLY FROM DISK='G:\SQLBackups\Test_Tlog.trn'. Did not attempt a database restore, as it would be beyond painful. Set the counter to resume (SET @counter = 13717) and restarted appending backups to the same file with the same code. Backups resumed and are taking about 80 seconds.
  • 1 week: second restart due to external issues. Count is 15,186, file size is 2.3GB, and each backup is taking about 90 seconds.
  • Week and a half: count is 17,919, file size is 3.2GB, and each backup is taking about 2 minutes 25 seconds.
  • 2 weeks: count is 19,645, file size is 3.8GB, and each backup is taking about 2 minutes 45 seconds.
  • 3 weeks: count is 22,919, file size is 5.2GB, and each backup is taking about 3 minutes 30 seconds (17 per hour). As I am running the endless t-log backups in a job now, I have added RAISERROR(N'Count equals :%d', 16, 1, @counter) WITH LOG; to the code so the running count displays in the SQL error log. Thank you @Erik Darling.
  • 4 weeks: count is 25,587, file size is 6.6GB, and each backup is taking about 3 minutes 45 seconds (16 per hour).
  • 5 weeks: count is 28,242, file size is 8GB, and each backup is taking about 4 minutes 35 seconds (13.5 per hour).
  • 6 weeks: count is 30,037, file size is 9.1GB, and each backup is taking about 5 minutes (12 per hour).
  • ~7 weeks: Running out of space on the dedicated backup disk. Backups start failing when there is insufficient space, but the job keeps running and retrying. Stopped everything else and made a bit more space by deleting some files. The count in the SQL error log is wrong, as it counts both failed and successful attempts.
    • Stop job
    • File size is 10.4GB, and each backup is taking about 5 minutes (12 per hour); the time is the same on success or failure.
    • Check the backup with RESTORE HEADERONLY FROM DISK='G:\SQLBackups\Test_Tlog.trn'. Count is 32,021.
    • The header implies all the successful backups are fine.
    • Most backups show a value of 75,776 for BackupSize and a value of ~4,000 for CompressedBackupSize; the compressed size varies on each backup.
    • I did not attempt a restore with the t-log file. I suspect there would be issues, and I would want to try it before making everything as ugly as it is now.
    • Delete the t-log file, restart normal backups.
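RESTORE HEADERONLY is one way to count what landed in the file; another, assuming the msdb backup history has not been pruned, is to count the history rows tied to the single physical device. A sketch (the path matches the test file used above):

-- Hedged sketch: count the log backups recorded against the single backup
-- file via msdb history. Assumes history retention covers the whole test.
SELECT COUNT(*)                    AS log_backup_count,
       MIN(bs.backup_start_date)   AS first_backup,
       MAX(bs.backup_finish_date)  AS last_backup
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf
    ON bs.media_set_id = bmf.media_set_id
WHERE bmf.physical_device_name = N'G:\SQLBackups\Test_Tlog.trn'
  AND bs.type = 'L';  -- 'L' = transaction log backup

Unlike RESTORE HEADERONLY, this does not have to read the (by now very large) backup file, so it stays fast regardless of how many backups have been appended.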

I am compressing backups by default on this instance.

Week 3 note: File size and backup time are growing out of proportion to the number of backups. Looking at the t-log headers, the backup in position 2 has a size of 75766 bytes and a start-to-finish time of one second or less. The backup in position 22919 also has a size of 75766 bytes and a start-to-finish time of one second or less. The overhead of appending the backups to the same file seems to be causing the slowdown. The abnormal growth is probably related to weekly maintenance tasks I have running on the instance.

Off-site backups: it looks like my off-site backup solution (IBM Spectrum) is not backing up the .trn file. I suspect this is because the file is constantly being modified.


Edit, some time later: I was considering another experiment to test recovery at around 30,000 backups. To avoid the issues of trying to restore multiple t-logs, I looked at using differential backups. I created an empty database, took a full backup, and then took 10 differential backups. Then I took 10 t-log backups, and using RESTORE HEADERONLY FROM DISK I compared the sizes: the differential backups are significantly larger than the t-logs, and I don't have enough space to perform a good test.

Differential backups 2-10 (first is always a bit bigger)

  • BackupSize = 1126400
  • CompressedBackupSize = 62271

T-log backups 2-10 (first is always a bit bigger)

  • BackupSize = 75776
  • CompressedBackupSize = 3800 (average, it varies)

Differential backups are about 16 times larger; best case, I could only get about 2,000 of them on the disk, so I am not doing further testing at this time.
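The size comparison above can also be pulled from msdb history rather than reading headers from disk. A sketch, assuming the hypothetical test database is named DiffTest (substitute your own name):

-- Hedged sketch: average differential ('I') vs log ('L') backup sizes
-- from msdb history for the test database. Exclude the first backup of
-- each type if you want to mirror the "2-10" comparison in the text.
SELECT bs.type,
       COUNT(*)                        AS backups,
       AVG(bs.backup_size)             AS avg_backup_size,
       AVG(bs.compressed_backup_size)  AS avg_compressed_size
FROM msdb.dbo.backupset AS bs
WHERE bs.database_name = N'DiffTest'
  AND bs.type IN ('I', 'L')  -- 'I' = differential, 'L' = log
GROUP BY bs.type;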


There should be no limit other than the maximum possible file size. The backup file is written in Microsoft Tape Format (MTF), and new headers are simply appended to the file.

(Image: MTF Volume Description block)


Is there a limit to how many backups can be appended to a single file?

3285.

Actually, that's just as far as I got. The backup file got to 10GB, and each backup was taking 10 seconds, so I wasn't willing to wait longer.

Using:

use master
go
create database bt
go
backup database bt to disk='c:\temp\bt.bak' with noinit
go 10000

restore headeronly from disk='c:\temp\bt.bak' 

restore database bt from disk='c:\temp\bt.bak' with file = 3285, replace
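The FILE = 3285 value above came from counting iterations; if you lose track, the highest position appended to the media set can also be read from msdb history (assuming it still covers this media set). A sketch:

-- Hedged sketch: find the highest backup position appended to the file,
-- so RESTORE ... WITH FILE = can target the most recent backup.
SELECT MAX(bs.position) AS last_position
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf
    ON bs.media_set_id = bmf.media_set_id
WHERE bmf.physical_device_name = N'c:\temp\bt.bak';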