SQL Server 2014 compression and maximum row size

The limit that you're running into has nothing to do with the data stored on the page. The calculation is done based on the data types of the columns, which is why you hit the error without any data in the table. Enabling compression actually makes this limit more restrictive, because the worst-case calculation also has to account for compression's own overhead. You can read about the technical details behind the overhead here.
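
To see that the check really is purely type-based, here's a minimal sketch (dbo.RowSizeDemo is just an illustrative name): two fixed-length char(5000) columns already put the worst-case row over 8060 bytes, so the create itself is rejected before a single row exists (Msg 1701):

create table dbo.RowSizeDemo
(
    -- 5000 + 5000 bytes of fixed-length data can never fit on one page,
    -- so this fails at DDL time even though the table would be empty
    c1 char(5000) not null,
    c2 char(5000) not null
);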

You can work around this issue by using SPARSE columns. Inserts can still fail depending on the values you supply, but you can get past the 8060 byte limit at table-definition time. The following code shows that you can create 1023 columns just fine:

-- start from a clean slate (the DROP simply errors if t1 doesn't exist yet)
drop table t1
GO
create table t1(c1 decimal(26, 8) null)
GO

-- add the remaining 1022 columns as SPARSE so the DDL-time row size check passes
declare @i int = 2;
declare @sql varchar(100);
while @i <= 1023
begin
    set @sql = 'alter table t1 add c' + convert(varchar, @i) + ' decimal(26, 8) SPARSE null';
    execute (@sql);
    set @i += 1;
end;
GO

However, all of the restrictions around SPARSE columns (read the linked article) may make this unsuitable for your use case. Specifically, only NULL values (not 0) are optimized to take up very little space; a non-NULL value in a SPARSE column actually takes more space than it would in a regular column. If you try to insert too many 0s into a single row you'll get an error. Here's what I see when I try to insert 1023 0 values:

Msg 511, Level 16, State 1, Line 1 Cannot create a row of size 17402 which is greater than the allowable maximum row size of 8060.
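
That error came from an attempt along these lines (just a sketch of one way to generate the statement), run against the 1023-column SPARSE table built above:

-- build 'insert into t1 values (0, 0, ..., 0)' with 1023 zeros;
-- every non-NULL value in a SPARSE column carries extra storage overhead,
-- so the resulting row blows well past the 8060 byte limit at run time
declare @sql varchar(max) = 'insert into t1 values (0' + replicate(convert(varchar(max), ', 0'), 1022) + ')';
execute (@sql);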

I suppose that if you got really desperate you could create the columns as VARCHAR(27) instead. Variable-length columns can be moved off the page, so you can exceed the 8060 byte limit in the table definition, but inserting certain combinations of values will still fail. SQL Server warns you of this when creating the table:

Warning: The table "t1" has been created, but its maximum row size exceeds the allowed maximum of 8060 bytes. INSERT or UPDATE to this table will fail if the resulting row exceeds the size limit.

Page or row compression may be helpful if you go with the VARCHAR(27) approach. That will minimize the space used by both 0 and NULL. With VARCHAR(27) I'm able to insert 1023 0 values just fine.
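
If you want to try that route, it's the same build loop with the column type swapped out (again just a sketch, reusing the t1 name from above):

-- rebuild t1 with 1023 variable-length columns; the ALTERs produce the
-- warning quoted above rather than an error
drop table t1
GO
create table t1(c1 varchar(27) null)
GO

declare @i int = 2;
declare @sql varchar(100);
while @i <= 1023
begin
    set @sql = 'alter table t1 add c' + convert(varchar, @i) + ' varchar(27) null';
    execute (@sql);
    set @i += 1;
end;
GO

-- a row of 1023 '0' strings is only a few thousand bytes, so this succeeds
declare @sql varchar(max) = 'insert into t1 values (''0''' + replicate(convert(varchar(max), ', ''0'''), 1022) + ')';
execute (@sql);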


Outside of the technical aspects and proposed work-around (using VARCHAR(27) columns) discussed in @Joe's answer, I question the "need to create [a] wide denormalized table" as expressed by the O.P. Unless there is some odd technical requirement that all of these columns must be in a single table, I would suggest / recommend spreading them out across as many "sibling" tables as necessary. By "sibling" tables I mean tables that:

  • have a 1-to-1 relationship with each other,
  • all have the exact same Primary Key,
  • only one has the IDENTITY column (and no FK to the others), and
  • the rest have a Foreign Key (on the PK column) pointing to the PK of the table that has the IDENTITY.

Here you are splitting the logical row across two or more physical tables. But that is essentially what normalization is anyway, and it is exactly what relational databases are designed to handle.

In this scenario you do incur some extra space from duplicating the PK, and some additional query complexity: SELECT queries frequently (though not always, since few queries actually need every column) have to INNER JOIN the tables together, and INSERT / UPDATE operations need an explicit Transaction to keep the siblings in sync (DELETE can be handled via ON DELETE CASCADE set on the FK).

HOWEVER, you get the benefits of having a proper data model with proper, native datatypes, and no trickery that could have unforeseen consequences later on. Even if using VARCHAR(27) allows this to work on a technical level, pragmatically I don't think storing decimals as strings is in your / the project's best interest.

So, if you are only "needing" a single table because you hadn't realized that a single logical entity does not need to be represented physically in a single container, then do not attempt to force all of this into a single table when it will work gracefully across multiple tables.

The example below illustrates the basic concept:

SETUP

-- "Parent" sibling: owns the IDENTITY column and the canonical PK.
CREATE TABLE tempdb.dbo.T1
(
  [ID] INT NOT NULL IDENTITY(11, 2) PRIMARY KEY,
  [Col1] VARCHAR(25),
  [Col2] DATETIME NOT NULL DEFAULT (GETDATE())
);

-- "Child" sibling: same PK, no IDENTITY, FK back to T1 (with cascading delete).
CREATE TABLE tempdb.dbo.T2
(
  [ID] INT NOT NULL PRIMARY KEY
                    FOREIGN KEY REFERENCES tempdb.dbo.T1([ID]) ON DELETE CASCADE,
  [Col3] UNIQUEIDENTIFIER,
  [Col4] BIGINT
);

GO
-- Wraps both INSERTs in a single Transaction so the sibling rows are
-- committed (or rolled back) together.
CREATE PROCEDURE #TestInsert
(
  @Val1 VARCHAR(25),
  @Val4 BIGINT
)
AS
SET NOCOUNT ON;

BEGIN TRY
  BEGIN TRAN;

  DECLARE @InsertedID INT;

  -- insert the "parent" row first to generate the IDENTITY value...
  INSERT INTO tempdb.dbo.T1 ([Col1])
  VALUES (@Val1);

  SET @InsertedID = SCOPE_IDENTITY();

  -- ...then reuse that value as the PK of the "child" row
  INSERT INTO tempdb.dbo.T2 ([ID], [Col3], [Col4])
  VALUES (@InsertedID, NEWID(), @Val4);

  COMMIT TRAN;
END TRY
BEGIN CATCH
  IF (@@TRANCOUNT > 0)
  BEGIN
    ROLLBACK TRAN;
  END;

  THROW;
END CATCH;

SELECT @InsertedID AS [ID];
GO

TEST

EXEC #TestInsert 'aa', 454567678989;

EXEC #TestInsert 'bb', 12312312312234;

SELECT *
FROM   tempdb.dbo.T1
INNER JOIN tempdb.dbo.T2
        ON T2.[ID] = T1.[ID];

Returns:

ID  Col1  Col2                     ID  Col3                                  Col4
11  aa    2017-07-04 10:39:32.660  11  44465676-E8A1-4F38-B5B8-F50C63A947A4  454567678989
13  bb    2017-07-04 10:41:38.180  13  BFE43379-559F-4DAD-880B-B09D7ECA4914  12312312312234