Assume that you add a column to a table in Microsoft SQL Server 2012 or SQL Server 2014. When the size of the row is close to the maximum allowed size of 8,060 bytes, the add operation may take a long time.
The issue occurs because, when the total row size is close to the maximum allowed row size, SQL Server performs a full scan of the table to make sure that adding the new column does not violate data integrity. Versions earlier than SQL Server 2012 do not perform this check, which makes the data definition language (DDL) operation run faster. However, that behavior causes subsequent operations, such as updating or shrinking rows, to fail if the row size exceeds the allowed size.
- You can turn on trace flag 647 by using the startup parameter. For more information, see Database Engine Service Startup Options.
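As a sketch, the trace flag from this workaround can also be enabled at runtime with the standard `DBCC TRACEON` command (the flag number is taken from this article; `-T` is the usual startup-parameter form):

```sql
-- Enable trace flag 647 globally for the running instance (until restart).
DBCC TRACEON (647, -1);

-- To enable it at every startup instead, add -T647 to the SQL Server
-- startup parameters, as described in "Database Engine Service Startup Options".
```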
- You can run the following query (replace <table name> with your actual table name) to test the table before you run the operation on a production system, and to assess the effect of this issue. If the result is greater than 8,060, your table is subject to this issue. Additionally, if you have dropped columns, you have to alter the table by using the REBUILD option; rebuilding the table makes the query return correct results.
```sql
select 1 + 1 + 2 + 2 +
    (case when sum(case when leaf_offset < 0 then 1 else 0 end) > 0 then 2 else 0 end) +
    ((count(*) + 7) / 8) +
    count(case when leaf_offset < 0 then 1 else null end) * 2 +
    sum(case when max_length = -1 then 24 else max_length end)
from sys.system_internals_partition_columns col
    join sys.partitions par on col.partition_id = par.partition_id
where object_id = object_id('<table name>')
    and index_id in (0, 1)
    and partition_number = 1
```
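As noted above, the estimate is correct only after dropped columns have been physically removed. The rebuild itself uses standard `ALTER TABLE` syntax; the table name below is illustrative:

```sql
-- Reclaim space left by dropped columns so the size estimate above is accurate
-- (dbo.WideTable is an illustrative name).
ALTER TABLE dbo.WideTable REBUILD;
```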
- You can consider adding variable-length columns or sparse columns when your application frequently adds new columns to the table. Adding such a column is a metadata-only operation even if the total size of all columns exceeds 8,060 bytes.
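For example, both kinds of column from this workaround are added with standard `ALTER TABLE ... ADD` syntax (the table and column names below are illustrative):

```sql
-- Adding a nullable sparse column; dbo.WideTable and the column names
-- are illustrative.
ALTER TABLE dbo.WideTable
    ADD NewAttribute int SPARSE NULL;

-- Adding a nullable variable-length column.
ALTER TABLE dbo.WideTable
    ADD Notes varchar(200) NULL;
```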
Microsoft has confirmed that this is a problem in the Microsoft products that are listed in the "Applies to" section.