Tiered Storage Partition Copy

In my last tiered storage sliding window post, I shared a sliding window script that also moves an older partition to a different filegroup as part of a tiered storage strategy. This approach allows one to keep actively used partitions on the fastest storage available while keeping older, less frequently used data on less expensive storage. That version of the script moves data efficiently using a staging table and CREATE INDEX…DROP_EXISTING, but the downside is that the data being moved is unavailable for querying from the time the partition is switched out of the main table until it is switched back in. Consequently, the maintenance needs to be scheduled during a window when data in the partition being moved isn’t needed.

This follow-up article shows an alternative copy technique instead. Data in the partition being moved is left in the main table until after the copy completes. Once data are copied to the different filegroup, partition maintenance is performed to switch out the original partition and switch in the copied data, now on the older data filegroup. This allows data to remain online except during the final partition switches, which are fast metadata operations. It is assumed that the data in the partition being moved is read-only during this process.

Copy Data to Slower Storage
Unfortunately, neither ALTER INDEX nor CREATE INDEX…DROP_EXISTING provides the capability to repartition an individual partition of a table/index. ALTER INDEX allows one to target a specific partition during a REORGANIZE or REBUILD but not change the filegroup or partition scheme in the process. CREATE INDEX…DROP_EXISTING allows a filegroup or partition scheme specification but applies to the entire index; individual partitions cannot be specified. Consequently, DML (INSERT…SELECT) instead of DDL must be used in order to keep data online during the copy process.
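
For illustration, below is a sketch of the limitation (the partition number is just an example; 11 corresponds to the August 2015 partition of the function defined later in this article):

ALTER INDEX PK_SlidingWindowTable ON dbo.SlidingWindowTable
	REBUILD PARTITION = 11; --rebuilds a single partition in place
--there is no per-partition ON <filegroup> or ON <partition scheme> clause,
--so the rebuilt partition stays on its current filegroup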

I’ll assume you’ve already slid the window by purging expired data and preparing for new data as detailed in my last article, with the only remaining task being to move an older partition to a slower storage filegroup. The following steps will keep the data being moved online during the potentially long copy process. The offline operations performed in steps 3-6 are metadata operations that will complete quickly once an exclusive table lock (taken to avoid deadlocks) can be granted. Note that all partition functions and schemes are identical and the staging table is empty before these steps are performed.

1) Move the empty staging table partition to the older data filegroup
2) Load data into the moved partition of the staging table
3) Switch the copied data out of the main table into a second staging table
4) Switch the adjacent partition out of the main table into the first staging table
5) Move the main table partition to the older data filegroup
6) Switch the partitions from the first staging table back into the main table

The complete DDL and script are at the end of this article. They differ from my last article (which I’ll assume you’ve already perused) by the addition of a second staging table and this alternative move technique. Below are the individual steps.

Move the empty staging table partition to the older data filegroup
Remember, the staging table is partitioned using the secondary partition function/scheme so that it can be repartitioned independently of the main table. The first task is to move the partition of the unused secondary partition scheme to the slower storage filegroup using MERGE and SPLIT:

1) MERGE the secondary partition function boundary of the moved month to remove it from the secondary partition function and scheme:

ALTER PARTITION FUNCTION PF_12MonthSlidingWindow_Secondary()
    MERGE RANGE (@MonthStartDateToMoveFromCurrentToOld);

2) Set the secondary partition scheme NEXT USED to slower storage filegroup:

ALTER PARTITION SCHEME PS_12MonthSlidingWindow_Secondary
    NEXT USED FG_OlderData;

3) SPLIT the secondary partition function for the moved month to recreate the partition on the slower storage filegroup:

ALTER PARTITION FUNCTION PF_12MonthSlidingWindow_Secondary()
    SPLIT RANGE (@MonthStartDateToMoveFromCurrentToOld);

Load the moved partition of the staging table
Now that the staging table partition is on the FG_OlderData filegroup, copy data for the partition to be moved into a staging table using INSERT…SELECT. Note that the WHERE clause specifies the source partition boundaries:

INSERT INTO dbo.SlidingWindowTable_Staging WITH(TABLOCKX) (PartitioningColumn, OtherKeyColumn, OtherData)
SELECT
	  PartitioningColumn
	, OtherKeyColumn
	, OtherData
FROM dbo.SlidingWindowTable
WHERE
	PartitioningColumn >= @MonthStartDateToMoveFromCurrentToOld
	AND PartitioningColumn < DATEADD(month, 1, @MonthStartDateToMoveFromCurrentToOld);

Switch the copied data out of the main table into a second staging table
The second staging table is used to permanently remove data from the original partition, still on the NewerData filegroup.

TRUNCATE TABLE dbo.SlidingWindowTable_Staging2;
ALTER TABLE dbo.SlidingWindowTable
    SWITCH PARTITION $PARTITION.PF_12MonthSlidingWindow(@MonthStartDateToMoveFromCurrentToOld)
    TO dbo.SlidingWindowTable_Staging2 PARTITION $PARTITION.PF_12MonthSlidingWindow(@MonthStartDateToMoveFromCurrentToOld);

TRUNCATE TABLE dbo.SlidingWindowTable_Staging2;

Switch the adjacent partition out of the main table into the first staging table
Before we move the main table partition (now empty) to the OlderData filegroup, the adjacent partition is also switched out. This is technically not required but is done to follow Microsoft's recommended best practice of merging only empty partitions.

ALTER TABLE dbo.SlidingWindowTable
    SWITCH PARTITION $PARTITION.PF_12MonthSlidingWindow(DATEADD(month, -1, @MonthStartDateToMoveFromCurrentToOld))
    TO dbo.SlidingWindowTable_Staging PARTITION $PARTITION.PF_12MonthSlidingWindow_Secondary(DATEADD(month, -1, @MonthStartDateToMoveFromCurrentToOld));

Move the main table partition to the older data filegroup
The main table partition is moved from the NewerData filegroup to OlderData with MERGE and SPLIT. Keep in mind that the adjacent partition on the OlderData filegroup is empty; data for both of these partitions are in the staging table.

--remove partition from main table
ALTER PARTITION FUNCTION PF_12MonthSlidingWindow()
    MERGE RANGE (@MonthStartDateToMoveFromCurrentToOld);

--create new empty partition on old data filegroup for moved data
ALTER PARTITION SCHEME PS_12MonthSlidingWindow
    NEXT USED FG_OlderData;
ALTER PARTITION FUNCTION PF_12MonthSlidingWindow()
    SPLIT RANGE (@MonthStartDateToMoveFromCurrentToOld);

Switch the partitions from the first staging table back into the main table
Both the main and staging table partition schemes are now identical. The partitions can now be switched back into the main table.

ALTER TABLE dbo.SlidingWindowTable_Staging
    SWITCH PARTITION $PARTITION.PF_12MonthSlidingWindow_Secondary(DATEADD(month, -1, @MonthStartDateToMoveFromCurrentToOld))
    TO dbo.SlidingWindowTable PARTITION $PARTITION.PF_12MonthSlidingWindow(DATEADD(month, -1, @MonthStartDateToMoveFromCurrentToOld));
ALTER TABLE dbo.SlidingWindowTable_Staging
    SWITCH PARTITION $PARTITION.PF_12MonthSlidingWindow_Secondary(@MonthStartDateToMoveFromCurrentToOld)
    TO dbo.SlidingWindowTable PARTITION $PARTITION.PF_12MonthSlidingWindow(@MonthStartDateToMoveFromCurrentToOld);
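
To verify the move, a catalog query along these lines (a sketch; adjust the table name as needed) shows which filegroup each partition of the main table resides on, along with its row count:

SELECT
	  p.partition_number
	, fg.name AS filegroup_name
	, p.rows
FROM sys.partitions AS p
JOIN sys.indexes AS i ON i.object_id = p.object_id AND i.index_id = p.index_id
JOIN sys.partition_schemes AS ps ON ps.data_space_id = i.data_space_id
JOIN sys.destination_data_spaces AS dds ON dds.partition_scheme_id = ps.data_space_id
	AND dds.destination_id = p.partition_number
JOIN sys.filegroups AS fg ON fg.data_space_id = dds.data_space_id
WHERE p.object_id = OBJECT_ID(N'dbo.SlidingWindowTable')
	AND i.index_id IN (0, 1) --heap or clustered index
ORDER BY p.partition_number;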

Scripts
Below are the complete example scripts to implement this copy sliding window technique. My last tiered storage sliding window post also includes the CREATE DATABASE, including filegroups.

--main table partition function (before start of new month)
CREATE PARTITION FUNCTION PF_12MonthSlidingWindow (datetime2(0))
AS RANGE RIGHT FOR VALUES (
	  '2014-11-01T00:00:00' --current month -12 on OlderData filegroup
	, '2014-12-01T00:00:00' --current month -11 on OlderData filegroup
	, '2015-01-01T00:00:00' --current month -10 on OlderData filegroup
	, '2015-02-01T00:00:00' --current month -9 on OlderData filegroup
	, '2015-03-01T00:00:00' --current month -8 on OlderData filegroup
	, '2015-04-01T00:00:00' --current month -7 on OlderData filegroup
	, '2015-05-01T00:00:00' --current month -6 on OlderData filegroup
	, '2015-06-01T00:00:00' --current month -5 on OlderData filegroup
	, '2015-07-01T00:00:00' --current month -4 on OlderData filegroup
	, '2015-08-01T00:00:00' --current month -3 on NewerData filegroup
	, '2015-09-01T00:00:00' --current month -2 on NewerData filegroup
	, '2015-10-01T00:00:00' --current month -1 on NewerData filegroup
	, '2015-11-01T00:00:00' --current month on NewerData filegroup
	, '2015-12-01T00:00:00' --next month on NewerData filegroup
	, '2016-01-01T00:00:00' --future month
)
GO

--main table partition scheme
CREATE PARTITION SCHEME PS_12MonthSlidingWindow
AS PARTITION PF_12MonthSlidingWindow
TO (
	  FG_OlderData --older than 12 months
	, FG_OlderData --current month -12 on OlderData
	, FG_OlderData --current month -11 on OlderData
	, FG_OlderData --current month -10 on OlderData
	, FG_OlderData --current month -9 on OlderData
	, FG_OlderData --current month -8 on OlderData
	, FG_OlderData --current month -7 on OlderData
	, FG_OlderData --current month -6 on OlderData
	, FG_OlderData --current month -5 on OlderData
	, FG_OlderData --current month -4 on OlderData
	, FG_NewerData --current month -3 on NewerData
	, FG_NewerData --current month -2 on NewerData
	, FG_NewerData --current month -1 on NewerData
	, FG_NewerData --current month on NewerData
	, FG_NewerData --next month on NewerData
	, FG_NewerData --future month on NewerData
      );
GO

--secondary partition function identical to main function
CREATE PARTITION FUNCTION PF_12MonthSlidingWindow_Secondary (datetime2(0))
AS RANGE RIGHT FOR VALUES (
	  '2014-11-01T00:00:00' --current month -12 on OlderData filegroup
	, '2014-12-01T00:00:00' --current month -11 on OlderData filegroup
	, '2015-01-01T00:00:00' --current month -10 on OlderData filegroup
	, '2015-02-01T00:00:00' --current month -9 on OlderData filegroup
	, '2015-03-01T00:00:00' --current month -8 on OlderData filegroup
	, '2015-04-01T00:00:00' --current month -7 on OlderData filegroup
	, '2015-05-01T00:00:00' --current month -6 on OlderData filegroup
	, '2015-06-01T00:00:00' --current month -5 on OlderData filegroup
	, '2015-07-01T00:00:00' --current month -4 on OlderData filegroup
	, '2015-08-01T00:00:00' --current month -3 on NewerData filegroup
	, '2015-09-01T00:00:00' --current month -2 on NewerData filegroup
	, '2015-10-01T00:00:00' --current month -1 on NewerData filegroup
	, '2015-11-01T00:00:00' --current month on NewerData filegroup
	, '2015-12-01T00:00:00' --next month on NewerData filegroup
	, '2016-01-01T00:00:00' --future month
)
GO

--secondary partition scheme identical to main scheme
CREATE PARTITION SCHEME PS_12MonthSlidingWindow_Secondary
AS PARTITION PF_12MonthSlidingWindow_Secondary
TO (
	  FG_OlderData --older than 12 months
	, FG_OlderData --current month -12 on OlderData
	, FG_OlderData --current month -11 on OlderData
	, FG_OlderData --current month -10 on OlderData
	, FG_OlderData --current month -9 on OlderData
	, FG_OlderData --current month -8 on OlderData
	, FG_OlderData --current month -7 on OlderData
	, FG_OlderData --current month -6 on OlderData
	, FG_OlderData --current month -5 on OlderData
	, FG_OlderData --current month -4 on OlderData
	, FG_NewerData --current month -3 on NewerData
	, FG_NewerData --current month -2 on NewerData
	, FG_NewerData --current month -1 on NewerData
	, FG_NewerData --current month on NewerData
	, FG_NewerData --next month on NewerData
	, FG_NewerData --future month on NewerData
      );
GO

--main partitioned table
CREATE TABLE dbo.SlidingWindowTable(
	  PartitioningColumn datetime2(0)
	, OtherKeyColumn int NOT NULL
	, OtherData int NULL
	, CONSTRAINT PK_SlidingWindowTable PRIMARY KEY
		CLUSTERED (PartitioningColumn, OtherKeyColumn)
		ON PS_12MonthSlidingWindow(PartitioningColumn)
	) ON PS_12MonthSlidingWindow(PartitioningColumn);
GO

--The staging table initially uses the same scheme as the main table
--except while maintenance is in progress.
CREATE TABLE dbo.SlidingWindowTable_Staging(
	  PartitioningColumn datetime2(0)
	, OtherKeyColumn int NOT NULL
	, OtherData int NULL
	, CONSTRAINT PK_SlidingWindowTable_Staging PRIMARY KEY
		CLUSTERED (PartitioningColumn, OtherKeyColumn)
		ON PS_12MonthSlidingWindow(PartitioningColumn)
	) ON PS_12MonthSlidingWindow(PartitioningColumn);
GO

--This second staging table uses the same scheme as the main table and
--is used to switch data out of the main table after a partition is copied.
CREATE TABLE dbo.SlidingWindowTable_Staging2(
	  PartitioningColumn datetime2(0)
	, OtherKeyColumn int NOT NULL
	, OtherData int NULL
	, CONSTRAINT PK_SlidingWindowTable_Staging2 PRIMARY KEY
		CLUSTERED (PartitioningColumn, OtherKeyColumn)
		ON PS_12MonthSlidingWindow(PartitioningColumn)
	) ON PS_12MonthSlidingWindow(PartitioningColumn);
GO

--load main table with 13M rows of sample data
WITH
	t4 AS (SELECT n FROM (VALUES(0),(0),(0),(0)) t(n))
	,t256 AS (SELECT 0 AS n FROM t4 AS a CROSS JOIN t4 AS b CROSS JOIN t4 AS c CROSS JOIN t4 AS d)
	,t16M AS (SELECT ROW_NUMBER() OVER (ORDER BY (a.n)) - 1 AS num FROM t256 AS a CROSS JOIN t256 AS b CROSS JOIN t256 AS c)
INSERT INTO dbo.SlidingWindowTable WITH (TABLOCKX) (PartitioningColumn, OtherKeyColumn, OtherData) 
SELECT DATEADD(month, num/1000000, '20141101'), num, num
FROM t16M
WHERE num <= 13000000;
GO

CREATE PROCEDURE dbo.PerformMonthlySlidingWindowMaintenance
	  @ExpiredMonthStartDate datetime2(0) --purge data for this month
	, @MonthStartDateToMoveFromCurrentToOld datetime2(0) --move data from this month from current data filegroup to old data filegroup
	, @FutureMonthStartDate datetime2(0) --create new partition for this date on current data filegroup
/*
	This proc performs partition maintenance needed to maintain a monthly sliding
	window with filegroups on tiered storage.  The objective is to keep recent, frequently
	used data on filegroup FG_NewerData and less often accessed data on filegroup FG_OlderData.
*/
AS

SET NOCOUNT ON;
SET XACT_ABORT ON;

DECLARE @result int;

BEGIN TRY

	BEGIN TRAN;

	--acquire exclusive lock on main table to prevent deadlocking during partition maintenance
	SET @result = (SELECT TOP (0) 1 FROM dbo.SlidingWindowTable WITH (TABLOCKX));

	--**************************
	--*** Purge expired data ***
	--**************************
	--purge oldest partition from main table (including partition for data older than first boundary)
	TRUNCATE TABLE dbo.SlidingWindowTable_Staging;
	ALTER TABLE dbo.SlidingWindowTable
		SWITCH PARTITION $PARTITION.PF_12MonthSlidingWindow(DATEADD(month, -1, @ExpiredMonthStartDate))
		TO dbo.SlidingWindowTable_Staging PARTITION $PARTITION.PF_12MonthSlidingWindow_Secondary(DATEADD(month, -1, @ExpiredMonthStartDate));
	ALTER TABLE dbo.SlidingWindowTable
		SWITCH PARTITION $PARTITION.PF_12MonthSlidingWindow(@ExpiredMonthStartDate)
		TO dbo.SlidingWindowTable_Staging PARTITION $PARTITION.PF_12MonthSlidingWindow_Secondary(@ExpiredMonthStartDate);
	TRUNCATE TABLE dbo.SlidingWindowTable_Staging;

	--remove expired partition boundary from both functions after purge
	ALTER PARTITION FUNCTION PF_12MonthSlidingWindow()
		MERGE RANGE (@ExpiredMonthStartDate);
	ALTER PARTITION FUNCTION PF_12MonthSlidingWindow_Secondary()
		MERGE RANGE (@ExpiredMonthStartDate);

	--*******************************
	--*** Prepare for future data ***
	--*******************************
	--add new partition on new data filegroup for future data to both functions/schemes
	ALTER PARTITION SCHEME PS_12MonthSlidingWindow
			  NEXT USED FG_NewerData;
	ALTER PARTITION FUNCTION PF_12MonthSlidingWindow()
		SPLIT RANGE (@FutureMonthStartDate);
	ALTER PARTITION SCHEME PS_12MonthSlidingWindow_Secondary
			  NEXT USED FG_NewerData;
	ALTER PARTITION FUNCTION PF_12MonthSlidingWindow_Secondary()
		SPLIT RANGE (@FutureMonthStartDate);

	--this will release the exclusive table lock
	COMMIT;

	--********************************************************
	--*** move partition from FG_NewerData to FG_OlderData ***
	--********************************************************
	--alter secondary partition function and scheme to move partition from FG_NewerData to FG_OlderData
	ALTER PARTITION FUNCTION PF_12MonthSlidingWindow_Secondary()
		MERGE RANGE (@MonthStartDateToMoveFromCurrentToOld);
	ALTER PARTITION SCHEME PS_12MonthSlidingWindow_Secondary
		NEXT USED FG_OlderData;
	ALTER PARTITION FUNCTION PF_12MonthSlidingWindow_Secondary()
		SPLIT RANGE (@MonthStartDateToMoveFromCurrentToOld);

	--rebuild staging table using new partition scheme to move partition to filegroup FG_OlderData
	CREATE UNIQUE CLUSTERED INDEX PK_SlidingWindowTable_Staging
		ON dbo.SlidingWindowTable_Staging(PartitioningColumn, OtherKeyColumn)
		WITH(DROP_EXISTING=ON)
		ON PS_12MonthSlidingWindow_Secondary(PartitioningColumn);

	--copy data from main table into corresponding staging table partition, now on FG_OlderData
	INSERT INTO dbo.SlidingWindowTable_Staging WITH(TABLOCKX) (PartitioningColumn, OtherKeyColumn, OtherData)
		SELECT
			  PartitioningColumn
			, OtherKeyColumn
			, OtherData
		FROM dbo.SlidingWindowTable
		WHERE
			PartitioningColumn >= @MonthStartDateToMoveFromCurrentToOld
			AND PartitioningColumn < DATEADD(month, 1, @MonthStartDateToMoveFromCurrentToOld);

	BEGIN TRAN;

	--acquire exclusive lock on main table to prevent deadlocking during partition maintenance
	SET @result = (SELECT TOP (0) 1 FROM dbo.SlidingWindowTable WITH (TABLOCKX));

	--make sure the second staging table is empty
	TRUNCATE TABLE dbo.SlidingWindowTable_Staging2;

	--switch out partition to be moved on FG_NewerData into second staging table
	ALTER TABLE dbo.SlidingWindowTable
		SWITCH PARTITION $PARTITION.PF_12MonthSlidingWindow(@MonthStartDateToMoveFromCurrentToOld)
		TO dbo.SlidingWindowTable_Staging2 PARTITION $PARTITION.PF_12MonthSlidingWindow(@MonthStartDateToMoveFromCurrentToOld);

	--remove redundant copy of data
	TRUNCATE TABLE dbo.SlidingWindowTable_Staging2;

	--switch the latest partition of main table on the FG_OlderData filegroup to the staging table
	--so that adjacent partitions of the main table are empty for the MERGE and SPLIT needed to 
	--physically move the main table partition to FG_OlderData
	ALTER TABLE dbo.SlidingWindowTable
		SWITCH PARTITION $PARTITION.PF_12MonthSlidingWindow(DATEADD(month, -1, @MonthStartDateToMoveFromCurrentToOld))
		TO dbo.SlidingWindowTable_Staging PARTITION $PARTITION.PF_12MonthSlidingWindow_Secondary(DATEADD(month, -1, @MonthStartDateToMoveFromCurrentToOld));

	--remove empty partition
	ALTER PARTITION FUNCTION PF_12MonthSlidingWindow()
		MERGE RANGE (@MonthStartDateToMoveFromCurrentToOld);

	--create new empty partition on old data filegroup for moved data
	ALTER PARTITION SCHEME PS_12MonthSlidingWindow
			  NEXT USED FG_OlderData;
	ALTER PARTITION FUNCTION PF_12MonthSlidingWindow()
		SPLIT RANGE (@MonthStartDateToMoveFromCurrentToOld);

	--now that both partition functions/schemes are identical, switch partitions from staging table back into main table
	ALTER TABLE dbo.SlidingWindowTable_Staging
		SWITCH PARTITION $PARTITION.PF_12MonthSlidingWindow_Secondary(DATEADD(month, -1, @MonthStartDateToMoveFromCurrentToOld))
		TO dbo.SlidingWindowTable PARTITION $PARTITION.PF_12MonthSlidingWindow(DATEADD(month, -1, @MonthStartDateToMoveFromCurrentToOld));
	ALTER TABLE dbo.SlidingWindowTable_Staging
		SWITCH PARTITION $PARTITION.PF_12MonthSlidingWindow_Secondary(@MonthStartDateToMoveFromCurrentToOld)
		TO dbo.SlidingWindowTable PARTITION $PARTITION.PF_12MonthSlidingWindow(@MonthStartDateToMoveFromCurrentToOld);

	COMMIT;

END TRY
BEGIN CATCH

	IF @@TRANCOUNT > 0 ROLLBACK;

	THROW;

END CATCH;
GO

--clear the buffer cache for repeatable performance testing of the copy (not something to run in production)
CHECKPOINT;
DBCC DROPCLEANBUFFERS;
--when scheduling after the start of a new month, set @RunMonth to the first day of the current month
DECLARE
	--@RunMonth datetime2(0) = DATEADD(day, 1, DATEADD(month, -1, EOMONTH(GETDATE())));
	  @RunMonth datetime2(0) = '20151201';

--calculate boundary dates based on @RunMonth
DECLARE
	  @ExpiredMonthStartDate datetime2(0) = DATEADD(month, -13, @RunMonth)
	, @MonthStartDateToMoveFromCurrentToOld datetime2(0) = DATEADD(month, -4, @RunMonth)
	, @FutureMonthStartDate datetime2(0) = DATEADD(month, 2, @RunMonth);

SELECT
	  @ExpiredMonthStartDate AS ExpiredMonthStartDate
	, @MonthStartDateToMoveFromCurrentToOld AS MonthStartDateToMoveFromCurrentToOld
	, @FutureMonthStartDate AS FutureMonthStartDate;

EXEC dbo.PerformMonthlySlidingWindowMaintenance
	  @ExpiredMonthStartDate = @ExpiredMonthStartDate
	, @MonthStartDateToMoveFromCurrentToOld = @MonthStartDateToMoveFromCurrentToOld
	, @FutureMonthStartDate = @FutureMonthStartDate;
GO

Partitioned Tables and Tiered Storage

A sliding window has long been a mainstay use case for table partitioning. A table partitioning sliding window pattern provides an efficient way to purge data according to retention needs and allows individual partitions to be placed on segregated storage. Well-planned data placement can improve performance of parallel scans of related data, meet RTO objectives with piecemeal restores, and meet performance SLAs with tiered storage. One can place more actively used partitions on the fastest storage available while keeping older, less often used data on slower storage, which is typically much less expensive and more abundant.

I’ve posted articles on this site that show how to automate retention of temporal data using partitioned tables and T-SQL scripts. In this post, I’ll combine those building blocks to implement a tiered storage strategy with an automated sliding window. This technique will keep the most current data on the fastest storage, move data to slower storage as temporal partitions age, purge the oldest partition, and finally prepare for new data. I’ll assume you’re already familiar with partitioning concepts, objects (PARTITION FUNCTION and PARTITION SCHEME), DDL (SWITCH, SPLIT, MERGE), and partition alignment. Alignment is required in order to use the methods described here.

Tiered Storage Planning
Partitioned tables or not, remember that SQL Server provides control of data placement only at the filegroup level. Without partitioning, an entire table or index is placed on the same filegroup. SQL Server then distributes data among the underlying files according to its proportional fill algorithm. Table partitioning provides more granular control over data placement by allowing individual partitions of a table or index to be placed on different filegroups when it is advantageous to do so to meet the objectives I mentioned earlier.

In order to control placement of individual partitions on different storage, at least one filegroup for each storage tier is needed, each with at least one file on that storage tier. Avoid mixing files within a filegroup that reside on different types of storage if you want maximum control over performance. Consider that there may be multiple levels of storage abstraction (OS, virtualization, SAN, NAS, etc.) that ultimately determine where data are physically stored, so one must be mindful of those implications when placing files within a storage tier filegroup.

I’ll walk through the implementation with this scenario. Let’s assume 1TB of solid state storage is available and an additional 5TB available on a spinning media SAN. Although I use only 2 storage tiers here, this technique can be extended for additional tiers if needed.
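
For example, a third tier could be added later with its own filegroup; below is a sketch against the database created in the complete script at the end of this article (the FG_ArchiveData filegroup name, path, and size are hypothetical):

ALTER DATABASE SlidingWindowWithTieredStorage
	ADD FILEGROUP FG_ArchiveData;
ALTER DATABASE SlidingWindowWithTieredStorage
	ADD FILE (
		  NAME = N'ArchiveData1'
		, FILENAME = N'F:\SqlDataFiles\MSSQLSERVER\SlidingWindowWithTieredStorage_ArchiveData1.ndf'
		, SIZE = 10GB
		) TO FILEGROUP FG_ArchiveData;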

A 12 month sliding window is planned with a single partitioned table. We’ll keep the current month plus 3 prior months on solid state storage and the remaining 9 months of historical data on the SAN. This necessitates a monthly maintenance script that performs the following tasks:

1) Purge expired data (older than 12 months)
2) Prepare for new data (on solid state storage)
3) Move data to slower storage (move the 4-month-old partition from solid state to spinning media storage)
4) Cleanup

I start by creating the partition function, partition scheme, partitioned table, and a staging table partitioned using the same partition scheme (complete DDL at the end of this article). The partitioned staging table avoids the need to create check constraints on the staging table in order to switch partitions back into the main table. I also create a secondary partition function and partition scheme, identical to the others but initially unused. These objects are used to move the partition and data, which I’ll discuss in detail later.

Note that I use a RANGE RIGHT partition function, which I consider to be more natural for incremental temporal data than RANGE LEFT. I also follow the recommended practice of SPLIT and MERGE against only empty partitions in order to avoid data movement and excessive logging (about 4 times that of normal DML).
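
A quick way to see the RANGE RIGHT semantics is the $PARTITION function; a boundary value belongs to the partition on its right. With the 15-boundary function defined at the end of this article, the query below should return 11, the partition that begins at the 2015-08-01 boundary:

SELECT $PARTITION.PF_12MonthSlidingWindow('2015-08-01T00:00:00') AS PartitionNumber;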

Monthly maintenance tasks are performed by a stored procedure scheduled via SQL Agent job to run after the start of each new month. The proc takes 3 date parameters:

    @ExpiredMonthStartDate: partition boundary of expired data to purge.
    @MonthStartDateToMoveFromCurrentToOld: partition boundary of the month to move to slower storage.
    @FutureMonthStartDate: partition boundary for future data to be created on fast storage.

Below are the details of the maintenance tasks.

Purge Expired data
The first task is the same as in any sliding window, where the oldest data are purged:

1) Truncate the staging table, which is partitioned using a partition scheme identical to the main table:

TRUNCATE TABLE dbo.SlidingWindowTable_Staging;

2) Switch expired data partitions from the main table into the corresponding staging table partitions using SWITCH:

ALTER TABLE dbo.SlidingWindowTable
	SWITCH PARTITION $PARTITION.PF_12MonthSlidingWindow(DATEADD(month, -1, @ExpiredMonthStartDate))
	TO dbo.SlidingWindowTable_Staging PARTITION $PARTITION.PF_12MonthSlidingWindow_Secondary(DATEADD(month, -1, @ExpiredMonthStartDate));
ALTER TABLE dbo.SlidingWindowTable
	SWITCH PARTITION $PARTITION.PF_12MonthSlidingWindow(@ExpiredMonthStartDate)
	TO dbo.SlidingWindowTable_Staging PARTITION $PARTITION.PF_12MonthSlidingWindow_Secondary(@ExpiredMonthStartDate);

3) Truncate staging table again:

TRUNCATE TABLE dbo.SlidingWindowTable_Staging;

4) Remove the first partition boundary with MERGE of the main table partition function:

ALTER PARTITION FUNCTION PF_12MonthSlidingWindow()
	MERGE RANGE (@ExpiredMonthStartDate);
ALTER PARTITION FUNCTION PF_12MonthSlidingWindow_Secondary()
	MERGE RANGE (@ExpiredMonthStartDate);

The secondary partition function is also merged to keep it identical to the main partition function. Again, no objects yet use the secondary partition function/scheme.

Prepare for New Data
The second task is also similar to the typical sliding window pattern, except the new partition is specifically placed on the solid state filegroup and the secondary partition function/scheme is also modified:

1) Set NEXT USED on both partition schemes to the solid state filegroup

ALTER PARTITION SCHEME PS_12MonthSlidingWindow
	NEXT USED FG_NewerData;
ALTER PARTITION SCHEME PS_12MonthSlidingWindow_Secondary
	NEXT USED FG_NewerData;

2) SPLIT both partition functions to create the future boundary

ALTER PARTITION FUNCTION PF_12MonthSlidingWindow()
	SPLIT RANGE (@FutureMonthStartDate);
ALTER PARTITION FUNCTION PF_12MonthSlidingWindow_Secondary()
	SPLIT RANGE (@FutureMonthStartDate);

Move Data to Slower Storage
The final task is to move the 4-month-old partition from solid state to spinning media storage. Unlike the previous MERGE and SPLIT operations, which are metadata-only due to the empty partitions, moving the non-empty partition requires physical movement of a potentially large amount of data. Although it is possible to do this with MERGE and SPLIT alone, that would be very inefficient. Instead, I use the DROP_EXISTING clause of CREATE INDEX after switching the partition to the staging table. The DROP_EXISTING clause leverages the existing index order to avoid an expensive sort operation.

Note that this technique switches the partition to be moved out of the main table so data is not available for querying during the move process. In my next post, I’ll show how to keep read-only data online during the move using INSERT…SELECT with detailed attention to performance.

1) SWITCH the partition of the month to be moved into the staging table (partitioned using the same partition scheme as the main table to ensure alignment):

ALTER TABLE dbo.SlidingWindowTable
	SWITCH PARTITION $PARTITION.PF_12MonthSlidingWindow(@MonthStartDateToMoveFromCurrentToOld)
	TO dbo.SlidingWindowTable_Staging PARTITION $PARTITION.PF_12MonthSlidingWindow(@MonthStartDateToMoveFromCurrentToOld);

2) Move the partition of the unused secondary partition scheme to the slower storage filegroup:

a. MERGE the secondary partition function boundary of the moved month to remove it from the secondary partition function and scheme:

ALTER PARTITION FUNCTION PF_12MonthSlidingWindow_Secondary()
	MERGE RANGE (@MonthStartDateToMoveFromCurrentToOld);

b. Set the secondary partition scheme NEXT USED to slower storage filegroup:

ALTER PARTITION SCHEME PS_12MonthSlidingWindow_Secondary
	NEXT USED FG_OlderData;

c. SPLIT the secondary partition function for the moved month to recreate the partition on the slower storage filegroup:

ALTER PARTITION FUNCTION PF_12MonthSlidingWindow_Secondary()
	SPLIT RANGE (@MonthStartDateToMoveFromCurrentToOld);

3) Rebuild the staging table on the secondary partition scheme. This will physically move the partition and data of the month to move (the only non-empty partition in the staging table) to the slower storage filegroup:

CREATE UNIQUE CLUSTERED INDEX PK_SlidingWindowTable_Staging
	ON dbo.SlidingWindowTable_Staging(PartitioningColumn, OtherKeyColumn)
	WITH(DROP_EXISTING=ON)
	ON PS_12MonthSlidingWindow_Secondary(PartitioningColumn);

4) SWITCH the partition prior to the moved month into the staging table so that the adjacent partitions of the main table are empty:

ALTER TABLE dbo.SlidingWindowTable
	SWITCH PARTITION $PARTITION.PF_12MonthSlidingWindow(DATEADD(month, -1, @MonthStartDateToMoveFromCurrentToOld))
	TO dbo.SlidingWindowTable_Staging PARTITION $PARTITION.PF_12MonthSlidingWindow_Secondary(DATEADD(month, -1, @MonthStartDateToMoveFromCurrentToOld));

5) MERGE the main table partition function to remove the moved month boundary:

ALTER PARTITION FUNCTION PF_12MonthSlidingWindow()
	MERGE RANGE (@MonthStartDateToMoveFromCurrentToOld);

6) Recreate the main table moved month partition on slower storage:

a. Set the main table partition scheme NEXT USED to the slower storage filegroup:

ALTER PARTITION SCHEME PS_12MonthSlidingWindow
	NEXT USED FG_OlderData;

b. SPLIT the main table partition function to recreate the moved month partition on the slower storage filegroup:

ALTER PARTITION FUNCTION PF_12MonthSlidingWindow()
	SPLIT RANGE (@MonthStartDateToMoveFromCurrentToOld);

7) SWITCH the moved and prior month partitions back into the main table:

ALTER TABLE dbo.SlidingWindowTable_Staging
	SWITCH PARTITION $PARTITION.PF_12MonthSlidingWindow_Secondary(DATEADD(month, -1, @MonthStartDateToMoveFromCurrentToOld))
	TO dbo.SlidingWindowTable PARTITION $PARTITION.PF_12MonthSlidingWindow(DATEADD(month, -1, @MonthStartDateToMoveFromCurrentToOld));
ALTER TABLE dbo.SlidingWindowTable_Staging
	SWITCH PARTITION $PARTITION.PF_12MonthSlidingWindow_Secondary(@MonthStartDateToMoveFromCurrentToOld)
	TO dbo.SlidingWindowTable PARTITION $PARTITION.PF_12MonthSlidingWindow(@MonthStartDateToMoveFromCurrentToOld);

Cleanup
1) Rebuild the staging table specifying the main table partition scheme to prepare for the next month's maintenance. The secondary partition function and scheme are again unused and are identical to the main table partition function/scheme:

CREATE UNIQUE CLUSTERED INDEX PK_SlidingWindowTable_Staging
	ON dbo.SlidingWindowTable_Staging(PartitioningColumn, OtherKeyColumn)
	WITH(DROP_EXISTING=ON)
	ON PS_12MonthSlidingWindow(PartitioningColumn);

Complete Script
Below is a complete script to create the initial database, partition function, partition scheme, table DDL, sample data, partition maintenance stored procedure, and example usage. The D drive is on solid state storage, E and L drives are on a spinning disk SAN, with the L drive dedicated to log.

Note that the stored procedure acquires an exclusive table lock on the partitioned table to prevent deadlocking during partition maintenance. However, the duration of the transaction should be very short since only metadata operations are performed.

If you have multiple partition schemes (e.g. indexes on separate filegroups) using the same partition functions, you’ll need to create secondary schemes for those too and set the NEXT USED accordingly prior to SPLIT. I can’t think of a reason to also have different partition functions in this aligned table scenario since the functions must be identical anyway to provide alignment. If you also have different functions for the index partition schemes, those will need to be split and merged at the same time as the data partition scheme.
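
As a sketch of that scenario (the PS_12MonthSlidingWindow_Index scheme and FG_IndexData filegroup are hypothetical), every scheme over the function needs its own NEXT USED designation before the function is split:

CREATE PARTITION SCHEME PS_12MonthSlidingWindow_Index
	AS PARTITION PF_12MonthSlidingWindow
	ALL TO (FG_IndexData);

--before any SPLIT of PF_12MonthSlidingWindow, designate NEXT USED on every scheme that uses it
ALTER PARTITION SCHEME PS_12MonthSlidingWindow_Index
	NEXT USED FG_IndexData;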

--D drive: solid state; E drive: spinning disk SAN; L drive: SAN volume dedicated to log
CREATE DATABASE SlidingWindowWithTieredStorage ON
	PRIMARY 
		( NAME = N'SlidingWindowWithTieredStorage'
		, FILENAME = N'D:\SqlDataFiles\MSSQLSERVER\SlidingWindowWithTieredStorage.mdf'
		, SIZE = 10MB
		, MAXSIZE = UNLIMITED
		, FILEGROWTH = 10MB ),
    FILEGROUP FG_NewerData 
		( NAME = N'NewerData1'
		, FILENAME = N'D:\SqlDataFiles\MSSQLSERVER\SlidingWindowWithTieredStorage_NewerData1.ndf'
		, SIZE = 1TB
		, MAXSIZE = UNLIMITED
		, FILEGROWTH = 10GB ),
    FILEGROUP FG_OlderData 
		( NAME = N'OlderData2'
		, FILENAME = N'E:\SqlDataFiles\MSSQLSERVER\SlidingWindowWithTieredStorage_OlderData2.ndf' 
		, SIZE = 1TB
		, MAXSIZE = UNLIMITED
		, FILEGROWTH = 10GB ),
		( NAME = N'OlderData3'
		, FILENAME = N'E:\SqlDataFiles\MSSQLSERVER\SlidingWindowWithTieredStorage_OlderData3.ndf' 
		, SIZE = 1TB
		, MAXSIZE = UNLIMITED
		, FILEGROWTH = 10GB ),
		( NAME = N'OlderData4'
		, FILENAME = N'E:\SqlDataFiles\MSSQLSERVER\SlidingWindowWithTieredStorage_OlderData4.ndf' 
		, SIZE = 1TB
		, MAXSIZE = UNLIMITED
		, FILEGROWTH = 10GB ),
		( NAME = N'OlderData5'
		, FILENAME = N'E:\SqlDataFiles\MSSQLSERVER\SlidingWindowWithTieredStorage_OlderData5.ndf' 
		, SIZE = 1TB
		, MAXSIZE = UNLIMITED
		, FILEGROWTH = 10GB )
    LOG ON 
	( NAME = N'SlidingWindowWithTieredStorage_Log'
	, FILENAME = N'L:\SqlLogFiles\MSSQLSERVER\SlidingWindowWithTieredStorage_Log.ldf' 
	, SIZE = 1024KB 
	, MAXSIZE = 2048GB 
	, FILEGROWTH = 1GB);
GO

USE SlidingWindowWithTieredStorage;
GO

--main table partition function (before start of next month)
CREATE PARTITION FUNCTION PF_12MonthSlidingWindow (datetime2(0))
AS RANGE RIGHT FOR VALUES (
	  '2014-11-01T00:00:00' --current month -12 on OlderData filegroup
	, '2014-12-01T00:00:00' --current month -11 on OlderData filegroup
	, '2015-01-01T00:00:00' --current month -10 on OlderData filegroup
	, '2015-02-01T00:00:00' --current month -9 on OlderData filegroup
	, '2015-03-01T00:00:00' --current month -8 on OlderData filegroup
	, '2015-04-01T00:00:00' --current month -7 on OlderData filegroup
	, '2015-05-01T00:00:00' --current month -6 on OlderData filegroup
	, '2015-06-01T00:00:00' --current month -5 on OlderData filegroup
	, '2015-07-01T00:00:00' --current month -4 on OlderData filegroup
	, '2015-08-01T00:00:00' --current month -3 on NewerData filegroup
	, '2015-09-01T00:00:00' --current month -2 on NewerData filegroup
	, '2015-10-01T00:00:00' --current month -1 on NewerData filegroup
	, '2015-11-01T00:00:00' --current month on NewerData filegroup
	, '2015-12-01T00:00:00' --next month on NewerData filegroup
	, '2016-01-01T00:00:00' --future month
)
GO

--main table partition scheme
CREATE PARTITION SCHEME PS_12MonthSlidingWindow
AS PARTITION PF_12MonthSlidingWindow
TO (
	  FG_OlderData --older than 12 months
	, FG_OlderData --current month -12 on OlderData
	, FG_OlderData --current month -11 on OlderData
	, FG_OlderData --current month -10 on OlderData
	, FG_OlderData --current month -9 on OlderData
	, FG_OlderData --current month -8 on OlderData
	, FG_OlderData --current month -7 on OlderData
	, FG_OlderData --current month -6 on OlderData
	, FG_OlderData --current month -5 on OlderData
	, FG_OlderData --current month -4 on OlderData
	, FG_NewerData --current month -3 on NewerData
	, FG_NewerData --current month -2 on NewerData
	, FG_NewerData --current month -1 on NewerData
	, FG_NewerData --current month on NewerData
	, FG_NewerData --next month on NewerData
	, FG_NewerData --future month on NewerData
      );
GO

--secondary partition function identical to main function
CREATE PARTITION FUNCTION PF_12MonthSlidingWindow_Secondary (datetime2(0))
AS RANGE RIGHT FOR VALUES (
	  '2014-11-01T00:00:00' --current month -12 on OlderData filegroup
	, '2014-12-01T00:00:00' --current month -11 on OlderData filegroup
	, '2015-01-01T00:00:00' --current month -10 on OlderData filegroup
	, '2015-02-01T00:00:00' --current month -9 on OlderData filegroup
	, '2015-03-01T00:00:00' --current month -8 on OlderData filegroup
	, '2015-04-01T00:00:00' --current month -7 on OlderData filegroup
	, '2015-05-01T00:00:00' --current month -6 on OlderData filegroup
	, '2015-06-01T00:00:00' --current month -5 on OlderData filegroup
	, '2015-07-01T00:00:00' --current month -4 on OlderData filegroup
	, '2015-08-01T00:00:00' --current month -3 on NewerData filegroup
	, '2015-09-01T00:00:00' --current month -2 on NewerData filegroup
	, '2015-10-01T00:00:00' --current month -1 on NewerData filegroup
	, '2015-11-01T00:00:00' --current month on NewerData filegroup
	, '2015-12-01T00:00:00' --next month on NewerData filegroup
	, '2016-01-01T00:00:00' --future month
)
GO

--secondary partition scheme identical to main scheme
CREATE PARTITION SCHEME PS_12MonthSlidingWindow_Secondary
AS PARTITION PF_12MonthSlidingWindow_Secondary
TO (
	  FG_OlderData --older than 12 months
	, FG_OlderData --current month -12 on OlderData
	, FG_OlderData --current month -11 on OlderData
	, FG_OlderData --current month -10 on OlderData
	, FG_OlderData --current month -9 on OlderData
	, FG_OlderData --current month -8 on OlderData
	, FG_OlderData --current month -7 on OlderData
	, FG_OlderData --current month -6 on OlderData
	, FG_OlderData --current month -5 on OlderData
	, FG_OlderData --current month -4 on OlderData
	, FG_NewerData --current month -3 on NewerData
	, FG_NewerData --current month -2 on NewerData
	, FG_NewerData --current month -1 on NewerData
	, FG_NewerData --current month on NewerData
	, FG_NewerData --next month on NewerData
	, FG_NewerData --future month on NewerData
      );
GO

--this is the main partitioned table
CREATE TABLE dbo.SlidingWindowTable(
	  PartitioningColumn datetime2(0)
	, OtherKeyColumn int NOT NULL
	, OtherData int NULL
	, CONSTRAINT PK_SlidingWindowTable PRIMARY KEY
		CLUSTERED (PartitioningColumn, OtherKeyColumn)
		ON PS_12MonthSlidingWindow(PartitioningColumn)
	) ON PS_12MonthSlidingWindow(PartitioningColumn);
GO

--The staging table initially uses the same scheme as the main table
--except while maintenance is in progress
CREATE TABLE dbo.SlidingWindowTable_Staging(
	  PartitioningColumn datetime2(0)
	, OtherKeyColumn int NOT NULL
	, OtherData int NULL
	, CONSTRAINT PK_SlidingWindowTable_Staging PRIMARY KEY
		CLUSTERED (PartitioningColumn, OtherKeyColumn)
		ON PS_12MonthSlidingWindow(PartitioningColumn)
	) ON PS_12MonthSlidingWindow(PartitioningColumn);
GO

--load main table with 14M rows of sample data
WITH
	t4 AS (SELECT n FROM (VALUES(0),(0),(0),(0)) t(n))
	,t256 AS (SELECT 0 AS n FROM t4 AS a CROSS JOIN t4 AS b CROSS JOIN t4 AS c CROSS JOIN t4 AS d)
	,t16M AS (SELECT ROW_NUMBER() OVER (ORDER BY (a.n)) AS num FROM t256 AS a CROSS JOIN t256 AS b CROSS JOIN t256 AS c)
INSERT INTO dbo.SlidingWindowTable WITH (TABLOCKX) (PartitioningColumn, OtherKeyColumn, OtherData) 
SELECT DATEADD(month, num/1000000, '20141101'), num, num
FROM t16M
WHERE num <= 14000000;
GO

CREATE PROCEDURE dbo.PerformMonthlySlidingWindowMaintenance
	  @ExpiredMonthStartDate datetime2(0) --purge data for this month
	, @MonthStartDateToMoveFromCurrentToOld datetime2(0) --move data from this month from current data filegroup to old data filegroup
	, @FutureMonthStartDate datetime2(0) --create new partition for this date on current data filegroup
/*
	This proc performs partition maintenance needed to maintain a monthly sliding
	window with filegroups on tiered storage.  The objective is to keep recent, frequently
	used data on filegroup FG_NewerData and less often accessed data on filegroup FG_OlderData.
*/
AS

SET XACT_ABORT ON;

BEGIN TRY

	BEGIN TRAN;

	--acquire exclusive lock on main table to prevent deadlocking during partition maintenance
	DECLARE @result int = (SELECT TOP (0) 1 FROM dbo.SlidingWindowTable WITH (TABLOCKX));

	--purge oldest partition from main table (including partition for data older than first boundary)
	TRUNCATE TABLE dbo.SlidingWindowTable_Staging;
	ALTER TABLE dbo.SlidingWindowTable
		SWITCH PARTITION $PARTITION.PF_12MonthSlidingWindow(DATEADD(month, -1, @ExpiredMonthStartDate))
		TO dbo.SlidingWindowTable_Staging PARTITION $PARTITION.PF_12MonthSlidingWindow_Secondary(DATEADD(month, -1, @ExpiredMonthStartDate));
	ALTER TABLE dbo.SlidingWindowTable
		SWITCH PARTITION $PARTITION.PF_12MonthSlidingWindow(@ExpiredMonthStartDate)
		TO dbo.SlidingWindowTable_Staging PARTITION $PARTITION.PF_12MonthSlidingWindow_Secondary(@ExpiredMonthStartDate);
	TRUNCATE TABLE dbo.SlidingWindowTable_Staging;

	--remove expired partition boundary from both functions after purge
	ALTER PARTITION FUNCTION PF_12MonthSlidingWindow()
		MERGE RANGE (@ExpiredMonthStartDate);
	ALTER PARTITION FUNCTION PF_12MonthSlidingWindow_Secondary()
		MERGE RANGE (@ExpiredMonthStartDate);

	--add new partition on new data filegroup for future data to both functions/schemes
	ALTER PARTITION SCHEME PS_12MonthSlidingWindow
			  NEXT USED FG_NewerData;
	ALTER PARTITION FUNCTION PF_12MonthSlidingWindow()
		SPLIT RANGE (@FutureMonthStartDate);
	ALTER PARTITION SCHEME PS_12MonthSlidingWindow_Secondary
			  NEXT USED FG_NewerData;
	ALTER PARTITION FUNCTION PF_12MonthSlidingWindow_Secondary()
		SPLIT RANGE (@FutureMonthStartDate);
	
	--switch partition to be moved on FG_NewerData into the staging table
	ALTER TABLE dbo.SlidingWindowTable
		SWITCH PARTITION $PARTITION.PF_12MonthSlidingWindow(@MonthStartDateToMoveFromCurrentToOld)
		TO dbo.SlidingWindowTable_Staging PARTITION $PARTITION.PF_12MonthSlidingWindow(@MonthStartDateToMoveFromCurrentToOld);

	--alter secondary partition function and scheme to move partition from FG_NewerData to FG_OlderData
	ALTER PARTITION FUNCTION PF_12MonthSlidingWindow_Secondary()
		MERGE RANGE (@MonthStartDateToMoveFromCurrentToOld);
	ALTER PARTITION SCHEME PS_12MonthSlidingWindow_Secondary
		NEXT USED FG_OlderData;
	ALTER PARTITION FUNCTION PF_12MonthSlidingWindow_Secondary()
		SPLIT RANGE (@MonthStartDateToMoveFromCurrentToOld);

	--this will release the exclusive table lock, but the data in the staging table is temporarily unavailable
	COMMIT;

	--rebuild staging table using new partition scheme to move switched out data to old filegroup
	CREATE UNIQUE CLUSTERED INDEX PK_SlidingWindowTable_Staging
		ON dbo.SlidingWindowTable_Staging(PartitioningColumn, OtherKeyColumn)
		WITH(DROP_EXISTING=ON)
		ON PS_12MonthSlidingWindow_Secondary(PartitioningColumn);

	BEGIN TRAN;

	--acquire exclusive lock on main table to prevent deadlocking during partition maintenance
	SET @result = (SELECT TOP (0) 1 FROM dbo.SlidingWindowTable WITH (TABLOCKX));

	--switch the latest partition of main table on the FG_OlderData filegroup to the staging table
	--so that adjacent partitions of the main table are empty for the MERGE and SPLIT needed to 
	--physically move the main table partition
	ALTER TABLE dbo.SlidingWindowTable
		SWITCH PARTITION $PARTITION.PF_12MonthSlidingWindow(DATEADD(month, -1, @MonthStartDateToMoveFromCurrentToOld))
		TO dbo.SlidingWindowTable_Staging PARTITION $PARTITION.PF_12MonthSlidingWindow_Secondary(DATEADD(month, -1, @MonthStartDateToMoveFromCurrentToOld));

	--remove empty partition
	ALTER PARTITION FUNCTION PF_12MonthSlidingWindow()
		MERGE RANGE (@MonthStartDateToMoveFromCurrentToOld);

	--create new empty partition on old data filegroup for moved data
	ALTER PARTITION SCHEME PS_12MonthSlidingWindow
			  NEXT USED FG_OlderData;
	ALTER PARTITION FUNCTION PF_12MonthSlidingWindow()
		SPLIT RANGE (@MonthStartDateToMoveFromCurrentToOld);

	--now that both partition functions/schemes are identical, switch partitions from staging table back into main table
	ALTER TABLE dbo.SlidingWindowTable_Staging
		SWITCH PARTITION $PARTITION.PF_12MonthSlidingWindow_Secondary(DATEADD(month, -1, @MonthStartDateToMoveFromCurrentToOld))
		TO dbo.SlidingWindowTable PARTITION $PARTITION.PF_12MonthSlidingWindow(DATEADD(month, -1, @MonthStartDateToMoveFromCurrentToOld));
	ALTER TABLE dbo.SlidingWindowTable_Staging
		SWITCH PARTITION $PARTITION.PF_12MonthSlidingWindow_Secondary(@MonthStartDateToMoveFromCurrentToOld)
		TO dbo.SlidingWindowTable PARTITION $PARTITION.PF_12MonthSlidingWindow(@MonthStartDateToMoveFromCurrentToOld);

	--rebuild staging table using same partition scheme as main table to prepare for next month
	CREATE UNIQUE CLUSTERED INDEX PK_SlidingWindowTable_Staging
		ON dbo.SlidingWindowTable_Staging(PartitioningColumn, OtherKeyColumn)
		WITH(DROP_EXISTING=ON)
		ON PS_12MonthSlidingWindow(PartitioningColumn);

	COMMIT;

END TRY
BEGIN CATCH

	IF @@TRANCOUNT > 0 ROLLBACK;

	THROW;

END CATCH;
GO

--when scheduling after the start of a new month, set @RunMonth to the first day of the current month
DECLARE
	--@RunMonth datetime2(0) = DATEADD(day, 1, DATEADD(month, -1, EOMONTH(GETDATE())));
	  @RunMonth datetime2(0) = '20151201';

--calculate boundary dates based on @RunMonth
DECLARE
	  @ExpiredMonthStartDate datetime2(0) = DATEADD(month, -13, @RunMonth)
	, @MonthStartDateToMoveFromCurrentToOld datetime2(0) = DATEADD(month, -4, @RunMonth)
	, @FutureMonthStartDate datetime2(0) = DATEADD(month, 2, @RunMonth);

SELECT
	  @ExpiredMonthStartDate
	, @MonthStartDateToMoveFromCurrentToOld
	, @FutureMonthStartDate;

EXEC dbo.PerformMonthlySlidingWindowMaintenance
	  @ExpiredMonthStartDate = @ExpiredMonthStartDate
	, @MonthStartDateToMoveFromCurrentToOld = @MonthStartDateToMoveFromCurrentToOld
	, @FutureMonthStartDate = @FutureMonthStartDate;
GO

SQL Server and Hyper-threading

I had a client ask whether or not SQL Server could take advantage of all 32 logical processors if they enabled hyper-threading on their server. They were running SQL Server 2012 Standard Edition on a server with 4 sockets, each with 4 cores. As with nearly all things related to SQL Server, the answer is “it depends”.

The compute capacity of SQL Server 2012 Standard Edition (and later versions as of this writing) is the lesser of 4 sockets or 16 cores regardless of the number of logical processors. With hyper-threading enabled and running SQL Server on bare metal, the number of logical processors doubles from 16 to 32. SQL Server 2012 SE can use all 32 of these logical processors because the compute limit is based on physical sockets/cores, not logical processors.

However, SQL Server 2012 SE can use only 16 of the 32 logical processors when running virtualized on the same physical server. This is because the physical processor architecture isn’t exposed to the guest OS. Logical processors available to the virtualization host are mapped to guest VMs and these appear as virtual sockets and cores to both the OS and SQL Server with no notion of hyper-threading. The end result is that SQL Server 2012 SE will use no more than 16 logical processors when virtualized with or without hyper-threading enabled.
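
One can check what SQL Server actually sees and uses with a couple of DMV queries (a sketch; run them on the instance in question):

--logical processors visible to SQL Server; hyperthread_ratio is logical CPUs per physical processor package
SELECT cpu_count, hyperthread_ratio
FROM sys.dm_os_sys_info;

--schedulers SQL Server is actually using for work
SELECT COUNT(*) AS visible_online_schedulers
FROM sys.dm_os_schedulers
WHERE status = N'VISIBLE ONLINE';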

Other Hyper-threading Considerations
Regardless of the SQL Server edition, one should enable hyper-threading only if the OS and SQL Server can utilize the additional logical processors; otherwise, leave it disabled so that logical processors and physical cores are one and the same, providing the best performance possible. When the additional logical processors provided by hyper-threading can be used, the SQL Server performance benefits, or even detriments, are very workload-dependent. The only way to accurately ascertain HT performance is with your actual production workload.

My past experience is that OLTP workloads benefit the most from hyper-threading. In the early days of HT technology (before Windows and SQL Server were HT-aware), I observed a 15-20% performance improvement for an OLTP workload. Unfortunately, reporting workload performance could worsen with HT enabled. Slava Oks’s blog post may explain why.

The world has changed a lot since then. SQL Server is now both HT and NUMA aware, with improved locking primitives and memory-optimized structures available that avoid locking entirely. With current SQL Server versions running bare-metal on modern hardware, I suggest you enable HT unless your actual workload experience dictates otherwise. Premature optimization is evil; don’t disable HT preemptively without justification from your own workload or you may be wasting free CPU cycles. Importantly, limit MAXDOP per this support article.
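
A minimal sketch of capping instance-wide parallelism with sp_configure (the value 8 is only a placeholder; derive the appropriate value for your core and NUMA layout from the referenced guidance):

EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure N'max degree of parallelism', 8; --placeholder value
RECONFIGURE;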

See Linchi Shea’s post and also this one for reporting workload tests with and without HT enabled. Also, see Joe Chang’s post. The first graph is especially interesting in that it shows an actual production OLTP workload running with and without HT. The remainder of the post shows results of TPC-H benchmark tests, a synthetic decision support (reporting) workload.

Again, your workload is unique so only you can ascertain if HT is actually beneficial.

Beware Job Recruiting SCAM

I’ve had multiple reports of a scam where persons have used my identity to entice IT professionals into applying for positions they have no business recruiting for. The scammers later ask for a PayPal payment for their services. The authorities have been notified of this fraudulent activity.

I urge everyone to be vigilant with regard to social engineering scams like this. Beware of unsolicited emails from persons you don’t know personally. It is easy for unscrupulous individuals to glean much information from publicly available sources (job descriptions, professional profiles, etc.) and tailor a message customized for a particular job seeker.

SQL Server File Naming Standards

Attention to detail in naming SQL Server physical files and logical file names will make a DBA’s life easier. This is especially important when using RESTORE or attach. I suggest one establish and follow a naming standard for physical and logical database file names.

SQL Server does not enforce any particular naming standard for files.  SQL Server is perfectly happy with a data file named “readme.txt” or a log file named “word.doc”.  Of course, such an inappropriate name and extension will lead to confusion so the best practice is to name files appropriately.  An appropriate SQL Server database file name is essentially one that is self-documenting; one should be able to determine the exact purpose of a file simply by examining the name.  I recommend a naming standard that includes the associated database name, filegroup name, and file type.

Physical File Names
I propose adopting a physical file naming convention of “<database-name>_<filegroup-name>_<uniqueifier>.<file-type>” where:

  • <database-name> is the name of the associated database
  • <filegroup-name> is the name of the filegroup containing the file, or the literal “Log” for log files
  • <uniqueifier> is an integer to ensure the file name is unique
  • <file-type> is the standard SQL Server extension for the file type (“mdf”, “ndf”, or “ldf”) as documented in the SQL Server Books Online Database Files and Filegroups topic

Personally, I use an underscore to separate the name components but a dash will also suffice.  Ideally, the separator character should never be used in database or filegroup names to avoid ambiguity.  I recommend one avoid using special characters in database and filegroup names (e.g. use proper case) and ensure database and filegroup names conform to the rules for regular identifiers as outlined in the Books Online Database Identifiers topic.

There is some wiggle room as to how strictly one adheres to this naming convention.  One could omit the filegroup name and uniqueifier components for the primary data file (mdf) because this file is implicitly in the PRIMARY filegroup and there can be only one primary data file in the database.  Similarly, the uniqueifier could be omitted for the first or only file within a filegroup.   That being said, stricter adherence to this naming convention provides better consistency and makes naming files more of a no-brainer.

Note that the uniqueifier in the name need not imply a sequential value with no gaps.  For example, consider a database named MyDatabase with filegroup DataFG containing 3 files named ‘MyDatabase_DataFG_1.ndf’, ‘MyDatabase_DataFG_2.ndf’, and ‘MyDatabase_DataFG_3.ndf’.  If the second file is removed, there is no requirement to rename file ‘MyDatabase_DataFG_3.ndf’ to ‘MyDatabase_DataFG_2.ndf’.  However, if one is anal about such things, there is no harm in doing so other than the unavailability of the database during the maintenance.

The physical file naming convention described above will guarantee physical file names are unique within a SQL Server instance and facilitate relocation to different drives/folders without naming conflicts.  On a server with multiple instances, I suggest placing files in separate folders for each instance.  This practice will better organize database files while avoiding file name conflicts when a database with the same name exists on different instances.
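
A quick way to audit existing names against the convention is to query the file catalog view in the context of each database:

--logical and physical file names for the current database
SELECT name AS logical_name, physical_name, type_desc
FROM sys.database_files;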

Logical File Names
Although I commonly see database names embedded within logical file names, I suggest one avoid that practice.  The scope of a logical file name is the context database, so including the database name is redundant.  More importantly, the original logical file names are retained when a database is subsequently restored or attached with a different name, so the names become out-of-sync with the actual database name unless one explicitly changes them afterward to match the new database name.  This extra administrative work is often overlooked and can be avoided entirely by not including the database name in logical file names when the initial database is created or altered.
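
If a database has already been restored or attached under a new name, stale logical names can be corrected with ALTER DATABASE…MODIFY FILE; a sketch (the names here are hypothetical):

ALTER DATABASE ExampleDatabaseCopy
	MODIFY FILE (NAME = N'ExampleDatabase_Data', NEWNAME = N'DataFG_1');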

I suggest one use the same naming convention for the logical file name as the physical file name but without the database name.  The logical name will therefore match the last part of the physical name (<filegroup-name>_<uniqueifier>).

Unfortunately, SQL Server likes to include the database name in logical names of the primary data file and log file during initial creation.  Executing a minimal create database T-SQL statement like “CREATE DATABASE Foo;” will result in the files below created in the default data and log folder for the instance.  This also applies to the logical names suggested by the SSMS GUI, although one can specify different names as desired.

Logical Name    Physical Name
Foo             D:\SqlDataFiles\Foo.mdf
Foo_log         L:\SqlLogFiles\Foo_log.ldf

I recommend using the expanded form of CREATE DATABASE so that you have complete control over names, locations, and sizes.  Consider creating a custom SSMS template or code snippet to facilitate creating databases with the proper names.

CREATE DATABASE ExampleDatabase
ON PRIMARY
    (NAME = PRIMARY_1,
    FILENAME = N'D:\SqlDataFiles\ExampleDatabase_PRIMARY_1.mdf',
    SIZE = 10MB,
    MAXSIZE = 50MB,
    FILEGROWTH = 10%)
LOG ON
    ( NAME = Log_1,
    FILENAME = N'L:\SqlLogFiles\ExampleDatabase_Log_1.ldf',
    SIZE = 10MB,
    MAXSIZE = 50MB,
    FILEGROWTH = 10MB);

Examples

Below are examples of the naming conventions detailed in this article.

Simple database with only a primary data and log file:

Logical Name    Physical Name
PRIMARY_1       D:\SqlDataFiles\ExampleDatabase_PRIMARY_1.mdf
Log_1           L:\SqlLogFiles\ExampleDatabase_Log_1.ldf

Database with 2 files in PRIMARY filegroup, 2 secondary filegroups containing 3 files each, and 2 log files:

Logical Name    Physical Name
PRIMARY_1       D:\SqlDataFiles\ExampleDatabase_PRIMARY_1.mdf
PRIMARY_2       E:\SqlDataFiles\ExampleDatabase_PRIMARY_2.ndf
DataFG_1        F:\SqlDataFiles\ExampleDatabase_DataFG_1.ndf
DataFG_2        G:\SqlDataFiles\ExampleDatabase_DataFG_2.ndf
DataFG_3        H:\SqlDataFiles\ExampleDatabase_DataFG_3.ndf
IndexFG_1       I:\SqlDataFiles\ExampleDatabase_IndexFG_1.ndf
IndexFG_2       J:\SqlDataFiles\ExampleDatabase_IndexFG_2.ndf
IndexFG_3       K:\SqlDataFiles\ExampleDatabase_IndexFG_3.ndf
Log_1           L:\SqlLogFiles\ExampleDatabase_Log_1.ldf
Log_2           M:\SqlLogFiles\ExampleDatabase_Log_2.ldf

Table Partitioning Best Practices

SQL Server table partitioning has a number of gotchas without proper planning.  This article demonstrates those that commonly cause grief and recommends best practices to avoid them.

Implications of the Partition Function Range Specification

One needs a good understanding of how the RANGE LEFT/RIGHT specification affects partition setup and management.  The RANGE specification determines:

  • The partition created by SPLIT
  • The partition removed with MERGE
  • The permanent partition that can never be removed from partition schemes

Below are nuances of the RANGE specification that commonly surprise people.

The Partition Created by SPLIT

New partitions are created by splitting a partition function.  A partition function SPLIT splits an existing partition into 2 separate ones, changing all of the underlying partition schemes, tables, and indexes.  Below are the actions performed when a RANGE LEFT or RANGE RIGHT partition function is SPLIT; note the important differences between the two:

The actions performed by a SPLIT of a RANGE LEFT partition function:

  • Identify existing partition to be split, which is the one that contains the new boundary (or the last partition if no existing boundaries are higher than the one being added)
  • Add the new boundary to the partition function, maintaining boundary order and incrementing subsequent partition numbers
  • Create a new partition to the left of the existing one on the NEXT USED filegroup of each partition scheme that uses the function
  • For each table/index using the affected partition scheme(s), move rows from the existing split partition that are less than or equal to the new boundary into the newly created partition on the left

The actions performed by a SPLIT of a RANGE RIGHT partition function:

  • Identify existing partition to be split, which is the one that contains the new boundary (or the first partition if no existing boundaries are less than the one being added)
  • Add the new boundary to the partition function, maintaining boundary order and incrementing subsequent partition numbers
  • Create a new partition to the right of the existing one on the NEXT USED filegroup for each partition scheme that uses the function
  • For each table/index using the affected partition scheme(s), move rows from the existing split partition that are greater than or equal to the new boundary into the newly created partition on the right

I generally suggest one use RANGE RIGHT because it is more natural and helps avoid common pitfalls when adding incremental partition boundaries.  RANGE LEFT behavior is not intuitive and trips up many DBAs, but I will discuss it for completeness.  Consider this example of a RANGE LEFT partition function on a datetime column, where the initial setup is for 2 years of data (2013 and 2014):

CREATE DATABASE PartitioningDemo ON PRIMARY
        ( NAME = N'Primary', FILENAME = N'C:\SqlDataFiles\PartitioningDemo.mdf' , SIZE = 100MB , MAXSIZE = UNLIMITED, FILEGROWTH = 10MB ),
    FILEGROUP FG_2013
        ( NAME = N'FG_2013_1', FILENAME = N'C:\SqlDataFiles\PartitioningDemo_FG_2013_1.ndf' , SIZE = 10MB , MAXSIZE = UNLIMITED, FILEGROWTH = 10MB ),
    FILEGROUP FG_2014
        ( NAME = N'FG_2014_1', FILENAME = N'C:\SqlDataFiles\PartitioningDemo_FG_2014_1.ndf' , SIZE = 10MB , MAXSIZE = UNLIMITED, FILEGROWTH = 10MB )
    LOG ON
        ( NAME = N'Log_1', FILENAME = N'C:\SqlLogFiles\PartitioningDemo_Log_1.ldf' , SIZE = 100MB , MAXSIZE = 10GB , FILEGROWTH = 10MB);
GO

USE PartitioningDemo;
GO

CREATE PARTITION FUNCTION PF_DateTimeLeft(datetime) AS
    RANGE LEFT FOR VALUES(
        N'2013-12-31T23:59:59.997'
    );

CREATE PARTITION SCHEME PS_DateTimeLeft AS
    PARTITION PF_DateTimeLeft TO(
        FG_2013
        , FG_2014
    );

CREATE TABLE dbo.PartitionedTableLeft(
    DateTimeColumn datetime
    ) ON PS_DateTimeLeft(DateTimeColumn);

INSERT INTO dbo.PartitionedTableLeft VALUES
      (N'2013-01-01T01:02:03.340')
    , (N'2013-02-03T04:05:06.780')
    , (N'2014-01-01T01:02:03.340')
    , (N'2014-02-03T04:05:06.780')
    , (N'2014-03-04T05:06:07.890');
GO

This initial setup results in 2 partitions and data properly mapped to the 2 yearly filegroups of the scheme.  Now, we need to prepare for year 2015 so we add a new filegroup, set the NEXT USED filegroup, SPLIT the function for year 2015, and insert data for 2015:

ALTER DATABASE PartitioningDemo
    ADD FILEGROUP FG_2015;

ALTER DATABASE PartitioningDemo
    ADD FILE
        ( NAME = N'FG_2015_1', FILENAME = N'C:\SqlDataFiles\PartitioningDemo_FG_2015_1.ndf' , SIZE = 10MB , MAXSIZE = UNLIMITED, FILEGROWTH = 10MB )
    TO FILEGROUP FG_2015;

ALTER PARTITION SCHEME PS_DateTimeLeft
    NEXT USED FG_2015;

ALTER PARTITION FUNCTION PF_DateTimeLeft()
    SPLIT RANGE(N'2014-12-31T23:59:59.997');

INSERT INTO dbo.PartitionedTableLeft VALUES
    (N'2015-01-01T01:02:03.346');
GO

This SPLIT results in adding the new boundary to the function as expected:

CREATE PARTITION FUNCTION PF_DateTimeLeft(datetime) AS
    RANGE LEFT FOR VALUES(
          N'2013-12-31T23:59:59.997'
        , N'2014-12-31T23:59:59.997'
);

However, the partition scheme is not as desired.  The resultant partition scheme is:

CREATE PARTITION SCHEME PS_DateTimeLeft AS
    PARTITION PF_DateTimeLeft TO(
          FG_2013
        , FG_2015
        , FG_2014
    );

As you can see, the filegroup for 2015 was inserted into the scheme before the 2014 one, resulting in 2014 data in the FG_2015 filegroup and 2015 data in the FG_2014 filegroup.  Not only is the filegroup mapping now wrong, data movement was required to move the entire year of 2014 data into the new partition.  That is no big deal here since only 3 rows were moved by this demo script, but in a production table this movement could be a show stopper.  Logging during SPLIT/MERGE data movement is about 4 times that of normal DML, which is especially costly when working with large tables containing millions or billions of rows, as is common with table partitioning.

Remember that the new partition created by a SPLIT is the one that includes the specified boundary, which is to the left of the existing split partition with a RANGE LEFT function (the 2014 partition here).  Data are moved from the existing split partition into the newly created partition according to the boundaries of the new partition (greater than ‘2013-12-31T23:59:59.997’ and less than or equal to ‘2014-12-31T23:59:59.997’).

Again, I recommend using a RANGE RIGHT function to avoid this non-intuitive behavior.  Below is the equivalent RANGE RIGHT script for yearly partitions, which results in the desired filegroup mappings as well as no data movement.  Also, note that the datetime boundaries are exact date specifications with RANGE RIGHT, which is also more intuitive when working with temporal datetime, datetime2, and datetimeoffset data types that include a time component.

CREATE PARTITION FUNCTION PF_DateTimeRight(datetime) AS
    RANGE RIGHT FOR VALUES(
        N'2014-01-01T00:00:00.000'
    );

CREATE PARTITION SCHEME PS_DateTimeRight AS
    PARTITION PF_DateTimeRight TO(
          FG_2013
        , FG_2014
    );

CREATE TABLE dbo.PartitionedTableRight(
    DateTimeColumn datetime
    ) ON PS_DateTimeRight(DateTimeColumn);

INSERT INTO dbo.PartitionedTableRight VALUES
      (N'2013-01-01T01:02:03.340')
    , (N'2013-02-03T04:05:06.780')
    , (N'2014-01-01T01:02:03.340')
    , (N'2014-02-03T04:05:06.780')
    , (N'2014-03-04T05:06:07.890');
GO

ALTER PARTITION SCHEME PS_DateTimeRight
    NEXT USED FG_2015;

ALTER PARTITION FUNCTION PF_DateTimeRight()
    SPLIT RANGE(N'2015-01-01T00:00:00.000');

INSERT INTO dbo.PartitionedTableRight VALUES
    (N'2015-01-01T01:02:03.346');

The resultant partition function and scheme definitions are as desired with this RANGE RIGHT partition function:

CREATE PARTITION FUNCTION PF_DateTimeRight (datetime) AS
    RANGE RIGHT FOR VALUES(
          N'2014-01-01T00:00:00.000'
        , N'2015-01-01T00:00:00.000'
    );

CREATE PARTITION SCHEME PS_DateTimeRight AS
    PARTITION PF_DateTimeRight TO(
          FG_2013
        , FG_2014
        , FG_2015
    );

The Partition Removed by MERGE

When a partition is removed with MERGE, the dropped partition is the one that includes the specified boundary.  If the dropped partition is not empty, all data will be moved into the adjacent remaining partition.  Like SPLIT, costly data movement during partition maintenance should be avoided, so it is best to plan such that only empty partitions are removed.  A MERGE should typically be done after a purge/archive of data using SWITCH; a sketch follows the summaries below.

Below is a summary of a RANGE LEFT partition function MERGE:

  • Identify existing partition to be removed, which is the one that includes (to the left of) the specified existing boundary
  • For each table/index using the affected partition scheme(s), move rows from this partition into the adjacent right partition
  • Remove boundary from the partition function, maintaining boundary order and decrementing subsequent partition numbers
  • Remove the filegroup from each partition scheme that uses the function, unless the filegroup is mapped to another partition or is the NEXT USED filegroup

Below is a summary of a RANGE RIGHT partition function MERGE:

  • Identify existing partition to be removed, which is the one that includes (to the right of) the specified existing boundary
  • For each table/index using the affected partition scheme(s), move rows from this partition into the adjacent left partition
  • Remove boundary from the partition function, maintaining boundary order and decrementing subsequent partition numbers
  • Remove the filegroup from each partition scheme that uses the function, unless the filegroup is mapped to another partition or is the NEXT USED filegroup
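
Below is a minimal sketch of this purge-then-merge pattern using hypothetical objects: a RANGE RIGHT function PF_SalesDate with yearly boundaries, a partitioned table dbo.Sales, and an empty staging table dbo.SalesStaging with an identical schema on the same filegroup as the 2013 partition:

--switch the 2013 partition out of the table (a fast metadata-only operation)
ALTER TABLE dbo.Sales
    SWITCH PARTITION $PARTITION.PF_SalesDate(N'2013-01-01T00:00:00.000')
    TO dbo.SalesStaging;

--the 2013 partition is now empty, so removing its boundary moves no rows
ALTER PARTITION FUNCTION PF_SalesDate()
    MERGE RANGE(N'2013-01-01T00:00:00.000');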

I suggest explicit partition boundaries for expected data ranges to facilitate using both MERGE and SPLIT, and an additional one for the permanent partition (which I’ll detail shortly).  This practice helps ensure data are both logically and physically aligned, providing more natural partition management.

The Permanent Partition

You might not be aware that each partition scheme has a permanent partition that can never be removed.  This is the first partition of a RANGE RIGHT function and the last partition of a RANGE LEFT one.  Be mindful of this permanent partition when creating a new partition scheme with multiple filegroups because the filegroup on which the permanent partition is placed is determined when the partition scheme is created and can never be removed from the scheme.

My recommendation is that one create explicit partition boundaries for all expected data ranges plus a lower and upper boundary for data outside the expected range, and map these partitions to appropriately named filegroups.  This practice will make the purpose of each partition/filegroup clear and help avoid accidentally placing data on the wrong filegroup.

Consider mapping partitions containing data outside the expected range to a dummy filegroup with no underlying files.  This will guarantee data integrity much like a check constraint because data outside the allowable range cannot be inserted.  If you must accommodate errant data rather than rejecting it outright, instead map these partitions to a generalized filegroup like DEFAULT or one designated specifically for that purpose.

I suggest specifying a NULL value for the first boundary of a RANGE RIGHT partition function.  This NULL boundary serves as the upper boundary of the permanent first partition as well as the lower boundary for the second partition containing data outside the expected range.  No rows are less than NULL so the first partition will always be empty.  It is therefore safe to map the first partition to the previously mentioned dummy filegroup even if you need to house data outside the expected range.  That being said, there is no harm in mapping the first partition to another filegroup other than lack of clarity.

For the last boundary of a RANGE RIGHT function, I suggest specifying the lowest value outside the expected range and also mapping the partition to either the dummy filegroup, or one designated to contain unexpected data.  The boundaries between the first boundary (NULL) and this one designate partitions for expected data.

Summary

In summary, I recommend a RANGE RIGHT function with the following setup:

  • First boundary value NULL
  • Subsequent boundary values for expected data partitions
  • A final boundary value of greater than the expected range
  • Map first, second, and last partitions to either a dummy filegroup or one designated for unexpected data
  • Map remaining expected data partitions to appropriately named filegroups

A similar RANGE LEFT function can be set up as follows.  I’m including this only for completeness as the RANGE RIGHT setup above is a best practice, in my humble opinion.

  • First boundary for data less than the expected range
  • Subsequent boundaries for expected data partitions
  • A final boundary value of the maximum allowable value for the partitioning data type (which is another kludge that bolsters the case for RANGE RIGHT)
  • Map first, second from last, and last partitions to either a dummy filegroup or one designated for unexpected data
  • Map remaining expected data partitions to appropriately named filegroups

Below is an example script of applying these techniques with a RANGE RIGHT function, including adding an incremental partition for a new year.  Partitions for data outside the expected data range are mapped to the FG_NoData filegroup (which contains no files) so any attempt to insert data outside the expected range will fail.

--*********************************
--*** initial setup for 2 years ***
--*********************************

--add a dummy filegroup with no files
ALTER DATABASE PartitioningDemo
    ADD FILEGROUP FG_NoData;
GO

CREATE PARTITION FUNCTION PF_DateTimeRightWithExplictBoundaries(datetime) AS
    RANGE RIGHT FOR VALUES(
          NULL --prior to year 2013 (outside expected range)
        , N'2013-01-01T00:00:00.000' --year 2013 (within expected range)
        , N'2014-01-01T00:00:00.000' --year 2014 (within expected range)
        , N'2015-01-01T00:00:00.000' --after year 2014 (outside expected range)
    );

CREATE PARTITION SCHEME PS_DateTimeRightWithExplictBoundaries AS
    PARTITION PF_DateTimeRightWithExplictBoundaries TO(
          FG_NoData --first partition is the permanent partition and cannot be removed
        , FG_NoData --second partition contains data prior to year 2013 and is outside expected range
        , FG_2013 --year 2013 (within expected range)
        , FG_2014 --year 2014 (within expected range)
        , FG_NoData --after year 2014 (outside expected range)
    );

CREATE TABLE dbo.PartitionedTableRightWithExplictBoundaries(
        DateTimeColumn datetime
    ) ON PS_DateTimeRightWithExplictBoundaries(DateTimeColumn);

INSERT INTO dbo.PartitionedTableRightWithExplictBoundaries VALUES
      (N'2013-01-01T01:02:03.346')
    , (N'2013-02-03T04:05:06.780')
    , (N'2014-01-01T01:02:03.346')
    , (N'2014-02-03T04:05:06.780')
    , (N'2014-03-04T05:06:07.890');
GO

--**********************
--*** add a new year ***
--**********************

--specify FG_NoData filegroup to scheme as next used
ALTER PARTITION SCHEME PS_DateTimeRightWithExplictBoundaries
    NEXT USED FG_NoData;

--add 2016 boundary as outside expected range, mapped to FG_NoData
ALTER PARTITION FUNCTION PF_DateTimeRightWithExplictBoundaries()
    SPLIT RANGE(N'2016-01-01T00:00:00.000');

--remove 2015 boundary, currently mapped to FG_NoData
ALTER PARTITION FUNCTION PF_DateTimeRightWithExplictBoundaries()
    MERGE RANGE(N'2015-01-01T00:00:00.000');

--specify FG_2015 filegroup to scheme as next used
ALTER PARTITION SCHEME PS_DateTimeRightWithExplictBoundaries
    NEXT USED FG_2015;

--re-add 2015 boundary as within expected range, now mapped to FG_2015
ALTER PARTITION FUNCTION PF_DateTimeRightWithExplictBoundaries()
    SPLIT RANGE(N'2015-01-01T00:00:00.000');

INSERT INTO dbo.PartitionedTableRightWithExplictBoundaries VALUES
    (N'2015-01-01T01:02:03.346');


Always Use Semicolon Statement Terminators

ANSI-standard semicolon statement terminators are often omitted in T-SQL queries and many developers are unaware that this syntax is deprecated.  Omitting statement terminators is a dangerous practice because, even if the batch compiles, you may get unexpected results.  Consider the insidious examples below pointed out by SQL Server MVP Erland Sommarskog:

BEGIN TRY
	BEGIN TRAN
	SELECT 1/0 AS CauseAnException
	COMMIT
END TRY
BEGIN CATCH
	SELECT ERROR_MESSAGE()
	THROW
END CATCH

Few of us will catch (no pun intended) the bug in the above script.  What results do you expect after running the above script under SQL Server 2012 or later versions?  Rather than leave this as an exercise for the reader, I’ll spoil the fun and mention that no run-time error is raised at all.  Instead, the THROW statement is interpreted as a column alias for the ERROR_MESSAGE() column.  This sort of coding error is especially nasty because catch blocks are rarely unit tested and this catch block coding mistake hides the run-time error entirely without raising an exception.

Similarly, the absence of statement terminators in the script below causes another problem.  Can you spot it?

BEGIN TRY
	BEGIN TRAN
	SELECT 1/0 AS CauseAnException
	COMMIT
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRAN
    THROW
END CATCH

At least an error is raised in this case, albeit not the one you might expect.  The resultant error is “Cannot roll back THROW. No transaction or savepoint of that name was found”.  This coding bug obfuscates the preceding divide by zero error and prevents the THROW statement from being executed.
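
For reference, here is the same catch block with proper semicolon terminators.  The divide-by-zero error is now correctly rolled back and re-thrown:

BEGIN TRY
	BEGIN TRAN;
	SELECT 1/0 AS CauseAnException;
	COMMIT;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRAN;
    THROW;
END CATCH;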

Semicolons Will Become Mandatory
Microsoft announced with the SQL Server 2008 release that semicolon statement terminators will become mandatory in a future version, so statement terminators other than semicolons (whitespace) are currently deprecated.  This deprecation announcement means that you should always use semicolon terminators in new development.  I honestly don’t expect SQL Server to strictly enforce mandatory semicolons in the near future, but it is still a best practice to use semicolon statement terminators to avoid issues like those mentioned earlier as well as to facilitate code maintainability.  I suggest specifying statement terminators in all new development and perhaps adding terminators to existing code as you perform maintenance.

Transact-SQL does not currently enforce the ANSI semicolon statement terminator requirement.  Instead, semicolon statement terminators are optional and any whitespace (spaces, tabs, newline) may be used instead.  The exception to this rule is that many of the statements introduced in SQL Server 2005 and later require the preceding statement to be properly terminated in order for the batch to compile.

Below are some guidelines I suggest on when to, and when not to, use semicolon statement terminators.

Suggested Guidelines
The Transact-SQL parser is quite lax, allowing any whitespace (e.g. space, tab, newline) to be used instead of semicolons.  This laxness results in ambiguity, as the examples at the beginning of this article demonstrate.  Similarly, statement terminators may not only be omitted, they may also be used inappropriately.  I strongly suggest you adhere to the T-SQL syntax documented in the Books Online even if the parser allows otherwise.  This practice will help future-proof your code since relying on undocumented behavior is inherently risky.

Don’t precede a statement with a semicolon
Remember that the purpose of semicolons is to terminate SQL statements, not begin them.  A common mistake I see is throwing a semicolon in front of statements in order to get a batch of statements to compile, especially with newer statements like WITH (CTE expression) that require previous statement termination.  Although the T-SQL parser currently ignores extraneous and misplaced semicolons, I suggest they be specified in the appropriate place according to the statement syntax documented in the SQL Server Books Online.
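
The common leading-semicolon kludge illustrates the point.  Both snippets below compile, but only the second follows the documented syntax:

--misplaced: the semicolon begins the CTE statement rather than terminating the previous one
DECLARE @i int
;WITH Cte AS (SELECT 1 AS n) SELECT n FROM Cte;

--correct: the preceding statement is properly terminated
DECLARE @i int;
WITH Cte AS (SELECT 1 AS n) SELECT n FROM Cte;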

Specify semicolons at the end of each stand-alone SQL statement
Not only will this conform to the ANSI standard, your intent will be clearer and the code easier to read.

Terminate control-of-flow statement blocks at the end of the control-of-flow scope
Control-of-flow statements are not covered by the ANSI SQL standard because these are proprietary SQL extensions.  The SQL Server Books Online is sketchy on the subject and many of the examples (as of this writing) are inconsistent and do not always include statement terminators.  Furthermore, control-of-flow statement blocks are confusing due to the many variations, nesting, and optional BEGIN/END specifications.

Below are examples illustrating what I believe to be proper use of statement terminators and control-of-flow block terminators using IF statements in SQL 2008 and later versions.  The same concepts apply to other control-of-flow constructs like WHILE and TRY/CATCH blocks.  I should add that this batch example will not compile under SQL 2005 because an explicit BEGIN/END block is required to execute a common table expression conditionally in that version.  T-SQL parser enhancements eliminated that requirement in SQL 2008 and later.

IF 1 = 1
  WITH a AS (SELECT 1 a) SELECT a FROM a; --terminate statement and IF

IF 1 = 1
  WITH a AS (SELECT 1 a) SELECT a FROM a; --terminate statement
ELSE
  WITH a AS (SELECT 1 a) SELECT a FROM a; --terminate statement and IF

IF 1 = 1
  BEGIN
    WITH a AS (SELECT 1 a) SELECT a FROM a; --terminate statement
  END; --terminate IF

IF 1 = 1
  BEGIN
    WITH a AS (SELECT 1 a) SELECT a FROM a; --terminate statement
  END
ELSE
  BEGIN
    WITH a AS (SELECT 1 a) SELECT a FROM a; --terminate statement
  END; --terminate IF

IF 1 = 1
  WITH a AS (SELECT 1 a) SELECT a FROM a; --terminate statement
ELSE
  BEGIN
    WITH a AS (SELECT 1 a) SELECT a FROM a; --terminate statement
  END; --terminate IF

IF 1 = 1
  BEGIN
    WITH a AS (SELECT 1 a) SELECT a FROM a; --terminate statement
  END
ELSE
  WITH a AS (SELECT 1 a) SELECT a FROM a; --terminate statement and IF

IF 1 = 1
  BEGIN
    WITH a AS (SELECT 1 a) SELECT a FROM a; --terminate statement
	IF 1 = 1
		WITH a AS (SELECT 1 a) SELECT a FROM a; --terminate statement
	ELSE
	BEGIN
		WITH a AS (SELECT 1 a) SELECT a FROM a; --terminate statement
	END; --terminate inner nested IF
	WITH a AS (SELECT 1 a) SELECT a FROM a; --terminate statement
  END
ELSE
  WITH a AS (SELECT 1 a) SELECT a FROM a; --terminate statement and outer IF

Summary
Consistent use of semicolons helps avoid bugs in code that might otherwise go undetected.  Code with statement terminators can be modified more easily without introducing compile errors and is easier to maintain because the end of each statement is readily apparent to subsequent developers.  Importantly, you’ll be better positioned for future SQL Server versions by consistently using semicolon statement terminators.

Help Stop SQL Injection Madness

SQL injection continues to be the most exploited security vulnerability on the internet. A SQL injection vulnerability can put companies out of business and expose innocent people to sensitive data disclosure and damage, all due to developer ignorance or carelessness. Sadly, although there is much information available on SQL injection exploits and mitigation, SQL injection exploits continue because developers use flawed database access techniques. This article details the best, and simplest, method to prevent SQL injection in SQL Server: parameterized SQL statements.

Parameterized SQL queries and stored procedure calls (with CommandType.StoredProcedure and no dynamic SQL within proc code) prevent unintended SQL from being executed. I’ll discuss exactly why shortly. Parameterized SQL also provides many benefits besides security. Notably:

  • allows quotes in data without programmatic escaping
  • avoids the need to format date strings according to DATEFORMAT session settings
  • handles decimal separators automatically
  • improves performance due to plan reuse
  • provides cleaner code

Most articles on SQL injection focus on validating user input rather than parameterized queries.  Input validation certainly has benefits related to data integrity and user-friendly validation messages but it is at best a secondary defense against injection compared to parameterized queries.  Input validation using a blacklist of prohibited keywords and characters is especially weak.

SQL Injection Overview
I am puzzled by the number of examples in articles and forum posts that show building SQL statements with literals using a string concatenation technique.  I suspect the main reason for this practice is that developers write code that generates the exact same ad-hoc SQL statement they would write using a query tool like SQL Server Management Studio or Visual Studio.  This leads to the nasty habit of not using parameters.  For example, a developer might develop and test a query like this in SSMS:

SELECT FirstName, LastName, EmailAddress
FROM dbo.Customer
WHERE CustomerID = 12345;

After testing the query, the C# developer includes the SQL statement in the application code but, instead of a hard-coded value for CustomerID, the desired value is concatenated with the remainder of the SQL statement as a literal.

var command = new SqlCommand(
             "SELECT FirstName, LastName, EmailAddress" +
            " FROM dbo.Customer " +
            " WHERE CustomerID = '" + txtBoxCustomerID.Text + "';"
            ,connection);

The query runs perfectly well and returns the expected result in testing.  It is not until after production deployment that a script kiddie or hacker finds the SQL injection vulnerability.  A malicious person can easily manipulate this SQL statement in ways unintended by the developer.  For example, a hacker could enter the CustomerID value “1' OR '1' = '1” in the text box to harvest the names and email addresses of all customers in the database.  This vulnerability can also be exploited to execute additional SQL statements in the same batch, which is often used to update data with malicious html script that is subsequently returned and executed by unsuspecting client browsers.  There are many ways to exploit a SQL injection vulnerability that result in sensitive data disclosure and/or database manipulation.  I won’t detail those here but rather recommend a very simple defense: use a parameterized statement instead of literals.

Mitigating SQL Injection
Below is a functionally identical parameterized query.

SELECT FirstName, LastName, EmailAddress
FROM dbo.Customer
WHERE CustomerID = @CustomerID;

This query can be incorporated into the C# application code with the parameter and corresponding value added to the parameterized SQL command as a strongly-typed parameter:

var command = new SqlCommand(
             "SELECT LastName" +
            " FROM dbo.Customer " +
            " WHERE CustomerID = @CustomerID;"
            ,connection);
command.Parameters.Add("@CustomerID", SqlDbType.Int).Value = int.Parse(textBoxCustomerID.Text);

As you can see, the effort needed to do the same job with a parameterized command is about the same as the string concatenation method, yet the security benefits are immeasurable.

Why Parameters Prevent SQL Injection
Parameterized SQL is more secure for a couple of reasons.  In the case of an ad-hoc query, the SQL statement with parameter markers is hard-coded in the application (or generated via an ORM framework) and passed to SQL Server as-is.  Parameter values are passed separately rather than inside the SQL statement itself.  This ensures the SQL statement cannot be modified by the values provided.

Similarly, a stored procedure call (using CommandType.StoredProcedure) sends only the stored procedure name to SQL Server with separately passed parameter values.  Using stored procedures also allows one to more strictly adhere to the security principle of least privilege.  By granting only stored procedure execute permissions, permissions on indirectly referenced objects are not needed as long as the ownership chain is unbroken.  This practice limits ad-hoc access to underlying data should an attacker find a back door into the database under the application security context.  Personally, I’m a staunch advocate of stored procedures for this and other reasons too.  Stored procedures provide a well-defined database interface, allowing queries and database schema to be refactored without breaking changes.

I should add that it is perfectly acceptable to build a SQL statement dynamically in either application code or stored procedures, but only if parameterized and actually needed for the task at hand.  A common use case is a dynamic search query that is constructed based on user criteria. Instead of coding a separate query for each possible search permutation, one can add parameterized WHERE clauses as needed.  Under no circumstances should a SQL statement string be built with concatenation of untrusted values.
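
Below is a minimal sketch of such a dynamic search as a stored procedure, using hypothetical dbo.Customer search columns.  Note that user input influences only which predicates are appended; the values themselves are always bound as parameters:

CREATE PROC dbo.usp_search_customers
      @LastName nvarchar(50) = NULL
    , @CustomerID int = NULL
AS
DECLARE @sql nvarchar(MAX) = N'SELECT CustomerID, FirstName, LastName, EmailAddress FROM dbo.Customer WHERE 1 = 1';

--append only the predicates needed for the supplied criteria
IF @LastName IS NOT NULL SET @sql += N' AND LastName = @LastName';
IF @CustomerID IS NOT NULL SET @sql += N' AND CustomerID = @CustomerID';

--values are passed separately, never concatenated into the statement
EXEC sp_executesql
      @sql
    , N'@LastName nvarchar(50), @CustomerID int'
    , @LastName = @LastName
    , @CustomerID = @CustomerID;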

Additional Precautions
Use Windows Authentication for SQL Server database access.  This practice eliminates the need to store security credentials in application configuration files.  If SQL authentication must be used, protect the credentials with encryption.  This is easily done in .NET by encrypting the connectionStrings section of the app.config file.  See http://msdn.microsoft.com/en-us/library/ms254494.aspx.

Adhere to the principle of least privilege.  Never use a privileged account, such as a sysadmin role member, for routine application access.  Grant permissions only on database objects which are used directly by applications.  Use stored procedures so that one can leverage ownership chaining, eliminating the need to grant permissions on tables directly.
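
A minimal sketch of this permission model, assuming a hypothetical application login already mapped to database user AppUser:

CREATE ROLE AppExecRole;
ALTER ROLE AppExecRole ADD MEMBER AppUser;

--execute permission on procedures only; with an unbroken ownership chain,
--no permissions on the underlying tables are needed
GRANT EXECUTE ON SCHEMA::dbo TO AppExecRole;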

Specify strongly-typed parameters.  The parameter data type can be inferred (e.g. using SqlCommand.Parameters.AddWithValue method) but a best practice is to explicitly specify a parameter data type that matches the underlying column data type and maximum length.  These practices will minimize the size of the SQL Server procedure cache and avoid implicit conversions that can negatively affect performance.

Maximizing Performance with Table-Valued Parameters

I’ve seen TVPs improve performance by orders of magnitude when used appropriately.  This feature isn’t used as often as it should be so I’ll show how to implement a TVP using a C# application example and discuss TVP considerations.

Why Table-Valued Parameters Improve Performance
TVPs allow one to pass multiple rows or values at once to a query or stored procedure.  By doing so, the query can leverage set-based operations to greatly improve performance compared to executing the query/proc many times.  TVPs decrease network latency by reducing network round trips.  Prior to TVPs, one had to employ workarounds like passing a string containing a delimited list of records (or XML) and parsing on the SQL side.  Those techniques are at best a kludge.  Instead, TVPs are the right tool for the job in SQL Server 2008 and later.

TVPs (and table variables) are temp tables.  TVPs and table variables are persisted in tempdb much the same way as regular temp tables (# prefix).  The big differences are that TVPs and table variables are limited in scope to the current batch and do not have statistics.  Also, table parameters/variables allow only indexes declared via primary key or unique constraints; non-unique indexes are not supported.

When a TVP is passed to SQL Server from a client application, the data is bulk-inserted into tempdb where it is made available to the SQL statement or stored procedure using the declared TVP.  This bulk insert is handled by the client API transparently to the application and allows a large number of rows to be passed to SQL Server efficiently.  In T-SQL, one can declare and load a table variable, where it can be used directly in a SQL statement or passed as a TVP to a stored procedure or query (invoked with sp_executesql).
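
For example, here is a minimal T-SQL-only sketch using the dbo.CustomerList type and procedure from Listings 1 and 2 below:

DECLARE @CustomerList dbo.CustomerList;

INSERT INTO @CustomerList (CustomerID)
    VALUES (11000), (11001), (11002);

--pass the loaded table variable as a TVP
EXEC dbo.usp_select_orders_for_multiple_customers @CustomerList = @CustomerList;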

The exact threshold where TVPs outperform single-row operations is much lower than you might expect.  Your mileage may vary but, at least in my experience, it’s only a few rows.  The performance tests I ran here show that the threshold where a TVP outperforms individual requests with a simple list of customers is only 4 rows.  That said, I would not recommend TVPs for single-row operations due to the start-up overhead but it’s a good choice when the most common use case is multiple rows.

How to Use TVPs
In order to pass a TVP to a SQL statement or stored procedure, one must first create a user-defined table type in SQL Server.  The table type defines the schema of the table SQL Server expects for the TVP.  Listing 1 shows the CREATE DDL for the table type I use in the AdventureWorks2012 database to pass a list of customers to the stored procedure in Listing 2 that returns all orders for these customers.  Note that the stored procedure TVP must be declared as READONLY.

Listing 1: Table type DDL

CREATE TYPE dbo.CustomerList AS TABLE(
    CustomerID int NOT NULL
        PRIMARY KEY
);

Listing 2: Stored procedure to return orders for multiple customers

CREATE PROC dbo.usp_select_orders_for_multiple_customers
    @CustomerList dbo.CustomerList READONLY
AS
SELECT soh.CustomerID, soh.SalesOrderID
FROM Sales.SalesOrderHeader AS soh
JOIN @CustomerList AS cl ON
    cl.CustomerID = soh.CustomerID;
RETURN @@ERROR;

TVPs are passed from a .NET application using the parameter data type SqlDbType.Structured.  The actual parameter value can be an object of type DataTable, DbDataReader or IEnumerable<SqlDataRecord>.  A DataTable is appropriate when the required TVP data is already in a DataTable object needed for other purposes.  A DbDataReader is a good choice when the source data is the result of a query (note that any DbDataReader may be used, not just a SqlDataReader).  For other cases, I recommend an IEnumerable<SqlDataRecord>.  It is very easy to implement IEnumerable<SqlDataRecord>, especially if you already have an enumerable object like an array or collection containing the values you need to pass.  See the code at the end of this article for an example class that implements IEnumerable<SqlDataRecord> for a TVP.

A benefit with DbDataReader and IEnumerable<SqlDataRecord> is that the source data can be streamed to SQL Server without first loading all the TVP data into memory in either the client application or in SQL Server.  This is a big consideration when a lot of data needs to be passed because you are not constrained by memory on either the client or SQL Server side.

Listing 3 shows the stored procedure I use to compare performance with the TVP method.  This proc takes only a single @CustomerID parameter so it must be called once for each customer.

Listing 3: Stored procedure to return orders for a single customer

CREATE PROC dbo.usp_select_orders_for_single_customer
    @CustomerID int
AS
SELECT soh.CustomerID, soh.SalesOrderID
FROM Sales.SalesOrderHeader AS soh
WHERE soh.CustomerID = @CustomerID;
RETURN @@ERROR;

The complete C# console application I used for the performance tests is in Listing 4.  The code demonstrates the 3 ways to pass a TVP from C# and logs timings for each invocation.  As you can see in Figure 1, the TVP outperformed individual calls once more than 3 customers were passed.  Note that I had to use a logarithmic y-axis scale because of the huge performance difference.  For example, 10,000 individual calls averaged about 1.5 seconds whereas passing 10,000 customers via a TVP took less than 100 milliseconds.

Figure 1: TVP versus individual call performance

Listing 4: Passing TVPs in C#

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Data;
using System.Data.SqlClient;
using System.Diagnostics;
using Microsoft.SqlServer.Server;
using System.IO;

namespace TVPTest
{
    class Program
    {

        private static string connectionString = @"Data Source=MyServer;Initial Catalog=AdventureWorks2012;Integrated Security=SSPI";
        private static StreamWriter logFile = new StreamWriter(@"TvpTest.log", false);

        static void Main(string[] args)
        {

            // write column headers to log file
            logFile.WriteLine("{0},{1},{2}"
                , "Test Name"
                , "Customers"
                , "Durations");

            // run each performance test 100 times
            for (int i = 0; i < 100; ++i)
            {
                runPerformanceTest(1);
                runPerformanceTest(2);
                runPerformanceTest(3);
                runPerformanceTest(4);
                runPerformanceTest(5);
                runPerformanceTest(10);
                runPerformanceTest(25);
                runPerformanceTest(50);
                runPerformanceTest(100);
                runPerformanceTest(500);
                runPerformanceTest(1000);
                runPerformanceTest(2500);
                runPerformanceTest(5000);
                runPerformanceTest(10000);
                logFile.Flush();
            }

            logFile.Close();

            Console.WriteLine("Press any key to close.");
            Console.ReadKey();
        }

        private static void runPerformanceTest(int customerCount)
        {

            Console.WriteLine("{0} customers:", customerCount);

            DataTable orderDataTable;
            Stopwatch sw;
            DateTime duration;

            //individual call baseline
            sw = Stopwatch.StartNew();
            orderDataTable = getOrdersForCustomersIndividually(customerCount);
            sw.Stop();
            duration = new DateTime(sw.Elapsed.Ticks);
            Console.WriteLine("\tIndividual call: {0} orders returned, duration is {1}"
                , orderDataTable.Rows.Count, duration.ToString("HH:mm:ss.ffffff"));
            logFile.WriteLine("{0},{1},{2}"
                , "Individual call"
                , customerCount
                , duration.ToString("HH:mm:ss.ffffff"));

            //TVP from DataTable
            sw = Stopwatch.StartNew();
            orderDataTable = getOrdersForCustomersUsingTvp(getCustomerListDataTable(customerCount));
            sw.Stop();
            duration = new DateTime(sw.Elapsed.Ticks);
            Console.WriteLine("\tTVP DataTable: {0} orders returned, duration is {1}"
                , orderDataTable.Rows.Count, duration.ToString("HH:mm:ss.ffffff"));
            logFile.WriteLine("{0},{1},{2}"
                , "TVP DataTable"
                , customerCount
                , duration.ToString("HH:mm:ss.ffffff"));

            //TVP from DataReader
            sw = Stopwatch.StartNew();
            orderDataTable = getOrdersForCustomersUsingTvp(getCustomerListDataReader(customerCount));
            sw.Stop();
            duration = new DateTime(sw.Elapsed.Ticks);
            Console.WriteLine("\tTVP DataReader: {0} orders returned, duration is {1}"
                , orderDataTable.Rows.Count, duration.ToString("HH:mm:ss.ffffff"));
            logFile.WriteLine("{0},{1},{2}"
                , "TVP DataReader"
                , customerCount
                , duration.ToString("HH:mm:ss.ffffff"));

            //TVP from IEnumerable
            sw = Stopwatch.StartNew();
            orderDataTable = getOrdersForCustomersUsingTvp(getCustomerListSqlDataRecords(customerCount));
            sw.Stop();
            duration = new DateTime(sw.Elapsed.Ticks);
            Console.WriteLine("\tTVP IEnumerable: {0} orders returned, duration is {1}"
                , orderDataTable.Rows.Count, duration.ToString("HH:mm:ss.ffffff"));
            logFile.WriteLine("{0},{1},{2}"
                , "TVP IEnumerable"
                , customerCount
                , duration.ToString("HH:mm:ss.ffffff"));

            Console.WriteLine();

        }

        /// <summary>
        /// Call proc individually for each customer
        /// </summary>
        /// <param name="customerCount">Number of customers to return</param>
        /// <returns>DataTable of customer orders</returns>
        private static DataTable getOrdersForCustomersIndividually(int customerCount)
        {
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(@"dbo.usp_select_orders_for_single_customer", connection))
            {
                command.CommandType = CommandType.StoredProcedure;
                var customerIdParameter = command.Parameters.Add("@CustomerID", System.Data.SqlDbType.Int);
                var orderListDataAdapter = new SqlDataAdapter(command);
                var orderDataTable = new DataTable();
                for (int customerId = 11000; customerId < 11000 + customerCount; ++customerId)
                {
                    customerIdParameter.Value = customerId;
                    orderListDataAdapter.Fill(orderDataTable);
                }
                return orderDataTable;
            }
        }

        /// <summary>
        /// Call proc with TVP of CustomerIDs
        /// </summary>
        /// <param name="customerList">List of customers</param>
        /// <returns>DataTable of customer orders</returns>
        private static DataTable getOrdersForCustomersUsingTvp(Object customerList)
        {
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(@"dbo.usp_select_orders_for_multiple_customers", connection))
            {
                command.CommandType = CommandType.StoredProcedure;
                command.Parameters.Add("@CustomerList", System.Data.SqlDbType.Structured).Value = customerList;
                var orderListDataAdapter = new SqlDataAdapter(command);
                var orderDataTable = new DataTable();
                orderListDataAdapter.Fill(orderDataTable);
                return orderDataTable;
            }
        }

        /// <summary>
        /// Create a DataTable containing list of customers for TVP
        /// </summary>
        /// <param name="customerCount">Number of customers to return</param>
        /// <returns>DataTable with single CustomerID column</returns>
        private static DataTable getCustomerListDataTable(int customerCount)
        {
            var customerListDataTable = new DataTable();
            customerListDataTable.Columns.Add("CustomerID", typeof(int));

            for (int customerId = 11000; customerId < 11000 + customerCount; ++customerId)
            {
                var row = customerListDataTable.NewRow();
                customerListDataTable.Rows.Add(row);
                row[0] = customerId;
            }
            return customerListDataTable;
        }

        /// <summary>
        /// Create a DataReader containing list of customers for TVP
        /// </summary>
        /// <param name="customerCount">Number of customers to return</param>
        /// <returns>DataReader with single CustomerID column</returns>
        private static IDataReader getCustomerListDataReader(int customerCount)
        {
            var connection = new SqlConnection(connectionString);
            var command = new SqlCommand(@"SELECT CustomerID FROM Sales.Customer WHERE CustomerID BETWEEN 11000 AND 11000 + @CustomerCount - 1;", connection);
            command.Parameters.Add("@CustomerCount", System.Data.SqlDbType.Int).Value = customerCount;
            connection.Open();
            command.CommandType = CommandType.Text;
            return command.ExecuteReader();

        }

        /// <summary>
        /// Create an IEnumerable<SqlDataRecord> object containing list of customers for TVP
        /// </summary>
        /// <param name="customerCount">Number of customers to return</param>
        /// <returns>IEnumerable<SqlDataRecord> with single CustomerID column</returns>
        private static IEnumerable<SqlDataRecord> getCustomerListSqlDataRecords(int customerCount)
        {
            var customerList = new List<int>();
            for (int customerId = 11000; customerId < 11000 + customerCount; ++customerId)
            {
                customerList.Add(customerId);
            }
            return new CustomerListSqlDataRecords(customerList);
        }

        /// <summary>
        /// Implement IEnumerable<SqlDataRecord> to return a SqlDataRecord for each CustomerID
        /// in the provided IEnumerable<int>
        /// </summary>
        private class CustomerListSqlDataRecords : IEnumerable<SqlDataRecord>
        {
            private SqlMetaData metaData = new SqlMetaData("CustomerID", SqlDbType.Int);

            private IEnumerable<int> customerList;

            public CustomerListSqlDataRecords(IEnumerable<int> customerList)
            {
                this.customerList = customerList;
            }

            public IEnumerator<SqlDataRecord> GetEnumerator()
            {
                foreach (var customerId in customerList)
                {
                    var record = new SqlDataRecord(metaData);
                    record.SetInt32(0, customerId);
                    yield return record;
                }
            }

            System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator()
            {
                throw new NotImplementedException();
            }
        }
    }
}

Improving Uniqueidentifier Performance

A common anti-pattern I run into is the random primary key, commonly a GUID. This design is insidious because the performance implications of random access aren’t immediately obvious and are exacerbated when the primary key index is clustered. It is often only after the table grows to a larger size that the performance problems become apparent. Symptoms include slowly degrading performance over time, with increased blocking and deadlocking as a side effect.

Figure 1 shows the performance profile of random inserts with a random GUID (SQL Server uniqueidentifier data type) clustered primary key. The red line indicates the rate of batch requests per second (inserts) while the blue line shows the total number of rows in the table, scaled such that the top of the graph represents 3M rows. Only about 700,000 rows could be inserted during this 15-minute single-threaded random key insert test, even though the insert rate was fast initially.

Figure 1: Random key insert performance

Incremental Primary Keys

As you might guess, the cure for the random primary key anti-pattern is an incremental key pattern. With a uniqueidentifier data type, a sequential value can be assigned by SQL Server using the NEWSEQUENTIALID function (in a default constraint expression) or in application code using the UuidCreateSequential Win32 API call along with some byte swapping (code example below). Alternatively, one can use an integral data type (int, bigint, etc.) along with a value generated by an IDENTITY property or a SEQUENCE object. The advantage of an integral type is the reduced space requirements compared to a 16-byte uniqueidentifier. The advantage of a uniqueidentifier is that it can easily be generated in application code before database persistence without a database round trip, which is desirable for distributed applications and when keys of related tables are assigned in application code before writing to the database.
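
For example, below is a minimal sketch of server-side sequential GUID assignment; note that NEWSEQUENTIALID is allowed only in a default constraint expression:

CREATE TABLE dbo.ExampleSequential(
	ExampleID uniqueidentifier NOT NULL
		CONSTRAINT DF_ExampleSequential_ExampleID DEFAULT NEWSEQUENTIALID()
		CONSTRAINT PK_ExampleSequential PRIMARY KEY CLUSTERED
	,ExampleData varchar(100) NOT NULL
);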

Figure 2 shows the same test using a sequential key value. Over 2.2M rows were inserted in 15 minutes. As you can see, significant performance improvement is achieved with this trivial application change.

Figure 2: Incremental key insert performance

Listing 1 shows the T-SQL code I used for these performance tests and Listing 2 contains the C# code (with the random GUID commented out). I generated the uniqueidentifier value via application code in the tests but performance with NEWID() is comparable to the first test and NEWSEQUENTIALID() is similar to the second test.

Listing 1: T-SQL scripts for test table and stored procedure

CREATE TABLE dbo.TestTable(
	TestKey uniqueidentifier NOT NULL
		CONSTRAINT PK_TestTable PRIMARY KEY CLUSTERED
	,TestData char(8000) NOT NULL
);
GO

CREATE PROC dbo.InsertTestTable
	@TestKey uniqueidentifier
	,@TestData char(8000)
AS
SET NOCOUNT ON;
DECLARE @TotalRows int;

--insert row
INSERT INTO dbo.TestTable (TestKey, TestData) 
	VALUES(@TestKey, @TestData);

--update pmon counter for rowcount
SELECT @TotalRows = rows 
FROM sys.partitions 
WHERE object_id = OBJECT_ID(N'TestTable') AND index_id = 1;
EXEC sys.sp_user_counter1 @TotalRows; --for pmon row count

RETURN @@ERROR;
GO

Listing 2: C# insert test console application

using System;
using System.Data;
using System.Data.SqlClient;
using System.Runtime.InteropServices;

namespace UniqueIdentifierPerformanceTest
{
    class Program
    {
        [DllImport("rpcrt4.dll", CharSet = CharSet.Auto)]
        public static extern int UuidCreateSequential(ref Guid guid);

        static string connectionString = @"Data Source=MyServer;Initial Catalog=MyDatabase;Integrated Security=SSPI";
        static int rowsToInsert = 10000000;
        static SqlConnection connection;
        static SqlCommand command;

        static void Main(string[] args)
        {

            int rowsInserted = 0;

            using (connection = new SqlConnection(connectionString))
            {
                using (command = new SqlCommand("dbo.InsertTestTable", connection))
                {
                    command.Connection = connection;
                    command.CommandType = CommandType.StoredProcedure;
                    command.Parameters.Add("@TestKey", SqlDbType.UniqueIdentifier);
                    command.Parameters.Add("@TestData", SqlDbType.Char, 8000);

                    connection.Open();

                    while (rowsInserted < rowsToInsert)
                    {
                        //random guid
                        //command.Parameters["@TestKey"].Value = Guid.NewGuid();

                        //sequential guid
                        command.Parameters["@TestKey"].Value = NewSequentialGuid();
                        command.Parameters["@TestData"].Value = "Test";
                        command.ExecuteNonQuery();
                        ++rowsInserted;

                        //display progress every 1000 rows
                        if (rowsInserted % 1000 == 0)
                        {
                            Console.WriteLine(string.Format(
                                "{0} of {1} rows inserted"
                                , rowsInserted.ToString("#,##0")
                                , rowsToInsert.ToString("#,##0")));
                        }
                    }
                }

                connection.Close();
            }

        }

        /// <summary>
        /// call UuidCreateSequential and swap bytes for SQL Server format
        /// </summary>
        /// <returns>sequential guid for SQL Server</returns>
        private static Guid NewSequentialGuid()
        {
            const int S_OK = 0;
            const int RPC_S_UUID_LOCAL_ONLY = 1824;

            Guid oldGuid = Guid.Empty;

            int result = UuidCreateSequential(ref oldGuid);
            if (result != S_OK && result != RPC_S_UUID_LOCAL_ONLY)
            {
                throw new ExternalException("UuidCreateSequential call failed", result);
            }

            byte[] oldGuidBytes = oldGuid.ToByteArray();
            byte[] newGuidBytes = new byte[16];
            oldGuidBytes.CopyTo(newGuidBytes, 0);

            // swap low timestamp bytes (0-3)
            newGuidBytes[0] = oldGuidBytes[3];
            newGuidBytes[1] = oldGuidBytes[2];
            newGuidBytes[2] = oldGuidBytes[1];
            newGuidBytes[3] = oldGuidBytes[0];

            // swap middle timestamp bytes (4-5)
            newGuidBytes[4] = oldGuidBytes[5];
            newGuidBytes[5] = oldGuidBytes[4];

            // swap high timestamp bytes (6-7)
            newGuidBytes[6] = oldGuidBytes[7];
            newGuidBytes[7] = oldGuidBytes[6];

            //remaining 8 bytes are unchanged (8-15) 

            return new Guid(newGuidBytes);

        }

    }
}

Why Random Keys Are Bad

I think it’s important for one to understand why random keys have such a negative impact on performance against large tables. DBAs often cite fragmentation and page splits as the primary causes of poor performance with random keys. Although it is true random inserts do cause fragmentation and splits, the primary cause of bad performance with random keys is poor temporal reference locality (http://en.wikipedia.org/wiki/Locality_of_reference), which I’ll detail shortly. Note that there were no real page splits in these insert performance tests because the nearly 8K row size allowed only one row per page. Although significant extent fragmentation occurred, this didn’t impact these single-row requests; extent fragmentation is mostly an issue with sequential scans against spinning media. So neither splits nor fragmentation explain the poor performance of the random inserts.

Temporal reference locality basically means that once data is used (e.g. inserted or touched in any way), it is likely to be used again in the near future. This is why SQL Server uses a LRU-2 algorithm to manage the buffer cache; data most recently touched will remain in memory while older, less often referenced data are aged out. The impact of random key values on temporal locality (i.e. buffer efficiency) is huge. Consider that inserts are basically rewrites of existing pages. When a new row is inserted into a table, SQL Server first reads the page where the row belongs (by key value if the table has a clustered index) and then either adds the row to the existing data page or allocates a new one if there’s not enough space available in the existing page for the new row. With a random key value, the new key value is unlikely to be adjacent to the last one inserted (which is probably still in memory) so the needed page often must be read from storage.

All things being equal, single-row performance will be roughly the same with both sequential and random keys as long as data are memory resident. This is why the random and sequential key insert tests show the same good performance initially. But once the table size exceeded the size of the buffer pool, the random key test showed a precipitous drop in throughput and steady degradation thereafter. In short, random keys diminish temporal reference locality because there is no correlation between time (most recently accessed data) and the key value.

Why Incremental Keys Are Good

An incremental key value naturally improves temporal reference locality; the next key value is adjacent to the last one inserted and is likely still in memory. An incremental key provides excellent insert performance regardless of table size, as the insert performance test shows. Also, applications typically use recently inserted data more often than older data. This allows the same amount of work to be done with much less physical I/O than with a random key value.

Random Notes about GUIDs

According to the Globally unique identifier Wikipedia article (http://en.wikipedia.org/wiki/Globally_unique_identifier), the random 122 bits of a GUID can generate 2^122 unique values. That’s an incomprehensibly large 5.3 x 10^36 (or 5,300,000,000,000,000,000,000,000,000,000,000,000) unique values.

The value returned by NEWSEQUENTIALID and UuidCreateSequential is guaranteed to be unique on a given computer. Furthermore, it is globally unique if the computer has a network card because the MAC address is used as part of the GUID generation algorithm.