The Curious Case of Undetected SQL Exceptions

Undetected database errors are insidious; when an error is silently dropped on the floor, the application continues with incomplete or wrong results. Consider that this simple SELECT query returns an empty result set instead of raising a SqlException for the divide-by-zero error:

string sqlBatchText = @"
BEGIN TRY

    SELECT 1/0 AS DivideByZeroError;

END TRY
BEGIN CATCH

    THROW;

END CATCH;
";

int rowCount = 0;

try
{

    using (SqlConnection connection = new SqlConnection(@"Data Source=.;Initial Catalog=tempdb;Integrated Security=SSPI"))
    using (SqlCommand command = new SqlCommand(sqlBatchText, connection))
    {
        connection.Open();
        using (SqlDataReader reader = command.ExecuteReader())
        {
            while (reader.Read()) { ++rowCount; }
        }
        Console.WriteLine("{0} rows read", rowCount);
    }
}
catch
{
    throw;
}

One can generally assume SQL errors raised during batch execution will also raise ADO.NET database exceptions. However, there are cases involving multi-statement batches and stored procedures where exceptions are not raised by ADO.NET automatically following SQL errors, as the above example shows. These scenarios can be distilled as:

1) An error is caught in T-SQL while executing a row-returning statement.
2) An error occurs after a row-returning statement successfully executes.

To ensure database exceptions are raised, one must either code T-SQL to avoid these scenarios entirely or ensure the ADO.NET data access layer invokes the data reader NextResult method to consume all results returned, even when no more than one result set is expected. Row-returning statements include SELECT (excluding variable assignment), INSERT/UPDATE/DELETE/MERGE statements with an OUTPUT clause that returns rows to the client, as well as some specialized commands like RESTORE FILELISTONLY and DBCC commands executed WITH TABLERESULTS.

The defensive programming technique example below executes a command that is expected to return a single result set but still calls NextResult afterwards to ensure database exceptions are raised when SQL errors are subsequently raised in the T-SQL batch.

string sqlBatchText = @"
BEGIN TRY

    SELECT 1/0 AS DivideByZeroError;

END TRY
BEGIN CATCH

    THROW;

END CATCH;
";

int rowCount = 0;

try
{

    using (SqlConnection connection = new SqlConnection(@"Data Source=.;Initial Catalog=tempdb;Integrated Security=SSPI"))
    using (SqlCommand command = new SqlCommand(sqlBatchText, connection))
    {
        connection.Open();
        using (SqlDataReader reader = command.ExecuteReader())
        {
            while (reader.Read()) { ++rowCount; }
            //get any subsequent errors returned
            while (reader.NextResult()) { };
        }
        Console.WriteLine("{0} rows read", rowCount);
    }
}
catch
{
    throw;
}

I use System.Data.SqlClient namespace objects (.NET Framework Data Provider for SQL Server) in this article, but the same considerations apply to other ADO.NET managed providers (the System.Data.Odbc and System.Data.OleDb namespaces) when accessing SQL Server.

Regardless of the method one uses to execute SQL Server queries, ADO.NET uses a data reader to return command results even when higher-level objects (e.g. Dataset) or ORMs (e.g. Entity Framework) are used. The low-level ADO.NET command ExecuteReader method exposes the data reader whereas ExecuteScalar and ExecuteNonQuery do not expose the internal reader.

ExecuteScalar returns the first column of the first row returned as a scalar value but doesn’t call NextResult on the internal data reader to retrieve subsequent results. Consequently, errors may go undetected with ExecuteScalar. ExecuteScalar will not raise an exception if a T-SQL error occurs after the first row is returned. Also, if no rows are returned because the row-returning statement erred and the error was caught in T-SQL, ExecuteScalar returns a null object without raising an exception.

ExecuteNonQuery executes the entire batch of statements and returns the accumulated count of affected rows as a scalar value, discarding rows returned (if any). The returned value will be -1 if SET NOCOUNT ON is specified. Because ExecuteNonQuery internally consumes all results in the process, errors will be raised without additional ADO.NET programming, although one doesn't typically use ExecuteNonQuery to execute a batch that returns rows. Again, the ADO.NET error detection issue only applies to row-returning statements.

The remainder of this article discusses T-SQL error handling and ADO.NET defensive programming techniques in more detail and discusses techniques to avoid undetected database errors in ADO.NET.

T-SQL Error Handling Objectives
T-SQL and ADO.NET data access code must work in concert with one another to ensure SQL errors are detected in application code. The T-SQL constructs used in multi-statement batches can affect if and when errors are reported by ADO.NET during batch execution. I'll start by citing core T-SQL error handling objectives, which can be summarized as:

1) Ensure a multi-statement T-SQL batch doesn't continue after an error occurs.
2) Roll back the transaction after errors.
3) Raise the error so that the client application is aware a problem occurred.

The T-SQL building blocks used to achieve these objectives are:
1) SET XACT_ABORT ON
2) Structured error handling (SEH) (a.k.a. TRY/CATCH)
3) Control-of-flow (e.g. IF @@ERROR <> 0 GOTO ErrorHandler)

T-SQL Behavior Without SEH and XACT_ABORT ON
When a runtime error occurs with the XACT_ABORT session setting ON outside a TRY block, SQL Server will stop batch execution immediately, roll back the transaction (if any), and raise the error. Consequently, a single SET XACT_ABORT ON statement will meet all aforementioned error handling objectives without T-SQL procedural code. However, the XACT_ABORT setting is not considered when user-defined errors are raised with RAISERROR, so control-of-flow statements are required to meet objectives #1 and #2 when RAISERROR is employed.

SET XACT_ABORT ON also rolls back open transactions following an attention event like an explicit cancel or query timeout, which would otherwise leave the transaction open. This is one reason why I strongly recommend using SET XACT_ABORT ON, especially in procs that include BEGIN TRAN, regardless of whether or not SEH is also used.
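
A minimal script demonstrating this behavior (run in a scratch database):

SET XACT_ABORT ON;
BEGIN TRAN;
SELECT 1/0 AS CauseAnError; --batch execution stops here; the transaction is rolled back and the error is raised
COMMIT; --never reached
GO
SELECT @@TRANCOUNT AS OpenTranCount; --0: the transaction was rolled back
GO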

T-SQL Behavior Without SEH and XACT_ABORT OFF
When an error occurs with the XACT_ABORT session setting OFF and SEH is not used, SQL Server will raise the error immediately but, depending on the error and severity, batch execution might continue and the transaction might not be rolled back. The T-SQL batch must use control-of-flow statements after each statement to avoid continuing after errors and to roll back the transaction (objectives #1 and #2).
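
For contrast, a sketch of the same scenario with XACT_ABORT OFF; the error is raised but execution continues and the transaction remains open:

SET XACT_ABORT OFF;
BEGIN TRAN;
SELECT 1/0 AS CauseAnError; --error raised, but batch execution continues
PRINT 'This statement still executes';
IF @@TRANCOUNT > 0 ROLLBACK; --the transaction is still open and must be cleaned up manually
GO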

T-SQL Behavior With T-SQL Structured Error Handling
When an error occurs during statement execution while a T-SQL structured error handler is in scope, the CATCH block error handler is entered, after marking the transaction uncommittable if SET XACT_ABORT is ON. SEH meets the first error handling objective by skipping subsequent statements in the TRY block after an error. It is the responsibility of the error handling code in the CATCH block to roll back the transaction if needed and raise the error. The simple T-SQL handler below achieves objectives #2 and #3 in Azure SQL Database and SQL Server 2012 and later:

BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK;
    THROW;
END CATCH;

THROW is not available in SQL Server 2008 R2 and earlier, so one must use RAISERROR instead in older versions. The error handler below provides similar functionality to THROW, although RAISERROR obfuscates the original error as a user error with message number 50000+.

BEGIN CATCH
    DECLARE
          @ErrorNumber int = ERROR_NUMBER()
        , @ErrorMessage nvarchar(2048) = ERROR_MESSAGE()
        , @ErrorSeverity int = ERROR_SEVERITY()
        , @ErrorState int = ERROR_STATE()
        , @ErrorLine int = ERROR_LINE();

    RAISERROR('Error %d caught at line %d: %s'
        , @ErrorSeverity
        , @ErrorState
        , @ErrorNumber
        , @ErrorLine
        , @ErrorMessage);
END CATCH;

T-SQL Interaction with ADO.NET
ADO.NET will reliably detect errors in batches without T-SQL SEH when no result sets are returned or only the last statement in the batch returns rows. When a row-returning statement is not the last statement in the batch, ADO.NET code must call NextResult to ensure errors are raised for subsequent statements in the batch that may have erred. Avoid row-returning statements that are not the last statement in a batch unless you are certain the calling code consumes all results with NextResult.

Using T-SQL SEH when result sets are returned has implications on ADO.NET data access code to ensure database exceptions are raised after SQL errors. As illustrated with the ExecuteReader example at the beginning of this article, when an error is caught in T-SQL during a row-returning statement, ADO.NET will not raise the database exception immediately and instead return the partial or empty result set generated by the failed statement. The next data reader Read method call will return false after the error. The error raised in the CATCH block by the T-SQL THROW or RAISERROR statement is considered a separate result by ADO.NET and will not be raised as a database exception until NextResult is called.

Unless you have a specific reason to use SEH in row-returning batches, I suggest instead using SET XACT_ABORT ON alone as this will address core T-SQL error handling objectives and allow ADO.NET to detect errors without calling NextResult. However, as mentioned earlier, ADO.NET code will still need to call NextResult if the row-returning statement is not the last statement in the batch.
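
In other words, a batch like the sketch below raises the divide-by-zero error to ADO.NET on the first Read call, with no NextResult call required:

SET XACT_ABORT ON;
SELECT 1/0 AS DivideByZeroError; --error raised directly to the client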

Summary
I hope this information will help you ensure database errors in multi-statement batches are detected. The interaction between T-SQL and ADO.NET isn’t as intuitive as it could be.

SQL Server 2016 and Azure SQL Database V12 Breaking Change

This post is to get the word out about a breaking change to datetime conversion and comparison behavior in SQL Server 2016 and Azure SQL Database V12. This change hasn’t been documented as of this writing in the Breaking Changes to Database Engine Features in SQL Server 2016 topic in the SQL Server Books Online.

In short, conversion from datetime to a higher precision temporal data type (datetime2, datetimeoffset, or time) may yield a different, but more accurate, time value than in prior versions. Also, predicates involving datetime consider the full precision of the raw datetime internal value instead of the time value rounded to the nearest millisecond. These changes in conversion and comparison behavior may affect existing applications and are not intuitive unless one understands the underlying datetime data type implementation.

Background
You may be aware that the accuracy of datetime is limited to 1/300 of a second. This is because values are stored internally as an 8-byte structure consisting of 2 separate 32-bit integers, one with the number of days since 1900-01-01 and the other with the number of 1/300-second interval units since midnight. The 1/300-second unit interval limits the time accuracy to 3.33333… milliseconds, and the milliseconds value will be a repeating decimal when time interval units are not evenly divisible by 3. The raw decimal value is rounded to a scale of 3 in accordance with the fixed datetime precision of 3, resulting in a final millisecond digit of 0, 3, or 7 for all datetime values.
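
One can observe this internal representation by casting a datetime value to binary(8); a small sketch:

DECLARE @DateTime datetime = '1900-01-02T00:00:00.003';
SELECT CAST(@DateTime AS binary(8)); --0x0000000100000001: day 1 since 1900-01-01 and 1 tick of 1/300 second since midnight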

Pre-SQL Server 2016 Behavior
Before SQL Server 2016, conversion from datetime to another temporal type used the source datetime value after it was rounded to the nearest millisecond, which truncated repeating decimal fractional milliseconds. The rounded value was then rounded again according to the target type precision. When the target type precision was greater than 3, the time was extended to the target type precision with insignificant trailing zeros, resulting in zero for the sub-millisecond value.

DECLARE @DateTime datetime = '2016-01-01T00:00:00.007';
SELECT CAST(@DateTime AS datetime2(0)); --2016-01-01 00:00:00
SELECT CAST(@DateTime AS datetime2(1)); --2016-01-01 00:00:00.0
SELECT CAST(@DateTime AS datetime2(2)); --2016-01-01 00:00:00.01
SELECT CAST(@DateTime AS datetime2(3)); --2016-01-01 00:00:00.007
SELECT CAST(@DateTime AS datetime2(4)); --2016-01-01 00:00:00.0070
SELECT CAST(@DateTime AS datetime2(5)); --2016-01-01 00:00:00.00700
SELECT CAST(@DateTime AS datetime2(6)); --2016-01-01 00:00:00.007000
SELECT CAST(@DateTime AS datetime2(7)); --2016-01-01 00:00:00.0070000

Also, when datetime was compared to another temporal type, the rounded value was used. The script below shows that the equality predicate is true after the datetime value is converted to datetime2.

--This script prints EQUAL predicate is true
DECLARE @DateTime datetime = '2016-01-01T00:00:00.003';
DECLARE @DateTime2 datetime2(7) = @DateTime;
IF @DateTime = @DateTime2 PRINT 'EQUAL predicate is true' ELSE PRINT 'EQUAL predicate is not true';
IF @DateTime < @DateTime2 PRINT 'LESS THAN predicate is true' ELSE PRINT 'LESS THAN predicate is not true';
IF @DateTime > @DateTime2 PRINT 'GREATER THAN predicate is true' ELSE PRINT 'GREATER THAN predicate is not true';
GO
--This script prints EQUAL predicate is true
DECLARE @DateTime datetime = '2016-01-01T00:00:00.007';
DECLARE @DateTime2 datetime2(7) = @DateTime;
IF @DateTime = @DateTime2 PRINT 'EQUAL predicate is true' ELSE PRINT 'EQUAL predicate is not true';
IF @DateTime < @DateTime2 PRINT 'LESS THAN predicate is true' ELSE PRINT 'LESS THAN predicate is not true';
IF @DateTime > @DateTime2 PRINT 'GREATER THAN predicate is true' ELSE PRINT 'GREATER THAN predicate is not true';
GO

SQL Server 2016 Behavior Change
SQL Server 2016 and Azure SQL Database V12 use the raw datetime internal value without rounding during conversion to another temporal type. The value is rounded only once during conversion, to the target type precision. The end result will be the same as before SQL Server 2016 when the target type precision is 3 or less. However, the converted value will be different when the target type precision is greater than 3 and the internal time unit interval is not evenly divisible by 3 (i.e. rounded source datetime millisecond value is 3 or 7). Note the non-zero microseconds and nanoseconds in the script results below and that rounding is based on the target type precision rather than the source.

DECLARE @DateTime datetime = '2016-01-01T00:00:00.003';
SELECT CAST(@DateTime AS datetime2(0)); --2016-01-01 00:00:00
SELECT CAST(@DateTime AS datetime2(1)); --2016-01-01 00:00:00.0
SELECT CAST(@DateTime AS datetime2(2)); --2016-01-01 00:00:00.03
SELECT CAST(@DateTime AS datetime2(3)); --2016-01-01 00:00:00.003
SELECT CAST(@DateTime AS datetime2(4)); --2016-01-01 00:00:00.0033
SELECT CAST(@DateTime AS datetime2(5)); --2016-01-01 00:00:00.00333
SELECT CAST(@DateTime AS datetime2(6)); --2016-01-01 00:00:00.003333
SELECT CAST(@DateTime AS datetime2(7)); --2016-01-01 00:00:00.0033333
GO
DECLARE @DateTime datetime = '2016-01-01T00:00:00.007';
SELECT CAST(@DateTime AS datetime2(0)); --2016-01-01 00:00:00
SELECT CAST(@DateTime AS datetime2(1)); --2016-01-01 00:00:00.0
SELECT CAST(@DateTime AS datetime2(2)); --2016-01-01 00:00:00.01
SELECT CAST(@DateTime AS datetime2(3)); --2016-01-01 00:00:00.007
SELECT CAST(@DateTime AS datetime2(4)); --2016-01-01 00:00:00.0067
SELECT CAST(@DateTime AS datetime2(5)); --2016-01-01 00:00:00.00667
SELECT CAST(@DateTime AS datetime2(6)); --2016-01-01 00:00:00.006667
SELECT CAST(@DateTime AS datetime2(7)); --2016-01-01 00:00:00.0066667
GO

This behavior change provides a more accurate converted value but may break applications that expect the converted value to be the same as the rounded datetime value as was the case before SQL Server 2016.

Be aware that the full raw datetime precision (instead of the rounded value) is also used when evaluating predicates involving a datetime type. The full precision of both arguments is used, causing the equality predicate to evaluate to false in both scripts below. The greater-than predicate is true in the first script and the less-than predicate is true in the second:

--This script prints GREATER THAN predicate is true
DECLARE @DateTime datetime = '2016-01-01T00:00:00.003';
DECLARE @DateTime2 datetime2(7) = @DateTime;
IF @DateTime = @DateTime2 PRINT 'EQUAL predicate is true' ELSE PRINT 'EQUAL predicate is not true';
IF @DateTime < @DateTime2 PRINT 'LESS THAN predicate is true' ELSE PRINT 'LESS THAN predicate is not true';
IF @DateTime > @DateTime2 PRINT 'GREATER THAN predicate is true' ELSE PRINT 'GREATER THAN predicate is not true';
GO
--This script prints LESS THAN predicate is true
DECLARE @DateTime datetime = '2016-01-01T00:00:00.007';
DECLARE @DateTime2 datetime2(7) = @DateTime;
IF @DateTime = @DateTime2 PRINT 'EQUAL predicate is true' ELSE PRINT 'EQUAL predicate is not true';
IF @DateTime < @DateTime2 PRINT 'LESS THAN predicate is true' ELSE PRINT 'LESS THAN predicate is not true';
IF @DateTime > @DateTime2 PRINT 'GREATER THAN predicate is true' ELSE PRINT 'GREATER THAN predicate is not true';
GO

To provide insight into why the comparisons result in greater than and less than respectively, the script below shows the nanoseconds value of the compared data types:

DECLARE @DateTime datetime = '2016-01-01T00:00:00.003';
DECLARE @DateTime2 datetime2(7) = @DateTime;
SELECT DATEPART(nanosecond, @DateTime); --3333333
SELECT DATEPART(nanosecond, @DateTime2); --3333300
GO
DECLARE @DateTime datetime = '2016-01-01T00:00:00.007';
DECLARE @DateTime2 datetime2(7) = @DateTime;
SELECT DATEPART(nanosecond, @DateTime); --6666666
SELECT DATEPART(nanosecond, @DateTime2); --6666700
GO

The datetime2 type is accurate only to 100 nanoseconds, whereas the raw datetime value includes digits to the nanosecond (and beyond) because the theoretical precision of repeating decimal values is unlimited. The implication is that a datetime value with a repeating decimal will never compare equal to any temporal type except datetime.

Datetime conversion and comparison behavior is controlled by the database compatibility level. Databases at the SQL Server 2016 compatibility level (130) use the new behavior, while the legacy behavior is used with lower levels.
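
For example, an affected application can retain the legacy behavior by lowering the compatibility level until it can be remediated (YourDatabase is a placeholder name):

ALTER DATABASE YourDatabase SET COMPATIBILITY_LEVEL = 120; --SQL Server 2014 level; legacy datetime conversion/comparison behavior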

Summary
These datetime behavior changes have the benefit of improved accuracy and performance of datetime conversion/comparison. Affected applications can use a pre-SQL Server 2016 database compatibility level until they can be remediated.

I recommend one avoid comparing datetime directly to other temporal types going forward. Instead convert the datetime value to the type being compared and use the converted value in the predicate. It’s generally best practice to match data types whenever possible for performance and to avoid ambiguity.
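
A minimal sketch of that practice applied to the earlier comparison scripts; the equality predicate is true once both sides are explicitly datetime2(7):

DECLARE @DateTime datetime = '2016-01-01T00:00:00.003';
DECLARE @DateTime2 datetime2(7) = @DateTime;
IF CAST(@DateTime AS datetime2(7)) = @DateTime2 PRINT 'EQUAL predicate is true';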

Add or Remove IDENTITY Property From an Existing Column Efficiently

Introduction
Refactoring is often needed to improve schema design or address changes in requirements. Occasionally, these schema modifications require removing the IDENTITY property from an existing non-empty table while retaining the existing values. Unfortunately, there is no DDL command to add or remove IDENTITY from an existing column. Traditionally, one must go through the pain of either recreating the table or jumping through hoops by adding a new column, updating the new column value with the old column value, and dropping the original column. This is especially problematic with large tables and short maintenance windows.

This article details how to use SWITCH as an alternative technique to add or remove IDENTITY from an existing column. I have been using SWITCH for many years but learned only recently that the identical schema restriction for SWITCH does not include the IDENTITY column property; one can SWITCH data from one table to another even though the IDENTITY column property differs. This behavior allows the IDENTITY property to be added or removed.

Overview
All tables are partitioned from a database storage engine perspective since SQL Server 2005, although multiple partitions require Enterprise Edition. The implication of this storage architecture is that one can use ALTER TABLE…SWITCH regardless of SQL Server edition to move the entire contents of one table to another without physically moving rows as long as the source and target tables have an identical schema and are physically aligned (indexes and tables on same filegroup). SWITCH performs fast storage meta-data changes so the operation typically takes less than a second regardless of table size. The target table must be empty before SWITCH. After SWITCH, the target table will contain the data and the source table will be empty (similar to TRUNCATE).

Since the identical schema restriction does not include the IDENTITY column property, SWITCH is a handy technique to add or remove IDENTITY from an existing column of a non-empty table. For an empty table, a simple drop and create is easier and more efficient.

Add IDENTITY to an Existing Column
The script below uses a staging table with the IDENTITY column property along with SWITCH to add the IDENTITY property to an existing column while retaining the previously assigned values. After the SWITCH, DBCC CHECKIDENT reseeds the IDENTITY so the next generated value is greater than the current maximum column value. I drop the empty original table and rename the staging table to the original name along with constraints and indexes after the operation. Note that an IDENTITY column must not allow NULL values, so a nullable column must first be altered to NOT NULL before using this method.
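
For reference, a minimal sketch of that preparatory step (SomeTable and SomeColumn are placeholders; the column must contain no NULL values and must not participate in a PRIMARY KEY constraint):

ALTER TABLE dbo.SomeTable ALTER COLUMN SomeColumn int NOT NULL;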

--create sample table and data
CREATE TABLE dbo.AddIdentityExample(
	  IncrementalColumn int NOT NULL
		CONSTRAINT PK_AddIdentityExample PRIMARY KEY
	, OtherData varchar(20) NULL
	);
CREATE INDEX idx_AddIdentityExample_OtherData
	ON dbo.AddIdentityExample(OtherData);
INSERT INTO dbo.AddIdentityExample VALUES
	  (1, 'Sample data one')
	, (2, 'Sample data two')
	, (3, 'Sample data three');
GO

SET XACT_ABORT ON;
BEGIN TRY
	BEGIN TRAN;
	--create staging table with same schema, indexes, and constraints
	CREATE TABLE dbo.AddIdentityExampleStaging(
		  IncrementalColumn int IDENTITY NOT NULL --IDENTITY column property added
			CONSTRAINT PK_AddIdentityExampleStaging PRIMARY KEY
		, OtherData varchar(20) NULL
		);
	CREATE INDEX idx_AddIdentityExampleStaging_OtherData
		ON dbo.AddIdentityExampleStaging(OtherData);

	ALTER TABLE dbo.AddIdentityExample
		SWITCH TO dbo.AddIdentityExampleStaging;
	DROP TABLE dbo.AddIdentityExample;
	--rename table
	EXEC sp_rename
		  @objname = N'dbo.AddIdentityExampleStaging'
		, @newname = N'AddIdentityExample'
		, @objtype = 'OBJECT';
	--rename constraints
	EXEC sp_rename
		  @objname = N'dbo.PK_AddIdentityExampleStaging'
		, @newname = N'PK_AddIdentityExample'
		, @objtype = 'OBJECT';
	--rename indexes
	EXEC sp_rename
		  @objname = N'dbo.AddIdentityExample.idx_AddIdentityExampleStaging_OtherData'
		, @newname = N'idx_AddIdentityExample_OtherData'
		, @objtype = N'INDEX';
	--seed IDENTITY with current max column value
	DBCC CHECKIDENT(N'dbo.AddIdentityExample');
	COMMIT;
END TRY
BEGIN CATCH
	IF @@TRANCOUNT > 0 ROLLBACK;
	THROW;
END CATCH;
GO

Remove IDENTITY from an Existing Column
This script uses a staging table without the IDENTITY column property along with SWITCH to remove the IDENTITY property from the column while retaining the previously assigned identity values. No reseeding is needed afterward because the column no longer generates values.

--create sample table and data
CREATE TABLE dbo.RemoveIdentityExample(
	  IncrementalColumn int IDENTITY NOT NULL
		CONSTRAINT PK_RemoveIdentityExample PRIMARY KEY
	, OtherData varchar(20) NULL
	);
CREATE INDEX idx_RemoveIdentityExample_OtherData
	ON dbo.RemoveIdentityExample(OtherData);
INSERT INTO dbo.RemoveIdentityExample VALUES
	  ('Sample data one')
	, ('Sample data two')
	, ('Sample data three');
GO

SET XACT_ABORT ON;
BEGIN TRY
	BEGIN TRAN;
	--create staging table with same schema, indexes, and constraints
	CREATE TABLE dbo.RemoveIdentityExampleStaging(
		  IncrementalColumn int NOT NULL --IDENTITY column property removed
			CONSTRAINT PK_RemoveIdentityExampleStaging PRIMARY KEY
		, OtherData varchar(20) NULL
		);
	CREATE INDEX idx_RemoveIdentityExampleStaging_OtherData
		ON dbo.RemoveIdentityExampleStaging(OtherData);

	ALTER TABLE dbo.RemoveIdentityExample
		SWITCH TO dbo.RemoveIdentityExampleStaging;
	DROP TABLE dbo.RemoveIdentityExample;
	--rename table
	EXEC sp_rename
		  @objname = N'dbo.RemoveIdentityExampleStaging'
		, @newname = N'RemoveIdentityExample'
		, @objtype = 'OBJECT';
	--rename constraints
	EXEC sp_rename
		  @objname = N'dbo.PK_RemoveIdentityExampleStaging'
		, @newname = N'PK_RemoveIdentityExample'
		, @objtype = 'OBJECT';
	--rename indexes
	EXEC sp_rename
		  @objname = N'dbo.RemoveIdentityExample.idx_RemoveIdentityExampleStaging_OtherData'
		, @newname = N'idx_RemoveIdentityExample_OtherData'
		, @objtype = N'INDEX';
	COMMIT;
END TRY
BEGIN CATCH
	IF @@TRANCOUNT > 0 ROLLBACK;
	THROW;
END CATCH;
GO

Always Use Semicolon Statement Terminators

ANSI-standard semicolon statement terminators are often omitted in T-SQL queries and many developers are unaware that this syntax is deprecated.  Omitting statement terminators is a dangerous practice because, even if the batch compiles, you may get unexpected results.  Consider the insidious examples below pointed out by SQL Server MVP Erland Sommarskog:

BEGIN TRY
	BEGIN TRAN
	SELECT 1/0 AS CauseAnException
	COMMIT
END TRY
BEGIN CATCH
	SELECT ERROR_MESSAGE()
	THROW
END CATCH

Few of us will catch (no pun intended) the bug in the above script.  What results do you expect after running the above script under SQL Server 2012 or later versions?  Rather than leave this as an exercise for the reader, I’ll spoil the fun and mention that no run-time error is raised at all.  Instead, the THROW statement is interpreted as a column alias for the ERROR_MESSAGE() column.  This sort of coding error is especially nasty because catch blocks are rarely unit tested and this catch block coding mistake hides the run-time error entirely without raising an exception.

Similarly, the absence of statement terminators in the script below causes another problem.  Can you spot it?

BEGIN TRY
	BEGIN TRAN
	SELECT 1/0 AS CauseAnException
	COMMIT
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRAN
    THROW
END CATCH

At least an error is raised in this case, albeit not the one you might expect.  The resultant error is “Cannot roll back THROW. No transaction or savepoint of that name was found”.  This coding bug obfuscates the preceding divide by zero error and prevents the THROW statement from being executed.

Semicolons Will Become Mandatory
Microsoft announced with the SQL Server 2008 release that semicolon statement terminators will become mandatory in a future version, so statement terminators other than semicolons (whitespace) are currently deprecated.  This deprecation announcement means that you should always use semicolon terminators in new development.  I honestly don't expect SQL Server to strictly enforce mandatory semicolons in the near future, but it is still a best practice to use semicolon statement terminators to avoid issues like those mentioned earlier as well as to facilitate code maintainability.  I suggest specifying statement terminators in all new development and perhaps adding terminators to existing code as you perform maintenance.

Transact-SQL does not currently enforce the ANSI semicolon statement terminator requirement.  Instead, semicolon statement terminators are optional and any whitespace (spaces, tabs, newline) may be used instead.  The exception to this rule is that many of the statements introduced in SQL Server 2005 and later require the preceding statement to be properly terminated in order for the batch to compile.
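
For example, a CTE that follows an unterminated statement fails to compile; a minimal sketch:

DECLARE @i int
WITH a AS (SELECT 1 AS a) SELECT a FROM a; --error: Incorrect syntax near the keyword 'WITH'
GO
DECLARE @i int;
WITH a AS (SELECT 1 AS a) SELECT a FROM a; --compiles once the DECLARE is terminated
GO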

Below are some guidelines I suggest on when to, and when not to, use semicolon statement terminators.

Suggested Guidelines
The Transact-SQL parser is quite lax, allowing any whitespace (e.g. space, tab, newline) to be used.  This laxness results in ambiguity, as the examples at the beginning of this article demonstrate.  Similarly, statement terminators may not only be omitted, they may also be used inappropriately.  I strongly suggest you adhere to the T-SQL syntax documented in the Books Online even if the parser allows otherwise.  This practice will help future-proof your code since relying on undocumented behavior is inherently risky.

Don’t precede a statement with a semicolon
Remember that the purpose of semicolons is to terminate SQL statements, not begin them.  A common mistake I see is throwing a semicolon in front of statements in order to get a batch of statements to compile, especially with newer statements like WITH (CTE expression) that require previous statement termination.  Although the T-SQL parser currently ignores extraneous and misplaced semicolons, I suggest they be specified in the appropriate place according to the statement syntax documented in the SQL Server Books Online.

Specify semicolons at the end of each stand-alone SQL statement
Not only will this conform to the ANSI standard, your intent will be clearer and the code easier to read.

Terminate control-of-flow statement blocks at the end of the control-of-flow scope
Control-of-flow statements are not covered by the ANSI SQL standard because these are proprietary SQL extensions.  The SQL Server Books Online is sketchy on the subject and many of the examples (as of this writing) are inconsistent and do not always include statement terminators.  Furthermore, control-of-flow statement blocks are confusing due to the many variations, nesting, and optional BEGIN/END specifications.

Below are examples illustrating what I believe to be proper use of statement terminators and control-of-flow block terminators using IF statements in SQL 2008 and later versions.  The same concepts apply to other control-of-flow constructs like WHILE and TRY/CATCH blocks.  I should add that this batch example will not compile under SQL 2005 because an explicit BEGIN/END block is required to execute a common table expression conditionally in that version.  T-SQL parser enhancements eliminated that requirement in SQL 2008 and later.

IF 1 = 1
  WITH a AS (SELECT 1 a) SELECT a FROM a; --terminate statement and IF

IF 1 = 1
  WITH a AS (SELECT 1 a) SELECT a FROM a; --terminate statement
ELSE
  WITH a AS (SELECT 1 a) SELECT a FROM a; --terminate statement and IF

IF 1 = 1
  BEGIN
    WITH a AS (SELECT 1 a) SELECT a FROM a; --terminate statement
  END; --terminate IF

IF 1 = 1
  BEGIN
    WITH a AS (SELECT 1 a) SELECT a FROM a; --terminate statement
  END
ELSE
  BEGIN
    WITH a AS (SELECT 1 a) SELECT a FROM a; --terminate statement
  END; --terminate IF

IF 1 = 1
  WITH a AS (SELECT 1 a) SELECT a FROM a; --terminate statement
ELSE
  BEGIN
    WITH a AS (SELECT 1 a) SELECT a FROM a; --terminate statement
  END; --terminate IF

IF 1 = 1
  BEGIN
    WITH a AS (SELECT 1 a) SELECT a FROM a; --terminate statement
  END
ELSE
  WITH a AS (SELECT 1 a) SELECT a FROM a; --terminate statement and IF

IF 1 = 1
  BEGIN
    WITH a AS (SELECT 1 a) SELECT a FROM a; --terminate statement
	IF 1 = 1
		WITH a AS (SELECT 1 a) SELECT a FROM a; --terminate statement
	ELSE
	BEGIN
		WITH a AS (SELECT 1 a) SELECT a FROM a; --terminate statement
	END; --terminate inner nested IF
	WITH a AS (SELECT 1 a) SELECT a FROM a; --terminate statement
  END
ELSE
  WITH a AS (SELECT 1 a) SELECT a FROM a; --terminate statement and outer IF

Summary
Consistent use of semicolons helps avoid bugs in code that might otherwise go undetected.  Code with statement terminators can also be more easily modified without introducing compile errors and is easier to maintain because the end of each statement is readily apparent to subsequent developers.  Importantly, you'll be better positioned for future SQL Server versions by consistently using semicolon statement terminators.

Maximizing Performance with Table-Valued Parameters

I’ve seen TVPs improve performance by orders of magnitude when used appropriately.  This feature isn’t used as often as it should be so I’ll show how to implement a TVP using a C# application example and discuss TVP considerations.

Why Table-Valued Parameters Improve Performance
TVPs allow one to pass multiple rows or values at once to a query or stored procedure.  By doing so, the query can leverage set-based operations to greatly improve performance compared to executing the query/proc many times.  TVPs decrease network latency by reducing network round trips.  Prior to TVPs, one had to employ workarounds like passing a string containing a delimited list of records (or XML) and parsing on the SQL side.  Those techniques are at best a kludge.  Instead, TVPs are the right tool for the job in SQL Server 2008 and later.

TVPs (and table variables) are temp tables.  TVPs and table variables are persisted in tempdb much the same way as regular temp tables (# prefix).  The big differences are that TVPs and table variables are limited in scope to the current batch and do not have statistics.  Also, table parameters/variables allow only indexes declared via primary key or unique constraints; non-unique indexes are not supported.

When a TVP is passed to SQL Server from a client application, the data is bulk-inserted into tempdb where it is made available to the SQL statement or stored procedure using the declared TVP.  This bulk insert is handled by the client API transparently to the application and allows a large number of rows to be passed to SQL Server efficiently.  In T-SQL, one can declare and load a table variable, where it can be used directly in a SQL statement or passed as a TVP to a stored procedure or query (invoked with sp_executesql); a sketch of this follows Listing 2 below.

The exact threshold where TVPs outperform single-row operations is much lower than you might expect.  Your mileage may vary but, at least in my experience, it’s only a few rows.  The performance tests I ran here show that the threshold where a TVP outperforms individual requests with a simple list of customers is only 4 rows.  That said, I would not recommend TVPs for single-row operations due to the start-up overhead but it’s a good choice when the most common use case is multiple rows.

How to Use TVPs
In order to pass a TVP to a SQL statement or stored procedure, one must first create a user-defined table type in SQL Server.  The table type defines the schema of the table SQL Server expects for the TVP.  Listing 1 shows the CREATE DDL for the table type I use in the AdventureWorks2012 database to pass a list of customers to the stored procedure in Listing 2 that returns all orders for these customers.  Note that the stored procedure TVP must be declared as READONLY.

Listing 1: Table type DDL

CREATE TYPE dbo.CustomerList AS TABLE(
CustomerID int NOT NULL
PRIMARY KEY
);

Listing 2: Stored procedure to return orders for multiple customers

CREATE PROC dbo.usp_select_orders_for_multiple_customers
@CustomerList dbo.CustomerList READONLY
AS
SELECT soh.CustomerID, soh.SalesOrderID
FROM Sales.SalesOrderHeader AS soh
JOIN @CustomerList AS cl ON
cl.CustomerID = soh.CustomerID;
RETURN @@ERROR;
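
As a sketch of the T-SQL-side usage mentioned earlier, one can load a table variable of the table type and pass it to the Listing 2 procedure directly:

DECLARE @CustomerList dbo.CustomerList;
INSERT INTO @CustomerList (CustomerID) VALUES (11000), (11001), (11002);
EXEC dbo.usp_select_orders_for_multiple_customers @CustomerList = @CustomerList;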

TVPs are passed from a .NET application using parameter data type DbType.Structured.  The actual parameter value can be an object of type DataTable, DbDataReader or IEnumerable<SqlDataRecord>.  A DataTable is appropriate when the required TVP data is already in a DataTable object needed for other purposes.  A DbDataReader is a good choice when the source data is the result of a query (note that any DbDataReader may be used, not just a SqlDataReader).  For other cases, I recommend an IEnumerable<SqlDataRecord>.  It is very easy to implement IEnumerable<SqlDataRecord>, especially if you already have an enumerable object like an array or collection containing the values you need to pass.  See the code at the end of this article for an example class that implements IEnumerable<SqlDataRecord> for a TVP.

A benefit with DbDataReader and IEnumerable<SqlDataRecord> is that the source data can be streamed to SQL Server without first loading all the TVP data into memory in either the client application or in SQL Server.  This is a big consideration when a lot of data needs to be passed because you are not constrained by memory on either the client or SQL Server side.

Listing 3 shows the stored procedure I use to compare performance with the TVP method.  This proc takes only a single @CustomerID parameter so it must be called once for each customer.

Listing 3: Stored procedure to return orders for a single customer

CREATE PROC dbo.usp_select_orders_for_single_customer
@CustomerID int
AS
SELECT soh.CustomerID, soh.SalesOrderID
FROM Sales.SalesOrderHeader AS soh
WHERE
soh.CustomerID = @CustomerID;
RETURN @@ERROR;

The complete C# console application I used for the performance tests is in Listing 4.  The code demonstrates the 3 ways to pass a TVP from C# and logs timings for each invocation.  As you can see in Figure 1, the TVP outperformed individual calls once more than 3 customers were passed.  Note that I had to use a logarithmic y-axis scale because of the huge performance difference.  For example, 10,000 individual calls averaged about 1.5 seconds whereas passing 10,000 customers via a TVP took less than 100 milliseconds.

Figure 1:  TVP versus individual call performance

Listing 4: Passing TVPs in C#

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Data;
using System.Data.SqlClient;
using System.Diagnostics;
using Microsoft.SqlServer.Server;
using System.IO;

namespace TVPTest
{
    class Program
    {

        private static string connectionString = @"Data Source=MyServer;Initial Catalog=AdventureWorks2012;Integrated Security=SSPI";
        private static StreamWriter logFile = new StreamWriter(@"TvpTest.log", false);

        static void Main(string[] args)
        {

            // write column headers to log file
            logFile.WriteLine("{0},{1},{2}"
                , "Test Name"
                , "Customers"
                , "Durations");

            // run each performance test 100 times
            for (int i = 0; i < 100; ++i)
            {
                runPerformanceTest(1);
                runPerformanceTest(2);
                runPerformanceTest(3);
                runPerformanceTest(4);
                runPerformanceTest(5);
                runPerformanceTest(10);
                runPerformanceTest(25);
                runPerformanceTest(50);
                runPerformanceTest(100);
                runPerformanceTest(500);
                runPerformanceTest(1000);
                runPerformanceTest(2500);
                runPerformanceTest(5000);
                runPerformanceTest(10000);
                logFile.Flush();
            }

            logFile.Close();

            Console.WriteLine("Press any key to close.");
            Console.ReadKey();
        }

        private static void runPerformanceTest(int customerCount)
        {

            Console.WriteLine("{0} customers:", customerCount);

            DataTable orderDataTable;
            Stopwatch sw;
            DateTime duration;

            //individual call baseline
            sw = Stopwatch.StartNew();
            orderDataTable = getOrdersForCustomersIndividually(customerCount);
            sw.Stop();
            duration = new DateTime(sw.Elapsed.Ticks);
            Console.WriteLine("\tIndividual call: {0} orders returned, duration is {1}"
                , orderDataTable.Rows.Count, duration.ToString("HH:mm:ss.ffffff"));
            logFile.WriteLine("{0},{1},{2}"
                , "Individual call"
                , customerCount
                , duration.ToString("HH:mm:ss.ffffff"));

            //TVP from DataTable
            sw = Stopwatch.StartNew();
            orderDataTable = getOrdersForCustomersUsingTvp(getCustomerListDataTable(customerCount));
            sw.Stop();
            duration = new DateTime(sw.Elapsed.Ticks);
            Console.WriteLine("\tTVP DataTable: {0} orders returned, duration is {1}"
                , orderDataTable.Rows.Count, duration.ToString("HH:mm:ss.ffffff"));
            logFile.WriteLine("{0},{1},{2}"
                , "TVP DataTable"
                , customerCount
                , duration.ToString("HH:mm:ss.ffffff"));

            //TVP from DataReader
            sw = Stopwatch.StartNew();
            orderDataTable = getOrdersForCustomersUsingTvp(getCustomerListDataReader(customerCount));
            sw.Stop();
            duration = new DateTime(sw.Elapsed.Ticks);
            Console.WriteLine("\tTVP DataReader: {0} orders returned, duration is {1}"
                , orderDataTable.Rows.Count, duration.ToString("HH:mm:ss.ffffff"));
            logFile.WriteLine("{0},{1},{2}"
                , "TVP DataReader"
                , customerCount
                , duration.ToString("HH:mm:ss.ffffff"));

            //TVP from IEnumerable
            sw = Stopwatch.StartNew();
            orderDataTable = getOrdersForCustomersUsingTvp(getCustomerListSqlDataRecords(customerCount));
            sw.Stop();
            duration = new DateTime(sw.Elapsed.Ticks);
            Console.WriteLine("\tTVP IEnumerable: {0} orders returned, duration is {1}"
                , orderDataTable.Rows.Count, duration.ToString("HH:mm:ss.ffffff"));
            logFile.WriteLine("{0},{1},{2}"
                , "TVP IEnumerable"
                , customerCount
                , duration.ToString("HH:mm:ss.ffffff"));

            Console.WriteLine();

        }

        /// <summary>
        /// Call proc individually for each customer
        /// </summary>
        /// <param name="customerCount">Number of customers to return</param>
        /// <returns>DataTable of customer orders</returns>
        private static DataTable getOrdersForCustomersIndividually(int customerCount)
        {
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(@"dbo.usp_select_orders_for_single_customer", connection))
            {
                command.CommandType = CommandType.StoredProcedure;
                var customerIdParameter = command.Parameters.Add("@CustomerID", System.Data.SqlDbType.Int);
                var orderListDataAdapter = new SqlDataAdapter(command);
                var orderDataTable = new DataTable();
                for (int customerId = 11000; customerId < 11000 + customerCount; ++customerId)
                {
                    customerIdParameter.Value = customerId;
                    orderListDataAdapter.Fill(orderDataTable);
                }
                return orderDataTable;
            }
        }

        /// <summary>
        /// Call proc with TVP of CustomerIDs
        /// </summary>
        /// <param name="customerList">List of customers</param>
        /// <returns>DataTable of customer orders</returns>
        private static DataTable getOrdersForCustomersUsingTvp(Object customerList)
        {
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(@"dbo.usp_select_orders_for_multiple_customers", connection))
            {
                command.CommandType = CommandType.StoredProcedure;
                command.Parameters.Add("@CustomerList", System.Data.SqlDbType.Structured).Value = customerList;
                var orderListDataAdapter = new SqlDataAdapter(command);
                var orderDataTable = new DataTable();
                orderListDataAdapter.Fill(orderDataTable);
                return orderDataTable;
            }
        }

        /// <summary>
        /// Create a DataTable containing list of customers for TVP
        /// </summary>
        /// <param name="customerCount">Number of customers to return</param>
        /// <returns>DataTable with single CustomerID column</returns>
        private static DataTable getCustomerListDataTable(int customerCount)
        {
            var customerListDataTable = new DataTable();
            customerListDataTable.Columns.Add("CustomerID", typeof(int));

            for (int customerId = 11000; customerId < 11000 + customerCount; ++customerId)
            {
                var row = customerListDataTable.NewRow();
                customerListDataTable.Rows.Add(row);
                row[0] = customerId;
            }
            return customerListDataTable;
        }

        /// <summary>
        /// Create a DataReader containing list of customers for TVP
        /// </summary>
        /// <param name="customerCount">Number of customers to return</param>
        /// <returns>DataReader with single CustomerID column</returns>
        private static IDataReader getCustomerListDataReader(int customerCount)
        {
            var connection = new SqlConnection(connectionString);
            var command = new SqlCommand(@"SELECT CustomerID FROM Sales.Customer WHERE CustomerID BETWEEN 11000 AND 11000 + @CustomerCount - 1;", connection);
            command.Parameters.Add("@CustomerCount", System.Data.SqlDbType.Int).Value = customerCount;
            connection.Open();
            command.CommandType = CommandType.Text;
            return command.ExecuteReader();

        }

        /// <summary>
        /// Create an IEnumerable<SqlDataRecord> object containing list of customers for TVP
        /// </summary>
        /// <param name="customerCount">Number of customers to return</param>
        /// <returns>IEnumerable<SqlDataRecord> with single CustomerID column</returns>
        private static IEnumerable<SqlDataRecord> getCustomerListSqlDataRecords(int customerCount)
        {
            var customerList = new List<int>();
            for (int customerId = 11000; customerId < 11000 + customerCount; ++customerId)
            {
                customerList.Add(customerId);
            }
            return new CustomerListSqlDataRecords(customerList);
        }

        /// <summary>
        /// Implement IEnumerable<SqlDataRecord> to return a SqlDataRecord for each CustomerID
        /// in the provided IEnumerable<int>
        /// </summary>
        private class CustomerListSqlDataRecords : IEnumerable<SqlDataRecord>
        {
            private SqlMetaData metaData = new SqlMetaData("CustomerID", SqlDbType.Int);

            private IEnumerable<int> customerList;

            public CustomerListSqlDataRecords(IEnumerable<int> customerList)
            {
                this.customerList = customerList;
            }

            public IEnumerator<SqlDataRecord> GetEnumerator()
            {
                foreach (var customerId in customerList)
                {
                    var record = new SqlDataRecord(metaData);
                    record.SetInt32(0, customerId);
                    yield return record;
                }
            }

            System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator()
            {
                throw new NotImplementedException();
            }
        }
    }
}

Denali CTP3: THROW Statement

Not to mince words, T-SQL error handling has historically sucked. I’m excited that SQL Server “Denali” CTP3 (a.k.a. SQL11) includes a long-awaited THROW statement that I hope to see in the final release. In this post, I’ll dive into how this seemingly minor T-SQL enhancement will make it much easier for T-SQL developers to write robust and bug-free error handling code.

T-SQL Error Handling Ugliness

Unlike compiled application code, which halts execution upon an unhandled exception, a T-SQL batch might continue execution after an error. T-SQL developers must include error checking/handling to ensure code doesn't continue down the "happy" path oblivious to an error, to report the error to the caller, to perform any necessary cleanup operations (typically ROLLBACK), and to continue or halt execution as desired. The script below shows how one might accomplish this without structured error handling:

--Unstructured error handling example
BEGIN TRAN

SELECT 1/0 AS CauseAnError --cause an error to report to caller

IF @@ERROR <> 0 GOTO ErrorHandler --detect error

COMMIT

GOTO Done

ErrorHandler:
IF @@TRANCOUNT > 0 ROLLBACK --cleanup after error
RETURN --stop further code execution

Done:
PRINT 'Done' --not executed after error
GO
This script results in the error:

Msg 8134, Level 16, State 1, Line 3
Divide by zero error encountered.

Unstructured error handling like this is especially a pain for multi-statement scripts and stored procedures. One has to include a repetitive "IF @@ERROR" check to detect errors after each statement, plus error-prone unstructured GOTO code. It's easy to miss error checking/handling bugs in unit testing.

On a positive note, no T-SQL code is necessary to raise the error; SQL Server automatically reports errors to the calling application (unless TRY/CATCH is used). This guarantees the calling application is notified of errors during execution.

Two Steps Forward, One Step Back

The introduction of structured error handling (TRY/CATCH) in SQL 2005 is both a blessing and a curse. The good is that TRY/CATCH avoids the repetitive, error-prone, and ugly procedural code needed to check @@ERROR after each T-SQL statement and allows one to more easily centralize error handling. The structured error-handling paradigm in T-SQL is more aligned with most application languages.

Consider the equivalent script with TRY/CATCH:

--Structured error handling example
DECLARE
      @ErrorNumber int
    , @ErrorMessage nvarchar(2048)
    , @ErrorSeverity int
    , @ErrorState int
    , @ErrorLine int;

BEGIN TRY --detect errors
    BEGIN TRAN;
    SELECT 1/0 AS CauseAnError;
    COMMIT;
END TRY
BEGIN CATCH
    SELECT
          @ErrorNumber = ERROR_NUMBER()
        , @ErrorMessage = ERROR_MESSAGE()
        , @ErrorSeverity = ERROR_SEVERITY()
        , @ErrorState = ERROR_STATE()
        , @ErrorLine = ERROR_LINE();

    IF @@TRANCOUNT > 0 ROLLBACK; --cleanup after error

    RAISERROR('Error %d caught at line %d: %s' --report error to caller
        , @ErrorSeverity
        , @ErrorState
        , @ErrorNumber
        , @ErrorLine
        , @ErrorMessage);

    RETURN; --stop further code execution
END CATCH

PRINT 'Done'; --not executed after error
GO

Msg 50000, Level 16, State 1, Line 21
Error 8134 caught at line 10: Divide by zero error encountered

I really like the way structured error handling catches errors declaratively with centralized error handling. But TRY/CATCH introduces a couple of issues. Foremost is reporting of the error to the caller: a caught error prevents the error message from being returned to the client, so when TRY/CATCH is employed, the developer assumes responsibility for notifying the application that an error occurred. Failure to do so will result in a silent error undetectable by the calling application, which is seldom desirable. Using TRY/CATCH necessitates that you write a bit of code in the CATCH block to capture, report, and/or log error details as well as control code flow after the error.

Another downside of TRY/CATCH before Denali is that you cannot raise the original error because RAISERROR does not allow a system error number to be specified (8134 in this example). Consequently, the divide by zero system error here cannot be raised in the CATCH block; a user-defined error in the 50000+ error number range must be raised instead, obfuscating the original error and line number. So instead of returning error information natively, you must write code to return original error details by some other means, such as in the error message text. This often leads to inconsistencies in the way errors are reported.
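
For example, attempting to re-raise the system error number directly is rejected (a quick illustration; the exact message text varies by version):

RAISERROR(8134, 16, 1); --fails: RAISERROR accepts only user-defined error numbers (Msg 2732)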

THROW to the Rescue

Denali introduces a simple THROW statement. THROW in a CATCH block with no parameters raises the caught error and stops further code execution unless an outer CATCH block exists. This greatly simplifies CATCH block error reporting and control flow code since this THROW behavior is exactly what one typically does after handling a T-SQL error. Furthermore, unlike RAISERROR, THROW retains the original error number, message text, state, severity and line number. This is the biggest T-SQL error handling enhancement since the introduction of TRY/CATCH in SQL Server 2005.

The THROW example below raises the original error and stops further code execution, and it is less verbose and error-prone than other methods:

--Structured error handling example in Denali CTP3
BEGIN TRY --detect errors
    BEGIN TRAN;
    SELECT 1/0 AS CauseAnError;
    COMMIT;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK; --cleanup after error
    THROW; --report error to caller and stop further code execution
END CATCH

PRINT 'Done'; --not executed after error
GO

Msg 8134, Level 16, State 1, Line 4
Divide by zero error encountered.

There are only a couple of scenarios I can think of not to use THROW in a CATCH block. One is when you need to continue code execution in the same scope after an error. Another is in an outermost catch block when you want to prevent the error from being returned to the client. However, these cases are the exception (no pun intended) rather than the rule.
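
As a sketch of the second scenario, an outermost CATCH block can deliberately swallow the error instead of re-throwing it (logging omitted for brevity):

BEGIN TRY
    SELECT 1/0 AS CauseAnError;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK;
    PRINT 'Error ' + CAST(ERROR_NUMBER() AS varchar(10)) + ' handled; not returned to client';
END CATCH;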

Summary

THROW is a simple, yet powerful extension to SQL Server error handling. I’ll discuss some other enhancements to the core database engine as outlined in the What’s New section of the SQL Server “Denali” Books Online in future posts as well.

Calendar Table and Date/Time Functions

I frequently see questions in the forums and newsgroups about how to best query date/time data and perform date manipulation.  Let me first say that a permanent calendar table that materializes commonly used DATEPART values along with time periods you frequently use is invaluable.  I’ve used such a table for over a decade with great success and strongly recommend you implement one on all of your database servers.  I’ve included a sample calendar table (and numbers table) later in this post and you can find other variations of such a table via an internet search.

Removing the Time Portion

A common requirement I have is to remove the time portion from a date/time value.  This is easy in SQL 2008 since you can simply "CAST(SomeDateTimeValue AS date)".  But the date data type is not available in older SQL Server versions so you need an alternate method.  In SQL 2005 and earlier versions, I recommend the DATEADD…DATEDIFF method below with an arbitrary base date value specified in a format that is independent of the session DATEFORMAT setting:

SELECT CAST(GETDATE() AS date); --SQL 2008 and later

SELECT DATEADD(day, DATEDIFF(day, '19000101', GETDATE()), '19000101'); --SQL 2005 and earlier

I often see a variation of the DATEADD…DATEDIFF technique with the integer zero (no quotes) specified as the base date.  Although this may provide the expected results (I've done it myself), I caution against it because it relies on implicit conversion from the internal SQL Server integer date/time storage format.  If you want to be concise, a better approach is to specify an empty string for the base date value since the default value is '1900-01-01 00:00:00'.  In my opinion, an explicit date value is more intuitive, though.

SELECT DATEADD(day, DATEDIFF(day, '', GETDATE()), '');

I also sometimes see code that extracts the year, month and day date parts and concatenates with separators.  However, that method is dependent on session DATEFORMAT settings and slower than other methods.  See Tibor Karaszi’s The ultimate guide to the datetime datatypes article for details.

First and Last Day of Period

Another common task is to determine the first or last day of a given period.  The script below shows how to accomplish this if you don’t have a calendar table with the calculated values available.

DECLARE @Date date = GETDATE();

SELECT 'First day of year' AS [DateDescription], DATEADD(year, DATEDIFF(year,'19000101',@Date), '19000101') AS [CalendarDate]
UNION ALL
SELECT 'Last day of year', DATEADD(day,-1,DATEADD(year,DATEDIFF(year,'19000101',@Date)+1,'19000101'))
UNION ALL
SELECT 'First day of month', DATEADD(month, DATEDIFF(month,'19000101',@Date), '19000101')
UNION ALL
SELECT 'Last day of month', DATEADD(day,-1,DATEADD(month,DATEDIFF(month,'19000101',@Date)+1,'19000101'))
UNION ALL
SELECT 'First day of week (based on DATEFIRST setting)', DATEADD(day,-(DATEPART(weekday,@Date)-1),DATEADD(day,DATEDIFF(day,'19000101',@Date),'19000101'))
UNION ALL
SELECT 'Last day of week (based on DATEFIRST setting)', DATEADD(day,-(DATEPART(weekday,@Date)-1)+6,DATEADD(day,DATEDIFF(day,'19000101',@Date),'19000101'));

With a calendar table like the one later in this post:

DECLARE @Date date = GETDATE();
SELECT 'First day of year' AS [DateDescription], (SELECT FirstDateOfYear FROM dbo.Calendar WHERE CalendarDate = @Date) AS [CalendarDate]
UNION ALL
SELECT 'Last day of year', (SELECT LastDateOfYear FROM dbo.Calendar WHERE CalendarDate = @Date)
UNION ALL
SELECT 'First day of month', (SELECT FirstDateOfMonth FROM dbo.Calendar WHERE CalendarDate = @Date)
UNION ALL
SELECT 'Last day of month', (SELECT LastDateOfMonth FROM dbo.Calendar WHERE CalendarDate = @Date)
UNION ALL
SELECT 'First day of week (based on DATEFIRST setting)', (SELECT FirstDateOfWeek FROM dbo.Calendar WHERE CalendarDate = @Date)
UNION ALL
SELECT 'Last day of week (based on DATEFIRST setting)', (SELECT LastDateOfWeek FROM dbo.Calendar WHERE CalendarDate = @Date);

Calendar and Numbers Table

I think auxiliary calendar and number tables are a must-have on every database server.  These objects allow you to easily perform set-based processing in a number of scenarios.  In fact, the calendar table population script below uses an inline numbers CTE to populate the calendar table with tens of thousands of rows in under a second.  This is much more efficient than a WHILE loop.

This calendar table population script also updates the table with most US holidays and adjusts business/non-business days accordingly.  In addition to customizing the script for holidays as observed by your organization, you might add fiscal period start/end dates to facilitate querying based on those cycles.  Also consider creating user-defined functions or stored procedures to encapsulate frequently used code that uses the calendar table.  For example, here is a function that returns the date that is a specified number of business days from the date provided:

CREATE FUNCTION dbo.udf_AddBusinessDays
(@Date date, @BusinessDays int)
RETURNS date
AS
BEGIN
    RETURN (
        SELECT TOP (1) CalendarDate AS BusinessDate
        FROM (SELECT TOP (@BusinessDays) CalendarDate
        FROM dbo.Calendar
        WHERE
            CalendarDate > @Date
            AND BusinessDay = 1
        ORDER BY CalendarDate) AS BusinessDays
        ORDER BY CalendarDate DESC
    );
END;
GO

Script 1: Example calendar table utility function

--auxiliary number table
CREATE TABLE dbo.Numbers(
    Number int NOT NULL
        CONSTRAINT PK_Numbers PRIMARY KEY CLUSTERED
);

--load Numbers table with 16,777,216 numbers (0 through 16,777,215)
WITH t1 AS (SELECT 0 AS n UNION ALL SELECT 0 UNION ALL SELECT 0 UNION ALL SELECT 0)
    ,t2 AS (SELECT 0 AS n FROM t1 t1a, t1 t1b, t1 t1c, t1 t1d)
    ,t3 AS (SELECT 0 AS n FROM t2 t2a, t2 t2b, t2 t2c)
    ,numbers AS (SELECT ROW_NUMBER() OVER(ORDER BY n) - 1 AS number FROM t3)
INSERT INTO dbo.Numbers WITH (TABLOCKX) (Number)
SELECT number
FROM numbers;

Script 2: Create and populate numbers table.

CREATE TABLE dbo.Calendar(
	CalendarDate date NOT NULL
		CONSTRAINT PK_Calendar PRIMARY KEY CLUSTERED
	,CalendarYear int NOT NULL
	,CalendarMonth int NOT NULL
	,CalendarDay int NOT NULL
	,DayOfWeekName varchar(10) NOT NULL
	,FirstDateOfWeek date NOT NULL
	,LastDateOfWeek date NOT NULL	
	,FirstDateOfMonth date NOT NULL
	,LastDateOfMonth date NOT NULL
	,FirstDateOfQuarter date NOT NULL
	,LastDateOfQuarter date NOT NULL
	,FirstDateOfYear date NOT NULL
	,LastDateOfYear date NOT NULL
	,BusinessDay bit NOT NULL
	,NonBusinessDay bit NOT NULL
	,Weekend bit NOT NULL
	,Holiday bit NOT NULL
	,Weekday bit NOT NULL
	,CalendarDateDescription varchar(50) NULL
);
GO

--load dates from 2000-01-01 through 2099-12-31
WITH t1 AS (SELECT 0 AS n UNION ALL SELECT 0 UNION ALL SELECT 0 UNION ALL SELECT 0)
	,t2 AS (SELECT 0 AS n FROM t1 t1a, t1 t1b, t1 t1c, t1 t1d)
	,t3 AS (SELECT 0 AS n FROM t2 t2a, t2 t2b)
	,numbers AS (SELECT ROW_NUMBER() OVER(ORDER BY n) - 1 AS number FROM t3)
INSERT INTO dbo.Calendar WITH (TABLOCKX) (
	CalendarDate
	,CalendarYear
	,CalendarMonth
	,CalendarDay
	,DayOfWeekName
	,FirstDateOfWeek
	,LastDateOfWeek
	,FirstDateOfMonth
	,LastDateOfMonth
	,FirstDateOfQuarter
	,LastDateOfQuarter
	,FirstDateOfYear
	,LastDateOfYear
	,BusinessDay
	,NonBusinessDay
	,Weekend
	,Holiday
	,Weekday
	,CalendarDateDescription
	) 
SELECT
	CalendarDate = DATEADD(day, number, '20000101')
	,CalendarYear = DATEPART(year, DATEADD(day, number, '20000101'))
	,CalendarMonth = DATEPART(month, DATEADD(day, number, '20000101'))
	,CalendarDay = DATEPART(day, DATEADD(day, number, '20000101'))
	,DayOfWeekName = DATENAME(weekday, DATEADD(day, number, '20000101'))
	,FirstDateOfWeek = DATEADD(day,-(DATEPART(weekday ,DATEADD(day, number, '20000101'))-1),DATEADD(day, number, '20000101'))
	,LastDateOfWeek = DATEADD(day,-(DATEPART(weekday ,DATEADD(day, number, '20000101'))-1)+6,DATEADD(day, number, '20000101'))
	,FirstDateOfMonth = DATEADD(month, DATEDIFF(month,'20000101',DATEADD(day, number, '20000101')), '20000101')
	,LastDateOfMonth = DATEADD(day,-1,DATEADD(month,0,DATEADD(month,DATEDIFF(month,'20000101',DATEADD(day, number, '20000101'))+1,'20000101')))
	,FirstDateOfQuarter = DATEADD(quarter, DATEDIFF(quarter,'20000101',DATEADD(day, number, '20000101')), '20000101')
	,LastDateOfQuarter = DATEADD(day, -1, DATEADD(quarter, DATEDIFF(quarter,'20000101',DATEADD(day, number, '20000101'))+1, '20000101'))
	,FirstDateOfYear = DATEADD(year, DATEDIFF(year,'20000101',DATEADD(day, number, '20000101')), '20000101')
	,LastDateOfYear = DATEADD(day,-1,DATEADD(year, DATEDIFF(year,'20000101',DATEADD(day, number, '20000101'))+1, '20000101'))
	--initially, all weekdays are business days
	,BusinessDay = CASE WHEN DATENAME(weekday, DATEADD(day, number, '20000101')) IN('Monday','Tuesday','Wednesday','Thursday','Friday') THEN 1 ELSE 0 END
	--all weekends are non-business days
	,NonBusinessDay = CASE WHEN DATENAME(weekday, DATEADD(day, number, '20000101')) IN('Saturday','Sunday') THEN 1 ELSE 0 END
	,Weekend = CASE WHEN DATENAME(weekday, DATEADD(day, number, '20000101')) IN('Saturday','Sunday') THEN 1 ELSE 0 END
	,Holiday = 0 --initially no holidays
	,Weekday = CASE WHEN DATENAME(weekday, DATEADD(day, number, '20000101')) IN('Monday','Tuesday','Wednesday','Thursday','Friday') THEN 1 ELSE 0 END
	,CalendarDateDescription = NULL
FROM numbers
WHERE number < DATEDIFF(day, '20000101', '20991231') + 1;

--New Year's Day
UPDATE dbo.Calendar
SET
	BusinessDay = 0
	,NonBusinessDay = 1
	,Holiday = 1
    ,CalendarDateDescription = 'New Year''s Day'
WHERE
    CalendarMonth = 1
    AND CalendarDay = 1;

--New Year's Day celebrated on Friday, December 31 when January 1 falls on Saturday
UPDATE dbo.Calendar
SET
	BusinessDay = 0
	,NonBusinessDay = 1
    ,CalendarDateDescription = 'New Year''s Day Celebrated'
WHERE
    CalendarMonth = 12
    AND CalendarDay = 31
    AND DayOfWeekName = 'Friday';

--New Year's Day celebrated on Monday, January 2 when January 1 falls on Sunday
UPDATE dbo.Calendar
SET
	BusinessDay = 0
	,NonBusinessDay = 1
    ,CalendarDateDescription = 'New Year''s Day Celebrated'
WHERE
    CalendarMonth = 1
    AND CalendarDay = 2
    AND DayOfWeekName = 'Monday';    

--Martin Luther King Day - 3rd Monday in January
UPDATE dbo.Calendar
SET
	BusinessDay = 0
	,NonBusinessDay = 1
	,Holiday = 1
    ,CalendarDateDescription = 'Martin Luther King Day'
WHERE
    CalendarMonth = 1
    AND DayOfWeekName = 'Monday'
    AND (SELECT COUNT(*) 
		FROM dbo.Calendar c2 
        WHERE 
            c2.CalendarDate <= Calendar.CalendarDate
            AND c2.CalendarYear = Calendar.CalendarYear
            AND c2.CalendarMonth = Calendar.CalendarMonth
            AND c2.DayOfWeekName = 'Monday'
        ) = 3;

--President's Day - 3rd Monday in February
UPDATE dbo.Calendar
SET
	BusinessDay = 0
	,NonBusinessDay = 1
	,Holiday = 1
    ,CalendarDateDescription = 'President''s Day'
WHERE
    CalendarMonth = 2
    AND DayOfWeekName = 'Monday'
    AND (SELECT COUNT(*) 
		FROM dbo.Calendar c2 
        WHERE 
            c2.CalendarDate <= Calendar.CalendarDate
            AND c2.CalendarYear = Calendar.CalendarYear
            AND c2.CalendarMonth = Calendar.CalendarMonth
            AND c2.DayOfWeekName = 'Monday'
        ) = 3;

--Easter - first Sunday after the full moon following the vernal (March 21) equinox 
WITH
	t4 AS (SELECT 0 AS n UNION ALL SELECT 0 UNION ALL SELECT 0 UNION ALL SELECT 0)
    ,t256 AS (SELECT 0 AS n FROM t4 t4a, t4 t4b, t4 t4c, t4 t4d)
    ,years AS (SELECT ROW_NUMBER() OVER(ORDER BY n) + 1999 AS year FROM t256)
	,n AS (SELECT years.year, years.year - (19 * (years.year / 19)) AS n FROM years)
	,century AS (SELECT years.year, years.year / 100 AS century FROM years)
	,i AS (SELECT century.year, century.century - (century.century / 4) - ((century.century - ((century.century - 17) / 25)) / 3) + (19 * n.n) + 15  AS i
		FROM century 
		JOIN n ON n.year = century.year)
	,i2 AS (SELECT i.year, i.i - (30 * (i.i / 30 ) ) AS i2
		FROM i)
	,i3 AS (SELECT i2.year, i2.i2 - ((i2.i2 / 28) * (1 - (i2.i2 / 28) * (29 / (i2.i2 + 1)) * ((21 - n.n) / 11)) ) AS i3 
		FROM i2
		JOIN n ON n.year = i2.year)
	,j AS (SELECT i3.year, i3.year + (i3.year / 4) + i3.i3 + 2 - century.century + (century.century / 4 ) AS j 
		FROM i3
		JOIN century ON century.year = i3.year)
	,j2 AS (SELECT j.year, j.j - (7 * (j.j / 7) ) AS j2 
		FROM j)
	,month AS (SELECT j2.year, 3 + (((i3.i3 - j2.j2) + 40) / 44 ) AS month 
		FROM j2
		JOIN i3 ON i3.year = j2.year)
	,day AS (SELECT month.year, month.month, i3.i3 - j2.j2 + 28 - (31 * ( month.month / 4 ) ) AS day 
		FROM i3
		JOIN j2 ON j2.year = i3.year
		JOIN month ON month.year = j2.year)
	,easter AS (SELECT CAST(DATEADD(year, month.year-1900, DATEADD(day, day.day-1, DATEADD(month, month.month-1, ''))) AS date) AS easter
		FROM month
		JOIN day ON day.month = month.month AND day.year = month.year)
UPDATE dbo.Calendar
SET
	Holiday = 1
    ,CalendarDateDescription = 'Easter'
WHERE
    CalendarDate IN(
		SELECT easter
		FROM easter
		);

--Good Friday - 2 days before Easter Sunday
UPDATE dbo.Calendar
SET
	BusinessDay = 0
	,NonBusinessDay = 1
	,Holiday = 1
    ,CalendarDateDescription = 'Good Friday'
WHERE
    CalendarDate IN(
        SELECT DATEADD(day, -2, c2.CalendarDate)
        FROM dbo.Calendar c2
        WHERE c2.CalendarDateDescription = 'Easter'
        );

--Memorial Day - last Monday in May
UPDATE dbo.Calendar
SET
	BusinessDay = 0
	,NonBusinessDay = 1
	,Holiday = 1
    ,CalendarDateDescription = 'Memorial Day'
WHERE
    CalendarMonth = 5
    AND DayOfWeekName = 'Monday'
    AND CalendarDate IN(
        SELECT MAX(c2.CalendarDate)
        FROM dbo.Calendar c2
        WHERE
            c2.CalendarYear = Calendar.CalendarYear
            AND c2.CalendarMonth = 5
            AND c2.DayOfWeekName = 'Monday'
        );

--Independence Day - July 4th
UPDATE dbo.Calendar
SET
	BusinessDay = 0
	,NonBusinessDay = 1
	,Holiday = 1
    ,CalendarDateDescription = 'Independence Day'
WHERE
    CalendarMonth = 7
    AND CalendarDay = 4;

--Independence Day celebrated on Friday, July 3 when July 4 falls on a Saturday
UPDATE dbo.Calendar
SET
	BusinessDay = 0
	,NonBusinessDay = 1
    ,CalendarDateDescription = 'Independence Day Celebrated'
WHERE
    CalendarMonth = 7
    AND CalendarDay = 3
    AND DayOfWeekName = 'Friday';

--Independence Day celebrated on Monday, July 5 when July 4 falls on a Sunday
UPDATE dbo.Calendar
SET
	BusinessDay = 0
	,NonBusinessDay = 1
    ,CalendarDateDescription = 'Independence Day Celebrated'
WHERE
    CalendarMonth = 7
    AND CalendarDay = 5
    AND DayOfWeekName = 'Monday';

--Labor Day - first Monday in September
UPDATE dbo.Calendar
SET
	BusinessDay = 0
	,NonBusinessDay = 1
	,Holiday = 1
    ,CalendarDateDescription = 'Labor Day'
WHERE
    CalendarMonth = 9
    AND DayOfWeekName = 'Monday'
    AND CalendarDate IN(
        SELECT MIN(c2.CalendarDate)
        FROM dbo.Calendar c2
        WHERE
            c2.CalendarYear = Calendar.CalendarYear
            AND c2.CalendarMonth = 9
            AND c2.DayOfWeekName = 'Monday'
        );

--Columbus Day - second Monday in October
UPDATE dbo.Calendar
SET
	Holiday = 1
    ,CalendarDateDescription = 'Columbus Day'
WHERE
    CalendarMonth = 10
    AND DayOfWeekName = 'Monday'
    AND (SELECT COUNT(*) 
		FROM dbo.Calendar c2 
        WHERE 
            c2.CalendarDate <= Calendar.CalendarDate
            AND c2.CalendarYear = Calendar.CalendarYear
            AND c2.CalendarMonth = Calendar.CalendarMonth
            AND c2.DayOfWeekName = 'Monday'
        ) = 2;

--Veteran's Day - November 11
UPDATE dbo.Calendar
SET
	Holiday = 1
    ,CalendarDateDescription = 'Veteran''s Day'
WHERE
    CalendarMonth = 11
    AND CalendarDay = 11;

--Thanksgiving - fourth Thursday in November
UPDATE dbo.Calendar
SET
	BusinessDay = 0
	,NonBusinessDay = 1
	,Holiday = 1
    ,CalendarDateDescription = 'Thanksgiving'
WHERE
    CalendarMonth = 11
    AND DayOfWeekName = 'Thursday'
    AND (SELECT COUNT(*)
		FROM dbo.Calendar c2
        WHERE 
            c2.CalendarDate <= Calendar.CalendarDate
            AND c2.CalendarYear = Calendar.CalendarYear
            AND c2.CalendarMonth = Calendar.CalendarMonth
            AND c2.DayOfWeekName = 'Thursday'
        ) = 4;

UPDATE dbo.Calendar
SET
	BusinessDay = 0
	,NonBusinessDay = 1
	,Holiday = 1
    ,CalendarDateDescription = 'Day after Thanksgiving'
WHERE
    CalendarDate IN(
        SELECT DATEADD(day, 1, c2.CalendarDate)
        FROM dbo.Calendar c2
        WHERE c2.CalendarDateDescription = 'Thanksgiving'
        );

--Christmas Day - December 25th
UPDATE dbo.Calendar
SET
	BusinessDay = 0
	,NonBusinessDay = 1
	,Holiday = 1
    ,CalendarDateDescription = 'Christmas Day'
WHERE
    CalendarMonth = 12
    AND CalendarDay = 25;

--Christmas day celebrated on Friday, December 24 when December 25 falls on a Saturday
UPDATE dbo.Calendar
SET
	BusinessDay = 0
	,NonBusinessDay = 1
    ,CalendarDateDescription = 'Christmas Day Celebrated'
WHERE
    CalendarMonth = 12
    AND CalendarDay = 24
    AND DayOfWeekName = 'Friday';

--Christmas day celebrated on Monday, December 26 when December 25 falls on a Sunday
UPDATE dbo.Calendar
SET
	BusinessDay = 0
	,NonBusinessDay = 1
    ,CalendarDateDescription = 'Christmas Day Celebrated'
WHERE
    CalendarMonth = 12
    AND CalendarDay = 26
    AND DayOfWeekName = 'Monday';

Script 3: Create and populate calendar table and update with holidays

 

Ad-Hoc Rollup by date/time Interval

I often use aggregate queries to roll up data by an arbitrary date/time interval.  I’ll share some techniques I use to accomplish the task, all using the sample table below:

CREATE TABLE dbo.WebStats
(
      RequestTimestamp datetime NOT NULL,
      Page varchar(255) NOT NULL
);
CREATE CLUSTERED INDEX WebStats_cdx ON dbo.WebStats(RequestTimestamp, Page);

INSERT INTO dbo.WebStats (RequestTimestamp, Page)
VALUES
      ('2010-01-01T00:00:00', 'Default.aspx')
      ,('2010-01-01T00:00:15', 'Default.aspx')
      ,('2010-01-01T00:01:05', 'Order.aspx')
      ,('2010-01-01T00:01:30', 'Default.aspx')
      ,('2010-01-01T00:01:40', 'OrderStatus.aspx')
      ,('2010-01-01T00:02:05', 'Default.aspx')
      ,('2010-01-01T00:03:05', 'ProductInfo.aspx')
      ,('2010-01-01T00:03:30', 'Default.aspx');
GO

Simple Rollup

Without an auxiliary table, a little DATEADD magic can do the trick.  Here’s an example that summarizes web page requests by minute for the specified date/time range:

DECLARE
      @StartTimestamp datetime = '2010-01-01T00:00:00'
      ,@EndTimestamp datetime = '2010-01-02T00:00:00';

SELECT
      DATEADD(minute, DATEDIFF(minute, @StartTimestamp, RequestTimestamp), @StartTimestamp) AS Interval,
      COUNT(*) AS PageRequests
FROM dbo.WebStats
WHERE
      RequestTimestamp >= @StartTimestamp
      AND RequestTimestamp < @EndTimestamp
GROUP BY
      DATEADD(minute, DATEDIFF(minute, @StartTimestamp, RequestTimestamp), @StartTimestamp)
ORDER BY
      Interval;

 

Results (these apparently reflect additional sample rows beyond the few inserted above):

Interval                   PageRequests
2010-01-01 00:00:00.000    2
2010-01-01 00:01:00.000    3
2010-01-01 00:02:00.000    1
2010-01-01 00:03:00.000    2
2010-01-01 00:29:00.000    1
2010-01-01 00:31:00.000    1
2010-01-01 00:42:00.000    1
2010-01-01 02:01:00.000    2
2010-01-01 02:03:00.000    2
2010-01-01 02:31:00.000    1
2010-01-01 02:44:00.000    1
2010-01-01 02:49:00.000    1

 

Arbitrary Intervals

The simple rollup method works well for any of the pre-defined units provided by the DATEADD function (year, quarter, month, day, hour, minute, second or week).  However, it lacks the flexibility to roll up to an arbitrary interval like 15 minutes or 30 seconds.  A little DATEADD/DATEDIFF math addresses this gap: integer division of the elapsed seconds by the interval length, multiplied back by the interval length, truncates each timestamp down to the start of its interval.  Below is an example of a 30-minute interval rollup using this technique:

DECLARE
      @StartTimestamp datetime = '2010-01-01T00:00:00'
      ,@EndTimestamp datetime = '2010-01-01T04:00:00'
      ,@IntervalSeconds int = 1800; --30 minutes

SELECT
      DATEADD(second
            ,DATEDIFF(second, @StartTimestamp, RequestTimestamp) / @IntervalSeconds * @IntervalSeconds
            ,@StartTimestamp) AS Interval
      ,COUNT(*) AS PageRequests
FROM dbo.WebStats
WHERE
      RequestTimestamp >= @StartTimestamp
      AND RequestTimestamp < @EndTimestamp
GROUP BY
      DATEADD(second
            ,DATEDIFF(second, @StartTimestamp, RequestTimestamp) / @IntervalSeconds * @IntervalSeconds
            ,@StartTimestamp)
ORDER BY
      Interval;

 

Results:

Interval                   PageRequests
2010-01-01 00:00:00.000    9
2010-01-01 00:30:00.000    2
2010-01-01 02:00:00.000    4
2010-01-01 02:30:00.000    3

 

Missing Intervals

You probably noticed that periods with no activity at all are omitted rather than reporting a zero value.  One method to include the missing intervals is with an outer join to a temporal table containing all the desired intervals.  Ideally, the temporal table would be a permanent one but I’ve found it impractical to maintain such a table for ad-hoc needs.  Fortunately, a utility numbers CTE is a handy way to generate the needed intervals dynamically.  The example below provides up to 65,536 interval values and can be easily extended as needed.

DECLARE
      @StartTimestamp datetime = '2010-01-01T00:00:00'
      ,@EndTimestamp datetime = '2010-01-01T04:00:00'
      ,@IntervalSeconds int = 1800; --30 minutes

WITH
      T2 AS (SELECT 0 AS Num UNION ALL SELECT 0),
      T4 AS (SELECT 0 AS Num FROM T2 AS A CROSS JOIN T2 AS B),
      T256 AS (SELECT 0 AS Num FROM T4 AS A CROSS JOIN T4 AS B CROSS JOIN T4 AS C CROSS JOIN T4 AS D),
      T65536 AS (SELECT ROW_NUMBER() OVER(ORDER BY A.Num) AS Num FROM T256 AS A CROSS JOIN T256 AS B)
SELECT
      DATEADD(second, (Num - 1) * @IntervalSeconds, @StartTimestamp) AS Interval
      ,COUNT(WebStats.RequestTimestamp) AS PageRequests
FROM T65536
LEFT JOIN dbo.WebStats ON
      WebStats.RequestTimestamp >= DATEADD(second, (Num - 1) * @IntervalSeconds, @StartTimestamp)
      AND WebStats.RequestTimestamp < DATEADD(second, Num * @IntervalSeconds, @StartTimestamp)
WHERE
      Num <= DATEDIFF(second, @StartTimestamp, @EndTimestamp) / @IntervalSeconds
GROUP BY
      DATEADD(second, (Num - 1) * @IntervalSeconds, @StartTimestamp)
ORDER BY
      Interval;

 

Results:

Interval                   PageRequests
2010-01-01 00:00:00.000    9
2010-01-01 00:30:00.000    2
2010-01-01 01:00:00.000    0
2010-01-01 01:30:00.000    0
2010-01-01 02:00:00.000    4
2010-01-01 02:30:00.000    3
2010-01-01 03:00:00.000    0
2010-01-01 03:30:00.000    0

 

Conditional INSERT/UPDATE Race Condition

I often see conditional INSERT/UPDATE code like:

CREATE PROCEDURE dbo.Insert_Or_Update_Foo
      @ID int,
      @Bar int
AS

SET NOCOUNT ON

IF EXISTS(SELECT * FROM dbo.Foo WHERE ID = @ID)
BEGIN
      UPDATE dbo.Foo
      SET Bar = @Bar
      WHERE ID = @ID
END
ELSE
BEGIN
      INSERT INTO dbo.Foo (ID, Bar)
      VALUES (@ID, @Bar)
END

RETURN @@ERROR

 

Despite its prevalence, this “UPSERT” code is fundamentally flawed because it assumes only a single user will execute the proc at a time.  When the proc is executed simultaneously on different connections with the same @ID value, a primary key violation can occur instead of the intended UPDATE.

An even worse situation is when a surrogate key (e.g. IDENTITY) is used as the primary key and the conditional insert is based on a natural key that has no unique constraint or unique index.  In that case, both INSERTs will succeed but duplicate data (same natural key) will be inserted.  A unique constraint (or unique index) should of course be specified on every key but that’s sometimes overlooked.

You can test the concurrency problem yourself by creating the following table for use with the above stored procedure on a multi-processor box:

CREATE TABLE dbo.Foo
(
      ID int NOT NULL
            CONSTRAINT PK_Foo PRIMARY KEY,
      Bar int NOT NULL
)

Then run the following script on different database connections after changing the WAITFOR TIME to the near future:

WAITFOR TIME '08:00:00'

EXEC dbo.Insert_Or_Update_Foo
      @ID = 1,
      @Bar = 1

I ran this test and got a primary key violation because parallel execution of the IF EXISTS test returned false on both connections so the INSERT was attempted on both. 

Fortunately, there’s a simple solution to prevent the error:  1) add an explicit transaction and 2) specify SELECT locking hints.  Below is a modified version of the original stored procedure with these changes:

CREATE PROCEDURE dbo.Insert_Or_Update_Foo
      @ID int,
      @Bar int
AS

SET NOCOUNT, XACT_ABORT ON

BEGIN TRAN

IF EXISTS(SELECT * FROM dbo.Foo WITH (UPDLOCK, HOLDLOCK) WHERE ID = @ID)
BEGIN
      UPDATE dbo.Foo
      SET Bar = @Bar
      WHERE ID = @ID
END
ELSE
BEGIN
      INSERT INTO dbo.Foo (ID, Bar)
      VALUES (@ID, @Bar)
END

COMMIT

RETURN @@ERROR

The UPDLOCK hint instructs SQL Server to use an update lock instead of the shared lock that would normally be acquired for the SELECT.  HOLDLOCK is needed in the default READ COMMITTED isolation level to ensure that the lock is held until the end of the transaction.  Because update locks are incompatible with one another, concurrent executions of the procedure serialize on the IF EXISTS test instead of failing.  If the row exists during the SELECT, the locked resource is the existing row.  If no row exists with the specified key, a range lock on the primary key is acquired to prevent inserts of the same key until the lock is released.

Although not actually required, I also added SET XACT_ABORT ON to help ensure the explicit transaction is closed following a timeout or unexpected error.  See Use Caution with Explicit Transactions in Stored Procedures to see why I recommend this as a standard practice.

I should add that this explicit transaction and locking hints technique will prevent the above-mentioned problems but it doesn’t technically prevent a race condition.  The issue with any “UPSERT”/MERGE technique is that the last one in the race wins.  If you run the above proc simultaneously on different connections with the same @ID value but different @Bar values, the last Bar value UPDATE will of course overwrite any previous Bar value.

The issue doesn’t apply only to IF statements.  Even intra-query subqueries like the example below can fall victim: just like IF EXISTS, there is nothing to prevent the race when the NOT EXISTS predicate is evaluated simultaneously on different connections.

CREATE PROCEDURE dbo.Insert_Foo
      @ID int,
      @Bar int
AS

SET NOCOUNT ON

INSERT INTO dbo.Foo (ID, Bar)
SELECT @ID, @Bar
WHERE NOT EXISTS
      (
      SELECT *
      FROM dbo.Foo
      WHERE ID = @ID
      )

RETURN @@ERROR

 

Below is the modified proc with the transaction and locking hint changes:

CREATE PROCEDURE dbo.Insert_Foo
      @ID int,
      @Bar int
AS

SET NOCOUNT, XACT_ABORT ON

BEGIN TRAN

INSERT INTO dbo.Foo (ID, Bar)
SELECT @ID, @Bar
WHERE NOT EXISTS
      (
      SELECT *
      FROM dbo.Foo WITH (UPDLOCK, HOLDLOCK)
      WHERE ID = @ID
      )

COMMIT

RETURN @@ERROR

 

I’m not certain whether the SQL 2008 MERGE statement suffers from the same concurrency issues since I don’t currently have an adequate (multi-processor) SQL 2008 test environment.  I plan to install the next SQL 2008 CTP on a real (not virtual) machine, test the proc below, and post the results.

MERGE Stored Procedure

Use Caution with Explicit Transactions in Stored Procedures

Explicit transactions are often used within stored procedures to guarantee all-or-nothing data integrity.  However, a little-known fact is that a query timeout will leave the transaction open unless non-default session settings and/or special exception handling are used.  I’ll describe how to protect your application from problems following timeouts and other unexpected errors.

Consider the following stored procedure containing an explicit transaction:

CREATE PROCEDURE dbo.UpdateWithExplicitTransaction
	@MyKey int,
	@MyColumnValue int
AS

DECLARE @Error int

BEGIN TRAN

UPDATE dbo.Foo
SET MyColumn = @MyColumnValue
WHERE MyKey = @MyKey

SET @Error = @@ERROR
IF @Error <> 0
BEGIN
	GOTO Done
END

UPDATE dbo.Bar
SET MyColumn = @MyColumnValue
WHERE MyKey = @MyKey

SET @Error = @@ERROR
IF @Error <> 0
BEGIN
	GOTO Done
END

COMMIT

Done:

IF @Error <> 0
BEGIN
	ROLLBACK
END

RETURN @Error

Suppose you execute the script below from SQL Server Management Studio or Query Analyzer with the query timeout option set to 30 seconds, and the second UPDATE statement in the proc times out.

EXEC dbo.UpdateWithExplicitTransaction
	@MyKey = 1,			--illustrative parameter values
	@MyColumnValue = 1
PRINT 'execution completed'
GO
SELECT
	@@ERROR AS [@@ERROR],
	@@TRANCOUNT AS [@@TRANCOUNT]
GO

Assuming default session settings, check all that apply:

a) Proc execution continues after the failed UPDATE
b) @@ERROR is zero
c) @@TRANCOUNT is zero
d) The PRINT statement is executed

Let me first mention something important about timeouts before I provide the correct answer(s).  A command timeout occurs in the client application, not the SQL Server backend.  A timeout is basically just a cancel request that is sent by the client API when a command executes longer than the specified interval.  A timeout is very much like pressing the stop button in Query Analyzer or Management Studio because you feel a query has been running too long.  The only difference is that the stop is issued by the client API on behalf of the application.

Both “A” (proc continues) and “D” (PRINT executes) are false because the attention event from the client instructed SQL Server to cancel the currently executing batch in its entirety.  No code after the UPDATE executes, including the PRINT statement following the stored procedure execute.  This is logical since a query cancel or timeout wouldn’t be much use if SQL Server continued executing statements afterward.

“B” (zero @@ERROR) is true.  @@ERROR is zero because no error occurred on the backend; SQL Server successfully canceled the batch per the client cancel request after the timeout.  The timeout error is raised only on the client by the API to notify the application (SSMS/QA in this example) that the command timed out.  SSMS and QA simply catch the error and display the error message from the API.

“C” (zero @@TRANCOUNT) is false because XACT_ABORT OFF is the default session setting.  With XACT_ABORT OFF, it is the client application’s responsibility to trap the timeout error and roll back the transaction if necessary.  The transaction is left open and uncommitted following the timeout error.  This can have serious and undesirable consequences if the application performs other work on the connection, unaware of the open transaction.

Using SET XACT_ABORT

SET XACT_ABORT specifies what action SQL Server should take following run-time errors.  The default session setting is SET XACT_ABORT OFF, which indicates that only the Transact-SQL statement that raised the error is rolled back and the transaction continues.  Depending on the severity of the error, the entire transaction may be rolled back and the batch aborted, even when SET XACT_ABORT is OFF.

A side effect of SET XACT_ABORT OFF is that a cancel/timeout error can leave an open transaction, so it’s the client’s responsibility to clean up following a cancel/timeout.  To safeguard against leaving an open transaction, applications that execute transactions with SET XACT_ABORT OFF need to roll back transactions and perhaps close the connection following SQL exceptions.

Note that with connection pooling, simply closing the connection without a rollback will only return the connection to the pool, and the transaction will remain open until it is later reused or removed from the pool.  This can result in locks being held unnecessarily and cause further timeouts and blocking.

SET XACT_ABORT ON instructs SQL Server to roll back the entire transaction and abort the batch when a run-time error occurs.  Compile errors (e.g. syntax errors) are not affected by SET XACT_ABORT.

In my experience, SET XACT_ABORT ON provides the desired behavior in most cases.  I’ve never run into a situation where I wouldn’t want to roll back a transaction following a cancel or timeout.  I nearly always specify SET XACT_ABORT ON in stored procedures that contain explicit transactions to ensure that transactions are rolled back even if the application code doesn’t clean up properly.  The only time I don’t use XACT_ABORT is in rare cases where I need to trap and handle specific errors in Transact-SQL and continue.

I strongly recommend that SET XACT_ABORT ON be included in all stored procedures with explicit transactions unless you have a specific reason to do otherwise.  The consequences of an application unwittingly performing work on a connection with an open transaction are disastrous.

SQL Server error handling in general is a huge topic; I focused here only on timeout errors and SET XACT_ABORT.  For a thorough discussion of SQL Server error handling, I suggest perusing the articles Implementing Error Handling with Stored Procedures and Error Handling in SQL Server – a Background by SQL Server MVP Erland Sommarskog.