SQL Server 2016 SP1 Standard Edition Enhancements

I seldom get excited about service packs but the changes released with SQL Server 2016 SP1 are the most significant I’ve seen in a SQL Server service pack in 20+ years. Microsoft announced this week at the Microsoft Connect(); developer conference that SQL Server 2016 SP1, which is available for download immediately, allows features previously available only in Enterprise/Developer Editions to be used in the lesser Standard, Web, Express, and LocalDB Editions too. Features like table partitioning, In-Memory OLTP, and columnstore are now options for developers and DBAs using SQL Server Standard Edition and even the free Express Edition in production. See "SQL Server 2016 Service Pack 1 (SP1) released!!!" for the complete matrix of programmability features by edition along with other cool SP1 information.

The implications are huge now that SQL Server has the same programmability surface area among editions. The choice of the production edition can be made independently based on operational needs rather than programmability features. Developers can use a free edition (i.e. LocalDB, Express, or Developer) without fear that a feature won't be available in production, as long as prod is running SQL Server 2016 SP1 or greater. DBAs can now choose the appropriate edition for production based on other considerations like advanced high availability, TDE, and auditing, as well as performance features like higher supported memory, more cores, and advanced scanning. This separation of concerns avoids the need to lock in the production edition early in the application lifecycle, making development easier and production implementation more flexible.

Real World Use Case Scenario
I work with an ISV with hundreds of customers running a mix of Standard and Enterprise Edition. Their needs vary widely, and SQL Server Enterprise Edition is not an option for some due to budget constraints. Tables are often quite large, so partitioning is required for manageability and, for their reporting workload, partitioning also improves the performance of large scans due to partition elimination. The rub is that table partitioning (and/or columnstore) is the right tool for the job but was not an option for customers on Standard Edition.
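
For readers new to the feature, a minimal table partitioning setup looks something like the sketch below. The names and date boundaries are mine for illustration, not the ISV's actual schema.

CREATE PARTITION FUNCTION PF_OrderDate (date)
AS RANGE RIGHT FOR VALUES ('20150101', '20160101', '20170101')

CREATE PARTITION SCHEME PS_OrderDate
AS PARTITION PF_OrderDate ALL TO ([PRIMARY])

-- illustrative table; the partitioning column must be part of the clustered index key
CREATE TABLE dbo.SalesOrder
(
      SalesOrderID int NOT NULL,
      OrderDate date NOT NULL,
      OrderTotal money NOT NULL,
      CONSTRAINT PK_SalesOrder PRIMARY KEY (SalesOrderID, OrderDate)
) ON PS_OrderDate (OrderDate)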

The ISV initially compromised and used view partitioning instead of table partitioning so that the same code would run regardless of edition. Although that provided the expected manageability benefits, there were downsides. Compilation time increased significantly as the number of partitioned view member tables grew, as did query plan complexity. This sometimes resulted in poor query plans against very large tables, which especially impacted their larger and most valued customers, most of whom were running Enterprise Edition.
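
For comparison, view partitioning in its simplest form looks roughly like this sketch (again, illustrative names): member tables with CHECK constraints on the partitioning column, combined by a UNION ALL view so the optimizer can eliminate member tables from a search.

-- one member table per year; the CHECK constraint is what enables elimination
CREATE TABLE dbo.SalesOrder_2015
(
      SalesOrderID int NOT NULL CONSTRAINT PK_SalesOrder_2015 PRIMARY KEY,
      OrderDate date NOT NULL
            CONSTRAINT CK_SalesOrder_2015
            CHECK (OrderDate >= '20150101' AND OrderDate < '20160101')
)

CREATE TABLE dbo.SalesOrder_2016
(
      SalesOrderID int NOT NULL CONSTRAINT PK_SalesOrder_2016 PRIMARY KEY,
      OrderDate date NOT NULL
            CONSTRAINT CK_SalesOrder_2016
            CHECK (OrderDate >= '20160101' AND OrderDate < '20170101')
)
GO

CREATE VIEW dbo.SalesOrderAll
AS
SELECT SalesOrderID, OrderDate FROM dbo.SalesOrder_2015
UNION ALL
SELECT SalesOrderID, OrderDate FROM dbo.SalesOrder_2016

Note that every new member table means the view must be altered, and compilation cost grows with the member count as described above.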

To address the problem before SQL Server 2016 SP1, the ISV added conditional code to the application so that either view or table partitioning could be used depending on the SQL Server edition. This wasn't ideal as it added code complexity and doubled the number of QA test cases for application features that performed partition maintenance. However, since the resulting benefits for their larger customers on Enterprise Edition were quite significant, the additional costs of development and testing were well justified.

Now that table partitioning is available in SQL Server 2016 SP1 Standard Edition, they plan to require SQL Server 2016 SP1 (or later) going forward, use table partitioning unconditionally, and perhaps introduce other features like columnstore that were previously Enterprise only. Not only will this simplify the code base and test cases, but customers on Standard Edition will be happier with their experience and can upgrade to Enterprise if they so choose without reinstalling or reconfiguring the application. It will of course take some time before all their customers upgrade to the latest product version and SQL Server 2016 SP1+, but the future is much brighter now.

Perform Due Diligence
If you are new to features previously available only in Enterprise Edition, I suggest you perform due diligence before using them. Memory-optimized features like columnstore and In-Memory OLTP require additional physical memory, and insufficient memory will be a production show-stopper. Make sure your hardware is sized appropriately regardless of edition and, in the case of editions other than Enterprise or Developer, that memory requirements don't exceed the maximum capacity limits for that edition. Although very powerful, In-Memory OLTP is a fundamentally different paradigm than you might be accustomed to, particularly regarding transactional behavior and isolation levels. Be sure you fully understand these features before using them in development or production.
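
As a starting point, a quick sanity check like the one below shows the edition and the physical memory SQL Server sees. This is just my ad hoc query, not an official sizing method, and the physical_memory_kb column requires SQL Server 2012 or later.

-- EngineEdition: 2 = Standard, 3 = Enterprise/Developer, 4 = Express
SELECT SERVERPROPERTY('Edition') AS Edition,
       SERVERPROPERTY('EngineEdition') AS EngineEdition,
       physical_memory_kb / 1024 AS PhysicalMemoryMB
FROM sys.dm_os_sys_info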

Summary
I hope these changes are enough motivation for you to consider upgrading to SQL Server 2016 SP1, especially if you are running Standard Edition or are currently on an older SQL Server version. Together with the fact that SQL Server 2016 just runs faster, the time and effort spent in upgrading is a solid investment that will pay dividends regardless of edition.

SQL Saturday – St. Louis 2016

Please join us for SQL Saturday, St. Louis 2016. We have 25 sessions scheduled covering a wide range of topics on the Microsoft Data Platform stack.

As in the past few years, the event will be held at the SLU Center for Workforce & Organizational Development, 3545 Lindell Blvd, Saint Louis, Missouri 63103. Visit the event home page for registration, directions, and session details.

See you there!

SSMS 2016 is Free

The generally available (GA) release of SQL Server Management Studio 2016 is free. No SQL Server license is required to install and use it for either development or production purposes. This GA version supports SQL Server versions 2008 through 2016 plus Azure SQL Database. You might be able to use it against earlier SQL Server versions too, but that's not officially supported and features will be limited. SSMS 2016, now based on the Visual Studio 2015 isolated shell, can be downloaded from the SSMS Download Page.

Prior to SQL Server 2016, one typically installed SSMS from the SQL Server installation media, and it was upgraded only by CUs, service packs, or SQL Server version upgrades. This changed with the release of SQL Server 2016; SSMS is now a separate download. After installation, regular updates are made available (roughly monthly) and can be installed from the SSMS Tools > Check for Updates... menu option. These frequent updates allow the tooling to stay aligned with both on-prem SQL Server and Azure SQL Database while delivering enhancements and bug fixes to SSMS much more quickly. I am quite impressed by the responsiveness of the SSMS development team in this regard, having used it since the initial preview versions based on the VS 2015 isolated shell were made available.

An important change in this GA version compared to earlier versions of SSMS is that SET XACT_ABORT ON is now the default session setting. That's a good thing, in my opinion, because it helps ensure one doesn't accidentally leave a transaction open in a query window following an error (I have many horror stories about that). But it does change script execution behavior because queries, with or without error handling, will generally not continue after an error with XACT_ABORT ON. Be aware of this when running scripts from SSMS. This and other SET options can be specified from Query > Query Options > Execution > Advanced. I strongly recommend keeping SET XACT_ABORT ON unless you have a specific reason not to do so.
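
Here's a minimal script that demonstrates the difference; run it in a scratch query window. With XACT_ABORT ON, the divide-by-zero error rolls back the transaction and halts the batch, so the final SELECT never runs; with it OFF, execution continues and the transaction is left open.

SET XACT_ABORT ON

BEGIN TRAN
SELECT 1 / 0 AS CauseAnError -- error aborts the batch and rolls back the transaction
SELECT @@TRANCOUNT AS OpenTranCount -- never executes when XACT_ABORT is ON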

Kudos to Microsoft for providing SSMS as a free tool, and to the SSMS development team for their ongoing commitment to SQL Server tooling improvements and timely delivery.

QUERYTRACEON Permissions

I recently had a SQL Server 2014 stored procedure that ran for several hours with high CPU usage. I narrowed the problem down to a single query with dozens of joins that was stuck in compilation. Suspecting an issue with the new cardinality estimator, I changed the database compatibility level from 120 (SQL Server 2014) to 110 (SQL Server 2012) so that the legacy CE would be used. The query then compiled immediately and executed quickly, which confirmed the new CE as the culprit of the excessive compilation times.
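
For reference, the compatibility level change is a one-liner; substitute your own database name:

ALTER DATABASE MyDatabase SET COMPATIBILITY_LEVEL = 110 -- use the legacy (pre-2014) CE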

OPTION (QUERYTRACEON 9481) to the Rescue
Changing the database compatibility level was a heavy-handed option I didn't want to pursue because some queries benefited from the new CE behavior. Instead, I specified trace flag 9481 using the OPTION (QUERYTRACEON 9481) query hint so that only the problem query used the legacy CE. This is an acceptable workaround for me until the bug is addressed. However, you might not be so lucky; a more common use case for OPTION (QUERYTRACEON 9481) is a performance regression with the new CE. Although not documented as far as I know, the QUERYTRACEON query hint requires sysadmin permissions unless executed from within a stored procedure. Consequently, QUERYTRACEON isn't an option when the hint is needed for queries run by non-sysadmins outside stored procs. I found SQL Server MVP Ola Hallengren's Connect suggestion to allow QUERYTRACEON with no additional permissions. I upvoted it and encourage you to do so too.
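
For illustration, the hint is simply appended to the problem query. The query below is a made-up stand-in; my real query had dozens of joins.

-- illustrative tables; only the OPTION clause matters here
SELECT o.OrderID, c.CustomerName
FROM dbo.CustomerOrder AS o
JOIN dbo.Customer AS c ON c.CustomerID = o.CustomerID
OPTION (QUERYTRACEON 9481) -- compile this query with the legacy cardinality estimator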

Beware Job Recruiting SCAM

I've had multiple reports of a scam in which persons used my identity to entice IT professionals into applying for positions they had no business recruiting for. The scammers later asked for a PayPal payment for their services. The authorities have been notified of this fraudulent activity.

I urge everyone to be vigilant with regard to social engineering scams like this. Beware of unsolicited emails from persons you don't know personally. It's easy for unscrupulous individuals to glean much information from publicly available sources (job descriptions, professional profiles, etc.) and tailor a message customized for a particular job seeker.

Stairway Series on SQLServerCentral.com

SQLServerCentral.com launched a new Stairway content series today, targeting specific areas of SQL Server.  Each Stairway includes a series of up to 12 levels focused on a specific SQL Server topic.  The goal is to guide DBAs and developers with little or no understanding of a subject through a sequence of tutorials in order to quickly gain the knowledge one needs to use a SQL Server feature confidently in a production environment.  Kalen Delaney, editor of the Stairway series, is one of the most respected experts in the world-wide SQL Server community.

I was flattered when Kalen gave me the opportunity to contribute to the series with a Stairway on Server-side Tracing.  For years I’ve cautioned against using Profiler indiscriminately both here as well as in the MSDN forums and newsgroups.  But it seems many DBAs still don’t differentiate between Profiler and server-side tracing.  I’m hoping this Server-side Tracing Stairway will empower DBAs with the knowledge to choose the right tool for the job.

My apologies for having gone dark for the last several months. The subject of this post is the primary reason; there are only so many hours in the day.

Don’t Bloat Proc Cache with Parameters

Most of us have long been aware that non-parameterized ad-hoc queries can pollute procedure cache with entries that will never be reused.  Each distinct query string is a separate cache entry and, unless the exact query with the same values is executed again, the plan will remain in cache unnecessarily until aged out.
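
A rough way to see how much cache such plans consume on your own server is to total the single-use ad-hoc entries. This is my own diagnostic query; what counts as "too much" is a judgment call.

-- total count and size of ad-hoc plans that were executed exactly once
SELECT COUNT(*) AS SingleUsePlans,
       SUM(CAST(size_in_bytes AS bigint)) / 1048576 AS SingleUseMB
FROM sys.dm_exec_cached_plans
WHERE objtype = 'Adhoc'
      AND usecounts = 1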

However, I was surprised to learn recently that even parameterized queries can bloat procedure cache if one isn't careful.  This was brought to my attention by SQL Server MVP Erland Sommarskog and confirmed by fellow MVP Adam Machanic, who experienced this problem at a customer site.

The issue is that parameter declarations are part of cache entry statement text.  Consider the following code:

string lastName = "Sommarskog";

SqlCommand selectCommand =
    new SqlCommand(
        "SELECT ContactID, FirstName, LastName " +
        "FROM AdventureWorks.Person.Contact " +
        "WHERE LastName = @LastName",
        connection);

selectCommand.Parameters.AddWithValue("@LastName", lastName);

SqlClient defined the parameter as nvarchar(10) because the supplied string value length was 10 and .NET strings are Unicode.  Below is the resulting statement text from sys.dm_exec_query_plan:

(@LastName nvarchar(10))SELECT ContactID, FirstName, LastName FROM AdventureWorks.Person.Contact WHERE LastName = @LastName

Now run the same code with a different last name value:

string lastName = "Machanic";

SqlCommand selectCommand =
    new SqlCommand(
        "SELECT ContactID, FirstName, LastName " +
        "FROM AdventureWorks.Person.Contact " +
        "WHERE LastName = @LastName",
        connection);

selectCommand.Parameters.AddWithValue("@LastName", lastName);

You probably guessed it – an additional cache entry was created for the same query due to the different name length:

(@LastName nvarchar(8))SELECT ContactID, FirstName, LastName FROM AdventureWorks.Person.Contact WHERE LastName = @LastName

It is likely that no more than a dozen or so cache entries will exist for this particular query since the only differentiator is the last name length.  However, with queries that contain multiple string parameters of varying lengths, the number of cache entries increases dramatically while the likelihood of reuse diminishes.  In Adam's case, the main culprit was a string containing a large list of values (between 1 and 1000+ items) that was split using a technique described in Erland's Arrays and Lists articles at http://www.sommarskog.se.
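
On SQL Server 2008 and later, the query_hash column makes this kind of bloat easy to spot because all the length variants of the same query hash to the same value. A diagnostic sketch (the threshold of 10 is arbitrary):

-- queries with many distinct cache entries, typically length variants
SELECT query_hash,
       COUNT(*) AS CacheEntries
FROM sys.dm_exec_query_stats
GROUP BY query_hash
HAVING COUNT(*) > 10
ORDER BY CacheEntries DESC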

Preventing Procedure Cache Bloat

An easy way to prevent unneeded procedure cache entries is to explicitly specify the maximum data length for string and other variable-length values.  In fact, it's a good idea to specify both the length and SqlDbType to ensure the parameter declaration matches the target column.  Not only will this help prevent cache bloat due to different parameter lengths, but the matching data type can also avoid the performance gotcha of a non-sargable expression caused by implicit data type conversion.

There are a number of techniques to specify the parameter length and SqlDbType in .NET.  For example, the code below uses an overloaded SqlCommand.Parameters.Add method.  If you use AddWithValue, be sure to set the Size and SqlDbType properties of the newly created parameter afterward.

string lastName = "Sommarskog";

SqlCommand selectCommand =
    new SqlCommand(
        "SELECT ContactID, FirstName, LastName " +
        "FROM AdventureWorks.Person.Contact " +
        "WHERE LastName = @LastName",
        connection);

selectCommand.Parameters.Add("@LastName", SqlDbType.NVarChar, 50).Value = lastName;

Keep Schema and Ownership Simple

I like to keep things simple because simplicity is easier to manage and less prone to error.  When I'm faced with schema design decisions, I pick the option with the least complexity that meets my objectives.  Here are some of my thoughts regarding schema and ownership in SQL 2005.

Schema

A schema is basically a container that categorizes database objects and simplifies security administration.  As a namespace, a schema logically organizes objects without the need for special object naming rules.  Different objects can have the same name as long as they exist in different schemas, because the schema name is essentially an extension of the object name that makes the name unique within a database.

Categorizing objects by schema is particularly useful in complex databases with many objects.  There is some subjectivity on exactly how one might draw schema boundaries but the basic concept is the same: group related objects into different schemas to bring organization to an otherwise unwieldy database.  Classifying related objects by schema makes complex databases easier to understand and manage.

Schemas also simplify security administration because permissions can be granted en masse at the schema level.  For example, I can grant EXECUTE permissions on all objects in a schema with a single statement like GRANT EXECUTE ON SCHEMA::Sales TO SalesRole.  I can also grant CONTROL permissions on a schema to give privileged users full control over a specific schema but not others in the same database.
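
A minimal sketch of that setup, with illustrative schema and role names:

CREATE SCHEMA Sales
GO
CREATE ROLE SalesRole
CREATE ROLE SalesAdminRole

GRANT EXECUTE ON SCHEMA::Sales TO SalesRole -- execute every proc in the schema
GRANT CONTROL ON SCHEMA::Sales TO SalesAdminRole -- full control over this schema only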

Even with the option to use multiple schemas, I tend to use the built-in dbo schema.  I do this because most of the applications I maintain were developed before SQL 2005 and all objects are already in the dbo schema.  Some of those legacy systems could benefit from multiple schemas but I’ll continue to use dbo for those applications to be consistent until I need to add a group of new objects that are appropriate for a separate schema.  The new SQL 2005 databases I’ve developed thus far have been fairly simple and haven’t warranted using multiple schemas for either classification or security purposes.

Ownership

The owner is important for two main reasons:  1) the owner has powerful CONTROL permissions over all owned objects and 2) the owner determines whether or not the ownership chain is broken between objects.  Schema-contained objects will inherit the schema owner unless explicitly overridden using ALTER AUTHORIZATION.  Personally, I think it best for objects to inherit the schema owner in the vast majority of cases; if an object warrants a different owner than the schema, the object probably belongs in a different schema.

I use the built-in dbo principal for ownership unless I have a reason to do otherwise.  This approach is perfect in environments where only db_owner role members can create objects and schemas are used solely as a namespace rather than a security boundary.  The dbo principal exists in all databases so there is no need to create a user or role for ownership purposes.  Simple is good.

Different schema owners provide a security boundary between objects in different schemas because a difference in owners breaks the ownership chain.  With a broken chain, the end user or the impersonated principal needs explicit permissions on indirectly referenced objects.  Different schema owners thus ensure that I don't inadvertently provide access to data in another schema via ownership chaining.

Note that an owner can be any database principal and does not necessarily need to be associated with a login.  I find this feature especially handy in situations where I want to specify an owner other than dbo.  Since I can specify an owner that is not a real person, I won’t need to change ownership if the owner leaves the company or moves on to other roles in the organization. 

It’s probably best to create a principal (role or a user without login) with the same name as the schema for ownership purposes when different owners are desired.  The only case I can think of where it might be appropriate for a real person to own a schema (at least initially) is in a development environment when non-dbo users might create schemas and objects.
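
Here's a sketch of that pattern using the Sales schema from the earlier example:

-- a user without login, named after the schema, that exists only to own it
CREATE USER Sales WITHOUT LOGIN
GO
ALTER AUTHORIZATION ON SCHEMA::Sales TO Sales
-- for a new schema, the owner can instead be set at creation time:
-- CREATE SCHEMA Sales AUTHORIZATION Sales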

Conditional INSERT/UPDATE Race Condition


I often see conditional INSERT/UPDATE code like:

CREATE PROCEDURE dbo.Insert_Or_Update_Foo
      @ID int,
      @Bar int
AS

SET NOCOUNT ON

IF EXISTS(SELECT * FROM dbo.Foo WHERE ID = @ID)
BEGIN
      UPDATE dbo.Foo
      SET Bar = @Bar
      WHERE ID = @ID
END
ELSE
BEGIN
      INSERT INTO dbo.Foo (ID, Bar)
      VALUES (@ID, @Bar)
END

RETURN @@ERROR

Despite its prevalence, this "UPSERT" code is fundamentally flawed because it assumes only a single user will execute the proc at a time.  When the proc is executed simultaneously on different connections with the same @ID value, a primary key violation error can occur instead of the intended UPDATE.

An even worse situation is when a surrogate key (e.g. IDENTITY) is used as the primary key and the conditional insert is based on a natural key that has no unique constraint or unique index.  In that case, both INSERTs will succeed but duplicate data (same natural key) will be inserted.  A unique constraint (or unique index) should of course be specified on every key but that's sometimes overlooked.
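
The safety net is a one-liner. With hypothetical table and column names, something like:

ALTER TABLE dbo.Customer -- hypothetical table with an IDENTITY surrogate primary key
      ADD CONSTRAINT UQ_Customer_CustomerCode UNIQUE (CustomerCode) -- the natural key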

You can test the concurrency problem yourself by creating the following table for use with the above stored procedure on a multi-processor box:

CREATE TABLE dbo.Foo
(
      ID int NOT NULL
            CONSTRAINT PK_Foo PRIMARY KEY,
      Bar int NOT NULL
)

Then run the following script on different database connections after changing the WAITFOR TIME to the near future:

WAITFOR TIME '08:00:00'

EXEC dbo.Insert_Or_Update_Foo
      @ID = 1,
      @Bar = 1

I ran this test and got a primary key violation because parallel execution of the IF EXISTS test returned false on both connections so the INSERT was attempted on both. 

Fortunately, there’s a simple solution to prevent the error:  1) add an explicit transaction and 2) specify SELECT locking hints.  Below is a modified version of the original stored procedure with these changes:

CREATE PROCEDURE dbo.Insert_Or_Update_Foo
      @ID int,
      @Bar int
AS

SET NOCOUNT, XACT_ABORT ON

BEGIN TRAN

IF EXISTS(SELECT * FROM dbo.Foo WITH (UPDLOCK, HOLDLOCK) WHERE ID = @ID)
BEGIN
      UPDATE dbo.Foo
      SET Bar = @Bar
      WHERE ID = @ID
END
ELSE
BEGIN
      INSERT INTO dbo.Foo (ID, Bar)
      VALUES (@ID, @Bar)
END

COMMIT

RETURN @@ERROR

The UPDLOCK hint instructs SQL Server to use an update lock instead of the shared lock that would normally be acquired for the SELECT.  HOLDLOCK is needed in the default READ COMMITTED isolation level to ensure that the lock is held until the end of the transaction.  This more restrictive update lock will prevent simultaneous queries from either selecting or changing the locked resource.  If the row exists during the SELECT, the locked resource is the existing row.  If no row exists with the specified key, a range lock on the primary key is acquired to prevent inserts of the same key until the lock is released. 

Although not actually required, I also added SET XACT_ABORT ON to help ensure the explicit transaction is closed following a timeout or unexpected error.  See Use Caution with Explicit Transactions in Stored Procedures to see why I recommend this as a standard practice.

I should add that this explicit transaction and locking hints technique will prevent the above-mentioned problems but it doesn't technically prevent a race condition.  The issue with any "UPSERT"/MERGE technique is that the last one in the race wins.  If you run the above proc simultaneously on different connections with the same @ID value but different @Bar values, the last Bar value UPDATE will of course overwrite any previous Bar value.

The issue doesn’t apply only to IF statements.  Even intra-query subqueries like the example below can fall victim to the issue.  Just like IF EXISTS, there is nothing to prevent the same concurrency problem when the NOT EXISTS predicate is evaluated simultaneously on different connections.

CREATE PROCEDURE dbo.Insert_Foo
      @ID int,
      @Bar int
AS

SET NOCOUNT ON

INSERT INTO dbo.Foo (ID, Bar)
SELECT @ID, @Bar
WHERE NOT EXISTS
      (
      SELECT *
      FROM dbo.Foo
      WHERE ID = @ID
      )

RETURN @@ERROR

Below is the modified proc with the transaction and locking hint changes:

CREATE PROCEDURE dbo.Insert_Foo
      @ID int,
      @Bar int
AS

SET NOCOUNT, XACT_ABORT ON

BEGIN TRAN

INSERT INTO dbo.Foo (ID, Bar)
SELECT @ID, @Bar
WHERE NOT EXISTS
      (
      SELECT *
      FROM dbo.Foo WITH (UPDLOCK, HOLDLOCK)
      WHERE ID = @ID
      )

COMMIT

RETURN @@ERROR

I'm not certain whether the SQL 2008 MERGE statement suffers from the same concurrency issues since I don't currently have an adequate (multi-processor) SQL 2008 environment to test with.  I plan to install the next SQL 2008 CTP on a real (not virtual) machine and test the proc below.  I'll post the results.

MERGE Stored Procedure
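
Below is a sketch of the MERGE proc I plan to test. The WITH (HOLDLOCK) hint applies the same serialization idea as the earlier procs; treat it as unverified until I post the results.

CREATE PROCEDURE dbo.Merge_Foo
      @ID int,
      @Bar int
AS

SET NOCOUNT, XACT_ABORT ON

-- HOLDLOCK (SERIALIZABLE) guards the key range between the match check and the insert
MERGE dbo.Foo WITH (HOLDLOCK) AS target
USING (SELECT @ID AS ID, @Bar AS Bar) AS source
ON target.ID = source.ID
WHEN MATCHED THEN
      UPDATE SET Bar = source.Bar
WHEN NOT MATCHED THEN
      INSERT (ID, Bar)
      VALUES (source.ID, source.Bar);

RETURN @@ERROR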

Blog Post #1

Bill Graziano invited me to start a blog on www.sqlteam.com when he spoke at our June 2007 St. Louis SQL Server User Group meeting.  I've been active online for over a decade answering questions in the Microsoft SQL Server newsgroups and truly enjoy helping out in the SQL Server community.  I'd considered blogging before but didn't want to make the commitment; after mulling it over for a while, though, I think spending some of my time blogging is worthwhile.

I’m a jack-of-all-trades SQL Server DBA and developer going back to SQL Server 4.21a under Windows NT 3.1.  I expect most of my blog posts will be technical, inspired by issues from the newsgroups or by projects at work.  As I run into topics I think SQL Server professionals might find particularly interesting or useful, I plan to blog away.