Keep Schema and Ownership Simple

I like to keep things simple because simplicity is easier to manage and less prone to error.  When I’m faced with schema design decisions, I pick the option with the least complexity that meets my objectives.  Here are some of my thoughts regarding schema and ownership in SQL 2005.

Schema

A schema is basically a container that categorizes database objects and simplifies security administration.  As a namespace, schemas logically organize objects without the need for special object naming rules.  Different objects can have the same name as long as they exist in different schemas because the schema name is essentially an extension of the object name that will “uniqueify” the name within a database. 

Categorizing objects by schema is particularly useful in complex databases with many objects.  There is some subjectivity on exactly how one might draw schema boundaries but the basic concept is the same: group related objects into separate schemas to organize an otherwise unwieldy collection of objects.  Classifying related objects by schema makes complex databases easier to understand and manage.
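
For illustration, here’s a minimal sketch of schemas acting as namespaces (the Sales and Marketing names are just examples):

CREATE SCHEMA Sales;
GO
CREATE SCHEMA Marketing;
GO
-- Same table name in both schemas; the schema name qualifies each object
CREATE TABLE Sales.Customer (CustomerID int NOT NULL PRIMARY KEY);
CREATE TABLE Marketing.Customer (CustomerID int NOT NULL PRIMARY KEY);
GO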

Schemas also simplify security administration because permissions can be granted en masse at the schema level.  For example, I can grant EXECUTE permissions on all objects in a schema with a single statement like “GRANT EXECUTE ON SCHEMA::Sales TO SalesRole”.  I can also grant CONTROL permissions on a schema to give privileged users full control over a specific schema but not others in the same database.
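
Continuing the illustrative Sales schema sketch above (the SalesAdminRole name is also hypothetical):

CREATE ROLE SalesRole;
CREATE ROLE SalesAdminRole;
GO
-- One statement covers all current and future objects in the schema
GRANT EXECUTE ON SCHEMA::Sales TO SalesRole;
-- Full control over the Sales schema only, not the rest of the database
GRANT CONTROL ON SCHEMA::Sales TO SalesAdminRole;
GO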

Even with the option to use multiple schemas, I tend to use the built-in dbo schema.  I do this because most of the applications I maintain were developed before SQL 2005 and all objects are already in the dbo schema.  Some of those legacy systems could benefit from multiple schemas but I’ll continue to use dbo for those applications to be consistent until I need to add a group of new objects that are appropriate for a separate schema.  The new SQL 2005 databases I’ve developed thus far have been fairly simple and haven’t warranted using multiple schemas for either classification or security purposes.

Ownership

The owner is important for two main reasons:  1) the owner has powerful CONTROL permissions over all owned objects and 2) the owner determines whether or not the ownership chain is broken between objects.  Schema-contained objects will inherit the schema owner unless explicitly overridden using ALTER AUTHORIZATION.  Personally, I think it best for objects to inherit the schema owner in the vast majority of cases; if an object warrants a different owner than the schema, the object probably belongs in a different schema.
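
Here’s a small sketch of that inheritance and override behavior (the Ops schema, AuditLog table and SomeOtherUser principal are hypothetical names):

CREATE USER SomeOtherUser WITHOUT LOGIN;
GO
-- Objects created in the schema inherit the schema owner (dbo here)
CREATE SCHEMA Ops AUTHORIZATION dbo;
GO
CREATE TABLE Ops.AuditLog (AuditID int NOT NULL PRIMARY KEY);
GO
-- Explicitly override the inherited owner for a single object...
ALTER AUTHORIZATION ON OBJECT::Ops.AuditLog TO SomeOtherUser;
-- ...or revert so the object inherits the schema owner again
ALTER AUTHORIZATION ON OBJECT::Ops.AuditLog TO SCHEMA OWNER;
GO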

I use the built-in dbo principal for ownership unless I have a reason to do otherwise.  This approach is perfect in environments where only db_owner role members can create objects and schemas are used solely as a namespace rather than a security boundary.  The dbo principal exists in all databases so there is no need to create a user or role for ownership purposes.  Simple is good.

Different schema owners provide a security boundary between objects in different schemas because different ownership breaks the ownership chain.  With a broken chain, the end user or the impersonated principal needs explicit permissions on indirectly referenced objects.  Different schema owners ensure that I don’t inadvertently provide access to data in other schemas via ownership chaining.

Note that an owner can be any database principal and does not necessarily need to be associated with a login.  I find this feature especially handy in situations where I want to specify an owner other than dbo.  Since I can specify an owner that is not a real person, I won’t need to change ownership if the owner leaves the company or moves on to other roles in the organization. 

It’s probably best to create a principal (role or a user without login) with the same name as the schema for ownership purposes when different owners are desired.  The only case I can think of where it might be appropriate for a real person to own a schema (at least initially) is in a development environment when non-dbo users might create schemas and objects.
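
A minimal sketch of that approach, using a principal with the same name as the schema (the Billing name is hypothetical):

-- A database principal not tied to any login or person
CREATE USER Billing WITHOUT LOGIN;
GO
-- The schema, and the objects that inherit its owner, belong to the Billing principal
CREATE SCHEMA Billing AUTHORIZATION Billing;
GO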

Database Owner Troubles

Do you know who owns your databases?  Execute sp_helpdb on your SQL Server instances and you might find some surprises under the “owner” column.  It isn’t uncommon to see accounts of people who have left the company or moved on to other roles in the organization that don’t require privileged database access.  Yet these owners still have full database permissions, including the ability to drop the database.  To prevent these security issues and other problems, consider establishing an appropriate database ownership standard for your environments.

Database ownership is an often forgotten detail because it is implicitly set to the database creator’s account.  The owner will initially be a Windows account or SQL login, depending on the authentication method used by the creator.  Note that the owner is always an individual account, not a group or role, so a database created by a sysadmin role member is actually owned by the creator’s individual account instead of a built-in security principal (unless the creator logged in using the “sa” account).

A Best Practice is to change the database owner immediately after creating, restoring or attaching a database.  Unless I have a reason to do otherwise, I specify “sa” as the database owner.  This can be done with sp_changedbowner in SQL 2000 or with ALTER AUTHORIZATION in SQL 2005:

SQL 2000:

EXEC MyDatabase..sp_changedbowner 'sa';

SQL 2005 and SQL 2008:

ALTER AUTHORIZATION ON DATABASE::MyDatabase TO sa;

The Significance of the Database Owner

Database ownership is important from a security perspective because the owner account is mapped to the built-in “dbo” user.   The “dbo” user, sysadmin role members and db_owner role members all have full database permissions and can also DROP the database.  The database owner is also used as the authorization of the “dbo” schema, which comes into play with ownership chaining.  With cross-database chaining, the databases involved must have the same owner in order to provide an unbroken chain for “dbo” schema objects.

A difference between the database owner and db_owner role members is that there is exactly one “dbo” user (the database owner) but there may be many users that are db_owner role members.  The owner’s account cannot be explicitly added to the database because the owner is already implicitly mapped to the “dbo” user and an account can be mapped to no more than one user per database.  If you attempt to add the owner as a database user, error message “The proposed new database owner is already a user or aliased in the database” results.


Troubleshooting Database Ownership

The database owner is ultimately identified by the account SID (security identifier).  The creator’s account SID is recorded in 2 places:  1)  at the server level in sys.databases/sysdatabases and  2)  in the database as the dbo user SID in sys.database_principals/sysusers.  These SIDs will normally match but can get out-of-sync following a database restore or attach.  You will also end up with a NULL database owner if the owner’s Windows account is deleted because of the orphaned SID.

Mismatched owner SIDs can result in problems such as:

  • Problems executing system stored procedures
  • Problems with tools
  • Broken cross-database ownership chains

The sample queries below will help identify problem database owners.  Problem owners will have mismatched SIDs and/or NULL owner names.  Fortunately, problem database ownership is easy to remedy.  Simply change the database owner using sp_changedbowner or ALTER AUTHORIZATION as you would after creating a new database.  In some cases, you might get an erroneous “The proposed new database owner is already a user or aliased in the database” error due to the mismatch.  A workaround in this situation is to temporarily change the database owner to a non-conflicting login and then back to the desired owner.
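
A sketch of that workaround (the login name is a placeholder):

-- Temporarily assign an owner that doesn't conflict with an existing database user...
ALTER AUTHORIZATION ON DATABASE::MyDatabase TO [SomeOtherLogin];
-- ...then set the intended owner
ALTER AUTHORIZATION ON DATABASE::MyDatabase TO sa;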


SQL 2000:

IF OBJECT_ID(N'tempdb..#owners', 'U') IS NOT NULL
      DROP TABLE #owners;

CREATE TABLE #owners
(
      database_name sysname NOT NULL,
      sys_databases_sid varbinary(85) NOT NULL,
      sys_databases_owner nvarchar(256) NULL,
      sys_users_sid varbinary(85) NULL,
      sys_users_owner nvarchar(256) NULL
);

INSERT INTO #owners
      (
            database_name,
            sys_databases_sid,
            sys_databases_owner
      )
      SELECT
            name,
            sid,
            SUSER_SNAME(sid)
      FROM master.dbo.sysdatabases;

EXEC sp_MSforeachdb '
      UPDATE #owners
      SET sys_users_sid = (
                  SELECT sid
                  FROM [?].dbo.sysusers
                  WHERE name = ''dbo''),
            sys_users_owner = (
                  SELECT SUSER_SNAME(sid)
                  FROM [?].dbo.sysusers
                  WHERE name = ''dbo'')
      WHERE database_name = ''?''
      ';

SELECT * FROM #owners
WHERE sys_databases_sid <> sys_users_sid;

IF OBJECT_ID(N'tempdb..#owners', 'U') IS NOT NULL
      DROP TABLE #owners;
GO

SQL 2005:

IF OBJECT_ID(N'tempdb..#owners', 'U') IS NOT NULL
      DROP TABLE #owners;

CREATE TABLE #owners
(
      database_name sysname NOT NULL,
      sys_databases_sid varbinary(85) NOT NULL,
      sys_databases_owner nvarchar(256) NULL,
      sys_users_sid varbinary(85) NULL,
      sys_users_owner nvarchar(256) NULL
);

INSERT INTO #owners
      (
            database_name,
            sys_databases_sid,
            sys_databases_owner
      )
      SELECT
            name,
            owner_sid,
            SUSER_SNAME(owner_sid)
      FROM sys.databases;

EXEC sp_MSforeachdb '
      UPDATE #owners
      SET sys_users_sid = (
                  SELECT sid
                  FROM [?].sys.database_principals
                  WHERE name = ''dbo''),
            sys_users_owner = (
                  SELECT SUSER_SNAME(sid)
                  FROM [?].sys.database_principals
                  WHERE name = ''dbo'')
      WHERE database_name = ''?''
      ';

SELECT * FROM #owners
WHERE
      sys_databases_sid <> sys_users_sid
      OR sys_databases_owner IS NULL;

IF OBJECT_ID(N'tempdb..#owners', 'U') IS NOT NULL
      DROP TABLE #owners;
GO


Avoid Causing Problems with Profiler

The last thing you want to do is introduce instability when gathering useful performance and troubleshooting information via Profiler.  However, I’ve found that many DBAs are not aware that Profiler and SQL Trace need to be used carefully in order to minimize overhead that can negatively affect overall SQL Server performance and lead to problems like query timeouts.  I want to get the word out on how to use these invaluable tools with minimal server impact.

First, here’s a little background on SQL Trace architecture.  SQL Trace runs in the SQL Server process to gather and filter traced events.  A trace can either write directly to a file (server-side trace) or return data directly to an application like Profiler in near real-time.  In both cases, traces are ultimately created and managed using documented stored procedures (sp_trace_create, sp_trace_setevent, sp_trace_setfilter and sp_trace_setstatus).  However, instead of calling these stored procedures directly, SQL 2005 Profiler uses the SMO API, which wraps these stored procedure calls as well as calls to an undocumented procedure to retrieve trace data.

When an event occurs inside of SQL Server, SQL Trace passes the event to all running traces that include the event.  Each trace then filters the event as specified by the trace definition and queues the selected event data, which is either written directly to a file or returned to the Profiler instance (or other SMO-based application).

Writing trace data directly to a file is very efficient since it’s done directly by SQL Server with minimal overhead.  The active trace file can be viewed using the fn_trace_gettable() table-valued function on the same instance as the trace.  However, the trace needs to be stopped or a file rollover must occur before the trace data can be imported into Profiler or accessed from another SQL Server instance using fn_trace_gettable().
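
For example, an active or completed trace file can be queried like this (the file path is a placeholder):

-- DEFAULT reads the initial file plus any rollover files
SELECT *
FROM sys.fn_trace_gettable(N'C:\Traces\MyTrace.trc', DEFAULT);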

Profiler, on the other hand, retrieves and processes trace data in near real time.  The drawback is that the interactive graphical display requires quite a bit more overhead than writing directly to a file and more resources are also needed to transfer the trace data from SQL Server to the application.  This overhead isn’t typically noticeable unless the trace generates a lot of events or the server is low on resources.  However, I’ve seen cases where a high-volume Profiler trace degrades overall server performance to the point of causing query timeouts and slow response time.  SQL Server MVP Linchi Shea’s post Performance Impact: Profiler Tracing vs. Server Side SQL Tracing provides metrics that validate my experiences.

I routinely use server-side SQL Trace scripts instead of Profiler on production servers to minimize trace resource usage.  Since it’s quite tedious to create a trace script from scratch, I use Profiler against a development server to create the desired trace and then script the trace from the Profiler menu (File > Export > Script Trace Definition).  I then change the script to specify the trace file path, max file size and rollover option.  I finally run the script on the production server to start the trace and make note of the TraceID that will be needed to stop and delete the trace with sp_trace_setstatus.  Function fn_trace_getinfo(DEFAULT) comes in handy if I forget the TraceID or want to list currently defined traces.
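
Here’s a rough sketch of that workflow; the file path is a placeholder and the event/column IDs shown (RPC:Completed = 10, TextData = 1, SPID = 12, Duration = 13) are what a Profiler-exported script would normally fill in for you:

DECLARE @TraceID int, @maxfilesize bigint, @on bit;
SET @maxfilesize = 50; -- MB
SET @on = 1;

-- Create the trace with file rollover (@options = 2); .trc is appended automatically
EXEC sp_trace_create @TraceID OUTPUT, 2, N'C:\Traces\MyTrace', @maxfilesize, NULL;

-- Add the RPC:Completed event with a few columns
EXEC sp_trace_setevent @TraceID, 10, 1, @on;   -- TextData
EXEC sp_trace_setevent @TraceID, 10, 12, @on;  -- SPID
EXEC sp_trace_setevent @TraceID, 10, 13, @on;  -- Duration

-- Start the trace and make note of the TraceID
EXEC sp_trace_setstatus @TraceID, 1;
SELECT @TraceID AS TraceID;

-- Later: list defined traces, then stop (0) and close/delete (2) using the noted TraceID
SELECT * FROM sys.fn_trace_getinfo(DEFAULT);
-- EXEC sp_trace_setstatus @TraceID, 0;
-- EXEC sp_trace_setstatus @TraceID, 2;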

The Optimizing SQL Trace topic in Books Online lists some SQL Trace performance guidelines.  Here are some of those plus some of my own:

  • Run Profiler remotely instead of directly on the server
  • Avoid including events that occur frequently (e.g. Lock:Acquired) unless absolutely needed
  • Include only event classes needed
  • Specify limiting filters to reduce the number of events
  • Avoid redundant data (e.g. SQL:BatchStarting and SQL:BatchCompleted)
  • Avoid running large traces with Profiler; consider a server-side SQL Trace instead
  • Limit server-side trace file size and manage space usage

I hope these tips help you avoid introducing performance problems when using Profiler and SQL Trace.

Conditional INSERT/UPDATE Race Condition


I often see conditional INSERT/UPDATE code like:

CREATE PROCEDURE dbo.Insert_Or_Update_Foo
      @ID int,
      @Bar int
AS

SET NOCOUNT ON

IF EXISTS(SELECT * FROM dbo.Foo WHERE ID = @ID)
BEGIN
      UPDATE dbo.Foo
      SET Bar = @Bar
      WHERE ID = @ID
END
ELSE
BEGIN
      INSERT INTO dbo.Foo (ID, Bar)
      VALUES (@ID, @Bar)
END

RETURN @@ERROR

 

Despite its prevalence, this “UPSERT” code is fundamentally flawed because it assumes only a single user will execute the proc at a time.  When the proc is executed simultaneously on different connections with the same @ID value, a primary key violation error can occur instead of the intended UPDATE.

An even worse situation is when a surrogate key (e.g. IDENTITY) is used as the primary key and the conditional insert is based on a natural key that has no unique constraint or unique index.  In that case, both INSERTs will succeed but duplicate data (same natural key) will be inserted.  A unique constraint (or unique index) should of course be specified on every key but that’s sometimes overlooked.

You can test the concurrency problem yourself by creating the following table for use with the above stored procedure on a multi-processor box:

CREATE TABLE dbo.Foo
(
      ID int NOT NULL
            CONSTRAINT PK_Foo PRIMARY KEY,
      Bar int NOT NULL
)

Then run the following script on different database connections after changing the WAITFOR TIME to the near future:

WAITFOR TIME '08:00:00'

EXEC dbo.Insert_Or_Update_Foo
      @ID = 1,
      @Bar = 1

I ran this test and got a primary key violation because parallel execution of the IF EXISTS test returned false on both connections, so the INSERT was attempted on both.

Fortunately, there’s a simple solution to prevent the error:  1) add an explicit transaction and 2) specify SELECT locking hints.  Below is a modified version of the original stored procedure with these changes:

CREATE PROCEDURE dbo.Insert_Or_Update_Foo
      @ID int,
      @Bar int
AS

SET NOCOUNT, XACT_ABORT ON

BEGIN TRAN

IF EXISTS(SELECT * FROM dbo.Foo WITH (UPDLOCK, HOLDLOCK) WHERE ID = @ID)
BEGIN
      UPDATE dbo.Foo
      SET Bar = @Bar
      WHERE ID = @ID
END
ELSE
BEGIN
      INSERT INTO dbo.Foo (ID, Bar)
      VALUES (@ID, @Bar)
END

COMMIT

RETURN @@ERROR


The UPDLOCK hint instructs SQL Server to use an update lock instead of the shared lock that would normally be acquired for the SELECT.  HOLDLOCK is needed in the default READ COMMITTED isolation level to ensure that the lock is held until the end of the transaction.  This more restrictive update lock will prevent simultaneous queries from either selecting or changing the locked resource.  If the row exists during the SELECT, the locked resource is the existing row.  If no row exists with the specified key, a range lock on the primary key is acquired to prevent inserts of the same key until the lock is released. 

Although not actually required, I also added SET XACT_ABORT ON to help ensure the explicit transaction is closed following a timeout or unexpected error.  See Use Caution with Explicit Transactions in Stored Procedures to see why I recommend this as a standard practice.

I should add that this explicit transaction and locking hints technique will prevent the above mentioned problems but it doesn’t technically prevent a race condition.  The issue with any “UPSERT”/MERGE technique is that the last one in the race wins.  If you run the above proc simultaneously on different connections with the same @ID value but different @Bar values, the last Bar value UPDATE will of course overwrite any previous Bar value.

The issue doesn’t apply only to IF statements.  Even intra-query subqueries like the example below can fall victim to the issue.  Just like IF EXISTS, there is nothing to prevent the same concurrency problem when the NOT EXISTS predicate is evaluated simultaneously on different connections.

CREATE PROCEDURE dbo.Insert_Foo
      @ID int,
      @Bar int
AS

SET NOCOUNT ON

INSERT INTO dbo.Foo (ID, Bar)
SELECT @ID, @Bar
WHERE NOT EXISTS
      (
      SELECT *
      FROM dbo.Foo
      WHERE ID = @ID
      )

RETURN @@ERROR


Below is the modified proc with the transaction and locking hint changes:

CREATE PROCEDURE dbo.Insert_Foo
      @ID int,
      @Bar int
AS

SET NOCOUNT, XACT_ABORT ON

BEGIN TRAN

INSERT INTO dbo.Foo (ID, Bar)
SELECT @ID, @Bar
WHERE NOT EXISTS
      (
      SELECT *
      FROM dbo.Foo WITH (UPDLOCK, HOLDLOCK)
      WHERE ID = @ID
      )

COMMIT

RETURN @@ERROR


I’m not certain whether the SQL 2008 MERGE statement suffers from the same concurrency issues since I don’t currently have an adequate (multi-processor) SQL 2008 test environment.  I plan to install the next SQL 2008 CTP on a real (not virtual) machine and test the proc below.  I’ll post the results.

MERGE stored procedure (sketch):
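
The listing below is a sketch of such a MERGE-based upsert; the WITH (HOLDLOCK) hint is an assumption carried over from the earlier examples and is part of what the concurrency test needs to verify.

CREATE PROCEDURE dbo.Merge_Foo
      @ID int,
      @Bar int
AS

SET NOCOUNT, XACT_ABORT ON

-- HOLDLOCK serializes access to the target key, like UPDLOCK/HOLDLOCK in the earlier procs
MERGE dbo.Foo WITH (HOLDLOCK) AS target
USING (SELECT @ID AS ID, @Bar AS Bar) AS source
ON target.ID = source.ID
WHEN MATCHED THEN
      UPDATE SET Bar = source.Bar
WHEN NOT MATCHED THEN
      INSERT (ID, Bar) VALUES (source.ID, source.Bar);

RETURN @@ERROR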

Use Caution with Explicit Transactions in Stored Procedures


Explicit transactions are often used within stored procedures to guarantee all-or-nothing data integrity.  However, a little-known fact is that a query timeout will leave the transaction open unless non-default session settings and/or special exception handling are used.  I’ll describe how to protect your application from problems following timeouts and other unexpected errors.

Consider the following stored procedure containing an explicit transaction:

CREATE PROCEDURE dbo.UpdateWithExplicitTransaction
    @MyKey int,
    @MyColumnValue int
AS

DECLARE @Error int

BEGIN TRAN

UPDATE dbo.Foo
SET MyColumn = @MyColumnValue
WHERE MyKey = @MyKey

SET @Error = @@ERROR
IF @Error <> 0
BEGIN
    GOTO Done
END

UPDATE dbo.Bar
SET MyColumn = @MyColumnValue
WHERE MyKey = @MyKey

SET @Error = @@ERROR
IF @Error <> 0
BEGIN
    GOTO Done
END

COMMIT

Done:

IF @Error <> 0
BEGIN
    ROLLBACK
END

RETURN @Error

Suppose you execute the script below from SQL Server Management Studio or Query Analyzer with the query timeout option set to 30 seconds, and the second UPDATE statement in the proc times out.

EXEC dbo.UpdateWithExplicitTransaction @MyKey = 1, @MyColumnValue = 1
PRINT 'execution completed'
GO
SELECT
    @@ERROR AS [@@ERROR],
    @@TRANCOUNT AS [@@TRANCOUNT]
GO

Assuming default session settings, check all that apply:

  A. Proc execution continues after the failed UPDATE
  B. @@ERROR is zero
  C. @@TRANCOUNT is zero
  D. The PRINT statement is executed

Let me first mention something important about timeouts before I provide the correct answer(s).  A command timeout occurs in the client application, not the SQL Server backend.  A timeout is basically just a cancel request that is sent by the client API when a command executes longer than the specified interval.  A timeout is very much like pressing the stop button in Query Analyzer or Management Studio because you feel a query has been running too long.  The only difference is that the stop is issued by the client API on behalf of the application.

Both “A” (proc continues) and “D” (PRINT executes) are false because the attention event from the client instructed SQL Server to cancel the currently executing batch in its entirety.  No code after the UPDATE executes, including the PRINT statement following the stored procedure execute.  This is logical since a query cancel or timeout wouldn’t be much use if SQL Server continued executing statements afterward.

“B” (zero @@ERROR) is true.  @@ERROR is zero because no error occurred on the backend; SQL Server successfully canceled the batch per the client cancel request after the timeout.  The timeout error is raised only on the client by the API to notify the application (SSMS/QA in this example) that the command timed out.  SSMS and QA simply catch the error and display the error message from the API.

“C” (zero @@TRANCOUNT) is false because XACT_ABORT OFF is the default session setting.  With XACT_ABORT OFF, it is the client application’s responsibility to trap the timeout error and rollback the transaction if necessary.  The transaction is left open and uncommitted following the timeout error.  This can have serious and undesirable consequences if the application performs other work on the connection, unaware of the open transaction.

Using SET XACT_ABORT

SET XACT_ABORT specifies what action SQL Server should take following run-time errors.  The default session setting is SET XACT_ABORT OFF, which indicates that only the Transact-SQL statement that raised the error is rolled back and the transaction continues.  Depending on the severity of the error, the entire transaction may be rolled back and the batch aborted, even with SET XACT_ABORT OFF.

A side effect of SET XACT_ABORT OFF is that a cancel/timeout error can leave an open transaction, so it’s the client’s responsibility to clean up following a cancel or timeout.  To safeguard against leaving an open transaction, applications that execute transactions with SET XACT_ABORT OFF need to roll back transactions and perhaps close the connection following SQL exceptions.
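
For example, the cleanup that client code (or a connection-reset script) should run on the connection after a timeout can be as simple as:

IF @@TRANCOUNT > 0
    ROLLBACK TRANSACTION;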

Note that with connection pooling, simply closing the connection without a rollback will only return the connection to the pool, and the transaction will remain open until the connection is later reused or removed from the pool.  This can result in locks being held unnecessarily, causing timeouts and blocking for other sessions.

SET XACT_ABORT ON instructs SQL Server to rollback the entire transaction and abort the batch when a run-time error occurs.  Compile errors (e.g. syntax errors) are not affected by SET XACT_ABORT. 

In my experience, SET XACT_ABORT ON provides the desired behavior in most cases.  I’ve never run into a situation where I wouldn’t want to rollback a transaction following a cancel or timeout.   I nearly always specify SET XACT_ABORT ON in stored procedures that contain explicit transactions to ensure that transactions are rolled back even if the application code doesn’t clean up properly.  The only time I don’t use XACT_ABORT is in rare cases where I need to trap and handle specific errors in Transact-SQL and continue.

I strongly recommend that SET XACT_ABORT ON be included in all stored procedures with explicit transactions unless you have a specific reason to do otherwise.  The consequences of an application unwittingly performing work on a connection with an open transaction are disastrous.
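
For illustration, here is the earlier example procedure with that single safeguard added (a sketch; the statements and error checks are otherwise unchanged):

CREATE PROCEDURE dbo.UpdateWithExplicitTransaction
    @MyKey int,
    @MyColumnValue int
AS

-- Roll back and abort the batch on any run-time error, including a client cancel/timeout
SET XACT_ABORT ON

DECLARE @Error int

BEGIN TRAN

UPDATE dbo.Foo
SET MyColumn = @MyColumnValue
WHERE MyKey = @MyKey

SET @Error = @@ERROR
IF @Error <> 0
BEGIN
    GOTO Done
END

UPDATE dbo.Bar
SET MyColumn = @MyColumnValue
WHERE MyKey = @MyKey

SET @Error = @@ERROR
IF @Error <> 0
BEGIN
    GOTO Done
END

COMMIT

Done:

IF @Error <> 0
BEGIN
    ROLLBACK
END

RETURN @Error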

SQL Server error handling in general is a huge topic; I focused only on timeout errors and SET XACT_ABORT here.  For a thorough discussion of SQL Server error handling, I suggest perusing the articles Implementing Error Handling with Stored Procedures and Error Handling in SQL Server – a Background by SQL Server MVP Erland Sommarskog.

SQL Trace Parameter values are not always as they seem

I stumbled across surprising SQL Trace/Profiler behavior I think is worth mentioning.  Parameter values reported in trace RPC starting/completed events are not the values that SQL Server uses to execute the query.  Here’s an example I recently discovered that shows this behavior.

I ran the following C# code to execute parameterized query “SELECT @DateTime” with the parameter value set to October 11, 2007.  The console message verified that SQL Server returned the expected date.

[C# code listing]

Here is the SQL Profiler trace of the SQL:BatchCompleted and RPC:Completed events:

[Screenshot: Application SQL Trace]

I pasted the script from the trace and ran it from a SQL Server Management Studio query window.  Here’s the trace of the SSMS script execution:

[Screenshot: SSMS SQL Trace]

Even though the SQL looks the same, the SSMS query returned a different date (“November 10, 2007”) than the application (“October 11, 2007”)!  What’s up with that?  Why would the same script return different values when run from application code vs. Management Studio (or Query Analyzer), even with identical session settings?

The reason for the behavior difference is the trace event class.  The application trace showed an RPC:Completed event but the SSMS session trace showed SQL:BatchCompleted.  Both had the same “exec sp_executesql…” statement in the TextData.  Even though the trace reported the same TextData for both events, SQL Server processed the RPC differently than the SQL batch and the trace didn’t show the complete story for the RPC event.

The issue is that the datetime input parameter value for the RPC event was not really included in the SQL statement like the trace RPC:Completed event showed.  The actual datetime parameter value that SQL Server used during RPC execution was passed in native (i.e. binary) format via the low-level TDS protocol.  The trace TextData was only a reverse-engineered string representation of that value rather than the actual value SQL Server used.

In contrast, the batch sent by SSMS wasn’t parameterized so SQL Server needed to parse the datetime string value in the batch text.  The datetime string “2007-10-11 00:00:00:000” (generated by the trace, not passed by the app) is ambiguous and interpreted differently depending on the DATEFORMAT setting.  Due to the “DATEFORMAT DMY” setting in the script, the date string value was (mis)interpreted as November 10, 2007 instead of October 11, 2007.  The DATEFORMAT setting had no effect on the RPC (parameterized value) because the setting affects only string parsing and not the native datetime value provided by the application.
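
A quick way to see the string-parsing side of this in a query window (a small sketch):

SET DATEFORMAT DMY;

-- The dash-separated form honors DATEFORMAT: day and month swap, giving November 10, 2007
SELECT CAST('2007-10-11 00:00:00:000' AS datetime) AS DashSeparated;

-- The ISO 8601 form with the 'T' separator is unambiguous: October 11, 2007 regardless of DATEFORMAT
SELECT CAST('2007-10-11T00:00:00.000' AS datetime) AS Iso8601;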

I discovered this issue when I was helping a user who was working with a French language SQL Server (that’s why I specified DATEFORMAT DMY) troubleshoot a datetime parameter problem.  The trace data led me down the wrong path because it didn’t immediately click with me that 1) DATEFORMAT doesn’t affect RPC datetime parameters and 2) trace RPC TextData isn’t used for query execution anyway. 

I filed Connect feedback on this to suggest that SQL Trace/Profiler be changed to serialize RPC datetime values in a DATEFORMAT neutral format like “2007-10-11T00:00:00:000” or “20071011 00:00:00:000”.  This will provide the same behavior when the trace TextData SQL is executed as a batch and be a bit more intuitive when analyzing trace data.


Blog Post #1

Bill Graziano invited me to start a blog on www.sqlteam.com when he spoke at our June 2007 St. Louis SQL Server User Group.  I’ve been active online for over a decade answering questions in the Microsoft SQL Server Newsgroups and truly enjoy helping out in the SQL Server community.   I’d considered blogging before and didn’t want to make the commitment but, after mulling it over for a while, I think spending some of my time blogging is worthwhile.

I’m a jack-of-all-trades SQL Server DBA and developer going back to SQL Server 4.21a under Windows NT 3.1.  I expect most of my blog posts will be technical, inspired by issues from the newsgroups or by projects at work.  As I run into topics I think SQL Server professionals might find particularly interesting or useful, I plan to blog away.