Archive | SQL Server

Fill Factor

The never-ending debate: what should your fill factor be set to?  There is no single right answer, but there are some things you can do to squeeze out at least a little more performance, depending on how the environment was set up for you.

I came into a position where the reporting setup had a *ton* of indexes already set to a fill factor of 90.  The person who decided to go the 90 route was no longer employed, and the reporting team just continued the tradition because they didn't know any better.

A quick query to run on the databases in question — this will restrict it to clustered indexes that are not set to be full:

SELECT sys.tables.name AS TableName
, sys.indexes.name AS IndexName
, sys.indexes.fill_factor
, i.rows
FROM sys.indexes
INNER JOIN sys.tables ON sys.tables.object_id = sys.indexes.object_id
INNER JOIN sys.partitions AS i ON sys.tables.object_id = i.object_id
AND i.index_id = 1
WHERE fill_factor BETWEEN 1 AND 99 AND sys.indexes.type_desc = 'CLUSTERED'

In my case, I found a couple of 20-million-row tables [on development] to experiment with; I just wanted to confirm that changing the fill factor would improve performance.  First, you have to find out how the procedures hitting that table perform before you make a change.  Using this query, I found a couple of stored procedures and tested their performance:

SELECT OBJECT_NAME(id) AS ProcedureName
FROM sys.syscomments
WHERE [text] LIKE '%InsertNameOfTableHere%'
AND OBJECTPROPERTY(id, 'IsProcedure') = 1

In my case, I like to turn stats on so I can get some real values:

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

EXEC ProcedureNameHere

Make a note of the CPU/elapsed times as well as the reads for the table in question.  In my case:
SQL Server Execution Times:
CPU time = 2871 ms, elapsed time = 61580 ms.

SQL Server Execution Times:
CPU time = 3385 ms, elapsed time = 62395 ms.

Table 'tblNameHere'. Scan count 1237, logical reads 144961, physical reads 538, read-ahead reads 63937, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.


I altered the fill factor.
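The command looks something like this; the index and table names here are placeholders, and 100 (fully packed pages) is simply the value that made sense for my read-heavy reporting tables:

ALTER INDEX [PK_tblNameHere] ON [dbo].[tblNameHere]
REBUILD WITH (FILLFACTOR = 100);

If you are on Enterprise Edition, you can also add ONLINE = ON to the WITH clause to keep the table available during the rebuild.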

After about thirty seconds [it depends upon the size of the table], it was finished.

I then ran the stored procedure again and got:
SQL Server Execution Times:
CPU time = 2855 ms, elapsed time = 45136 ms.

SQL Server Execution Times:
CPU time = 3323 ms, elapsed time = 45718 ms.

Table 'tblNameHere'. Scan count 1237, logical reads 126408, physical reads 459, read-ahead reads 63760, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.


That is a pretty significant difference for a very minor change.  We are not done, though; you want to do some due diligence with your nightly reorg process.  Hopefully you are using Ola Hallengren's maintenance solution.  Make sure you turn on the LogToTable='Y' parameter; this way, when reorgs happen, you have a log of them.  You can turn this off after a while, but it is pretty nice to have so that you can prove out your work.
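If you are calling IndexOptimize directly, turning on the logging looks like this (the database list and fragmentation thresholds below are illustrative, not a recommendation):

EXECUTE dbo.IndexOptimize
@Databases = 'USER_DATABASES',
@FragmentationLevel1 = 5,
@FragmentationLevel2 = 30,
@LogToTable = 'Y';

Each command the job runs then gets a row in the CommandLog table, which is what the query below reports on.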

After you have changed some fill factors, wait for the nightly Reorg to happen a couple of times.  Then run this query [change date as necessary]:
SELECT DatabaseName, ObjectName, IndexName, COUNT(1)
FROM CommandLog
WHERE CommandType <> 'UPDATE_STATISTICS'
AND StartTime > '2014-05-09 20:00:01.240'
GROUP BY DatabaseName, ObjectName, IndexName
HAVING COUNT(1) > 1
ORDER BY COUNT(1) DESC

This will give you a listing of all of the indexes that were rebuilt more than once.  If the ones you changed keep showing up, lower the fill factor by ~2 or so until you get to the point where they no longer have to be reorganized nightly.

You should also try to keep an eye on page splits, although they can be difficult to track down.  Jonathan Kehayias has a great post on using Extended Events to track them.

Questions? Comments?  Post!


One thing that has been tough for me as a DBA is the change in development styles from  using stored procedures to developers using LINQ instead.

In case you are not familiar with LINQ: essentially, the programming language of choice creates the queries for you on the fly, based upon your object model, which was in turn created from the database schema.

There are some things that are good to look at as far as optimization.  This is where we can assist the developers in getting the best performance out of something we have very little control over!

SELECT theText.[text] AS QueryText
, stats.execution_count AS ExecutionCount
, stats.max_elapsed_time AS MaxElapsedTime
, ISNULL(stats.total_elapsed_time / stats.execution_count, 0) AS AvgElapsedTime
, stats.creation_time AS LogCreatedOn
, ISNULL(stats.execution_count / DATEDIFF(s, stats.creation_time, GETDATE()), 0) AS FrequencyPerSec
FROM sys.dm_exec_query_stats stats
CROSS APPLY sys.dm_exec_sql_text( stats.sql_handle ) theText
WHERE execution_count > 50
AND theText.[text] LIKE '%@p[_][_]linq[_][_]0%'
ORDER BY stats.max_elapsed_time DESC

If you run this query [and you have the space on your server for query plans to be cached], you will get a listing of the cached queries that were generated by LINQ.  The most common issue I have seen is LINQ sending over nvarchar parameters when the database column is varchar.  Make your developers aware, and they can adjust their code so that it sends the correct data types.  Keep an eye on the AvgElapsedTime values; obviously, the lower the better!  Feel free to lower the execution_count filter if you want more results.  As always, if possible, run on development servers so the code can be fixed there first.
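To make the data type mismatch concrete, here is a hypothetical example of the kind of batch LINQ sends over (the table and parameter names are made up; the parameter naming follows the @p__linq__0 pattern the query above searches for):

EXEC sp_executesql N'SELECT * FROM dbo.Customers WHERE LastName = @p__linq__0',
N'@p__linq__0 nvarchar(4000)', @p__linq__0 = N'Smith';

If dbo.Customers.LastName is varchar, data type precedence forces SQL Server to implicitly convert the column to nvarchar, which can turn an index seek into a scan.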

Questions? Comments?  Post below!

Script fun!

I wanted to post some of my newer scripts in hopes that they give people a new way to think about doing less manual work.

Most of these came about due to server migrations and not wanting to repeat tasks for each database on servers with a large number of databases.

;WITH BUS AS (
SELECT *, ROW_NUMBER() OVER (PARTITION BY database_name ORDER BY backup_start_date DESC) AS rn
FROM msdb.dbo.backupset
WHERE type = 'D' -- D = Full; I = Diff; L = Log
AND database_name NOT IN ('model', 'msdb', 'master')
)
SELECT database_name, bus.media_set_id, 'RESTORE DATABASE [' + database_name + '] FROM DISK = N''' + physical_device_name + ''' WITH FILE = 1, NORECOVERY, NOUNLOAD, REPLACE, STATS = 1'
FROM bus
INNER JOIN msdb.dbo.backupmediafamily bmf ON bus.media_set_id = bmf.media_set_id
WHERE rn = 1

*note*:  Change the D to an I and run again to get the diff listing.

Once the restores are done, this generates the statements to recover the databases and bring them online:

SELECT 'RESTORE DATABASE [' + name + '] WITH RECOVERY', *
FROM sys.databases
WHERE name NOT IN ('master', 'tempdb', 'model', 'msdb')
AND state_desc = 'RESTORING'

SELECT 'ALTER DATABASE [' + name + '] SET COMPATIBILITY_LEVEL = 110', *
FROM sys.databases
WHERE name NOT IN ('master', 'tempdb', 'model', 'msdb')
AND [compatibility_level] < 110

Anyway — I hope this helps you think about new ways to do things more efficiently!

Failed Upgrade SQL Server 2008R2 to SQL Server 2012

So I recently had a fun experience attempting an in place upgrade from SQL Server 2008R2 to SQL Server 2012.


The server was virtualized, so I took a copy and disabled the NIC so that I could do a test in-place upgrade without affecting anything.  The installer can't find updates, which is fine, and it gives no warnings about anything that will cause problems.

It goes through the whole process and everything is successful *except* for database services.  It complains because I have the default backup location set to a UNC path, but since the NIC is disabled, it can't reach the UNC.

I try to repair; it doesn't do anything.  I try the upgrade process again, but it won't let me proceed because everything has already been done.  I try to uninstall from Add/Remove Programs, but that doesn't work either.  The original error message said to run setup.exe /ACTION=uninstall [bla bla bla], and that didn't work either.

That’s a lot of failed attempts… you want the actual solution don’t you!

Search your server for “Datastore_Discovery.xml” — find the most recent one [if you tried the upgrade process multiple times there will be a bunch of files].  Open the file and search for .Inactive.  Copy down each unique GUID corresponding to .Inactive nodes — you can ignore the child nodes as those will have the same GUID.

Now take that list and open up the command prompt and type:

msiexec /x {insert guid here}

Do that for each of the unique GUIDs that were in your list.

Once you are done run your setup.exe again but instead of going through the upgrade process right away, click on Tools -> Installed SQL Server Discovery Report.  If you see anything referencing Inactive, make sure you go back to the newest XML file and check for .Inactive again and repeat the msiexec /x {guid here} command.

If everything looks ok, proceed with the upgrade process again.  In my case, I made sure to change the default backup path on the databases in question first.

Hopefully this saves you some time!

Drop failed for login | Endpoints

Have you ever tried to drop a login from SQL Server in an effort to clean up your security and gotten the error: "Server principal '' has granted one or more permission(s)."?

Was it related to SQL Server 2012 and your Availability Groups?

I ran into this recently and if it helps someone, then good!

First, make sure you know what the permission was for:

SELECT * FROM sys.server_permissions
WHERE grantor_principal_id =
(SELECT principal_id FROM sys.server_principals WHERE name = N'userNameHere')

Sample result:

class: 105
class_desc: ENDPOINT
major_id: 65536
minor_id: 0
grantee_principal_id: 283
grantor_principal_id: 282
type: CO
permission_name: CONNECT
state: G
state_desc: GRANT

You should see an entry [or more than one] and the grantor would have the id of the user you were trying to remove — in this case it is for an endpoint.  Now find out what that actual permission is for:

SELECT * FROM sys.server_principals WHERE principal_id = 283 -- replace 283 with the ID from your results

You can see that, when the Availability Group was created, the user who created it was recorded as the grantor of the CONNECT permission on the HADR endpoint to the cluster account.

If you use the UI to change the permission, it won't work.  You have to actually issue the T-SQL:

ALTER AUTHORIZATION ON ENDPOINT::[Hadr_endpoint] TO [OtherUser]

Now that this is done, you should be able to disable the existing user [or deny access] until you are certain it isn’t being used — and then feel free to delete it!
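Disabling the login first is a one-liner (the login name here is a placeholder):

ALTER LOGIN [userNameHere] DISABLE;

If nothing breaks after a reasonable burn-in period, you can then issue DROP LOGIN with confidence.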

Questions? Comments?  Post!

Deadlocks — Are you watching them?

One of the strangest things, I think, is that SQL Server does not alert you to deadlocks by default.  I recently deployed some deadlock monitoring so that I could be more aware of what exactly is happening on my servers.  Those of you with third-party monitoring tools already have similar functionality; this is mostly for those who are not spending a lot of money on monitoring software.  The foundation for this is the WMI deadlock graph alert example from the SQL Server documentation.

Things to watch for:

Third-party applications seem to throw deadlocks all of the time.  For some of them, if you google, the advice is simply to turn on snapshot isolation so that you get row versioning.  Please make sure your tempdb situation is under control before taking this advice.
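For reference, that snapshot isolation advice translates into something like this (the database name is a placeholder; READ_COMMITTED_SNAPSHOT needs exclusive access to the database, hence the ROLLBACK clause):

ALTER DATABASE [YourDatabaseHere] SET ALLOW_SNAPSHOT_ISOLATION ON;
ALTER DATABASE [YourDatabaseHere] SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

Remember that the version store lives in tempdb, which is exactly why tempdb needs to be healthy before you flip these switches.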

Sharepoint still sucks and the crawls seem to throw a lot of deadlocks.

And now to the fun part, the scripting!

I put this table in master, feel free to put it somewhere else.  I do most of my administration through my central management server so it makes deployment very simple.  As always, never deploy anything without knowing exactly what it is doing and always always always test on a development server first.

USE [master]
GO

/****** Object: Table [dbo].[DeadlockEvents] Script Date: 10/30/2013 2:06:06 PM ******/

CREATE TABLE [dbo].[DeadlockEvents](
[AlertTime] [datetime] NULL,
[DeadlockGraph] [xml] NULL
)
GO


Next up, create a SQL Agent job that will be called by the alert you are going to create in a little bit.

USE [msdb]
GO

/****** Object: Job [[Monitor] Capture Deadlock Graph] Script Date: 10/30/2013 2:06:33 PM ******/
BEGIN TRANSACTION
DECLARE @ReturnCode INT
SELECT @ReturnCode = 0
/****** Object: JobCategory [[Uncategorized (Local)]] Script Date: 10/30/2013 2:06:34 PM ******/
IF NOT EXISTS (SELECT name FROM msdb.dbo.syscategories WHERE name=N'[Uncategorized (Local)]' AND category_class=1)
BEGIN
EXEC @ReturnCode = msdb.dbo.sp_add_category @class=N'JOB', @type=N'LOCAL', @name=N'[Uncategorized (Local)]'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
END

DECLARE @jobId BINARY(16)
EXEC @ReturnCode = msdb.dbo.sp_add_job @job_name=N'[Monitor] Capture Deadlock Graph',
@description=N'Job for responding to DEADLOCK_GRAPH events',
@category_name=N'[Uncategorized (Local)]',
@notify_email_operator_name=N'SQL Server Notifications', @job_id = @jobId OUTPUT
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
/****** Object: Step [Insert graph into DeadlockEvents] Script Date: 10/30/2013 2:06:34 PM ******/
EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'Insert graph into DeadlockEvents',
@os_run_priority=0, @subsystem=N'TSQL',
@command=N'INSERT INTO DeadlockEvents
(AlertTime, DeadlockGraph)
VALUES (getdate(), N''$(ESCAPE_SQUOTE(WMI(TextData)))'')

EXEC msdb.dbo.sp_send_dbmail
@body = N''$(ESCAPE_SQUOTE(WMI(TextData)))'',
@subject = ''DeadLock Occurred On Server'',
@importance = ''High'' ;',
@database_name=N'master'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
COMMIT TRANSACTION
GOTO EndSave
QuitWithRollback:
IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
EndSave:
GO


And now you will create the alert:
USE [msdb]
GO

/****** Object: Alert [DeadlockWMI] Script Date: 10/30/2013 2:07:17 PM ******/
EXEC msdb.dbo.sp_add_alert @name=N'DeadlockWMI',
@wmi_namespace=N'\\.\root\Microsoft\SqlServer\ServerEvents\MSSQLSERVER',
@wmi_query=N'SELECT * FROM DEADLOCK_GRAPH',
@job_name=N'[Monitor] Capture Deadlock Graph'
GO

Note:  This uses WMI.

Finally, you will want to turn on a little check box in SQL Agent ("Replace tokens for all job responses to alerts"), but since we like to make as many changes as possible through queries [CMS]:

USE [msdb]
EXEC msdb.dbo.sp_set_sqlagent_properties @alert_replace_runtime_tokens=1

The final step is that you need to restart SQL Server Agent for these changes to take effect.  Make sure you do not skip this step.

Extra fun — what about blocking?

Here is the alert:

USE [msdb]

/****** Object: Alert [BlockingWMI] Script Date: 11/13/2013 7:28:29 AM ******/
EXEC msdb.dbo.sp_add_alert @name=N'BlockingWMI',
@wmi_namespace=N'\\.\root\Microsoft\SqlServer\ServerEvents\MSSQLSERVER',
@wmi_query=N'SELECT * FROM BLOCKED_PROCESS_REPORT',
@job_name=N'Put Your Job Here'

A small addition for blocked processes: you will want to use sp_configure to set what counts as a blocked process.

'blocked process threshold (s)' is the configuration item, so if you want the report to trigger after twenty seconds of blocking, run (it is an advanced option, and RECONFIGURE makes it take effect):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'blocked process threshold (s)', 20;
RECONFIGURE;

Questions? Comments?  Leave a message!

SQL Enterprise Policy Management

I implemented Enterprise Policy Management a long time ago but ran into some issues that other people may be running into as well.

The EPM Framework can be found on CodePlex.

The problem I had occurred after it had been running for a few weeks and was due to the process timing out.

The actual error is:

The corresponding line is 'Invoke-Sqlcmd -ServerInstance $CentralManagementServer -Database $HistoryDatabase -Query "exec policy.epm_LoadPolicyHistoryDetail" -ErrorAction Stop'. Correct the script and reschedule the job. The error information returned by PowerShell is: 'Timeout expired.  The timeout period elapsed prior to completion of the operation or the server is not responding.  The statement has been terminated.'

There are two ways to fix this, depending on what you want to do.  The easiest is to open up the PowerShell script, scroll all the way to the bottom, and find this line:

Invoke-Sqlcmd -ServerInstance $CentralManagementServer -Database $HistoryDatabase -Query "exec policy.epm_LoadPolicyHistoryDetail" -ErrorAction Stop

and replace it with:

Invoke-Sqlcmd -ServerInstance $CentralManagementServer -Database $HistoryDatabase -Query "exec policy.epm_LoadPolicyHistoryDetail" -ErrorAction Stop -QueryTimeout 0

If that doesn’t work you may need to put a real value in for querytimeout.  I believe this may be the case for servers before SQL 2012.

The other method is to remove that line from the PowerShell script [or just comment it out with a #] and add a new step to your SQL Agent job that targets your warehouse database and runs:  exec policy.epm_LoadPolicyHistoryDetail.

Voila, you are back in business!

Questions?  Comments?  Leave a note!