Archive | Performance

Fill Factor

The never-ending debate: what should your fill factors be set to? There is no single answer, but there are some things you can do to squeeze out at least a little more performance, depending on how the environment was set up for you.

I came into a position where the reporting setup had a *ton* of indexes already set to a fill factor of 90. The person who decided to go the 90 route was no longer employed, and the reporting team just continued the tradition because they didn't know any better.

A quick query to run on the databases in question — this restricts it to clustered indexes whose fill factor is not full [100, or the default of 0]:

SELECT
t.name AS TableName
, i.name AS IndexName
, i.fill_factor
, p.rows
FROM sys.indexes AS i
INNER JOIN sys.tables AS t ON t.object_id = i.object_id
INNER JOIN sys.partitions AS p ON t.object_id = p.object_id
AND p.index_id = 1
WHERE i.fill_factor BETWEEN 1 AND 99
AND i.type_desc = 'CLUSTERED'
ORDER BY
t.name,
i.name

In my case, I found a couple of 20 million row tables [on development] to experiment with — I just wanted to confirm that changing it would improve performance. First, you have to find out how the procedures hitting that table perform before you make a change. Using this query, I found a couple of stored procedures and tested their performance:

SELECT OBJECT_NAME(id)
FROM syscomments
WHERE [text] LIKE '%InsertNameOfTableHere%'
AND OBJECTPROPERTY(id, 'IsProcedure') = 1
GROUP BY OBJECT_NAME(id)

Before running them, I like to turn stats on so I can get some real values:

SET STATISTICS IO ON
GO

SET STATISTICS TIME ON
GO

exec ProcedureNameHere

SET STATISTICS IO OFF
GO
SET STATISTICS TIME OFF
GO

Make a note of the CPU/elapsed times as well as the reads for the table in question. In my case:
SQL Server Execution Times:
CPU time = 2871 ms, elapsed time = 61580 ms.

SQL Server Execution Times:
CPU time = 3385 ms, elapsed time = 62395 ms.

Table 'tblNameHere'. Scan count 1237, logical reads 144961, physical reads 538, read-ahead reads 63937, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

 

I altered the fill factor:
ALTER INDEX [PK_tblNameHere_Schedule] ON [dbo].[tblNameHere]
REBUILD PARTITION = ALL
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
ONLINE = ON, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 100)

After about thirty seconds [it depends upon the size of the table], it was finished.
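Once the rebuild finishes, you can confirm the index picked up the new setting by re-running the sys.indexes check against the table (the table name below is the same placeholder as above):

```sql
-- Confirm the new fill factor took effect.
-- A fill_factor of 0 and 100 both mean "pack pages full".
SELECT name, fill_factor
FROM sys.indexes
WHERE object_id = OBJECT_ID('dbo.tblNameHere');
```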

I then ran the stored procedure again and got:
SQL Server Execution Times:
CPU time = 2855 ms, elapsed time = 45136 ms.

SQL Server Execution Times:
CPU time = 3323 ms, elapsed time = 45718 ms.

Table 'tblNameHere'. Scan count 1237, logical reads 126408, physical reads 459, read-ahead reads 63760, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

 

That is a pretty significant difference for a very minor change. We are not done, though — you want to do some due diligence with your nightly reorg process. Hopefully you are using Ola Hallengren's maintenance solution. Make sure you turn on the LogToTable='Y' parameter; this way, when reorgs happen, you have a log of them. You can turn this off after a while, but it is pretty nice to have so that you can prove out your work.

After you have changed some fill factors, wait for the nightly reorg to happen a couple of times. Then run this query [change the date as necessary]:
SELECT DatabaseName, ObjectName, IndexName, COUNT(1)
FROM dbo.CommandLog
WHERE CommandType <> 'UPDATE_STATISTICS'
AND StartTime > '2014-05-09 20:00:01.240'
GROUP BY DatabaseName, ObjectName, IndexName
HAVING COUNT(1) > 1
ORDER BY COUNT(1) DESC

This will give you a listing of all of the indexes that were rebuilt more than once — if the ones you changed keep showing up, lower the fill factor by ~2 or so until it no longer has to be reorganized nightly.
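For example, stepping the index from this post back down a notch would look like this (index and table names are the same placeholders as above):

```sql
-- Hypothetical example: back the fill factor off from 100 to 98
-- for an index that keeps showing up in the nightly reorg log.
ALTER INDEX [PK_tblNameHere_Schedule] ON [dbo].[tblNameHere]
REBUILD WITH (FILLFACTOR = 98, ONLINE = ON);
```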

You should also try to keep an eye on page splits, although they can be difficult to track down. Jonathan Kehayias has a great post on using Extended Events to track them.
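As a rough sketch of the idea (SQL Server 2012 and up), an Extended Events session can bucket page splits by database. Note that the page_split event does not distinguish "good" end-of-page splits from the "bad" mid-page ones — which is exactly what Kehayias's post digs into:

```sql
-- Sketch: count page splits per database with a histogram target.
CREATE EVENT SESSION [TrackPageSplits] ON SERVER
ADD EVENT sqlserver.page_split
ADD TARGET package0.histogram
    (SET filtering_event_name = N'sqlserver.page_split',
         source = N'database_id',
         source_type = 0); -- 0 = bucket on an event column

ALTER EVENT SESSION [TrackPageSplits] ON SERVER STATE = START;
```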

Questions? Comments?  Post!


LINQ

One thing that has been tough for me as a DBA is the change in development styles from stored procedures to developers using LINQ instead.

In case you are not familiar with LINQ: essentially, the programming language of choice creates the queries for you on the fly, based on your object model — which was itself generated from the database schema.
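For illustration, a LINQ query typically reaches the server as a parameterized sp_executesql call that looks something like this (the table and values here are hypothetical — note the generated @p__linq__0 parameter name, which the query below keys off):

```sql
exec sp_executesql N'SELECT [t0].[CustomerID], [t0].[CompanyName]
FROM [dbo].[Customers] AS [t0]
WHERE [t0].[City] = @p__linq__0',
N'@p__linq__0 nvarchar(4000)', @p__linq__0 = N'London'
```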

There are some things that are good to look at as far as optimization.  This is where we can assist the developers in getting the best performance out of something we have very little control over!

SELECT DISTINCT
theText.[text] AS QueryText
, stats.execution_count AS ExecutionCount
, stats.max_elapsed_time AS MaxElapsedTime
, ISNULL(stats.total_elapsed_time / stats.execution_count, 0) AS AvgElapsedTime
, stats.creation_time AS LogCreatedOn
, ISNULL(stats.execution_count / DATEDIFF(s, stats.creation_time, GETDATE()), 0) AS FrequencyPerSec
FROM sys.dm_exec_query_stats stats
CROSS APPLY sys.dm_exec_sql_text( stats.sql_handle ) theText
WHERE execution_count > 50
AND theText.[text] LIKE '%@p[_][_]linq[_][_]0%'
ORDER BY stats.max_elapsed_time DESC

If you run this query [and your server has the memory for query plans to stay cached], you will get a listing of the cached queries that were issued by LINQ. The most common issue I have seen is LINQ sending over nvarchar parameters when the database column is varchar. Make your developers aware and they can adjust their code so that it sends the correct data type. Keep an eye on the AvgElapsedTime values; obviously, the lower the better! Feel free to lower the execution_count value if you want more results. As always, if possible, run this on development servers so the code can be fixed there first.
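A minimal sketch of the nvarchar/varchar problem, with hypothetical table and column names — comparing a varchar column to an nvarchar parameter can force an implicit conversion of the column, which can turn an index seek into a scan:

```sql
-- Problematic: LINQ sends .NET strings as nvarchar by default, so the
-- varchar column may be implicitly converted (CONVERT_IMPLICIT in the plan).
DECLARE @p nvarchar(4000) = N'ABC123';
SELECT * FROM dbo.tblOrders WHERE OrderCode = @p;  -- OrderCode is varchar

-- Better: the parameter type matches the column, so the index can seek.
DECLARE @p2 varchar(20) = 'ABC123';
SELECT * FROM dbo.tblOrders WHERE OrderCode = @p2;
```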

Questions? Comments?  Post below!

Deadlocks — Are you watching them?

One of the strangest things, I think, is that SQL Server does not alert you to deadlocks by default. I recently deployed some deadlock monitoring so that I could be more aware of what exactly is happening on my servers. Those of you with third-party monitoring tools already have similar functionality — this is mostly for those who are not spending a lot of money on monitoring software. The foundation for this is available at http://msdn.microsoft.com/en-us/library/ms186385.aspx.

Things to watch for:

Third-party applications seem to throw deadlocks all of the time. For some of them, if you Google, the standard suggestion is simply to turn on snapshot isolation so that you get row versioning. Please make sure your tempdb situation is under control before taking this advice.
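If you do go the versioning route, it is a single database setting — the database name below is a placeholder, and remember that the version store lives in tempdb:

```sql
-- Allow SNAPSHOT isolation for sessions that ask for it:
ALTER DATABASE [YourDatabaseHere] SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Or make read committed use row versioning across the board
-- (kicks existing sessions out of the database first):
ALTER DATABASE [YourDatabaseHere]
SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;
```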

SharePoint still sucks, and its crawls seem to throw a lot of deadlocks.

And now to the fun part, the scripting!

I put this table in master; feel free to put it somewhere else. I do most of my administration through my central management server, so it makes deployment very simple. As always, never deploy anything without knowing exactly what it is doing, and always always always test on a development server first.

USE [master]
GO

/****** Object: Table [dbo].[DeadlockEvents] Script Date: 10/30/2013 2:06:06 PM ******/
SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

CREATE TABLE [dbo].[DeadlockEvents](
[AlertTime] [datetime] NULL,
[DeadlockGraph] [xml] NULL
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]

GO

 

Next up, create a SQL Agent job that will be called by the alert you are going to create in a little bit.
USE [msdb]
GO

/****** Object: Job [[Monitor] Capture Deadlock Graph] Script Date: 10/30/2013 2:06:33 PM ******/
BEGIN TRANSACTION
DECLARE @ReturnCode INT
SELECT @ReturnCode = 0
/****** Object: JobCategory [[Uncategorized (Local)]] Script Date: 10/30/2013 2:06:34 PM ******/
IF NOT EXISTS (SELECT name FROM msdb.dbo.syscategories WHERE name=N'[Uncategorized (Local)]' AND category_class=1)
BEGIN
EXEC @ReturnCode = msdb.dbo.sp_add_category @class=N'JOB', @type=N'LOCAL', @name=N'[Uncategorized (Local)]'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

END

DECLARE @jobId BINARY(16)
EXEC @ReturnCode = msdb.dbo.sp_add_job @job_name=N'[Monitor] Capture Deadlock Graph',
@enabled=1,
@notify_level_eventlog=2,
@notify_level_email=2,
@notify_level_netsend=0,
@notify_level_page=0,
@delete_level=0,
@description=N'Job for responding to DEADLOCK_GRAPH events',
@category_name=N'[Uncategorized (Local)]',
@owner_login_name=N'sa',
@notify_email_operator_name=N'SQL Server Notifications', @job_id = @jobId OUTPUT
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
/****** Object: Step [Insert graph into DeadlockEvents] Script Date: 10/30/2013 2:06:34 PM ******/
EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'Insert graph into DeadlockEvents',
@step_id=1,
@cmdexec_success_code=0,
@on_success_action=1,
@on_success_step_id=0,
@on_fail_action=2,
@on_fail_step_id=0,
@retry_attempts=0,
@retry_interval=0,
@os_run_priority=0, @subsystem=N'TSQL',
@command=N'INSERT INTO DeadlockEvents
(AlertTime, DeadlockGraph)
VALUES (GETDATE(), N''$(ESCAPE_SQUOTE(WMI(TextData)))'')

EXEC msdb.dbo.sp_send_dbmail
@recipients=''sqlservernotifications@yourcompanyhere.com'',
@body = N''$(ESCAPE_SQUOTE(WMI(TextData)))'',
@body_format=''HTML'',
@subject = ''Deadlock Occurred On Server'',
@importance = ''High'' ;',
@database_name=N'master',
@flags=0
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
COMMIT TRANSACTION
GOTO EndSave
QuitWithRollback:
IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
EndSave:

GO

And now you will create the alert:
USE [msdb]
GO

/****** Object: Alert [DeadlockWMI] Script Date: 10/30/2013 2:07:17 PM ******/
EXEC msdb.dbo.sp_add_alert @name=N'DeadlockWMI',
@message_id=0,
@severity=0,
@enabled=1,
@delay_between_responses=0,
@include_event_description_in=1,
@category_name=N'[Uncategorized]',
@wmi_namespace=N'\\.\root\Microsoft\SqlServer\ServerEvents\MSSQLSERVER',
@wmi_query=N'SELECT * FROM DEADLOCK_GRAPH',
@job_name=N'[Monitor] Capture Deadlock Graph'
GO

Note: this uses WMI — the alert subscribes to the DEADLOCK_GRAPH event through the WMI Provider for Server Events, so adjust the MSSQLSERVER portion of the namespace if you are running a named instance.

Finally — you will want to turn on a little check box in SQL Agent ["Replace tokens for all job responses to alerts" under Agent Properties > Alert System], but since we like to make as many changes as possible through queries [CMS]:

USE [msdb]
GO
EXEC msdb.dbo.sp_set_sqlagent_properties @alert_replace_runtime_tokens=1
GO

The final step: you need to restart SQL Server Agent in order for these changes to take effect. Make sure you do not skip this step.

Extra fun — what about blocking?

Here is the alert:

USE [msdb]
GO

/****** Object: Alert [BlockingWMI] Script Date: 11/13/2013 7:28:29 AM ******/
EXEC msdb.dbo.sp_add_alert @name=N'BlockingWMI',
@message_id=0,
@severity=0,
@enabled=1,
@delay_between_responses=300,
@include_event_description_in=1,
@category_name=N'[Uncategorized]',
@wmi_namespace=N'\\.\root\Microsoft\SqlServer\ServerEvents\MSSQLSERVER',
@wmi_query=N'SELECT * FROM BLOCKED_PROCESS_REPORT',
@job_name=N'Put Your Job Here'
GO

A small addition for blocked processes — you will want to use sp_configure to determine what counts as a blocked process:

blocked process threshold (s) is the configuration item; if you want the report to fire after twenty seconds of blocking, do:

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'blocked process threshold (s)', 20
RECONFIGURE
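You can confirm the running value afterwards:

```sql
-- Confirm config_value and run_value (advanced option: requires
-- 'show advanced options' to be enabled to view it by name).
EXEC sp_configure 'blocked process threshold (s)';
```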

Questions? Comments?  Leave a message!

Helpful Scripts pt2!

Yesterday I presented some scripts from Glenn Berry which can help you figure out most things you would want to know about your databases.

Today I will introduce you to another famous script, this time from Brent Ozar.  The script I am talking about is his Blitz Script.

The nice thing about his script is the ability to quickly spot hazards in how your server is set up. This could be high VLF counts, databases on the C drive, high wait times, the existence of heaps, and numerous other useful items.

Obviously install it in a development environment first so you can be aware of what the scripts do and get permission before you install it on your production servers.

One of the best things about the Blitz Script is that you can copy the results seamlessly into Excel to consolidate all of your servers and give you and your team a task list of things that need to be looked at to help make your systems faster and more stable.

The nice thing about the SQL Server Community is that most people put their work out there for everyone to benefit.

Arm yourself with the knowledge of the community and your servers will benefit greatly!

As always, please post any questions or comments!

My SQL Server is being crushed under CPU pressure!

So I recently had someone reach out to me with issues on their SQL Server. No matter what they did, whenever they rebooted, SQL Server's CPU usage would pummel the server.

After gaining access to the system, I confirmed that the sqlservr process was indeed up around 85% CPU.

I run one of my favorite [stolen] queries:

WITH Waits AS
(SELECT wait_type, wait_time_ms / 1000. AS wait_time_s,
100. * wait_time_ms / SUM(wait_time_ms) OVER() AS pct,
ROW_NUMBER() OVER(ORDER BY wait_time_ms DESC) AS rn
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('CLR_SEMAPHORE','LAZYWRITER_SLEEP','RESOURCE_QUEUE', 'SLEEP_TASK'
,'SLEEP_SYSTEMTASK','SQLTRACE_BUFFER_FLUSH', 'WAITFOR', 'LOGMGR_QUEUE','CHECKPOINT_QUEUE'
,'REQUEST_FOR_DEADLOCK_SEARCH','XE_TIMER_EVENT', 'BROKER_TO_FLUSH','BROKER_TASK_STOP', 'CLR_MANUAL_EVENT'
,'CLR_AUTO_EVENT', 'DISPATCHER_QUEUE_SEMAPHORE', 'FT_IFTS_SCHEDULER_IDLE_WAIT'
,'XE_DISPATCHER_WAIT', 'XE_DISPATCHER_JOIN'))
SELECT W1.wait_type,
CAST(W1.wait_time_s AS DECIMAL(12, 2)) AS wait_time_s,
CAST(W1.pct AS DECIMAL(12, 2)) AS pct,
CAST(SUM(W2.pct) AS DECIMAL(12, 2)) AS running_pct
FROM Waits AS W1
INNER JOIN Waits AS W2
ON W2.rn <= W1.rn
GROUP BY W1.rn, W1.wait_type, W1.wait_time_s, W1.pct
HAVING SUM(W2.pct) - W1.pct < 95;

Using this — I was presented with SOS_SCHEDULER_YIELD as the biggest offender by far. For those of you unfamiliar [tsk tsk!], this confirms the CPU pressure.

A common cause of CPU issues [SOS_SCHEDULER_YIELD waits] with SQL Server is actually very simple to solve: missing indexes.

I pulled up another fun script of mine [thank you Brent Ozar]:

SELECT sys.objects.name
, (avg_total_user_cost * avg_user_impact) * (user_seeks + user_scans) AS Impact
, 'CREATE NONCLUSTERED INDEX ix_IndexName ON ' + sys.objects.name COLLATE DATABASE_DEFAULT + ' ( ' + ISNULL(mid.equality_columns, '') + CASE WHEN mid.inequality_columns IS NULL
THEN ''
ELSE CASE WHEN mid.equality_columns IS NULL
THEN ''
ELSE ',' END + mid.inequality_columns END + ' ) ' + CASE WHEN mid.included_columns IS NULL
THEN ''
ELSE 'INCLUDE (' + mid.included_columns + ')' END + ';' AS CreateIndexStatement
, mid.equality_columns
, mid.inequality_columns
, mid.included_columns
FROM sys.dm_db_missing_index_group_stats AS migs
INNER JOIN sys.dm_db_missing_index_groups AS mig ON migs.group_handle = mig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details AS mid ON mig.index_handle = mid.index_handle AND mid.database_id = DB_ID()
INNER JOIN sys.objects WITH (NOLOCK) ON mid.OBJECT_ID = sys.objects.OBJECT_ID
WHERE (migs.group_handle IN
(
SELECT TOP (500) group_handle
FROM sys.dm_db_missing_index_group_stats WITH (NOLOCK)
ORDER BY (avg_total_user_cost * avg_user_impact) * (user_seeks + user_scans) DESC))
AND OBJECTPROPERTY(sys.objects.OBJECT_ID, 'isusertable') = 1
ORDER BY 2 DESC , 3 DESC

The results were what you would expect — extremely high impact for essentially all of the tables involved. After getting permission [always ask for databases you are not the administrator of!], I went to work creating a couple of indexes at a time and assessing the performance. After the first two indexes, CPU usage was down to 50%. After another handful of indexes, the machine stabilized around 20-30%.
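After creating a batch of the suggested indexes, it is worth a quick sanity check that they are actually being used (placeholder table name again):

```sql
-- Seeks/scans accumulate since the last restart; a new index with
-- zero reads and lots of updates may not be earning its keep.
SELECT i.name, s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.dm_db_index_usage_stats AS s
INNER JOIN sys.indexes AS i
    ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.database_id = DB_ID()
AND s.object_id = OBJECT_ID('dbo.tblNameHere');
```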

Problem solved.

I hope this helps someone out there.

Please let me know if you have any comments or concerns!