Archive | May 2013

Trace Your Server – Automating Profiler!

Need to find out who is accessing your server?  How about what is being accessed?  Need to work on compliance issues?  Just need to see what is being run?

Have no fear, SQL Profiler is here!

Open up SSMS, go to the Tools menu and then click on SQL Server Profiler.

There is an abundance of options, but I will walk you through my typical setup.

Go to File -> New Trace and connect to the instance you want to run your trace against.

Personally, I uncheck Audit Login, Audit Logout, and ExistingConnection.  I then [usually] click Show All Columns and make sure HostName is displayed, which shows where the database calls are coming from.  Say, for example, you want to see who is hitting a certain database from SSMS [get these people off of production!].  Go to your filters section [Column Filters], click on ApplicationName, and under Like enter %Management Studio% and press Enter.  This will now show you every command being run on this instance through Management Studio.  Want to reduce it a little more … perhaps to just a single database?

Click on your Column Filters again, navigate to DatabaseName, go to the Like section, and put in the name of the database you want to watch.  As always, do this on a development server so that you can test things out first.  Click on Run and watch the commands fly [unless the server isn't used very much … if it isn't, open up SSMS, connect to the server in question, and issue some commands to verify your trace is working!].

The next part is where it gets more entertaining… obviously you don’t want to just be running Profiler all the time.  Well guess what?  You can call all of this from a SQL Agent Job.

The first step is to script the trace that you just created — go to Profiler and click File -> Export -> Script Trace Definition -> For SQL Server 2005 – SQL Server …

Next, open the file in Notepad just so you can look at it.

You will see some variables declared, followed by a bunch of sp_trace_setevent calls.  Near the bottom you will see the filters that you set up.

One *KEY* thing to note is that Profiler seems to drop database restrictions.  Don't ask me why … but it does.  So before you run anything, make sure you add the following command below all of the other sp_trace_setfilter calls [here 35 is the DatabaseName column, 0 means AND, and 6 means LIKE]:

exec sp_trace_setfilter @TraceID, 35, 0, 6, N'DATABASE_NAME_HERE'

We are going to make a couple of other changes before we test out our work.  First, declare a couple of variables at the top of the code — I do:

declare @DateTime datetime

declare @FileName nvarchar(100)

Then I set these values:

set @DateTime = DATEADD(hh, 1, GETDATE())
set @FileName = N'F:\Trace\CollectionData_' + CONVERT(VARCHAR(8), GETDATE(), 112)

set @maxfilesize = 150

You want to do this before the exec @rc = sp_trace_create command, because you need to modify that line as well to look like:

exec @rc = sp_trace_create @TraceID output, 0, @FileName, @maxfilesize, @DateTime
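
Putting it all together, the edited script ends up with the following general shape.  This is a minimal sketch, not the full Profiler export; your real script will have many more sp_trace_setevent lines.  Here, event 12 is SQL:BatchCompleted, and columns 1, 8, 10, 11, and 35 are TextData, HostName, ApplicationName, LoginName, and DatabaseName:

declare @rc int
declare @TraceID int
declare @maxfilesize bigint
declare @on bit
declare @DateTime datetime
declare @FileName nvarchar(100)

set @on = 1
set @DateTime = DATEADD(hh, 1, GETDATE())
set @FileName = N'F:\Trace\CollectionData_' + CONVERT(VARCHAR(8), GETDATE(), 112)
set @maxfilesize = 150

-- create the trace; it stops itself at @DateTime
exec @rc = sp_trace_create @TraceID output, 0, @FileName, @maxfilesize, @DateTime

-- capture SQL:BatchCompleted with a few useful columns
exec sp_trace_setevent @TraceID, 12, 1, @on
exec sp_trace_setevent @TraceID, 12, 8, @on
exec sp_trace_setevent @TraceID, 12, 10, @on
exec sp_trace_setevent @TraceID, 12, 11, @on
exec sp_trace_setevent @TraceID, 12, 35, @on

-- filters: only Management Studio traffic, only one database
exec sp_trace_setfilter @TraceID, 10, 0, 6, N'%Management Studio%'
exec sp_trace_setfilter @TraceID, 35, 0, 6, N'DATABASE_NAME_HERE'

-- start the trace
exec sp_trace_setstatus @TraceID, 1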

Now copy everything over into SSMS, connect to your development server, and run it.  Come back in an hour [or whatever you set @DateTime to], copy the file to your desktop, open up Profiler, and open the file.  Voila, you have data!  You can now export it to a different database server if you want to run queries against the data you have collected.
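
If you would rather skip the copy step, you can also read the trace file straight into a table with T-SQL via sys.fn_trace_gettable [a sketch; swap in whatever path and file name your trace actually wrote]:

-- DEFAULT tells it to read any rollover files as well
SELECT * INTO dbo.TraceData
FROM sys.fn_trace_gettable(N'F:\Trace\CollectionData_20130514.trc', DEFAULT);

From there it is plain SQL: group by HostName, ApplicationName, or whatever else you collected.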

Everything look ok?  Ready to automate this?

Open up your SQL Agent and create a new job.  Add a job step, name it whatever you want, copy in your code, and then schedule it.
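
Once the job has run, you can confirm the trace is actually alive straight from a query window [and grab its id in case you ever need to stop one by hand]:

-- list every trace defined on the instance
SELECT id, status, path, max_size, start_time
FROM sys.traces;

-- to stop and then delete a trace by id [2 here is just an example]:
-- exec sp_trace_setstatus 2, 0
-- exec sp_trace_setstatus 2, 2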

Always start small, and know that you can add rollover files and change your maximum file sizes.  I like to keep my file sizes fairly small, but if you are using rollover, make sure you know pretty closely how much space you have on your server to save these files.
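
Enabling rollover is just a tweak to the sp_trace_create call.  As a sketch, option 2 is TRACE_FILE_ROLLOVER and the extra parameter caps how many rollover files are kept:

declare @filecount int
set @filecount = 5  -- keep at most five files of @maxfilesize MB each

exec @rc = sp_trace_create @TraceID output, 2, @FileName, @maxfilesize, @DateTime, @filecount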

Rejoice in the knowledge that you can now keep track of all sorts of fun things.  Please be sure not to go overboard, as there is definitely a performance hit to your server if you have a lot of tracing going on.  Depending on what you are doing, though, one or two small traces should not have much of a noticeable impact.

Questions?  Comments?  Let me know!  I am having an images hiccup, so you will have to do without the supporting images for now.  My apologies; I will hopefully be updating this post soon!

Tracking Production Changes

One very important item that a lot of DBAs miss out on is logging changes to their production environments.  You need to be aware of who is changing objects in case things go wrong.  Are they trying to bypass change control?  Did they get proper sign-off?  Are you trying to maintain compliance with Sarbanes-Oxley?

Well fear no longer, with a few quick scripts you can be ready to go!

First we need to create our repository table — it is always best to have a ChangeLog table so you have an easy-to-query history of changes.

USE [msdb]
GO

/****** Object: Table [dbo].[DDL_ChangeLog] Script Date: 5/14/2013 7:27:42 AM ******/
SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

SET ANSI_PADDING ON
GO

CREATE TABLE [dbo].[DDL_ChangeLog](
	[DatabaseName] [varchar](256) NULL,
	[EventType] [varchar](50) NULL,
	[ObjectName] [varchar](256) NULL,
	[ObjectType] [varchar](25) NULL,
	[SqlCommand] [varchar](max) NULL,
	[EventDate] [datetime] NOT NULL,
	[LoginName] [varchar](256) NULL
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]

GO

SET ANSI_PADDING OFF
GO

ALTER TABLE [dbo].[DDL_ChangeLog] ADD CONSTRAINT [DF_EventsLog_EventDate] DEFAULT (getdate()) FOR [EventDate]
GO
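
With the table in place, pulling your audit history is a quick query away [for example, everything from the last day]:

SELECT EventDate, LoginName, DatabaseName, EventType, ObjectName, SqlCommand
FROM msdb.dbo.DDL_ChangeLog
WHERE EventDate >= DATEADD(dd, -1, GETDATE())
ORDER BY EventDate DESC;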

 

Next up — do you want an email every time a change is made?  If so, put a quick trigger on the ChangeLog table:

CREATE TRIGGER [dbo].[DDL_ReportBack]
ON [msdb].[dbo].[DDL_ChangeLog]
WITH EXECUTE AS 'NT AUTHORITY\SYSTEM'
AFTER INSERT
AS
BEGIN
Declare @DatabaseName varchar(256)
Declare @EventType varchar(50)
Declare @SqlCommand varchar(max)
Declare @EventDate varchar(50)
Declare @LoginName varchar(256)
Declare @Result varchar(max)
Declare @header varchar(200)
Declare @server varchar(120)

-- cast to varchar(120) so longer server names are not truncated
set @server = cast(SERVERPROPERTY('servername') as varchar(120))
Set @header = 'DDL Change on ' + @server

-- grab the most recent entry [the one this insert just wrote]
SELECT @EventType = EventType, @DatabaseName = DatabaseName, @LoginName = LoginName,
@SqlCommand = SqlCommand, @EventDate = EventDate
FROM [msdb].[dbo].[DDL_ChangeLog]
where EventDate = (select MAX(EventDate) FROM [msdb].[dbo].[DDL_ChangeLog])

-- one line per field, separated by carriage returns
Set @Result = @EventType + CHAR(13) + CHAR(10)
+ @LoginName + CHAR(13) + CHAR(10)
+ @EventDate + CHAR(13) + CHAR(10)
+ @SqlCommand + CHAR(13) + CHAR(10)

EXEC msdb.dbo.sp_send_dbmail
@recipients = 'you@email.com',
@body = @Result,
@subject = @header,
@body_format = 'TEXT'
END

GO

 

And finally, the trigger that makes it all work:

CREATE TRIGGER [DDL_Security_Trigger]
ON ALL SERVER
WITH EXECUTE AS 'NT AUTHORITY\SYSTEM'
FOR DDL_DATABASE_LEVEL_EVENTS, DDL_SERVER_LEVEL_EVENTS
AS

set nocount on
declare @data xml
set @data = EVENTDATA()
-- skip the routine noise: statistics updates, index maintenance, and a few known tool commands
If ((@data.value('(/EVENT_INSTANCE/ObjectType)[1]', 'varchar(25)')) NOT LIKE 'STATISTICS') AND
(@data.value('(/EVENT_INSTANCE/TSQLCommand)[1]', 'varchar(max)') NOT LIKE '%REORGANIZE%') AND
(@data.value('(/EVENT_INSTANCE/TSQLCommand)[1]', 'varchar(max)') NOT LIKE '%REBUILD%') AND
(@data.value('(/EVENT_INSTANCE/TSQLCommand)[1]', 'varchar(max)') NOT LIKE '%dmfragmentationworktable%') AND
(@data.value('(/EVENT_INSTANCE/TSQLCommand)[1]', 'varchar(max)') NOT LIKE '%index IX_SQLdoctor%') AND
(@data.value('(/EVENT_INSTANCE/TSQLCommand)[1]', 'varchar(max)') NOT LIKE '%UserOrGroup(Name = @UserNamePattern).SetSID%')
Begin
insert into [msdb].[dbo].[DDL_ChangeLog](DatabaseName, EventType,
ObjectName, ObjectType, SqlCommand, LoginName)
values(
@data.value('(/EVENT_INSTANCE/DatabaseName)[1]', 'varchar(256)'),
@data.value('(/EVENT_INSTANCE/EventType)[1]', 'varchar(50)'),
@data.value('(/EVENT_INSTANCE/ObjectName)[1]', 'varchar(256)'),
@data.value('(/EVENT_INSTANCE/ObjectType)[1]', 'varchar(25)'),
@data.value('(/EVENT_INSTANCE/TSQLCommand)[1]', 'varchar(max)'),
@data.value('(/EVENT_INSTANCE/LoginName)[1]', 'varchar(256)')
)
END

GO

Make sure you enable the trigger:

ENABLE TRIGGER [DDL_Security_Trigger] ON ALL SERVER
GO
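
And if the trigger ever gets in the way [say, during a big deployment], you can switch it off just as easily:

DISABLE TRIGGER [DDL_Security_Trigger] ON ALL SERVER
GO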

Make some changes and see how the magic works [obviously do this in a development environment first!].

It is always good to have extensive logging of the changes that are happening on your servers.

Questions? Comments?  Let me know!

Helpful Scripts pt2!

Yesterday I presented some scripts from Glenn Berry which can help you figure out most of what you need to know about your databases.

Today I will introduce you to another famous script, this time from Brent Ozar.  The script I am talking about is his Blitz Script.

The nice thing about his script is the ability to quickly spot hazards in how your server is set up.  These could be high VLF counts, databases on the C drive, high wait times, the existence of heaps, and numerous other useful items.

Obviously, install it in a development environment first so you can be aware of what the script does, and get permission before you install it on your production servers.
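
Once it is installed, running it is a single call.  This is a minimal sketch; the optional parameters vary by version, so check the documentation that comes with the script:

-- returns one prioritized result set of findings for the whole instance
EXEC dbo.sp_Blitz;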

One of the best things about the Blitz script is that you can copy the results seamlessly into Excel to consolidate all of your servers, giving you and your team a task list of things that need to be looked at to help make your systems faster and more stable.

The nice thing about the SQL Server Community is that most people put their work out there for everyone to benefit.

Arm yourself with the knowledge of the community and your servers will benefit greatly!

As always, please post any questions or comments!

Helpful Scripts to Add to Your Templates!

So after yesterday's post about the benefits of using the Template Explorer, I wanted to follow it up with some scripts.

The first ones are actually from a very well-respected Microsoft MVP, Glenn Berry.

These are great to run on servers that other people hand you, so that you can become familiar with how they are configured right away without having to spend all day typing it up.  They are also nice to run if you are migrating your databases from one server to another: you can confirm all of the appropriate settings are mapped accordingly so that you don't run into any unforeseen issues on the day of migration.

Out of respect for Glenn, I am going to post the links to his posts about each set of Diagnostic Queries rather than just posting the scripts:

SQL Server 2005 – http://sqlserverperformance.wordpress.com/2013/05/07/sql-server-2005-diagnostic-information-queries-may-2013/

SQL Server 2008 – http://sqlserverperformance.wordpress.com/2013/05/07/sql-server-2008-diagnostic-information-queries-may-2013/

SQL Server 2008R2 – http://sqlserverperformance.wordpress.com/2013/05/06/sql-server-2008-r2-diagnostic-information-queries-may-2013/

SQL Server 2012 – http://sqlserverperformance.wordpress.com/2013/05/06/sql-server-2012-diagnostic-information-queries-may-2013/

Tomorrow I will post a few more and give yet another thumbs up to a well-respected individual in the SQL Server community.

Thank you Glenn!

Using the Template Explorer

From my experience, one of the lesser-used features of SQL Server Management Studio is the Template Explorer.  Over the next few days I will give some samples of what I use it for on a daily basis.

But first we must cover how to use it!

Open up your SSMS and either press Ctrl-Alt-T or click on View -> Template Explorer.

A new window should appear docked on the side of your SSMS.

Here you can see all of the pre-configured templates set up by Microsoft.  These are fun, but what is important is YOU.  At the top, right-click "SQL Server Templates" and select New -> Folder.

Name this new folder whatever you want.  *Tip* prefixing it with a number will keep it at the top of the list.  *Note* you will have to close SSMS and reopen it to see this take effect.

Now if you right-click your new folder, you can do New -> Template.  This creates a new file.

Right-click the template, click Edit, and you are presented with a regular query window.  You can type whatever you want in this window and save it.  The template will be persisted inside your SSMS any time you open it.  No more hunting for scripts all over your desktop!
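
Templates get even better once you use SSMS template parameters.  Put placeholders of the form <name, type, default> in your template, and pressing Ctrl-Shift-M in the query window pops up a dialog to fill them all in at once.  A quick sketch [the object names here are just examples]:

-- press Ctrl-Shift-M to swap out the placeholders below
SELECT TOP (100) *
FROM <schema_name, sysname, dbo>.<table_name, sysname, MyTable>
WHERE <column_name, sysname, Id> = <value, int, 0>;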

Now say you have them all set up at your workplace and want to take them home?

Navigate to:

C:\Users\"PUT_MachineUser_HERE"\AppData\Roaming\Microsoft\Microsoft SQL Server\100\Tools\Shell\Templates\Sql

[The 100 in the path corresponds to SSMS 2008; other versions use a different number, such as 90 for 2005 or 110 for 2012.]

From there you can copy all of your scripts very easily.

Questions?  Comments?  Leave a message!

My SQL Server is being crushed under CPU pressure!

So I recently had someone reach out to me with issues on their SQL Server.  No matter what they did, whenever they rebooted, SQL Server's CPU usage would pummel the server.

After gaining access to the system, I confirmed that the sqlservr process was indeed up around 85%.

I run one of my favorite [stolen] queries:

WITH Waits AS
(SELECT wait_type, wait_time_ms / 1000. AS wait_time_s,
100. * wait_time_ms / SUM(wait_time_ms) OVER() AS pct,
ROW_NUMBER() OVER(ORDER BY wait_time_ms DESC) AS rn
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('CLR_SEMAPHORE','LAZYWRITER_SLEEP','RESOURCE_QUEUE', 'SLEEP_TASK'
,'SLEEP_SYSTEMTASK','SQLTRACE_BUFFER_FLUSH', 'WAITFOR', 'LOGMGR_QUEUE','CHECKPOINT_QUEUE'
,'REQUEST_FOR_DEADLOCK_SEARCH','XE_TIMER_EVENT', 'BROKER_TO_FLUSH','BROKER_TASK_STOP', 'CLR_MANUAL_EVENT'
,'CLR_AUTO_EVENT', 'DISPATCHER_QUEUE_SEMAPHORE', 'FT_IFTS_SCHEDULER_IDLE_WAIT'
,'XE_DISPATCHER_WAIT', 'XE_DISPATCHER_JOIN'))
SELECT W1.wait_type,
CAST(W1.wait_time_s AS DECIMAL(12, 2)) AS wait_time_s,
CAST(W1.pct AS DECIMAL(12, 2)) AS pct,
CAST(SUM(W2.pct) AS DECIMAL(12, 2)) AS running_pct
FROM Waits AS W1
INNER JOIN Waits AS W2
ON W2.rn <= W1.rn
GROUP BY W1.rn, W1.wait_type, W1.wait_time_s, W1.pct
HAVING SUM(W2.pct) - W1.pct < 95;

Using this, I was presented with SOS_SCHEDULER_YIELD as the biggest offender by far.  For those of you unfamiliar [tsk tsk!], this confirms the CPU pressure.

A common cause of CPU issues [and SOS_SCHEDULER_YIELD waits] with SQL Server is actually very simple to solve: missing indexes.

I pulled up another fun script of mine [thank you Brent Ozar]:

SELECT sys.objects.name
, (avg_total_user_cost * avg_user_impact) * (user_seeks + user_scans) AS Impact
, 'CREATE NONCLUSTERED INDEX ix_IndexName ON ' + sys.objects.name COLLATE DATABASE_DEFAULT + ' ( ' + ISNULL(mid.equality_columns, '') + CASE WHEN mid.inequality_columns IS NULL
THEN ''
ELSE CASE WHEN mid.equality_columns IS NULL
THEN ''
ELSE ',' END + mid.inequality_columns END + ' ) ' + CASE WHEN mid.included_columns IS NULL
THEN ''
ELSE 'INCLUDE (' + mid.included_columns + ')' END + ';' AS CreateIndexStatement
, mid.equality_columns
, mid.inequality_columns
, mid.included_columns
FROM sys.dm_db_missing_index_group_stats AS migs
INNER JOIN sys.dm_db_missing_index_groups AS mig ON migs.group_handle = mig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details AS mid ON mig.index_handle = mid.index_handle AND mid.database_id = DB_ID()
INNER JOIN sys.objects WITH (nolock) ON mid.OBJECT_ID = sys.objects.OBJECT_ID
WHERE (migs.group_handle IN
(
SELECT TOP (500) group_handle
FROM sys.dm_db_missing_index_group_stats WITH (nolock)
ORDER BY (avg_total_user_cost * avg_user_impact) * (user_seeks + user_scans) DESC))
AND OBJECTPROPERTY(sys.objects.OBJECT_ID, 'isusertable') = 1
ORDER BY 2 DESC , 3 DESC
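
For illustration, the CreateIndexStatement column spits out statements of this shape [the table and column names below are made up, and you should always rename ix_IndexName to something meaningful before running it]:

CREATE NONCLUSTERED INDEX ix_IndexName ON Orders ( CustomerId,OrderDate ) INCLUDE (TotalDue);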

The results were what you would expect … extremely high impact for essentially all of the tables involved.  After getting permission [always ask for databases you are not the administrator of!], I went to work creating a couple of indexes at a time and assessing the performance.  After the first two indexes, CPU usage was down to 50%.  After another handful of indexes, the machine stabilized around 20-30%.

Problem solved.

I hope this helps someone out there.

Please let me know if you have any comments or concerns!

Investing in Yourself

“investing in yourself is the best investment you will ever make. it will not only improve your life, it will improve the lives of all those around you.”

– Robin S. Sharma, The Monk Who Sold His Ferrari: A Fable About Fulfilling Your Dreams & Reaching Your Destiny

It’s time to help others and myself through our tribulations with SQL Server.