
Monday, October 4, 2010

What I learned today: No backup or restore with the DAC

For everything I teach, I learn something in return.

I began training a new junior-level DBA today. He already works with us as a developer, but he’s shown an interest in learning how to be a DBA, so I’m teaching him what I know. We started by discussing his daily tasks, then moved on to backup and recovery, topics I gave him as an assignment last week. For this week’s homework I told him to read up on the Dedicated Administrator Connection (DAC) and to get familiar with SQLCMD. I mentioned how the DAC is useful when the server is under strain and you need a way to connect to troubleshoot the problem. Then I said you can use it for backups and restores.

I should have quit when I was ahead.

As my new colleague correctly pointed out, you can’t use the DAC for backups or recovery. If you try, you’ll get the following error:

Msg 3637, Level 16, State 3, Line 1
A parallel operation cannot be started from a DAC connection.
Msg 3013, Level 16, State 1, Line 1
BACKUP DATABASE is terminating abnormally.

And BOL can’t make it any clearer:

SQL Server prohibits running parallel queries or commands with the DAC. For example, error 3637 is generated if you execute either of the following statements with the DAC:

  • RESTORE
  • BACKUP

(Ref: http://technet.microsoft.com/en-us/library/ms189595.aspx)

For some reason I’d forgotten that. It was something I had read a long time ago and lost track of, since I don’t use the DAC very often, and never in production other than to confirm that it was reachable through the firewall. I had assumed that backing up a database in an emergency would be one of the reasons I’d use the DAC. Luckily I was reminded before a situation arose where I relied on it.

So I actually learned two things today. I was reminded about the restriction of the DAC, and I learned to be more careful about what I tell others when it comes to SQL, not to assume facts and to do my own homework. I hope I remember both lessons equally. 
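For reference, opening a DAC session with SQLCMD is just a matter of the -A switch (the server name below is a placeholder, and for a remote connection the 'remote admin connections' option has to be enabled first):

sqlcmd -S MYSERVER -A -d master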

 

 

 

Monday, August 16, 2010

What I learned today – CHECKDB and database snapshots

A few days ago I was sitting in my local coffee shop, getting a little light reading done. At work we’re about to move from the Standard Edition of SQL 2005 to the Enterprise Edition and I wanted to learn more about some features of database snapshots. Snapshots are basically a read-only copy of a database as of a specific point in time. Anyway, I was reading Chapter 3 – Databases and Database Files in the excellent book Microsoft SQL Server 2008 Internals. The discussion turned to the space allocated for snapshots. That’s when I read that DBCC CHECKDB actually runs against a snapshot of the database.

That caught me by surprise. Until then I thought CHECKDB ran against the database itself.

Now I thought I was doing OK with protecting our databases. I faithfully run CHECKDB weekly against all databases and check for any errors. I set the CHECKSUM option and verify backups. I occasionally restore databases on a standby server to make sure there are no problems with the backups. And I monitor database and drive space pretty closely.

But I was flying blind. I have no idea how close I came to having CHECKDB fail because of a lack of space. So I started digging a little deeper into CHECKDB, and I was able to learn a little more about how CHECKDB works with snapshots.

For anything concerning CHECKDB, your first resource should be Paul Randal (Blog | Twitter). Read his excellent series on CHECKDB From Every Angle to get a good understanding of what’s going on. And he also wrote Chapter 11 – DBCC Internals in Microsoft SQL Server 2008 Internals, a great book you should read anyway.   

CHECKDB needs a consistent view of the database to work with. In SQL Server 7.0 it acquired table locks. In SQL 2000 it analyzed the transaction log. Beginning with SQL 2005 it creates a hidden database snapshot on the same volume as the database; you have no control over where it’s placed. If you run CHECKDB while the server has a heavy workload, the snapshot can run out of space and you’ll get an error showing that CHECKDB didn’t complete. If that happens you have a few options:

  • Create your own database snapshot on a drive that has enough space and run CHECKDB against that snapshot (see the sketch after this list). CHECKDB will know that it’s running against a snapshot and won’t create another one.
  • Start the SQL service in single user mode.
  • Put the database in read-only mode. 
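Here’s a minimal sketch of that first option. The database name, logical file name, and path are placeholders; the snapshot needs one sparse file for each data file in the source database, and the NAME values must match the source’s logical file names:

-- Create a snapshot of MyDB on a volume with enough free space
CREATE DATABASE MyDB_CheckSnap
ON (NAME = MyDB_Data, FILENAME = 'E:\Snapshots\MyDB_Data.ss')
AS SNAPSHOT OF MyDB;
GO

-- CHECKDB sees it's a snapshot and won't create its own
DBCC CHECKDB (MyDB_CheckSnap) WITH NO_INFOMSGS, ALL_ERRORMSGS;
GO

-- Clean up when finished
DROP DATABASE MyDB_CheckSnap;
GO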

Once CHECKDB has finished it will delete the snapshot if it created one.

But there are a few gotchas when CHECKDB creates the snapshot itself. First, the snapshot has to be created on an NTFS volume. Second, if you run CHECKDB WITH TABLOCK no snapshot is created; CHECKDB takes locks instead. And third, it can’t create a snapshot of tempdb, but why would you want to run it against tempdb anyway?

I’m sure I’m over simplifying. I still need to finish Chapter 11.

One other thing I’m wondering: if CHECKDB can create a hidden snapshot in SQL 2005 regardless of edition, does that mean the Standard Edition can create snapshots behind some hidden flag?

Tuesday, July 27, 2010

Creating your own default trace

Are you aware of the default trace that SQL Server runs when it starts? Did you know it’s configurable?

No, you can’t edit it. But you can still run a trace on startup that captures only the events that you want.

The first thing you should do is see if the default trace is enabled. Simple enough: run sp_configure with 'show advanced options' enabled and look for 'default trace enabled', or query sys.configurations for configuration_id 1568.

SELECT * 
FROM sys.configurations
WHERE configuration_id = 1568


On my laptop it is.






There are a few ways to see what the trace is doing. First find the trace id. It should be 1, but you can make sure…



SELECT * FROM sys.traces

SELECT * FROM ::fn_trace_getinfo(0)


…then query fn_trace_geteventinfo(1) to see what events the default trace is looking at. You can see the descriptions of the events by joining to some system views.


SELECT tc.name AS [Category],
dt.eventid, te.name AS [Event Name],
dt.columnid, tcol.name AS [Column Name]
FROM ::fn_trace_geteventinfo(1) AS dt
INNER JOIN sys.trace_events AS te ON dt.eventid = te.trace_event_id
INNER JOIN sys.trace_columns AS tcol ON dt.columnid = tcol.trace_column_id
INNER JOIN sys.trace_categories AS tc ON te.category_id = tc.category_id





But what if, after reviewing the events the default trace captures, you decide to add to the list? Or maybe you want to skip some events? Remember, you can’t alter the default trace.



First, disable the default trace with sp_configure. The change doesn’t require a restart, and it also closes the running trace.



sp_configure 'default trace enabled', 0;
GO

RECONFIGURE;
GO


Next create a stored procedure in the master database that will create and start your trace. Use sp_trace_create to generate a trace id, use sp_trace_setevent to select the events you want to capture, and finally start the trace with sp_trace_setstatus. Read more about creating traces here. If you aren’t familiar with creating traces via T-SQL, use SQL Profiler to capture the events and set the properties you need, then script the trace definition out to a file.


Finally, mark your new procedure to start when SQL does by using sp_procoption. This procedure takes three parameters: the name of your procedure, the option name ‘startup’, and a true or false value. ‘Startup’ is the only option available; true sets the procedure to run automatically at startup, and false turns that off.


In my demo script below I’ve created a trace that will track what a certain user does in the database as well as when they log in or out. 


USE [master]
GO

CREATE PROCEDURE [dbo].[MyDefaultTrace]
AS

begin
declare @rc int
declare @TraceID int
declare @maxfilesize bigint
declare @filecount int
set @maxfilesize = 100
set @filecount = 5

-- Create the trace: option 2 = TRACE_FILE_ROLLOVER; @maxfilesize is in MB
exec @rc = sp_trace_create @TraceID output, 2,
N'C:\Perflogs\MyDefaultTrace',
@maxfilesize, NULL, @filecount

declare @on bit
set @on = 1
-- Event 14: Audit Login
exec sp_trace_setevent @TraceID, 14, 7, @on
exec sp_trace_setevent @TraceID, 14, 23, @on
exec sp_trace_setevent @TraceID, 14, 8, @on
exec sp_trace_setevent @TraceID, 14, 64, @on
exec sp_trace_setevent @TraceID, 14, 1, @on
exec sp_trace_setevent @TraceID, 14, 9, @on
exec sp_trace_setevent @TraceID, 14, 21, @on
exec sp_trace_setevent @TraceID, 14, 25, @on
exec sp_trace_setevent @TraceID, 14, 41, @on
exec sp_trace_setevent @TraceID, 14, 49, @on
exec sp_trace_setevent @TraceID, 14, 57, @on
exec sp_trace_setevent @TraceID, 14, 2, @on
exec sp_trace_setevent @TraceID, 14, 6, @on
exec sp_trace_setevent @TraceID, 14, 10, @on
exec sp_trace_setevent @TraceID, 14, 14, @on
exec sp_trace_setevent @TraceID, 14, 26, @on
exec sp_trace_setevent @TraceID, 14, 66, @on
exec sp_trace_setevent @TraceID, 14, 3, @on
exec sp_trace_setevent @TraceID, 14, 11, @on
exec sp_trace_setevent @TraceID, 14, 35, @on
exec sp_trace_setevent @TraceID, 14, 51, @on
exec sp_trace_setevent @TraceID, 14, 12, @on
exec sp_trace_setevent @TraceID, 14, 60, @on
-- Event 15: Audit Logout
exec sp_trace_setevent @TraceID, 15, 7, @on
exec sp_trace_setevent @TraceID, 15, 15, @on
exec sp_trace_setevent @TraceID, 15, 23, @on
exec sp_trace_setevent @TraceID, 15, 8, @on
exec sp_trace_setevent @TraceID, 15, 16, @on
exec sp_trace_setevent @TraceID, 15, 64, @on
exec sp_trace_setevent @TraceID, 15, 9, @on
exec sp_trace_setevent @TraceID, 15, 17, @on
exec sp_trace_setevent @TraceID, 15, 41, @on
exec sp_trace_setevent @TraceID, 15, 49, @on
exec sp_trace_setevent @TraceID, 15, 57, @on
exec sp_trace_setevent @TraceID, 15, 6, @on
exec sp_trace_setevent @TraceID, 15, 10, @on
exec sp_trace_setevent @TraceID, 15, 14, @on
exec sp_trace_setevent @TraceID, 15, 18, @on
exec sp_trace_setevent @TraceID, 15, 26, @on
exec sp_trace_setevent @TraceID, 15, 66, @on
exec sp_trace_setevent @TraceID, 15, 3, @on
exec sp_trace_setevent @TraceID, 15, 11, @on
exec sp_trace_setevent @TraceID, 15, 35, @on
exec sp_trace_setevent @TraceID, 15, 51, @on
exec sp_trace_setevent @TraceID, 15, 12, @on
exec sp_trace_setevent @TraceID, 15, 60, @on
exec sp_trace_setevent @TraceID, 15, 13, @on
exec sp_trace_setevent @TraceID, 15, 21, @on
-- Event 43: SP:Completed
exec sp_trace_setevent @TraceID, 43, 7, @on
exec sp_trace_setevent @TraceID, 43, 15, @on
exec sp_trace_setevent @TraceID, 43, 8, @on
exec sp_trace_setevent @TraceID, 43, 64, @on
exec sp_trace_setevent @TraceID, 43, 1, @on
exec sp_trace_setevent @TraceID, 43, 9, @on
exec sp_trace_setevent @TraceID, 43, 41, @on
exec sp_trace_setevent @TraceID, 43, 2, @on
exec sp_trace_setevent @TraceID, 43, 10, @on
exec sp_trace_setevent @TraceID, 43, 3, @on
exec sp_trace_setevent @TraceID, 43, 11, @on
exec sp_trace_setevent @TraceID, 43, 35, @on
exec sp_trace_setevent @TraceID, 43, 51, @on
exec sp_trace_setevent @TraceID, 43, 12, @on
exec sp_trace_setevent @TraceID, 43, 5, @on
exec sp_trace_setevent @TraceID, 43, 13, @on
exec sp_trace_setevent @TraceID, 43, 6, @on
exec sp_trace_setevent @TraceID, 43, 14, @on
exec sp_trace_setevent @TraceID, 43, 62, @on
-- Event 45: SP:StmtCompleted
exec sp_trace_setevent @TraceID, 45, 7, @on
exec sp_trace_setevent @TraceID, 45, 8, @on
exec sp_trace_setevent @TraceID, 45, 64, @on
exec sp_trace_setevent @TraceID, 45, 1, @on
exec sp_trace_setevent @TraceID, 45, 41, @on
exec sp_trace_setevent @TraceID, 45, 10, @on
exec sp_trace_setevent @TraceID, 45, 3, @on
exec sp_trace_setevent @TraceID, 45, 11, @on
exec sp_trace_setevent @TraceID, 45, 35, @on
exec sp_trace_setevent @TraceID, 45, 51, @on
exec sp_trace_setevent @TraceID, 45, 12, @on
exec sp_trace_setevent @TraceID, 45, 5, @on
exec sp_trace_setevent @TraceID, 45, 13, @on
exec sp_trace_setevent @TraceID, 45, 6, @on
exec sp_trace_setevent @TraceID, 45, 14, @on
exec sp_trace_setevent @TraceID, 45, 62, @on
exec sp_trace_setevent @TraceID, 45, 15, @on
-- Event 41: SQL:StmtCompleted
exec sp_trace_setevent @TraceID, 41, 7, @on
exec sp_trace_setevent @TraceID, 41, 15, @on
exec sp_trace_setevent @TraceID, 41, 8, @on
exec sp_trace_setevent @TraceID, 41, 64, @on
exec sp_trace_setevent @TraceID, 41, 1, @on
exec sp_trace_setevent @TraceID, 41, 41, @on
exec sp_trace_setevent @TraceID, 41, 10, @on
exec sp_trace_setevent @TraceID, 41, 3, @on
exec sp_trace_setevent @TraceID, 41, 11, @on
exec sp_trace_setevent @TraceID, 41, 35, @on
exec sp_trace_setevent @TraceID, 41, 51, @on
exec sp_trace_setevent @TraceID, 41, 12, @on
exec sp_trace_setevent @TraceID, 41, 5, @on
exec sp_trace_setevent @TraceID, 41, 13, @on
exec sp_trace_setevent @TraceID, 41, 6, @on
exec sp_trace_setevent @TraceID, 41, 14, @on


-- Set the Filters
declare @intfilter int
declare @bigintfilter bigint

-- Filter on column 11 (LoginName): logical operator 0 = AND, comparison operator 6 = LIKE
exec sp_trace_setfilter @TraceID, 11, 0, 6, N'BadUser'

-- Start the trace (status 1 = start)
exec sp_trace_setstatus @TraceID, 1

end

GO

EXEC sp_procoption N'MyDefaultTrace', 'startup', 'true'

GO



Now when I check to see what traces are running, trace id #1 is the trace created in my procedure. The trace creation is also written to the SQL error log when the instance starts and runs its startup procedures.
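And to read back what the new trace captured, fn_trace_gettable works just as it does with the built-in default trace. A minimal sketch, assuming the file name my procedure creates (sp_trace_create appends the .trc extension):

SELECT t.StartTime, t.LoginName, t.DatabaseName, t.TextData
FROM ::fn_trace_gettable('C:\Perflogs\MyDefaultTrace.trc', default) AS t
ORDER BY t.StartTime;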






As always, please test this on a server you can play with before putting into production.

Wednesday, July 21, 2010

Winning the DBA Open

Last weekend I, like millions of other golf fans around the world, watched as Louis Oosthuizen, an unknown (at least to me) golfer out of South Africa, won The Open Championship at St Andrews by an astounding 7 strokes. He showed confidence in his abilities throughout the tournament, never losing his composure playing in some of the toughest conditions on any course in the world. Congratulations, Louis!

 

As I watched the game, I began to think that being a DBA and being a champion golfer weren’t that different. Both need to be confident in their abilities, both need to have a solid game plan, and both have to be able to execute their craft under pressure, being prepared for any eventuality.

Preparing for the tournament

A good golfer gets to the course early, days before the tournament begins. They play the course, talk to locals to learn any particular quirks the course may have, and practice hundreds of shots of different lengths and difficulties.


Good DBAs do the same. They find out about the host server, talking to the system team or SAN administrator: What kind of disk subsystem is there? What’s the memory or CPU like? Does the server meet the requirements for SQL?

You drive for show…

The golfer starts his round on the tee. He’ll determine the club to use based on experience and conditions: Where’s the wind coming from? How long is the hole? Where are the danger spots? A well-hit drive will end up far down the fairway, perfectly placed for his next shot.


The DBA does the same when he sets up the server. His experience comes into play knowing what has to be done, and server conditions will help him to correctly place the system databases and data and log files for other databases. He’ll know what options to set and what to set them to. His server will be set up to optimal conditions, perfectly placed for hosting the company data.

…the approach shot…

Now the golfer will try to get his ball onto the green. He’ll again choose the right club, line up the shot, take a practice swing or two, then hit his best shot. He’ll know how his ball will fly because he has hit the same shot hundreds of times before, so he can compare against past performance.


The DBA’s approach shot is to benchmark his system. He should know how the server will perform during peak and non-peak times.

…getting out of a hazard…

Sometimes, though, the golfer’s approach shot is mishit. Or maybe a gust of wind holds up the shot and it lands in a greenside bunker. A good golfer knows how to get out of those situations and save par.


Similarly, a DBA must know how to get out of disaster scenarios. He knows he has set up and is maintaining  a proper backup strategy and has practiced recovering databases until he knows the procedure inside and out. He knows the different options available for DBCC CHECKDB.

…and you putt for dough

This is where the good golfers shine. They don’t fade when standing over a three foot putt with the tournament on the line. They know they’ve made the same putt over and over before, and they’ve practiced for just this occasion. They read the putt’s line, judge the speed, and with millions of eyes watching they calmly knock it in, center cup.


The DBA is also good under pressure. They’ve written and checked over their scripts, and they’re confident they’ll perform as written. They don’t wilt when the CIO is standing over their shoulder asking why performance is so bad on their server.

The caddy…

All good golfers know the value of their caddy. A caddy’s job is to help his golfer, pointing out the course yardage, offering the right club, taking some of the pressure off the golfer. Sometimes the caddy is also a teacher, pointing out flaws in the golfer’s swing.


A DBA also needs a caddy. Their caddy can be a mentor or coworker at work. They can use social networks for advice on specific problems or as a sounding board.      

Preparing for the next tournament

The golfer is always practicing. He’s at the driving range on his day off, constantly working on correcting swing flaws and working on new shots.

Likewise the DBA is constantly learning. They attend their local PASS chapter meetings or go to national conventions. They’re constantly reading blogs and whitepapers, learning new skills to help them manage their servers or help their company.

The moral? Like in golf, or any endeavor, the secret of being a good DBA is to be prepared. Learn your craft. Ask your caddy when you have questions. Practice when you’re able. Keep learning. And you too can win the DBA Open.

Monday, July 19, 2010

What I learned today – importing mixed data types from Excel

Recently a coworker came to me with an issue I’d never run into before. He’s importing a large number of Excel spreadsheets into a SQL 2005 instance. One of the columns contains mixed data: 1, 2, 3, x, y, z, for example. What he found is that if the first 20 or so rows are strictly numeric, then the connection treats the whole column as numeric, and alphanumeric values are returned as NULLs. He’s querying using OPENROWSET, but it looks like this happens in SSIS as well.

What his research found was that there are a few registry keys that come into play, plus another property he needed to set in his connection string. Adding “IMEX=1” allows importing of mixed data types based on the registry setting “ImportMixedTypes”. The default value is “Text”. One drawback to Text is that you’re limited to 255 characters. The only other available value is “Majority Type”; when you use it, some of the values are scanned and the data type with the most values wins.
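To illustrate, here’s roughly what the OPENROWSET call looks like with IMEX=1 added to the provider string. The file path and sheet name are placeholders, this assumes an .xls file readable by the Jet 4.0 provider, and ad hoc distributed queries must be enabled on the server:

SELECT *
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
     'Excel 8.0;Database=C:\Imports\Sample.xls;HDR=YES;IMEX=1',
     'SELECT * FROM [Sheet1$]');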

The number of rows scanned is based on the “TypeGuessRows” registry setting. By default it’s set to 8, but valid values are 0 to 16. If you set it to 0 then all rows are sampled; a value between 1 and 16 samples that many rows.

Both values can be found under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Jet\4.0\Engines\Excel.

Here are a few articles that explain the issue better.

http://jingyang.spaces.live.com/blog/cns!CC21A118B1B5250!318.entry?sa=781031802

http://www.instantpages.ltd.uk/ADODB_WP.htm

UPDATED 7/20/2010: You can read more about the actual issue here.

Wednesday, July 14, 2010

Changing the name of your SQL server

My company recently changed their standard naming conventions for computers, so yesterday I had to rename my workstation. Usually this isn’t a big deal, except that locally I’m running a default instance of SQL 2005 and a named instance of SQL 2008. Again, not a big deal since this is just my local playground. But I wanted to sync up the names.

Let’s say that my laptop was named “CHICAGO”. That makes the default instance also “CHICAGO”, and my named instance “CHICAGO\KATMAI”. Now my laptop name changed to “NEWCHICAGO”. My SQL instances stay as “CHICAGO” and “CHICAGO\KATMAI”. How do you change them to match the new computer name?

Couldn’t be simpler: just execute two procedures. For the default instance:

USE master;
GO

EXEC sp_dropserver 'CHICAGO';
GO

EXEC sp_addserver 'NEWCHICAGO', local;
GO

It’s the same for a named instance. Just add the instance name.

USE master;
GO

EXEC sp_dropserver 'CHICAGO\KATMAI';
GO

EXEC sp_addserver 'NEWCHICAGO\KATMAI', local;
GO
Then, just restart the SQL service and you should see the name change.
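A quick way to confirm the change after the restart: @@SERVERNAME reads the metadata that sp_dropserver and sp_addserver maintain, while SERVERPROPERTY('ServerName') always reflects the current Windows computer and instance name, so once the service comes back up the two should match.

SELECT @@SERVERNAME AS NameFromSysServers,
       SERVERPROPERTY('ServerName') AS NameFromWindows;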


Note, though, that this only changes the server name, not the instance name.  I haven’t had an opportunity to try that yet.

Tuesday, July 13, 2010

Another use for PowerShell and SQL

A few months ago I wrote a PowerShell script that looped through servers and databases to save their properties in an Excel spreadsheet (PowerShell and SQL). I thought it was time to add another one.

One of the daily tasks I set up for myself as a SQL DBA is to go through the error logs for anything that may be an indicator of a problem. This is especially true on Sunday afternoons or Monday mornings, after my weekly DBCC CHECKDB job runs. I’ve configured the jobs to email me the results, but since I’m monitoring 10 instances they can be time consuming to read through. I usually just cut to the bottom, where DBCC reports the number of errors it finds. If the number of errors is 0, all is well and good. Otherwise, of course, further investigation is needed.

This seemed like a natural for a new PowerShell script. One of the great strengths of PowerShell is its ability to connect to as many remote servers as needed. And since I knew that the DBCC CHECKDB summary is written to the error log, all I need to do is read the logs. I had read this post by Buck Woody (blog | twitter) about doing almost exactly what I wanted. I merged his script with my earlier one and made a few other changes. My final code looks like this:

[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | out-null 

#Get SQL account information
$credential = Get-Credential
$loginName = $credential.UserName -replace("\\","")
$password = $credential.Password

#Get variables
$logDate = read-host("Enter log date")
$searchPhrase = "*"
$searchPhrase += read-host("Enter search phrase")
$searchPhrase += "*"
$srvlist = @(get-content "C:\Power\SQL_Servers.txt")

#Loop through all instances
foreach ($instance in $srvlist)
{
$instance
$srv = New-Object ('Microsoft.SqlServer.Management.Smo.Server') $instance
$srv.ConnectionContext.LoginSecure = $false
$srv.ConnectionContext.set_Login($loginName)
$srv.ConnectionContext.set_SecurePassword($password)
$srv.ReadErrorLog(0) | where {$_.LogDate -gt $logDate -and $_.Text -like $searchPhrase}
}



Since I don’t always access the servers from a computer on the domain, I added login credentials. You can remove these if you want to use Windows authentication. After that the script prompts me for a date and a search phrase, which comes in handy when I want to change what I’m searching for. For instance, I can enter the phrase “DBCC CHECKDB” and the script will return the summaries from the date I entered onward. Or I can search for “BACKUP” or “ERROR”, whatever my heart desires.

Tuesday, July 6, 2010

Stop SQL Agent when enabling Service Broker on msdb

I forget this every time I set up a new server.

When I set up Database Mail on a new server I usually get the message “Service Broker is not enabled on msdb. Do you want to enable it?” Clicking OK gives me an unresponsive SQL Server. Running the command “ALTER DATABASE msdb SET ENABLE_BROKER” runs forever with no message returned.

What I forget each and every time is that the SQL Agent is running. Checking the locks or running sp_who2 shows that my ALTER DATABASE is blocked by the process running the Agent. Simply stopping the Agent allows the statement to finish, whether from the query window or clicking OK in the dialog box mentioned earlier. Then I just restart the Agent and everything’s hunky-dory again.
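In other words, the whole fix is a matter of ordering (the ALTER DATABASE is the same statement as above; stopping and starting the Agent happens outside of T-SQL, through the Services console or SQL Server Configuration Manager):

-- 1. Stop the SQL Server Agent service; it holds connections in msdb

-- 2. Now the statement finishes immediately
ALTER DATABASE msdb SET ENABLE_BROKER;
GO

-- 3. Restart the SQL Server Agent service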

Maybe now I’ll remember it.

Monday, July 5, 2010

My favorite SQL features

Recently I ran across a contest sponsored by Microsoft SQL Server MVP Ken Simmons (Blog | Twitter). One of the ways to enter the contest was to create a blog post listing favorite SQL features. But how do you select a favorite? That’s like asking someone to choose a favorite song. Like most things in the SQL world, the correct answer is “it depends”, usually on what I’m trying to do at the time.

So here’s my top 10 favorite SQL features and tips, in the order that they come to mind. I’m ignoring features I’m not currently using, like backup compression and data encryption, and limiting myself to those I use the most often.

  1. Integration with PowerShell. This is a great feature to administer multiple SQL instances easily. For those who haven’t been on the internet in the last few years, PowerShell is Microsoft’s new scripting language based on the .NET framework. And starting with SQL 2008, PowerShell became integrated with SQL. What does this mean to you as a DBA? Well, imagine yourself as the new head DBA for your company, and you’re now in charge of 100 SQL instances. Your boss tells you that your first task is to standardize all instances. How can you quickly view the properties of each server? With PowerShell and remoting, you can easily query each instance and save the results to an Excel spreadsheet for easy analysis. I gave an example of how I do it in an earlier blog post here.      
  2. Dynamic Management Views and Functions (DMV). Introduced with SQL 2005, DMVs are a great way to see how your server is performing. I’m not going to go into them in depth here since they’ve been written about by many others much better than I can. If you’re not familiar with them, do yourself a favor and start reading about them. They’ll make your life much easier.
  3. Storing queries in the toolbox. Like all DBAs, I’ve got a number of queries that I run over and over again. There’s numerous ways to store these for easy access, but my favorite for short one-line queries is to store them on the toolbox. Simply open the toolbox (Ctrl + Alt + x) and drag your query over. To recall it, open the toolbox, copy the desired query, and paste it into a new query window.
  4. Policies. Introduced in SQL 2008, Policy Based Management allows DBAs to manage multiple servers from a central server. Policies can standardize server configurations and check them for compliance.   
  5. Reporting Services. Built-in reporting capabilities for the same price as the database engine. ‘nuff said.
  6. IntelliSense. This feature gives the same statement completion that you see in Visual Studio, not surprising since SQL Server Management Studio uses the same IDE as Visual Studio. Unfortunately it doesn’t work against 2005 servers or earlier. It did in the earlier betas but was removed in the final release. Hopefully Microsoft will put it back.
  7. Object Explorer Details. You can now see details of different database objects using the details view (F7). It’s also configurable. Just right click on the screen and choose what columns you want to see.
  8. Running queries against multiple instances. If you set up server groups under Registered Servers and register your SQL instances in them, you can execute a query against all servers in a group. You can select an entire group, but you can’t choose to skip instances inside the group; it’s all or nothing. Still, a significant timesaver when you need to run multiple statements against multiple instances.
  9. Setting variable values. One of the nice things in the .NET world is the ability to easily change the value of a variable; i += 1, for instance. Now you can do that in SQL too. Instead of

     SET @i = @i + 1

     you can now write

     SET @i += 1

  10. SQL Community. Not technically part of SQL itself, but I feel it’s important nevertheless. I first began learning about SQL on my own and quickly found myself overwhelmed. There was no time to learn at work, and I needed to find the knowledge to do my job. I found the Chicago SQL User Group and found what I needed: experienced DBAs volunteering to talk about a subject they love. ChiSUG led me to PASS, an international group with tons of free resources. I began following blogs written by the same people authoring the articles on PASS and presenting at the group meetings. Reading these blogs led me to Twitter, where I can follow SQL experts in real-time and ask for help with the #SQLhelp tag. Finally there are SQL Saturdays, day-long events with SQL experts speaking on various topics. Notice the recurring theme about these resources; they are all free. I’m constantly amazed by the amount of content that SQL DBAs and developers make available for no cost, and the time they put in as volunteers at in-person events. To the entire SQL community, thank you. I wouldn’t be where I am today without your help.

Tuesday, June 29, 2010

PIVOT compared to CASE for a crosstab result

Yesterday I was playing with a test query for a coworker using the PIVOT keyword when it dawned on me that I had never compared it to using CASE statements. So, here are the requirements.

There is a table called lab_results, with a partial DDL statement below

CREATE TABLE [dbo].[lab_results](
[lab_result_id] [char](12) NOT NULL,
[patient_id] [char](8) NOT NULL,
[test_code] [numeric](4, 0) NOT NULL,
[sample_date] [datetime] NULL,
.............................
.............................
CONSTRAINT [PK_lab_results] PRIMARY KEY CLUSTERED
(
[lab_result_id] ASC
) ON [PRIMARY]

CREATE NONCLUSTERED INDEX [IX_lab_results_3] ON [dbo].[lab_results]
(
[test_code] ASC,
[sample_date] ASC
) ON [Secondary]


This table has approximately 18 million rows, and you need to get a count of test codes for each year. Anything prior to 1/1/2006 would be counted together, so the result is a crosstab with one column per year plus a Pre2006 column.




Prior to SQL 2005, I would use a series of CASE statements to perform a count of each test code


SELECT test_code
, SUM(CASE WHEN DATEPART(yy, sample_date) = 2010 THEN 1 END) AS [2010]
, SUM(CASE WHEN DATEPART(yy, sample_date) = 2009 THEN 1 END) AS [2009]
, SUM(CASE WHEN DATEPART(yy, sample_date) = 2008 THEN 1 END) AS [2008]
, SUM(CASE WHEN DATEPART(yy, sample_date) = 2007 THEN 1 END) AS [2007]
, SUM(CASE WHEN DATEPART(yy, sample_date) = 2006 THEN 1 END) AS [2006]
, SUM(CASE WHEN DATEPART(yy, sample_date) < 2006 THEN 1 END) AS [Pre2006]
FROM lab_results
GROUP BY test_code


Using the new (to SQL 2005 anyway) PIVOT keyword, I could rewrite the query like this


SELECT test_code, [2010], [2009], [2008], [2007], [2006], [2005] AS Pre2006
FROM
(
SELECT test_code, sample_date
, CASE WHEN DATEPART(yy, sample_date) < 2006 THEN 2005
ELSE DATEPART(yy, sample_date)
END AS LabResultYear
FROM lab_results
) AS tmp
PIVOT (COUNT(sample_date) FOR LabResultYear IN ([2010], [2009], [2008], [2007], [2006], [2005])) AS p;


But I had never looked at the execution plans before yesterday to see how each one runs. So I finally did, comparing the plan for the CASE version against the plan for the PIVOT version, and here’s what I saw.






At first glance they seem identical. Both use the same operators and both use an index seek on the non-clustered index. However there are small differences. The CASE statement completes 2 seconds faster than the PIVOT query, 45 seconds to 47. The index seek, while accounting for a higher percentage of the whole in the CASE plan, has the same operator cost in both. The difference shows up in the Stream Aggregate operator: in the CASE plan its cost is 10.8, about 10% of the whole query, while in the PIVOT plan the Stream Aggregate is 18% of the whole with an operator cost of 19.8.



One other point about the PIVOT keyword. When I ran these tests I was actually connected to the master database and referencing the table by a fully qualified name. If I ran the PIVOT query directly against the database where the table lives it would fail. Why? Because that database, while running on a SQL 2005 server, is still at compatibility level 70 (I know, I know). PIVOT doesn’t work unless the database is at compatibility level 90 or higher.

Thursday, April 15, 2010

What I learned tonight – tempdb size when SQL starts

I went to the April monthly meeting of the Chicago SQL Server User Group tonight to see Brad McGehee (Blog | Twitter). Brad’s in town to present at SQLSaturday # 31 in Chicago and he also gave a talk to us tonight on optimizing tempdb performance.

It was a good session because it went over tempdb and best practices. Most of the information I already knew, but it’s always good to hear it again, even as a refresher. And there’s usually something I’ve taken for granted but am wrong about. Tonight was no exception.

I knew that the data and log files for tempdb are recreated when the SQL service starts. But I thought that tempdb got its file sizes from the model database. For instance, if the model database is set at 2 MB for the data and 1 MB for the log, then tempdb would inherit those same settings. But I was wrong. As Brad mentioned tonight, tempdb starts out at 8 MB for the data file and 1 MB for the log. On my laptop the log is actually 512 KB, but maybe I heard him wrong.
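You can check what your own instance actually starts with by querying sys.database_files in tempdb right after a restart; sizes are stored in 8 KB pages, so the quick query below converts them to MB:

USE tempdb;
GO

SELECT name,
       size * 8 / 1024.0 AS size_mb,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(10)) + '%'
            ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
       END AS growth
FROM sys.database_files;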

Anyway, to properly resize tempdb you should run the ALTER DATABASE command…

USE master;
GO

ALTER DATABASE tempdb
MODIFY FILE ( NAME = 'tempdev', SIZE = 25600KB , FILEGROWTH = 5120KB );
GO

ALTER DATABASE tempdb
MODIFY FILE ( NAME = 'templog', SIZE = 2048KB , FILEGROWTH = 1024KB );
GO





…and tempdb will be correctly resized after every restart. And you’ve heard that you should create multiple tempdb files depending on the number of CPUs you have? Well, just do that at the same time. Here I’m renaming, moving, resizing, and adding a file to tempdb:



USE master;
GO

ALTER DATABASE tempdb
MODIFY FILE (NAME = 'tempdev', NEWNAME = 'tempdev1', FILENAME = 'C:\SQL2008\DATA\tempdev1.mdf');
GO

ALTER DATABASE tempdb
MODIFY FILE (NAME = 'templog', FILENAME = 'C:\SQL2008\DATA\templog.ldf');
GO

ALTER DATABASE tempdb
MODIFY FILE (NAME = 'tempdev1', SIZE = 25600KB , FILEGROWTH = 5120KB);
GO

ALTER DATABASE tempdb
MODIFY FILE (NAME = 'templog', SIZE = 2048KB , FILEGROWTH = 1024KB);
GO

ALTER DATABASE tempdb
ADD FILE (NAME = 'tempdev2',
FILENAME = 'C:\SQL2008\DATA\tempdev2.ndf',
SIZE = 25600KB , FILEGROWTH = 5120KB);
GO


The usual disclaimers go here. Don’t use this script without understanding it, as it can screw up your server. Don’t use it in production until you’ve tested it.



And I’ll leave the discussion of how many tempdb files you should have for another day.




Tuesday, April 13, 2010

SQL error log finds all copies of resource db after moving db

Just a short post about something interesting I saw in the SQL error log a while back.

A while ago, I moved the system databases off the C drives of our servers to another drive on the SAN. That included moving the resource database. You can read Microsoft’s recommendations on moving system db’s on MSDN.

Anyway, I copied the mssqlsystemresource.mdf and mssqlsystemresource.ldf files to a backup location in case of an emergency. But only one database was attached. 

This interesting entry in the SQL error log was after I restarted the server:

The resource database has been detected in two different locations. Attaching the resource database in the same directory as the master database at 'O:\SQL2005\DATA\master.mdf' instead of the currently attached resource database at 'O:\SQL2005\Data\mssqlsystemresource.mdf'.

So, SQL saw both the attached db and the unattached backup copy when I restarted the service. The only time I saw the message was the first reboot, not any subsequent restarts.

Wednesday, March 24, 2010

Using the Inserted and Deleted tables outside of a trigger

Just a simple little post about a conversation I had earlier today with a developer about triggers, specifically using them to audit update and delete statements against a table. During our talk I mentioned that, starting in SQL 2005, the inserted and deleted tables were available outside of a trigger. He wasn’t aware of that until I mentioned it. So, for anyone else who’s not familiar with the inserted and deleted tables, here’s a little summary.

Briefly, when you insert, update, or delete records in a table, SQL Server exposes those records in virtual tables: inserted holds the new rows for inserts and updates, and deleted holds the old rows for updates and deletes. These tables are available inside DML triggers. SQL Server Books Online has an article explaining the use of the inserted and deleted tables inside triggers; read that first as a refresher. In our shop we use them to monitor who is updating and deleting certain critical tables, and we can roll back changes if necessary.

Starting with SQL 2005, you can use the OUTPUT clause to access the inserted and deleted tables outside of DML triggers. You can read the article in Books On Line.  

Here’s a simple demo script to run. I added a primary key and a non nullable column to show what happens in error conditions. 

USE tempdb;
GO

SET NOCOUNT ON;
GO

-- Create a testing table
USE [tempdb]
GO

IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[TestOutput]') AND type in (N'U'))
DROP TABLE [dbo].[TestOutput]
GO

USE [tempdb]
GO

CREATE TABLE [dbo].[TestOutput](
[Field1] [int] NOT NULL,
[Field2] [char](1) NOT NULL,
CONSTRAINT [PK_TestOutput] PRIMARY KEY CLUSTERED
(
[Field1] ASC
)ON [PRIMARY]
) ON [PRIMARY]

GO

-- Testing the OUTPUT clause
DECLARE @TestOutput_Audit TABLE
(
Field1 INT,
Field2 CHAR(1),
[Action] CHAR(1),
Action_by VARCHAR(25),
Action_Date DATETIME
)

-- Testing Insert
INSERT INTO TestOutput (Field1, Field2)
OUTPUT inserted.Field1, inserted.Field2, 'I', SUSER_NAME(), GETDATE()
INTO @TestOutput_Audit (Field1, Field2, [Action],Action_by, Action_Date)
VALUES (1, 'A')

INSERT INTO TestOutput (Field1, Field2)
OUTPUT inserted.Field1, inserted.Field2, 'I', SUSER_NAME(), GETDATE()
INTO @TestOutput_Audit (Field1, Field2, [Action],Action_by, Action_Date)
VALUES (1, 'A')

SELECT * FROM TestOutput
SELECT * FROM @TestOutput_Audit

-- Testing Update
UPDATE TestOutput
SET Field2 = 'B'
OUTPUT deleted.Field1, deleted.Field2, 'U', SUSER_NAME(), GETDATE()
INTO @TestOutput_Audit (Field1, Field2, [Action],Action_by, Action_Date)
WHERE Field1 = 1

UPDATE TestOutput
SET Field2 = NULL
OUTPUT deleted.Field1, deleted.Field2, 'U', SUSER_NAME(), GETDATE()
INTO @TestOutput_Audit (Field1, Field2, [Action],Action_by, Action_Date)
WHERE Field1 = 1

SELECT * FROM TestOutput
SELECT * FROM @TestOutput_Audit

-- Testing Delete
DELETE TestOutput
OUTPUT deleted.Field1, deleted.Field2, 'D', SUSER_NAME(), GETDATE()
INTO @TestOutput_Audit (Field1, Field2, [Action],Action_by, Action_Date)
WHERE Field1 = 1

SELECT * FROM TestOutput
SELECT * FROM @TestOutput_Audit



The INSERT statement simply inserts a single record into my table. Afterwards you can see the same record in the table as well as my audit table. The second record wasn’t added to either table because it’s a primary key violation.


Msg 2627, Level 14, State 1, Line 18
Violation of PRIMARY KEY constraint 'PK_TestOutput'. Cannot insert duplicate key in object 'dbo.TestOutput'.
The statement has been terminated.

Field1      Field2
----------- ------
1           A

Field1      Field2 Action Action_by                 Action_Date
----------- ------ ------ ------------------------- -----------------------
1           A      I      PERTELLT3\John            2010-03-24 17:58:03.407

I make a simple UPDATE. Now the table shows the update and the audit shows the value before the update took place. The second update fails because Field2 doesn’t allow NULL values; the main record isn’t updated nor added to the audit table.



Msg 515, Level 16, State 2, Line 33
Cannot insert the value NULL into column 'Field2', table 'tempdb.dbo.TestOutput'; column does not allow nulls. UPDATE fails.
The statement has been terminated.

Field1      Field2
----------- ------
1           B

Field1      Field2 Action Action_by                 Action_Date
----------- ------ ------ ------------------------- -----------------------
1           A      I      PERTELLT3\John            2010-03-24 17:58:03.407
1           A      U      PERTELLT3\John            2010-03-24 17:58:03.413

Finally, when I delete the record I can see the record is gone from the main table but stored in the audit table.



Field1      Field2
----------- ------

Field1      Field2 Action Action_by                 Action_Date
----------- ------ ------ ------------------------- -----------------------
1           A      I      PERTELLT3\John            2010-03-24 17:58:03.407
1           A      U      PERTELLT3\John            2010-03-24 17:58:03.413
1           B      D      PERTELLT3\John            2010-03-24 17:58:03.413

Monday, March 1, 2010

Issues installing SQL 2005 SP3

Before I get to the details, I’d like to ask a few questions. I’d be interested in hearing from anyone who’s had problems installing SQL service packs or hotfixes. Has anyone else had the errors I’m mentioning below, or anything similar? What procedures or best practices do you follow when installing patches? 

Last week I finally was able to install SQL 2005 SP3 on our production servers. I’ve installed countless service packs and hotfixes in the past on numerous servers both physical and virtual, development and production. And I’ve never run into the problems I had last week.

In this case, I’m upgrading four production sites from SQL 2005 SP2 with the CU7 hotfix installed, build 3239 (see this article from SQLServerCentral that matches build numbers to service packs). When I installed SP3 on our dev and test servers I received the usual warning about running applications. So last week I tried to stop the SQL services on the first two servers. This brought up Error #1: the services were telling me that I lacked the proper permissions to stop the SQL service. Now my account is an administrative account on the servers, so it has plenty of rights, certainly enough to stop the service.

Now comes Error #2: The system databases on a SAN drive were read only, and again I didn’t have proper permissions to rewrite them! Again, I’m an administrator and should have been able to do this with no problems. Reading the install logs just gave me the same information. And again, this error only occurred on the first 2 servers, not the final 2.

Next I logged off the servers and logged back in using the same account that the SQL services run under. I continued to get the same two errors on Server 1, but Server 2 let me complete the installation. Still no errors at all on Server 3 or Server 4. Now I’m getting concerned.

My next step was to restart Server 1. After the server came back up, I could finally install the service pack. When I finished I checked the Windows and SQL error logs for any clues that would explain why my permissions were being denied. I must admit my Windows admin skills aren’t the greatest, so I probably missed the cause. I did notice that the anti-virus program we run had similar issues at the same time, but the anti-virus doesn’t touch the database files and I wasn’t trying to do anything with the anti-virus service. My guess is that Server 1 and Server 2 somehow couldn’t validate my account after I logged in.

Anyway, I thought I was finished. Not quite. My final step was to verify the new version. So I ran SELECT SERVERPROPERTY('ProductLevel') and all four servers returned SP3. The build number now showed as 9.00.4035. All looks good, right? A few days later I ran EXEC sp_server_info.
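As an aside, one query pulls most of the version details I was checking in a single pass (these SERVERPROPERTY values are standard; the comments show what I’d expect on a patched server):

SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion, -- e.g. 9.00.4035.00
       SERVERPROPERTY('ProductLevel')   AS ProductLevel,   -- e.g. SP3
       SERVERPROPERTY('Edition')        AS Edition,
       @@VERSION                        AS FullVersionString;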

Finally we get to Error #3. Remember that on Server 2 I had problems at first with permissions, but I thought the SP installed correctly on the second try. Now, looking at the values for DBMS_VER and SYS_SPROC_VERSION, I’m seeing the build still showing 9.00.3239! Why? I’m not sure. I think it’s due to one of two reasons.

First, even though the SP3 install looked successful the second time, it may have skipped a component. I thought I selected all components during the reinstall, but it’s possible I missed something. NOTE TO SELF: for future installs, document what components you’re installing, or at least take a screenshot of what’s checked.

The second possibility is Analysis Services. SSAS was installed on Server 2 but the service is disabled. That makes sense since we’re not using SSAS currently, but the reason it was installed on just one server is lost to the mists of time. In any case, the SP3 install logs show that SSAS was disabled and is at build 3239. But by that logic it would only be at 3239 if it had been running during the earlier patches. Again, I can’t be sure.

Tuesday, February 16, 2010

How I save table sizes for all databases

Back when I first started monitoring our production servers, I wanted a way to track the size and growth of all tables in our databases. At the time, we were really only worried about one legacy database, and all the objects in that database were in the dbo schema. So tracking size growth was pretty easy: I ran the following script against that database weekly and compared results.

declare @TableName    sysname

declare mySpace cursor for
select name from sys.sysobjects where xtype = 'U' 
order by name
open mySpace
fetch next from mySpace into @TableName
while @@fetch_status = 0
begin
insert into PerfDB.dbo.TableSpace(TableName, NumRows, Reserved, DataUsed, IndexUsed, Unused)
exec sp_spaceused @TableName
fetch next from mySpace into @TableName
end
close mySpace 
deallocate mySpace

After a while, we began to create new projects that required new databases. And these databases had objects created in separate schemas. So now, if I wanted to track changes to all tables in all databases, I’d have to run the above code against every database. Do-able, certainly, but not exactly what I wanted to maintain; I’d have to add the code to each new database we added to the server. And sp_spaceused only returns the name of the table, not the schema name. I wanted to separate the tables by database as well as schema.

So I played around with using two well known undocumented procedures, sp_MSForEachDB and sp_MSForEachTable. I’ve used them before, but I had never nested foreachtable inside foreachdb. I still needed a way to separate the schema from the table name, and I wanted to get away from using a cursor.

I found the following script on a SQLServerCentral forum that gave me an idea. Running it showed that the value of ‘?’ inside sp_MSForEachTable includes the schema name as well as the table name.

declare @command1 varchar(8000)
select @command1 = 
'
IF ''@'' <> ''master'' AND ''@'' <> ''model'' AND ''@'' <> ''msdb'' AND ''@'' <> ''tempdb'' 
begin
use [@] execute sp_MSForEachTable ''print ''''?'''' '' 
end    
'
exec sp_MSforeachdb @command1, '@'
So now I can just run sp_spaceused inside the sp_MSForEachTable block and strip out the schema into a separate column.

Here’s the code I use now

CREATE TABLE  #TableSpace (
DatabaseName    nvarchar(128) NULL,
SchemaName        nvarchar(128) NULL,
TableName        nvarchar(128) NOT NULL,
rows            int NOT NULL,
reserved        varchar(15) NOT NULL,
data            varchar(15) NOT NULL,
index_size        varchar(15) NOT NULL,
unused            varchar(15) NOT NULL)
DECLARE @SQL varchar(8000)
SELECT @SQL = 
'
IF ''@'' <> ''master'' AND ''@'' <> ''model'' AND ''@'' <> ''msdb'' AND ''@'' <> ''tempdb'' 
BEGIN
USE [@] EXECUTE sp_MSForEachTable ''INSERT INTO #TableSpace (TableName, rows, reserved, data, index_size, unused) EXEC sp_spaceused ''''?''''; 
UPDATE #TableSpace SET SchemaName = LEFT(''''?'''', CHARINDEX(''''.'''', ''''?'''', 1) - 2) WHERE SchemaName IS NULL;
UPDATE #TableSpace SET DatabaseName = ''''@'''' WHERE DatabaseName IS NULL; ''
END    
'
EXEC sp_MSforeachdb @SQL, '@'

In my example I’m creating a temporary table, but in practice I use a permanent table. The IF check skips the system databases, and the EXECUTE sp_MSForEachTable call runs sp_spaceused for every table in each remaining database, inserting the results into the table.

The first UPDATE statement is where I strip out the schema name. I use the CHARINDEX function to find the position of the dot separator, then the LEFT function to return the characters before it. Since ‘?’ returns the table name as [Schema].[TableName] I also want to skip the brackets, so I move the position one more place to the left to take care of the ] character. I could have been fancier and done even more string manipulation to take care of the [ character, but it was just as easy to use the REPLACE function on the whole column.

And the second UPDATE stores the database name. Running the script on my laptop against SQL 2008 gives me one row per table, tagged with its database and schema name.
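Once the results are saved, a follow-up query makes it easy to spot the biggest tables. This is just a sketch against the #TableSpace columns above; sp_spaceused returns the sizes as strings like '1234 KB', so they need converting before you can sort on them:

SELECT DatabaseName, SchemaName, TableName, [rows],
       CAST(REPLACE(reserved, ' KB', '') AS bigint) / 1024.0 AS reserved_mb
FROM #TableSpace
ORDER BY CAST(REPLACE(reserved, ' KB', '') AS bigint) DESC;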


I’ve run this against SQL 2005 and SQL 2008 servers. I haven’t tested it against SQL 2000. As with any code you find on the internet, please test it and become familiar with it before you put it into production.  

Monday, January 18, 2010

Using the RELOG utility – Part 2

In my previous post (What I Learned Today - the RELOG utility) I discussed what I discovered the first time I learned about the RELOG utility. In this post I’ll continue with my education. Again, I’ve used Ask The Performance Team, Microsoft TechNet, and this blog post at My Torn Data Pages as my references.

I found that I could output a list of counters from a perfmon counter log using RELOG (RELOG counterfilename -q -o counterfilename). But you can take that a step further. Simply edit your counter file and delete the counters you don't want. Then rerun RELOG and use the -cf switch to import a list of counters to filter for. My examples use the same counter log file as before:

RELOG SQL1_20100106.csv -q -o counters.txt -- to get my list of counters

RELOG SQL1_20100106.csv -cf counters.txt -f csv -o filtered1.csv -- to filter by those counters after editing the counters.txt file.

And of course I can still filter by time with the -b and -e switches:

RELOG SQL1_20100106.csv -cf counters.txt -b "1/5/2010 9:00:00AM" -e "1/5/2010 10:00:00 AM" -f csv -o filtered2.csv

You can save your settings in a config file…

[cf]
counters.txt
[b]
1/5/2010 9:00:00AM
[e]
1/5/2010 10:00:00AM
[f]
csv
[o]
configresults.csv

…and use just the -config switch
RELOG SQL1_20100106.csv -config relogconfig.txt


Now there are plenty of ways to filter your perfmon files. The last way is to save the results in a SQL database.

Inserting into a SQL database

The -o switch specifies where to output the results of a perfmon counter log file. You have the option to export to a flat file, which I’ve been doing so far, or directly to a database. And that’s simple. First, create a DSN pointing to the server and database of your choice. Stan Segers, whose blog I’ve been referencing at My Torn Data Pages, is correct in noting that you need to use the SQL Server driver; like Stan, I wasn’t able to connect if I chose the SQL Native Client. Then, when you use the -o switch, specify SQL: followed by the name of your DSN. In my example the DSN is named Counters.

RELOG SQL1_20100106.csv -cf counters.txt -o SQL:Counters

RELOG will create these 3 tables:

CounterData, CounterDetails, and DisplayToID.

The tables are created the first time you run RELOG against that database, and the utility generates a GUID for each run. Each time you run it, it will insert records into the CounterDetails and CounterData tables, but it won’t insert into or update the DisplayToID table. So to keep a record of each run of the utility, specify a different DisplayString; everything after the exclamation point (!) is the string.

RELOG SQL1_20100106.csv -cf counters.txt -o SQL:Counters!DCICHISQL1

Now that all my performance counters are in the database, I can easily query against them.
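For example, something along these lines pulls the values for a single counter from a single run (the table and column names are the ones RELOG creates; the display string and counter name are placeholders):

SELECT cd.ObjectName,
       cd.CounterName,
       cd.InstanceName,
       d.CounterDateTime,
       d.CounterValue
FROM CounterData AS d
INNER JOIN CounterDetails AS cd ON d.CounterID = cd.CounterID
INNER JOIN DisplayToID AS dt ON d.GUID = dt.GUID
WHERE dt.DisplayString = 'DCICHISQL1'
  AND cd.CounterName = 'Page life expectancy'
ORDER BY d.CounterDateTime;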

I hope I’ve shown you ways to use the RELOG utility. As you can see, it’s pretty powerful.                            

Tuesday, January 12, 2010

Quick access to your common scripts

Everyone has their own library of SQL scripts they run often, and everyone has their own methods for storing them. Here are a few ways you may not be aware of.

My first method is pretty well documented. Have you ever just pressed Ctrl + F1 and seen the results of sp_who? Well, you can add your own procedure calls here. In Management Studio simply open Tools > Options and select Keyboard. Here you’ll see 12 slots with shortcuts of Alt + F1, Ctrl + F1, and Ctrl + 1 through 0. Three of these are already defined: sp_who, sp_help, and sp_lock. You can leave these or replace them with your own, and you can add your own in any available slot. So now you have a way of calling 12 procedures with just a keyboard shortcut.


I ran across the second way by accident, and it works best for small queries. In Visual Studio you have the ability to store snippets of code in the Toolbox. One day I thought I’d try it in SSMS, since it uses a similar IDE, and it worked. Open the Toolbox (Ctrl + Alt + X). You’ll most likely only see the General tab with no controls; that’s fine since there aren’t any. Right click in the Toolbox, select Add Tab, and give it a name; mine is simply Scripts. Next, open or write your script in a query window, then highlight the script and drag it onto the Toolbox. You probably won’t see it yet; right click the Toolbox again and make sure both List All and Show All are selected. And voila! There’s your script! To use it, just right click the snippet, copy the code, and paste it into a new query window.

If you hover the mouse over the snippet you just added you’ll see the entire code. Notice it’s added with “Text:” prepended to the name. You can change this by right clicking the snippet and choosing Rename, then just replace the word “Text” with something more meaningful.


The third way is even better. Most people know that there are templates with SQL scripts you can use. Again, you can add your own and have them available anytime you open SSMS.

If you open the Template Explorer (Ctrl + Alt + T) you’ll see the templates that Microsoft provides. There are two ways to add your own. Right click SQL Server Templates and choose New > Template. You can also choose Folder to categorize your scripts; otherwise the new template will sit in the root. Give your new template or folder a descriptive name. Then right click the template, choose Edit, write your code, and save it. Now just double click the template to open it in a new query window.

The other way to add your script to the Template Explorer is to add it to the folder where the other templates are. On XP or Windows Server 2003 that’s at Documents And Settings\your account\Application Data\Microsoft\Microsoft SQL Server\100 (for SQL 2008) or 90 (for SQL 2005)\Tools\Shell\Templates\Sql. On a Vista or Windows 7 box you can find it in Users\your account\AppData\Roaming\Microsoft\Microsoft SQL Server\100 (for SQL 2008) or 90 (for SQL 2005)\Tools\Shell\Templates\SQL.

One word of warning. If you do save your scripts as templates, remember to back them up occasionally. It would be easy to overlook them if you ever rebuild your computer and need to restore them. But of course you already knew that.
