
Monday, October 4, 2010

What I learned today: No backup or restore with the DAC

For everything I teach, I learn something in return.

I began to train a new junior-level DBA today. He's someone who already works with us as a developer, but he's shown an interest in learning how to be a DBA, so I'm beginning to teach him what I know. We started by discussing his daily tasks, then moved on to backup and recovery, topics I gave him as an assignment last week. For this week's homework I told him to read up on the Dedicated Administrator Connection (DAC) and to get familiar with SQLCMD. I mentioned how the DAC is useful in situations where the server is under strain and you need a way to connect to troubleshoot the problem. Then I said you can use it for backups and recoveries.
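
Connecting through the DAC with SQLCMD looks something like this (a minimal sketch; the server name is a placeholder, and the -A switch is what requests the dedicated admin connection):

sqlcmd -S MyServer -A -d master

In SSMS you can do the same thing by prefixing the server name with ADMIN: when you connect.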

I should have quit when I was ahead.

As my new colleague correctly pointed out, you can’t use the DAC for backups or recovery. If you try, you’ll get the following error:

Msg 3637, Level 16, State 3, Line 1
A parallel operation cannot be started from a DAC connection.
Msg 3013, Level 16, State 1, Line 1
BACKUP DATABASE is terminating abnormally.

And BOL can’t make it any clearer:

SQL Server prohibits running parallel queries or commands with the DAC. For example, error 3637 is generated if you execute either of the following statements with the DAC:

  • RESTORE
  • BACKUP

(Ref: http://technet.microsoft.com/en-us/library/ms189595.aspx)

For some reason I'd forgotten that. It was something I had read a long time ago and forgotten, since I don't use the DAC very often: never in production, other than to confirm that it was set up through the firewall. I had assumed that backing up a database in an emergency would be one of the reasons I'd use the DAC. Luckily I was reminded before a situation arose where I relied on it.

So I actually learned two things today. I was reminded about the restriction of the DAC, and I learned to be more careful about what I tell others when it comes to SQL, not to assume facts and to do my own homework. I hope I remember both lessons equally. 


Monday, August 16, 2010

What I learned today – CHECKDB and database snapshots

A few days ago I was sitting in my local coffee shop, getting a little light reading done. At work we're about to move from the Standard Edition of SQL 2005 to the Enterprise Edition, and I wanted to learn more about some features of database snapshots. Snapshots are basically just a read-only copy of a database taken at a specific point in time. Anyway, I was reading Chapter 3 – Databases and Database Files in the excellent book Microsoft SQL Server 2008 Internals. The discussion turned to space allocated for the snapshots. That's when I read that DBCC CHECKDB actually runs against a snapshot of the database.

That caught me by surprise. Until then I thought that CHECKDB ran against the database itself.

Now I thought I was doing OK with protecting our databases. I faithfully run CHECKDB weekly against all databases and check for any errors. I set the CHECKSUM option and verify backups. I occasionally restore databases on a standby server to make sure there are no problems with the backups. And I monitor database and drive space pretty closely.

But I was flying blind. I have no idea how near I came to having CHECKDB fail because of a lack of space. So I started digging a little deeper into CHECKDB. And I was able to learn a little bit more of how CHECKDB works with snapshots.

For anything concerning CHECKDB, your first resource should be Paul Randal (Blog | Twitter). Read his excellent series on CHECKDB From Every Angle to get a good understanding of what’s going on. And he also wrote Chapter 11 – DBCC Internals in Microsoft SQL Server 2008 Internals, a great book you should read anyway.   

CHECKDB needs a consistent view of the database to work with. In SQL 7 it acquired table locks. In SQL 2000 it analyzed the transaction log. Beginning with SQL 2005 it creates a hidden snapshot on the same volume as the database – you have no control over where it's placed. If you're running CHECKDB at the same time that your server has a heavy workload, the snapshot can run out of space and you'll get an error showing that CHECKDB didn't complete. If that happens you have a few options:

  • Create your own database snapshot on a drive that has enough space and run CHECKDB against that snapshot, as sketched below. CHECKDB will know that it's running against a snapshot and won't create another one.
  • Start the SQL service in single user mode.
  • Put the database in read-only mode. 
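
Here's a minimal sketch of that first option. The database and file names are placeholders, and the logical NAME has to match a data file in the source database:

USE master;
GO

CREATE DATABASE MyDB_Snapshot ON
(NAME = MyDB_Data, FILENAME = 'E:\Snapshots\MyDB_Data.ss')
AS SNAPSHOT OF MyDB;
GO

-- CHECKDB sees it's running against a snapshot and uses it directly
DBCC CHECKDB ('MyDB_Snapshot') WITH NO_INFOMSGS;
GO

-- You created this snapshot, so you have to drop it yourself
DROP DATABASE MyDB_Snapshot;
GO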

Once CHECKDB has finished it will delete the snapshot if it created one.

But there are a few gotchas with CHECKDB creating the snapshot. First, the snapshot has to be created on an NTFS drive. Second, you can't use the WITH TABLOCK hint. And third, you can't run it against tempdb; but why would you want to?

I’m sure I’m over simplifying. I still need to finish Chapter 11.

One other thing I’m wondering. If CHECKDB can create a snapshot database in SQL 2005 regardless of the edition, can the Standard Edition create one with a hidden flag?

Tuesday, July 27, 2010

Creating your own default trace

Are you aware of the default trace that SQL Server runs when it starts? Did you know it’s configurable?

No, you can’t edit it. But you can still run a trace on startup that captures only the events that you want.

The first thing you should do is to see if the default trace is enabled. Simple enough, just run sp_configure with advanced options and look for 'default trace enabled'. Or query sys.configurations for configuration_id 1568.

SELECT * 
FROM sys.configurations
WHERE configuration_id = 1568
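
If you go the sp_configure route instead, remember that 'default trace enabled' is an advanced option, so you have to show those first. A quick sketch:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'default trace enabled';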


On my laptop it is.






There are a few ways to see what the trace is doing. First find the trace id. It should be 1, but you can make sure…



SELECT * FROM sys.traces

SELECT * FROM ::fn_trace_getinfo(0)


…then query fn_trace_geteventinfo(1) to see what events the default trace is looking at. You can see the descriptions of the events by joining to some system views.


SELECT tc.name AS [Category],
dt.eventid, te.name AS [Event Name],
dt.columnid, tcol.name AS [Column Name]
FROM ::fn_trace_geteventinfo(1) AS dt
INNER JOIN sys.trace_events AS te ON dt.eventid = te.trace_event_id
INNER JOIN sys.trace_columns AS tcol ON dt.columnid = tcol.trace_column_id
INNER JOIN sys.trace_categories AS tc ON te.category_id = tc.category_id





But what if, after reviewing those events, you decide to add to the list? Or maybe you want to skip some events? Remember, you can't alter the default trace.



First, disable the default trace using sp_configure. The change doesn't require a restart, and it also closes the trace.



sp_configure 'default trace enabled', 0;
GO

RECONFIGURE;
GO


Next create a stored procedure in the master database that will create and start your trace. Use sp_trace_create to generate a trace id, use sp_trace_setevent to select the events you want to capture, and finally start the trace with sp_trace_setstatus. Read more about creating traces here. If you aren’t familiar with creating traces via T-SQL use SQL Profiler to capture the events you need and set whatever properties you need then extract the trace properties to a script file.


Finally, mark your new procedure to start when SQL does by using sp_procoption. This procedure takes three parameters: the name of your procedure, the option 'startup', and a true or false value. 'startup' is the only option available; true sets the procedure to autostart, false turns it off.


In my demo script below I’ve created a trace that will track what a certain user does in the database as well as when they log in or out. 


USE [master]
GO

CREATE PROCEDURE [dbo].[MyDefaultTrace]
AS

begin
declare @rc int
declare @TraceID int
declare @maxfilesize bigint
declare @filecount int
set @maxfilesize = 100
set @filecount = 5

exec @rc = sp_trace_create @TraceID output, 2,
N'C:\Perflogs\MyDefaultTrace',
@maxfilesize, NULL, @filecount

declare @on bit
set @on = 1
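-- Audit Login (event 14)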
exec sp_trace_setevent @TraceID, 14, 7, @on
exec sp_trace_setevent @TraceID, 14, 23, @on
exec sp_trace_setevent @TraceID, 14, 8, @on
exec sp_trace_setevent @TraceID, 14, 64, @on
exec sp_trace_setevent @TraceID, 14, 1, @on
exec sp_trace_setevent @TraceID, 14, 9, @on
exec sp_trace_setevent @TraceID, 14, 21, @on
exec sp_trace_setevent @TraceID, 14, 25, @on
exec sp_trace_setevent @TraceID, 14, 41, @on
exec sp_trace_setevent @TraceID, 14, 49, @on
exec sp_trace_setevent @TraceID, 14, 57, @on
exec sp_trace_setevent @TraceID, 14, 2, @on
exec sp_trace_setevent @TraceID, 14, 6, @on
exec sp_trace_setevent @TraceID, 14, 10, @on
exec sp_trace_setevent @TraceID, 14, 14, @on
exec sp_trace_setevent @TraceID, 14, 26, @on
exec sp_trace_setevent @TraceID, 14, 66, @on
exec sp_trace_setevent @TraceID, 14, 3, @on
exec sp_trace_setevent @TraceID, 14, 11, @on
exec sp_trace_setevent @TraceID, 14, 35, @on
exec sp_trace_setevent @TraceID, 14, 51, @on
exec sp_trace_setevent @TraceID, 14, 12, @on
exec sp_trace_setevent @TraceID, 14, 60, @on
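-- Audit Logout (event 15)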
exec sp_trace_setevent @TraceID, 15, 7, @on
exec sp_trace_setevent @TraceID, 15, 15, @on
exec sp_trace_setevent @TraceID, 15, 23, @on
exec sp_trace_setevent @TraceID, 15, 8, @on
exec sp_trace_setevent @TraceID, 15, 16, @on
exec sp_trace_setevent @TraceID, 15, 64, @on
exec sp_trace_setevent @TraceID, 15, 9, @on
exec sp_trace_setevent @TraceID, 15, 17, @on
exec sp_trace_setevent @TraceID, 15, 41, @on
exec sp_trace_setevent @TraceID, 15, 49, @on
exec sp_trace_setevent @TraceID, 15, 57, @on
exec sp_trace_setevent @TraceID, 15, 6, @on
exec sp_trace_setevent @TraceID, 15, 10, @on
exec sp_trace_setevent @TraceID, 15, 14, @on
exec sp_trace_setevent @TraceID, 15, 18, @on
exec sp_trace_setevent @TraceID, 15, 26, @on
exec sp_trace_setevent @TraceID, 15, 66, @on
exec sp_trace_setevent @TraceID, 15, 3, @on
exec sp_trace_setevent @TraceID, 15, 11, @on
exec sp_trace_setevent @TraceID, 15, 35, @on
exec sp_trace_setevent @TraceID, 15, 51, @on
exec sp_trace_setevent @TraceID, 15, 12, @on
exec sp_trace_setevent @TraceID, 15, 60, @on
exec sp_trace_setevent @TraceID, 15, 13, @on
exec sp_trace_setevent @TraceID, 15, 21, @on
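-- SP:Completed (event 43)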
exec sp_trace_setevent @TraceID, 43, 7, @on
exec sp_trace_setevent @TraceID, 43, 15, @on
exec sp_trace_setevent @TraceID, 43, 8, @on
exec sp_trace_setevent @TraceID, 43, 64, @on
exec sp_trace_setevent @TraceID, 43, 1, @on
exec sp_trace_setevent @TraceID, 43, 9, @on
exec sp_trace_setevent @TraceID, 43, 41, @on
exec sp_trace_setevent @TraceID, 43, 2, @on
exec sp_trace_setevent @TraceID, 43, 10, @on
exec sp_trace_setevent @TraceID, 43, 3, @on
exec sp_trace_setevent @TraceID, 43, 11, @on
exec sp_trace_setevent @TraceID, 43, 35, @on
exec sp_trace_setevent @TraceID, 43, 51, @on
exec sp_trace_setevent @TraceID, 43, 12, @on
exec sp_trace_setevent @TraceID, 43, 5, @on
exec sp_trace_setevent @TraceID, 43, 13, @on
exec sp_trace_setevent @TraceID, 43, 6, @on
exec sp_trace_setevent @TraceID, 43, 14, @on
exec sp_trace_setevent @TraceID, 43, 62, @on
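-- SP:StmtCompleted (event 45)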
exec sp_trace_setevent @TraceID, 45, 7, @on
exec sp_trace_setevent @TraceID, 45, 8, @on
exec sp_trace_setevent @TraceID, 45, 64, @on
exec sp_trace_setevent @TraceID, 45, 1, @on
exec sp_trace_setevent @TraceID, 45, 41, @on
exec sp_trace_setevent @TraceID, 45, 10, @on
exec sp_trace_setevent @TraceID, 45, 3, @on
exec sp_trace_setevent @TraceID, 45, 11, @on
exec sp_trace_setevent @TraceID, 45, 35, @on
exec sp_trace_setevent @TraceID, 45, 51, @on
exec sp_trace_setevent @TraceID, 45, 12, @on
exec sp_trace_setevent @TraceID, 45, 5, @on
exec sp_trace_setevent @TraceID, 45, 13, @on
exec sp_trace_setevent @TraceID, 45, 6, @on
exec sp_trace_setevent @TraceID, 45, 14, @on
exec sp_trace_setevent @TraceID, 45, 62, @on
exec sp_trace_setevent @TraceID, 45, 15, @on
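-- SQL:StmtCompleted (event 41)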
exec sp_trace_setevent @TraceID, 41, 7, @on
exec sp_trace_setevent @TraceID, 41, 15, @on
exec sp_trace_setevent @TraceID, 41, 8, @on
exec sp_trace_setevent @TraceID, 41, 64, @on
exec sp_trace_setevent @TraceID, 41, 1, @on
exec sp_trace_setevent @TraceID, 41, 41, @on
exec sp_trace_setevent @TraceID, 41, 10, @on
exec sp_trace_setevent @TraceID, 41, 3, @on
exec sp_trace_setevent @TraceID, 41, 11, @on
exec sp_trace_setevent @TraceID, 41, 35, @on
exec sp_trace_setevent @TraceID, 41, 51, @on
exec sp_trace_setevent @TraceID, 41, 12, @on
exec sp_trace_setevent @TraceID, 41, 5, @on
exec sp_trace_setevent @TraceID, 41, 13, @on
exec sp_trace_setevent @TraceID, 41, 6, @on
exec sp_trace_setevent @TraceID, 41, 14, @on


-- Set the Filters
declare @intfilter int
declare @bigintfilter bigint

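-- Column 11 = LoginName, comparison operator 6 = LIKE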
exec sp_trace_setfilter @TraceID, 11, 0, 6, N'BadUser'

exec sp_trace_setstatus @TraceID, 1

end

GO

EXEC sp_procoption N'MyDefaultTrace', 'startup', '1'

GO



Now when I check to see what traces are running, trace id 1 is the trace created in my procedure. The trace start is also written to the SQL error log when the master database starts.






As always, please test this on a server you can play with before putting into production.

Wednesday, July 21, 2010

Winning the DBA Open

Last weekend I, like millions of other golf fans around the world, watched as Louis Oosthuizen, an unknown (at least to me) golfer out of South Africa, won The Open Championship at St Andrews by an astounding 7 strokes. He showed confidence in his abilities throughout the tournament, never losing his composure playing in some of the toughest conditions on any course in the world. Congratulations, Louis!

 

As I watched the game, I began to think that being a DBA and being a champion golfer weren't that different. Both need to be confident in their abilities, both need a solid game plan, and both have to be able to execute their craft under pressure, prepared for any eventuality.

Preparing for the tournament

A good golfer gets to the course early, days before the tournament begins. They play the course, talk to locals to learn any particular quirks the course may have, and practice hundreds of shots of different lengths and difficulties.


Good DBAs do the same. They find out about the host server, talking to the systems team or SAN administrator: What kind of disk subsystem is there? What's the memory and CPU like? Does the server meet the requirements for SQL?

You drive for show…

The golfer starts his round on the tee. He'll determine the club to use based on experience and conditions: Where's the wind coming from? How long is the hole? Where are the danger spots? A well-hit drive will end up far down the fairway, perfectly placed for his next shot.


The DBA does the same when he sets up the server. His experience comes into play knowing what has to be done, and server conditions will help him to correctly place the system databases and data and log files for other databases. He’ll know what options to set and what to set them to. His server will be set up to optimal conditions, perfectly placed for hosting the company data.

…the approach shot…

Now the golfer will try to get his ball onto the green. He'll again choose the right club, line up the shot, take a practice swing or two, then hit his best shot. He'll know how his ball will fly because he has hit the same shot hundreds of times before, so he can compare against past performance.


The DBA's approach shot is to benchmark his system. He should know how the server will perform during peak and non-peak times.

…getting out of a hazard…

Sometimes, though, the golfer’s approach shot is mishit. Or maybe a gust of wind holds up the shot and it lands in a greenside bunker. A good golfer knows how to get out of those situations and save par.


Similarly, a DBA must know how to get out of disaster scenarios. He knows he has set up and is maintaining a proper backup strategy, and he has practiced recovering databases until he knows the procedure inside and out. He knows the different options available for DBCC CHECKDB.

…and you putt for dough

This is where the good golfers shine. They don't fade when standing over a three-foot putt with the tournament on the line. They know they've made the same putt over and over before, and they've practiced for just this occasion. They read the putt's line, judge the speed, and with millions of eyes watching they calmly knock it in, center cup.


The DBA is also good under pressure. They've written and checked over their scripts, and they're confident the scripts will perform as written. They don't wilt when the CIO is standing over their shoulder asking why performance is so bad on the server.

The caddy…

All good golfers know the value of their caddy. A caddy's job is to help his golfer, pointing out the course yardage, offering the right club, taking some of the pressure off the golfer. Sometimes the caddy is also a teacher, pointing out flaws in the golfer's swing.


A DBA also needs a caddy. Their caddy can be a mentor or coworker at work. They can use social networks for advice on specific problems or as a sounding board.      

Preparing for the next tournament

The golfer is always practicing. He's at the driving range on his day off, constantly correcting swing flaws and working on new shots.

Likewise the DBA is constantly learning. They attend their local PASS chapter meetings or go to national conventions. They’re constantly reading blogs and whitepapers, learning new skills to help them manage their servers or help their company.

The moral? Like in golf, or any endeavor, the secret of being a good DBA is to be prepared. Learn your craft. Ask your caddy when you have questions. Practice when you’re able. Keep learning. And you too can win the DBA Open.

Monday, July 19, 2010

What I learned today – importing mixed data types from Excel

Recently a coworker came to me with an issue I'd never run into before. He's importing a large number of Excel spreadsheets into a SQL 2005 instance. One of the columns contains mixed data: 1, 2, 3, x, y, z, for example. What he found is that if the first 20 or so rows are strictly numeric, the connection treats the whole column as numeric, and alphanumeric values are returned as NULLs. He's querying using OPENROWSET, but it looks like this happens in SSIS as well.

What his research found was that there are a few registry keys that come into play, plus another property he needed to set in his connection string. Adding "IMEX=1" allows importing of mixed data types based on the registry setting "ImportMixedTypes". The default value is "Text". One drawback to Text is that you're limited to 255 characters. The only other available value is "Majority Type"; when you use Majority Type, some of the values are scanned and the data type with the most values is used.
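
As a sketch of what that connection string looks like in an OPENROWSET call (the file path and sheet name are made-up placeholders, and ad hoc distributed queries have to be enabled on the server):

SELECT *
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
'Excel 8.0;Database=C:\Data\MixedData.xls;IMEX=1',
'SELECT * FROM [Sheet1$]');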

The number of rows scanned is based on the "TypeGuessRows" registry setting. By default it's set to 8, but valid values are 0 to 16. If you set it to 0, all rows are sampled. Setting it to a value between 1 and 16 samples that many rows.

Both keys can be found under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Jet\4.0\Engines\Excel\.

Here are a few articles that explain the issue better.

http://jingyang.spaces.live.com/blog/cns!CC21A118B1B5250!318.entry?sa=781031802

http://www.instantpages.ltd.uk/ADODB_WP.htm

UPDATED 7/20/2010: You can read more about the actual issue here.

Wednesday, July 14, 2010

Changing the name of your SQL server

My company recently changed its standard naming conventions for computers, so yesterday I had to rename my workstation. Usually this isn't a big deal, except that I'm running locally a default instance of SQL 2005 and a named instance of SQL 2008. Again, not a big deal since this is just my local playground. But I wanted to sync up the names.

Let's say that my laptop was named “CHICAGO”. That makes the default instance also “CHICAGO”, and my named instance “CHICAGO\KATMAI”. Now my laptop name changed to “NEWCHICAGO”. My SQL instances stay as “CHICAGO” and “CHICAGO\KATMAI”. How do you change them to match the new computer name?

Couldn't be simpler; just execute two procedures. For the default instance:

USE master;
GO

EXEC sp_dropserver 'CHICAGO';
GO

EXEC sp_addserver 'NEWCHICAGO', local;
GO

It's the same for a named instance; just add the instance name:

USE master;
GO

EXEC sp_dropserver 'CHICAGO\KATMAI';
GO

EXEC sp_addserver 'NEWCHICAGO\KATMAI', local;
GO

Then just restart the SQL service and you should see the name change.
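
A quick way to verify the change (both are standard: @@SERVERNAME returns the value set by sp_addserver, while SERVERPROPERTY('ServerName') always reflects the current Windows computer name):

SELECT @@SERVERNAME AS ServerName,
SERVERPROPERTY('ServerName') AS WindowsName;

If the two match after the restart, the rename took.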


Note, though, that this only changes the server name, not the instance name.  I haven’t had an opportunity to try that yet.

Tuesday, July 13, 2010

Another use for PowerShell and SQL

A few months ago I wrote a PowerShell script that looped through servers and databases to save their properties inside an Excel spreadsheet (PowerShell and SQL). I thought it was time to add another one.

One of the daily tasks I set up for myself as a SQL DBA is to go through the error logs for anything that may be an indicator of a problem. This is especially true on Sunday afternoons or Monday mornings, after my weekly DBCC CHECKDB job runs. I’ve configured the jobs to email me the results, but since I’m monitoring 10 instances they can be time consuming to read through. I usually just cut to the bottom, where DBCC reports the number of errors it finds. If the number of errors is 0, all is well and good. Otherwise, of course, further investigation is needed.

This seemed like a natural for a new PowerShell script. One of the great strengths of PowerShell is its ability to connect to as many remote servers as needed. And since I knew that the DBCC CHECKDB summary is written to the error log, all I need to do is read the logs. I had read this post by Buck Woody (blog | twitter) about doing almost exactly what I wanted to do. I merged his script with my earlier script and made a few other changes. My final code looks like this:

[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | out-null 

#Get SQL account information
$credential = Get-Credential
$loginName = $credential.UserName -replace("\\","")
$password = $credential.Password

#Get variables
$logDate = read-host("Enter log date")
$searchPhrase = "*"
$searchPhrase += read-host("Enter search phrase")
$searchPhrase += "*"
$srvlist = @(get-content "C:\Power\SQL_Servers.txt")

#Loop through all instances
foreach ($instance in $srvlist)
{
$instance
$srv = New-Object ('Microsoft.SqlServer.Management.Smo.Server') $instance
$srv.ConnectionContext.LoginSecure = $false
$srv.ConnectionContext.set_Login($loginName)
$srv.ConnectionContext.set_SecurePassword($password)
$srv.ReadErrorLog(0) | where {$_.LogDate -gt $logDate -and $_.Text -like $searchPhrase}
}



Since I don't always access the servers from a computer on the domain, I added login credentials. You can remove these if you want to use Windows authentication. After that the script reads a date and a search phrase that I type in. This comes in handy when I want to change what I'm searching for. For instance, I can enter the phrase “DBCC CHECKDB” and the script will return the summaries based on the date I entered. Or I can search for “BACKUP” or “ERROR”, whatever my heart desires.

Monday, July 5, 2010

My favorite SQL features

Recently I ran across a contest sponsored by Microsoft SQL Server MVP Ken Simmons (Blog | Twitter). One of the ways to enter the contest was to create a blog post listing favorite SQL features. But how do you select a favorite? That's like asking someone to choose a favorite song. Like most things in the SQL world, the correct answer is “it depends”, usually on what I'm trying to do at the time.

So here’s my top 10 favorite SQL features and tips, in the order that they come to mind. I’m ignoring features I’m not currently using, like backup compression and data encryption, and limiting myself to those I use the most often.

  1. Integration with PowerShell. This is a great feature to administer multiple SQL instances easily. For those who haven’t been on the internet in the last few years, PowerShell is Microsoft’s new scripting language based on the .NET framework. And starting with SQL 2008, PowerShell became integrated with SQL. What does this mean to you as a DBA? Well, imagine yourself as the new head DBA for your company, and you’re now in charge of 100 SQL instances. Your boss tells you that your first task is to standardize all instances. How can you quickly view the properties of each server? With PowerShell and remoting, you can easily query each instance and save the results to an Excel spreadsheet for easy analysis. I gave an example of how I do it in an earlier blog post here.      
  2. Dynamic Management Views and Functions (DMV). Introduced with SQL 2005, DMVs are a great way to see how your server is performing. I'm not going to go into them in depth here since they've been written about by many others much better than I can, but there's a quick taste after this list. If you're not familiar with them, do yourself a favor and start reading about them. They'll make your life much easier.
  3. Storing queries in the toolbox. Like all DBAs, I’ve got a number of queries that I run over and over again. There’s numerous ways to store these for easy access, but my favorite for short one-line queries is to store them on the toolbox. Simply open the toolbox (Ctrl + Alt + x) and drag your query over. To recall it, open the toolbox, copy the desired query, and paste it into a new query window.
  4. Policies. Introduced in SQL 2008, Policy Based Management allows DBAs to manage multiple servers from a central server. Policies can standardize server configurations and check them for compliance.   
  5. Report Services. Built-in reporting capabilities for the same price as the database engine. ‘nuff said.
  6. IntelliSense. This feature gives the same statement completion that you see in Visual Studio, not surprising since SQL Server Management Studio uses the same IDE as Visual Studio. Unfortunately it doesn't work against 2005 servers or earlier. It did in the earlier betas but was removed in the final release. Hopefully Microsoft will put it back.
  7. Object Explorer Details. You can now see details of different database objects using the details view (F7). It's also configurable; just right-click on the screen and choose what columns you want to see.
  8. Running queries against multiple instances. If you set up server groups under Registered Servers and register your SQL instances in them, you can execute a query against all servers in a group. You can select all groups, but you can't choose to ignore instances inside a group; it's all or nothing. Still, a significant timesaver when you need to run multiple statements against multiple instances.
  9. Setting variable values. One of the nice things in the .NET world is the ability to easily change the value of a variable, i += 1 for instance. Now you can do that in SQL too. Instead of

     SET @i = @i + 1

     you can now write

     SET @i += 1

  10. SQL Community. Not technically part of SQL itself, but I feel it's important nevertheless. I first began learning about SQL on my own and quickly found myself overwhelmed. There was no time to learn at work, and I needed to find the knowledge to do my job. I found the Chicago SQL User Group and found what I needed: experienced DBAs volunteering to talk about a subject they love. ChiSUG led me to PASS, an international group with tons of free resources. I began following blogs written by the same people authoring the articles on PASS and presenting at the group meetings. Reading these blogs led me to Twitter, where I can follow SQL experts in real time and ask for help with the #SQLhelp tag. Finally there are SQL Saturdays, day-long events with SQL experts speaking on various topics. Notice the recurring theme with these resources: they are all free. I'm constantly amazed by the amount of content that SQL DBAs and developers make available for no cost, and the time they put in as volunteers at in-person events. To the entire SQL community, thank you. I wouldn't be where I am today without your help.
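
As promised in item 2, here's a quick taste of the DMVs, a minimal sketch that lists currently executing requests and what they're waiting on:

SELECT session_id, status, command, wait_type, total_elapsed_time
FROM sys.dm_exec_requests
WHERE session_id > 50;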

Tuesday, June 29, 2010

PIVOT compared to CASE for a crosstab result

Yesterday I was playing with a test query for a coworker using the PIVOT keyword when it dawned on me that I had never compared it to using CASE statements. So, here are the requirements.

There is a table called lab_results, with a partial DDL statement below

CREATE TABLE [dbo].[lab_results](
[lab_result_id] [char](12) NOT NULL,
[patient_id] [char](8) NOT NULL,
[test_code] [numeric](4, 0) NOT NULL,
[sample_date] [datetime] NULL,
.............................
.............................
CONSTRAINT [PK_lab_results] PRIMARY KEY CLUSTERED
(
[lab_result_id] ASC
) ON [PRIMARY]

CREATE NONCLUSTERED INDEX [IX_lab_results_3] ON [dbo].[lab_results]
(
[test_code] ASC,
[sample_date] ASC
) ON [Secondary]


This table has approximately 18 million rows, and you need to get a count of test codes during each year. Anything prior to 1/1/2006 would be counted together. The data would look something like this


[Screenshot: counts of each test code broken out by year]


Prior to SQL 2005, I would use a series of CASE statements to perform a count of each test code


SELECT test_code
, SUM(CASE WHEN DATEPART(yy, sample_date) = 2010 THEN 1 END) AS [2010]
, SUM(CASE WHEN DATEPART(yy, sample_date) = 2009 THEN 1 END) AS [2009]
, SUM(CASE WHEN DATEPART(yy, sample_date) = 2008 THEN 1 END) AS [2008]
, SUM(CASE WHEN DATEPART(yy, sample_date) = 2007 THEN 1 END) AS [2007]
, SUM(CASE WHEN DATEPART(yy, sample_date) = 2006 THEN 1 END) AS [2006]
, SUM(CASE WHEN DATEPART(yy, sample_date) < 2006 THEN 1 END) AS [Pre2006]
FROM lab_results
GROUP BY test_code


Using the new (to SQL 2005 anyway) PIVOT keyword, I could rewrite the query like this


SELECT test_code, [2010], [2009], [2008], [2007], [2006], [2005] AS Pre2006
FROM
(
SELECT test_code, sample_date
, CASE WHEN DATEPART(yy, sample_date) < 2006 THEN 2005
ELSE DATEPART(yy, sample_date)
END AS LabResultYear
FROM lab_results
) AS tmp
PIVOT (COUNT(sample_date) FOR LabResultYear IN ([2010], [2009], [2008], [2007], [2006], [2005])) AS p;


But I had never looked at the execution plans before yesterday to see how each was run. So I finally did, and here's what I saw. First the plan for the CASE statement…


[Screenshot: execution plan for the CASE query]


…and then the plan for the PIVOT example.


[Screenshot: execution plan for the PIVOT query]






At first glance they seem identical. Both are using the same operators and both are using an index seek on the non-clustered index. However, there are small differences. The CASE statement completes 2 seconds faster than the PIVOT query, 45 seconds to 47. The index seek, while a higher percentage of the whole, has the same operator cost in both plans. The difference shows in the Stream Aggregate operator: in the CASE plan its cost is 10.8, 10% of the whole query, while in the PIVOT plan it's 19.8, 18% of the whole.



One other point about the PIVOT keyword. If you compare my code snippets to the screenshots of the executions, you'll notice that I'm actually running against the master database and using a fully qualified name. If I ran it directly against the database where the table lives it would fail. Why? Because that database, while running on a SQL 2005 server, is still at compatibility level 70 (I know, I know). PIVOT doesn't work unless you're at level 90 or higher.
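
If you're not sure where a database stands, the level is easy to check (the database name below is a placeholder):

SELECT name, compatibility_level
FROM sys.databases
WHERE name = N'YourDatabase';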

Monday, June 28, 2010

Report Server issues

I've had a lot of Report Server issues in the last few months. I thought we had solved them once, only to see them come up again in a slightly different way. But I think I can put it all to bed now.

The first time was a few months ago, right when I left on vacation (isn’t that always the way?). The C drive on one of our reporting servers began to run out of space, filling up 6 – 8 GB in a matter of hours.

Before I get too far ahead of myself, let me describe our layout. We have four regional databases running SQL 2005 Standard Edition, each with data for that region. Two regions are in our corporate headquarters; the other two are in my office. We also have two SQL 2008 Standard Edition servers that run in a virtual environment, one in each datacenter. The reports hosted on each report server are identical except for their data sources; the corporate report server fetches data from the two corporate databases, and the local server does the same. The issue we had was on the server in the corporate datacenter.

Our first step was to check the temp files. We noticed a few small files and one massive, 8 GB file that we couldn’t delete without stopping and restarting the report service. A few hours later the massive file reappeared. All of these temp files were prefixed “RSTemp_”.

We moved the location of the temp files to a larger drive to give us more time before the largest temp file grew too large. By default the location is C:\Program Files\Microsoft SQL Server\MSRS10.<<Instance Name>>\Reporting Services. You can change it by modifying the rsreportserver.config file and adding this entry inside the <Service> tag:

<Service>
  ...
  <FileShareStorageLocation>
    <Path>I:\RSTempFiles</Path>
  </FileShareStorageLocation>
</Service>





That gave us extra space, but it didn't solve the issue. A little more digging showed that we had a long-running report that was also getting bad data. The report's query expected a scalar value but was returning multiples. What we found was that the report execution was timing out while the query stayed active. The user didn't see the timeout and kept trying to re-run the report. Consequently this report was continuously paged out to disk, and it never died. Since we cleaned up the data and fixed the report's query, the problem hasn't reappeared.



The next issue was related to performance. We noticed at certain times of day the report server would get really sluggish. Reports that would complete in 10 seconds would now take 10 minutes. Looking at the server resources showed no increase in CPU, IO, or memory usage. We didn’t see a repeat of the super-big temp file, though there was an increase in the number of temp files.



A little more investigation showed that, when the performance began to degrade, locks held by one process that was clearing sessions were blocking another process that was writing session data. We were finally able to track this to a third-party application we use to monitor performance. It didn't look like the software was configured properly.



In investigating both of these issues we looked at the data in the ExecutionLogStorage table in the ReportServer database. If you’re not familiar with this table, you should check out Robert Bruckner’s excellent post on ExecutionLog2 View - Analyzing and Optimizing Reports.



In the first issue, we were able to identify the report by seeing the pagination values in the additional information data. In the second, we could spot when the slowness would begin by seeing the TimeProcessing values rise.         

Thursday, April 15, 2010

What I learned tonight – tempdb size when SQL starts

I went to the April monthly meeting of the Chicago SQL Server User Group tonight to see Brad McGehee (Blog | Twitter). Brad's in town to present at SQLSaturday #31 in Chicago, and he also gave us a talk tonight on optimizing tempdb performance.

It was a good session because it went over tempdb and best practices. Most of the information I already knew, but it's always good to hear it again as a refresher. And there's usually something I took for granted but was wrong about; tonight was no exception.

I knew that the data and log files for tempdb are recreated when the SQL service starts. But I thought that tempdb got its db options from the model database. For instance, if the model database is set at 2 MB for the data and 1 MB for the log, then tempdb would inherit those same settings. But I was wrong. As Brad mentioned tonight, tempdb starts out at 8 MB for the data, and the log file is 1 MB. On my laptop it's actually 512 KB, but maybe I heard him wrong.
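
If you want to see what your own tempdb files start at, here's a quick check; size is stored in 8 KB pages, hence the math:

SELECT name, size * 8 / 1024.0 AS size_mb, growth
FROM tempdb.sys.database_files;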

Anyway, to properly resize tempdb you should run the ALTER DATABASE command…

USE master;
GO

ALTER DATABASE tempdb
MODIFY FILE ( NAME = 'tempdev', SIZE = 25600KB , FILEGROWTH = 5120KB );
GO

ALTER DATABASE tempdb
MODIFY FILE ( NAME = 'templog', SIZE = 2048KB , FILEGROWTH = 1024KB );
GO





…and tempdb will be correctly resized after every restart. And you've heard that you should create multiple tempdb files depending on the number of CPUs you have? Well, just do that at the same time. Here I'm renaming, moving, resizing, and adding a file to tempdb:



USE master;
GO

ALTER DATABASE tempdb
MODIFY FILE (NAME = 'tempdev', NEWNAME = 'tempdev1', FILENAME = 'C:\SQL2008\DATA\tempdev1.mdf');
GO

ALTER DATABASE tempdb
MODIFY FILE (NAME = 'templog', FILENAME = 'C:\SQL2008\DATA\templog.ldf');
GO

ALTER DATABASE tempdb
MODIFY FILE (NAME = 'tempdev1', SIZE = 25600KB , FILEGROWTH = 5120KB);
GO

ALTER DATABASE tempdb
MODIFY FILE (NAME = 'templog', SIZE = 2048KB , FILEGROWTH = 1024KB);
GO

ALTER DATABASE tempdb
ADD FILE (NAME = 'tempdev2',
FILENAME = 'C:\SQL2008\DATA\tempdev2.ndf',
SIZE = 25600KB , FILEGROWTH = 5120KB);
GO


The usual disclaimers go here. Don't use this script without understanding it, as it can screw up your server. Don't use it in production until you've tested it.



And I’ll leave the discussion of multiple tempdb files for now.




Tuesday, April 13, 2010

SQL error log finds all copies of resource db after moving db

Just a short post about something interesting I saw in the SQL error log a while back.

A while ago, I moved the system databases off the C drives of our servers to another drive on the SAN. That included moving the resource database. You can read Microsoft's recommendations on moving system databases on MSDN.

Anyway, I copied the mssqlsystemresource.mdf and mssqlsystemresource.ldf files to a backup location in case of an emergency. But only one copy of the database was attached.

This interesting entry in the SQL error log was after I restarted the server:

The resource database has been detected in two different locations. Attaching the resource database in the same directory as the master database at 'O:\SQL2005\DATA\master.mdf' instead of the currently attached resource database at 'O:\SQL2005\Data\mssqlsystemresource.mdf'.

So, SQL saw both the attached db and the unattached backup copy when I restarted the service. The only time I saw the message was the first reboot, not any subsequent restarts.

Wednesday, March 24, 2010

Using the Inserted and Deleted tables outside of a trigger

Just a simple little post about a conversation about triggers I had earlier today with a developer, specifically using them to audit update and delete statements against a table. During our talk I mentioned that, starting in SQL 2005, the inserted and deleted tables were available outside of a trigger. He wasn’t aware of that until I mentioned it. So, for anyone else who’s not familiar with the inserted and deleted tables, here’s a little summary.

Briefly, when you insert, update, or delete records in a table, SQL stores those records in virtual tables: inserted holds the new rows for inserts and updates, and deleted holds the old rows for updates and deletes. These tables are available inside table triggers. Microsoft SQL Server Books Online has an article explaining the use of the inserted and deleted tables inside triggers; read that first as a refresher. In our shop we use them for monitoring who is updating and deleting certain critical tables, and we can roll back changes if necessary.

Starting with SQL 2005, you can use the OUTPUT clause to access the inserted and deleted tables outside of DML triggers. You can read the article in Books On Line.  

Here's a simple demo script to run. I added a primary key and a non-nullable column to show what happens in error conditions.

USE tempdb;
GO

SET NOCOUNT ON;
GO

-- Create a testing table
IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[TestOutput]') AND type in (N'U'))
DROP TABLE [dbo].[TestOutput]
GO

CREATE TABLE [dbo].[TestOutput](
[Field1] [int] NOT NULL,
[Field2] [char](1) NOT NULL,
CONSTRAINT [PK_TestOutput] PRIMARY KEY CLUSTERED
(
[Field1] ASC
)ON [PRIMARY]
) ON [PRIMARY]

GO

-- Testing the OUTPUT clause
DECLARE @TestOutput_Audit TABLE
(
Field1 INT,
Field2 CHAR(1),
[Action] CHAR(1),
Action_by VARCHAR(25),
Action_Date DATETIME
)

-- Testing Insert
INSERT INTO TestOutput (Field1, Field2)
OUTPUT inserted.Field1, inserted.Field2, 'I', SUSER_NAME(), GETDATE()
INTO @TestOutput_Audit (Field1, Field2, [Action],Action_by, Action_Date)
VALUES (1, 'A')

INSERT INTO TestOutput (Field1, Field2)
OUTPUT inserted.Field1, inserted.Field2, 'I', SUSER_NAME(), GETDATE()
INTO @TestOutput_Audit (Field1, Field2, [Action],Action_by, Action_Date)
VALUES (1, 'A')

SELECT * FROM TestOutput
SELECT * FROM @TestOutput_Audit

-- Testing Update
UPDATE TestOutput
SET Field2 = 'B'
OUTPUT deleted.Field1, deleted.Field2, 'U', SUSER_NAME(), GETDATE()
INTO @TestOutput_Audit (Field1, Field2, [Action],Action_by, Action_Date)
WHERE Field1 = 1

UPDATE TestOutput
SET Field2 = NULL
OUTPUT deleted.Field1, deleted.Field2, 'U', SUSER_NAME(), GETDATE()
INTO @TestOutput_Audit (Field1, Field2, [Action],Action_by, Action_Date)
WHERE Field1 = 1

SELECT * FROM TestOutput
SELECT * FROM @TestOutput_Audit

-- Testing Delete
DELETE TestOutput
OUTPUT deleted.Field1, deleted.Field2, 'D', SUSER_NAME(), GETDATE()
INTO @TestOutput_Audit (Field1, Field2, [Action],Action_by, Action_Date)
WHERE Field1 = 1

SELECT * FROM TestOutput
SELECT * FROM @TestOutput_Audit



The INSERT statement simply inserts a single record into my table. Afterwards you can see the same record in the table as well as my audit table. The second record wasn’t added to either table because it’s a primary key violation.


Msg 2627, Level 14, State 1, Line 18
Violation of PRIMARY KEY constraint 'PK_TestOutput'. Cannot insert duplicate key in object 'dbo.TestOutput'.
The statement has been terminated.

Field1      Field2
----------- ------
1           A

Field1      Field2 Action Action_by                 Action_Date
----------- ------ ------ ------------------------- -----------------------
1           A      I      PERTELLT3\John            2010-03-24 17:58:03.407





Next I make a simple UPDATE. Now the table shows the update, and the audit shows the value before the update took place. The second update fails because Field2 doesn't allow NULL values; the main record isn't updated, nor is a row added to the audit table.



Msg 515, Level 16, State 2, Line 33
Cannot insert the value NULL into column 'Field2', table 'tempdb.dbo.TestOutput'; column does not allow nulls. UPDATE fails.
The statement has been terminated.

Field1      Field2
----------- ------
1           B

Field1      Field2 Action Action_by                 Action_Date
----------- ------ ------ ------------------------- -----------------------
1           A      I      PERTELLT3\John            2010-03-24 17:58:03.407
1           A      U      PERTELLT3\John            2010-03-24 17:58:03.413





Finally, when I delete the record I can see the record is gone from the main table but stored in the audit table.



Field1      Field2
----------- ------

Field1      Field2 Action Action_by                 Action_Date
----------- ------ ------ ------------------------- -----------------------
1           A      I      PERTELLT3\John            2010-03-24 17:58:03.407
1           A      U      PERTELLT3\John            2010-03-24 17:58:03.413
1           B      D      PERTELLT3\John            2010-03-24 17:58:03.413

Tuesday, March 16, 2010

PowerShell and SQL

When PowerShell was first released a few years back I wasn't too excited. I didn't use scripting very much, mainly for cleaning out old backup files from our archived storage, and I thought that VBScript would suffice for my limited needs. But I started reading up on PowerShell V1, even tinkering around with it a bit. Still, learning PowerShell soon dropped down my “Must Learn” list of technologies. There were just too many other things being introduced.

My mind didn't change much when PowerShell V2 was in beta. Then Microsoft told us how PowerShell was going to be integrated into SQL Server 2008 (among other products), and PowerShell began moving back up my list. I'd read articles on MSDN and blogs by SQL experts, but my thinking never really changed. I figured that PowerShell was just one more way of doing things I already did in other ways.

One of the items on my “To Do” list has always been to document the servers I’m responsible for. I toyed with different methods of gathering the information, from running SSIS packages to a central repository to executing T-SQL queries in a Reporting Service report to manually (and tediously) typing the data into Word docs.

Then I read an article by Edwin Sarmiento on MSSQL Tips. In his article, he gave an example of using SMO inside PowerShell to write the information to an Excel spreadsheet. Finally the light bulb went on; I could use PowerShell to create my Run Book!

Now I'm not going to go into an in-depth explanation of PowerShell, or SMO. There's tons of information available, and others explain it much more clearly than I can; I'm still learning myself. Check out Allen White (blog), Buck Woody (blog | twitter), Aaron Nelson (blog | twitter) and many others that I'm not mentioning here. I just wanted to share the method I'm beginning to use. It's still evolving as I learn new tricks.

The basis of my script was the one used by Edwin Sarmiento in this article. I’ve added a few things of my own to make it more like what I want. For instance I add server information as well as database information. And I put each server onto a different worksheet. 

The first thing my script does is to load the SMO namespace. After that I use the Get-Credential cmdlet to get an account to connect to the servers (I won’t always be connecting with Windows Authentication), create an Excel object and get a list of servers from a text file. After that I make sure that there’s enough worksheets in the workbook for all servers listed. Since a new Excel workbook has 3 worksheets by default I start my loop at 4.

#Get list of servers 
$srvlist = @(get-content ".\SQL_Servers.txt")

#Counter variable for rows
$c = $srvlist.Count 
$intRow = 1

#Verify there's a sheet in the workbook for each server 
for ($i = 4; $i -le $c; $i++) 
{
$Workbook.Sheets.Add() 
}


Then it’s just a matter of looping through the servers and databases, writing the values to the workbooks and applying formatting. My final step is to rename each sheet.



$name = $instance -replace("\\", "-")
$Sheet.Name = $name


Here’s an example of what one of my sheets looks like…



[Screenshot: a worksheet with instance details on top and one row per database below]



…and here’s the whole script. For this example I stopped at 7 columns so it would all show on the screen without scrolling. But my final script will show more information about both the servers and the databases.



#region LoadAssemblies
[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | out-null
#endregion

#Get SQL account information
$credential = Get-Credential
$loginName = $credential.UserName -replace("\\","")
$password = $credential.Password

$Excel = New-Object -ComObject Excel.Application
$Excel.visible = $False
$Workbook = $Excel.Workbooks.Add()

#Get list of servers 
$srvlist = @(get-content ".\SQL_Servers.txt")

#Counter variable for rows
$c = $srvlist.Count 
$intRow = 1

#Verify there's a sheet in the workbook for each server 
for ($i = 4; $i -le $c; $i++) 
{
$Workbook.Sheets.Add() 
}

$a = 1
#Read thru the contents of the SQL_Servers.txt file
foreach ($instance in $srvlist) 
{
$Sheet = $Workbook.Worksheets.Item($a) 
$srv = New-Object ('Microsoft.SqlServer.Management.Smo.Server') $instance
$srv.ConnectionContext.LoginSecure = $false
$srv.ConnectionContext.set_Login($loginName)
$srv.ConnectionContext.set_SecurePassword($password)    
#set headers
$Sheet.Cells.Item($intRow,1) = "INSTANCE NAME:"
$Sheet.Cells.Item($intRow,1).Font.Bold = $True
$Sheet.Cells.Item($intRow,2) = $instance
$Sheet.Cells.Item($intRow,2).Font.Bold = $True
$intRow += 1
$Sheet.Cells.Item($intRow,1) = "VERSION:"
$Sheet.Cells.Item($intRow,2) = "EDITION:"
$Sheet.Cells.Item($intRow,3) = "COLLATION:"
$Sheet.Cells.Item($intRow,4) = "OS VERSION:"
$Sheet.Cells.Item($intRow,5) = "PLATFORM:"
$Sheet.Cells.Item($intRow,6) = "PHYS MEM:"
$Sheet.Cells.Item($intRow,7) = "NUM CPU:"
for ($col = 1; $col -le 7; $col++)
{
$Sheet.Cells.Item($intRow,$col).Font.Bold = $True
$Sheet.Cells.Item($intRow,$col).Interior.ColorIndex = 48
$Sheet.Cells.Item($intRow,$col).Font.ColorIndex = 34
}

#get values
$intRow += 1

$Sheet.Cells.Item($intRow,1) = $srv.Information.Version 
$Sheet.Cells.Item($intRow,2) = $srv.Information.Edition
$Sheet.Cells.Item($intRow,3) = $srv.Information.Collation
$Sheet.Cells.Item($intRow,4) = $srv.Information.OSVersion
$Sheet.Cells.Item($intRow,5) = $srv.Information.Platform
$Sheet.Cells.Item($intRow,6) = $srv.Information.PhysicalMemory
$Sheet.Cells.Item($intRow,7) = $srv.Information.Processors

$intRow += 2

$Sheet.Cells.Item($intRow,1) = "DATABASES"
$Sheet.Cells.Item($intRow,1).Font.Bold = $True

$intRow += 1

$Sheet.Cells.Item($intRow,1) = "DATABASE NAME"
$Sheet.Cells.Item($intRow,2) = "COLLATION"
$Sheet.Cells.Item($intRow,3) = "COMPATIBILITY LEVEL"
$Sheet.Cells.Item($intRow,4) = "AUTOSHRINK"
$Sheet.Cells.Item($intRow,5) = "RECOVERY MODEL"
$Sheet.Cells.Item($intRow,6) = "SIZE (MB)"
$Sheet.Cells.Item($intRow,7) = "SPACE AVAILABLE (MB)"
for ($col = 1; $col -le 7; $col++)
{
$Sheet.Cells.Item($intRow,$col).Font.Bold = $True
$Sheet.Cells.Item($intRow,$col).Interior.ColorIndex = 48
$Sheet.Cells.Item($intRow,$col).Font.ColorIndex = 34
}
$intRow += 1

$dbs = $srv.Databases
ForEach ($db in $dbs) 
{
#Divide the value of SpaceAvailable by 1KB 
$dbSpaceAvailable = $db.SpaceAvailable/1KB 
#Format the results to a number with three decimal places 
$dbSpaceAvailable = "{0:N3}" -f $dbSpaceAvailable
$Sheet.Cells.Item($intRow, 1) = $db.Name
$Sheet.Cells.Item($intRow, 2) = $db.Collation
$Sheet.Cells.Item($intRow, 3) = $db.CompatibilityLevel
#Change the background color of the Cell depending on the AutoShrink property value 
if ($db.AutoShrink -eq "True")
{
$fgColor = 3
}
else
{
$fgColor = 0
}
$Sheet.Cells.Item($intRow, 4) = $db.AutoShrink 
$Sheet.Cells.item($intRow, 4).Interior.ColorIndex = $fgColor
$Sheet.Cells.Item($intRow, 5) = $db.RecoveryModel
$Sheet.Cells.Item($intRow, 6) = "{0:N3}" -f $db.Size
#Change the background color of the Cell depending on the SpaceAvailable property value 
if ($dbSpaceAvailable -lt 1.00)
{
$fgColor = 3
}
else
{
$fgColor = 0
}
$Sheet.Cells.Item($intRow, 7) = $dbSpaceAvailable 
$Sheet.Cells.item($intRow, 7).Interior.ColorIndex = $fgColor

$intRow += 1

}
$Sheet.UsedRange.EntireColumn.AutoFit()
$name = $instance -replace("\\", "-")
$Sheet.Name = $name
$intRow = 1
$a += 1
}
#Save file and close Excel. 
$xlExcel8 = 56
$timeStamp = Get-Date -Format "yyyyMMdd_HH_mm"
$fileName = ".\ServerInfo_" + $timeStamp + ".xls"
$Workbook.SaveAs($fileName, $xlExcel8)
$Excel.Quit()
[System.Runtime.Interopservices.Marshal]::ReleaseComObject($Excel)



I’ve run this script using PowerShell V2 and Excel 2007 and the Excel 2010 beta with no problems. I don’t believe there’s anything specific to V2 in the script. But I do have a warning, though. If, while you run the script, you get impatient and click somewhere inside Excel, you’ll get an error and the script will stop (NOTE TO SELF: Add error handling!). The error is HRESULT 800ac472 and it’s an Excel error, not PowerShell.

Monday, March 1, 2010

Issues installing SQL 2005 SP3

Before I get to the details, I’d like to ask a few questions. I’d be interested in hearing from anyone who’s had problems installing SQL service packs or hotfixes. Has anyone else had the errors I’m mentioning below, or anything similar? What procedures or best practices do you follow when installing patches? 

Last week I finally was able to install SQL 2005 SP3 on our production servers. I’ve installed countless service packs and hotfixes in the past on numerous servers both physical and virtual, development and production. And I’ve never run into the problems I had last week.

In this case, I'm upgrading four production sites from SQL 2005 SP2 with the CU7 hotfix installed, build 3239 (see this article from SQLServerCentral that matches build numbers to service packs). When I installed SP3 on our dev and test servers I received the usual warning about running software. So last week I tried to stop the SQL services on the first two servers. This brought up Error #1: the services were telling me that I lacked the proper permissions to stop the SQL service. Yet my account is an administrative account on the servers, so it has plenty of rights, certainly enough to stop the service.

Now comes Error #2: The system databases on a SAN drive were read only, and again I didn’t have proper permissions to rewrite them! Again, I’m an administrator and should have been able to do this with no problems. Reading the install logs just gave me the same information. And again, this error only occurred on the first 2 servers, not the final 2.

Next I logged off the servers and logged back in using the same account that the SQL services run under. I continued to get the same two errors on Server 1, but Server 2 let me complete the installation. Still no errors at all on Server 3 or Server 4. Now I'm getting concerned.

My next step was to restart Server 1. After the server came back up, I could finally install the service pack. When I finished I checked the Windows and SQL error logs for any clues that would explain why my permissions were being denied. I must admit my Windows admin skills aren't the greatest, so I probably missed the cause. I did notice that the anti-virus program we run had similar issues at the same time, but the anti-virus doesn't hit the database files and I wasn't trying to do anything with the anti-virus service. My guess is that Server 1 and Server 2 somehow couldn't validate my account after I logged in.

Anyway, I thought I was finished. Not quite. My final step was to verify the new version. So I ran SELECT SERVERPROPERTY('ProductLevel') and all four servers returned SP3. The build number now showed as 9.00.4035. All looks good, right? A few days later I ran EXEC sp_server_info.
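
For the record, a check along these lines returns the build, level, and edition in one shot (all three are standard SERVERPROPERTY values):

SELECT SERVERPROPERTY('ProductVersion') AS Build,
SERVERPROPERTY('ProductLevel') AS Level,
SERVERPROPERTY('Edition') AS Edition;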

Finally we get to Error #3. Remember that on Server 2 I had problems at first with permissions, but I thought the SP installed correctly on the second try. Now, looking at the values for DBMS_VER and SYS_SPROC_VERSION, I'm seeing the build still showing 9.00.3239! Why? I'm not sure. I think it's due to one of two reasons.

First, even though the SP3 install looked to be successful the second time, it may have skipped a component. During the reinstall I thought I selected all components, but it's possible I skipped something. NOTE TO SELF: for future installs, document what components you're installing, or at least take a screenshot of what's checked.

The second possibility is Analysis Services. SSAS was installed on Server 2 but the service is disabled. That makes sense since we're not using SSAS currently, but the reason it was installed on just one server is lost to the mists of time. In any case the SP3 install logs show that SSAS was disabled and that it's at build 3239. But by that logic it would only be at 3239 if it was running during the other patches. Again, I can't be sure.

Tuesday, February 16, 2010

How I save table sizes for all databases

Back when I first started monitoring our production servers, I wanted a way to track the size and growth of all tables in our databases. At the time we were really only worried about one legacy database, and all the objects in that database were in the dbo schema. So tracking size growth was pretty easy; I ran the following script against that database weekly and compared results.

declare @TableName    sysname

declare mySpace cursor for
select name from sys.sysobjects where xtype = 'U' 
order by name
open mySpace
fetch next from mySpace into @TableName
while @@fetch_status = 0
begin
insert into PerfDB.dbo.TableSpace(TableName, NumRows, Reserved, DataUsed, IndexUsed, Unused)
exec sp_spaceused @TableName
fetch next from mySpace into @TableName
end
close mySpace 
deallocate mySpace

After a while, we began to create new projects that required new databases. And these databases had objects created in separate schemas. So now, if I wanted to track changes to all tables in all databases, I'd have to run the above code against every database. Doable, certainly, but not exactly what I wanted to maintain; I'd have to add the code to each new database we added to the server. And sp_spaceused only returns the name of the table, not the schema name. I wanted to separate the tables by database as well as schema.

So I played around with two well-known undocumented procedures, sp_MSForEachDB and sp_MSForEachTable. I'd used them before, but I had never nested ForEachTable inside ForEachDB. I still needed a way to separate the schema from the table name, and I wanted to get away from using a cursor.

I found the following script on a SQLServerCentral forum that gave me an idea. Running this script showed that the value of ‘?’ when running sp_MSForEachTable showed the schema name as well as the table name.

declare @command1 varchar(8000)
select @command1 = 
'
IF ''@'' <> ''master'' AND ''@'' <> ''model'' AND ''@'' <> ''msdb'' AND ''@'' <> ''tempdb'' 
begin
use [@] execute sp_MSForEachTable ''print ''''?'''' '' 
end    
'
exec sp_MSforeachdb @command1, '@'

So now I can just run sp_spaceused inside the sp_MSForEachTable block and strip out the schema into a separate column.

Here’s the code I use now

CREATE TABLE  #TableSpace (
DatabaseName    nvarchar(128) NULL,
SchemaName        nvarchar(128) NULL,
TableName        nvarchar(128) NOT NULL,
rows            int NOT NULL,
reserved        varchar(15) NOT NULL,
data            varchar(15) NOT NULL,
index_size        varchar(15) NOT NULL,
unused            varchar(15) NOT NULL)
DECLARE @SQL varchar(8000)
SELECT @SQL = 
'
IF ''@'' <> ''master'' AND ''@'' <> ''model'' AND ''@'' <> ''msdb'' AND ''@'' <> ''tempdb'' 
BEGIN
USE [@] EXECUTE sp_MSForEachTable ''INSERT INTO #TableSpace (TableName, rows, reserved, data, index_size, unused) EXEC sp_spaceused ''''?''''; 
UPDATE #TableSpace SET SchemaName = LEFT(''''?'''', CHARINDEX(''''.'''', ''''?'''', 1) - 2) WHERE SchemaName IS NULL;
UPDATE #TableSpace SET DatabaseName = ''''@'''' WHERE DatabaseName IS NULL; ''
END    
'
EXEC sp_MSforeachdb @SQL, '@'

In my example I'm creating a temporary table, but in practice I use a permanent table. The IF block skips the system databases, and the sp_MSForEachTable call runs sp_spaceused for each table inside each database.

The first UPDATE is where I strip out the schema name. I use the CHARINDEX function to find the position of the dot separator, then the LEFT function to return the characters before it. Since '?' returns the table name as [Schema].[TableName], I also want to skip the brackets, so I move the position one more place to the left to take care of the ] character. I could have done even more string manipulation to take care of the [ character, but it was just as easy to use the REPLACE function on the whole column.

The second UPDATE stores the database name. Running the script on my laptop running SQL 2008 gives me results like these:

[Screenshot: results showing DatabaseName, SchemaName, TableName, and space-used columns]

I’ve run this against SQL 2005 and SQL 2008 servers. I haven’t tested it against SQL 2000. As with any code you find on the internet, please test it and become familiar with it before you put it into production.