Thursday, September 17, 2009
Report Service error
I first checked the execution log table in the report database and saw that the user was getting a rendering error. Next I had him rerun the report in the viewer. This time he received a generic error saying there was a problem. But if he clicked the page layout button to render the report in HTML, the report ran fine.
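For reference, this is roughly the query I run against the execution log. The view name and columns are from SSRS 2005/2008, so treat it as a sketch and adjust for your version.

-- Sketch: recent failed executions from the ReportServer catalog (SSRS 2005/2008).
SELECT TOP 20
    ReportID,
    UserName,
    TimeStart,
    TimeEnd,
    Format,   -- render format requested (HTML, PDF, Excel...)
    Status    -- anything other than rsSuccess is worth a closer look
FROM ReportServer.dbo.ExecutionLog
WHERE Status <> 'rsSuccess'
ORDER BY TimeStart DESC;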
My next step was to check the logs on the report server instance. Sure enough I found the error. It was an IO issue; not enough space on the drive. Luckily this was on a testing server and not production.
Now the testing server is not set up the same as a production server. This box is a jumble of programs and databases, running multiple SQL instances for 2000, 2005, and 2008. The report server temp database is on the same drive as the report server database, the error logs, and the dumps. At the time I saw this the drive had about 100 MB free. And while 100 MB isn't a lot of space nowadays, I would have thought it would have been enough to run the report (it turned out to be over 800 pages once we freed up space and reran it).
At this point I'm not sure where that image is being produced, in the report server database or the report server temp database. I've never dug into the report server temp db before. I'd like to hear from anyone who has had similar problems, or who can show me where else I should be looking.
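If anyone wants to compare numbers, this is the sort of query I use to see how big the two report databases and their files are. It assumes the catalog databases still have their default names of ReportServer and ReportServerTempDB, and it only works on the 2005/2008 instances.

-- Sketch: file sizes for the SSRS catalog and temp databases.
-- size is in 8 KB pages, so divide by 128 to get MB.
SELECT DB_NAME(database_id) AS database_name,
    name AS logical_file,
    physical_name,
    size / 128 AS size_mb
FROM sys.master_files
WHERE DB_NAME(database_id) IN ('ReportServer', 'ReportServerTempDB')
ORDER BY database_name, file_id;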
Monday, September 14, 2009
Updates to previous posts
In my posts on virtualization I forgot to mention something. While I did say I was running with the multi-thread option instead of the replay-in-order option, I left the number of threads at the default of 4. I had run it with 64 threads earlier, but with the replay-in-order option. Those replays took well over 5 hours, as opposed to 2 1/2 – 3 hours with multi-threading at 4 threads. But if I compare multi-threading at 4 threads with multi-threading at 64, the times are much better – 3 1/2 hours compared to 2 3/4 hours on the same virtual server. I'll summarize everything in a few weeks when I've completed all my tests.
The second post I need to update regards repeating data in a page header using SSRS. My solution was to use a shared variable. This had a potential problem which I neglected to mention: if you use a shared variable, anyone running the report at the same time can overwrite everyone else's value. I knew this at the time I put the report into production, but I thought the odds of that happening were slim. Later I found I could produce that exact behavior in testing; when I ran the report simultaneously with another tester we did indeed cross values.
I changed the report to use a hashtable instead. The value in each record is unique since it depends on the user id of whoever is running the report. As an example of repeating a customer name on each page, I:
- Added a hidden row to the top of the table control on the report. The table was already there and the grouping was already set on the customer name.
- Added the customer name field from the table’s dataset to the hidden row.
- In the report code section I added a shared hashtable: SHARED htCustomer AS System.Collections.Hashtable = NEW System.Collections.Hashtable(). The hashtable itself still needs to be shared; the per-user key is what keeps simultaneous users from overwriting each other.
- Still in the code section, I added a new function to add a record to the hashtable if it didn't exist or update it if it did. The function takes three parameters: the value from the hidden field, a group name, and the user id. It returns the customer name for the current user:
PUBLIC SHARED FUNCTION SetCustomerName(group AS OBJECT, groupName AS STRING, UserID AS STRING) AS STRING
    DIM key AS STRING = groupName & UserID
    IF NOT group IS NOTHING THEN
        DIM g AS STRING = CTYPE(group, STRING)
        IF NOT htCustomer.ContainsKey(key) THEN
            htCustomer.Add(key, g)
        ELSE
            IF NOT htCustomer(key).Equals(g) THEN
                htCustomer(key) = g
            END IF
        END IF
    END IF
    RETURN htCustomer(key)
END FUNCTION
- Added a page header to the report with a new text box that uses an expression to call the function and pass in the parameters:
=Code.SetCustomerName(ReportItems!txtCustomer.Value, "Customer", User!UserID)
ReportItems!txtCustomer is the hidden field, Customer is my group name, and User!UserID is a built-in global available to the report.
Friday, September 11, 2009
Documenting servers
At this point I'm only documenting the servers, not any processes like database backups and restores. Those are important and I've already done most of those, just not formally. I'll do that also.
I'll start with the server itself. What I'm recording is the server name, IP address, whether the server is physical or virtual (and if virtual, the name/location of the host server), the OS and any SPs or patches installed, the number of CPUs, the amount of memory, the size/location of the page file, and the drives and their total space. If the drives are on a SAN then I should also include a mapping of the SAN.
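Some of the hardware side can be pulled straight from the 2005/2008 instances. A rough sketch (the memory column was renamed in later versions, so check your build):

-- Sketch: basic host information from a SQL 2005/2008 instance.
SELECT cpu_count,
    hyperthread_ratio,
    physical_memory_in_bytes / 1048576 AS physical_memory_mb
FROM sys.dm_os_sys_info;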
For the SQL installation: the edition and any SPs or hotfixes, the instance name (we only run one instance per server), any non-default options that were configured, the accounts used to run the services, configured alerts, jobs and their schedules, and whether components like SSRS are installed, along with info about their configurations if they are.
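Most of the instance-level items can be grabbed with SERVERPROPERTY and sys.configurations; something along these lines is what I have in mind:

-- Sketch: edition, version, patch level, and instance name.
SELECT SERVERPROPERTY('Edition') AS edition,
    SERVERPROPERTY('ProductVersion') AS product_version,
    SERVERPROPERTY('ProductLevel') AS sp_level,      -- RTM, SP1, SP2...
    SERVERPROPERTY('InstanceName') AS instance_name; -- NULL for a default instance

-- Any non-default server options will stand out here.
SELECT name, value_in_use
FROM sys.configurations
ORDER BY name;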
For the databases, the db names (including the logical file names and physical paths), compatibility level, recovery model, auto-growth and size restrictions, and any db options.
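Most of that already sits in the catalog views on the 2005/2008 instances (the 2000 instance would need sysdatabases and sysaltfiles instead); a sketch:

-- Sketch: per-database settings plus file names, paths, and growth settings.
SELECT d.name AS database_name,
    d.compatibility_level,
    d.recovery_model_desc,
    mf.name AS logical_file,
    mf.physical_name,
    mf.size / 128 AS size_mb,
    mf.growth,             -- pages, or a percent when is_percent_growth = 1
    mf.is_percent_growth,
    mf.max_size            -- -1 means unrestricted growth
FROM sys.databases d
JOIN sys.master_files mf ON mf.database_id = d.database_id
ORDER BY d.name, mf.file_id;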
For database backups, the type of backup, where it backs up to, and when it backs up. This info may be duplicated since our backups are handled by jobs.
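The backup history in msdb makes a decent cross-check against what the jobs claim they do; a sketch of the kind of query I mean:

-- Sketch: recent backups per database from the msdb history tables.
SELECT bs.database_name,
    bs.type,                     -- D = full, I = differential, L = log
    bs.backup_finish_date,
    bmf.physical_device_name     -- where the backup was written
FROM msdb.dbo.backupset bs
JOIN msdb.dbo.backupmediafamily bmf ON bmf.media_set_id = bs.media_set_id
WHERE bs.backup_finish_date > DATEADD(DAY, -7, GETDATE())
ORDER BY bs.database_name, bs.backup_finish_date DESC;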
Is there anything else I'm missing?
Wednesday, September 9, 2009
Alas, no booting to a .VHD for me
I’ve been reading quite a bit lately about attaching a virtual drive to a computer in Windows 7 and Windows 2008 Server R2. It would be nice to have the ability to boot directly into a virtual server. But I’m not seeing a way I can achieve it, at least right now.
My hardware would be my current laptop, an HP Pavilion TX 2000 (I wanted the tablet capabilities) running a 64 bit version of Windows 7 Ultimate. It has 2 CPUs, 4 GB of RAM and a 250 GB hard drive, certainly enough to run one virtual machine. I wanted to run a virtual 64 bit Windows 2008 Server R2 as my BI sandbox. Unfortunately, neither Virtual Server nor Virtual PC supports running 64 bit guests, only 32 bit. So I built my VM using VMWare Workstation. But those virtual disks can't be mounted.
So I created another VM, this time a 32 bit version of Windows 2008. I created a fixed disk, installed the OS, and followed the directions on Charlie Calvert's blog. Mounting the .vhd file was simple, as was using bcdboot. When I rebooted both servers showed in the boot launcher. Everything good to go, right?
Wrong. When I tried to boot into the .vhd, I'd get an error message that the computer was not correct because of hardware changes. And the computer manager no longer showed the .vhd drive as mounted. That’s when I went back and reread the fine print.
Windows 7 only supports booting from a .vhd that contains Windows 7 or Windows 2008 Server R2. My VM was plain 2008, not R2. At this point my options are to restore my VMWare VM (luckily backed up) or to try to install R2 directly into a .vhd. But I don't think that will work either; it wouldn't be compatible with Virtual PC.
Well, VMWare Workstation is still a great option. It’s just disappointing I haven’t figured out a way to do this yet.
East Iowa SQL User Group meeting
This may be one of the nuttiest things I've ever done, at least from a technical side. Yesterday I drove to Cedar Rapids to attend the monthly meeting of the I380 Corridor (East Iowa) SQL User Group. That's a 9 hour round trip drive for a 2 hour meeting, not a very efficient use of my time. But I was glad I did.
The East Iowa group is headed by Michelle Ufford, writer of the SQL Fool blog, and Ed Leighton-Dick. Last night's meeting included a tip of the month from Michelle on using the latest version of Adam Machanic's Who Is Active script. This was followed by a Live Meeting with Jessica Moss, who gave an introduction to data warehousing. That's what I came for. Ed taped the meeting and hopefully it will come out clean enough for them to post it. I'd love to see it again.
Jessica discussed just what data warehousing is. She went into the design and life cycle of a dw, the definitions of fact and dimension tables, the types of dimensions, the differences between a star and a snowflake schema, and how SSIS, SSAS, and SSRS fit into all this. She also discussed the different methodologies, Kimball and Inmon. Finally she gave a demo of a dw, from ETL to reporting, using the AdventureWorksDW2008 database. There was only one question, from Michelle, about using procedures to load data as opposed to the Slowly Changing Dimension transformation in SSIS. Jessica recommended the SSIS approach.
Jessica’s presentation was exactly what I was looking for, a beginning point for how to build a data warehouse. There’s a lot of information and discussions out there but most of it assumes some knowledge already. And classes usually focus on the tools, not the methods. I need to learn both if I’m to create a good design.
Everyone at the user group was friendly. It was a small gathering, maybe 15 people total. The only difference between them and the Chicago group is they cut their pizza in wedges rather than squares.
There was one piece of bad news, however. The Iowa group is sponsoring a SQL Saturday on October 17th in Iowa City. But they are far behind where they need to be in registered attendees. They need to hit a certain level before funding from their sponsors kicks in. I think they are running into an issue where there's another SQL Saturday scheduled the same day in Orlando, and others in October. That's a shame, because I really want to go to this one. Jessica is supposed to come and expand on last night's presentation. They'll make the decision Friday if they need to cancel.
Friday, September 4, 2009
Replaying Traces – What I’ve Learned
So I’ve been running these replays for a while now, with mixed results. There are a number of lessons I’ve learned along the way.
- Check what's running inside the workload you're replaying. There may be something that will affect the replay. In my case there was a collection trace started by the third-party software I use for monitoring the production servers for performance. Since this trace had no corresponding stop, it kept running after the rest of my workload was completed, making me think there was a different issue. The only thing I don't understand is why it stopped on its own in half the replays.
- After you load the workload, you can apply a filter and reload the workload. This is how I eliminated any calls to sp_trace_setstatus.
- Restore the master database to eliminate any replay errors due to mismatched database IDs. This comes directly from BOL. I ignored it at first because I wasn't seeing any errors. But later, especially after I started on the virtual servers, I saw more and more errors where SQL wasn't running against the right database. So when I started the last round of replays I restored the master db first (roughly the steps sketched after this list).
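For anyone who hasn't restored master before, this is the rough shape of it on the replay server; the backup path is a placeholder, and the instance has to be started in single-user mode first.

-- Sketch: restoring master on the replay server (hypothetical backup path).
-- Start the instance in single-user mode first (the -m startup option);
-- it shuts itself down automatically once the restore completes.
RESTORE DATABASE master
FROM DISK = N'D:\Backups\master_from_production.bak'
WITH REPLACE;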
I’m in the process of replaying all scenarios again so the data is consistent. So far I’ve completed the 4 physical tests. Coming next are the 3 32-bit virtual, 2 64-bit virtual, and 1 final with VMWare’s 8 CPU server.