pointers, solutions and scripts for the SQL DBA
not intended to replace common sense

8/17/2010

CDC: dbo.fn_all_changes_... frustrating

I've come to find that the generated dbo.fn_all_changes_... function for Change Data Capture (CDC) in SQL 2008 is not my favorite. I think it is poorly implemented.

(In fact, skip to the end of this blog entry for a quick fix to fn_all_changes_dbo_... that solves this problem.)

They are created for you by the sproc sys.sp_cdc_generate_wrapper_function, with the idea of giving you a way to easily select changed data rows between two dates.  The first two parameters of fn_all_changes_... (and its sister, fn_net_changes_...) are a start date and an end date.
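For reference, here's roughly how those wrappers get generated.  This is only a sketch against my test capture instance, dbo_cdctest; the sproc returns the CREATE FUNCTION scripts as a result set rather than creating anything itself, so you execute the scripts yourself:

```sql
-- Generate the wrapper-function scripts for one capture instance.
-- (dbo_cdctest is the capture instance for my test table.)
DECLARE @wrappers TABLE (function_name sysname, create_script nvarchar(max))

INSERT INTO @wrappers
EXEC sys.sp_cdc_generate_wrapper_function @capture_instance = 'dbo_cdctest'

-- Each create_script holds the CREATE FUNCTION statement for
-- dbo.fn_all_changes_dbo_cdctest or dbo.fn_net_changes_dbo_cdctest;
-- executing it creates the wrapper.
DECLARE @script nvarchar(max)
SELECT @script = create_script FROM @wrappers WHERE function_name LIKE 'fn_all_changes%'
EXEC sp_executesql @script
```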

This works for my testing table called dbo.cdctest:
select * from dbo.fn_all_changes_dbo_cdctest (null,null, 'all')

This doesn't:
select * from dbo.fn_all_changes_dbo_cdctest ('2010-01-01',null, 'all')

It returns:

Msg 313, Level 16, State 3, Line 1
An insufficient number of arguments were supplied for the procedure or function cdc.fn_cdc_get_all_changes_ ... .
The statement has been terminated.

Why?  Because when looking for LSNs at or after 2010-01-01, fn_all_changes_... chokes.

(Yeah, there are other reasons the error can happen.  It's a dummy response to an inherent limitation: TVFs can't return errors.  Other problems could produce the above error message, but this blog entry deals specifically with the problem of dates before CDC was enabled.)
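A quick illustration of that TVF limitation, in case you've never run into it: SQL Server rejects RAISERROR inside a function outright, so the wrapper has nothing better to do than throw a misleading error.

```sql
-- This won't even compile: RAISERROR is not allowed inside a function.
-- (fn_tvf_error_demo is a hypothetical name, just for illustration.)
CREATE FUNCTION dbo.fn_tvf_error_demo ()
RETURNS @t TABLE (i int)
AS
BEGIN
    RAISERROR('not allowed here', 16, 1)  -- fails with Msg 443, "invalid use of a side-effecting operator"
    RETURN
END
```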

On my server, the date 2010-01-01 occurs before CDC was activated, before any LSNs had been created for this table.  The error occurs if you provide a date from before CDC was started.

Which means you have to give it a date/time that is between the time you set up CDC and the time of the first changed data events.  A very specific date/time, too.

Even though I didn't set up CDC until this afternoon, this still returns a valid LSN:

DECLARE @lsn binary(10)
SELECT @lsn = sys.fn_cdc_map_time_to_lsn ('smallest greater than or equal','01/01/2010')

But if I run this, I can find out exactly when I set up CDC on this test table, dbo.cdctest.  Note the increments of time in the first parameter.

select * from dbo.fn_all_changes_dbo_cdctest ('2010-08-17 16:46:24.6',null, 'all')
select * from dbo.fn_all_changes_dbo_cdctest ('2010-08-17 16:46:24.7',null, 'all')
select * from dbo.fn_all_changes_dbo_cdctest ('2010-08-17 16:46:24.8',null, 'all')
select * from dbo.fn_all_changes_dbo_cdctest ('2010-08-17 16:46:24.9',null, 'all')



Here's what I get:

Msg 313, Level 16, State 3, Line 1
An insufficient number of arguments were supplied for the procedure or function cdc.fn_cdc_get_all_changes_ ... .
The statement has been terminated.
Msg 313, Level 16, State 3, Line 2
An insufficient number of arguments were supplied for the procedure or function cdc.fn_cdc_get_all_changes_ ... .
The statement has been terminated.

(69 row(s) affected)

(69 row(s) affected)


Research that "An insufficient number of arguments were supplied for the procedure or function cdc.fn_cdc_get_all_changes_" error and you'll find that it is a dummy response for when a valid LSN can't be found.

That's inconvenient.
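Incidentally, rather than probing with incremented timestamps like I did above, you can ask for the capture instance's minimum LSN directly and map it back to a time.  A sketch, assuming my dbo_cdctest capture instance:

```sql
-- The earliest valid LSN for this capture instance, and its timestamp.
DECLARE @min_lsn binary(10)
SELECT @min_lsn = sys.fn_cdc_get_min_lsn('dbo_cdctest')

SELECT @min_lsn                              AS min_lsn,
       sys.fn_cdc_map_lsn_to_time(@min_lsn)  AS min_time
-- Any start date at or after min_time keeps fn_all_changes_... happy.
```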

LSNs are binary values that compare with ordinary comparison operators, right?  Why couldn't this have been done with simple > and < signs?  (Funny, because >.< is exactly how I feel...)

This CDC function works with the date 01/01/2010, even though I started CDC long after 01/01/2010:

SELECT sys.fn_cdc_map_time_to_lsn ('smallest greater than or equal','01/01/2010')

Returns:

0x0000001B000002690034

And this still works too:

select * from cdc.dbo_cdctest_CT where __$start_lsn > 0x0000001B000002690034 order by __$start_lsn


So why doesn't this work? 


select * from dbo.fn_all_changes_dbo_cdctest ('2010-01-01',null, 'all')


Because "the LSN range is invalid"?

I'm sure there's a perfectly good reason, but even the folks on the product team would have to admit: it's goofy, frustrating and counter-intuitive.

And the most frustrating part is that they could fix it by adding two lines to fn_all_changes_dbo_...:

Before the line:

if @from_lsn is not null and @to_lsn is not null and (@from_lsn = [sys].[fn_cdc_increment_lsn](@to_lsn))
    return

Add:


    if @from_lsn is null
        select @from_lsn = [sys].[fn_cdc_get_min_lsn]('dbo_cdctest')   

And it fixes it.  No joke.  (Obviously, I disclaim any liability if you muck with an MS-generated function on your production box.  I don't recommend making the above fix; I'm only pointing out its maddening simplicity.)
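If you'd rather not touch the generated function at all, you can do the same clamping caller-side: map the date yourself, fall back to the minimum LSN, and call the underlying CDC function directly.  A sketch against my dbo_cdctest instance:

```sql
DECLARE @from_lsn binary(10), @to_lsn binary(10), @min_lsn binary(10)

-- Map the (possibly too-early) date to an LSN.
SELECT @from_lsn = sys.fn_cdc_map_time_to_lsn('smallest greater than or equal', '2010-01-01')
SELECT @min_lsn  = sys.fn_cdc_get_min_lsn('dbo_cdctest')
SELECT @to_lsn   = sys.fn_cdc_get_max_lsn()

-- Clamp to the capture instance's minimum, same idea as the two-line fix.
IF @from_lsn IS NULL OR @from_lsn < @min_lsn
    SET @from_lsn = @min_lsn

-- Call the underlying CDC function with a guaranteed-valid range.
SELECT * FROM cdc.fn_cdc_get_all_changes_dbo_cdctest(@from_lsn, @to_lsn, 'all')
```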

And before you ask about my insistence on using 01/01/2010: it's just an example.  Nothing special.  And sure, this isn't a big problem, because CDC is typically used on a rolling or ongoing basis, not from a beginning-to-now basis.

I welcome any and all responses on this.

UPDATE: small typo fixes.

8/14/2010

8/11/2010

Visual Studio Schema Compare frustration

In Visual Studio Schema Compare, if the Export to Editor option is grayed out, Write Updates is unavailable, or the Schema Update Script window is blank, in my experience it is one of two things.
  1. One or both of your connections is to a SQL Server, and you don't have adequate permissions.  See here.
  2. One or both of your connections is to a Database Solution, and you need to rebuild your database project(s).  If you can't rebuild because of errors, fix them and rebuild the database project.  If you have more than one database project, make sure the SqlCmdVariables used for the compare are chosen and set up correctly between them.

Check SharePoint version with SQL

Ever needed to check your SharePoint installation version, but don't have access to check it in Central Admin?

Run this query in the _Config database.

SELECT Version, TimeStamp FROM Versions      
WHERE VersionId = '00000000-0000-0000-0000-000000000000'    
ORDER BY Id DESC

8/10/2010

Backup SQL with Powershell (long)

Had to set up backups for a SQL Express 2005 database.  Why some vendors insist on creating software dependent on SQL Express, and then not giving their clients any way to back up said data, is mind-bogglingly stupid.

Here's a PowerShell script I wrote to accept a server name, database, and target backup folder.  It performs a single full backup when called. 

This script, which includes error handling, sub-folder creation and more robust error reporting, is a lot more involved than any example I found online; I had to amalgamate the knowledge of dozens of Google Bing searches.

Some key syntax features of Powershell that I learned about and implemented here:
  1. The InnerException loop is the only way to get the actual SQL error out.  Otherwise, you're just stuck with Error 3041, "BACKUP failed to complete the command", which is not informative at all.  This is probably the most critical piece: getting meaningful errors into a text file log in the event of a backup failure.
  2. The Test-Path verifies if the container exists, so that you can create a directory if it doesn't.
  3. The $SMOBackupObject.Database property wants a database name (string), not a Database object.  That really threw me off for an hour or so.  My fault, of course.
  4. I used the SilentlyContinue ErrorActionPreference so that the user would see a clean error message (in my if $error.count section) instead of a messy red error from PowerShell, which would not be informative anyway (see #1).
  5. I used both the Trap syntax (to access the Exception object) and the if $error.count, because I wanted to have it return something to the end user regardless. 
  6. The out-null parts also hide messy returns to the end-user.
  7. The $error.clear() is also pretty important, as it clears out any errors from earlier runs if the user wants to try running the script again.  I'm not sure why this is necessary; it's not intuitive that errors persist like that.  Open to any advice.

    function SQLFullBackup ([string]$SQLServerName, [string]$SQLDBname, [string]$BackupFolder) 
    {  
     
        # SQL Server Backup
        # Executes a single FULL backup of a given database 
     
        $error.clear()
        $ErrorActionPreference = 'SilentlyContinue'
        $BackupDate = Get-Date -format yyyyMMddHHmmss
     
        # SPECIFY SQLServer\Instance name with this variable
        $SQLServer =  New-Object ("Microsoft.SqlServer.Management.Smo.Server") $SQLServerName
     
     
        Trap [Exception] {
            $err = $_.Exception
            while ( $err.InnerException )
                {
                $err = $err.InnerException
                $errorfullmessage += $err.Message + " "
                };
     
            $errorfullmessage | out-File ($BackupFolder + "\BackupHistory_Error_" + $SQLDBName + "_" `
                + $BackupDate + ".txt");
        }
     
        #Create the subfolder if it does not already exist.
        if ((Test-Path -path ($BackupFolder + "\" + $SQLDBName) -pathtype container) -ne $True)
        {
            New-Item -path $BackupFolder -name $SQLDBName -type directory | out-null
        }
     
        [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | Out-Null
        [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SmoExtended") | Out-Null
        [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.ConnectionInfo") | Out-Null
        [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SmoEnum") | Out-Null
     
        $SMOBackupObject = New-Object ("Microsoft.SqlServer.Management.Smo.Backup")
        $SMOBackupObject.Database = $SQLDBName
        $SMOBackupObject.Action = "Database" # File, Log
        $SMOBackupObject.Incremental = $false
        $SMOBackupObject.Devices.AddDevice($BackupFolder + "\" + $SQLDBName + "\" + $SQLDBName + "_" `
                + $BackupDate + ".bak", "File")
        $SMOBackupObject.SqlBackup($SQLServer) 
     
        if($error.count -gt 0)
        {
            "Backup of database " + $SQLDBName + " failed.  See log file in " + $BackupFolder + "\BackupHistory_Error_" `
            + $SQLDBName + "_" + $BackupDate + ".txt"
        }
        else
        {
            "Backup of database " + $SQLDBName + " successful."
        }
     
    };
     
Here are the function calls, per database, for the above function:

    SQLFullBackup ".\SQLEXPRESS" "express" "c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\backup"
    SQLFullBackup ".\SQLEXPRESS" "express2" "c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\backup"

Here's sample output of a successful call and a failed call:


    PS C:\Windows\system32>C:\powershell_scripts\sqlfullbackup calls.ps1
    Backup of database express successful.
    Backup of database express2 failed.  See log file in 
    c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\backup\BackupHistory_Error_express2_20100810094357.txt 

Here are the files created:
    BackupHistory_Error_express2_20100810094357.txt
    \express\express_20100810094357.bak

And the error log actually contains an informative SQL error.  In this case:

    An exception occurred while executing a Transact-SQL statement or batch. 
    Could not locate entry in sysdatabases for database 'express2'. No entry found with that name. 
    Make sure that the name is entered correctly.
    BACKUP DATABASE is terminating abnormally. 


How did this DBA first get into Powershell?  Here's a blog post about it.

How I Learned to Stop Worrying and Love the Powershell

So, as a precursor to my next blog post, where I actually do something SQL-related with Powershell, I'll post my first two Powershell scripts here.  They are completely unrelated to SQL, but they were my first exposure to Powershell 2.0 and all its majesty.

I am a consultant, and my Cisco VPN client app has half a dozen different VPN connections.  I spend a lot of time on client sites, or on a VPN to the client site from the office or the home office.  Needless to say, there's plenty of stuff I do (some of it non-billable, like this exercise; some of it billable) while not on a VPN.

And since there are certain apps I usually run while off VPN that I don't want running on my machine while I'm on VPN, I wrote a pair of Powershell scripts, one called onvpn and the other called offvpn.

Here's what they do:

offvpn.ps1 starts up my Google Calendar Sync (which syncs my Google calendar to Exchange to my iPhone, very handy), Tweetdeck, Firefox, and MSN Messenger.

    Clear-Host
    if (-not (Get-Process googlecalendarsync -ea 0) ) { Start-Process "C:\Program Files\Google\Google Calendar Sync\GoogleCalendarSync.exe" }
    if (-not (Get-Process firefox -ea 0) ) { start-process "C:\Program Files\Mozilla Firefox\firefox.exe" }
    if (-not (Get-Process msnmsgr -ea 0) ) { start-process  "C:\Program Files\Windows Live\Messenger\msnmsgr.exe" }

    if ( Get-Process vpngui -ea 0 ) { Stop-Process -name vpngui }
    if ( Get-Process vpnui -ea 0 ) { Stop-Process -name vpnui }

What does onvpn.ps1 do?  Just the opposite, plus a few more things, like the amazonmp3downloader app that only starts up on demand.  It also starts up my vpngui client and my Remote Desktop Connection Manager app, which I'll inevitably be using over the VPN.


    Clear-Host
    Get-Process firefox -ea 0 | % { $_.CloseMainWindow() } 
    if ( Get-Process googlecalendarsync -ea 0 ) { Stop-Process -name googlecalendarsync }
    if ( Get-Process msnmsgr -ea 0 ) { Stop-Process -name msnmsgr }
    if ( Get-Process amazonmp3downloader -ea 0 ) { Stop-Process -name amazonmp3downloader }
    if (-not ( Get-Process vpngui -ea 0 ) ) { Start-Process "C:\Program Files\Cisco Systems\VPN Client\vpngui.exe" }
    if (-not ( Get-Process RDCMan -ea 0 ) ) { Start-Process "C:\Program Files\Remote Desktop Connection Manager\RDCMan.exe" }
     
The Get-Process call determines whether the processes are running before starting or stopping them.

One small annoyance is that the Stop-Process command isn't that graceful.  Firefox always starts up with the "well, this is embarrassing" screen asking me if I want to try to restore the previous session.  If there were a way to shut these apps down more gracefully, I'd be open to it.

(Also, as a corollary, I figured out how to post nice code.  The Visual Studio 2010 PowerTools has a copy-to-HTML feature.  Much nicer, eh?)

Questions and comments are welcome.

UPDATE: Updated the onvpn.ps1 script to use a kinder, gentler way of shutting down some apps.  See the comments.

8/06/2010

SSRS: does not have write access to 'C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files'

In case anyone gets this error:

    Server Error in '/' Application.
    The current identity ({whatever domain\username}) does not have write access to 'C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files'.

This blog entry helped me out.  The result was odd, but it worked:

    C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727>aspnet_regiis -ga "{whatever domain\username}"

    Start granting {whatever domain\username} access to the IIS metabase and other directories used by ASP.NET.
    An error has occurred: 0x800703f0 An attempt was made to reference a token that does not exist.

Fun with SQL 2005 Service Pack 3

At a remote client site last night, I ran into a series of problems with SQL 2005 RTM and updating it to SP3.

    Error Number : 1612
    Error Description : Unable to install Windows Installer MSP file

Errors like that, on the SQL, AS, NS, RS, DTS and Tools components.  Furthermore, the SP3 install was asking me for the original install media locations of the .msi files before failing each component during the Service Pack installation.

I tried a few things unsuccessfully: setting explicit admin permissions on numerous folders (which has worked in the past), using a different copy of SQL 2005 CD1 media, trying SP2 instead (same problems), uninstalling the antivirus software, googling like a mad man. (Er... I mean Binging.)

Eventually, my mad searching led me to this MSDN Blog Entry, which initially seemed very daunting.  And I was correct, it was very time-consuming.  But it worked.

(Note that if you've found this blog entry because you're experiencing some of the errors within, Parts 1 and 2 of the above Blog Entry link are what you are looking for.)

This error in the service pack log files was the most telling:

    Warning: Local cached package 'C:\WINDOWS\Installer\a1b67256.msi' is missing.

After 45 minutes of searching through logs for eight-digit random strings and copying files from the unpacked SP3 exe and the original media into c:\windows\installer, the SQL, AS and RS components were all installed by the service pack correctly.

But the NS, DTS and Tools components gave me an entirely different error:

    Error 2902
    Operation ixfAssemblyCopy called out of sequence.

I re-copied the msi and msp files, thinking that perhaps I had done something wrong.  That didn't work.  I reinstalled, noticing that NS and DTS were installed as "available on first run".  I fully installed all features this time around for NS, DTS and Tools, but again, SP3 reported the above ixfAssemblyCopy error.

To get SP3 to take Tools, I had to uninstall and reinstall that component using CD2:\Setup\SqlRun_Tools.msi.  That allowed Service Pack 3 to finally update successfully.

What a night!

Some talking points:

• Interesting how I was told five years ago that when Microsoft made SSIS, they didn't re-use a single line of code from DTS.  That is hard to believe, since the installers for SSIS are still called SqlRun_DTS.msi.

• There was some interesting stuff in the logs that threw me off.  Stuff like:

      MSI (s) (74:74) [20:12:05:278]: Note: 1: 2203 2: i:\35a1934bc4ebd525729353c57bb8\HotFixSQL\Files\sqlrun_sql.msp 3: -2147287037
      MSI (s) (74:74) [20:12:05:278]: SOURCEMGMT: Source is invalid due to missing/inaccessible package.

  That's really odd, since there actually is an I:\ drive mapped on this server, and it doesn't contain any folders like that.  I assume that long-string-named folder is temporary and that its use of I: would be virtual.

  Could the Service Pack actually have been thrown off by the fact that an I: drive existed?  I couldn't unmap it, it wasn't my server, but I can't help thinking that was fishy.  Has anyone else encountered service pack issues when an I: drive exists?

• I still don't know why SP3 couldn't copy the .msi and .msp files on its own.  It seems simple enough; what's the explanation?  It couldn't be permissions, as I gave explicit admin rights to the installing user on every folder I could.  Talk about aggravating.