Thursday, January 30, 2014

It May Be Necessary to Re-enable Service Broker After Bare-Metal Restore

After the restore of a database, or perhaps after a bare-metal restore of a Windows server running SQL Server, the unique IDs in the Service Broker settings may have changed, causing the broker to fail at startup. In that case, you will need to reactivate Service Broker.

You may first notice this because Database Mail gives you an error message about the Service Broker not being enabled in the MSDB database.

For example:
"...Either Service Broker is disabled in msdb, or msdb failed to start. ... Bring msdb online, or enable Service Broker."

Clicking OK at this prompt can cause SQL Server Management Studio to hang or freeze. Re-enable the broker with a T-SQL script instead.

Similarly, running

ALTER DATABASE MSDB SET ENABLE_BROKER

will never complete.

It may be necessary to run

ALTER DATABASE MSDB SET NEW_BROKER WITH ROLLBACK IMMEDIATE

to reset the service broker's unique ID on MSDB and then

ALTER DATABASE MSDB SET ENABLE_BROKER

will complete successfully.
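A quick way to confirm the result is to check msdb's broker state in sys.databases; SET NEW_BROKER generates a fresh service_broker_guid, and is_broker_enabled should read 1 afterward:

```sql
-- Check whether the broker is enabled and inspect its unique ID
-- (regenerated by ALTER DATABASE ... SET NEW_BROKER)
SELECT name, is_broker_enabled, service_broker_guid
FROM sys.databases
WHERE name = 'msdb';
```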

Monday, January 27, 2014

Start/Stop a Group of Azure VMs

If you've already configured your local machine's PowerShell environment to interact with your Azure subscription (see here: https://azure.microsoft.com/en-us/documentation/articles/powershell-install-configure/), then you can begin saving yourself an awful lot of button-clicking by scripting out the start/stop of your VMs, especially if you have a cluster of Azure VMs for testing and demonstrating SQL Server Availability Groups, as an example.

You will need to change a few things in these scripts to suit your purposes:
  1. Change "YourAzureAccount@hotmail.com" to your Azure account. For MSDN-based Azure subscriptions like mine, this address was my Hotmail email address.
  2. Change "Visual Studio Premium with MSDN" to the name of your Azure Subscription. This was the name my MSDN-based account was given by default.
  3. Populate the $vms variable with the list of Azure VMs in the cluster you're looking to start/stop as a group, replacing "yourVMName-alwayson-dom" and so forth.
Big thanks and credit for the assist on this blog post to Cody Gros, SharePoint Solutions Architect and my coworker at Sparkhound.

Stop a list of Azure VMs:
#if expired, Add-AzureAccount

Get-AzureSubscription | ? {$_.ActiveDirectoryUserId -eq 'YourAzureAccount@hotmail.com' -and $_.SubscriptionName -match "Visual Studio Premium with MSDN" } | % { Select-AzureSubscription -SubscriptionId $_.SubscriptionId }

$vmHash = @{}
$vms = "yourVMName-alwayson-dom","yourVMName-alwaysonsql1","yourVMName-alwaysonsql2","yourVMName-alwaysonWSFC"

Get-AzureVM | foreach{$vmHash.Add($_.Name,$_.ServiceName)}

foreach ($vm in $vms) {    
    $currentVMService = $vmHash[$vm] 
    Write-Host "Current VM:$($vm)"
    $thisvm = Get-AzureVM -ServiceName $currentVMService -Name $vm
    Write-Host "Stopping VM:$($thisvm.Name)"
    Stop-AzureVM -Name $thisvm.Name -ServiceName $thisvm.ServiceName #-StayProvisioned  
}
A note about the -StayProvisioned switch above: specifying it will retain some IP settings, but your VMs will continue to accrue Azure charges even while stopped. Use with care.

Start a list of Azure VMs:
#if expired, Add-AzureAccount


Get-AzureSubscription | ? {$_.ActiveDirectoryUserId -eq 'YourAzureAccount@hotmail.com' -and $_.SubscriptionName -match "Visual Studio Premium with MSDN" } | % { Select-AzureSubscription -SubscriptionId $_.SubscriptionId }

$vmHash = @{}
$vms = "yourVMName-alwayson-dom","yourVMName-alwaysonsql1","yourVMName-alwaysonsql2","yourVMName-alwaysonWSFC"

Get-AzureVM | foreach{$vmHash.Add($_.Name,$_.ServiceName)}

foreach ($vm in $vms) {    
    $currentVMService = $vmHash[$vm] 
    Write-Host "Current VM:$($vm)"
    $thisvm = Get-AzureVM -ServiceName $currentVMService -Name $vm
    Write-Host "Starting VM:$($thisvm.Name)"
    Start-AzureVM -Name $thisvm.Name -ServiceName $thisvm.ServiceName    
}
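After either script runs, a quick status check confirms the whole group reached the expected state. This is a sketch using the same service management module; stopped, deallocated VMs report a Status of "StoppedDeallocated":

```powershell
# List each VM's current power state across all cloud services
Get-AzureVM | Select-Object ServiceName, Name, Status | Format-Table -AutoSize
```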

If upon running the scripts you receive either of these errors:
get-azuresubscription : The term 'get-azuresubscription' is not recognized as the name of a cmdlet, 
Get-AzureVM : The term 'Get-AzureVM' is not recognized as the name of a cmdlet, function, script file, or 

Then you don't have the PowerShell module loaded, and PowerShell isn't automatically loading it for you. Use the Import-Module command. Below is the default location of the module, downloaded from here: https://azure.microsoft.com/en-us/documentation/articles/powershell-install-configure/
Import-Module 'C:\Program Files (x86)\Microsoft SDKs\Azure\PowerShell\ServiceManagement\Azure\Azure.psd1'

Updated 20160322 to reflect the latest module paths

Wednesday, January 22, 2014

Easy Backup History Review Script

I often use these scripts to check on the backups of all databases in an unfamiliar SQL Server instance, regardless of the platform taking the backups (SQL maintenance plans, T-SQL scripts in an agent job, or third-party software).

The query sorts databases by the backup type and then the recovery model configured. Because the backups are grouped by database name, it shows the most recent full backup for each database, most recent differential if it exists, and then the most recent transaction log backup (but only for databases in FULL or BULK_LOGGED recovery models).

Most importantly, it will show a transaction log backup row for every database in the FULL or BULK_LOGGED recovery model, even if a transaction log backup has never been taken (the date will simply be NULL).

If any of the dates look out of place, too old or NULL, they will stand out right away.

  • The first two scripts are functionally equivalent. The first uses SQL 2005 system views (sys.databases) that are only appropriate if all databases are running SQL 2005 compatibility mode or higher. 
  • The second script uses the SQL 2000 system views (sysdatabases), which are now deprecated.
  • The third script provides a granular list of all backup events by database, and is valid for SQL 2000 and up.
--sql2005 and above
select
    database_name
    , backuptype
    , d.recovery_model_desc
    , BackupDate = MAX(BackupDate)
from sys.databases d
inner join
(
    select distinct
        database_name
        , backuptype = case type WHEN 'D' then 'Database'
                                 WHEN 'I' then 'Differential database'
                                 WHEN 'L' then 'Transaction Log'
                                 WHEN 'F' then 'File or filegroup'
                                 WHEN 'G' then 'Differential file'
                                 WHEN 'P' then 'Partial'
                                 WHEN 'Q' then 'Differential partial' END
        , BackupDate = MAX(backup_start_date)
    from msdb.dbo.backupset bs
    group by database_name, type
    UNION
    select distinct
        db_name(d.database_id)
        , backuptype = 'Database'
        , null
    FROM master.sys.databases d
    UNION
    select distinct
        db_name(d.database_id)
        , backuptype = 'Transaction Log'
        , null
    FROM master.sys.databases d
    where d.recovery_model_desc in ('FULL', 'BULK_LOGGED')
) a
on db_name(d.database_id) = a.database_name
group by database_name, backuptype, d.recovery_model_desc
order by backuptype, recovery_model_desc, BackupDate asc
 
 
--sql 2000 and above
select distinct
    database_name = d.name
    , a.backuptype
    , RecoveryModel = databasepropertyex(d.name, 'Recovery')
    , BackupDate = MAX(a.backup_start_date)
from master.dbo.sysdatabases d
left outer join
(
    select distinct
        database_name
        , backuptype = case type WHEN 'D' then 'Database'
                                 WHEN 'I' then 'Differential database'
                                 WHEN 'L' then 'Transaction Log'
                                 WHEN 'F' then 'File or filegroup'
                                 WHEN 'G' then 'Differential file'
                                 WHEN 'P' then 'Partial'
                                 WHEN 'Q' then 'Differential partial' END
        , backup_start_date = MAX(backup_start_date)
    from msdb.dbo.backupset bs
    group by database_name, type
    UNION
    select distinct
        d.name
        , backuptype = 'Database'
        , null
    FROM master.dbo.sysdatabases d
    UNION
    select distinct
        d.name
        , backuptype = 'Transaction Log'
        , null
    FROM master.dbo.sysdatabases d
    where databasepropertyex(d.name, 'Recovery') in ('FULL', 'BULK_LOGGED')
) a
on d.name = a.database_name
group by d.name, backuptype, databasepropertyex(d.name, 'Recovery')
order by backuptype, RecoveryModel, BackupDate asc
 
--granular backup history
select distinct
    database_name
    , type
    , backuptype = case type WHEN 'D' then 'Database'
                             WHEN 'I' then 'Differential database'
                             WHEN 'L' then 'Transaction Log'
                             WHEN 'F' then 'File or filegroup'
                             WHEN 'G' then 'Differential file'
                             WHEN 'P' then 'Partial'
                             WHEN 'Q' then 'Differential partial' END
    , BackupDate = backup_start_date
    , database_backup_lsn
    , bf.physical_device_name
from msdb.dbo.backupset bs
left outer join msdb.dbo.backupmediafamily bf
    on bs.media_set_id = bf.media_set_id
--where database_name = 'databasenamehere'
--and type in ('d', 'i')
order by database_name asc, BackupDate desc
  

Wednesday, January 15, 2014

Not Dead Yet - Text and NText in SQL 2014

I was recently approached by a client who was running SQL Server 2008 with some databases in SQL 2000 compatibility level. He asked what would happen if his databases were backed up and restored to SQL Server 2014. What's the compatibility level? Is the text data type still supported? Would it even restore?

Here are the answers, based on SQL Server 2014 CTP2. They are consistent with this statement on MSDN: "No breaking changes in SQL Server 2014."

SQL Server 2012 does not support SQL 2000 compatibility mode, and neither does the SQL Server 2014 CTP2.

When restoring a full backup of a SQL Server 2008 R2 database in SQL 2000 compatibility level to SQL Server 2014 CTP2, the restore process will upgrade the database and change the compatibility level to 100 (SQL 2008).

Compatibility Levels Supported by SQL 2012 SP1

Compatibility Levels Supported by SQL 2014 CTP2

Frustratingly, one of the deprecated old data types from SQL's nascent years is still hanging around.  The text (and ntext) data types still work in SQL Server 2014 CTP2.

When given the opportunity to upgrade that text data type to a modern data type, take care to run a MAX(DATALENGTH(textcolumn)) first and find out how many bytes you're actually storing. If it is less than 8,000 bytes (8,000 characters for text, 4,000 characters for ntext), don't just go straight to varchar(max); a right-sized varchar(n) or nvarchar(n) may serve you better.
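For example, a sketch of that sizing check, where the table and column names are hypothetical:

```sql
-- Measure the largest value actually stored in the legacy text column
SELECT MAX(DATALENGTH(notes)) AS max_bytes
FROM dbo.LegacyOrders;

-- If max_bytes is comfortably under 8000, a right-sized column may be the better target:
-- ALTER TABLE dbo.LegacyOrders ALTER COLUMN notes varchar(8000) NULL;
```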

Other breaking changes that occurred after SQL 2000, like ambiguous columns in an ORDER BY or the old non-ANSI outer join syntax ("*=" and "=*"), are things you need to watch out for when raising the compatibility level of a SQL 2000 database, as always.
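As an illustration (the tables here are hypothetical), this is the kind of pre-2005 join syntax that fails above compatibility level 80, along with its ANSI equivalent:

```sql
-- Fails above compatibility level 80:
-- SELECT c.name, o.total FROM dbo.customers c, dbo.orders o WHERE c.id *= o.customer_id

-- ANSI equivalent, valid at every compatibility level:
SELECT c.name, o.total
FROM dbo.customers c
LEFT OUTER JOIN dbo.orders o
    ON c.id = o.customer_id;
```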

When you restore a SQL 2000 compatibility level database to SQL Server 2012 or SQL Server 2014, syntax that broke between SQL 2000 and SQL 2005 is now enforced and will fail. This isn't news, just a reminder.

Tuesday, January 07, 2014

Don't Hitch Your Wagon To MS Access

I know this might seem like an odd topic on a SQL Server blog, but it's a common and critical enough problem that I want to address it. (The audience of this blog post is business managers, departmental decision makers, and yes, even human resources professionals.)

As a consultant for an information technology services company, I have seen far too often the risks and outcomes of years of "lazy" investment in Microsoft Access as the front end and repository for enterprise-level data.

I've seen multiple companies, across a wide variety of industries, commit these sins of Microsoft Access:
  • Mission critical loan decision-making data in Access, sitting on a VP's desktop PC (with no backup)
  • AR and AP inside a simple Access app, forcing a single office worker bee to do both, a clear GAAP violation
  • Company without ability to fix/modify their billing system when the legacy developer left, and no skillset left in house for the tangle of VBA code embedded in Access
  • Company unaware of Access size limitations, forced emergency redesign just to send invoices
  • Access data corrupted because of inadvertent office worker use, no alternative but to manually re-enter data
  • Field data dumped from Excel into Access so that it is "queryable", without any validation or integrity
  • Entrenched Access data is difficult and expensive to integrate with newer, modern systems that the company is investing in
So Tempting

Microsoft Access is very easy to use for non-developer folks to develop basic data entry screens, data storage, and reports. In this sense, it's also great for students. My first database class at LSU was taught entirely within the confines of Access 2000.

Think about that.

It's perfect for untrained students. Should you really be saddling your enterprise with mission-critical functionality based on a consumer office product? There are more capable alternatives that will scale up with your company's growth and the historical data collected.

It's just too easy to create a screen to capture this data and present a decent report, all while creating important data that is stored on a consumer-grade hard disk. You might be shocked to realize that many IT departments don't "back up the whole computer" any more often than weekly - if you're lucky. Your poor IT folks won't be pleasantly receptive to recovering or supporting Access databases you created without their knowledge.

At the very least, storing the Access databases on a network drive, and breaking the database out into separate .MDE (front-end forms) and .MDB (back-end data) files is what your IT folks will want to do, if they are familiar with handling these types of files. However, that's still not enterprise-ready.

It "Just Works", Until It Doesn't

Many non-technical users are not aware that Access databases cannot exceed 2 GB in size. Aside from the size limitation, Access also provides no enterprise-level point-in-time backup and recovery. It's backed up just like any other file, which is to say, recoverable only to the last file-level backup - often no more recent than last night.

Access is also not designed for concurrency. It may get along fine when two or three people use it, but the lightweight database engine simply is not designed to support many simultaneous users. Even if it appears to work for your team today, data corruption becomes increasingly likely as usage grows.

You've probably already encountered this, and warned users to "stay out of the database" while important data entry is occurring.

Microsoft documentation is full of caveats about the lack of scalability for Access:

"Microsoft Jet [the database engine inside Access] can only handle a limited number of sessions." - http://support.microsoft.com/kb/225048/en-us
"...when multiple concurrent users make requests of a Microsoft Access database, unpredictable results may occur." - http://support.microsoft.com/kb/299973
"...it was not intended (or architected) for high-stress performance..." - http://support.microsoft.com/kb/222135/en-us

Get a Real Database Solution

Open up the lines of communication to your IT managers and inquire about skillsets in custom development, SQL Server and SharePoint. The payoff is worth it. The project may take longer to develop, but satisfaction with how well the new project works for capturing and reporting your data will be higher, not to mention the basics of disaster recovery.

Microsoft SQL Server is much better suited to handle your data with integrity, with disaster recoverability, and with performance. SQL Server Reporting Services can be used by developers to serve faster-loading, automatically-updated and highly securable read-only reports in your web browser to users above and below you on the org chart. SQL Server Integration Services can be used to pump data from heterogeneous data sources (Excel files, text files, or other database platforms such as Oracle). Microsoft SharePoint is an excellent provider of document sharing and office collaboration, while also providing a quick and easy way for developers to create data entry forms that run in your web browser.

Typically when consulting on a large legacy Access migration, where a company is feeling the squeeze from their overmatched old Access platform, I recommend a combination of Microsoft SQL Server and Microsoft SharePoint. Using a multi-stage migration, we can first move data out of Access and onto SQL Server using "linked tables", then migrate custom forms and report functionality to SharePoint and Reporting Services one by one.

These products work well together and you may be surprised to learn that they are already in your corporate IT environment.

The Real Advantages

Microsoft Access is not an enterprise database engine in the practical sense. It is a database simulator. The distinction isn't clear until you become aware of the advantages of an enterprise database server.

Using a real database platform like SQL Server also allows your IT department to be more involved in its maintenance and administration, which is a good thing, and will certainly help them sleep better at night.

Remember, your IT folks probably can't help you recover a mission-critical Access database they didn't know existed.

So to wrap us up, and because this is actually a SQL Server blog, let's go over some of the major differences between Access and SQL Server, from a bit of a technical angle.

MS Access vs SQL Server

Higher concurrency: Microsoft SQL Server is designed for access by multiple users, and is far superior to an Access database when asked to handle simultaneous reads and writes. It's not the fault of Access; it simply wasn't designed for anything but the simplest multi-user environments.

File Size: Most versions of Access have a file size limitation of at most 2 GB per file, which may or may not be large enough to handle multiple years of your data. Access is also severely limited in the memory and processors it can leverage on any given desktop or server. Microsoft SQL Server has no such practical limitations on size or memory.

64-bit Performance: Access' Jet database engine was not created to leverage multiple CPUs, and still relies on old 32-bit processing. In the past decade, most operating systems and applications have moved to more efficient 64-bit processing. SQL Server provides both 32-bit and 64-bit versions, but 64-bit is highly recommended.

Disaster Recovery: The most important advantage of Microsoft SQL Server is its disaster recovery capability, including point-in-time recovery, which can roll a database forward to the moment before data was lost or corrupted, reducing data loss in the case of disk failure. Using SQL Server backups, a database currently in use can be backed up automatically on a schedule without interrupting users.
  • Access database corruption is a danger and the only fallback would be the restoration of a previous version of the database file, if such a backup exists. 
  • To perform an Access database backup within the application, you must manually initiate this and close all objects first. 
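To make the point-in-time capability concrete, here is a sketch of a restore sequence; the database name, file paths, and timestamp are all hypothetical:

```sql
-- Restore the last full backup, leaving the database ready to accept log restores
RESTORE DATABASE Sales
    FROM DISK = N'D:\Backup\Sales_full.bak'
    WITH NORECOVERY;

-- Roll the transaction log forward, stopping just before the data was lost
RESTORE LOG Sales
    FROM DISK = N'D:\Backup\Sales_log.trn'
    WITH STOPAT = N'2014-01-07T14:30:00', RECOVERY;
```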
High Availability: Using AlwaysOn Availability Groups in SQL Server 2012, Microsoft SQL Server can provide manual or automatic failover across geographic distances using Windows Failover Clustering. This can also provide a readable, live remote copy of the data that can serve reports. Access has no ability to provide this kind of failover or redundancy.

Scalability: Reporting Services and Integration Services are two built-in features of Microsoft SQL Server that make custom-built reports and data movement much faster to develop, maintain, and deploy. They can replace highly-customized Access remote queries, linked tables and reports, for many users and for large amounts of data.
  • "Archiving" data by moving it out of active tables and into older passive tables is an undesired source of complexity in the Access database environment. SQL Server Enterprise edition's table partitioning feature can transparently achieve the advantages of spreading data across multiple physical disk volumes without coding applications to read from both "current" and "archive" tables.
Relational Integrity and Indexing: SQL Server has superior foreign key, primary key, uniqueness and constraint enforcement. Indexing and compression are far more robust in SQL Server, which improves data integrity, performance and disk utilization. Data types in SQL Server closely resemble those in Microsoft Access, but with a much wider range of capability.

Security: Microsoft SQL Server Enterprise Edition satisfies military and government standards for security, internal auditing and encryption, and with more granular security permissions for users in different roles in the company.