
Monday, September 14, 2020

[Updated] Prepping the local SSD D: in an Azure VM for SQL Server TempDB

One of the few good, safe uses for the local SSD volume of an Azure VM is for SQL Server tempdb data and log files. 

Note: this blog post has been updated, specifically in solution 3b below. See comments.

Typically this is the D: volume of an Azure VM, though not always in the case of migrated VMs.

Moving your SQL Server instance's tempdb files to the D: volume is recommended for performance, as long as the tempdb files fit in the D: volume that has been allocated based on your VM size.
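Moving the files is a one-time ALTER DATABASE per file. A minimal sketch, assuming the default logical file names (tempdev, templog) and a D:\TempDb folder (both assumptions; adjust for your instance):

-- Point tempdb at the local SSD (assumes default logical file names
-- and a D:\TempDb folder). Takes effect at the next service restart.
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'D:\TempDb\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'D:\TempDb\templog.ldf');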

When the contents of D: are lost due to deallocation, as expected, the subfolder you created for the tempdb files (if applicable) and the NTFS permissions granting SQL Server access to that folder are no longer present. Putting the tempdb data and log files in the root of D: isn't a solution either, as the NTFS permissions on the root of D: won't survive deallocation. In either case, SQL Server will be unable to create the tempdb files and will not start.

In the Windows Application Event log of the Azure VM, you'll see something like:
Event ID: 17204
FCB::Open failed: Could not open file D:\tempdb.mdf for file number 1.  OS error: 2(The system cannot find the file specified.).
Event ID: 5123
CREATE FILE encountered operating system error 5(Access is denied.) while attempting to open or create the physical file 'D:\tempdb.mdf'.
On Azure Windows VMs, the local SSD temporary disk is D:, and I'll assume it is for you as well. On Azure Linux VMs, the local SSD temporary disk is /dev/sdb by default. The solutions below are for the Windows OS.

Solutions


There are three solutions to this problem in Windows. To test any of them, don't just restart your VM; you have to deallocate it (stop it in the Azure Portal, then start it again). That's the only way to be sure the contents of D: have been lost. Keep in mind that data on the temporary disk may also be lost during Azure maintenance, which is mostly out of your control, so it is important to solve this issue whenever you use the D: for tempdb.

Note that simply shutting down the Azure VM from inside the operating system does not deallocate the VM, it only stops it. During a normal, successful reboot of the VM OS, data on the D: will not be lost, so a reboot initiated from inside an RDP session, for example, shouldn't cause the contents of D: to be lost. You must stop the VM from the Azure Portal to deallocate it. This is the difference between a VM status of "Stopped" and "Stopped (deallocated)".
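If you prefer the command line for this test, the Azure CLI can deallocate and then start the VM. A quick sketch with placeholder resource group and VM names:

az vm deallocate --resource-group MyResourceGroup --name MyVM
az vm start --resource-group MyResourceGroup --name MyVM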

1. Windows Scheduled Task Calling PowerShell

The first solution appears to be the one Microsoft documented years ago, but it is no longer necessary. To be clear, Solution 3 below, using the IaaS Agent, is now the recommended and best option.

The old workaround was something like this: a simple PowerShell script running in a Windows scheduled task, triggered At Startup. It also requires setting the SQL Server service to "Automatic (Delayed Start)".

The script creates the subfolder on D: for the tempdb, then grants permissions to the SQL Server service account, before SQL Server tries to start.

The script and the delayed startup are a clunky but necessary complication if you go this route, and the solution works fine. A minimal sketch of such a script follows.
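This is only a sketch, assuming the default instance and a D:\TempDb folder (the service name, account, and folder are assumptions; adjust for your environment):

# Scheduled task script, triggered At Startup.
# Assumptions: default instance MSSQLSERVER, tempdb folder D:\TempDb,
# service account NT SERVICE\MSSQLSERVER.
$path = 'D:\TempDb'
if (-not (Test-Path $path)) { New-Item -ItemType Directory -Path $path | Out-Null }
# Grant the SQL Server service account full control of the new folder
icacls $path /grant 'NT SERVICE\MSSQLSERVER:(OI)(CI)F' | Out-Null
# SQL Server is set to Automatic (Delayed Start) so this script runs first;
# starting the service here explicitly is belt-and-suspenders
Start-Service -Name 'MSSQLSERVER'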

2. Add SQL service account to local Administrators group

The second solution is to place the tempdb files in the root of D: and then make the SQL Server service account a local Windows administrator, so that it has permission to write to the root of D:. This violates most security best practices (though it is more common than you'd think).

The key here is placing tempdb in the root of D:. Even as a member of the local Administrators group on the server, SQL Server will not create subfolders upon startup, and a subfolder on D: won't exist after deallocation.

I'm not a fan, but it's a solution, and it is certainly the simplest, though it violates the principle of least privilege.

3a. Azure SQL Server IaaS Agent Extension, installed with VM (recommended)

A third and best solution involves the Azure SQL VM resource provider. It installs the SQL Server IaaS Agent Extension, which connects the SQL instance to the Azure portal for all sorts of easy integrations and brings a lot of manageability to the Azure Portal. It is generally highly recommended.

The SQL Server IaaS Agent Extension should handle preparing D: for tempdb, solving this startup issue for an Azure VM with SQL Server. This definitely works for VM images with SQL Server from the Azure Marketplace, which configure tempdb on the local SSD D: by default anyway. I have tested this with the newest "SQL Server 2019 on Windows Server 2019" image available in the Azure Marketplace, and it works. From the documentation:

"If you create the VM from Azure portal or Azure quickstart templates and place Temp DB on the Local Disk then you do not need any further action."

Turns out, Solution 3 here is very similar to Solution 1. When installed from the marketplace with the SQL Server IaaS Agent installed, the Azure VM has a Windows scheduled task, triggered At Startup, that calls "C:\Program Files\Microsoft SQL Server IaaS Agent\Bin\SqlIaaSExtension.SqlServerStarter.exe".

I speculate that this app runs on startup to prep the D: for tempdb, handling the permissions on the D:\TempDb subfolder for the SQL Server service account, which is not a member of the local Administrators group by default. In this case, unlike Solution 1 above, the SQL Server service is not configured to "Automatic (Delayed Start)"; it is still configured to "Automatic", as usual.

Again, to test this out, don't just restart your VM; you have to deallocate it (stop it in the Azure Portal, then start it). That's the only way to be sure the contents of D: have been lost and recreated.

I have split this solution into two parts because 3a only appears to work if SQL and the SQL VM Resource Provider are pre-installed via the image gallery. 

3b. Azure SQL Server IaaS Agent Extension installed manually

But what about an existing VM, not created with the SQL Server IaaS Agent Extension pre-installed from the Azure VM image gallery? This was originally unclear. [UPDATE August 2023]: This now works. The IaaS Agent now configures tempdb on the ephemeral local SSD D: drive even when it is registered on the VM after SQL Server was installed, so this solution should work the same as 3a above. See: Manage SQL Server virtual machines in Azure by using the Azure portal - SQL Server on Azure VMs.

We can install the SQL Server IaaS Agent Extension ourselves by registering the VM with the SQL VM resource provider. Once the resource provider is associated with your Azure subscription, adding the VM is easy. This installs the SQL Server IaaS Agent Extension in lightweight management mode, without restarting the VM, from the Azure CLI:
az sql vm create --name vm-iaasag-postsql --resource-group vm-iaasag-postsql_group --location eastus2 --license-type PAYG
Options for PowerShell are available in the same link.

However, you'll notice the SQL Server IaaS Agent Extension in lightweight management mode doesn't actually create the executable found in "C:\Program Files\Microsoft SQL Server IaaS Agent\Bin\". For that to be present, you need full management mode. You can upgrade from lightweight management mode or go straight to full management mode, but either way, full management mode costs a SQL Server service restart.
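At the time of writing, the Azure CLI could perform that upgrade. A sketch reusing the names from the earlier command (verify the parameter against the current az sql vm documentation before relying on it):

az sql vm update --name vm-iaasag-postsql --resource-group vm-iaasag-postsql_group --sql-mgmt-type Full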

At the time of the original post, though, if I created a new Windows VM without SQL Server, installed a SQL instance, installed the SQL Server IaaS Agent Extension in full management mode, and even scheduled "C:\Program Files\Microsoft SQL Server IaaS Agent\Bin\SqlIaaSExtension.SqlServerStarter.exe" in a scheduled task triggered at startup (just like a VM from the image gallery has), it didn't appear to prep the local SSD D: drive for tempdb upon startup after deallocation. The SQL Server service still failed to start because it could not create tempdb.

So, at the time, the SQL Server IaaS Agent Extension appeared to prep the local SSD D: drive for the tempdb files only if SQL Server and the SQL VM resource provider were pre-installed via the image gallery. I was actively working on a solution and reaching out to Microsoft; see the August 2023 update above for the resolution.

Thursday, May 28, 2020

SOLVED! Untrusted SAS certs after SQL startup

If you've tried doing Backup to URL with SQL Server using a Shared Access Signature (SAS) credential and received this error:

Error: 18204, Severity: 16, State: 1.
BackupDiskFile::CreateMedia: Backup device 'https://account.blob.core.windows.net/container/folder/msdb_log_202005170801.trn' failed to create. Operating system error 50(The request is not supported.).
Cannot open backup device 'https://account.blob.core.windows.net/container/folder/msdb_log_202005170801.trn'. Operating system error 50(The request is not supported.). [SQLSTATE 42000] (Error 3201)
BACKUP LOG is terminating abnormally. [SQLSTATE 42000] (Error 3013).
NOTE: The step was retried the requested number of times (1) without succeeding. The step failed.


You may have received the same error I encountered.

This error popped up only after startup of SQL Server. To resolve the problem, we'd drop and recreate the credential, using the same SAS key in the same script, and the backups would start working again. This affected all types of SQL database backups.

The issue was not our syntax. (Note the options: the syntax here is standard, but you do need to provide MAXTRANSFERSIZE = 4194304 and BLOCKSIZE = 65536 to back up to Azure blob storage.)
BACKUP DATABASE DB_name
TO URL = 'https://account.blob.core.windows.net/container/server-name/msdb_log_202005170801.trn'
WITH COMPRESSION, CHECKSUM, FORMAT, MAXTRANSFERSIZE = 4194304, BLOCKSIZE = 65536;


The issue was not the SAS key itself, which we tried creating via Storage Explorer, with PowerShell, with various expiration dates, with shared policies, etc.

Rather, the issue was with our SAS-based credential. Here was our syntax:
CREATE CREDENTIAL [https://account.blob.core.windows.net/container/server-name]
WITH IDENTITY='Shared Access Signature'
, SECRET='sv=2019-02-02&xxxxx';
Again, the syntax was correct in the sense that backups would start working again right away. But after a reboot, the credential would no longer be available to SQL.

On the Azure Storage side, the logged failure was:

AuthenticationErrorDetail:Signed expiry time [Fri, 01 May 2020 13:30:52 GMT] has to be after signed start time [Tue, 12 May 2020 22:30:57 GMT]
Why would the exact same SAS key expire after a reboot, but then start working again when the credential was dropped and recreated? And why wouldn't ALTERing the credential work?

Our ticket with Microsoft moved through the storage team, the SQL team, and eventually the SQL engine engineering team, who did a complete stack dump to catch the error.

The issue turned out to be related to the server-specific folder in our credential name above; note I included "server-name" because our server names have hyphens in them. Turns out, including that folder is unnecessary, and it caused SQL to choke on trusting the credential after startup. Here's the explanation from Microsoft support after analysis of the debugger dump:
When you create the credential, the data as we expected is stored in cache and is also persisted to disk but since the data is available in cache, we directly access with full URL name and it works. But after the SQL Server service is restarted and since the credential wont be available in cache, we try to query it from metadata and the way we do it there seems to be an issue on how we fragment the URL and we incorrectly look for container with foldername causing the issue. This is also why Alter wont work and only DROP\CREATE works since it’s in cache again.
We changed the credential to omit the server-specific folder name. Note that we had originally included the full path to the backup folder because I had read a little too much into this portion of the documentation: "The CREDENTIAL NAME argument requires that the name match the container path."

This credential creation worked, and kept working after SQL startup.
CREATE CREDENTIAL [https://account.blob.core.windows.net/container] WITH IDENTITY='Shared Access Signature'
, SECRET='sv=2019-02-02&xxxxx';

Note the credential name ends with the container path, does not include the server-specific subfolder, and still does not end with a '/'.

The support escalation engineer marked the ticket as a bug and filed a defect with the engineering team, so we may have a fix in place for this soon. The end.




Oh, also, when we were first starting to use Backup to URL, we got this on some of our older instances of SQL Server:
"Backup/Restore to URL device error: Error while decoding the storage key."
The problem was trying to use a SAS key on SQL Server prior to 2016. Not supported! You've got to use the old syntax, with a credential based on the storage account identity and access key.
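For reference, a sketch of that older style: the credential stores the storage account name and access key (all names and the secret below are placeholders), and the backup references the credential by name.

CREATE CREDENTIAL BackupToUrlCredential
WITH IDENTITY = 'mystorageaccount' -- the storage account name, not 'Shared Access Signature'
, SECRET = '<storage account access key>';

BACKUP DATABASE DB_name
TO URL = 'https://mystorageaccount.blob.core.windows.net/container/DB_name.bak'
WITH CREDENTIAL = 'BackupToUrlCredential', COMPRESSION, CHECKSUM;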

Tuesday, March 05, 2019

Backup of an Azure SQL DB downloaded outside of Azure

From a dev colleague: “Asking for a client... do you know how to get a backup of an Azure SQL DB downloaded outside of Azure?”

Short answer - You can’t…

Medium answer - … and you don't really need to for anything related to DR or HA. Backups are automagically handled in Azure, and Azure SQL DB geo-replication can be set up in a few button clicks. Instead of thinking of Azure SQL DB as a traditional SQL database on a server, think of it as a platform, and consider spinning up a stand-alone copy of the production Azure SQL DB in a cheaper tier to serve as pre-production. It'd be a better, truer dev/test/QA environment as a result.

Long answers – Okay, so you really want to get the Azure SQL DB to an on-prem SQL Server instance. Sigh.

  1. You can manually script it out into CREATE and INSERT T-SQL statements – schema and data – with various tools, including a schema/data compare with SSDT, or the officially supported, free, cross-platform scripter that is Python-based (mssql-scripter). Keep in mind this might be a huge file and a lengthy manual process with a large database.
  2. You could also use the "Export" button in the Azure portal for the Azure SQL DB to create a BACPAC in an Azure storage account (see the CLI sketch after this list). The export is secured with a username/password which you specify at the time of export. It will take a few minutes, even for a small database. Once it's done, you can download the .bacpac file from the blob location with Azure Storage Explorer or something similar, and then import the .bacpac. Keep in mind that this might also be a huge file and a lengthy manual process with a large database.
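For what it's worth, option 2 can also be scripted with the Azure CLI instead of the portal's Export button. A sketch with placeholder server, database, and storage names, assuming a storage account access key:

az sql db export --server myserver --name mydb --resource-group mygroup --admin-user sqladmin --admin-password '<password>' --storage-key '<storage-account-key>' --storage-key-type StorageAccessKey --storage-uri https://mystorageaccount.blob.core.windows.net/backups/mydb.bacpac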



Sunday, September 25, 2016

SQL SSRS Express Edition Cannot Connect to Azure SQL Databases

This has bitten a couple of clients recently, so I figured I'd put out a PSA.

Express editions of SQL Server Reporting Services, from SQL 2016 on down, cannot connect to Azure SQL databases. Turns out, getting something for free does come with some significant limitations.

For example, you'll see the error message "The Report Server has encountered a configuration error" on a data source page when creating a new SSRS data source in the Report Manager website. What you may not have noticed on this page is the set of possible values in the Data Source Type drop-down list.

In Express edition, "Microsoft SQL Server" is the only option. In Standard and higher editions, there are many data source types to choose from, including "Microsoft SQL Azure". Remember that Azure SQL databases can't accept OLE DB connections.

But why? Microsoft's documentation lists a large number of "Built-in data extensions," Azure SQL Database included. The explanation lies in the supported data sources matrix: Express edition only supports Express edition as a data source, while Standard and above support "All SQL Server editions."

This is what you're looking at in SSRS Express, versus what you could be doing in SSRS Standard or higher (screenshots in the original post).
So what's the solution?

Upgrading an SSRS Express edition instance in-place to a higher edition is possible.

Remember you can install various SQL Server features (Database, RS, IS, AS) on different Windows servers, each only once, as part of your license. So if you already own a Standard+ edition SQL Server somewhere but aren't using the SSRS feature, you now have a good use for it. 
Myth BUSTED: The above is incorrect! In the Licensing Guide for each recent version of SQL Server you'll find this: "The software components of a single SQL Server 2016 license cannot be separated. Any OSE running any of the licensed components of SQL Server 2016, requires a license. For example, if the SQL Server DB is deployed in one OSE and SQL Server RS is deployed in another, both OSEs must be fully licensed for SQL Server 2016 accordingly." The language was different and less specific for SQL 2005.

There are also a large number of fantastic (and also fanciful) ideas that developers will have, including SSRS alternatives. Listen to them, sure. But keep in mind that if your platform is SSRS and all the features that come with it, you will need to pay for it at some point. SQL Server Express edition has never been and never will be intended or suitable for standalone production usage.

Tuesday, April 05, 2016

SQL Server PerfMon Counters Not Listed in dm_os_performance_counters

I had this issue with a new Azure VM install of SQL Server, but I've had it in other, older places as well. SQL Server's performance counters in dm_os_performance_counters and/or PerfMon have gone missing. How do you get them back?

You might see only a small subset of all performance counters in the DMV. (There should be 1335 rows in this DMV for SQL Server 2014 RTM, 1245 for SQL 2012 SP2; more are added regularly with major releases.) In the example I saw this morning, only 51 rows were returned, and only for the XTP series of objects (XTP Transaction Log, XTP Cursors, etc.).

Strangely, I have now found this problem on brand-new Azure VMs running SQL Server, specifically in the pre-built SQL 2014 Availability Group resource group, which can be deployed easily from the new Azure portal.

The issue can be resolved with a pair of commands, run from an elevated PowerShell or command prompt, which re-register the performance counter names and explanation text for the SQL Server service in the Windows registry.

In PowerShell, modify the below for your instance.
  • In the first line, change the service name parameter to match the SQL Server instance: MSSQLSERVER is the name for the default instance. For a named instance, it would be MSSQL$InstanceName.
  • In the second line, replace my path below with the path for your instance.

unlodctr MSSQLSERVER

lodctr "C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\Binn\perf-MSSQLSERVERsqlctr.ini"
A restart of the affected service, in this case the SQL Server service, is necessary for the changes to take effect.
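A minimal way to do that from the same elevated PowerShell session, assuming the default instance:

# Restart SQL Server; -Force also restarts dependent services (e.g., SQL Agent)
Restart-Service -Name 'MSSQLSERVER' -Force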

Monday, January 27, 2014

Start/Stop a Group of Azure VMs

If you've already configured your local machine's PowerShell environment to interact with your Azure subscription (see here: https://azure.microsoft.com/en-us/documentation/articles/powershell-install-configure/), then you can begin saving yourself an awful lot of button-clicking by scripting out the start/stop of your VMs, especially if you have a cluster of Azure VMs for testing and demonstrating SQL Server Availability Groups, as an example.

You will need to change a few things in these scripts to suit your purposes:
  1. Change "YourAzureAccount@hotmail.com" to your Azure account. For MSDN-based Azure subscriptions like mine, this address was my Hotmail email address.
  2. Change "Visual Studio Premium with MSDN" to the name of your Azure Subscription. This was the name my MSDN-based account was given by default.
  3. Populate the $vms variable with a list of the Azure VMs in the cluster you're looking to start/stop as a group, replacing "yourVMName-alwayson-dom" and so forth.
Big thanks and credit for the assist on this blog post to Cody Gros, SharePoint Solutions Architect and my coworker at Sparkhound.

Stop a list of Azure VMs:
#if expired, Add-AzureAccount

Get-AzureSubscription | ? {$_.ActiveDirectoryUserId -eq 'YourAzureAccount@hotmail.com' -and $_.SubscriptionName -match "Visual Studio Premium with MSDN" } | ForEach-Object { Set-AzureSubscription -SubscriptionId $_.SubscriptionID }

$vmHash =@{}
$vms = "yourVMName-alwayson-dom","yourVMName-alwaysonsql1","yourVMName-alwaysonsql2","yourVMName-alwaysonWSFC"

Get-AzureVM | foreach{$vmHash.Add($_.Name,$_.ServiceName)}

foreach ($vm in $vms) {    
    $currentVMService = $vmHash[$vm] 
    Write-Host "Current VM:$($vm)"
    $thisvm = Get-AzureVM -ServiceName $currentVMService -Name $vm
    Write-Host "Stopping VM:$($thisvm.Name)"
    Stop-AzureVM -Name $thisvm.Name -ServiceName $thisvm.ServiceName #-StayProvisioned  
}
A note about the -StayProvisioned switch above: specifying this option will retain some IP settings, but it will cause your VMs to continue to accrue Azure charges even while stopped. Use with care.

Start a list of Azure VMs:
#if expired, Add-AzureAccount


Get-AzureSubscription | ?{$_.ActiveDirectoryUserId -eq 'YourAzureAccount@hotmail.com' -and $_.SubscriptionName -match "Visual Studio Premium with MSDN" } | ForEach-Object { Set-AzureSubscription -SubscriptionId $_.SubscriptionID }

$vmHash =@{}
$vms = "yourVMName-alwayson-dom","yourVMName-alwaysonsql1","yourVMName-alwaysonsql2","yourVMName-alwaysonWSFC"

Get-AzureVM | foreach{$vmHash.Add($_.Name,$_.ServiceName)}

foreach ($vm in $vms) {    
    $currentVMService = $vmHash[$vm] 
    Write-Host "Current VM:$($vm)"
    $thisvm = Get-AzureVM -ServiceName $currentVMService -Name $vm
    Write-Host "Starting VM:$($thisvm.Name)"
    Start-AzureVM -Name $thisvm.Name -ServiceName $thisvm.ServiceName    
}

If upon running the scripts you receive either of these errors:
get-azuresubscription : The term 'get-azuresubscription' is not recognized as the name of a cmdlet, 
Get-AzureVM : The term 'Get-AzureVM' is not recognized as the name of a cmdlet, function, script file, or 

Then you don't have the Azure PowerShell module loaded, and PowerShell isn't automatically loading it for you. Use the Import-Module command. Below is the default location of the module, downloaded from here: https://azure.microsoft.com/en-us/documentation/articles/powershell-install-configure/
Import-Module 'C:\Program Files (x86)\Microsoft SDKs\Azure\PowerShell\ServiceManagement\Azure\Azure.psd1'

Updated 20160322 to reflect the latest module paths

Tuesday, December 03, 2013

Upcoming Special Events for the Baton Rouge SQL Server User Groups

I'm happy to announce a big slate of upcoming events for Baton Rouge User Groups and especially the SQL Server User Group in the next three months:

December 11: Holiday Party at On the Border - free for user group attendees, door prizes from sponsors! http://www.brssug.org/group-news/december13batonrougeusergroupsholidayparty


January 8 '14: Two speakers from Microsoft will present to the User Groups on all the new things happening with Azure. This is going to be big! http://www.brssug.org/group-news/january14batonrougeusergroupsmeeting


February 12 '14: The day of the user groups meeting, we'll have two national Business Intelligence speakers presenting a paid all-day training at the Louisiana Tech Park, then hanging around to present at the SQL Server User Group meeting that night!


Tim Mitchell’s Real World SSIS: A Survival Guide: https://www.eventbrite.com/e/real-world-ssis-a-survival-guide-tickets-9555728441


Bill Pearson's Practical Self-Service BI with PowerPivot for Excel: http://www.eventbrite.com/e/practical-self-service-bi-with-powerpivot-for-excel-tickets-9450327183


Please visit our website for complete details: www.brssug.org
Info on the Tech Park location is here: www.brssug.org/where

Wednesday, April 13, 2011

Lightning Round - SQL Azure Overview

Here is the slide deck for my April 13 presentation at the joint Baton Rouge SQL Server and .NET User Group meeting on the campus of LSU.