pointers, solutions and scripts for the SQL DBA
not intended to replace msdn, common sense or oxford commas

Monday, November 17, 2014

Quickly Add Telnet

Have you found yourself on a server or workstation that doesn't have telnet (typical in new builds and new Azure VMs from the gallery) and needed it to test port connectivity? Did you receive this error when you tried to execute telnet in the command shell or PowerShell?

telnet : The term 'telnet' is not recognized as the name of a cmdlet, function, script file, or operable program.
Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:1

Here are three quick ways to add the Telnet Client to the server you're on right now, each much faster than adding telnet via the Windows Features section of the Control Panel. Note that you must launch PowerShell or Command Prompt as an administrator to run these commands, so non-administrators will need to track down a helpful sysadmin first.

Install telnet via DISM (via cmd/PowerShell):

This option should complete the installation of Telnet in seconds. You may receive an error if a reboot is pending because of another installation.

dism /online /Enable-Feature /FeatureName:TelnetClient
Install telnet via pkgmgr (via cmd/PowerShell):

This option should also complete the installation of Telnet in seconds. PkgMgr was deprecated (along with a few other tools) in favor of DISM starting with Windows 7, but it still works on later operating systems, including Windows 8.1 and Windows Server 2012 R2.

pkgmgr /iu:"TelnetClient"


Install telnet via Install-WindowsFeature (via PowerShell):

This method is just as fast but unfortunately doesn't work on desktop installations (client operating systems) like Windows 7 or 8.1 - and neither does the deprecated cmdlet it replaced, Add-WindowsFeature. For desktop operating systems, go with the first option above.
Import-Module servermanager

Install-WindowsFeature telnet-client
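If installing anything at all is off the table, remember that the port test itself is only a few lines of scripting. Here's a minimal sketch in Python (assuming Python is available on the box; the server name below is a placeholder) that performs the same TCP handshake test telnet is typically used for:

```python
import socket

def check_port(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: is anything listening on SQL Server's default port?
# check_port("yourserver", 1433)
```

A True result only proves the port accepts connections, same as a successful telnet handshake - it says nothing about what's listening.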

Saturday, September 13, 2014

Houston TechFest 2014: SQL Admin Best Practices with DMV's

Awesome crowd this morning for my SQL Admin Best Practices with DMV's presentation at Houston TechFest 2014, thanks very much for attending! There were some exceptionally useful questions raised today, great job to my audience.

Here is the .zip file as promised for my presentation slide deck and .sql files, including the new content for SQL 2014. Download here

Wednesday, September 03, 2014

Restoring Toggle Results Window as Ctrl+R in SSMS

Recent versions of SQL Server Management Studio have unmapped Control+R as the toggle for the Results window in a typical query window. Each time I work on a new SSMS environment or refresh my laptop, I find myself seeking out how to restore this setting.

Here's how, including a picture below.
  1. In SSMS, go to Tools, Options.
  2. Under Environment, click on Keyboard.
  3. In the Show Commands containing box, you're looking for "Window.ShowResultsPane".
  4. Change the "Use new shortcut in" dropdown to "SQL Query Editor." 
    • Note: This step is important - mapping Ctrl+R to "Global" won't work.
  5. Click in the "Press shortcut keys" field and hit Ctrl+R on your keyboard. Click the Assign button.
  6. Click OK.
  7. In your query windows, you can now Ctrl+R to toggle the results window all you like.


Tuesday, September 02, 2014

The Nine SQL Server Careers

Microsoft SQL Server is a mature and broad technology platform that supports a diverse set of careers - this blog post is an attempt to provide technical detail to my personal theory on careers in the Microsoft SQL Server world.

A mid-tier SQL Server professional who may carry the title "Database Administrator" could find career traction with high-level skill in only three or four of these categories. It would be a rare accomplishment to find someone with honest expertise in all nine of these buckets, and most mid-tier SQL Server professionals have significant experience with no more than six of these roles.

It is also important for any IT professional to be aware of his/her limitations. We should all "know what we don't know," so this blog post is an effort to quantify these items. It is also very likely that the modern "DBA" possesses skill sets in .NET and other surrounding technologies which I do not aim to include here.


Tuesday, August 26, 2014

Painful or Helpful? No SSMS Multiserver Results if One Instance Errors

I've been going back and forth with this question for a few days now.

If one instance in a multiserver query raises an error, no resultsets from any of the other instances are displayed, even if "n row(s) affected" appears in the Messages tab.

This is particularly painful when querying dynamic management objects across server groups, because new DMV's and new DMV columns are being introduced in every version, sometimes with service packs.
  • Is this a feature preventing you from using code that won't work against the group as a whole, saving you from accidentally assuming that all instances returned data?
  • Or, is this a pain that causes you to need version-specific registered server groups to get at data from some instances?
There is no right answer to this question, only the opportunity to be aware of this behavior.

There is an old Microsoft Connect item from 2007 for a similar issue that was marked "won't fix" in 2011, so this is intended behavior rather than a bug. I am not sure whether there is any beneficial intent from Microsoft in preventing you from getting any data when one instance in a group has a syntax error.

Imagine if, within a large list of production SQL Servers, a handful are of a prior SQL version where some utility scripts won't work.

Should I then create another registered server group for instances that can run sys.dm_os_volume_stats, or a group for instances that support the new columns added to sys.dm_exec_query_stats? There is no correct answer here either: in the absence of an option to allow partial resultsets to be returned from multiserver queries, you will need to consider what suits your needs best.
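If you do end up maintaining version-specific registered server groups, the bookkeeping amounts to partitioning your instance list by the minimum version a DMV requires. A rough sketch in Python (the instance names and version tuples here are illustrative, not from any real inventory):

```python
# sys.dm_os_volume_stats, for example, requires SQL Server 2008 R2 (version 10.50).

def split_by_min_version(instances, min_version):
    """Partition (name, (major, minor)) pairs by a minimum version requirement."""
    supported, unsupported = [], []
    for name, version in instances:
        (supported if version >= min_version else unsupported).append(name)
    return supported, unsupported

servers = [
    ("sqldemo1\\sql2008", (10, 0)),
    ("sqldemo1\\sql2008r2", (10, 50)),
]
ok, too_old = split_by_min_version(servers, (10, 50))
print(ok)       # ['sqldemo1\\sql2008r2']
print(too_old)  # ['sqldemo1\\sql2008']
```

Python's tuple comparison handles the (major, minor) ordering for free, which is why versions are modeled as tuples rather than strings.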

Here are the steps to reproduce a simple scenario:

1. Add a SQL 2008 instance and a SQL 2008 R2 instance to a new Registered Servers group.
2. Create a new multiserver query.
3. Attempt to query a DMF or DMV that was introduced in SQL 2008 R2, such as sys.dm_os_volume_stats. (Sample script below.)
4. The Messages tab includes one error and one rows-affected message, like below. No Results tab is returned, so the rows are not visible even though the query succeeded on the SQL 2008 R2 instance.
sqldemo1\sql2008(domain\user): Msg 208, Level 16, State 1, Line 2
Invalid object name 'sys.dm_os_volume_stats'.
sqldemo1\sql2008r2(domain\user): (2 row(s) affected)
Sample script:
select distinct
   vs.volume_mount_point,
   vs.file_system_type,
   drive_size_GB = convert(decimal(19,2), vs.total_bytes/1024./1024./1024.),
   drive_free_space_GB = convert(decimal(19,2), vs.available_bytes/1024./1024./1024.),
   drive_percent_free = convert(decimal(9,2), vs.available_bytes * 100.0 / vs.total_bytes)
from
   sys.master_files as f
cross apply
   sys.dm_os_volume_stats(f.database_id, f.file_id) as vs

I have confirmed this behavior in Microsoft SQL Server Management Studio 12.0.2000.8, 11.0.3128.0 and 10.50.4000.0.

Thursday, August 21, 2014

SQLSat Birmingham: SQL Server Admin Best Practices with DMVs

Thanks for attending my session today SQL Server Admin Best Practices with DMVs at 8AM!

Here's the link to a .zip file to download the slide deck and all sample files: http://1drv.ms/VH3mda

SQLSat Birmingham: SQL Server Permissions and Security Principals

Thanks for attending my session today, SQL Server Permissions and Security Principals at 9:15AM!

Here's the link to a .zip file to download the slide deck and all sample files: http://1drv.ms/VH3lpx

Tuesday, August 19, 2014

SQL Saturday Birmingham 2014

I'm looking forward to geauxing with a handful of Sparkhound colleagues and friends to speak at SQL Saturday Birmingham on August 23 at Samford University.

Fun trivia fact: this will be the first time I'll have spoken at a SQL Saturday east of Baton Rouge since SQL Saturday Pensacola stopped happening annually after 2012 - but the second time I'll have spoken in the state of Alabama this year.

I'll be presenting the first two sessions of the DBA track at 8AM and 9:15AM on the topics of SQL Admin Best Practices with DMV's and then SQL Server Permissions and Security Principals.

My colleague Cody Gros will be speaking on PowerShell Fundamentals for SharePoint 2010/2013 at 10:30AM in the PowerShell room, then we'll get together for the Sparkhound lunch session at 11:45AM, also in the PowerShell room. My wife will then be giving two presentations after lunch in the Prof Dev \ Misc Room as well, so I'll be there for the duration!

I'll see you there! Don't forget to print your SpeedPASS doc!

Here's the links to downloads for my two presentations:

SQLSat Birmingham: SQL Server Permissions and Security Principals

SQLSat Birmingham: SQL Server Admin Best Practices with DMVs


Sunday, August 10, 2014

SQLSaturday Baton Rouge Business Intelligence and Recap - 2 of 2

(This is the second part of a two-part post on a SQLSaturday Baton Rouge 2014 recap.)

Out of a total of 614 attendees in house on August 2, 264 responded to our post-event, cross-platform flurry of online survey invitations via SurveyMonkey. After reviewing this valuable feedback, post-event surveys are definitely something we should have been doing all along. We got some great feedback and testimonials, like these actual responses:
  • It was great networking, learning, fun and there were light-sabers.
  • Some very good talks with great content, large community of very smart, talented developers, great networking, and awesome SWAG/goodies
  • Great networking opportunity, good way to get to know the community. Some talks were really excellent presentations on state of the art database techniques.
  • Lots of companies with great networking opportunities; lots of free training and free stuff. High quality and free is very unique.
  • It was a fun way to network and learn about how everyone is using technology that we can bring back to our own organizations. It is a chance to learn something new, and meet new people.
  • My first sql saturday and certainly not my last. Had a very nice time.
We also got some great info on what we, our speakers, and our sponsors could do better in 2015. You'd better believe we are listening and will follow up on it. I highly recommend that SQLSaturday organizers follow up their events with a survey like this.

As for what sessions people want to see next year? More demand for content at both ends of the spectrum, from expert level to beginner, plus lots of requests for new technology topics, summarized by these actual responses:
  • I would like to see more "deep dive" presentations. Those submitted were good, but would like to see some that go more in-depth. Some did, but several were very high level. It's good to have variety, but I feel there could have been more "deep dive" sessions.
  • More advanced areas in each track 
  • more intermediate or advanced BI/BA
  • Cover beginning concepts to persons new to SQL
  • Basic SQL Server Indexing, Generally more SQL Server basics
  • Even the "beginner" level seminars were way above the heads of people like me who are truly beginners and who don't yet have real world experience with SQL. A few sessions truly targeting newcomers to the field would help.
  • Anything new and upcoming to keep me up to date with technology.
  • SQL 2014 specific stuff
  • Machine Learning
  • Underwater basketweaving
We asked attendees about their attendance during the day:

Response                                  Percent  Count
I was there all day!                      75.10%   196
I missed some of the afternoon sessions.  19.54%   51
I missed some of the morning sessions.    5.36%    14
Total                                              261

We asked attendees how we could best reach them to tell them about SQLSaturday Baton Rouge 2015:

Response                                                    Percent  Count
Email from SQLSaturday.com (back in March)                  34.10%   89
Social Media (Facebook, G+, LinkedIn, Twitter, FriendFace)  18.77%   49
A colleague at work                                         11.88%   31
User Group meeting                                          8.81%    23
Email from the User Group or tech community                 8.81%    23
Friend                                                      6.13%    16
Events Calendar from media outlet (newspaper, TV, radio)    4.21%    11
My manager                                                  3.83%    10
Other (please specify)                                      3.45%    9
Someone I manage                                            0.00%    0
Total                                                                261

We asked attendees who their favorite booth was (by any measure they liked), and envoc's shag-carpeted, comfy-couch swag pad was the winner:


We asked who was the favorite speaker at the event, and the world-travelling Kevin Boles and his two auditorium-sized SQL Dev sessions carried the day. However, a whopping 37 different speakers received at least two votes as "favorite speaker", so great job to this year's 13-track, five-session schedule of speakers!


Speaker          Percent  Count
Kevin Boles      8.2%     19
Raghu Dodda      5.6%     13
Jennifer McCown  5.6%     13
Brian Rigsby     5.2%     12
Sean McCown      4.7%     11
George Mauer     4.3%     10
Cherie Sheriff   4.3%     10
Carlos Bossy     4.3%     10
Thomas Allsup    3.9%     9
Mike F. Robbins  3.4%     8

We asked about our jambalaya lunch (from Pot & Paddle Jambalaya Kitchen):

Response                            Percent  Count
Lunch was great!                    63.01%   155
Lunch was just OK.                  23.17%   57
I chose not to eat the free lunch.  10.98%   27
I missed lunch.                     2.85%    7
Total                                        246

We got lots of good feedback about improving next year, and plenty of praise for this year. Too much to summarize here, other than: rest assured, we hear you!

Finally, on the suggestion of my wife with a business degree, we asked the classic "Would you recommend SQL Saturday" question. 



'Nuff said.


SQLSaturday Baton Rouge Business Intelligence and Recap - 1 of 2

(This is the first part of a two-part post on a SQLSaturday Baton Rouge 2014 recap.)

It's been quiet on this blog the past few weeks as the crush of one of the world's largest SQLSaturday events arrived.

SQLSaturday #324 Baton Rouge this past Saturday was a big success for us and made me very proud of my alma mater (which made attendance mandatory for the entire Masters of Science in Analytics program), my city (except for its traffic), our 2014 sponsors who filled the atrium with booths, and the crew of the Baton Rouge SQL Saturday planning committee, who worked to clean up two full-size trailer loads of garbage. A big public and personal thanks to all of the volunteers and planning committee members who helped this year!

We had 614 verified attendees in 2014. This is a great accomplishment for Baton Rouge and a 9.4% increase from last year's actual attendance. Here's our photo album of the event.

Here are the final numbers on some of the business intelligence we worked up using registration data from the last five years of Baton Rouge SQL Saturday events, which includes every year but our first event in 2009.

Note in the first graph below how we registered 274 attendees in the final workweek before the event; that's 31% of our total registrants and 45% of our actual attendance. Look for the plateaus in the lines - those are weekends.

Building on my feelings from last year at this time, I believe that doubling the number of registered attendees one month away from the event is a decent indicator of actual attendance. In this case, roughly 300 registered on July 2 turned into 614 folks through the doors.


Registered volunteers this year rose to meet the occasion. We had new volunteers, old and young, whom we'd never met before, hauling garbage out the back door well after the raffle ended. I can't thank them enough, and my tired feet are still grateful for everyone's help moving and cleaning.


Curious when your folks register? Tuesday is the most popular day to register, but Monday at 9AM and Friday at 9AM are the hottest hours of the week. This data was compiled over five years of registration data. Note the impressive spike at 11AM on Saturday morning - prime lawn-cutting time turned into productive stall tactics?


Twitter is staggeringly popular among the SQL Saturday speakers - many of whom were pulling their hair out as I-10 East crawled to a halt on Friday afternoon before the much-anticipated banquet at Baton Rouge's best churrascaria. This was the first year that more than a quarter of our attendees registered with their Twitter accounts, and judging by activity on #SQLSatBR it was a popular event.


This isn't a big surprise - Louisianans overwhelmingly make up the attendees of Baton Rouge SQL Saturday...



 ... but this part I am proud of. As we continue to grow the conference to its (theoretical?) limit, the percentage of locals in our natural-gas-boom-bound city is rising, this year overtaking the number of folks from outside the city. That's word-of-mouth marketing for you.


Finally, and this may be related to some of the anonymous feedback we received in our post-event SurveyMonkey, we saw vegetarians make a resurgence in attendance after slipping in each of the past three years. It looked like they were headed for extinction until a big bounceback in 2014. Perhaps this should have prepared us for an anonymous comment excoriating us for the lack of cruelty-free vegetarian lunch alternatives.

Thanks to everyone who was a part of SQL Saturday Baton Rouge! There is a post-mortem meeting this week to wrap up the 2014 event and begin prep for the 2015 event; if you'd like to get involved, please email sqlpassbr at gmail com.

(This is the first part of a two-part post on a SQLSaturday Baton Rouge 2014 recap... continue to part two)

Thursday, July 10, 2014

SQL Saturday Baton Rouge 2014


Since I have been involved with (and eventually headed) the Baton Rouge SQL Saturday planning committee each year since 2009, my Julys have been pretty hectic: putting together orders for shirts, supplies and drinks, coordinating speaker communication, wrangling volunteers, working with sponsors, etc. But each year I am part of an awesome team of volunteers and sponsors from the Baton Rouge technical community, and our efforts culminate in the annual SQL Saturday Baton Rouge event, one of the largest and longest-running annual SQL Saturday events in the world!

SQL Saturday is a global event to bring Information Technology speakers and professionals together for a community-driven, community-attended free day of technical training. The Baton Rouge SQL Saturday 2014 event is produced by the Baton Rouge User Groups, and will be our sixth annual event on campus at LSU. We expect ~600 regional IT professionals and national speakers to join us.

This free conference is open to the public and is perfect for students, database administrators, .NET developers, business intelligence developers, SharePoint admins and developers, IT managers, server admins, network admins, and job-seekers.


We've got almost 70 one-hour sessions in 13 tracks and a broad range of expertise in technologies coming to our event at the LSU Business Education Complex on August 2.

SQL DBA Foundations
DBA Advanced
Business Intelligence
SQL Dev
.NET 1
.NET 2 / ALM
Web / Mobile Dev
SharePoint
IT Pro
PowerShell
Career Development
CIO / IT Manager
Special Blue Cross Blue Shield Track

Register today! sqlsaturday.com/324/

"What You Need To Know About Being a DBA and What You Need To Know That You Don’t Know Yet"

Last night was a big success for the first annual Baton Rouge User Groups Networking Night, an all-user-groups invited event that saw members of the Baton Rouge SQL Server, .NET, Java, IT Pro, SharePoint, Women in IT user groups, plus members of user groups in Hammond, Lafayette and New Orleans (all about one hour away) join for a night of short-format speeches on career and professional development. We also were joined by a bunch of new folks from the public who'd never been to a user group meeting before - they picked a great first meeting to attend!



To the ~60 folks in attendance last night, the other seven speakers who made it a great event, our two Platinum SQL Saturday Baton Rouge sponsors, all my Sparkhound colleagues who joined me to speak, my wife who was volun-told into speaking from an HR perspective, and my friends in the user group community: thanks for being a part of a great event last night!

I was one of the eight community speakers and enjoyed presenting a thrown-together talk with a simple and concise title: "What You Need To Know About Being a DBA and What You Need To Know That You Don’t Know Yet". It was a ton of fun to present and share in the laughter!

It's a short slide deck and I pushed my 10-minute window a little bit, but thankfully our timer and stage-hook-master Jeremy Beckham from BCBS allowed some lenience.

If you are curious to see the "big list" of things you need to know to be a DBA, here's a link to download my PowerPoint slidedeck.

Saturday, June 21, 2014

Baton Rouge SharePoint Saturday 2014: SQL Server Best Practices for SharePoint Databases

Thanks for joining us at the first annual Baton Rouge SharePoint Saturday!

Here is my presentation for the 1:15 presentation in Room 1, "SQL Server Best Practices for SharePoint Databases". Great discussion, great questions!

Click here to download the PowerPoint 2013 presentation.



Monday, June 16, 2014

Baton Rouge SharePoint Saturday

I'm thrilled to be joining many of my colleagues at Baton Rouge SharePoint Saturday this weekend at the Louisiana Technology Park. My friend and Sparkhound coworker Cody Gros, and the Baton Rouge SharePoint User Group, are bringing SharePoint Saturday to Baton Rouge for the first time.

SharePoint administrators, end users, architects, developers, and other professionals that work with Microsoft SharePoint Technologies will meet for the 1st Baton Rouge SharePoint Saturday Event on June 21, 2014. Similar to the SQL Saturday concept, SharePoint Saturday Events (SPS Events) are free, one-day events held in cities around the world.

I'll be presenting "SQL Server Best Practices for SharePoint Databases" in the IT Pro track on Saturday, I look forward to seeing you there!

Thursday, April 24, 2014

Using a Filtered Index to Enforce Filtered Uniqueness

One of the advantages and uses of Filtered Indexes (introduced with SQL Server 2008) is to provide filtered uniqueness. Obviously this has some implications, so please understand what you're trying to accomplish.

In the example below, we have a compound natural key of the two fields key1 and key2. The field bit1 is not a member of the natural key, but does inform us as to the row's usage. Perhaps bit1 indicates that the record is Active (1) or Inactive (0), whether the data is Confirmed (1) or Unconfirmed (0), or Deleted (0) or Not Deleted (1).

In any case, we only want to enforce uniqueness when bit1 = 1, which implies:
  1. This value is regularly filtered for use where bit1 = 1
  2. We don't care whether there are duplicate records for when bit1 = 0.
In this way, you could "deactivate" (in business terms) a record by setting bit1 = 0, without violating your natural key's uniqueness on (key1, key2).
drop table dbo.tabletestdups 
go
create table dbo.tabletestdups 
( key1 int not null
, key2 int not null
, bit1 bit not null
)
go
create unique nonclustered index idx_nc_u_f_tabletestdups_key1_key2_bit1 on dbo.tabletestdups (key1, key2, bit1) 
go
insert into dbo.tabletestdups (key1, key2, bit1) values (1,1,1) --succeed
insert into dbo.tabletestdups (key1, key2, bit1) values (2,2,1) --succeed
insert into dbo.tabletestdups (key1, key2, bit1) values (3,3,1) --succeed
insert into dbo.tabletestdups (key1, key2, bit1) values (1,1,0) --succeed
insert into dbo.tabletestdups (key1, key2, bit1) values (2,2,0) --succeed
insert into dbo.tabletestdups (key1, key2, bit1) values (3,3,0) --succeed
go
insert into dbo.tabletestdups (key1, key2, bit1) values (1,1,1) --fails
go
insert into dbo.tabletestdups (key1, key2, bit1) values (1,1,0) --fails
go
drop index idx_nc_u_f_tabletestdups_key1_key2_bit1 on dbo.tabletestdups 
go
create unique nonclustered index idx_nc_u_f_tabletestdups_key1_key2_bit1 on dbo.tabletestdups (key1, key2, bit1) WHERE bit1 = 1 --Note the important WHERE clause here at the end of the index.
go
insert into dbo.tabletestdups (key1, key2, bit1) values (1,1,1) --fails
go
insert into dbo.tabletestdups (key1, key2, bit1) values (1,1,0) --succeeds because the unique constraint only enforces bit1 = 1.
go
select * from dbo.tabletestdups

Note that rows 4 and 7 were allowed as duplicate combinations of key1 = 1, key2 = 1, bit1 = 0, but previous attempts to insert a duplicate combination of key1 = 1, key2 = 1, bit1 = 1 failed.
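As a mental model, the filtered unique index behaves like a set that only tracks "active" keys: rows with bit1 = 0 are invisible to the uniqueness check. Here's a rough Python analogy of that behavior (illustrative only - this is not how the engine implements it):

```python
class FilteredUniqueSet:
    """Enforce uniqueness of (key1, key2) only for rows where bit1 = 1."""
    def __init__(self):
        self._active_keys = set()

    def insert(self, key1, key2, bit1):
        if bit1 == 1:
            if (key1, key2) in self._active_keys:
                raise ValueError("duplicate active key (%s, %s)" % (key1, key2))
            self._active_keys.add((key1, key2))
        # rows with bit1 = 0 are filtered out of the uniqueness check entirely

t = FilteredUniqueSet()
t.insert(1, 1, 1)  # succeeds
t.insert(1, 1, 0)  # succeeds: duplicates allowed while "inactive"
```

"Deactivating" a row in business terms (setting bit1 = 0) simply removes it from the guarded set, which is exactly what the WHERE bit1 = 1 clause accomplishes in the index.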

Filtered indexes also provide some powerful tuning and storage advantages for commonly-accessed search predicates, with some caveats. Here is an excellent article explaining the caveats: http://blogs.msdn.com/b/timchapman/archive/2012/08/27/the-drawback-of-using-filtered-indexes.aspx

Wednesday, April 23, 2014

Real heavyweights: Float vs Decimal, the Thrilla in Precision

This is a followup to a previous post where I likened the SQL Server float data type to Muhammad Ali. Specifically, I wanted to put the phrase "negligible data storage increase" to the test for different numeric data types.

In short, I will dismiss the notion that float provides a realistic advantage in storage of large numbers. (Also, I had a blog title idea too catchy to pass on.)

I'll avoid recapping in total my earlier post about the dangerous consequences of rounding with the float data type. The float data type is still popular with developers and database designers because of its supposed advantages in data storage size. (Also, some code-first modelers use float as a default. Also dangerous. And lazy.)

Too often, developers and database designers ignore the large potential for inconsistent rounding with the float (and real) data types, whereas a decimal(p,s) data type is not only consistent in storage, it provides transparent logical intent to the development team. No surprises with truncation of numerals to the right of the decimal point with the decimal data type.
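The rounding hazard is easy to reproduce outside SQL Server, too. This short Python demonstration round-trips the same test value through 32-bit IEEE 754 (the storage format of real/float(24)) and through an exact decimal type:

```python
import struct
from decimal import Decimal

value = 123456.123

# Store-and-retrieve through single precision, as real / float(24) would:
as_real = struct.unpack("<f", struct.pack("<f", value))[0]

# An exact decimal type keeps every digit, as decimal(9,3) would:
as_decimal = Decimal("123456.123")

print(as_real)     # 123456.125 -- silently off by 0.002
print(as_decimal)  # 123456.123
```

The single-precision value lands on the nearest representable multiple of 2^-7 in that range, which is why the thousandths digit shifts with no warning.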

Here's my test lab for a number with 3 digits to the right of the decimal point.

create table realtest (real1 float(24))
create table floattest (float53 float)--float(53) is the default
create table decimal9test (decimal9 decimal(9,3))
create table decimal19test (decimal19 decimal(19,3))
go
insert into realtest (real1) Values (123456.123)
insert into floattest (float53) Values (123456.123)
insert into decimal9test (decimal9) Values (123456.123)
insert into decimal19test (decimal19) Values (123456.123)
go
insert into realtest (real1) select real1 -1. from realtest
insert into floattest (float53) select float53 -1. from floattest
insert into decimal9test (decimal9) select decimal9 -1. from decimal9test
insert into decimal19test (decimal19) select decimal19 -1. from decimal19test
go 21
go
select sizemb = SUM(sizemb), tablename from (
select SizeMb = (p.in_row_reserved_page_count*8.)/1024.,
tablename = o.name, indexname = i.name
from sys.dm_db_partition_stats p
inner join sys.objects o on p.object_id = o.object_id
inner join sys.indexes i on i.object_id = o.object_id and i.index_id = p.index_id
where o.is_ms_shipped = 0
) x
where tablename in ('floattest', 'realtest', 'decimal9test', 'decimal19test')
group by tablename
order by sizemb desc
go
select top 1 * from realtest
select top 1 * from floattest
select top 1 * from decimal9test
select top 1 * from decimal19test
go

First, the resultsets showing the storage of 2+ million rows in a single-column table.

tablename      row_count  sizemb
realtest       2097152    26.382812
floattest      2097152    34.445312
decimal9test   2097152    28.382812
decimal19test  2097152    36.507812
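For the curious, the 2+ million row count comes from the GO 21 batch separator in the script above: each pass of the INSERT ... SELECT batch doubles the table, so one seed row becomes 2^21 rows.

```python
rows = 1
for _ in range(21):  # GO 21 executes the doubling batch 21 times
    rows *= 2        # each pass re-inserts a copy of every existing row
print(rows)  # 2097152
```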

Here are the values that were actually stored; notice the important differences.

real1      123456.1
float53    123456.123
decimal9   123456.123
decimal19  123456.123

You can see that the ~10mb we saved with the real (or float(24)) data type isn't much good because of the catastrophic loss of precision. And even with 2 million rows, we've saved roughly 2mb of space by using float.

Now, let's run the same test with the value 123456.1234567. (The decimal9test table has been eliminated from this result because decimal(9,3) holds only nine digits of precision, too few for this value.)

create table realtest (real1 float(24) )
create table floattest(float53 float) --float(53) is the default
create table decimal19test(decimal19 decimal(19,7) )
...
tablename      row_count  sizemb
realtest       2097152    26.382812
floattest      2097152    34.445312
decimal19test  2097152    36.507812

real1      123456.1
float53    123456.1234567
decimal19  123456.1234567

Note again that real is wholly unacceptable with its glaring loss of precision, while float and a properly-sized decimal data type both store the value appropriately, without a significant difference in storage (less than 6% difference).

Now, to slam the door on float.

Let's apply DATA_COMPRESSION = PAGE to this data.
create table realtest (real1 float(24) ) with (data_compression = page)
create table floattest(float53 float) with (data_compression = page)
create table decimal19test(decimal19 decimal(19,7) )with (data_compression = page)
...

tablename      row_count  sizemb
floattest      2097152    26.382812
decimal19test  2097152    24.320312

We can see clearly that page compression reverses the small advantage that floattest enjoyed.

Since page data compression is a very popular, very widespread and very useful tool on modern Enterprise edition SQL Servers, we can rule out float as having any space saving advantage. In short, float loses the last ground it stood on.

The real and float data type columns compressed by only 15% and 23%, respectively. The fixed decimal data type compressed, in this case, by roughly 33% (36.5mb to 24.3mb).

If we scale the page compressed data comparison up to 134 million rows...

tablename      row_count  sizemb
floattest      134217728  1686.070312
decimal19test  134217728  1556.015625

...we confirm that floattest has lost its so-called space-savings advantage, and is now more than 8% larger than the same data in a proper decimal data type.

Saturday, March 29, 2014

South Alabama Day of .NET Presentations Downloads

I have been running into longtime SQL Saturday friend Ryan Duclos at SQL Saturday events in Baton Rouge and Pensacola since the early SQL Saturday days, so when he headed up a Day of .NET in Mobile, AL at the University of South Alabama, I was happy to join the agenda.

Big thanks and major kudos to Ryan (and his wife) for organizing #SADoDN (or "Sad Odin", LOL). A job well done! Also thanks to all who attended and made it a great event!

I presented two sessions:

9:30am
SQL Admin Best Practices with DMV's [Beginner] Download slidedeck and sample scripts here

11am
SQL Server Permissions and Security Principals [Beginner] Download slidedeck and sample scripts here

Thursday, March 27, 2014

South Alabama Day of .NET 2014

About 200 miles east of Baton Rouge there lies a beachside town called Mobile, AL that will settle down long enough from spring break to host the first South Alabama Day of .NET at the University of South Alabama. The wife and I are looking forward to presenting at the event on Saturday, March 29, where we will both be giving back-to-back presentations in the morning.

In the Business Intelligence/Database track, look for:

  SQL Admin Best Practices with DMV's [Beginner] In this practical and script-focused session, we'll discuss many best practices regarding SQL Server administration, maintenance, optimizations and monitoring using Dynamic Management Views. DMV's are essential tools for the SQL administrator, but have a wide range of applications by developers, network engineers and DBAs. We will delve into best practices for server maintenance, index fragmentation, wait type analysis and future features. This session will be informative and eye-opening to anyone from a junior DBA on up.

  SQL Server Permissions and Security Principals [Beginner] A review of SQL Server permissions, starting with the basics and moving into the security implications behind stored procedures, views, database ownership, application connections, consolidated databases, application roles and more.

See you there!

Thursday, February 27, 2014

Data Architecture Virtual Group - Whiteboard Data Architecture Presentation

Thanks to all 130+ folks who joined, suffered my jokes and handwriting, and asked fantastic questions during my presentation to the SQL PASS Data Architecture Virtual Group meeting.

We covered three large project templates, including an end-to-end application and Data Warehouse, a SharePoint-list driven Data Warehouse, and an "Access Jailbreak" complete with a rude drawing.

Because many people asked: I presented "whiteboard" style using a Wacom UCTH470 Bamboo Capture Pen & Touch Tablet and a regular stylus (not powered, no "eraser"). Aside from some hiccups with OneNote 2013 recognizing my handwriting as text and switching to text mode (I think that's the reason), the illustration-style presentation went well.  I am no artist or penmanship champion, but I hope this was at the very least a thought-provoking exercise!

To download my whiteboards from today's presentation in PDF format, click here.

To view the recording of the video on Youtube: http://youtu.be/9VRQtkwtD6U



Wednesday, February 26, 2014

Acadiana SQL Server User Group - Introduction to SQL Server Security Principals and Permissions

Great crowd last night and a great job by organizers Glenda Gable and Jill Joubert for the very first meeting of the Acadiana SQL Server User Group! I'm looking forward to seeing another solid SQL Server networking community in place and growing, just an hour west of us in Baton Rouge. If you're a fan, Glenda will be our speaker for the Baton Rouge User Group on March 12.

Download my slide deck and sample scripts to show the abstraction of permissions by stored procedures and views here.

Monday, February 24, 2014

Challenges with Downgrading TFS 2010 from SQL Enterprise to SQL Standard Edition

I recently had the challenge of migrating an installation of Microsoft Team Foundation Server 2010 from a shared SQL Server 2008 R2 Enterprise instance to its own SQL Server 2008 R2 Standard instance on a new server.

Because of some Enterprise edition-specific feature usage, this turned out to have two problems during the restoration:
  • Use of database compression on some TFS 2010 tables 
  • Use of perspectives in the TFS 2010 SSAS database "TFS_Analytics" 
Neither feature is available in Standard edition, and conveniently, the error messages when restoring the SQL or SSAS databases from Enterprise to Standard clearly indicated this. After making the client aware, the decision was made to try to remove these features from their TFS installation.

After removing these Enterprise features from copies of the databases, I was able to back up and restore the copied databases (during the migration outage) to the new server without any more edition failures.
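As a first step, it helps to confirm exactly which Enterprise-only features a database is using before attempting the restore. SQL Server 2008+ provides a DMV for exactly this purpose; run it in the context of each database you plan to migrate (the database name below is a placeholder):

```sql
USE YourTFSCollectionDB; -- substitute the database to be migrated
GO
-- Lists edition-specific features persisted in this database,
-- e.g. Compression, Partitioning, TransparentDataEncryption.
-- An empty result set means the database is safe to restore to Standard.
SELECT feature_name, feature_id
FROM sys.dm_db_persisted_sku_features;
```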

Here's how:

Remove Compressed Indexes

You may encounter this error if you attempt to restore any database that uses data compression from Enterprise to Standard edition:

cannot be started in this edition of SQL Server because part or all of object 'foo' is enabled with data compression or vardecimal storage format

Here is a script to loop through all tables for compressed partitions (either heaps or indexes) and REBUILD them with DATA_COMPRESSION = NONE. Note that this must be run on the Enterprise Edition instance (SQL 2008 or higher), since data compression itself is an Enterprise feature.

--Enable WITH (ONLINE = ON) if possible

Declare @sqltext nvarchar(1000), @tbname sysname, @index_id int, @index_name sysname

Declare tbl_csr cursor
FOR
select name = '['+s.name+'].['+o.name+']', p.index_id, i.name
from sys.partitions p
inner join sys.objects o on o.object_id = p.object_id
inner join sys.schemas s on s.schema_id = o.schema_id
inner join sys.indexes i on i.index_id = p.index_id and i.object_id = o.object_id 
where p.data_compression <> 0
and o.type_desc <> 'INTERNAL_TABLE'

Open tbl_csr

Fetch Next from tbl_csr into @tbname, @index_id, @index_name

 While (@@FETCH_STATUS=0)
 Begin

   If @index_id =0 
    begin
     --catches heaps
     set @sqltext=N'ALTER Table '+@tbname + N' REBUILD WITH (DATA_COMPRESSION=NONE)' --, ONLINE = ON
     exec sp_executesql @sqltext
     print 'rebuild heap ' + @tbname 
    end
   else
    begin
     set @sqltext=N'ALTER INDEX ['+@index_name+N'] ON '+@tbname + N' REBUILD WITH (DATA_COMPRESSION=NONE)' --, ONLINE = ON
     exec sp_executesql @sqltext
     print 'rebuild index ' + @tbname 
    end

   Fetch next from tbl_csr into @tbname, @index_id, @index_name

 End

Close tbl_csr
Deallocate tbl_csr

Below is a demo you can use to simulate the script as it finds clustered indexes, nonclustered indexes, and heaps to rebuild appropriately, while also ignoring XML indexes (which could present a problem if you attempt a blanket ALTER INDEX ALL ... REBUILD).
use adventureworks
go

--test lab
if not exists (select 1 from sys.schemas where name = 'testschema')
exec (N'create schema testschema')

go
if exists (select 1 from sys.objects where name = 'testfeature_index') 
drop table testschema.testfeature_index
go
create table testschema.testfeature_index (id int not null identity(1,1) primary key , bigint1 bigint not null, xml1 xml null)
insert into testschema.testfeature_index (bigint1) values (123456789123456789),(1234567891234567891),(1234567891234567892),(1234567891234567893)

go
set nocount on 
insert into testschema.testfeature_index (bigint1)
select bigint1+5 from testschema.testfeature_index 
go 10
set nocount off
alter index all on testschema.testfeature_index rebuild with (data_compression = page)
create nonclustered index idx_nc_testfeature1 on testschema.testfeature_index (bigint1) with (data_compression = page)
create primary xml index idx_nc_testfeaturexml1 on testschema.testfeature_index (xml1) 
create xml index idx_nc_testfeaturexml2 on testschema.testfeature_index (xml1)  USING XML INDEX idx_nc_testfeaturexml1 FOR PATH
go
if exists (select 1 from sys.objects where name = 'testfeature_heap') 
drop table testschema.testfeature_heap
go
create table testschema.testfeature_heap (id int not null identity(1,1)  , bigint1 bigint not null)
insert into testschema.testfeature_heap (bigint1) values (123456789123456789),(1234567891234567891),(1234567891234567892),(1234567891234567893)
go
set nocount on 
insert into testschema.testfeature_heap (bigint1)
select bigint1+5 from testschema.testfeature_heap 
go 10
set nocount off
go
alter table testschema.testfeature_heap rebuild  with (data_compression = PAGE)
create nonclustered index idx_nc_testfeature1 on testschema.testfeature_heap (bigint1) with (data_compression = page)

go

--Enable WITH (ONLINE = ON) if possible

select name = '['+s.name+'].['+o.name+']', case p.index_id when 0 then 'Heap' when 1 then 'Clustered Index' else 'Index' end
from sys.partitions p
inner join sys.objects o on o.object_id = p.object_id
inner join sys.schemas s on s.schema_id = o.schema_id
where p.data_compression <> 0
and o.type_desc <> 'INTERNAL_TABLE'
go
Declare @sqltext nvarchar(1000), @tbname sysname, @index_id int, @index_name sysname

Declare tbl_csr cursor
FOR
select name = '['+s.name+'].['+o.name+']', p.index_id, i.name
from sys.partitions p
inner join sys.objects o on o.object_id = p.object_id
inner join sys.schemas s on s.schema_id = o.schema_id
inner join sys.indexes i on i.index_id = p.index_id and i.object_id = o.object_id 
where p.data_compression <> 0
and o.type_desc <> 'INTERNAL_TABLE'

Open tbl_csr

Fetch Next from tbl_csr into @tbname, @index_id, @index_name

 While (@@FETCH_STATUS=0)
 Begin

   If @index_id =0 
    begin
     --catches heaps
     set @sqltext=N'ALTER Table '+@tbname + N' REBUILD WITH (DATA_COMPRESSION=NONE)' --, ONLINE = ON
     exec sp_executesql @sqltext
     print 'rebuild heap ' + @tbname 
    end
   else
    begin
     set @sqltext=N'ALTER INDEX ['+@index_name+N'] ON '+@tbname + N' REBUILD WITH (DATA_COMPRESSION=NONE)' --, ONLINE = ON
     exec sp_executesql @sqltext
     print 'rebuild index ' + @tbname 
    end

   Fetch next from tbl_csr into @tbname, @index_id, @index_name

 End

Close tbl_csr
Deallocate tbl_csr


go

select name = '['+s.name+'].['+o.name+']', case p.index_id when 0 then 'Heap' else 'Index' end
from sys.partitions p
inner join sys.objects o on o.object_id = p.object_id
inner join sys.schemas s on s.schema_id = o.schema_id
where p.data_compression <> 0
and o.type_desc <> 'INTERNAL_TABLE'

Remove Perspectives from SSAS Database

You may encounter this issue when restoring any SSAS database from Enterprise to Standard edition, not just with TFS.

Errors related to feature availability and configuration: The 'Perspectives' feature is not included in the '64 Bit Standard Edition' SKU.

The solution is multi-step but straightforward.

Here's a breakdown of the steps. The XMLA code to accomplish this will follow:

  1. Backup the SSAS database (TFS_Analytics) on the Enterprise SSAS instance.
  2. Restore the SSAS database with a new name (TFS_Analytics_std) to a new DbStorageLocation on the Standard SSAS instance.
  3. In Management Studio Object Explorer, script out the database as an ALTER statement. Find the <Perspectives> section of the code. (Note - "Perspectives" is plural.) Drag and select to the </Perspectives> tag. Be sure to capture all the <Perspective>...</Perspective> sections. Delete.
    • Easier way? Collapse the box to the left of the <Perspectives> tag. Select the collapsed line for the <Perspectives> tag. Delete.
  4. Execute the XMLA script. (Hit F5.) This will remove the perspectives from the database.
  5. Backup the TFS_Analytics_std database to a new location.
  6. Restore the backup of the TFS_Analytics_std database to the Standard Edition server, renaming the database back to TFS_Analytics.
Code examples below. Be aware that you may need to apply the <AllowOverwrite>true</AllowOverwrite> element to overwrite any .abf files during a backup, or databases during a restore. For safety reasons, this option has been set to false for these code examples.


  1. On the old Enterprise server, backup the SSAS database (TFS_Analytics).
    <Backup xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
     <Object>
      <DatabaseID>Tfs_Analysis</DatabaseID>
     </Object>
     <File>M:\MigrationBackups\TFS_analysis_migrationbackup_2014.abf</File>
     <AllowOverwrite>false</AllowOverwrite>
    </Backup>
  2. On the old Enterprise server, restore the SSAS database with a new name (TFS_Analytics_std) to a new DbStorageLocation.
    <Restore xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
     <File>M:\migrationbackups\TFS_analysis_migrationbackup_2014.abf</File>
     <DatabaseName>TFS_Analysis_std</DatabaseName>
     <DbStorageLocation xmlns="http://schemas.microsoft.com/analysisservices/2008/engine/100/100">m:\migrationbackups\</DbStorageLocation>
    </Restore>
  3. In Management Studio Object Explorer, script out the database as an ALTER statement.
  4. Find the <Perspectives> section of the code and remove it.
  5. Execute the XMLA script. (Hit F5.) This will remove the perspectives from the database.
  6. On the old Enterprise server, backup the TFS_Analytics_std database to a new location.
    <Backup xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
     <Object>
      <DatabaseID>Tfs_Analysis_std</DatabaseID>
     </Object>
     <File>M:\MigrationBackups\TFS_analysis_migrationbackup_2014_std.abf</File>
    </Backup>
    
  7. Create a new XMLA script on the target Standard Edition server. Restore the backup of the TFS_Analytics_std database to the Standard Edition server, renaming the database back to "TFS_Analytics".

    If there are no other Enterprise-only features in use in the SSAS database, this backup should restore successfully.

    Note also that the restore here occurs across the wire, using the UNC path to a temporary folder share. The SSAS service account on the new Standard edition server must have permissions to view this folder share.
    <Restore xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
     <File>\\serverpathnamehere\MigrationBackups\TFS_analysis_migrationbackup_2014_std.abf</File>
     <DatabaseName>TFS_Analysis</DatabaseName>
     <AllowOverwrite>false</AllowOverwrite>
    </Restore>

Sunday, February 23, 2014

February '14 Acadiana SQL Server User Group

I am honored to be the very first presenter of the new PASS chapter in Lafayette, Louisiana this Tuesday, presenting on SQL Server Permissions and Security Principals at the Acadiana SQL Server User Group.

More information here at their awesome-looking website: http://acadianasql.org/index.php/events/8-our-first-meeting

We will be discussing a review of SQL Server permissions, starting with the basics and moving into the security implications behind stored procedures, views, database ownership, application connections, consolidated databases, application roles and more. This presentation is perfect for .NET developers of all levels, database architects and administrators, and Windows server admins who have to dabble in SQL Server security.

Sparkhound, Inc. will be sponsoring, and we are excited to help get one of PASS's newest groups off the ground. Big thanks and a shoutout to Glenda Gable, who started up the Acadiana SQL Server User Group and will be speaking at the Baton Rouge SQL Server User Group about 60 miles east in March as part of the South Louisiana SQL Server User Group Speaker Exchange Program. ;)

February '14 Data Architecture Virtual Chapter Meeting


I will be speaking and whiteboarding for the Data Architecture Virtual Chapter meeting on February 27, 2014, on the fitting topic of Application Data Architecture!

This is an informative and thought-provoking architecture presentation that I have presented multiple times and enjoy sharing. We will cover three large architectures - not specifically the hardware and networking challenges, but the data architecture, the ETLs, and the various components of SQL Server that we use in the process. This will also be a great opportunity to ask questions and foster discussions inside of your lunch hour.

Please RSVP: https://attendee.gotowebinar.com/register/2154937593520193282


Wednesday, February 05, 2014

Actual Emails: More Grief for GUIDs

Wrote this email exchange with some developer colleagues about to embark on performance tuning.
Subject: bag o' tricks from DMV talk
From: A. Developer
Hey William,
We are about to spend around 4 weeks performance tuning an application we built. Much of the performance tuning is going to be the database. Could you send over those dmv queries that you used in your DMV talk? Or do you still have them available for download? I think they will be able to help us out quite a bit.
I know one of the big things you talked about is not using GUIDs, however, we are using them because of replication. Do you have any thoughts on helping ease the fragmentation because of this?

From: William Assaf

Yep, I have all that info here. This is the last time I gave that talk: http://www.sqltact.com/2013/09/houston-tech-fest-2013-sql-admin-best.html
Of course, if you have any questions, let me know.

So, as you know, I hate GUIDs because they are 4x as large as an integer yet serve the same purpose, or 2x as large as two integer columns to allow for multiple device synchronization.

But the biggest problem with GUIDs can happen when the first key of the clustered index of a table is a GUID column. With the creation of the new GUIDs, are you doing random GUIDs or sequential GUIDS?

If you’re creating them with a SQL default value (which you’re probably not, but as an example), this would be the difference between the newid() function (bad) and the newsequentialid() function (not as bad).
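To illustrate the difference as a table default, here is a minimal sketch; the table and constraint names are hypothetical, invented for this example:

```sql
-- Hypothetical demo table: a sequential GUID as the clustered primary key.
-- NEWSEQUENTIALID() is only valid as a column DEFAULT, not as a standalone call.
CREATE TABLE dbo.DemoSeqGuid (
    DemoID uniqueidentifier NOT NULL
        CONSTRAINT DF_DemoSeqGuid_ID DEFAULT NEWSEQUENTIALID(),
    CreateDate datetime2 NOT NULL
        CONSTRAINT DF_DemoSeqGuid_Date DEFAULT SYSDATETIME(),
    CONSTRAINT PK_DemoSeqGuid PRIMARY KEY CLUSTERED (DemoID)
);
-- Compare: DEFAULT NEWID() would scatter new rows randomly across
-- the clustered index, fragmenting it with every insert.
```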

Using sequential GUIDs can allow you to create a clustered index that actually makes some sense when it is ordered, and that can have index maintenance performed on it to REBUILD or REORGANIZE, reducing fragmentation. Problem is, when you restart the SQL Server service, the sequence also resets. So you won't have one contiguous string of sequentially-inserted GUIDs in a table over its lifetime.

With random GUIDs, you can REBUILD or REORGANIZE all you want, but the data order still won’t make any sense. Changing from random to sequential GUIDs may be really easy in your database or application code. If you’re already using sequential GUIDs, there’s not really much more you can do to mitigate the performance and storage letdowns of GUIDs that you would not also do on tables that use integer IDENTITY columns.

As for basic indexing, run this script first to find any tables that are still heaps. Those should be the first things you target in your performance tuning phase. http://www.sqltact.com/2013/05/hunt-down-tables-without-clustered.html

If I can help or get in on that performance tuning stuff, let me know! Good luck, keep in touch.

From: A. Developer
Thanks for the info.
One question though. Will sequential GUIDs work with replication? Wouldn't there be collisions if they are sequential?


From: William

So, in theory, it is mathematically "unlikely" that GUIDs will have collisions. That math always troubles me when multiple machines are generating GUIDs, though the network card is used to generate a unique seed for the GUID on each machine. It is even more troublesome when GUIDs are generated sequentially, but we are supposed to trust the math. :/ 
I’m not super knowledgeable about that math but from what I know, your concern is understandable but “unlikely” to be an issue. I know, not comforting. Here is some comforting reference material though. http://blogs.msdn.com/b/sqlserverfaq/archive/2010/05/27/guid-vs-int-debate.aspx 
Uses words like “practically guaranteed” and “virtually impossible” for sequential GUID collisions.



I welcome comments and additions to this somewhat religious topic. The "oil rig" scenario that is commonly faced in the Gulf South provides a recurring fresh argument for/against GUIDs. In many scenarios, a pair of integer fields can provide the benefits of sequential, unique keys. Assuming one of the pairs is also the rig/device/source of the data, you also have a built-in foreign key constraint in the unique key, something you would have to store anyway in a table keyed on a GUID.
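To sketch that pair-of-integers alternative (all table and column names here are hypothetical, and the sketch assumes a dbo.Rig parent table already exists):

```sql
-- Hypothetical: composite integer key in place of a GUID.
-- RigID identifies the source device and doubles as a foreign key;
-- ReadingID is generated sequentially per device by the application.
CREATE TABLE dbo.RigReading (
    RigID int NOT NULL
        CONSTRAINT FK_RigReading_Rig FOREIGN KEY REFERENCES dbo.Rig (RigID),
    ReadingID int NOT NULL,
    ReadingValue decimal(18,4) NULL,
    -- 8 bytes total per key vs. 16 for a uniqueidentifier,
    -- and inserts arrive in sequential order per rig.
    CONSTRAINT PK_RigReading PRIMARY KEY CLUSTERED (RigID, ReadingID)
);
```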

Thursday, January 30, 2014

It May Be Necessary to Re-enable Service Broker After a Bare-Metal Restore

After the restore of a database, or perhaps a bare-metal restore of a Windows server running SQL Server, unique IDs in the Service Broker settings may have changed, causing the broker to fail on startup. In that case, you will need to reactivate Service Broker.

You may first notice this because Database Mail gives you an error message about the Service Broker not being enabled in the MSDB database.

For example:
"...Either Service Broker is disabled in msdb, or msdb failed to start. ... Bring msdb online, or enable Service Broker."

Clicking OK to this prompt will cause SQL Server Management Studio to hang or freeze up. You need to do this in a T-SQL script.

Similarly,

ALTER DATABASE MSDB SET ENABLE_BROKER

will never complete.

It may be necessary to run

ALTER DATABASE MSDB SET NEW_BROKER WITH ROLLBACK IMMEDIATE

to reset the service broker's unique ID on MSDB and then

ALTER DATABASE MSDB SET ENABLE_BROKER

will complete successfully.
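To verify the result, sys.databases exposes both the broker flag and the broker's unique ID, which should have changed after the SET NEW_BROKER step:

```sql
-- is_broker_enabled should be 1 after re-enabling;
-- service_broker_guid is the new unique ID assigned by SET NEW_BROKER.
SELECT name, is_broker_enabled, service_broker_guid
FROM sys.databases
WHERE name = N'msdb';
```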

Monday, January 27, 2014

Start/Stop a Group of Azure VM's

If you've already configured your local machine's PowerShell environment to interact with your Azure subscription (see here: https://azure.microsoft.com/en-us/documentation/articles/powershell-install-configure/), then you can begin saving yourself an awful lot of button-clicking by scripting out the start/stop of your VMs, especially if you have a cluster of Azure VMs for testing and demonstrating SQL Server Availability Groups, as an example.

You will need to change a few things in these scripts to suit your purposes:
  1. Change "YourAzureAccount@hotmail.com" to your Azure account. For MSDN-based Azure subscriptions like mine, this address was my Hotmail email address.
  2. Change "Visual Studio Premium with MSDN" to the name of your Azure Subscription. This was the name my MSDN-based account was given by default.
  3. Populate the $vms variable with a list of the Azure VM's in the cluster you're looking to start/stop as a group, replacing "yourVMName-alwayson-dom" and so forth.
Big thanks and credit for the assist on this blog post to Cody Gros, SharePoint Solutions Architect and my coworker at Sparkhound.

Stop a list of Azure VM's:
#if expired, Add-AzureAccount

Get-AzureSubscription | ? {$_.ActiveDirectoryUserId -eq 'YourAzureAccount@hotmail.com' -and $_.SubscriptionName -match "Visual Studio Premium with MSDN" } | % { Set-AzureSubscription -SubscriptionId $_.SubscriptionId }

$vmHash =@{}
$vms = "yourVMName-alwayson-dom","yourVMName-alwaysonsql1","yourVMName-alwaysonsql2","yourVMName-alwaysonWSFC"

Get-AzureVM | foreach{$vmHash.Add($_.Name,$_.ServiceName)}

foreach ($vm in $vms) {    
    $currentVMService = $vmHash[$vm] 
    Write-Host "Current VM:$($vm)"
    $thisvm = Get-AzureVM -ServiceName $currentVMService -Name $vm
    Write-Host "Stopping VM:$($thisvm.Name)"
    Stop-AzureVM -Name $thisvm.Name -ServiceName $thisvm.ServiceName #-StayProvisioned  
}
Note about the -StayProvisioned switch above: specifying this option will retain some IP settings, but will cause your VM's to continue to accrue Azure charges, even while stopped.  Use with care.

Start a list of Azure VM's:
#if expired, Add-AzureAccount


Get-AzureSubscription | ? {$_.ActiveDirectoryUserId -eq 'YourAzureAccount@hotmail.com' -and $_.SubscriptionName -match "Visual Studio Premium with MSDN" } | % { Set-AzureSubscription -SubscriptionId $_.SubscriptionId }

$vmHash =@{}
$vms = "yourVMName-alwayson-dom","yourVMName-alwaysonsql1","yourVMName-alwaysonsql2","yourVMName-alwaysonWSFC"

Get-AzureVM | foreach{$vmHash.Add($_.Name,$_.ServiceName)}

foreach ($vm in $vms) {    
    $currentVMService = $vmHash[$vm] 
    Write-Host "Current VM:$($vm)"
    $thisvm = Get-AzureVM -ServiceName $currentVMService -Name $vm
    Write-Host "Starting VM:$($thisvm.Name)"
    Start-AzureVM -Name $thisvm.Name -ServiceName $thisvm.ServiceName    
}

If upon running the scripts you receive either of these errors:
get-azuresubscription : The term 'get-azuresubscription' is not recognized as the name of a cmdlet, 
Get-AzureVM : The term 'Get-AzureVM' is not recognized as the name of a cmdlet, function, script file, or 

Then you don't have the PowerShell module loaded, and PowerShell isn't automatically loading it for you. Use the Import-Module command. Below is the default location of the module, downloaded from here: https://azure.microsoft.com/en-us/documentation/articles/powershell-install-configure/
Import-Module 'C:\Program Files (x86)\Microsoft SDKs\Azure\PowerShell\ServiceManagement\Azure\Azure.psd1'

Updated 20160322 to reflect the latest module paths