Work Smart with SQL Server Best Practices


We at SSG are big believers in SQL Server best practices. Some ways of doing things are simply better than other ways.

As people start to adopt and get familiar with new technology and new platforms, these most effective techniques seem to bubble up and become the standard for everyone. That’s why we call them best practices. They aren’t necessarily documented in the “official literature.” We believe there are “best practices” based on assumption and theory … and then there are best practices based on experience and, well, practice.

As our friend Jeff Moden once voiced on SQL Server Central:

I’ve seen people create “Holy Grail” articles that “prove” that XML splitters will beat Tally Table-like splitters hands down. And almost the whole world (including the previous me) adopted some “Best Practices” for index maintenance more than 20 years ago because of some super-simple, super-unfortunate miswording in official Microsoft-supported documentation that has also mistakenly poo-poo’ed the use of a remarkable datatype, its remarkable uses, and some incredible methods for performance. How about the “Best Practice” of doing things like lowering a Fill Factor until fragmentation stops, always using IDENTITY columns for virtually everything, or even defragging indexes just because they have logical fragmentation (which should never be done if they have a “0” Fill Factor, BTW).  Of course, it’s a “Best Practice” to not use Random GUIDs on a Clustered Index, right? Use NEWSEQUENTIALID() instead, right?

And then there are the “expert” changes that have been made by Microsoft such as the equivalent of the permanent invocation of Trace Flag 1117 on TempDB with no way to disable it even on a necessary temporary basis or how about LOBs defaulting to in-row which permanently destroys page density of the Clustered Index. Yeah … both of those are just two examples of how supposed “Best Practices” have put the screws to people without them even knowing about it.

So, just as we believe zipper merging is a best practice when traffic shifts from two lanes to one, there are approaches to SQL Server that, when applied, deliver optimum outcomes. Here are a few of our favorites.

SQL Server Best Practices from Senior Consultant Dan Maenle

1. Test your backups! In the realm of system restores, it’s good to be a little pessimistic. In other words, it’s not if but when you will need to use those handy backup files to get up and running again. Therefore, you need to have 100% confidence in your backups, and that means testing them. Our friend Jonathan Kehayias has another way to look at it: “You don’t need a backup strategy, you need a RESTORE strategy, and planning a BACKUP strategy without having a RESTORE strategy is fundamentally flawed.”
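A restore test can be sketched in a few lines of T-SQL. The file paths and database names below are placeholders for illustration, not part of the original article:

```sql
-- Quick integrity check of the backup file (useful, but it does NOT
-- prove the database is restorable):
RESTORE VERIFYONLY
FROM DISK = N'D:\Backups\SalesDB.bak'
WITH CHECKSUM;

-- The only real test: restore to a non-production server.
RESTORE DATABASE SalesDB_Test
FROM DISK = N'D:\Backups\SalesDB.bak'
WITH MOVE N'SalesDB'     TO N'D:\Data\SalesDB_Test.mdf',
     MOVE N'SalesDB_log' TO N'D:\Logs\SalesDB_Test.ldf',
     RECOVERY;

-- Then validate what you restored:
DBCC CHECKDB (SalesDB_Test) WITH NO_INFOMSGS;
```

VERIFYONLY alone is not enough; only a full restore plus CHECKDB gives you that 100% confidence.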

2. Use correct data types on joins and predicates. Joins allow you to retrieve data from two or more tables based on their logical relationship. If the data types are not identical, but only compatible, SQL Server must implicitly or explicitly convert the data. That can be an expensive operation. SQL Server uses predicates to determine how many data pages a data request has to touch. Using a mismatched data type in a predicate is just as expensive: the implicit conversion can prevent an index seek, forcing SQL Server to touch far more pages than you planned on.
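A classic example of this trap, using a hypothetical `dbo.Customers` table with an indexed `VARCHAR(20)` column:

```sql
-- BAD: the N'...' literal is NVARCHAR, which outranks VARCHAR in data
-- type precedence, so SQL Server converts AccountNumber on every row
-- (CONVERT_IMPLICIT in the plan) and scans the index.
SELECT CustomerID
FROM   dbo.Customers
WHERE  AccountNumber = N'AW00012345';

-- GOOD: a matching VARCHAR literal lets SQL Server seek the index.
SELECT CustomerID
FROM   dbo.Customers
WHERE  AccountNumber = 'AW00012345';
```

The same rule applies to join columns: keep the types identical on both sides.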

3. Turn on Query Store. Dan is a passionate advocate for Query Store, and hosted a webinar on the subject for us in 2022. Since its introduction in SQL Server 2016, Query Store has been a DBA life-saver … for those who actually use it.

Hence this best practice. Too many users of SQL Server don’t know anything about this feature. It is an ideal tool for solving performance issues and finding performance differences caused by query plan changes. It’s a “black box” to make your life as a DBA easier. It enables you to “go back in time” and analyze the performance impact of query plan changes, and pinpoint and fix any problem areas. It’s available with on-premises SQL Server. It is also a key tool to use when troubleshooting performance issues in Azure SQL Database. Turn it on and benefit from all it does.
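Turning it on is one statement. The database name and sizing values below are illustrative starting points, not official recommendations:

```sql
-- Enable Query Store (SQL Server 2016+).
ALTER DATABASE SalesDB
SET QUERY_STORE = ON
    (OPERATION_MODE = READ_WRITE,
     MAX_STORAGE_SIZE_MB = 1024,
     QUERY_CAPTURE_MODE = AUTO);

-- Later, when a plan change regresses a query, pin the known-good plan
-- (IDs come from the Query Store reports or catalog views):
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 7;
```

That second statement is the "go back in time" payoff: once you can see which plan was fast, you can force it while you fix the root cause.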

4. Back up your Service Master Key. SQL Server uses encryption to secure and protect sensitive data. The Service Master Key (SMK) is the foundation of this encryption. SQL Server automatically generates the SMK when SQL Server starts for the first time. Microsoft’s advice is that the SMK “should be backed up and stored in a secure, off-site location. Creating this backup should be one of the first administrative actions you perform on the server.” Yes, you can restore the SMK, but it’s far from a painless exercise. Do yourself a favor and follow the direction of this best practice.
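The backup itself is one statement; the file path and password below are placeholders:

```sql
-- Back up the Service Master Key to a file, encrypted with a password.
BACKUP SERVICE MASTER KEY
    TO FILE = N'D:\SecureBackups\SMK_backup.key'
    ENCRYPTION BY PASSWORD = '<use-a-strong-password-here>';
-- Store the file and the password separately, both off-site and secured;
-- anyone holding both can decrypt what the SMK protects.
```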

SQL Server Best Practices from Senior Consultant Scott Klein

1. Don’t accept default memory configuration options. SQL Server can and does use a lot of memory. The default settings are 0 MB for min server memory and 2,147,483,647 MB for max server memory, or more than 2 million GB (keep in mind, memory settings are designated in MB, not GB). The caveats cut both ways: set max server memory too high and SQL Server will compete for memory with other instances and processes on the same host; set it too low and you impair performance. Microsoft recommends “75% of available system memory not consumed by other processes.”

Setting min and max server memory is especially important when you run multiple instances on one host. If you are not proactive with these settings, the instances you engage first are likely to allocate all available memory, leaving idle instances with little or nothing to work with when they are needed. Keep in mind that Windows does not balance memory across applications.

And, by the way, it’s a bad idea to set minimum server memory to the same level as max server memory. According to Microsoft, “once the memory allocated to the SQL Server Database Engine reaches that value, the SQL Server Database Engine stops dynamically freeing and acquiring memory for the buffer pool.” So don’t do it.
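A minimal sketch of the change via `sp_configure` (the values are illustrative, e.g. capping an instance at 24 GB on a 32 GB host; size them to your own workload):

```sql
-- Memory settings are specified in MB.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'min server memory (MB)', 4096;   -- 4 GB floor
EXEC sp_configure 'max server memory (MB)', 24576;  -- 24 GB ceiling
RECONFIGURE;
```

Note the gap between the two values, per the warning above about setting min equal to max.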

2. Keep functions out of the WHERE clause. Again, this is about performance. You can get away with functions in the SELECT clause, but they can be a disaster in the WHERE clause. Wrapping a column in a function makes the predicate non-sargable: SQL Server can no longer seek on an index over that column and falls back to a table scan or index scan.
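Here is the pattern and its sargable rewrite, using a hypothetical `dbo.Orders` table with an index on `OrderDate`:

```sql
-- BAD: the function hides OrderDate from the index, so every row
-- must be evaluated -> scan.
SELECT OrderID
FROM   dbo.Orders
WHERE  YEAR(OrderDate) = 2023;

-- GOOD: leave the column bare and move the logic to the literal side,
-- so SQL Server can seek on the OrderDate index.
SELECT OrderID
FROM   dbo.Orders
WHERE  OrderDate >= '20230101'
  AND  OrderDate <  '20240101';
```

The same rewrite idea applies to `ISNULL`, `CONVERT`, `LTRIM`, and friends: express the condition against the raw column whenever you can.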

3. Stop shrinking your data files. Or: just because you can shrink your data files doesn’t mean you should. My point here is that too many people shrink simply because they can. The best practice is to shrink only after a large archive process has removed big sections of data from your database, or after you have dropped a large number of indexes. Otherwise: KNOW WHEN to SHRINK your database.

Why? Shrinking your data files can drastically increase index fragmentation, which can ding your performance. No less an expert than Paul Randal has written, “Data file shrink should never be part of regular maintenance, and you should NEVER, NEVER have auto-shrink enabled.” Let that be the final word on shrinking.
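For the rare legitimate case, a one-time shrink looks like this (file name, target size, and table are illustrative):

```sql
-- Shrink the data file to a 50 GB target after a large archive purge.
DBCC SHRINKFILE (N'SalesDB_Data', 51200);  -- target size in MB

-- Shrink moves pages around and fragments indexes, so rebuild them:
ALTER INDEX ALL ON dbo.Orders REBUILD;

-- And make sure the database never shrinks behind your back:
ALTER DATABASE SalesDB SET AUTO_SHRINK OFF;
```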

As you apply these SQL Server best practices in your work, I’m confident you’ll get the outcomes you want, while avoiding the headaches you don’t.

This Post Has One Comment

  1. Jeff Moden

    Heh…. Good short article, Randy. I laughed out loud at the irony of your numbered list of “Favorite Best Practices” being empty. Either that’s great emphasis that no supposed “Best Practice” is a panacea, you didn’t finish the article, or my computer is broken. 😀

    My personal “Best Practice” is in the form of a 3 step mantra. The steps aren’t numbered because all 3 are of equal importance and can actually be done all at the same time.

    Make it work…
    Make it fast…
    Make it pretty…

    And, it ain’t done ’til it’s pretty. 😀
