SQL Server Environmental Diagnostics Guide

If you support an application as a DBA, you have encountered complaints about performance. An investigation starts and the root cause must be determined. Often the culprit is not a bug in the code but something specific to the environment.

Performance problems for a SQL Server based application are likely to be caused by environmental factors and not buggy code.

Whether the fix lies in SQL Server, Windows Server, VMware, or the network, the first course of action should be a quick assessment of the environment. This is where understanding the various configurations and best practices is key. Knowing what to look for can save a lot of time.

A mistake I often see is that a performance issue gets passed off to someone more senior, and that engineer assumes a lot without checking. People relay the problem as they see it, not as it actually is. This leads to skipping elementary checks that could save time and the frustration of tracking down imaginary bugs.

Start troubleshooting with a quick environmental check.

Below are common environmental mishaps I see when troubleshooting SQL Server performance complaints. Consider these the first line of action before getting into execution plans, statistics, indexing, and code refactoring.

CPU Considerations

The main configurations here are max degree of parallelism (MAXDOP) and the cost threshold for parallelism (CTP). Like many of the settings discussed here, the defaults quickly reveal themselves as poor choices once the SQL instance is asked to perform under moderate stress.

You’ll know that CPU is stressed if you see lots of CXPACKET or SOS_SCHEDULER_YIELD waits.

MAXDOP

The max degree of parallelism (MAXDOP) configuration option controls the number of processors that are used for the execution of a query in a parallel plan. This option determines the computing and thread resources that are used for the query plan operators that perform the work in parallel.

  • The default is 0, i.e. use all cores
  • For fewer than 8 logical cores, set the value to the number of logical cores
  • For 8 or more logical cores, set the value to 8
  • There are exceptions, but this is a good starting place (see the sketch below)
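
Here is a minimal T-SQL sketch of checking the core count and applying the guideline above. The value 8 is only an example for a 16-core instance, and the sketch assumes a single NUMA node (NUMA-aware guidance differs):

    -- Check how many logical cores the instance sees
    SELECT cpu_count AS logical_cores FROM sys.dm_os_sys_info;

    -- 'max degree of parallelism' is an advanced option
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;

    -- Example: 16 logical cores -> cap MAXDOP at 8 per the guideline
    EXEC sp_configure 'max degree of parallelism', 8;
    RECONFIGURE;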

Here are guidelines from Microsoft for best practices with MAXDOP.

In SSMS, right click on Instance -> Properties -> Advanced -> options are under Parallelism section

Screenshot of Parallelism settings

Know your application! For example, SharePoint works best with MAXDOP 1.

Cost Threshold for Parallelism

The cost threshold for parallelism option specifies the threshold at which Microsoft SQL Server creates and runs parallel plans for queries. SQL Server creates and runs a parallel plan for a query only when the estimated cost to run a serial plan for the same query is higher than the value set in cost threshold for parallelism. The cost refers to an estimated elapsed time in seconds required to run the serial plan on a specific hardware configuration.

  • The default is 5
  • A common starting point is to raise this to 50, in conjunction with tuning MAXDOP

This is a rather arbitrary number given that it is based on estimated execution plan costs. I've seen advice recommending 50, 200, even 500. I like to start at 50 and then jump to 200 if needed. MAXDOP is going to drive performance more than this setting. A starting point is sketched below.
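
A minimal T-SQL sketch, assuming 50 as the first value to test:

    -- See the current value before changing it
    SELECT name, value_in_use
    FROM sys.configurations
    WHERE name = 'cost threshold for parallelism';

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'cost threshold for parallelism', 50;
    RECONFIGURE;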

It sits just above MAXDOP in the screenshot shown earlier.

Processor and I/O Affinity

This setting controls which CPU cores SQL Server will use. We typically want SQL Server to use all of the cores presented to it by the operating system, so basically leave this option alone. Sometimes people mistake this setting for MAXDOP; they are not the same.

Here is a good explanation of how this setting works.

Memory Configurations

By default you probably won't like the memory settings that SQL Server chooses. That's because it is greedy and tries to take it all, leaving little for the OS and other applications. Hopefully your SQL host is a dedicated machine (virtual or physical) and SQL Server is the only application running on it.

Don’t trust task manager for SQL Server memory consumption.

You can detect memory pressure through various methods, including the SQL Server error log, RESOURCE_SEMAPHORE waits, page life expectancy, sys.dm_os_buffer_descriptors, DBCC MEMORYSTATUS, etc.
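
As one quick example, page life expectancy can be pulled from the performance counters DMV; judge it against your own baseline rather than a fixed magic number:

    -- Seconds a page survives in the buffer pool; low vs. baseline suggests pressure
    SELECT [object_name], cntr_value AS ple_seconds
    FROM sys.dm_os_performance_counters
    WHERE counter_name = 'Page life expectancy'
      AND [object_name] LIKE '%Buffer Manager%';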

  • The default memory allocated to a SQL Instance is 0 for min server memory and 2147483647 for max server memory (meaning all)
  • Set the minimum server memory to 0
  • Set the maximum server memory depending on what else is running on the machine

Memory must be balanced with whatever else is running on the machine: the OS and any other applications that require resources. Raising max server memory doesn't require a service restart, but lowering it might (if using lock pages in memory). Also, there is no need to set min = max memory anymore.

  • Typically leave Windows Server 4GB of memory
  • SSRS, SSIS, and SSAS (depending on the environment) should have no less than 4GB of memory each. If you are not using them, turn the services off!
  • If other applications run on the machine, leave an appropriate amount of memory for them too
  • Assign the rest to SQL Server in the max memory option (see the sketch below)
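
A minimal sketch for a hypothetical dedicated 64GB host, reserving 4GB for the OS:

    -- 61440 MB = 60 GB for SQL Server; adjust for SSRS/SSIS/SSAS and other apps
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', 61440;
    RECONFIGURE;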

Right click on Instance -> Properties -> Memory

SQL Memory Configuration

NOTE: CLR Concerns – Starting with SQL Server 2012, CLR allocations are also included in memory limits that are controlled by max server memory (MB) and min server memory (MB). That means that our CLR objects and assemblies in SQL will have their memory managed by the SQL Instance and not the OS.

This KB explains the approach. Keep this in mind if your application uses .NET assemblies.

Disk Setup

Database provisioning still has a habit of placing all of the database files on the same physical disk, which leads to poor disk I/O. Placing both data and log files on the same device causes contention and poor performance. Placing the files on separate drives allows I/O activity to occur at the same time for both the data and log files.

NOTE: things get more complicated when a SAN is involved. You need to talk with the storage engineers to determine the best fit. Ideally we want to break our DB files up onto different spindles, but it gets complicated and is out of scope for this article. NFS, CIFS, and SMB are a whole topic unto themselves.

Presizing the database files is a good practice and part of capacity planning.

For all-purpose storage tips, Microsoft has this article on best practices.

Strive to separate database files onto different LUNs / physical disks:

  • Data files (mdf, ndf)
  • Transaction Log files (ldf)
  • Tempdb files
  • OS / SQL binaries
  • System databases
  • Backups
  • Traces

At a minimum, the data files and transaction log files should be separated. Once this is done, the next improvement is separating the tempdb files. Disk contention on these files can cause significant performance degradation. A provisioning sketch follows.
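
As an illustration, a new database can be created with its data and log files presized on separate drives. The drive letters, sizes, and names here are hypothetical placeholders:

    -- D: dedicated to data, L: dedicated to logs (hypothetical layout)
    CREATE DATABASE SalesDB
    ON PRIMARY
    ( NAME = SalesDB_data,
      FILENAME = 'D:\SQLData\SalesDB.mdf',
      SIZE = 10GB, FILEGROWTH = 512MB )
    LOG ON
    ( NAME = SalesDB_log,
      FILENAME = 'L:\SQLLogs\SalesDB.ldf',
      SIZE = 2GB, FILEGROWTH = 512MB );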

Example disk separation:

Disk layout for DB files

Storage Top 10 Best Practices: https://technet.microsoft.com/en-us/library/cc966534.aspx

Instance Level Configurations

Most items here are things to leave alone. Touching them usually leads to more of a mess.

Sometimes it is more important to know what to leave alone.

Maximum Worker Threads – the pool of threads SQL Server uses to service requests coming in over client connections.

  • Default is 0
  • Leave it as 0; otherwise the database engine can run short of worker threads and leave application requests waiting
  • If you do need to change it, use the table in this KB for the number of threads based on the number of CPUs (a quick check is sketched below)
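
To see whether the dynamic default is in play, compare the configured value with what the engine computed at startup:

    -- 0 = let SQL Server size the thread pool dynamically
    SELECT value_in_use AS configured
    FROM sys.configurations
    WHERE name = 'max worker threads';

    -- What the engine actually computed for this hardware
    SELECT max_workers_count FROM sys.dm_os_sys_info;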

In SSMS, right click on Instance -> Properties -> Processors

Processor options

Boost SQL Priority

  • Default is unchecked
  • Leave this unchecked at all times!

In SSMS, right click on Instance -> Properties -> Processors

Maximum Number of Concurrent Connections

  • Default is 0 (unlimited)
  • Leave this setting alone!

In SSMS, right click on Instance -> Properties -> Connections


Allow Remote Connections to the Instance

  • Default is checked
  • Leave this setting checked

In SSMS, right click on Instance -> Properties -> Connections

Remote Query Timeout

  • Default is 600 seconds; a value of 0 means no timeout
  • Leave this setting at the default

In SSMS, right click on Instance -> Properties -> Connections

Query Wait – the time in seconds that a query waits for resources before timing out

  • Default is -1, which means wait 25 times the estimated query cost
  • Leave this setting alone

In SSMS, right click on Instance -> Properties -> Advanced

Configuring tempdb

The tempdb database is shared by all databases on the SQL Instance. It has a lot to do with performance and can become a bottleneck. We want tempdb to follow these best practice guidelines:

  • Each tempdb data file should be the same initial size
  • Autogrowth for tempdb files should be an explicit value in MB rather than a percentage. Choose a reasonable value based on your workload, e.g. 64MB, 128MB, etc.
  • The number of tempdb data files should be 1 per processor core, up to 8. Beyond that, monitor performance and add files in sets of 4 if needed.
  • Ideally the tempdb files are sized up to the maximum they will need and never have to autogrow (a configuration sketch follows this list)
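
Here is a sketch for a hypothetical 4-core instance with a dedicated T:\ drive; the logical file names beyond the default tempdev are placeholders:

    -- Resize the first data file and pin an explicit growth increment
    ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, SIZE = 2GB, FILEGROWTH = 512MB);

    -- Add matching files until there is one per core (up to 8)
    ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf',
              SIZE = 2GB, FILEGROWTH = 512MB);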


tempdb files

Remember that tempdb is rebuilt by SQL Server after each service restart/reboot. Changes such as moved files do not take effect until then.

Database Level Configurations

The auto-options are the ones to watch out for here.

  • Make sure the Auto Close option is false
  • Make sure the Auto Shrink option is false (both can be set in T-SQL, as sketched below)
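
Both options can be flipped with a one-liner each; the database name is a placeholder:

    ALTER DATABASE [YourDatabase] SET AUTO_CLOSE OFF;
    ALTER DATABASE [YourDatabase] SET AUTO_SHRINK OFF;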

Must Read: Considerations for the “autogrow” and “autoshrink” settings in SQL Server

In SSMS, right click on the Database -> Properties -> Options

DB configurations

Initial Size and Autogrowth

Here are the database defaults for SQL 2008 R2:

db file default - 2k8r2

SQL Server 2016 is a little better but still rather bad:

db file default - 2016

Default database initial sizes and autogrowth settings are poor.

We won't have much ability to change the initial size of existing database files, but we can change the model database. This ensures that any new databases are created with better values.

In this case, change the initial size to an appropriate starting size based on initial capacity planning.

In all cases, check the autogrowth / max size options:

db file autogrowth

  • Change the autogrowth from the default to a value like 128MB, 256MB, 512MB, or 1024MB. Use your judgment based on capacity planning (DB growth and available disk space). A model database sketch follows this list.
  • The data files' maxsize option can be left as unlimited.
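
A sketch against the model database, using hypothetical starting values from capacity planning:

    -- New databases inherit these settings from model
    ALTER DATABASE model
    MODIFY FILE (NAME = modeldev, SIZE = 512MB, FILEGROWTH = 256MB);

    ALTER DATABASE model
    MODIFY FILE (NAME = modellog, SIZE = 256MB, FILEGROWTH = 128MB);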

Environmental Conclusions

The key points for investigating SQL Server environmental configurations for performance issues are:

  1. Defaults are often poor!
  2. Use best practices to configure SQL Server to minimize performance issues – know where to get good advice and avoid bad advice. Test, test, test!
  3. Know when to step outside the guidelines – they are not rules without exceptions – learn them!

 

