INF: Optimizing Microsoft SQL Server Performance
This article was previously published under Q110352
This article has been archived. It is offered "as is" and will no longer be updated.
To most effectively optimize Microsoft SQL Server performance, you must identify the areas that will yield the largest performance increases over the widest variety of situations, and focus analysis on these areas. Otherwise, you may expend significant time and effort on topics that may not yield sizable improvements.
For the most part, the following information does not address the performance issues stemming from multiuser concurrency. This is a separate, complex topic that is covered in the document "Maximizing Database Consistency and Concurrency," which can be found in the SQL Server version 4.2x "Programmer's Reference for C," Appendix E, and also in other Knowledge Base articles. It is not in the version 6.0 documentation, but can be found on the MSDN (Microsoft Developer Network) CD under that title.
Rather than a theoretical discussion, this article focuses primarily on areas that years of experience by the Microsoft SQL Server Support team has shown to be of practical value in real-world situations.
Experience shows that the greatest benefit in SQL Server performance can be gained from the general areas of logical database design, index design, query design, and application design. Conversely, the biggest performance problems are often caused by deficiencies in these same areas. If you are concerned with performance, you should concentrate on these areas first, because very large performance improvements can often be achieved with a relatively small time investment.
While other system-level performance issues, such as memory, cache buffers, hardware, and so forth, are certainly candidates for study, experience shows that the performance gain from these areas is often incremental. SQL Server manages available hardware resources automatically, for the most part, reducing the need for (and therefore the benefit of) extensive system-level hand tuning.
Microsoft SQL Server 6.0 provides new opportunities for platform-layer performance improvements, with large amounts of memory, symmetric multiprocessing, parallel data scan, optimizer enhancements, and disk striping. However, as large as these improvements are, they are finite in scope. The fastest computer can be bogged down with inefficient queries or a poorly designed application. Thus, even with the additional performance increase that SQL Server 6.0 allows, it is extremely important to optimize the database, index, query, and application design.
Most performance problems cannot be successfully resolved with only a server-side focus. The server is essentially a "puppet" of the client, which controls what queries are sent, and thereby what locks are obtained and released. Although some tuning is possible on the server side, successful resolution of performance problems will usually depend on acknowledging the dominant role the client plays in the problem and analyzing client application behavior.
The following are some suggestions that, based on experience, have yielded significant performance gains:
Normalize Logical Database Design

Reasonable normalization of the logical database design yields the best performance. A greater number of narrow tables is characteristic of a normalized database. A lesser number of wide tables is characteristic of a denormalized database. A highly normalized database is routinely associated with complex relational joins, which can hurt performance. However, the SQL Server optimizer is very efficient at selecting rapid, efficient joins, as long as effective indexes are available.
The benefits of normalization include:
- Accelerates sorting and index creation, because tables are narrower.
- Allows more clustered indexes, because there are more tables.
- Indexes tend to be narrower and more compact.
- Fewer indexes per table, helping UPDATE performance.
- Fewer NULLs and less redundant data, increasing database compactness.
- Reduces concurrency impact of DBCC diagnostics, because the necessary table locks will affect less data.
If the logical database design is already fixed and total redesign is not feasible, it may be possible to selectively normalize a large table if analysis shows a bottleneck on this table. If access to the database is conducted through stored procedures, this schema change could take place without impacting applications. If not, it may be possible to hide the change by creating a view that looks like a single table.
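As an illustration of hiding such a schema change behind a view, the following sketch uses Python's built-in sqlite3 module as a stand-in engine; the table, column, and view names are hypothetical. A wide CUSTOMER table is split into two narrower tables, while a view preserves the original single-table shape for existing queries.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
# Hypothetical wide CUSTOMER table split into two narrower tables;
# a view preserves the old single-table shape for existing applications.
cur.executescript("""
CREATE TABLE CUSTOMER_CORE (ID INTEGER PRIMARY KEY, NAME TEXT);
CREATE TABLE CUSTOMER_CONTACT (ID INTEGER PRIMARY KEY, CITY TEXT, PHONE TEXT);
INSERT INTO CUSTOMER_CORE VALUES (1, 'SMITH');
INSERT INTO CUSTOMER_CONTACT VALUES (1, 'REDMOND', '555-0100');
CREATE VIEW CUSTOMER AS
    SELECT c.ID, c.NAME, t.CITY, t.PHONE
    FROM CUSTOMER_CORE c JOIN CUSTOMER_CONTACT t ON c.ID = t.ID;
""")
# Queries written against the original table name continue to work.
row = cur.execute("SELECT NAME, CITY FROM CUSTOMER WHERE ID = 1").fetchone()
print(row)  # ('SMITH', 'REDMOND')
```

Applications that only read through the old name need no change; only writes must be redirected to the underlying tables (or handled through stored procedures).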
Use Efficient Index Design

Unlike many non-relational systems, relational indexes are not considered part of the logical database design. Indexes can be dropped, added, and changed without affecting the database schema or application design in any way other than performance. Efficient index design is paramount in achieving good SQL Server performance. For these reasons, you should not hesitate to experiment with different indexes.
The optimizer reliably chooses the most effective index in the majority of cases. The overall index design strategy should be to provide a good selection of indexes to the optimizer, and trust it to make the right decision. This reduces analysis time and gives good performance over a wide variety of situations.
The following are index design recommendations:
- Examine the WHERE clause of your SQL queries, because this is the primary focus of the optimizer.
Each column listed in the WHERE clause is a possible candidate for an index. If you have too many queries to examine, pick a representative set, or just the slow ones. If your development tool transparently generates SQL code, this is more difficult. Many of these tools allow the logging of the generated SQL syntax to a file or screen for debugging purposes. You may want to find out from the tool's vendor if such a feature is available.
- Use narrow indexes.
Narrow indexes are often more effective than multicolumn, compound indexes. Narrow indexes have more rows per page, and fewer index levels, boosting performance.
The optimizer can rapidly and effectively analyze hundreds, or even thousands, of index and join possibilities. Having a greater number of narrow indexes provides the optimizer with more possibilities to choose from, which usually helps performance. Having fewer wide, multicolumn indexes provides the optimizer with fewer possibilities to choose from, which may hurt performance.
It is often best not to adopt a strategy of emphasizing a fully covered query. It is true that if all columns in your SELECT clause are covered by a non-clustered index, the optimizer can recognize this and provide very good performance. However, this often results in excessively wide indexes and relies too much on the possibility that the optimizer will use this strategy. Usually, you should use more numerous narrow indexes which often provide better performance over a wider range of queries.
You should not have more indexes than are necessary to achieve adequate read performance because of the overhead involved in updating those indexes. However, even most update-oriented operations require far more reading than writing. Therefore, do not hesitate to try a new index if you think it will help; you can always drop it later.
- Use clustered indexes.
Appropriate use of clustered indexes can tremendously increase performance. Even UPDATE and DELETE operations are often accelerated by clustered indexes, because these operations require much reading. Only a single clustered index is allowed per table, so use it wisely. Queries that return numerous rows, or that involve a range of values, are good candidates for acceleration by a clustered index.
Examples:

SELECT * FROM PHONEBOOK WHERE LASTNAME='SMITH'

-or-

SELECT * FROM MEMBERTABLE WHERE MEMBER_NO > 5000 AND MEMBER_NO < 6000

By contrast, the LASTNAME or MEMBER_NO columns mentioned above are probably not good candidates for a non-clustered index if this type of query is common. Try to use non-clustered indexes on columns where few rows are returned.
- Examine column uniqueness.
This helps you decide what column is a candidate for a clustered index, non-clustered index, or no index.
The following is an example query to examine column uniqueness:

SELECT COUNT (DISTINCT COLNAME) FROM TABLENAME

This returns the number of unique values in the column. Compare this to the total number of rows in the table. On a 10,000-row table, 5,000 unique values would make the column a good candidate for a non-clustered index. On the same table, 20 unique values would better suit a clustered index. Three unique values should not be indexed at all. These are only examples, not hard and fast rules. Remember to place the indexes on the individual columns listed in the WHERE clauses of the queries.
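The uniqueness check can be tried end to end with Python's built-in sqlite3 module as a stand-in engine; the MEMBERTABLE data below is fabricated purely for illustration.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE MEMBERTABLE (MEMBER_NO INTEGER, STATE TEXT)")
# Fabricated data: MEMBER_NO is fully unique, STATE has only two values.
cur.executemany("INSERT INTO MEMBERTABLE VALUES (?, ?)",
                [(i, "WA" if i % 2 else "OR") for i in range(1, 101)])
total = cur.execute("SELECT COUNT(*) FROM MEMBERTABLE").fetchone()[0]
unique_no = cur.execute(
    "SELECT COUNT(DISTINCT MEMBER_NO) FROM MEMBERTABLE").fetchone()[0]
unique_state = cur.execute(
    "SELECT COUNT(DISTINCT STATE) FROM MEMBERTABLE").fetchone()[0]
# MEMBER_NO (100 unique out of 100 rows) suits a non-clustered index;
# STATE (2 unique out of 100 rows) is a poor index candidate.
print(total, unique_no, unique_state)  # 100 100 2
```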
- Examine data distribution in indexed columns.
Often a long-running query occurs because a column with few unique values is indexed, or a JOIN on such a column is performed. This is a fundamental problem with the data and query itself, and cannot usually be resolved without identifying this situation. For example, a physical telephone directory sorted alphabetically on last name will not expedite looking up a person if all people in the city are named just "Smith" or "Jones." In addition to the above query, which gives a single figure for column uniqueness, you can use a GROUP BY query to see the data distribution of the indexed key values. This provides a higher resolution picture of the data, and a better perspective for how the optimizer views the data.
The following is an example query to examine the data distribution of indexed key values, assuming a two-column key on COL1, COL2:

SELECT COL1, COL2, COUNT(*) FROM TABLENAME GROUP BY COL1, COL2

This will return one row for each key value, with a count of the instances of each value. To reduce the number of rows returned, it may be helpful to exclude some with a HAVING clause. For example, the clause

HAVING COUNT(*) > 1

will exclude all rows that have a unique key.
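A minimal runnable version of the same distribution query, again using sqlite3 as a stand-in engine with made-up key values:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE TABLENAME (COL1 TEXT, COL2 TEXT)")
cur.executemany("INSERT INTO TABLENAME VALUES (?, ?)",
                [("A", "X"), ("A", "X"), ("A", "Y"), ("B", "X")])
# One row per key value with its count; HAVING hides the unique keys.
rows = cur.execute("""
    SELECT COL1, COL2, COUNT(*) FROM TABLENAME
    GROUP BY COL1, COL2
    HAVING COUNT(*) > 1
""").fetchall()
print(rows)  # [('A', 'X', 2)]
```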
The number of rows returned in a query is also an important factor in index selection. The optimizer considers a non-clustered index to cost at least one page I/O per returned row. At this rate, it quickly becomes more efficient to scan the entire table. This is another reason to restrict the size of the result set or to locate the large result with a clustered index.
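The break-even point implied by this cost model can be worked out with simple arithmetic. The figures below are hypothetical (a 1,000,000-row table stored at 50 rows per 2 KB page); only the one-page-I/O-per-returned-row cost model comes from the text above.

```python
# Hypothetical figures: a 1,000,000-row table at 50 rows per 2 KB page.
rows, rows_per_page = 1_000_000, 50
table_pages = rows // rows_per_page      # cost of a full table scan, in page I/Os
# Cost model from above: a non-clustered index costs at least one page
# I/O per returned row, so the break-even point is the table's page count.
break_even_rows = table_pages
break_even_fraction = break_even_rows / rows
print(table_pages, break_even_fraction)  # 20000 0.02
```

In other words, under these assumptions a non-clustered index stops paying off once roughly 2 percent of the table is returned.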
Use Efficient Query Design

Some types of queries are inherently resource intensive. This is related to fundamental database and index issues common to most relational database management systems (RDBMSs), not specifically to SQL Server. They are not inefficient, because the optimizer will implement the queries in the most efficient fashion possible. However, they are resource intensive, and the set-oriented nature of SQL can make them appear inefficient. No degree of optimizer intelligence can eliminate the inherent resource cost of these constructs. They are intrinsically costly compared to a simpler query. Although SQL Server will use the most optimal access plan, this is limited by what is fundamentally possible. Examples include:
- Large result sets
- IN, NOT IN, and OR queries
- Highly non-unique WHERE clauses
- != (not equal) comparison operators
- Certain column functions, such as SUM
- Expressions or data conversions in WHERE clause
- Local variables in WHERE clause
- Complex views with GROUP BY
Examples:

SELECT SUM(SALARY) FROM TABLE

SELECT SUM(SALARY) FROM TABLE WHERE ZIP='98052'

SELECT * FROM TABLE WHERE LNAME=@VAR

SELECT * FROM TABLE WHERE LNAME=@VAR AND ZIP='98052'
In the first example, the SUM operation cannot be accelerated with an index. Each row must be read and summed. Assuming that there is an index on the ZIP column, the optimizer will likely use this to initially restrict the result set before applying the SUM. This can be much faster.
In the second example, the local variable is not resolved until run time. However, the optimizer cannot defer the choice of access plan until run time; it must choose at compile time. Yet at compile time, when the access plan is built, the value of @VAR is not known and consequently cannot be used as input to index selection.
The illustrated technique for improvement involves restricting the result set with an AND clause. As an alternate technique, use a stored procedure, and pass the value for @VAR as a parameter to the stored procedure.
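The effect of the AND technique can be observed on any engine that builds its plan at prepare time. The sketch below uses sqlite3 and its EXPLAIN QUERY PLAN statement as a stand-in; the table, column, and index names are invented. A predicate on an unindexed column driven by a runtime parameter (analogous to @VAR) forces a scan, while an added literal predicate on an indexed column lets the planner use the index.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE T (LNAME TEXT, ZIP TEXT)")
cur.executemany("INSERT INTO T VALUES (?, ?)",
                [("N%d" % i, "%05d" % (i % 100)) for i in range(1000)])
cur.execute("CREATE INDEX IX_ZIP ON T (ZIP)")

def plan(sql, args=()):
    # Concatenate the plan's detail strings for easy inspection.
    return " ".join(r[-1] for r in cur.execute("EXPLAIN QUERY PLAN " + sql, args))

# Predicate on an unindexed column, driven by a runtime parameter
# (analogous to @VAR): the planner can only scan the table.
p1 = plan("SELECT * FROM T WHERE LNAME = ?", ("N5",))
# Adding a selective literal predicate on an indexed column gives the
# planner something to work with at prepare time.
p2 = plan("SELECT * FROM T WHERE LNAME = ? AND ZIP = '00052'", ("N5",))
print(p1)  # a SCAN of T
print(p2)  # a SEARCH using IX_ZIP
```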
In some cases it is better to use a group of simple queries with temp tables to store intermediate results than to use a single very complex query.
Large result sets are costly on most RDBMSs. You should try not to return a large result set to the client for final data selection by browsing. It is much more efficient to restrict the size of the result set, allowing the database system to perform the function for which it was intended. This also reduces network I/O, and makes the application more amenable to deployment across slow remote communication links. It also improves concurrency-related performance as the application scales upward to more users.
Use Efficient Application Design

The role that application design plays in SQL Server performance cannot be overstated. Rather than picture the server in the dominant role, it is more accurate to picture the client as a controlling entity, and the server as a puppet of the client. SQL Server is totally under the command of the client regarding the type of queries, when they are submitted, and how results are processed. This in turn has a major effect on the type and duration of locks, the amount of I/O and CPU load on the server, and hence whether performance is good or bad.
For this reason, it is important to make the correct decisions during the application design phase. However, even if you face a performance problem using a turnkey application where changes to the client application seem impossible, this does not change the fundamental factors that affect performance: namely, that the client plays a dominant role, and many performance problems cannot be resolved without making client changes.
With a well-designed application, SQL Server is capable of supporting thousands of concurrent users. With a poorly designed application, even the most powerful server platform can bog down with just a few users.
Using the following suggestions for client application design will provide good SQL Server performance:
- Use small result sets. Retrieving needlessly large result sets (for example, thousands of rows) for browsing on the client adds CPU and network I/O load, makes the application less capable of remote use, and can limit multiuser scalability. It is better to design the application to prompt the user for sufficient input so that queries are submitted which generate modest result sets.
Application design techniques which facilitate this include limiting the use of wildcards when building queries, mandating certain input fields, and prohibiting improvised queries.
- Use dbcancel() correctly in DB-Library applications. All applications should allow cancellation of a query in progress. No application should force the user to reboot the client computer to cancel a query. Not following this principle can lead to performance problems that cannot be resolved. When dbcancel() is used, proper care should be exercised regarding transaction level. For additional information, please see the following article in the Microsoft Knowledge Base:

117143 : INF: When and How to Use dbcancel() or sqlcancel()

The same issues apply to ODBC applications if the ODBC SQLCancel() call is used.
- Always process all results to completion. Do not design an application or use a turnkey application that stops processing result rows without canceling the query. Doing so will usually lead to blocking and slow performance.
- Always implement a query timeout. Do not allow queries to run indefinitely. Make the appropriate DB-Library or ODBC calls to set a query timeout. In DB-Library, this is done with the dbsettime() call, and in ODBC with SQLSetStmtOption().
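DB-Library and ODBC are not available here, but the timeout principle can be sketched with sqlite3's progress handler, used as a stand-in for a client-side query timeout; the 50 ms budget and table sizes below are arbitrary illustration values.

```python
import sqlite3
import time

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE T (N INTEGER)")
cur.executemany("INSERT INTO T VALUES (?)", [(i,) for i in range(500)])

deadline = time.monotonic() + 0.05   # arbitrary 50 ms budget for illustration
# The handler runs every 1000 VM instructions; a nonzero return aborts
# the running statement with an OperationalError ("interrupted").
con.set_progress_handler(
    lambda: 1 if time.monotonic() > deadline else 0, 1000)
try:
    # A Cartesian self-join makes the query run long enough to hit the deadline.
    cur.execute("SELECT COUNT(*) FROM T a, T b, T c").fetchone()
    timed_out = False
except sqlite3.OperationalError:
    timed_out = True
print(timed_out)  # True
```

The point is the same as with dbsettime() or SQLSetStmtOption(): a runaway query is aborted by the client under program control, not by rebooting the client computer.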
- Do not use an application development tool that does not allow explicit control over the SQL statements sent to the server. Do not use a tool that transparently generates SQL statements based on higher level objects, unless it provides crucial features such as query cancellation, query timeout, and complete transactional control. It is often not possible to maintain good performance or to resolve a performance problem if the application all by itself generates "transparent SQL," because this does not allow explicit control over transactional and locking issues which are critical to the performance picture.
- Do not intermix decision support and online transaction processing (OLTP) queries.
- Do not design an application or use a turnkey application that forces the user to reboot the client computer to cancel a query. This can cause a variety of performance problems that are difficult to resolve because of possible orphaned connections. For more information, see the following article in the Microsoft Knowledge Base:

137983 : How to Troubleshoot Orphaned Connections in SQL Server
Techniques to Analyze Slow Performance

It may be tempting to address a performance problem solely by system-level server performance tuning: for example, the amount of memory, the type of file system, or the number and type of processors. The experience of Microsoft SQL Server Support has shown that most performance problems cannot be resolved this way. They must be addressed by analyzing the application, the queries the application is submitting to the database, and how these queries interact with the database schema.
First, isolate the query or queries that are slow. Often it appears that an entire application is slow, when only a few of the SQL queries are slow. It is usually not possible to resolve a performance problem without breaking the problem down and isolating the slow queries. If you have a development tool that transparently generates SQL, use any available diagnostic or debug mode of this tool to capture the generated SQL. In many cases trace features are available, but they may not be openly documented. Contact the technical support for your application to determine if a trace feature exists for monitoring the SQL statements generated by the application.
For application development tools that use embedded SQL, this is much easier: the SQL is openly visible.
If your development tool or end-user application does not provide a trace feature, there are several alternatives:
- Use the 4032 trace flag according to the instructions in the SQL Server 4.2x "Troubleshooting Guide," and the SQL Server 6.0 "Transact-SQL Reference." This will allow capture of the SQL statements sent to the server in the SQL error log.
- Monitor the queries through a network analyzer such as Microsoft Network Monitor, which is part of Systems Management Server.
- For ODBC applications, use the ODBC Administrator program to select tracing of ODBC calls. See the ODBC documentation for more details.
- Use a third-party client-side utility which intercepts the SQL at the DB-Library or ODBC layers. An example of this is SQL Inspector from Blue Lagoon Software.
- Use the SQLEye analysis tool provided as an example in the Microsoft TechNet CD. NOTE: SQLEye is not supported by Microsoft Technical Support.
- Run the suspected slow query in isolation, using a query tool such as ISQL, and verify that it is slow. It is often best to run the query on the server computer itself using ISQL and local pipes, and redirect the output to a file. This helps eliminate complicating factors, such as network and screen I/O, and application result buffering.
- Use SET STATISTICS IO ON to examine the I/O consumed by the query. Notice the count of logical page I/Os. The optimizer's goal is to minimize I/O count. Make a record of the logical I/O count. This forms a baseline against which to measure improvement. It is often more effective to focus exclusively on the STATISTICS IO output and experiment with different query and index types than to use SET SHOWPLAN ON. Interpreting and effectively applying the output of SHOWPLAN can require some study, and can consume time that can be more effectively spent on empirical tests. If your performance problem is not fixed by these simple recommendations, then you can use SHOWPLAN to more thoroughly investigate optimizer behavior.
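SET STATISTICS IO is specific to SQL Server, but the measure-change-remeasure loop it supports can be sketched with sqlite3, whose EXPLAIN QUERY PLAN plays the role of the baseline measurement here; the table and index names are invented for illustration.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE PHONEBOOK (LASTNAME TEXT, PHONE TEXT)")
cur.executemany("INSERT INTO PHONEBOOK VALUES (?, ?)",
                [("NAME%d" % i, "555-%04d" % i) for i in range(1000)])

def plan(sql):
    # Concatenate the plan's detail strings for easy inspection.
    return " ".join(r[-1] for r in cur.execute("EXPLAIN QUERY PLAN " + sql))

q = "SELECT PHONE FROM PHONEBOOK WHERE LASTNAME = 'NAME500'"
before = plan(q)   # baseline: a full SCAN of the table
cur.execute("CREATE INDEX IX_LASTNAME ON PHONEBOOK (LASTNAME)")
after = plan(q)    # remeasure: a SEARCH using IX_LASTNAME
print(before)
print(after)
```

Record the baseline, make one change (here, adding an index), and rerun the same query; any improvement is then directly attributable to that change.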
- If the query involves a view or stored procedure, extract the query from the view or stored procedure and run it separately. This allows the access plan to change as you experiment with different indexes. It also helps localize the problem to the query itself, versus how the optimizer handles views or stored procedures. If the problem is not in the query itself but only when the query is run as part of a view or stored procedure, running the query by itself will help determine this.
- Be aware of possible triggers on the involved tables that can transparently generate I/O as the trigger runs. You should remove any triggers involved in a slow query. This helps determine if the problem is in the query itself or the trigger or view, and therefore, helps direct your focus.
- Examine the indexes of the tables used by the slow query. Use the previously listed techniques to determine if these are good indexes, and change them if necessary. As a first effort, try indexing each column in your WHERE clause. Often performance problems are caused by simply not having a column in the WHERE clause indexed, or by not having a useful index on such a column.
- Using the queries previously mentioned, examine the data uniqueness and distribution for each column mentioned in the WHERE clause, and especially for each indexed column. In many cases simple inspection of the query, table, indexes, and data will immediately show the problem cause. For example, performance problems are often caused by having an index on a key with only three or four unique values, or performing a JOIN on such a column, or returning an excessive number of rows to the client.
- Based on this study, make any needed changes to the application, query, or indexes. Run the query again after making the change and observe any change in I/O count.
- After noting improvement, run the main application to see if overall performance is better.
How to Examine I/O vs. CPU-bound Query Behavior:
- Use Windows NT Performance Monitor to watch I/O versus CPU activity. Watch all instances of the "% Disk Time" counter of the LogicalDisk object. Also watch the "% Total Processor Time" counter of the System object. To see valid disk performance information, you must have previously turned on the Windows NT DISKPERF setting by issuing "diskperf -Y" from a command prompt, and then rebooting the system. See the Windows NT documentation for more details.
- While running the query, if the CPU graph is consistently high (for example, greater than 70 percent), and the "% Disk Time" value is consistently low, this indicates a CPU-bound state.
- While running the query, if the CPU graph is consistently low (for example, less than 50 percent), and the "% Disk Time" is consistently high, this indicates an I/O bound state.
- Compare the CPU graph with the STATISTICS IO information.
Conclusion

SQL Server is capable of very high performance on large databases. This is especially the case with SQL Server 6.0. To achieve this performance potential, you must use efficient database, index, query, and application design. These areas are the best candidates for obtaining significant performance improvement. Try to make each query as efficient as possible, so that when your application scales up to more users, the collective multiuser load is supportable. Study of the client application behavior, the queries submitted by the application, and experimentation with indexes using the guidelines in this document are strongly encouraged. A methodical approach in analyzing performance problems will often yield significant improvement for relatively little time investment.
Article ID: 110352 - Last Review: 12/04/2015 09:57:29 - Revision: 3.1
Microsoft SQL Server 4.21a Standard Edition, Microsoft SQL Server 6.0 Standard Edition, Microsoft SQL Server 6.5 Standard Edition