9 Points to Enhance SQL Server Performance with Optimization

Microsoft SQL Server lets you quickly create database applications that are fast, efficient, and reliable, but making those applications run better can be difficult. Fortunately, administrators and database developers can use some simple techniques to coax more speed out of a SQL Server database. In October 1996 ("10 Simple Tips for Better SQL Server Performance"), I discussed some tactics for increasing performance. Here are nine more ideas that you can easily apply to your SQL Server database applications.


Method 1 - Try to Use Stored Procedures: 
 
You can use a combination of a procedural language (Transact-SQL) and SQL to create functions that are stored in the database engine rather than in application code or libraries. These stored procedures have several advantages. Stored procedures eliminate run-time parsing, because SQL Server analyzes the SQL when the procedure is created. You can let users call specific stored procedures that run with database administrator (DBA) privileges, even when those users do not normally work at that security level; this capability lets you combine broad data access with strong security measures. With stored procedures, you can easily create libraries of functions that reduce the amount of source code programmers must write. Stored procedures also significantly reduce the labor required to perform an upgrade, because you can change application logic on the server instead of distributing new software versions to every client in the organization.

In addition, the SQL Server engine can buffer stored procedures in memory instead of reading them from the hard disk, thereby reducing the total amount of expensive disk I/O. Finally, in a distributed environment, stored procedures reduce the amount of information that travels between the front end (client) and the back end (server). This reduction can save time, especially when the client and server are far apart. Another way to reduce traffic between the client and the server is to SET NOCOUNT ON in your stored procedures. NOCOUNT disables the DONE_IN_PROC messages that SQL Server sends to indicate the number of rows a given operation affects.
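As a minimal sketch (the procedure name is hypothetical; the table comes from Table 1 below), here is a stored procedure that encapsulates an update and uses NOCOUNT to suppress the row-count messages:

create procedure usp_adjust_balance
    @account_number int,
    @amount money
as
    -- suppress DONE_IN_PROC (rows-affected) messages to cut client/server traffic
    set nocount on

    update customer_master
    set account_balance = account_balance + @amount
    where account_number = @account_number
go

-- example call:
exec usp_adjust_balance 344484454, 100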


Method 2 - Select the Best Read-Ahead Configuration Values:
 
One SQL Server feature that can significantly improve query execution is read-ahead (RA, or parallel data scan). If SQL Server detects that certain queries, such as table scans, are returning large amounts of data sequentially, it assigns a background thread to read further ahead in the table. The result is that by the time your program requests the information, SQL Server has already gathered the data into the buffer pool.

For example, suppose you run a long report that extracts information from a customer table. If you are reading large blocks of data sequentially, SQL Server can anticipate the next set of information you will want and read those records into memory while you are still processing the first ones. This action can produce a significant performance improvement, because the program can now find what it needs in memory rather than on the hard disk.

Let's look at how you can set the parameters in SQL Server's Configuration/Options dialog box to make the most of RA. Keep in mind that changing the RA parameters affects every SQL Server application running on the system, so treat these parameters with care: a careless change can produce undesirable results.

RA cache miss limit. SQL Server uses the RA cache miss limit to determine when to start reading ahead. For example, if the RA cache miss limit is set to 5, SQL Server starts reading ahead after failing to find five pages in the buffer pool. Valid values range from 1 to 255; the default is 3.

A low value means SQL Server attempts read-ahead on most queries; too high a value keeps SQL Server from using a potentially helpful strategy. Therefore, if your system is used primarily for reporting and other operations that typically retrieve large amounts of information, set the value on the low side.

Avoid setting the value to 1, though: that setting means SQL Server issues an RA request even when it has missed only one page of data from disk, effectively telling SQL Server to begin RA operations as soon as possible, and it hurts performance in most cases. Conversely, if your system works as an online transaction processing (OLTP) environment with very few large sequential operations, increase this value so that SQL Server avoids RA in all but the most obvious situations.

RA delay. SQL Server uses the RA delay parameter to determine how long to wait before starting a read-ahead. This value is necessary because a certain amount of time always passes between the moment the RA manager starts up and the moment it can process requests. Valid values range from 0 to 500 milliseconds; the default is 15. The default is sufficient for most systems, but if you run SQL Server on a multiprocessor computer, you can lower the value. If this parameter is set too high, SQL Server may wait too long before starting an RA.

RA pre-fetches. SQL Server uses the RA pre-fetches parameter as a count of how many extents to retrieve during RA operations. Valid values range from 1 to 1000, with a default of 3. If your applications primarily run large sequential operations, set this value higher to tell SQL Server to bring larger quantities of data into the buffer pool on each RA operation. If this number is too high, however, RA pages can displace pages that hold other users' data in the buffer pool. Therefore, be careful when you experiment with this number, and increase the value gradually. Try raising the value by about 5 percent each time, and track overall system response between changes. Find out whether a performance gain for one application comes at the expense of other applications.

RA worker threads. Worker threads process RA operations. The RA worker threads parameter controls the number of threads SQL Server assigns to service RA requests. Each thread in turn supports a configured number of individual RA requests. Valid settings range from 0 to 255; the default is 3. Set this option to the maximum number of concurrent users you expect to access SQL Server. If the parameter is set too low, you may not have enough threads to handle the volume of RA requests. If the value is too high, you start too many RA threads. SQL Server logs an error if the outstanding RA requests exceed the capacity of the configured worker threads and their slots.

RA slots per thread. The RA slots per thread parameter specifies the number of RA requests each thread manages. Valid values range from 1 to 255; the default is 5. If this value is too high, SQL Server can overload the RA threads, which then spend too much time switching among the different RA requests they service. A low value can leave threads idle. Normally, the default is fine.

One last note on the RA parameters: don't experiment with these numbers until you have a good feel for both the SQL Server architecture and the specific behavior of your system. Even when you do experiment, remember to change one parameter at a time. Changing several parameters at once can reduce performance without telling you much about which change made response worse.
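When you do adjust one of these values, you can also work from a query window with sp_configure; here is a minimal sketch, assuming the SQL Server 6.x option names used above (depending on your build, you may first need to enable the display of advanced options):

-- check the current settings
exec sp_configure 'RA cache miss limit'
exec sp_configure 'RA pre-fetches'

-- modestly raise RA pre-fetches for a reporting-heavy system, one change at a time
exec sp_configure 'RA pre-fetches', 4
reconfigure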


Method 3 - Use Wide Indexes Correctly:
 
A wide index is one that contains a large amount of key data, whether from several small columns or from one or more large columns. For example, an index on a char(255) column is wider than an index on a char(4) column.

Narrow indexes are better. When you create an index, SQL Server stores the index key values and record pointers in index pages. With narrow indexes, SQL Server can fit more key values and pointers on each index page. This structure lets the optimizer find your data faster, because it has to read fewer index pages before reaching the data. In addition, when more index keys and pointers fit on one page, the optimizer has better information for proposing efficient query plans. Conversely, if the index keys are wide, the engine can fit only a few keys and pointers on each page. The index tree also tends to be deeper when the keys are wide, so the optimizer must traverse more levels to reach the data.

Suppose the Customer_master table in Table 1 has a composite index on the last_name, first_name, city, and street columns. This index is wide because it contains a relatively large number of columns and a large amount of data.

TABLE 1: Customer_master

account_number   last_name    first_name   street        city   ...   account_balance
344484454        Bockwinkle   Terry        Jeeves Way           ...   24998
344484455        Okerlund     Nick         Jacques St.          ...   105660
344484456        Blassie      George       Mariner Rd.          ...   3004
...
You need to ask why such a wide index was created. Do all users really search on all of these columns? Do they want to sort on all of them? More likely, this index reflects a kitchen-sink approach that packs as many columns into the index as possible.

This approach is not as effective as you might think. The performance barrier becomes painfully obvious when the optimizer is asked to search on only one or two columns of the index. In Table 1, if you look for all rows with a last_name value between Hank and Hendrix, the optimizer must use the composite index, because there is no other index. Unfortunately, the optimizer may now have to read hundreds of index pages to find the right information, because the width of the index also makes the index tree deep. If you have an index on last_name alone, however, more keys fit on each index page, and the optimizer finds the right information quickly. A wide composite index can also lead the engine to choose a sequential scan despite the index's existence, because the sort order the composite index supplies may not be the ordering the query needs.
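As a minimal sketch (the index names are hypothetical), compare the kitchen-sink index with a narrow index on the column users actually search:

-- wide, kitchen-sink composite index
create index ix_cust_wide
    on customer_master (last_name, first_name, city, street)

-- narrow index that packs far more keys per page for last_name searches
create index ix_cust_lastname
    on customer_master (last_name)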

Despite these warnings, wide indexes are usually effective if every column you reference belongs to a nonclustered index. (In a nonclustered index, the engine does not reorganize the data table to match the index.) SQL Server may then be able to skip reading the data pages entirely and retrieve the information directly from the index keys. For example, the query

select last_name, first_name
from customer_master
where last_name between 'Zlotnick' and 'Zotnick'

retrieves data from the Customer_master table. SQL Server can satisfy this request without reading the table's data pages, because both columns are indexed and the index is nonclustered. If you retrieve large blocks of data, this combination can improve performance considerably, because disk I/O can add substantial overhead to a query. Therefore, wide indexes do not always shred performance.


Method 4 - Determine the Right Size for the Transaction Log:

To size the transaction log correctly, begin by allocating roughly 15 to 25 percent of the database's total disk space to the transaction log. Then consider the factors that influence transaction log use.

An application that performs mainly read-only data access is unlikely to need a large transaction log, because SQL Server writes to the log only when information changes. Conversely, if your application makes millions of changes every day, you can count on needing a larger transaction log. However, if the application changes data frequently but in small transactions, you may not need a large transaction log.

The recovery interval parameter controls how frequently checkpoints occur. During a checkpoint, SQL Server synchronizes the contents of the transaction log and the database on disk. In most cases, the longer the interval between checkpoints, the larger the transaction log needs to be.

If you dump the transaction log to backup media only rarely, rather than letting SQL Server truncate the log automatically, you must create a larger transaction log. Dumping the log frequently is one tactic that lets you avoid building a huge transaction log. Also keep in mind that because you cannot recover transactions from a truncated log, automatically truncating the transaction log means you are willing to lose the transactions that occur between data backups if the system fails.

When you use the CREATE DATABASE statement, you specify the log size with the following syntax:

[LOG ON database_device [= size]
    [, database_device [= size]]...]

If you place the database's data and transaction log on the same device, you can leave the log size blank, because the transaction log will consume only as much space as it needs. However, for most production systems, place the transaction log on a separate device. For this example, suppose you use separate devices to store the database's data and its transaction log.
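As a minimal sketch (the data device name and the sizes are hypothetical; logdevice5 reappears in the ALTER DATABASE example below), a CREATE DATABASE statement with an explicit log device might look like this:

-- 200MB of data on one device, a 40MB log (about 20 percent) on another
create database acquisition
    on datadevice1 = 200
    log on logdevice5 = 40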

Start with a small transaction log. You can easily add space if you find you have made the transaction log too small, but you cannot shrink it again without a lot of work.

You can increase the log's size in two ways. First, you can use SQL Enterprise Manager's Edit Database window, where you can select a predefined device and allocate more of its space to the transaction log. The other way to increase the transaction log's size is to use the ALTER DATABASE command together with the sp_logdevice stored procedure. For example, if you want to add 50MB to a database called acquisition, the syntax is

alter database acquisition on logdevice5 = 50
sp_logdevice acquisition, logdevice5

Whenever you increase the size of the transaction log, back up the master database before and after the change.


Method 5 - Put TEMPDB in RAM:

In some circumstances, you can improve system performance by placing the temporary database (TEMPDB) in RAM. SQL Server builds temporary tables in this database and performs much of its internal sorting there.
First, this technique is appropriate only if the system has enough memory to meet SQL Server's data cache needs. If the system is short of memory to begin with, putting TEMPDB in RAM can reduce overall performance. Second, TEMPDB in RAM helps only if typical operations fit in the TEMPDB space you allocate. For example, if you assign 2MB of RAM to TEMPDB and each instance of your application regularly creates 10MB worktables, putting TEMPDB in RAM will not make much difference, because TEMPDB never has enough space to satisfy all the applications' needs.

Placing TEMPDB in RAM improves performance when users and applications make heavy use of TEMPDB. If you do not access TEMPDB often, the RAM investment can hurt performance, because TEMPDB now occupies valuable RAM that could otherwise serve as cache.

You can tell how heavily you use TEMPDB by running SHOWPLAN against your queries. If you often see worktables in the output, chances are you hit TEMPDB quite often. But if most of your queries do not require the engine to create worktables, putting TEMPDB in RAM probably wastes memory.
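A minimal sketch of this check, using the SQL Server 6.5 SHOWPLAN syntax (later releases use SET SHOWPLAN_TEXT ON) and the customer table from Table 1; in 6.5, a GROUP BY such as this one is processed through a worktable:

set showplan on
go
select last_name, count(*)
from customer_master
group by last_name
go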

Placing TEMPDB in RAM can also improve performance when applications do not benefit much from the data cache; in these cases, programs usually end up going to disk to locate their data instead of finding it in memory. For example, in applications where individual users search very different sets of data, one user has little chance of finding data that another user's work has already cached.

In general, however, using the RAM to cache data and index pages is probably better than placing TEMPDB in RAM. If you do decide to put TEMPDB in RAM, measure your performance before and after the change; if you do not see better performance, move TEMPDB back to disk.

To get the best performance from TEMPDB in RAM, restart the engine after making the change. If you make the change while the engine is running, SQL Server may not find contiguous memory to satisfy the requirement; if you restart the engine, TEMPDB's memory will be contiguous. Through Performance Monitor, SQL Server 6.5 now lets you track the maximum amount of space used in TEMPDB.
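A minimal sketch of the change (the option name follows the SQL Server 6.5 convention; setting the value back to 0 returns TEMPDB to disk):

-- allocate 10MB of RAM to TEMPDB; restart the server so the memory is contiguous
exec sp_configure 'tempdb in ram (MB)', 10
reconfigure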


Method 6 - Avoid Transactions for Certain Operations:

Use transactions only when your program modifies the database. Do not use transactions for queries, reports, extracts, or bulk operations.

If you run a query that performs no INSERT, UPDATE, or DELETE, you can omit transactions: select that option in your application development tool, or simply do not issue a BEGIN TRANSACTION. Many programmers open a transaction at the beginning of a report, but a transaction adds nothing to a report (unless the report updates tables). Opening a transaction in a report can even reduce system performance.

Sometimes you need to create temporary worktables, but the traditional concepts of database integrity and transaction control generally do not apply to worktables. Therefore, you can often avoid transactions entirely when you create or change information in worktables.

Bulk operations make sweeping changes to database tables, and recording these events in the transaction log is often unnecessary. You can disable logging for bulk operations by setting a database configuration flag, select into/bulkcopy, that determines whether SELECT INTO statements and bulk inserts are logged. Or you can modify the application code to bypass transactions when a bulk operation runs.
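A minimal sketch of flipping that flag with sp_dboption (the database name is the acquisition example used earlier):

-- permit nonlogged SELECT INTO and bulk copy operations in this database
exec sp_dboption 'acquisition', 'select into/bulkcopy', 'true'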

Sometimes you need transactions even when you are not changing data, to prevent unwanted data changes during an operation. For example, during a long query or report, you often want to freeze the underlying data until the report is complete. To do this, you ask the optimizer to lock the rows until the processing finishes, and holding those row locks requires beginning a transaction.
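A minimal sketch of this pattern, using the HOLDLOCK table hint to keep shared locks for the duration of the transaction (the table and threshold are illustrative, from Table 1):

begin transaction
    -- hold shared locks so the data cannot change while the report runs
    select account_number, account_balance
    from customer_master (holdlock)
    where account_balance > 10000
    -- ... generate the rest of the report ...
commit transaction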


Method 7 - Allocate the Correct Amount of Memory for Stored Procedure Caching:

Memory caching is an important part of the SQL Server architecture. The cache is divided between memory for data (the data cache) and memory for stored procedures (the procedure cache). Just as SQL Server uses the data cache to reduce the disk I/O required to retrieve data, SQL Server uses the procedure cache to find procedures in memory instead of reading them from the hard disk.

When you run a previously built stored procedure, SQL Server first looks in the procedure cache to see whether the procedure is already in memory. If it is, the engine uses the memory-resident version of the stored procedure. If not, the engine reads the procedure from the hard disk and places it in the procedure cache, consuming as many 2KB memory pages as required. When you create or compile a stored procedure, SQL Server also uses the procedure cache to hold that information for subsequent users. However, the engine does not let multiple users share the same query plan at the same time; stored procedures are reusable but not reentrant.

The memory configuration parameter sets the total amount of memory assigned to the engine. After SQL Server has started and defined all the necessary internal memory structures, it allocates the excess memory between the procedure cache and the data cache.

The procedure cache parameter tells the engine what percentage of this spare memory to assign to the procedure cache; the rest goes to the data cache. The default value for the procedure cache parameter is 30 percent. You can increase or decrease this value depending on how heavily your application uses stored procedures.
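A minimal sketch of raising that percentage with sp_configure (the option name follows the SQL Server 6.x convention; the new value takes effect when the server restarts):

-- give a stored-procedure-heavy workload a larger share of spare memory
exec sp_configure 'procedure cache', 40
reconfigure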

You can monitor stored procedure cache activity through the SQL Server procedure cache statistics in Performance Monitor. Pay special attention to the maximum values of counters such as

Max Procedure Buffers Active %
Max Procedure Buffers Used %
Max Procedure Cache Active %
Max Procedure Cache Used %

These values mark the high points reached since the last time the engine started.


Method 8 - Don't Create Too Many Indexes:
 
Some database administrators try to anticipate every possible sort and search combination by creating indexes on almost every column of every table. Too many indexes can hurt your system in several ways. Whenever an insert or delete completes, SQL Server must change the indexes as well as the data. When an indexed column is updated, the engine updates every affected index, an action that can have the unwanted side effect of making the engine restructure the index trees. This update activity can impede performance for every application that accesses the table and can hurt response time throughout the system, and you have no way of knowing when the engine is restructuring the index trees. Extra indexes also consume additional disk space. Finally, when the optimizer faces too many indexes, it may not choose the best-qualified index, and the database operation can run slower than it would with fewer indexes.

The best way to learn whether you have too many indexes is to test the database's operation. Simulate a typical workday, monitor each process with SHOWPLAN, and then examine the output. You can quickly determine which indexes SQL Server actually uses, and you can remove the indexes the engine rarely or never references.

Sometimes specific, easily recognized tasks, such as an end-of-month processing run, require additional indexes. In such cases, create the indexes immediately before you need them and drop them as soon as you are finished. At other times, you must run large batch update operations, which can be time-consuming if they must maintain too many indexes. You can benefit from creating a stored procedure that drops selected indexes, performs the operation, and then rebuilds the indexes. The total elapsed time can be less than if you let the batch update maintain the extra indexes.
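A minimal sketch of that drop-update-rebuild pattern, reusing the hypothetical wide index from Method 3:

-- drop the index the batch would otherwise have to maintain
drop index customer_master.ix_cust_wide

-- run the large batch update (illustrative change)
update customer_master
set account_balance = account_balance * 1.05
where account_balance > 0

-- rebuild the index once the batch completes
create index ix_cust_wide
    on customer_master (last_name, first_name, city, street)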


Method 9 - Use the Multiple Table DELETE Option:
 
Traditional SQL limits a DELETE operation to one table at a time. Transact-SQL provides a multiple-table DELETE capability that can reduce the number of individual engine calls. For example, to delete records in two tables, resources and parts, you could issue two SQL statements:

delete from resources where resource_cost > 5000
delete from parts where part_cost > 5000 

Or you can use Transact-SQL's multiple table DELETE extension:
delete from resources
from parts
where resources.resource_cost = parts.part_cost
and resources.resource_cost > 5000 

This approach is not portable, so you cannot run your application unchanged against other databases. But if you work only with SQL Server, the multiple-table DELETE is a handy shortcut. You can also use the same FROM extension with the UPDATE statement to modify data based on several tables at once.
