Date:      Tue, 20 Jan 2004 11:14:43 -0700
From:      "SQL-Server-Performance.Com" <webmaster@sql-server-performance.com>
To:        "Joseph Mallett" <freebsd-chat@freebsd.org>
Subject:   SQL-Server-Performance.Com Newsletter -- January 20, 2004
Message-ID:  <439810571-220041220181443656@alentus.com>

=================================================================

**SQL-Server-Performance.Com Newsletter**

=================================================================

January 20, 2004

=================================================================

Editor:     Brad M. McGehee, MVP, MCSE+I, MCSD
E-Mail:     mailto:webmaster@sql-server-performance.com

=================================================================

URL:        http://www.sql-server-performance.com/

=================================================================

Information on how to subscribe, unsubscribe, and to
change your e-mail address is at the bottom of this newsletter.
This is a 100% opt-in newsletter.

=================================================================




=================================================================
**Sponsor's Message**
=================================================================

COMPARE AND SYNCHRONIZE SQL SERVER DATABASES


Use Red Gate's tools for SQL Server databases for all your
comparison and synchronization tasks.  Tools include:

--SQL Compare: Compare and synchronize database structures

--SQL Data Compare: Compare and synchronize data in databases

--DTS Compare: Compare SQL Server settings, jobs, logins and
DTS packages

--SQL Toolkit: Automate SQL Compare and SQL Data Compare


Visit http://www.red-gate.com/sql/summary.htm for a FREE trial,
or contact mailto:sales@red-gate.com

=================================================================




=================================================================
**In This Issue**
=================================================================

-- New Information Published on SQL-Server-Performance.Com
-- New and Updated SQL Server Performance Tips
-- Learn & Win: January Forum Contest With FREE Software

=================================================================




=================================================================
**New Information Published on SQL-Server-Performance.Com**
=================================================================

Here are some new items that were recently posted to
SQL-Server-Performance.Com:


--New for Writers: If you submit an article to SQL-Server-
Performance.Com in 2004, and it is published, you will receive a
free copy of LeadByte Software's Network Performance Suite, worth
$977

--New Article: SQL Server Parallel Execution Plans


If you haven't seen these yet, check them out at:

http://www.sql-server-performance.com/

=================================================================




=================================================================
**Sponsor's Message**
=================================================================

FREE White Paper on Data Recovery Best Practices
-- by Stephen Wynkoop, MVP and Founder SSWUG

Planning for data recovery is about more than just making sure
your database is backed up. There are many things to consider. In
his paper, veteran SQL Server guru Stephen Wynkoop discusses the
best approaches to a solid data recovery solution. He outlines
key planning points and investigates how select tools can help
you accomplish a plan for success. This white paper is brought to
you, compliments of Lumigent Technologies.

Download it now at: http://www.lumigent.com/go/sd14

=================================================================




=================================================================
**SQL Server Performance Tuning and Optimization Tips**

The SQL Server performance tips listed below were recently added
or updated on the website.
=================================================================



**Blocking**

Blocking occurs when one connection to SQL Server locks one or
more records, and a second connection to SQL Server requires a
conflicting lock type on the record or records locked by the
first connection. This causes the second connection to wait until
the first connection releases its locks. By default, a connection
will wait an unlimited amount of time for the blocking lock to go
away. Blocking is not the same thing as a deadlock.

A certain amount of blocking is normal and unavoidable. But too
much blocking can cause connections (representing applications
and users) to wait extensive periods of time, hurting overall SQL
Server performance.  In the worst cases, blocking can escalate as
more and more connections are waiting for locks to be released,
creating extreme slowdowns. The goal should be to reduce blocking
as much as possible.

Locks held by SELECT statements are only held as long as it takes
to read the data, not the entire length of the transaction. On
the other hand, locks held by INSERT, UPDATE, and DELETE
statements are held until the entire transaction is complete.
This is done in order to allow easy rollback of a transaction, if
necessary.

Some causes of excessive blocking, and ways to help avoid
blocking, include:

--Long-running queries. Anytime a query of any type, whether it
is a SELECT, INSERT, UPDATE, or DELETE, takes more than a few
seconds to complete, blocking is likely. The obvious solution to
this is to keep transactions as short as possible. There are many
tips on this website on how to help reduce transaction time;
some of them include: optimize Transact-SQL code; optimize
indexes; break long transactions into multiple, smaller
transactions; and avoid cursors.

--Canceling queries, but not rolling them back. If your
application's code allows a running query to be cancelled, it is
important that the code also roll back the transaction. If this
doesn't happen, locks held by the query will not be released,
which means blocking can occur.

--Distributed client/server deadlock. No, this is not your
typical deadlock that is handled automatically by SQL Server, but
a very special situation that is not automatically resolved by
SQL Server.

Here's what can happen. Let's say that an application opens two
connections to SQL Server. The application then asynchronously
starts a transaction and sends a query through the first
connection to SQL Server, waiting for results. The application
then starts a second transaction and sends a query through the
second connection to SQL Server, waiting for results. At some
point, one of the queries from one of the connections will begin
to return results, and the application will then begin to process
them.

As the application processes the first query's results, the
remainder of those results may become blocked by the query
running on the other connection. In other words, the first query
cannot complete because it is being blocked by the second query.
Meanwhile, the second query tries to return its results, but the
application is still tied up processing the first query, so those
results are never consumed and the second query cannot complete
either. Because the second query never completes, the block on
the first query never clears, and a deadlock occurs. Neither
connection will give up, so neither connection ever ends, and the
deadlock never ends.

SQL Server is unable to resolve this type of deadlock, so unless
you want to write applications that hang forever, you can take
these steps to prevent this unusual situation: 1) Add a query
time-out for each of the queries, or 2) Add a lock time-out for
each of the queries, or 3) Use a bound connection for the
application.

In many ways, the best way to avoid blocking is to write well-
tuned applications that follow the tuning advice found on this
website. [7.0, 2000]

                             *****

By default, blocking locks don't time out. The waiting connection
waits until the lock is released, and the block is over. If you
like, you can set a lock time-out so that a connection does not
wait indefinitely for the blocking lock to be released. This is
accomplished using the LOCK_TIMEOUT setting.

The LOCK_TIMEOUT setting sets the maximum amount of time that a
connection will wait for a blocking lock to go away. The
connection that holds the lock and is causing the blocking
problem is not affected, but the waiting connection is halted and
receives error message 1222, "Lock request time-out period
exceeded."

This means that the application needs to include the appropriate
error-handling code to deal with this situation and take the
appropriate action, which includes rolling back the transaction.
If the application does not know how to deal with this error
message, and the transaction is not rolled back, the application
may continue as if the transaction had not been cancelled.
Because of this, you should not use the LOCK_TIMEOUT setting
unless the application(s) that will be affected by it know what
to do when they receive this message from SQL Server.

The syntax for SET LOCK_TIMEOUT is:

SET LOCK_TIMEOUT timeout_period

where timeout_period is the number of milliseconds that a
connection waits for a blocking lock to go away before an error
is returned from SQL Server to the application. A value of -1 is
the default, which means to wait indefinitely. A value of 0 tells
SQL Server not to wait at all, and to return the error
immediately.

This setting works on a per-connection basis, and stays with the
connection until the connection is broken or a new SET
LOCK_TIMEOUT command is issued. Updated [7.0, 2000]
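
As a sketch of how an application might combine LOCK_TIMEOUT with
the error handling described above (the Inventory table and its
columns are hypothetical, for illustration only):

```sql
-- Wait at most 5 seconds for any blocking lock before giving up.
SET LOCK_TIMEOUT 5000

BEGIN TRANSACTION
UPDATE Inventory             -- hypothetical table
SET QtyOnHand = QtyOnHand - 1
WHERE PartID = 42

IF @@ERROR = 1222            -- "Lock request time-out period exceeded"
BEGIN
    ROLLBACK TRANSACTION     -- undo the partial work, as the tip advises
    PRINT 'Blocked too long; transaction rolled back'
END
ELSE
    COMMIT TRANSACTION
```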

                             *****

Avoid INSERTing, UPDATEing, or DELETEing large numbers of records
in a single transaction. If you do, all the records affected by
your action will be locked until the transaction is done. If you
find that you need to perform mass data changes, it is better to
batch them into smaller, shorter transactions to prevent
unnecessary locking.

In addition, switching the database's recovery model from full
to simple (the "trunc. log on chkpt." option in versions before
2000) will also reduce the log overhead incurred by long-running
transactions. Once the long-running activity is done, you can
switch back to the full recovery model. [6.5, 7.0, 2000]
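
One common way to batch a mass DELETE on SQL Server 7.0/2000 is a
SET ROWCOUNT loop; the table and cutoff date below are
hypothetical:

```sql
-- Delete old rows 500 at a time so each transaction stays short
-- and locks are released between batches.
SET ROWCOUNT 500

WHILE 1 = 1
BEGIN
    DELETE FROM OrderHistory          -- hypothetical table
    WHERE OrderDate < '19990101'

    IF @@ROWCOUNT = 0 BREAK           -- nothing left to delete
END

SET ROWCOUNT 0                        -- restore normal behavior
```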

                             *****

One way to help identify blocking locks is to use Enterprise
Manager. If you expand "Process Info" under "Current Activity,"
for the appropriate server, and then scroll to the right of the
screen, you will see if there are currently any blocking locks.
If you do, you will see which SPID is blocking what other SPIDs.
Unfortunately, this screen is not dynamically updated, so you
will want to refresh it often if you are looking for blocking
locks. To refresh the screen, right-click on "Current Activity,"
not "Process Info," and then select "Refresh."

Most blocking locks go away soon. But if a blocking lock does
not go away, and it is preventing one or more users from
performing necessary tasks, you can ask the user whose SPID is
causing the blocking to exit the program that is causing the
block. Or, you can KILL the blocking SPID from Enterprise
Manager. KILLing the blocking SPID causes its current transaction
to roll back and allows the blocked SPIDs to continue. [7.0,
2000]
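
If you prefer a query to Enterprise Manager, the same blocking
information is available from the sysprocesses system table on
SQL Server 7.0/2000:

```sql
-- List every SPID that is currently blocked, and who is blocking it.
SELECT spid, blocked AS blocking_spid, waittime, lastwaittype, loginame
FROM master.dbo.sysprocesses
WHERE blocked <> 0

-- If necessary, terminate the offending connection, for example:
-- KILL 53
```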




=================================================================
**Sponsor's Message**
=================================================================

IntelliVIEW -- Interactive Reporting Tool for SQL Server

IntelliVIEW is an easy-to-use reporting solution that allows you
to create rich & interactive reports from your SQL Server
databases and integrate them into your applications. Design
interactive reports with ease; integrate reporting into .NET,
Java, and COM applications; analyze information in real-time; and
make faster, better-informed decisions.

Integrate Reporting into your SQL Server applications:

--Create virtually any type of report--summary, cross-tabs,
charts, etc.

--Manipulate data using convenient drag & drop facilities.

--Slash development time for creating reports by over 75%!

--Publish reports easily across the web.

--Export reports to popular formats like xls, pdf, rtf, etc.

--Print Professional looking reports using the WYSIWYG printing
features.

--Absolutely No Client Licensing Fees!

Download FREE client at http://www.intelliview.com/go/sqlperf

=================================================================


**Cursors**

Avoid using static/insensitive and keyset cursors, unless you
have no other choice. This is because they cause a temporary
table to be created in TEMPDB, which increases overhead and can
cause resource contention issues. [6.5, 7.0, 2000]

                             *****

If you have no choice but to use cursors in your application, try
to locate the SQL Server tempdb database on its own physical
device for best performance. This is because cursors use the
tempdb for temporary storage of cursor data. The faster your disk
array, the faster your cursor will be. [6.5, 7.0, 2000]

                             *****

Using cursors can reduce concurrency and lead to unnecessary
locking and blocking. To help avoid this, use the READ_ONLY
cursor option if applicable, or if you need to perform updates,
try to use the OPTIMISTIC cursor option to reduce locking. Try to
avoid the SCROLL_LOCKS cursor option, which reduces concurrency.
[6.5, 7.0, 2000]

                             *****

When you are done using a cursor, don't just CLOSE it; you must
also DEALLOCATE it. Deallocation is required to free up the SQL
Server resources used by the cursor. If you only CLOSE the
cursor, locks are freed, but SQL Server resources are not. If you
don't DEALLOCATE your cursors, the resources used by the cursor
will stay allocated, degrading the performance of your server
until they are released. [6.5, 7.0, 2000]
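
The full cursor lifecycle, including the READ_ONLY option and the
CLOSE/DEALLOCATE pair, looks something like this (using the
authors table from the pubs sample database):

```sql
DECLARE @name varchar(40)

-- READ_ONLY avoids update locks, as recommended above
DECLARE author_cursor CURSOR READ_ONLY
FOR SELECT au_lname FROM pubs.dbo.authors

OPEN author_cursor
FETCH NEXT FROM author_cursor INTO @name
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT @name
    FETCH NEXT FROM author_cursor INTO @name
END

CLOSE author_cursor        -- releases the cursor's locks
DEALLOCATE author_cursor   -- frees the cursor's data structures
```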

                             *****

If it is appropriate for your application, try to load the cursor
as soon as possible by moving to the last row of the result set.
This releases the share locks created when the cursor was built,
freeing up SQL Server resources. [6.5, 7.0, 2000]

                             *****

If you have to use a cursor because your application needs to
manually scroll through records and update them, try to avoid
client-side cursors, unless the number of rows is small or the
data is static. If the number of rows is large, or the data is
not static, consider using a server-side keyset cursor instead of
a client-side cursor. Performance is usually boosted because of a
reduction in network traffic between the client and the server.
For optimum performance, you may have to try both types of
cursors under realistic loads to determine which is best for your
particular environment. [6.5, 7.0, 2000]

                             *****

When using a server-side cursor, always try to fetch as small a
result set as possible. This includes fetching only those rows
and columns the client needs immediately. The smaller the cursor,
no matter what type of server-side cursor it is, the fewer
resources it will use, and performance will benefit. [6.5, 7.0,
2000]




=================================================================
**Sponsor's Message**
=================================================================

ELIMINATE SQL MAIL ACROSS YOUR ENTERPRISE IN 10 MINUTES OR LESS!

Introducing SQL SENTRY, a powerful, agent-less system for SQL
Server job and performance management that brings you
unprecedented scheduling, monitoring, alerting and reporting:

SCHEDULING:
Intuitive, Outlook-style calendar view of your job schedules.
With 10-minute, hour, 4-hr, day and week views, job conflicts
are clearly highlighted and easily resolved.

MONITORING:
Link Windows performance counters directly to jobs and guard
against failures before they occur by knowing how jobs are
impacting server performance.

ALERTING:
SQL Sentry handles notifications for all job and server
alerts, without agents. Alerting is centralized and SMTP-based,
with no dependencies on SQL Mail or MAPI.

REPORTING:
3-D runtime and performance charts provide unsurpassed
analysis and resolution capabilities for job-related issues.

For more information on SQL SENTRY, visit our website today:
http://www.sqlSentry.net/

=================================================================



**Database Design**

Bad logical database design results in bad physical database
design, and generally results in poor database performance. So,
if it is your responsibility to design a database from scratch,
be sure you take the necessary time and effort to get the logical
database design right. Once the logical design is right, then you
also need to take the time to get the physical design right.

Both the logical and physical design must be right before you can
expect to get good performance out of your database. If the
logical design is not right before you begin the development of
your application, it is too late after the application has been
implemented to fix it. No amount of fast, expensive hardware can
fix the poor performance caused by poor logical database design.
[6.5, 7.0, 2000]

                             *****

Following standard database normalization recommendations when
designing OLTP databases can greatly improve a database's
performance. Here's why:

--Helps to reduce the total amount of redundant data in the
database. The less data there is, the less work SQL Server has to
perform, speeding its performance.

--Helps to reduce the use of NULLS in the database. The use of
NULLs in a database can greatly reduce database performance,
especially in WHERE clauses.

--Helps to reduce the number of columns in tables, which means
that more rows can fit on a single data page, which helps to
boost SQL Server read performance.

--Helps to reduce the amount of Transact-SQL code that needs to be
written to deal with non-normalized data. The less code there is,
the less that has to run, speeding your application's
performance.

--Helps to maximize the use of clustered indexes, the most
powerful and useful type of index available to SQL Server. The
more data is separated into multiple tables because of
normalization, the more clustered indexes become available to
help speed up data access.

--Helps to reduce the total number of indexes in your database.
The fewer columns tables have, the less need there is for
multiple indexes to retrieve the data. And the fewer indexes
there are, the smaller the negative performance effect on
INSERTs, UPDATEs, and DELETEs. [6.5, 7.0, 2000]

                             *****

If normalizing your OLTP database forces you to create queries
with many multiple joins (4 or more), you may want to consider
denormalizing some of the tables in order to reduce the number of
required joins. Denormalization is the process of selectively
taking normalized tables and re-combining the data in them in
order to reduce the number of joins needed to produce the
necessary query results.

Sometimes the addition of a single column of redundant data to a
table from another table can reduce a 4-way join into a 2-way
join, significantly boosting performance by reducing the time it
takes to perform the join.

While denormalization can boost join performance, it can also
have negative effects. For example, by adding redundant data to
tables, you risk the following problems:

--More data means SQL Server has to read more data pages than
otherwise needed, hurting performance.

--Redundant data can lead to data anomalies and bad data.

--In many cases, extra code will have to be written to keep
redundant data in separate tables in sync, which adds to
database overhead.

As you consider whether to denormalize a database to speed joins,
be sure you first consider whether you have the proper indexes on
the tables to be joined. It is possible that your join
performance problem is more a problem of a lack of appropriate
indexes than of joining too many tables.

Before you decide to denormalize a properly normalized database,
be sure you thoroughly consider all of the implications and test
performance both before and after you denormalize to see if your
efforts have really bought you anything. [6.5, 7.0, 2000]
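
To make the idea concrete, here is a hypothetical sketch (none of
these tables come from the newsletter) of copying one redundant
column to eliminate a join:

```sql
-- Normalized: every order lookup joins to Customers for the name.
SELECT o.OrderID, c.CustomerName
FROM Orders o
JOIN Customers c ON c.CustomerID = o.CustomerID

-- Denormalized: the name is copied onto Orders, so the join
-- disappears, at the cost of keeping the redundant column in sync.
ALTER TABLE Orders ADD CustomerName varchar(50)

UPDATE o
SET CustomerName = c.CustomerName
FROM Orders o
JOIN Customers c ON c.CustomerID = o.CustomerID

SELECT OrderID, CustomerName FROM Orders
```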








=================================================================
Win & Learn: FREE SOFTWARE in the January Forum Contest!
=================================================================

SQL-Server-Performance.Com has teamed up with 9 companies (see
below) to give away FREE SQL Server software to each of 9
WINNERS (one prize per winner) of the January 2004 Forum Posting
Contest.

Participating in the forum is not only a great way to win free
software, but to learn a lot more about how to get the most out
of SQL Server.


--ApexSQL Code Generator ($399) from ApexSQL Software
http://www.apexsql.com

--NetworkSmart 2003 ($449) from LeadByte Software
http://www.leadbyte.com

--SQL Scribe Documentation Builder ($400) from A&G Software
http://www.ag-software.com

--DbNetGrid ($599) from DBNetLink
http://www.dbnetgrid.com

--myLittleAdmin ($490) from myLittleTools.net
http://www.mylittletools.net

--SQLZip ($500) from SQLZip Software
http://www.sqlzip.com

--mssqlXpress ($199) from XpressApps
http://www.xpressapps.com

--RapTier Professional ($299) from SharpPower.Com
http://www.sharppower.com

--Find Duplicates Wizard for SQL Server ($397) from Azlexica
http://www.findduplicates.com


The first place winner will get the first pick of the above free
software, the second place winner will get the second pick, the
third place winner will get the third pick, and so on.

We have also changed the rules for the forum contest, making it
even easier for participants to win.

To find out about this contest, and how you can participate,
please visit this webpage:

http://www.sql-server-performance.com/

=================================================================




=================================================================

There are only three ways you could have received this e-mail:
you subscribed to it, you joined our forum, or a friend forwarded
it to you. This is a 100% opt-in newsletter.

To learn how to advertise in this publication, visit:
http://www.sql-server-performance.com/sponsor_information.asp

To subscribe to this newsletter, visit:
http://www.sql-server-performance.com/subscribe_newsletter.asp

To unsubscribe from this newsletter, or to change your e-mail
address, click on this URL.

=================================================================

Copyright 2004 Brad M. McGehee. All rights reserved.

No part of this newsletter may be reproduced in whole or in part
without written permission.

=================================================================


