Hi,
we have an app that is written in a "special" way. Not using SPs. Instead,
dynamic SQL is used.
Also, it's written in a way that in order to update a single order, the app
sends 4-5 SQL statements; some of them INSERTs and some SELECTs.
Now here is the problem: some of the processes require updating hundreds of
thousands of "orders". Obviously the app will not (and does not) scale.
When such processes execute, SQL Server gets hammered. After some time in
the process, one CPU reaches 100% utilization and stays that way until the
end; this slows down the system to the point that other, read-only clients
get "timeout expired".
Now here is the bad part: even running on a top-of-the-line quad server does
not help, since one CPU hits 100% sooner or later.
I was reading TechNet, and it suggests that SQL Server cannot split very
small requests under such conditions. The problem is that there are millions
of very small (under 100 ms) requests coming from a single connection.
My question is: can I configure the server in a way that more than 1 CPU is
used?
Note: changing the code is not an option at this point.
Thanks,
Dima.
|||
First, check the server settings to ensure parallel execution is enabled
(it should be by default).
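For reference, that server-wide setting can be checked with sp_configure; a sketch (on SQL Server 2000/2005 it is an advanced option, so it must be made visible first):

```sql
-- Make advanced options visible to sp_configure
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- A run_value of 0 means plans may use all available CPUs (the default)
EXEC sp_configure 'max degree of parallelism';
```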
Second, start looking for another job. I'm not entirely joking here. If it
is as bad as you describe and management is unwilling to fix the
underlying flaws, then you are in database hell. Not a pretty place.
You said nothing about the indexing on the tables. Are there any indexes?
Any clustered indexes? This is the one place where you can make changes
that are unlikely to break client-side code.
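For example, listing whatever indexes a table already has is a single call ('Orders' is a hypothetical table name here):

```sql
-- Shows each index's name, description (clustered/nonclustered, unique),
-- and the columns it is keyed on
EXEC sp_helpindex 'Orders';
```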
Bob Castleman
DBA Poseur
"Dima Semensky" <dsemen@.newsgroup.nospam> wrote in message
news:%23YnIj09MFHA.1436@.TK2MSFTNGP10.phx.gbl...
|||
"Dima Semensky" <dsemen@.newsgroup.nospam> wrote in message
news:%23YnIj09MFHA.1436@.TK2MSFTNGP10.phx.gbl...
Let me see if I get it:
You have one session running batch processes which are comprised of hundreds
of thousands of inexpensive queries (single-row updates, selects, etc.).
When this process is running, other connections experience poor performance
and even timeouts.
And you wonder if you can configure the server to utilize multiple CPUs for
the work of the batch process. If that is your question, the answer is no.
And you wouldn't want to even if you could. In such a situation you want
the batch session to consume LESS, not MORE, server resources, in order not
to adversely affect other users. Fundamentally there's nothing fatal about
having a long-running batch job spike one of your CPUs (unless that batch
job holds locks required by other sessions).
First, determine if the other sessions' timeouts are caused by CPU
contention or by lock waits. If the other sessions are waiting on locks
held by the batch process, then your situation is grim. If they are victims
of CPU contention, you might be able to reduce the CPU cost of the queries,
increase the CPU resources of the server, and wiggle out of trouble.
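A rough way to tell lock waits from pure CPU contention while the batch is running is to look for blocked sessions (a sketch against the SQL Server 2000-era sysprocesses view):

```sql
-- Any row returned is a session waiting on a lock; the 'blocked'
-- column holds the spid of the session that owns that lock
SELECT spid, blocked, waittype, cmd, cpu
FROM master..sysprocesses
WHERE blocked <> 0;
```

If this comes back empty while the read-only clients are timing out, lock waits are probably not the cause.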
Under your constraints, here's what you can do. Capture and analyze the
commands coming from the batch connection. Examine the query plan for each
one and try to optimize them using the Index Tuning Wizard. But "tuning"
the database server rarely has much impact on poorly written applications.
It's worth a try, but there are limits to what you can do. You might get
lucky and find some glaring omission in the indexing which improves the
application's performance (the worse the application, the poorer the
indexing, right?).
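Inspecting the plan for one captured statement might look like this (the UPDATE is a made-up example of the kind of single-row statement the app sends):

```sql
-- Return the estimated plan as text instead of executing the statement
SET SHOWPLAN_TEXT ON;
GO
UPDATE Orders SET Status = 2 WHERE OrderID = 12345;  -- hypothetical statement
GO
SET SHOWPLAN_TEXT OFF;
GO
```

A table scan in the plan output for a single-row update is the kind of glaring omission worth hunting for.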
David
|||
Thanks David.
I have done what you suggested, and I am sure that this is a CPU contention
problem.
Indexes are there and work fine for smaller batches. However, when there is
a huge batch once in a while, it overloads the CPU, which affects the whole
server.
I was thinking about some setting I could use to evenly spread the load. So
instead of loading 1 CPU to 100%, I could load 4 at 25% each.
In my view, it is really a matter of configuration. If the app opened a new
connection for every statement, this situation would not arise; however,
there would be another performance problem, with too many opened/closed
connections per second.
So, if I could tell SQL Server to, e.g., handle each statement in a
different thread, say in a round-robin way, this could be a way out.
Note: this is a temporary solution to survive for a couple of months.
"David Browne" <davidbaxterbrowne no potted meat@.hotmail.com> wrote in
message news:eKUpMt%23MFHA.244@.tk2msftngp13.phx.gbl...
|||
Thanks Bob.
The reason for not changing the app is that a new solution will be replacing
it soon. However, in the meantime I'm looking into what other options I
have.
I'm just looking for a way to configure SQL Server to maybe somehow
"spread" the load.
"Bob Castleman" <nomail@.here> wrote in message
news:%23ojhJl%23MFHA.2604@.TK2MSFTNGP10.phx.gbl...
|||
On Mon, 28 Mar 2005 16:28:51 -0500, "Dima Semensky"
<dsemen@.newsgroup.nospam> wrote:
>this way until the end; this slows down the system to the point that other,
>read-only clients get "timeout expired".
For one thing, check your isolation levels.
Some of your read-only clients are probably blocked; if it's
acceptable to read dirty data, you can run with NOLOCK and see if
that helps.
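For what it's worth, the hint goes on each table reference; a sketch with placeholder table and column names:

```sql
-- Reads uncommitted ("dirty") data and takes no shared locks,
-- so this reader cannot be blocked by the batch's writes
SELECT OrderID, Status
FROM Orders WITH (NOLOCK)
WHERE CustomerID = 42;
```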
I can't think of any server-side tweaks that can help.
J.
|||
"Dima Semensky" <dsemen@.newsgroup.nospam> wrote in message
news:uF$k6V$MFHA.3900@.TK2MSFTNGP10.phx.gbl...
> Thanks David.
> I have done what you suggested, and I am sure that this is a CPU
> contention problem.
> Indexes are there and work fine for smaller batches. However, when there
> is a huge batch once in a while, it overloads the CPU, which affects the
> whole server.
> I was thinking about some setting I could use to evenly spread the load.
> So instead of loading 1 CPU to 100%, I could load 4 at 25% each.
>
That's not what would happen. If you parallelize the load, it would spike 4
CPUs to 100% for 1/4 the time (actually more, since parallel plans are less
efficient).
> In my view, it is really a matter of configuration. If the app opened a
> new connection for every statement, this situation would not arise;
> however, there would be another performance problem, with too many
> opened/closed connections per second.
The app would still be issuing the statements one at a time. But I'm
> So, if I could tell SQL Server to, e.g., handle each statement in a
> different thread, say in a round-robin way, this could be a way out.
That's not really the case. Each thread is scheduled on any available CPU.
So a single connection issuing multiple single statements might have its
work scheduled on various CPUs, or might have its work scheduled on the same
CPU each time. Which one of these happens is a detail of the SQL Server
User Mode Scheduler. But the net effect is the same. The batch processing
connection will eat 1000 ms of CPU time every second. Whether that is
250 ms on each of 4 processors or 1000 ms on 1 of 4 processors, the net
effect on other connections is the same. The batch process will occupy 25%
of available CPU resources.
One thing to check, though, is that the batch process isn't using any
parallel query plans. With a parallel query plan the batch process could
monopolize more than 1 CPU. There is a server-wide setting for the maximum
degree of parallelism (MAXDOP). To protect online work from being adversely
affected by large batch jobs, you might want to set MAXDOP=1.
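Setting it server-wide is an sp_configure change (it takes effect immediately, no restart needed):

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Cap every query plan at one CPU
EXEC sp_configure 'max degree of parallelism', 1;
RECONFIGURE;
```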
> Note: this is a temporary solution to survive for a couple of months.
>
Been there.
David
|||
Sorry for the sarcasm, but I still think you're in database hell.
You didn't say anything about memory utilization. Is just the CPU getting
consumed? What about other resources?
If the read-only clients are truly read-only (i.e., NEVER updating), then
how about replicating to another server? Then the read-only clients would
not have to contend with CPU consumption on the transactional database.
Bob Castleman
DBA Poseur
"Dima Semensky" <dsemen@.newsgroup.nospam> wrote in message
news:evWx4W$MFHA.2580@.TK2MSFTNGP09.phx.gbl...
|||
4 CPUs, 4 GB RAM, SAN.
Yes, the app would issue one statement at a time and there is no way around
it. However, if for example SQL Server sent each statement to a different
thread, then all CPUs would be working instead of a single one.
It's a mystery to me why SQL Server goes to 100% at some point if the app
sends the statements in sequential order. Maybe something related to the
way it works with threads...
"David Browne" <davidbaxterbrowne no potted meat@.hotmail.com> wrote in
message news:OafLAh$MFHA.3668@.TK2MSFTNGP14.phx.gbl...
|||
Thanks.
All reads are NOLOCK.
For a long time I couldn't understand why the thing times out. No or little
blocking, no deadlocks, all reads are done with NOLOCK.
Finally, I found that one of the CPUs is hammered at 100% for several hours.
At that point (even though this is a quad server), reads start slowing down
and eventually time out.
"JRStern" <jxstern@.bogus.com> wrote in message
news:748h41l7i889u1n297o616jpagucieb4gs@.4ax.com...