Showing posts with label inserts. Show all posts

Wednesday, March 28, 2012

Performance difference between "not exists" and left join?

Hi. I have a query that inserts rows into a table where the rows do not
already exist. I have two techniques I use to do this; one is "not exists":
where not exists
(
    select *
    from Supply
    where Supply.BranchNumber = XmlSupply.BranchNumber
      and Supply.RetailerCode = XmlSupply.RetailerCode
      and Supply.TitleCode = XmlSupply.TitleCode
      and Supply.IssueYear = XmlSupply.IssueYear
      and Supply.IssueNumber = XmlSupply.IssueNumber
)
and the other is a left join with a "where ... is null":
left join Supply s
    on s.BranchNumber = xs.BranchNumber
   and s.RetailerCode = xs.RetailerCode
   and s.TitleCode = xs.TitleCode
   and s.IssueYear = xs.IssueYear
   and s.IssueNumber = xs.IssueNumber
where s.BranchNumber is null
I cannot see any performance difference between the two, but then the table
only holds a small sample of data at present. Is there a difference between
them? Should I favour one over the other?
McGy
[url]http://mcgy.blogspot.com[/url]
Yes, you don't see any differences because the tables are small. I have seen
some performance improvement with LEFT JOIN on very large tables, so test it
and see what is going on.
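For anyone who wants to try the two patterns side by side, here is a minimal sketch in Python with an in-memory SQLite database standing in for the original T-SQL, using a hypothetical two-column key rather than the real five-column Supply/XmlSupply schema. Both anti-join forms return the same missing rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Supply    (BranchNumber INTEGER, RetailerCode TEXT,
                            PRIMARY KEY (BranchNumber, RetailerCode));
    CREATE TABLE XmlSupply (BranchNumber INTEGER, RetailerCode TEXT);
    INSERT INTO Supply    VALUES (1, 'A'), (2, 'B');
    INSERT INTO XmlSupply VALUES (1, 'A'), (2, 'B'), (3, 'C');
""")

# Pattern 1: WHERE NOT EXISTS
not_exists = conn.execute("""
    SELECT x.BranchNumber, x.RetailerCode
    FROM XmlSupply x
    WHERE NOT EXISTS (SELECT * FROM Supply s
                      WHERE s.BranchNumber = x.BranchNumber
                        AND s.RetailerCode = x.RetailerCode)
    ORDER BY x.BranchNumber
""").fetchall()

# Pattern 2: LEFT JOIN ... WHERE <join column> IS NULL
left_join = conn.execute("""
    SELECT x.BranchNumber, x.RetailerCode
    FROM XmlSupply x
    LEFT JOIN Supply s
      ON s.BranchNumber = x.BranchNumber
     AND s.RetailerCode = x.RetailerCode
    WHERE s.BranchNumber IS NULL
    ORDER BY x.BranchNumber
""").fetchall()

print(not_exists)  # [(3, 'C')]
print(left_join)   # [(3, 'C')]
```

Only the (3, 'C') row is missing from Supply, and both queries agree; on a toy data set like this any timing difference is noise, which matches what the original poster saw.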
"McGy" <anon@.anon.com> wrote in message
news:esRtvucRGHA.5296@.tk2msftngp13.phx.gbl...
> Hi. I have a query that inserts rows in a table where the rows do not
> already exists. I have two techniques I use to do this one is "not
> exists":
> where not exists
> (
> select
> *
> from
> Supply
> where
> Supply.BranchNumber = XmlSupply.BranchNumber
> and
> Supply.RetailerCode = XMLSupply.RetailerCode
> and
> Supply.TitleCode = XMLSupply.TitleCode
> and
> Supply.IssueYear = XMLSupply.IssueYear
> and
> Supply.IssueNumber = XMLSupply.IssueNumber
> )
> and the other is a left join and using a "where is null":
> left join
> Supply s
> on
> s.BranchNumber = xs.BranchNumber
> and
> s.RetailerCode = xs.RetailerCode
> and
> s.TitleCode = xs.TitleCode
> and
> s.IssueYear = xs.IssueYear
> and
> s.IssueNumber = xs.IssueNumber
> where
> s.BranchNumber is null
> I cannot see any performance difference between the two. But then the
> table
> only has a small sample in at present. Is there a difference between them?
> Should I favour one over the other?
> --
> McGy
> [url]http://mcgy.blogspot.com[/url]
>
>|||Check the execution plans. SQL Server will probably generate the same plans
if the queries are semantically identical and you'll get the same
performance if that's the case. Note that the queries will be the same
semantically only if you are including all the primary key columns from the
Supply table.
Hope this helps.
Dan Guzman
SQL Server MVP
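To see why Dan's caveat matters: the IS NULL test has to be on a join/key column, one that cannot be NULL in a matched row. A sketch of the pitfall in SQLite, with a hypothetical nullable Note column that is not part of the original schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Supply    (BranchNumber INTEGER PRIMARY KEY, Note TEXT);
    CREATE TABLE XmlSupply (BranchNumber INTEGER);
    -- branch 1 exists in Supply, but its nullable Note column is NULL
    INSERT INTO Supply    VALUES (1, NULL), (2, 'ok');
    INSERT INTO XmlSupply VALUES (1), (2), (3);
""")

# Correct: test the join column; it is NULL only when no match was found
right = conn.execute("""
    SELECT x.BranchNumber FROM XmlSupply x
    LEFT JOIN Supply s ON s.BranchNumber = x.BranchNumber
    WHERE s.BranchNumber IS NULL
    ORDER BY x.BranchNumber
""").fetchall()

# Wrong: test a nullable non-key column; matched branch 1 leaks through
wrong = conn.execute("""
    SELECT x.BranchNumber FROM XmlSupply x
    LEFT JOIN Supply s ON s.BranchNumber = x.BranchNumber
    WHERE s.Note IS NULL
    ORDER BY x.BranchNumber
""").fetchall()

print(right)  # [(3,)]
print(wrong)  # [(1,), (3,)]
```

NOT EXISTS has no equivalent trap, since it never inspects individual columns of the matched row.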
"McGy" <anon@.anon.com> wrote in message
news:esRtvucRGHA.5296@.tk2msftngp13.phx.gbl...
> Hi. I have a query that inserts rows in a table where the rows do not
> already exists. I have two techniques I use to do this one is "not
> exists":
> where not exists
> (
> select
> *
> from
> Supply
> where
> Supply.BranchNumber = XmlSupply.BranchNumber
> and
> Supply.RetailerCode = XMLSupply.RetailerCode
> and
> Supply.TitleCode = XMLSupply.TitleCode
> and
> Supply.IssueYear = XMLSupply.IssueYear
> and
> Supply.IssueNumber = XMLSupply.IssueNumber
> )
> and the other is a left join and using a "where is null":
> left join
> Supply s
> on
> s.BranchNumber = xs.BranchNumber
> and
> s.RetailerCode = xs.RetailerCode
> and
> s.TitleCode = xs.TitleCode
> and
> s.IssueYear = xs.IssueYear
> and
> s.IssueNumber = xs.IssueNumber
> where
> s.BranchNumber is null
> I cannot see any performance difference between the two. But then the
> table
> only has a small sample in at present. Is there a difference between them?
> Should I favour one over the other?
> --
> McGy
> [url]http://mcgy.blogspot.com[/url]
>
>|||Thanks for that.
McGy
[url]http://mcgy.blogspot.com[/url]
"Dan Guzman" <guzmanda@.nospam-online.sbcglobal.net> wrote in message
news:OSzsTYdRGHA.4960@.TK2MSFTNGP12.phx.gbl...
> Check the execution plans. SQL Server will probably generate the same
plans
> if the queries are semantically identical and you'll get the same
> performance if that's the case. Note that the queries will be the same
> semantically only if you are including all the primary key columns from
the
> Supply table.
> --
> Hope this helps.
> Dan Guzman
> SQL Server MVP
> "McGy" <anon@.anon.com> wrote in message
> news:esRtvucRGHA.5296@.tk2msftngp13.phx.gbl...
them?
>|||McGy (anon@.anon.com) writes:
> Hi. I have a query that inserts rows in a table where the rows do not
> already exists. I have two techniques I use to do this one is "not
> exists":
>...
> and the other is a left join and using a "where is null":
>...
> I cannot see any performance difference between the two. But then the
> table only has a small sample in at present. Is there a difference
> between them? Should I favour one over the other?
Some people claim that LEFT JOIN is faster than NOT EXISTS, but the only
right thing is to benchmark for each case.
Personally, I prefer to write my query to express what I am looking for,
which means that in most cases, I use NOT EXISTS.
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx|||Let's do a quick test
use pubs
go
--query 1
select Title as [titles not sold]
from titles t
left outer join sales s on t.title_id = s.title_id
where s.title_id is null
--query 2
select Title as [titles not sold]
from titles t
where not exists (select * from sales s where t.title_id = s.title_id)
Hit CTRL + K, press F5
Now look at the execution plan
query 1 = 50.02%
query 2 = 49.98%
so it looks like query 2 is a little (very little) more efficient in this
case since the first query has to apply a filter (s.title_id is null)
http://sqlservercode.blogspot.com/
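The same side-by-side check can be scripted outside Query Analyzer. A sketch in Python/SQLite, where EXPLAIN QUERY PLAN is a rough stand-in for SQL Server's showplan and the tiny titles/sales tables mimic pubs:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE titles (title_id TEXT PRIMARY KEY, title TEXT);
    CREATE TABLE sales  (title_id TEXT);
    INSERT INTO titles VALUES ('t1', 'Sold title'), ('t2', 'Unsold title');
    INSERT INTO sales  VALUES ('t1');
""")

q1 = """SELECT title FROM titles t
        LEFT OUTER JOIN sales s ON t.title_id = s.title_id
        WHERE s.title_id IS NULL"""
q2 = """SELECT title FROM titles t
        WHERE NOT EXISTS (SELECT * FROM sales s
                          WHERE t.title_id = s.title_id)"""

# Both anti-join forms return the same rows...
r1 = conn.execute(q1).fetchall()
r2 = conn.execute(q2).fetchall()
print(r1, r2)  # [('Unsold title',)] [('Unsold title',)]

# ...and the optimizer's chosen strategy can be compared directly
for q in (q1, q2):
    for row in conn.execute("EXPLAIN QUERY PLAN " + q):
        print(row)
```

Whether the two plans come out identical depends on the engine and version, which is exactly why checking them beats guessing.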
"McGy" <anon@.anon.com> wrote in message
news:esRtvucRGHA.5296@.tk2msftngp13.phx.gbl...
> Hi. I have a query that inserts rows in a table where the rows do not
> already exists. I have two techniques I use to do this one is "not
> exists":
> where not exists
> (
> select
> *
> from
> Supply
> where
> Supply.BranchNumber = XmlSupply.BranchNumber
> and
> Supply.RetailerCode = XMLSupply.RetailerCode
> and
> Supply.TitleCode = XMLSupply.TitleCode
> and
> Supply.IssueYear = XMLSupply.IssueYear
> and
> Supply.IssueNumber = XMLSupply.IssueNumber
> )
> and the other is a left join and using a "where is null":
> left join
> Supply s
> on
> s.BranchNumber = xs.BranchNumber
> and
> s.RetailerCode = xs.RetailerCode
> and
> s.TitleCode = xs.TitleCode
> and
> s.IssueYear = xs.IssueYear
> and
> s.IssueNumber = xs.IssueNumber
> where
> s.BranchNumber is null
> I cannot see any performance difference between the two. But then the
> table
> only has a small sample in at present. Is there a difference between them?
> Should I favour one over the other?
> --
> McGy
> [url]http://mcgy.blogspot.com[/url]
>
|||Really good thread :-) ... I also go with "NOT EXISTS". I have not had any
chance to use it in a production environment, or else I would have strongly
suggested it, as there was a design change in my data warehousing project.
I got a link; just search for "NOT EXISTS" in it:
http://msdn.microsoft.com/library/d...ntbpwithdts.asp
In that, MS suggests using "NOT EXISTS" to insert new dimension records...
What my peers say is that using 'NOT EXISTS' can minimise table scans while
searching for records...
1. Avoid using IN where possible. An IN can be replaced by an EXISTS or a
NOT EXISTS, which is often faster, as an EXISTS works by creating a join
between the two tables. If you really want to use IN, place the values that
appear most commonly in the database towards the start of the list, because
the query executor basically loops through the list left to right looking
for the presence of each value in turn.
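A further reason to prefer NOT EXISTS over NOT IN, separate from speed: they treat NULLs differently, and a single NULL in the subquery result makes NOT IN return no rows at all. A quick SQLite illustration with hypothetical dim/fact tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim  (id INTEGER);
    CREATE TABLE fact (dim_id INTEGER);
    INSERT INTO dim  VALUES (1), (2), (3);
    INSERT INTO fact VALUES (1), (NULL);   -- note the NULL
""")

# NOT IN: "2 NOT IN (1, NULL)" evaluates to UNKNOWN, so nothing qualifies
not_in = conn.execute(
    "SELECT id FROM dim WHERE id NOT IN (SELECT dim_id FROM fact) ORDER BY id"
).fetchall()

# NOT EXISTS: a NULL in the subquery simply never matches anything
not_exists = conn.execute("""
    SELECT id FROM dim d
    WHERE NOT EXISTS (SELECT * FROM fact f WHERE f.dim_id = d.id)
    ORDER BY id
""").fetchall()

print(not_in)      # []
print(not_exists)  # [(2,), (3,)]
```

The NOT IN form silently returns an empty result the moment the subquery produces a NULL, which is a correctness problem, not just a performance one.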
Have a look at the blog below as well:
http://blogs.claritycon.com/blogs/t...egory/1007.aspx
Thanks,
Sree
[Please specify the version of Sql Server as we can save one thread and time
asking back if its 2000 or 2005]
"SQL" wrote:

> Let's do a quick test
> use pubs
> go
> --query 1
> select Title as [titles not sold]
> from titles t left outer join sales s on t.title_id = s.title_id
> where s.title_id is null
> --query2
> select Title as [titles not sold]
> from titles t where not exists (select * from sales s where t.title_id =
> s.title_id)
> Hit CTRL + K, press F5
> Now look at the execution plan
> query 1 = 50.02%
> query 2 = 49.98%
> so it looks like query 2 is a little (very little) more efficient in this
> case since the first query has to apply a filter (s.title_id is null)
> http://sqlservercode.blogspot.com/
>
> "McGy" <anon@.anon.com> wrote in message
> news:esRtvucRGHA.5296@.tk2msftngp13.phx.gbl...
>
>

Monday, March 26, 2012

performance degradation after several inserts

We have a VB6 app that uses an MSDE SP3 database. We use a process with
transactions to add data to the database. As several similar packets of data
are added we've noticed a degradation in the speed of loading the
information. We've also noticed that if at the end of the insert of several
packets of data we get a 30 second upload time per packet, then shut down
MSDE, start it up again and then insert an almost identical packet of
data, it will run two to three times faster.
We tried turning autoshrink and autoclose off but that didn't seem to help.
Thanks,
Terry
hi Terry,
it can depend on the transaction... if you load one big transaction, you will
see a degradation as the transaction size increases... you can perhaps
separate the load into separate transactions..
Andrea Montanari (Microsoft MVP - SQL Server)
http://www.asql.biz/DbaMgr.shtmhttp://italy.mvps.org
DbaMgr2k ver 0.14.0 - DbaMgr ver 0.59.0
(my vb6+sql-dmo little try to provide MS MSDE 1.0 and MSDE 2000 a visual
interface)
-- remove DMO to reply
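Andrea's suggestion, splitting one huge load into several smaller transactions, can be sketched like this (Python/SQLite stand-in for the VB6/MSDE app; the packets table and batch size are illustrative only):

```python
import sqlite3

def load_in_batches(conn, rows, batch_size=1000):
    """Insert rows in separate transactions of batch_size each,
    instead of one huge transaction, to keep the log/lock footprint small."""
    cur = conn.cursor()
    for start in range(0, len(rows), batch_size):
        batch = rows[start:start + batch_size]
        cur.execute("BEGIN")
        cur.executemany("INSERT INTO packets (payload) VALUES (?)", batch)
        conn.commit()            # each batch commits on its own

conn = sqlite3.connect(":memory:")
conn.isolation_level = None      # manage transactions explicitly
conn.execute("CREATE TABLE packets (payload TEXT)")

load_in_batches(conn, [("pkt%d" % i,) for i in range(2500)], batch_size=1000)
print(conn.execute("SELECT COUNT(*) FROM packets").fetchone()[0])  # 2500
```

The trade-off is durability granularity: a failure mid-load leaves earlier batches committed, so the loader needs to be restartable from the last committed batch.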
|||Thanks for the reply Andrea!
The loads are separated into different transactions. If I use a monitor I
can see that memory use continues to grow with each load so it seems like
there is a memory leak somewhere.
We are planning to try:
DBCC FREEPROCCACHE
&
DBCC DROPCLEANBUFFERS
in between loads with our next build. Do you have any other suggestions?
molto grazie,
Terry
"Andrea Montanari" wrote:

> hi Terry,
> it can depend on the transaction... if you load a big transaction, you will
> see a degradation as the transaction size increses... you can perhaps
> separate the load into separated transactions..
> --
> Andrea Montanari (Microsoft MVP - SQL Server)
> http://www.asql.biz/DbaMgr.shtmhttp://italy.mvps.org
> DbaMgr2k ver 0.14.0 - DbaMgr ver 0.59.0
> (my vb6+sql-dmo little try to provide MS MSDE 1.0 and MSDE 2000 a visual
> interface)
|||hi Terry,
tgrignon wrote:
> Thanks for the reply Andrea!
> The loads are separated into different transactions. If I use a
> monitor I can see that memory use continues to grow with each load so
> it seems like there is a memory leak somewhere.
not a memory leak (or at least...) ... just the standard SQL Server
behavior... please see http://tinyurl.com/bvpg9 for some details on SQL
Server memory management...

> We are planning to try:
> DBCC FREEPROCCACHE
> &
> DBCC DROPCLEANBUFFERS
> in between loads with our next build. Do you have any other
> suggestions?
do not know if I like that...
those two DBCC commands will actually clear the procedure cache... that
usually brings a penalty when executing statements, due to recompilation of
stored procedures and plans, and more I/O activity due to refetching of
database pages...
and moreover, a nasty security requirement: they require high
privileges...
http://msdn.microsoft.com/library/de..._dbcc_7q5l.asp
http://msdn.microsoft.com/library/de..._dbcc_512h.asp
that is to say, sysadmin and serveradmin role membership...
if I could not speed up operations by separating the job into
multiple transactions, I think I'd stay with the time degradation...

> multo grazie,
prego, you are welcome
Andrea Montanari (Microsoft MVP - SQL Server)
http://www.asql.biz/DbaMgr.shtmhttp://italy.mvps.org
DbaMgr2k ver 0.14.0 - DbaMgr ver 0.59.0
(my vb6+sql-dmo little try to provide MS MSDE 1.0 and MSDE 2000 a visual
interface)
-- remove DMO to reply
|||I see what you mean about the permissions... but we tried
DBCC FREEPROCCACHE
in between loads and it works perfectly... the separate transaction loads
are back down to a manageable load time. I wonder if a stored procedure is
being left "on" somewhere in the code...
Any other ideas?
Thanks,
Terry
"Andrea Montanari" wrote:

> hi Terry,
> tgrignon wrote:
> not a memory leak (or at least...) ... just the standard SQL Server
> behavior... plese see http://tinyurl.com/bvpg9 for some details on SQL
> Server memory management...
>
> do not know I I like that
> those to DBCC commands will actually clear the procedure cache... that
> usually includes penalty executing statements due to recompilations of
> stored procedures, plans, more I/O activity due to refetching of database
> pages...
> and more, a nasty security requirements for them... they require high
> privileges...
> http://msdn.microsoft.com/library/de..._dbcc_7q5l.asp
> http://msdn.microsoft.com/library/de..._dbcc_512h.asp
> that's to say sysadmins and serveradmins membership...
> if I could eventually not speed up operations via separating the job into
> multiple transactions, I'd think I'd stay with the time degradation...
> prego, you are welcome
> --
> Andrea Montanari (Microsoft MVP - SQL Server)
> http://www.asql.biz/DbaMgr.shtmhttp://italy.mvps.org
> DbaMgr2k ver 0.14.0 - DbaMgr ver 0.59.0
> (my vb6+sql-dmo little try to provide MS MSDE 1.0 and MSDE 2000 a visual
> interface)
> -- remove DMO to reply
>
>
|||hi,
tgrignon wrote:
> I see what you mean about the permissions... but we tried
> DBCC FREEPROCCACHE
> in between loads and it works perfectly... the separate transaction
> loads are back down to a manageable load time. I wonder if a stored
> procedure is being left "on" somewhere in the code...
no idea... you'll have to check

> Any other ideas?
nope..
Andrea Montanari (Microsoft MVP - SQL Server)
http://www.asql.biz/DbaMgr.shtmhttp://italy.mvps.org
DbaMgr2k ver 0.14.0 - DbaMgr ver 0.59.0
(my vb6+sql-dmo little try to provide MS MSDE 1.0 and MSDE 2000 a visual
interface)
-- remove DMO to reply

Saturday, February 25, 2012

Performance issue

Looking for suggestions about what I might look for..
During periods of time when no INSERTs are taking place, SELECT statement
durations are around 16-30ms for hundreds of requests per minute.
When there is a batch of INSERTs that are sent to the table via SP that
number in the many thousands.. for a while both INSERTs and SELECTs coexist at
around the same duration of 16-30ms.. watching this with Profiler.
Then, for some reason.. some SELECTs and a few INSERTs will go up to the
4000-5000ms range. Then things will quiet down, then go back up to 4000ms.
These don't have to be on the same table and frequently are not... INSERTs are
going to one table while the majority are going to another table in the same
DB. This happens on all the servers we have, so it's not specific to one
server. They all have the same hardware and software.
I've looked for deadlocks and there are none. I've set up some of the common
counters in PERFMON such as CACHE BUFFER HIT RATIO. Looks OK. There's 4GB of
RAM on the server and there's 1GB available.
The MDF has its own separate drive and the LDF has its own drive.
It almost looks like there's some sort of contention between the INSERTs and
the SELECTs, but since they're in separate tables... I have no idea what to
look for.
Any help appreciated.
The servers were recently upgraded from SQL 7.. I'm not sure if this was
happening under SQL 7. I don't think so. I've updated statistics on all
servers and recreated the clustered indexes.
SQL 2000 with SP4.
thanks,
Don
Try updating stats with FULLSCAN. Also, trace the Lock Escalation event.
Tom
----
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
SQL Server MVP
Columnist, SQL Server Professional
Toronto, ON Canada
www.pinpub.com
.
"donsql22222" <donsql22222@.discussions.microsoft.com> wrote in message
news:776E5217-2EDB-40FF-BFA6-E8A3E1ADF980@.microsoft.com...
Looking for suggestions about what I might look for..
During periods of time when no INSERTS are taking place, SELECT statement
durations are around 16-30ms for hundreds of requests per minute.
When there is a batch of INSERTS that are sent to the table via SP that
number in the many thousands..for a while both INSERTS and SELECTS coexist
at
around the same duration of 16-30 ms..watching this with Profiler.
Then, for some reason..some SELECT and few INSERTS will go up to the 4000 -
5000 ms range. Then things will quiet down, then go back up to 4000ms. These
don't have to be on the same table and frequently are not...INSERTS are
going
to 1 table while the majority are going to another table in the same DB.
This
happens on all the servers we have so it's not specific to one server. They
all have the same hardware and software.
I've looked for deadlocks and there are none. I've set some of the common
things in PERFMON such as CACHE BUFFER HIT RATIO. Looks ok. There's 4GB of
RAM on the server and there's 1GB available.
The MDF has it's own separate drive and the LDF has it's own drive.
It's almost looks like theres some sort of contention between the INSERT and
the SELECT, but since it's in separate tables...i have no idea what to look
for.
any help appreciated.
The servers were recently upgraded from SQL 7 .. I'm not sure if this was
happening under SQL 7. I don't think so. I've updated statistics on all
servers and recreated the clustered indexes.
SQL 2000 with SP4.
thanks,
Don|||On Mon, 13 Feb 2006 16:38:26 -0800, "donsql22222"
<donsql22222@.discussions.microsoft.com> wrote:
>It almost looks like there's some sort of contention between the INSERT and
>the SELECT, but since it's in separate tables... I have no idea what to look
>for.
Of course there's contention, for LRU pages in memory, the more so
because they are different tables!
Run the queries during the slow period from QA with SET STATISTICS IO
ON, and look at the physical reads numbers going from 0 to whatever.
J.
