Hi,
I have a problem explaining why a tempdb (#) table is 4
to 5 times faster than a local (user database) table.
To illustrate the performance differences I have the
following script:
=======================================================
USE tempdb
GO
CREATE TABLE #TEST_INSERT_20050211 ( LIG DEC(7,0) NULL , DES VARCHAR(50) NULL )
GO
SET IMPLICIT_TRANSACTIONS OFF
SET NOCOUNT ON
DECLARE @I INT
DECLARE @B DATETIME
DECLARE @E DATETIME
SET @B = GETDATE()
SET @I = 0
WHILE @I <= 30000
BEGIN
    SET @I = @I + 1
    INSERT #TEST_INSERT_20050211 ( LIG , DES )
    VALUES ( @I , CONVERT(VARCHAR(10), @I) )
END
SET @E = GETDATE()
SELECT DATEDIFF(ms, @B, @E) AS Milli_sec,
       DATEDIFF(ss, @B, @E) AS Sec,
       @I AS NB_Rows,
       LTRIM(RTRIM(@@SERVERNAME)) + '-' + DB_NAME() + ' [ INSERT TEMPDB ]' AS Server_Base
SET NOCOUNT OFF
GO
DROP TABLE #TEST_INSERT_20050211
GO
-- second test
USE Northwind
GO
IF OBJECT_ID('TEST_INSERT_20050211') IS NOT NULL
    DROP TABLE TEST_INSERT_20050211
GO
CREATE TABLE TEST_INSERT_20050211 ( LIG DEC(7,0) NULL , DES VARCHAR(50) NULL )
GO
SET IMPLICIT_TRANSACTIONS OFF
SET NOCOUNT ON
DECLARE @I INT
DECLARE @B DATETIME
DECLARE @E DATETIME
SET @B = GETDATE()
SET @I = 0
WHILE @I <= 30000
BEGIN
    SET @I = @I + 1
    INSERT TEST_INSERT_20050211 ( LIG , DES )
    VALUES ( @I , CONVERT(VARCHAR(10), @I) )
END
SET @E = GETDATE()
SELECT DATEDIFF(ms, @B, @E) AS Milli_sec,
       DATEDIFF(ss, @B, @E) AS Sec,
       @I AS NB_Rows,
       LTRIM(RTRIM(@@SERVERNAME)) + '-' + DB_NAME() + ' [ INSERT ]' AS Server_Base
SET NOCOUNT OFF
GO
=======================================================
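A side note on the loop shape above (an editorial sketch, not part of the original test): with IMPLICIT_TRANSACTIONS OFF, each INSERT is its own autocommitted transaction, and every commit against a user database forces a synchronous transaction-log flush. Wrapping the loop in one explicit transaction is a quick way to measure how much of the gap comes from those per-row flushes:

```sql
-- Same Northwind loop, but one explicit transaction instead of ~30,000
-- autocommitted inserts. If per-row log flushes dominate, this version
-- should land much closer to the tempdb timing.
-- Assumes TEST_INSERT_20050211 already exists from the script above.
USE Northwind
GO
DECLARE @I INT, @B DATETIME, @E DATETIME
SET @B = GETDATE()
SET @I = 0
BEGIN TRANSACTION
WHILE @I <= 30000
BEGIN
    SET @I = @I + 1
    INSERT TEST_INSERT_20050211 ( LIG , DES )
    VALUES ( @I , CONVERT(VARCHAR(10), @I) )
END
COMMIT TRANSACTION
SET @E = GETDATE()
SELECT DATEDIFF(ms, @B, @E) AS Milli_sec
```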
The difference is 2 (or 3) seconds vs. 10 seconds!
I turned the PhysicalDisk performance counters on and found that,
in the tempdb test, disk activity is very, very small, while for
the local (user database) table the disk activity is consistent
with the difference in execution time.
One of our clients has a real-life process that takes 35 minutes
to finish if he creates his table in tempdb and almost 5 hours
with a local table.
I need to know WHY this happens so that I can explain it, because
he insists this difference in execution time comes from a
misconfigured SQL Server parameter! We have already tested on
three different servers and the results always differ in the
same way.
Could anyone explain this?
Thanks a lot in advance!
Flor
I wouldn't expect such a big difference, but for temporary objects in tempdb, REDO operations are not
logged to the transaction log (tempdb is rebuilt at startup, so the engine only logs enough to roll back).
Do you have tempdb on the same drive etc. as the Northwind database?
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
http://www.sqlug.se/
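Tibor's logging point can be checked directly. fn_dblog is an undocumented function that returns the log records of the current database, so comparing its output right after each test run gives a rough picture of how much was logged (a diagnostic sketch; the counts also include any unrelated activity in those databases):

```sql
-- Rough comparison of log volume in each database, run right after each test.
-- ::fn_dblog is undocumented and intended for ad-hoc diagnostics only.
USE Northwind
GO
SELECT COUNT(*) AS log_records, SUM([Log Record Length]) AS log_bytes
FROM ::fn_dblog(NULL, NULL)
GO
USE tempdb
GO
SELECT COUNT(*) AS log_records, SUM([Log Record Length]) AS log_bytes
FROM ::fn_dblog(NULL, NULL)
GO
```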
"Flor Bottazzi" <f.bottazzi@castsoftware.com> wrote in message
news:159001c5104c$264d82a0$a401280a@phx.gbl...
Yes, they're on the same drive on my XP office computer
(where I tested).
As for the server my client works on (the big, long process),
his user database and tempdb are on two different drives
(different controllers): tempdb on a RAID 1 and the user
database on a RAID 5.
Thanks
Flor
OK, so REDO logging can still explain part of the difference, and on your client's machine RAID 1 vs. RAID 5
can explain some more. I'm still surprised that you see such a difference on the machine where both are on
the same drive; I wouldn't expect the reduced logging in tempdb to have such a big impact.
Have you checked the execution plans?
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
http://www.sqlug.se/
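For readers who want to follow this suggestion, a minimal way to look at the plan for the insert without running the full loop (a sketch; SHOWPLAN_TEXT suppresses execution and returns the estimated plan as text instead):

```sql
-- Show the estimated plan for one representative insert, without executing it.
SET SHOWPLAN_TEXT ON
GO
INSERT TEST_INSERT_20050211 ( LIG , DES ) VALUES ( 1 , '1' )
GO
SET SHOWPLAN_TEXT OFF
GO
```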
"Flor Bottazzi" <anonymous@discussions.microsoft.com> wrote in message
news:064a01c51051$15f01c60$a501280a@phx.gbl...
It's possible that the difference has to do with the transaction logging
involved. RAID 1 is fast at writing; RAID 5 is the slowest
implementation possible (and many DBAs these days recommend staying away
from RAID 5 altogether). You are inserting 30,000 rows. For tempdb, you
are on the RAID 1 array and running in the simple recovery model, so you
have pretty fast writes and auto-truncation of the log file as needed.
For the user database, you are on a RAID 5 array (and I assume the t-log
is on the same array, which is not good). That means very slow writes
and the possibility of auto-grow operations, which are also slow. Make
sure that AutoShrink is not on in the user database.
You should consider moving the t-log for the user database to the RAID 1
array and testing again.
David Gugick
Imceda Software
www.imceda.com
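David's two suggestions can be sketched as follows; the file paths and target drive letters below are illustrative assumptions (check the real file names with sp_helpfile first):

```sql
-- 1. Verify AutoShrink is off in the user database (1 = on, 0 = off).
SELECT DATABASEPROPERTYEX('Northwind', 'IsAutoShrink') AS IsAutoShrink
GO
-- 2. One way to move the t-log on SQL Server 2000: detach, move the .ldf
--    at the OS level, then reattach pointing at the new location.
EXEC sp_detach_db 'Northwind'
-- ...copy Northwind_log.ldf to the RAID 1 array here...
EXEC sp_attach_db @dbname    = 'Northwind',
                  @filename1 = 'D:\SQLData\Northwind.mdf',
                  @filename2 = 'E:\SQLLogs\Northwind_log.ldf'
```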
I tried to, but memory resources kept getting lower and
lower and the query was already taking more than 5
minutes, so I had to stop it.
I tried to see the execution plan through Profiler, and I
would say that for each iteration of the WHILE loop it
generated a plan, and resources kept dropping as well.
Flor
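Flor's per-iteration observation points at a second cost of row-by-row processing. A set-based rewrite compiles one plan and issues one statement; this sketch (not part of the original thread) uses the classic master..spt_values trick, an undocumented numbers table whose type 'P' rows hold 0-2047, to generate the same 1..30001 sequence without a loop:

```sql
-- Set-based equivalent of the WHILE loop: one INSERT, one plan.
-- master..spt_values (type = 'P') is an undocumented 0-2047 number list;
-- cross-joining it with itself yields enough distinct values.
INSERT TEST_INSERT_20050211 ( LIG , DES )
SELECT a.number * 2048 + b.number + 1,
       CONVERT(VARCHAR(10), a.number * 2048 + b.number + 1)
FROM master..spt_values a
CROSS JOIN master..spt_values b
WHERE a.type = 'P' AND b.type = 'P'
  AND a.number * 2048 + b.number + 1 <= 30001
```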
>--Original Message--
>OK, so REDO can still explain part of the difference. On
yours client machine, RAID 1 vs RAID 5 can
>also explain some difference. I'm still surprised that
you see such a difference on the machine
>where you have on same drive, I wouldn't expect the less
logging in tempdb have such big impact.
>Have you checked the execution plans?
>--
>Tibor Karaszi, SQL Server MVP
>http://www.karaszi.com/sqlserver/default.asp
>http://www.solidqualitylearning.com/
>http://www.sqlug.se/
>
>"Flor Bottazzi" <anonymous@.discussions.microsoft.com>
wrote in message[vbcol=seagreen]
>news:064a01c51051$15f01c60$a501280a@.phx.gbl...
is 4[vbcol=seagreen]
================================================== =====[vbcol=seagreen]
AS[vbcol=seagreen]
================================================== ====[vbcol=seagreen]
very,[vbcol=seagreen]
and[vbcol=seagreen]
have
>
>.
>
Wednesday, March 28, 2012
Performance differences local vs. tempdb table
Performance differences local vs. tempdb table
Hi,
I have a problem explaining why a tempdb (#) table is 4
to 5 times faster than a local (user database) table.
To illustrate the performance differences I have the
following script:
======================================================= USE TEMPDB
GO
CREATE TABLE #TEST_INSERT_20050211 ( LIG DEC(7,0)
NULL , DES Varchar(50) NULL )
GO
SET IMPLICIT_TRANSACTIONS OFF
SET NOCOUNT On
DECLARE @.I INT
DECLARE @.B datetime
DECLARE @.E datetime
SET @.B = Getdate()
SET @.I = 0
WHILE @.I <= 30000
BEgin
SET @.I = @.I + 1
INSERT #TEST_INSERT_20050211 ( LIG , DES )
VALUES ( @.I , Convert(varchar(10 ) , @.I ) )
End
SET @.E = Getdate()
SELECT DATEDIFF ( ms, @.B, @.E ) AS Milli_sec, DATEDIFF
( SS, @.B, @.E ) AS Sec , @.I AS NB_Rows, Ltrim(Rtrim
(@.@.SERVERNAME))+'-'+DB_NAME()+ ' [ INSERT TEMPDB ]' AS
Server_Base
SET NOCOUNT OFF
GO
DROP TABLE #TEST_INSERT_20050211
GO
-- second test
USE Northwind
GO
DROP TABLE TEST_INSERT_20050211
GO
CREATE TABLE TEST_INSERT_20050211 ( LIG DEC(7,0)
NULL , DES Varchar(50) NULL , )
GO
SET IMPLICIT_TRANSACTIONS OFF
SET NOCOUNT On
DECLARE @.I INT
DECLARE @.B datetime
DECLARE @.E datetime
SET @.B = Getdate()
SET @.I = 0
WHILE @.I <= 30000
BEgin
SET @.I = @.I + 1 INSERT
TEST_INSERT_20050211 ( LIG , DES ) VALUES ( @.I , Convert
(varchar(10 ) , @.I ) )
End
SET @.E = Getdate()
SELECT DATEDIFF ( ms, @.B, @.E ) AS Milli_sec, DATEDIFF
( SS, @.B, @.E ) AS Sec , @.I AS NB_Rows, Ltrim(Rtrim
(@.@.SERVERNAME))+'-'+DB_NAME()+ ' [INSERT ] ' AS
Server_Base
SET NOCOUNT OFF
GO
======================================================
The differences are 2(or 3) secs vs. 10 secs!
I turned Physical Disk Perf Counters on and found out
that, on the tempdb example, the disk activity is very,
very small. For the local (user database) table, the disk
activity is consequent with the differences in the
execution time.
One of our clients has a real life process that takes 35
minutes to finish if he creates his table in tempdb and
almost 5 hours with a local table.
I would need to >know why< this happens to be able to
explain, because he insists this difference of execution
time comes from a bad parameter in SQL Server! We have
already tested on three different servers and the results
always differ in the same way.
Could anyone explain this?
Thanks a lot in advance!
FlorI wouldn't expect such a big difference, but for temp object in tempdb, REDO operations are not
logged to the transaction log.
Do you have tempdb on same drive etc as the northwind database?
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
http://www.sqlug.se/
"Flor Bottazzi" <f.bottazzi@.castsoftware.com> wrote in message
news:159001c5104c$264d82a0$a401280a@.phx.gbl...
> Hi,
> I have a problem explaining why a tempdb (#) table is 4
> to 5 times faster than a local (user database) table.
> To illustrate the performance differences I have the
> following script:
> =======================================================> USE TEMPDB
> GO
> CREATE TABLE #TEST_INSERT_20050211 ( LIG DEC(7,0)
> NULL , DES Varchar(50) NULL )
> GO
> SET IMPLICIT_TRANSACTIONS OFF
> SET NOCOUNT On
> DECLARE @.I INT
> DECLARE @.B datetime
> DECLARE @.E datetime
> SET @.B = Getdate()
> SET @.I = 0
> WHILE @.I <= 30000
> BEgin
> SET @.I = @.I + 1
> INSERT #TEST_INSERT_20050211 ( LIG , DES )
> VALUES ( @.I , Convert(varchar(10 ) , @.I ) )
> End
> SET @.E = Getdate()
> SELECT DATEDIFF ( ms, @.B, @.E ) AS Milli_sec, DATEDIFF
> ( SS, @.B, @.E ) AS Sec , @.I AS NB_Rows, Ltrim(Rtrim
> (@.@.SERVERNAME))+'-'+DB_NAME()+ ' [ INSERT TEMPDB ]' AS
> Server_Base
> SET NOCOUNT OFF
> GO
> DROP TABLE #TEST_INSERT_20050211
> GO
> -- second test
> USE Northwind
> GO
> DROP TABLE TEST_INSERT_20050211
> GO
> CREATE TABLE TEST_INSERT_20050211 ( LIG DEC(7,0)
> NULL , DES Varchar(50) NULL , )
> GO
> SET IMPLICIT_TRANSACTIONS OFF
> SET NOCOUNT On
> DECLARE @.I INT
> DECLARE @.B datetime
> DECLARE @.E datetime
> SET @.B = Getdate()
> SET @.I = 0
> WHILE @.I <= 30000
> BEgin
> SET @.I = @.I + 1 INSERT
> TEST_INSERT_20050211 ( LIG , DES ) VALUES ( @.I , Convert
> (varchar(10 ) , @.I ) )
> End
> SET @.E = Getdate()
> SELECT DATEDIFF ( ms, @.B, @.E ) AS Milli_sec, DATEDIFF
> ( SS, @.B, @.E ) AS Sec , @.I AS NB_Rows, Ltrim(Rtrim
> (@.@.SERVERNAME))+'-'+DB_NAME()+ ' [INSERT ] ' AS
> Server_Base
> SET NOCOUNT OFF
> GO
> ======================================================> The differences are 2(or 3) secs vs. 10 secs!
> I turned Physical Disk Perf Counters on and found out
> that, on the tempdb example, the disk activity is very,
> very small. For the local (user database) table, the disk
> activity is consequent with the differences in the
> execution time.
> One of our clients has a real life process that takes 35
> minutes to finish if he creates his table in tempdb and
> almost 5 hours with a local table.
> I would need to >know why< this happens to be able to
> explain, because he insists this difference of execution
> time comes from a bad parameter in SQL Server! We have
> already tested on three different servers and the results
> always differ in the same way.
> Could anyone explain this?
> Thanks a lot in advance!
> Flor
>|||Yes, they're on the same drive on my XP office computer
(where I tested).
As for the server my client works on (the big long
process) his user database and tempdb are on two
different drives (different controllers). Tempdb on a
RAID1 and the user database on a RAID5.
Thanks
Flor
>--Original Message--
>I wouldn't expect such a big difference, but for temp
object in tempdb, REDO operations are not
>logged to the transaction log.
>Do you have tempdb on same drive etc as the northwind
database?
>--
>Tibor Karaszi, SQL Server MVP
>http://www.karaszi.com/sqlserver/default.asp
>http://www.solidqualitylearning.com/
>http://www.sqlug.se/
>
>"Flor Bottazzi" <f.bottazzi@.castsoftware.com> wrote in
message
>news:159001c5104c$264d82a0$a401280a@.phx.gbl...
>> Hi,
>> I have a problem explaining why a tempdb (#) table is 4
>> to 5 times faster than a local (user database) table.
>> To illustrate the performance differences I have the
>> following script:
>> =======================================================>> USE TEMPDB
>> GO
>> CREATE TABLE #TEST_INSERT_20050211 ( LIG DEC(7,0)
>> NULL , DES Varchar(50) NULL )
>> GO
>> SET IMPLICIT_TRANSACTIONS OFF
>> SET NOCOUNT On
>> DECLARE @.I INT
>> DECLARE @.B datetime
>> DECLARE @.E datetime
>> SET @.B = Getdate()
>> SET @.I = 0
>> WHILE @.I <= 30000
>> BEgin
>> SET @.I = @.I + 1
>> INSERT #TEST_INSERT_20050211 ( LIG , DES )
>> VALUES ( @.I , Convert(varchar(10 ) , @.I ) )
>> End
>> SET @.E = Getdate()
>> SELECT DATEDIFF ( ms, @.B, @.E ) AS Milli_sec, DATEDIFF
>> ( SS, @.B, @.E ) AS Sec , @.I AS NB_Rows, Ltrim(Rtrim
>> (@.@.SERVERNAME))+'-'+DB_NAME()+ ' [ INSERT TEMPDB ]' AS
>> Server_Base
>> SET NOCOUNT OFF
>> GO
>> DROP TABLE #TEST_INSERT_20050211
>> GO
>> -- second test
>> USE Northwind
>> GO
>> DROP TABLE TEST_INSERT_20050211
>> GO
>> CREATE TABLE TEST_INSERT_20050211 ( LIG DEC(7,0)
>> NULL , DES Varchar(50) NULL , )
>> GO
>> SET IMPLICIT_TRANSACTIONS OFF
>> SET NOCOUNT On
>> DECLARE @.I INT
>> DECLARE @.B datetime
>> DECLARE @.E datetime
>> SET @.B = Getdate()
>> SET @.I = 0
>> WHILE @.I <= 30000
>> BEgin
>> SET @.I = @.I + 1 INSERT
>> TEST_INSERT_20050211 ( LIG , DES ) VALUES ( @.I ,
Convert
>> (varchar(10 ) , @.I ) )
>> End
>> SET @.E = Getdate()
>> SELECT DATEDIFF ( ms, @.B, @.E ) AS Milli_sec, DATEDIFF
>> ( SS, @.B, @.E ) AS Sec , @.I AS NB_Rows, Ltrim(Rtrim
>> (@.@.SERVERNAME))+'-'+DB_NAME()+ ' [INSERT ] ' AS
>> Server_Base
>> SET NOCOUNT OFF
>> GO
>> ======================================================>> The differences are 2(or 3) secs vs. 10 secs!
>> I turned Physical Disk Perf Counters on and found out
>> that, on the tempdb example, the disk activity is very,
>> very small. For the local (user database) table, the
disk
>> activity is consequent with the differences in the
>> execution time.
>> One of our clients has a real life process that takes
35
>> minutes to finish if he creates his table in tempdb and
>> almost 5 hours with a local table.
>> I would need to >know why< this happens to be able to
>> explain, because he insists this difference of
execution
>> time comes from a bad parameter in SQL Server! We have
>> already tested on three different servers and the
results
>> always differ in the same way.
>> Could anyone explain this?
>> Thanks a lot in advance!
>> Flor
>
>.
>|||OK, so REDO can still explain part of the difference. On yours client machine, RAID 1 vs RAID 5 can
also explain some difference. I'm still surprised that you see such a difference on the machine
where you have on same drive, I wouldn't expect the less logging in tempdb have such big impact.
Have you checked the execution plans?
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
http://www.sqlug.se/
"Flor Bottazzi" <anonymous@.discussions.microsoft.com> wrote in message
news:064a01c51051$15f01c60$a501280a@.phx.gbl...
> Yes, they're on the same drive on my XP office computer
> (where I tested).
> As for the server my client works on (the big long
> process) his user database and tempdb are on two
> different drives (different controllers). Tempdb on a
> RAID1 and the user database on a RAID5.
> Thanks
> Flor
>>--Original Message--
>>I wouldn't expect such a big difference, but for temp
> object in tempdb, REDO operations are not
>>logged to the transaction log.
>>Do you have tempdb on same drive etc as the northwind
> database?
>>--
>>Tibor Karaszi, SQL Server MVP
>>http://www.karaszi.com/sqlserver/default.asp
>>http://www.solidqualitylearning.com/
>>http://www.sqlug.se/
>>
>>"Flor Bottazzi" <f.bottazzi@.castsoftware.com> wrote in
> message
>>news:159001c5104c$264d82a0$a401280a@.phx.gbl...
>> Hi,
>> I have a problem explaining why a tempdb (#) table is 4
>> to 5 times faster than a local (user database) table.
>> To illustrate the performance differences I have the
>> following script:
>> =======================================================>> USE TEMPDB
>> GO
>> CREATE TABLE #TEST_INSERT_20050211 ( LIG DEC(7,0)
>> NULL , DES Varchar(50) NULL )
>> GO
>> SET IMPLICIT_TRANSACTIONS OFF
>> SET NOCOUNT On
>> DECLARE @.I INT
>> DECLARE @.B datetime
>> DECLARE @.E datetime
>> SET @.B = Getdate()
>> SET @.I = 0
>> WHILE @.I <= 30000
>> BEgin
>> SET @.I = @.I + 1
>> INSERT #TEST_INSERT_20050211 ( LIG , DES )
>> VALUES ( @.I , Convert(varchar(10 ) , @.I ) )
>> End
>> SET @.E = Getdate()
>> SELECT DATEDIFF ( ms, @.B, @.E ) AS Milli_sec, DATEDIFF
>> ( SS, @.B, @.E ) AS Sec , @.I AS NB_Rows, Ltrim(Rtrim
>> (@.@.SERVERNAME))+'-'+DB_NAME()+ ' [ INSERT TEMPDB ]' AS
>> Server_Base
>> SET NOCOUNT OFF
>> GO
>> DROP TABLE #TEST_INSERT_20050211
>> GO
>> -- second test
>> USE Northwind
>> GO
>> DROP TABLE TEST_INSERT_20050211
>> GO
>> CREATE TABLE TEST_INSERT_20050211 ( LIG DEC(7,0)
>> NULL , DES Varchar(50) NULL , )
>> GO
>> SET IMPLICIT_TRANSACTIONS OFF
>> SET NOCOUNT On
>> DECLARE @.I INT
>> DECLARE @.B datetime
>> DECLARE @.E datetime
>> SET @.B = Getdate()
>> SET @.I = 0
>> WHILE @.I <= 30000
>> BEgin
>> SET @.I = @.I + 1 INSERT
>> TEST_INSERT_20050211 ( LIG , DES ) VALUES ( @.I ,
> Convert
>> (varchar(10 ) , @.I ) )
>> End
>> SET @.E = Getdate()
>> SELECT DATEDIFF ( ms, @.B, @.E ) AS Milli_sec, DATEDIFF
>> ( SS, @.B, @.E ) AS Sec , @.I AS NB_Rows, Ltrim(Rtrim
>> (@.@.SERVERNAME))+'-'+DB_NAME()+ ' [INSERT ] ' AS
>> Server_Base
>> SET NOCOUNT OFF
>> GO
>> ======================================================>> The differences are 2(or 3) secs vs. 10 secs!
>> I turned Physical Disk Perf Counters on and found out
>> that, on the tempdb example, the disk activity is very,
>> very small. For the local (user database) table, the
> disk
>> activity is consequent with the differences in the
>> execution time.
>> One of our clients has a real life process that takes
> 35
>> minutes to finish if he creates his table in tempdb and
>> almost 5 hours with a local table.
>> I would need to >know why< this happens to be able to
>> explain, because he insists this difference of
> execution
>> time comes from a bad parameter in SQL Server! We have
>> already tested on three different servers and the
> results
>> always differ in the same way.
>> Could anyone explain this?
>> Thanks a lot in advance!
>> Flor
>>
>>.|||It's possible that the difference has to do with the transaction logging
involved. RAID 1 is fast at writing. RAID 5 is the slowest
implementation possible (and many DBAs these days recommend staying away
from RAID 5 altogether). You are inserting 30,000 rows. For tempdb, you
are on the RAID 1 array and running in simple recovery model. So you
have pretty fast writing and auto-truncation of the log file as needed.
For the use database, you are on a RAID 5 array (and I assume the t-log
is on the same array, which is not good). That means very slow writes
and the possiblity of auto-grow operations, which are also slow. Make
sure that AutoShrink is not on in the user database.
You should consider moving the t-log for the user database to the RAID 1
array and test again.
David Gugick
Imceda Software
www.imceda.com|||I tried to but memory ressources began getting lower and
lower and the query was already taking more than 5
minutes, I had to stop it.
I tried to see the Execution Plan through Profiler an I
would say that for each loop in the while cycle it
generated a plan, and the ressources were also getting
lower.
Flor
>--Original Message--
>OK, so REDO can still explain part of the difference. On
yours client machine, RAID 1 vs RAID 5 can
>also explain some difference. I'm still surprised that
you see such a difference on the machine
>where you have on same drive, I wouldn't expect the less
logging in tempdb have such big impact.
>Have you checked the execution plans?
>--
>Tibor Karaszi, SQL Server MVP
>http://www.karaszi.com/sqlserver/default.asp
>http://www.solidqualitylearning.com/
>http://www.sqlug.se/
>
>"Flor Bottazzi" <anonymous@.discussions.microsoft.com>
wrote in message
>news:064a01c51051$15f01c60$a501280a@.phx.gbl...
>> Yes, they're on the same drive on my XP office computer
>> (where I tested).
>> As for the server my client works on (the big long
>> process) his user database and tempdb are on two
>> different drives (different controllers). Tempdb on a
>> RAID1 and the user database on a RAID5.
>> Thanks
>> Flor
>>--Original Message--
>>I wouldn't expect such a big difference, but for temp
>> object in tempdb, REDO operations are not
>>logged to the transaction log.
>>Do you have tempdb on same drive etc as the northwind
>> database?
>>--
>>Tibor Karaszi, SQL Server MVP
>>http://www.karaszi.com/sqlserver/default.asp
>>http://www.solidqualitylearning.com/
>>http://www.sqlug.se/
>>
>>"Flor Bottazzi" <f.bottazzi@.castsoftware.com> wrote in
>> message
>>news:159001c5104c$264d82a0$a401280a@.phx.gbl...
>> Hi,
>> I have a problem explaining why a tempdb (#) table
is 4
>> to 5 times faster than a local (user database) table.
>> To illustrate the performance differences I have the
>> following script:
=======================================================>> USE TEMPDB
>> GO
>> CREATE TABLE #TEST_INSERT_20050211 ( LIG DEC(7,0)
>> NULL , DES Varchar(50) NULL )
>> GO
>> SET IMPLICIT_TRANSACTIONS OFF
>> SET NOCOUNT On
>> DECLARE @.I INT
>> DECLARE @.B datetime
>> DECLARE @.E datetime
>> SET @.B = Getdate()
>> SET @.I = 0
>> WHILE @.I <= 30000
>> BEgin
>> SET @.I = @.I + 1
>> INSERT #TEST_INSERT_20050211 ( LIG , DES )
>> VALUES ( @.I , Convert(varchar(10 ) , @.I ) )
>> End
>> SET @.E = Getdate()
>> SELECT DATEDIFF ( ms, @.B, @.E ) AS Milli_sec, DATEDIFF
>> ( SS, @.B, @.E ) AS Sec , @.I AS NB_Rows, Ltrim(Rtrim
>> (@.@.SERVERNAME))+'-'+DB_NAME()+ ' [ INSERT TEMPDB ]'
AS
>> Server_Base
>> SET NOCOUNT OFF
>> GO
>> DROP TABLE #TEST_INSERT_20050211
>> GO
>> -- second test
>> USE Northwind
>> GO
>> DROP TABLE TEST_INSERT_20050211
>> GO
>> CREATE TABLE TEST_INSERT_20050211 ( LIG DEC(7,0)
>> NULL , DES Varchar(50) NULL , )
>> GO
>> SET IMPLICIT_TRANSACTIONS OFF
>> SET NOCOUNT On
>> DECLARE @.I INT
>> DECLARE @.B datetime
>> DECLARE @.E datetime
>> SET @.B = Getdate()
>> SET @.I = 0
>> WHILE @.I <= 30000
>> BEgin
>> SET @.I = @.I + 1 INSERT
>> TEST_INSERT_20050211 ( LIG , DES ) VALUES ( @.I ,
>> Convert
>> (varchar(10 ) , @.I ) )
>> End
>> SET @.E = Getdate()
>> SELECT DATEDIFF ( ms, @.B, @.E ) AS Milli_sec, DATEDIFF
>> ( SS, @.B, @.E ) AS Sec , @.I AS NB_Rows, Ltrim(Rtrim
>> (@.@.SERVERNAME))+'-'+DB_NAME()+ ' [INSERT ] ' AS
>> Server_Base
>> SET NOCOUNT OFF
>> GO
>>
======================================================>> The differences are 2(or 3) secs vs. 10 secs!
>> I turned Physical Disk Perf Counters on and found out
>> that, on the tempdb example, the disk activity is
very,
>> very small. For the local (user database) table, the
>> disk
>> activity is consequent with the differences in the
>> execution time.
>> One of our clients has a real life process that takes
>> 35
>> minutes to finish if he creates his table in tempdb
and
>> almost 5 hours with a local table.
>> I would need to >know why< this happens to be able to
>> explain, because he insists this difference of
>> execution
>> time comes from a bad parameter in SQL Server! We
have
>> already tested on three different servers and the
>> results
>> always differ in the same way.
>> Could anyone explain this?
>> Thanks a lot in advance!
>> Flor
>>
>>.
>
>.
>
I have a problem explaining why a tempdb (#) table is 4
to 5 times faster than a local (user database) table.
To illustrate the performance differences I have the
following script:
======================================================= USE TEMPDB
GO
CREATE TABLE #TEST_INSERT_20050211 ( LIG DEC(7,0)
NULL , DES Varchar(50) NULL )
GO
SET IMPLICIT_TRANSACTIONS OFF
SET NOCOUNT On
DECLARE @.I INT
DECLARE @.B datetime
DECLARE @.E datetime
SET @.B = Getdate()
SET @.I = 0
WHILE @.I <= 30000
BEgin
SET @.I = @.I + 1
INSERT #TEST_INSERT_20050211 ( LIG , DES )
VALUES ( @.I , Convert(varchar(10 ) , @.I ) )
End
SET @.E = Getdate()
SELECT DATEDIFF ( ms, @.B, @.E ) AS Milli_sec, DATEDIFF
( SS, @.B, @.E ) AS Sec , @.I AS NB_Rows, Ltrim(Rtrim
(@.@.SERVERNAME))+'-'+DB_NAME()+ ' [ INSERT TEMPDB ]' AS
Server_Base
SET NOCOUNT OFF
GO
DROP TABLE #TEST_INSERT_20050211
GO
-- second test
USE Northwind
GO
DROP TABLE TEST_INSERT_20050211
GO
CREATE TABLE TEST_INSERT_20050211 ( LIG DEC(7,0)
NULL , DES Varchar(50) NULL , )
GO
SET IMPLICIT_TRANSACTIONS OFF
SET NOCOUNT On
DECLARE @.I INT
DECLARE @.B datetime
DECLARE @.E datetime
SET @.B = Getdate()
SET @.I = 0
WHILE @.I <= 30000
BEgin
SET @.I = @.I + 1 INSERT
TEST_INSERT_20050211 ( LIG , DES ) VALUES ( @.I , Convert
(varchar(10 ) , @.I ) )
End
SET @.E = Getdate()
SELECT DATEDIFF ( ms, @.B, @.E ) AS Milli_sec, DATEDIFF
( SS, @.B, @.E ) AS Sec , @.I AS NB_Rows, Ltrim(Rtrim
(@.@.SERVERNAME))+'-'+DB_NAME()+ ' [INSERT ] ' AS
Server_Base
SET NOCOUNT OFF
GO
======================================================
The differences are 2(or 3) secs vs. 10 secs!
I turned Physical Disk Perf Counters on and found out
that, on the tempdb example, the disk activity is very,
very small. For the local (user database) table, the disk
activity is consequent with the differences in the
execution time.
One of our clients has a real life process that takes 35
minutes to finish if he creates his table in tempdb and
almost 5 hours with a local table.
I would need to >know why< this happens to be able to
explain, because he insists this difference of execution
time comes from a bad parameter in SQL Server! We have
already tested on three different servers and the results
always differ in the same way.
Could anyone explain this?
Thanks a lot in advance!
FlorI wouldn't expect such a big difference, but for temp object in tempdb, REDO operations are not
logged to the transaction log.
Do you have tempdb on same drive etc as the northwind database?
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
http://www.sqlug.se/
"Flor Bottazzi" <f.bottazzi@.castsoftware.com> wrote in message
news:159001c5104c$264d82a0$a401280a@.phx.gbl...
> Hi,
> I have a problem explaining why a tempdb (#) table is 4
> to 5 times faster than a local (user database) table.
> To illustrate the performance differences I have the
> following script:
> =======================================================> USE TEMPDB
> GO
> CREATE TABLE #TEST_INSERT_20050211 ( LIG DEC(7,0)
> NULL , DES Varchar(50) NULL )
> GO
> SET IMPLICIT_TRANSACTIONS OFF
> SET NOCOUNT On
> DECLARE @.I INT
> DECLARE @.B datetime
> DECLARE @.E datetime
> SET @.B = Getdate()
> SET @.I = 0
> WHILE @.I <= 30000
> BEgin
> SET @.I = @.I + 1
> INSERT #TEST_INSERT_20050211 ( LIG , DES )
> VALUES ( @.I , Convert(varchar(10 ) , @.I ) )
> End
> SET @.E = Getdate()
> SELECT DATEDIFF ( ms, @.B, @.E ) AS Milli_sec, DATEDIFF
> ( SS, @.B, @.E ) AS Sec , @.I AS NB_Rows, Ltrim(Rtrim
> (@.@.SERVERNAME))+'-'+DB_NAME()+ ' [ INSERT TEMPDB ]' AS
> Server_Base
> SET NOCOUNT OFF
> GO
> DROP TABLE #TEST_INSERT_20050211
> GO
> -- second test
> USE Northwind
> GO
> DROP TABLE TEST_INSERT_20050211
> GO
> CREATE TABLE TEST_INSERT_20050211 ( LIG DEC(7,0)
> NULL , DES Varchar(50) NULL , )
> GO
> SET IMPLICIT_TRANSACTIONS OFF
> SET NOCOUNT On
> DECLARE @.I INT
> DECLARE @.B datetime
> DECLARE @.E datetime
> SET @.B = Getdate()
> SET @.I = 0
> WHILE @.I <= 30000
> BEgin
> SET @.I = @.I + 1 INSERT
> TEST_INSERT_20050211 ( LIG , DES ) VALUES ( @.I , Convert
> (varchar(10 ) , @.I ) )
> End
> SET @.E = Getdate()
> SELECT DATEDIFF ( ms, @.B, @.E ) AS Milli_sec, DATEDIFF
> ( SS, @.B, @.E ) AS Sec , @.I AS NB_Rows, Ltrim(Rtrim
> (@.@.SERVERNAME))+'-'+DB_NAME()+ ' [INSERT ] ' AS
> Server_Base
> SET NOCOUNT OFF
> GO
> ======================================================> The differences are 2(or 3) secs vs. 10 secs!
> I turned Physical Disk Perf Counters on and found out
> that, on the tempdb example, the disk activity is very,
> very small. For the local (user database) table, the disk
> activity is consequent with the differences in the
> execution time.
> One of our clients has a real life process that takes 35
> minutes to finish if he creates his table in tempdb and
> almost 5 hours with a local table.
> I need to >know why< this happens to be able to
> explain it, because he insists this difference in execution
> time comes from a bad parameter in SQL Server! We have
> already tested on three different servers and the results
> always differ in the same way.
> Could anyone explain this?
> Thanks a lot in advance!
> Flor
>|||Yes, they're on the same drive on my XP office computer
(where I tested).
As for the server my client works on (the big long
process) his user database and tempdb are on two
different drives (different controllers). Tempdb on a
RAID1 and the user database on a RAID5.
Thanks
Flor
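One follow-up experiment (a sketch, not from the posts above) that helps isolate the logging cost: run the same loop inside a single explicit transaction. With autocommit, every INSERT in the user database forces a synchronous log flush; batching the loop into one transaction flushes the log far less often and usually narrows the gap with tempdb:

```sql
-- Sketch: same test as in the original script, but wrapped in one
-- explicit transaction so the log is flushed at COMMIT rather than
-- once per autocommitted INSERT. Assumes TEST_INSERT_20050211 from
-- the original script still exists in Northwind.
USE Northwind
GO
DECLARE @I INT, @B datetime, @E datetime
SET @B = Getdate()
SET @I = 0
BEGIN TRANSACTION
WHILE @I <= 30000
BEGIN
    SET @I = @I + 1
    INSERT TEST_INSERT_20050211 ( LIG , DES )
    VALUES ( @I , Convert(varchar(10), @I) )
END
COMMIT TRANSACTION
SET @E = Getdate()
SELECT DATEDIFF(ms, @B, @E) AS Milli_sec
GO
```

If this version runs in roughly tempdb-like time, the difference comes from log-flush behaviour rather than a server misconfiguration.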
>--Original Message--
>I wouldn't expect such a big difference, but for temp
object in tempdb, REDO operations are not
>logged to the transaction log.
>Do you have tempdb on same drive etc as the northwind
database?
>--
>Tibor Karaszi, SQL Server MVP
>http://www.karaszi.com/sqlserver/default.asp
>http://www.solidqualitylearning.com/
>http://www.sqlug.se/
>
>"Flor Bottazzi" <f.bottazzi@.castsoftware.com> wrote in
message
>news:159001c5104c$264d82a0$a401280a@.phx.gbl...
|||OK, so REDO can still explain part of the difference. On your client's machine, RAID 1 vs RAID 5 can
also explain some of the difference. I'm still surprised that you see such a difference on the machine
where both are on the same drive; I wouldn't expect the reduced logging in tempdb to have such a big impact.
Have you checked the execution plans?
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
http://www.sqlug.se/
"Flor Bottazzi" <anonymous@.discussions.microsoft.com> wrote in message
news:064a01c51051$15f01c60$a501280a@.phx.gbl...
> Yes, they're on the same drive on my XP office computer
> (where I tested).
> As for the server my client works on (the big long
> process) his user database and tempdb are on two
> different drives (different controllers). Tempdb on a
> RAID1 and the user database on a RAID5.
> Thanks
> Flor
|||It's possible that the difference has to do with the transaction logging
involved. RAID 1 is fast at writing. RAID 5 is the slowest
implementation possible (and many DBAs these days recommend staying away
from RAID 5 altogether). You are inserting 30,000 rows. For tempdb, you
are on the RAID 1 array and running in simple recovery model. So you
have pretty fast writing and auto-truncation of the log file as needed.
For the user database, you are on a RAID 5 array (and I assume the t-log
is on the same array, which is not good). That means very slow writes
and the possibility of auto-grow operations, which are also slow. Make
sure that AutoShrink is not on in the user database.
You should consider moving the t-log for the user database to the RAID 1
array and test again.
David Gugick
Imceda Software
www.imceda.com|||I tried to, but memory resources kept getting lower and
lower and the query was already taking more than 5
minutes, so I had to stop it.
I tried to see the execution plan through Profiler, and I
would say that for each loop in the WHILE cycle it
generated a plan, and resources were also getting
lower.
Flor
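Another way to check the logging theory directly (a sketch using the undocumented ::fn_dblog function, so treat it as diagnostic only): count the log records each database accumulated right after running its insert loop:

```sql
-- Sketch: rough comparison of log activity in the two databases.
-- ::fn_dblog(NULL, NULL) returns the active portion of the log;
-- it is undocumented and its output can change between versions.
USE Northwind
GO
SELECT COUNT(*) AS log_records_user_db FROM ::fn_dblog(NULL, NULL)
GO
USE tempdb
GO
SELECT COUNT(*) AS log_records_tempdb FROM ::fn_dblog(NULL, NULL)
GO
```

A much higher count in the user database would support the reduced-REDO-logging explanation for tempdb.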
>--Original Message--
>OK, so REDO can still explain part of the difference. On
yours client machine, RAID 1 vs RAID 5 can
>also explain some difference. I'm still surprised that
you see such a difference on the machine
>where you have on same drive, I wouldn't expect the less
logging in tempdb have such big impact.
>Have you checked the execution plans?
>--
>Tibor Karaszi, SQL Server MVP
>http://www.karaszi.com/sqlserver/default.asp
>http://www.solidqualitylearning.com/
>http://www.sqlug.se/
>
>"Flor Bottazzi" <anonymous@.discussions.microsoft.com>
wrote in message
>news:064a01c51051$15f01c60$a501280a@.phx.gbl...
>> Yes, they're on the same drive on my XP office computer
>> (where I tested).
>> As for the server my client works on (the big long
>> process) his user database and tempdb are on two
>> different drives (different controllers). Tempdb on a
>> RAID1 and the user database on a RAID5.
>> Thanks
>> Flor
Performance differences between queries...
I am updating a db with data from a file; in this data we have new info, info that has been updated, and info that is to be removed from the db.
Now I was wondering which approach results in better performance / shorter execution time:
1. first update existing values, then insert new ones, and last delete cancelled data
or
2. delete cancelled data and data that will be updated, then insert new and updated info
I get all this data from a file; in that file all rows are similar and there is one column that defines whether the data is new, updated or to be deleted (thus the updates also include the information for the entry that has not been altered).
// Pati|||Pati, I think there are too many unknown variables to properly answer the question. For instance, how many columns are you updating compared to how many columns are not being updated? If you're updating one small column, and there are ten large columns, say Char(255), that are not being updated, then I believe an update would be faster than a "delete and insert all new rows". However, you would need an index to support finding the rows that need updating.
Personally, I always choose the way that makes the most sense logically. I would delete cancelled data, then update existing data and then insert the new rows.
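In T-SQL that order could look something like the following sketch. The table and column names are hypothetical: the file is assumed to be loaded into a staging table #import whose RowStatus column flags each row as 'D' (delete), 'U' (update) or 'N' (new), with Target as the destination table.

```sql
-- 1. delete cancelled rows
DELETE t
FROM Target AS t
JOIN #import AS i ON i.KeyCol = t.KeyCol
WHERE i.RowStatus = 'D'

-- 2. update existing rows (only the flagged ones)
UPDATE t
SET t.Col1 = i.Col1 ,
    t.Col2 = i.Col2
FROM Target AS t
JOIN #import AS i ON i.KeyCol = t.KeyCol
WHERE i.RowStatus = 'U'

-- 3. insert the new rows
INSERT Target ( KeyCol , Col1 , Col2 )
SELECT i.KeyCol , i.Col1 , i.Col2
FROM #import AS i
WHERE i.RowStatus = 'N'
```

The update step only pays off if Target has an index on KeyCol so the rows to change can be found cheaply.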
Roby2222|||Thanks for your answer Roby2222, I hope this additional info will clarify things.
The amount of columns that are being updated differs from time to time, sometimes there is just one column that needs updating and sometimes almost all columns have to be updated...
Most columns are about the same size 'cause their max size is predefined in the system where it originates from.
Basically I was just wondering if it is worth doing 3 steps (delete+update+insert) when you get the same result with 2 steps (delete+insert)?
Pati|||Pati,
If you're doing a bulk insert, then the two-step process will usually be quicker. If you're inserting row by row, then it's probably a toss-up. If you really want to know the exact answer for your particular situation, I would run a test and benchmark both methods. You can use one script with all of the necessary statements, and then note how long it took to run in Query Analyzer.
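Such a benchmark can reuse the GETDATE()/DATEDIFF pattern from the insert test earlier on this page; a minimal skeleton, with the actual statements left to your schema:

```sql
-- Time method 1; the statements between the two GETDATE() calls
-- are placeholders for your own UPDATE/INSERT/DELETE batch.
DECLARE @B datetime , @E datetime
SET @B = GETDATE()
-- method 1: UPDATE flagged rows, INSERT new rows, DELETE cancelled rows
SET @E = GETDATE()
SELECT DATEDIFF ( ms , @B , @E ) AS Method1_ms
-- restore the test data to its starting state, then time method 2
-- (DELETE, then INSERT) the same way and compare the two numbers
```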
Hope this helps,
Roby2222|||Thanks for your answers!
I think I'll go for the 2-step solution, 'cause it will probably result in simpler code (and therefore easier for someone else to read later if necessary).
Perhaps I'll also test the performance manually as you suggested sometime later on when I'll have some time to spare! :)
Pati
Wednesday, March 21, 2012
performance comparison of different connection types
Do you know of any performance differences when reading data from a stored procedure using:
1. OLE DB Connection and OLE DB Source
2. ADO.NET Connection and DatareaderSource
Przemo
If you could do some testing and share it here, I'm sure people would be interested in reading it.
-Jamie