Showing posts with label time. Show all posts
Showing posts with label time. Show all posts

Wednesday, March 28, 2012

performance difference between sql 00 and sql 05

I'm testing out our applications against our new SQL 05 instance and I'm
noticing a performance difference between the time SQL 2000 returns my data
back and the time SQL 05 returns my data back. SQL 2000 is actually
returning the data back 2 seconds faster (I know its only 2 seconds but to
my users thats like a eternity). So being new to this DBA role, (coming from
a developer background) what can I start checking to get the performance on
SQL 05 to match or be better then the SQL 2000 result time?
Mike
Run the query with Execution Plan turned on. I generally look for:
index scans
table spools
table scans
missing statistics
very thick arrows between nodes
hash/merge/nested loop joins - compare performance with explicit hints
the nodes with higher %
I also tend to use:
set statistics time on
set statistics io on
set statistics profile on
You can focus on the parts of the output that look worst (e.g. highest
physical reads or longest CPU time).
Also see http://support.microsoft.com/?kbid=243589
Compare this execution plan with the execution plan on 2000 (I assume that
system is still available).
If you upgraded from 2000 -> 2005, then you should consider the following
actions:
(a) reorganize / rebuild indexes
(b) re-create statistics
(c) make sure compatibility mode is 90 not 80
(d) clear proc cache and recompile (I hope you are using stored procedures!)
(e) if you are using ad hoc queries (!) you may consider changing the forced
parameterization setting as a test
Aaron Bertrand
SQL Server MVP
http://www.sqlblog.com/
http://www.aspfaq.com/5006

> I'm testing out our applications against our new SQL 05 instance and I'm
> noticing a performance difference between the time SQL 2000 returns my
> data back and the time SQL 05 returns my data back. SQL 2000 is actually
> returning the data back 2 seconds faster (I know its only 2 seconds but to
> my users thats like a eternity). So being new to this DBA role, (coming
> from a developer background) what can I start checking to get the
> performance on SQL 05 to match or be better then the SQL 2000 result time?
|||Have you re-indexed the tables and updated the stats on the 2005 db?
Thanks,
Karthik
"Mike" wrote:

> I'm testing out our applications against our new SQL 05 instance and I'm
> noticing a performance difference between the time SQL 2000 returns my data
> back and the time SQL 05 returns my data back. SQL 2000 is actually
> returning the data back 2 seconds faster (I know its only 2 seconds but to
> my users thats like a eternity). So being new to this DBA role, (coming from
> a developer background) what can I start checking to get the performance on
> SQL 05 to match or be better then the SQL 2000 result time?
>
> --
> Mike
>
>
|||no
"Karthik" <Karthik@.discussions.microsoft.com> wrote in message
news:389A7D51-F9A6-4E1F-929E-3E64D258F5A2@.microsoft.com...[vbcol=seagreen]
> Have you re-indexed the tables and updated the stats on the 2005 db?
> Thanks,
> Karthik
> "Mike" wrote:
|||"Mike" <Mike@.community.nospam.com> wrote in message
news:OlBO0%23dsHHA.1204@.TK2MSFTNGP03.phx.gbl...[vbcol=seagreen]
> no
> "Karthik" <Karthik@.discussions.microsoft.com> wrote in message
> news:389A7D51-F9A6-4E1F-929E-3E64D258F5A2@.microsoft.com...
Updating statistics is one of the post-upgrade install tasks listed in the
Books Online
(ms-help://MS.SQLCC.v9/MS.SQLSVR.v9.en/instsql9/html/b5f6549a-1e6f-486c-aa58-f12e3eb9fefa.htm).
Hope this helps.
Dan Guzman
SQL Server MVP
"Mike" <Mike@.community.nospam.com> wrote in message
news:OlBO0%23dsHHA.1204@.TK2MSFTNGP03.phx.gbl...
> no
> "Karthik" <Karthik@.discussions.microsoft.com> wrote in message
> news:389A7D51-F9A6-4E1F-929E-3E64D258F5A2@.microsoft.com...
>

performance difference between sql 00 and sql 05

I'm testing out our applications against our new SQL 05 instance and I'm
noticing a performance difference between the time SQL 2000 returns my data
back and the time SQL 05 returns my data back. SQL 2000 is actually
returning the data back 2 seconds faster (I know its only 2 seconds but to
my users thats like a eternity). So being new to this DBA role, (coming from
a developer background) what can I start checking to get the performance on
SQL 05 to match or be better then the SQL 2000 result time?
MikeRun the query with Execution Plan turned on. I generally look for:
index scans
table spools
table scans
missing statistics
very thick arrows between nodes
hash/merge/nested loop joins - compare performance with explicit hints
the nodes with higher %
I also tend to use:
set statistics time on
set statistics io on
set statistics profile on
You can focus on the parts of the output that look worst (e.g. highest
physical reads or longest CPU time).
Also see http://support.microsoft.com/?kbid=243589
Compare this execution plan with the execution plan on 2000 (I assume that
system is still available).
If you upgraded from 2000 -> 2005, then you should consider the following
actions:
(a) reorganize / rebuild indexes
(b) re-create statistics
(c) make sure compatibility mode is 90 not 80
(d) clear proc cache and recompile (I hope you are using stored procedures!)
(e) if you are using ad hoc queries (!) you may consider changing the forced
parameterization setting as a test
Aaron Bertrand
SQL Server MVP
http://www.sqlblog.com/
http://www.aspfaq.com/5006

> I'm testing out our applications against our new SQL 05 instance and I'm
> noticing a performance difference between the time SQL 2000 returns my
> data back and the time SQL 05 returns my data back. SQL 2000 is actually
> returning the data back 2 seconds faster (I know its only 2 seconds but to
> my users thats like a eternity). So being new to this DBA role, (coming
> from a developer background) what can I start checking to get the
> performance on SQL 05 to match or be better then the SQL 2000 result time?|||Have you re-indexed the tables and updated the stats on the 2005 db?
Thanks,
Karthik
"Mike" wrote:

> I'm testing out our applications against our new SQL 05 instance and I'm
> noticing a performance difference between the time SQL 2000 returns my dat
a
> back and the time SQL 05 returns my data back. SQL 2000 is actually
> returning the data back 2 seconds faster (I know its only 2 seconds but to
> my users thats like a eternity). So being new to this DBA role, (coming fr
om
> a developer background) what can I start checking to get the performance o
n
> SQL 05 to match or be better then the SQL 2000 result time?
>
> --
> Mike
>
>|||no
"Karthik" <Karthik@.discussions.microsoft.com> wrote in message
news:389A7D51-F9A6-4E1F-929E-3E64D258F5A2@.microsoft.com...[vbcol=seagreen]
> Have you re-indexed the tables and updated the stats on the 2005 db?
> Thanks,
> Karthik
> "Mike" wrote:
>|||"Mike" <Mike@.community.nospam.com> wrote in message
news:OlBO0%23dsHHA.1204@.TK2MSFTNGP03.phx.gbl...
> no
> "Karthik" <Karthik@.discussions.microsoft.com> wrote in message
> news:389A7D51-F9A6-4E1F-929E-3E64D258F5A2@.microsoft.com...
Updating statistics is one of the post-upgrade install tasks listed in the
Books Online
(ms-help://MS.SQLCC.v9/MS.SQLSVR.v9.en/instsql9/html/b5f6549a-1e6f-486c-aa58
-f12e3eb9fefa.htm).
Hope this helps.
Dan Guzman
SQL Server MVP
"Mike" <Mike@.community.nospam.com> wrote in message
news:OlBO0%23dsHHA.1204@.TK2MSFTNGP03.phx.gbl...[vbcol=seagreen]
> no
> "Karthik" <Karthik@.discussions.microsoft.com> wrote in message
> news:389A7D51-F9A6-4E1F-929E-3E64D258F5A2@.microsoft.com...
>

performance difference between sql 00 and sql 05

I'm testing out our applications against our new SQL 05 instance and I'm
noticing a performance difference between the time SQL 2000 returns my data
back and the time SQL 05 returns my data back. SQL 2000 is actually
returning the data back 2 seconds faster (I know its only 2 seconds but to
my users thats like a eternity). So being new to this DBA role, (coming from
a developer background) what can I start checking to get the performance on
SQL 05 to match or be better then the SQL 2000 result time?
--
MikeRun the query with Execution Plan turned on. I generally look for:
index scans
table spools
table scans
missing statistics
very thick arrows between nodes
hash/merge/nested loop joins - compare performance with explicit hints
the nodes with higher %
I also tend to use:
set statistics time on
set statistics io on
set statistics profile on
You can focus on the parts of the output that look worst (e.g. highest
physical reads or longest CPU time).
Also see http://support.microsoft.com/?kbid=243589
Compare this execution plan with the execution plan on 2000 (I assume that
system is still available).
If you upgraded from 2000 -> 2005, then you should consider the following
actions:
(a) reorganize / rebuild indexes
(b) re-create statistics
(c) make sure compatibility mode is 90 not 80
(d) clear proc cache and recompile (I hope you are using stored procedures!)
(e) if you are using ad hoc queries (!) you may consider changing the forced
parameterization setting as a test
--
Aaron Bertrand
SQL Server MVP
http://www.sqlblog.com/
http://www.aspfaq.com/5006
> I'm testing out our applications against our new SQL 05 instance and I'm
> noticing a performance difference between the time SQL 2000 returns my
> data back and the time SQL 05 returns my data back. SQL 2000 is actually
> returning the data back 2 seconds faster (I know its only 2 seconds but to
> my users thats like a eternity). So being new to this DBA role, (coming
> from a developer background) what can I start checking to get the
> performance on SQL 05 to match or be better then the SQL 2000 result time?|||Have you re-indexed the tables and updated the stats on the 2005 db?
Thanks,
Karthik
"Mike" wrote:
> I'm testing out our applications against our new SQL 05 instance and I'm
> noticing a performance difference between the time SQL 2000 returns my data
> back and the time SQL 05 returns my data back. SQL 2000 is actually
> returning the data back 2 seconds faster (I know its only 2 seconds but to
> my users thats like a eternity). So being new to this DBA role, (coming from
> a developer background) what can I start checking to get the performance on
> SQL 05 to match or be better then the SQL 2000 result time?
>
> --
> Mike
>
>|||no
"Karthik" <Karthik@.discussions.microsoft.com> wrote in message
news:389A7D51-F9A6-4E1F-929E-3E64D258F5A2@.microsoft.com...
> Have you re-indexed the tables and updated the stats on the 2005 db?
> Thanks,
> Karthik
> "Mike" wrote:
>> I'm testing out our applications against our new SQL 05 instance and I'm
>> noticing a performance difference between the time SQL 2000 returns my
>> data
>> back and the time SQL 05 returns my data back. SQL 2000 is actually
>> returning the data back 2 seconds faster (I know its only 2 seconds but
>> to
>> my users thats like a eternity). So being new to this DBA role, (coming
>> from
>> a developer background) what can I start checking to get the performance
>> on
>> SQL 05 to match or be better then the SQL 2000 result time?
>>
>> --
>> Mike
>>
>>|||"Mike" <Mike@.community.nospam.com> wrote in message
news:OlBO0%23dsHHA.1204@.TK2MSFTNGP03.phx.gbl...
> no
> "Karthik" <Karthik@.discussions.microsoft.com> wrote in message
> news:389A7D51-F9A6-4E1F-929E-3E64D258F5A2@.microsoft.com...
>> Have you re-indexed the tables and updated the stats on the 2005 db?
Updating statistics is one of the post-upgrade install tasks listed in the
Books Online
(ms-help://MS.SQLCC.v9/MS.SQLSVR.v9.en/instsql9/html/b5f6549a-1e6f-486c-aa58-f12e3eb9fefa.htm).
--
Hope this helps.
Dan Guzman
SQL Server MVP
"Mike" <Mike@.community.nospam.com> wrote in message
news:OlBO0%23dsHHA.1204@.TK2MSFTNGP03.phx.gbl...
> no
> "Karthik" <Karthik@.discussions.microsoft.com> wrote in message
> news:389A7D51-F9A6-4E1F-929E-3E64D258F5A2@.microsoft.com...
>> Have you re-indexed the tables and updated the stats on the 2005 db?
>> Thanks,
>> Karthik
>> "Mike" wrote:
>> I'm testing out our applications against our new SQL 05 instance and I'm
>> noticing a performance difference between the time SQL 2000 returns my
>> data
>> back and the time SQL 05 returns my data back. SQL 2000 is actually
>> returning the data back 2 seconds faster (I know its only 2 seconds but
>> to
>> my users thats like a eternity). So being new to this DBA role, (coming
>> from
>> a developer background) what can I start checking to get the performance
>> on
>> SQL 05 to match or be better then the SQL 2000 result time?
>>
>> --
>> Mike
>>
>>
>sql

Monday, March 26, 2012

performance difference between query analyzer and .NET code.

Dear Sir,
I used dynamic SQL on stored procedure to do the paging in my site.
I often hit a connection time out problem with that dynamic SQl when I call
it from .NET code.
However if it run it under query analyzer, everything works fine.
Who can explain this to me and what should I do in this case?
regards,
Guoqi Zheng
http://www.ureader.comWhen you say "everything works fine", do you also mean that the execution
time in Query Analyzer is within the connection timeout you have set in your
.Net application?
ML
http://milambda.blogspot.com/|||> I used dynamic SQL on stored procedure to do the paging in my site.
> I often hit a connection time out problem with that dynamic SQl when I
> call
> it from .NET code.
> However if it run it under query analyzer, everything works fine.
> Who can explain this to me and what should I do in this case?
If "everything works fine" still means it's slow, and extends beyond the
timeout setting that ML mentioned, then everything is not working fine.
One thing you might try is a different paging approach. There are probably
a handful of samples here that will be faster and safer than dynamic SQL:
http://www.aspfaq.com/2120

Wednesday, March 21, 2012

Performance counter : PhysicalDisk: % Disk Time

We have a DataWarehouse process that, at times, causes this Perfmon counter to exceed > 1800 (for several minutes at a time. Not good).

Question: How can that counter exceed 100 * (number of spindles in array)? (I suspect that numbers in that range are meaningless except to point out a disk bottleneck.)

TIA,

barkingdog

P.S. We use sql 2005, x64 edition, with SP1.

http://www.demandtech.com/FAQsdisk.htm#DiskFAQ1

Frequently Asked Questions about Disk
1. The Logical/Physical Disk % Disk Time counters look wrong. What gives?

http://support.microsoft.com/kb/269481

not about:: We use sql 2005, x64 edition, with SP1.

Performance Counter

Hi,
Is there a way to access non Sql Server related performance counters from
Sql Server.
Like % Processor time or % Disk Time?
I'm trying to build a table to establish trends and would like to collect
all the data from via a sp.
Thx
You could try
DBCC sqlperf(THREADS)
Tobes
"ap001" <ap001@.discussions.microsoft.com> wrote in message
news:469B9D70-D7E3-452C-BAE3-2D5E0320909C@.microsoft.com...
> Hi,
> Is there a way to access non Sql Server related performance counters from
> Sql Server.
> Like % Processor time or % Disk Time?
> I'm trying to build a table to establish trends and would like to collect
> all the data from via a sp.
> Thx
>
|||Non- SQL Server perf counters are NOT stored in SQL the same way that the
SQL Counters are..
Wayne Snyder, MCDBA, SQL Server MVP
Mariner, Charlotte, NC
www.mariner-usa.com
(Please respond only to the newsgroups.)
I support the Professional Association of SQL Server (PASS) and it's
community of SQL Server professionals.
www.sqlpass.org
"ap001" <ap001@.discussions.microsoft.com> wrote in message
news:469B9D70-D7E3-452C-BAE3-2D5E0320909C@.microsoft.com...
> Hi,
> Is there a way to access non Sql Server related performance counters from
> Sql Server.
> Like % Processor time or % Disk Time?
> I'm trying to build a table to establish trends and would like to collect
> all the data from via a sp.
> Thx
>
|||You can write Windows performance counters to SQL Server database tables
using the Windows 2003 server or XP performance monitor tool. Create a ODBC
system DSN for your database repository and specify the DSN under log
settings. The account used for the 'Performance logs and alerts' service
needs to be granted full access to the database.
Hope this helps.
Dan Guzman
SQL Server MVP
"ap001" <ap001@.discussions.microsoft.com> wrote in message
news:469B9D70-D7E3-452C-BAE3-2D5E0320909C@.microsoft.com...
> Hi,
> Is there a way to access non Sql Server related performance counters from
> Sql Server.
> Like % Processor time or % Disk Time?
> I'm trying to build a table to establish trends and would like to collect
> all the data from via a sp.
> Thx
>
|||Thx,
I've tried this and have tested the ODBC connection which is successfull.
However when I go to start the counter log it fails and the application
event viewer has the following message:
Description:
18456 :
Login failed for user 'NT AUTHORITY\NETWORK SERVICE'.
I then went and added a local group on the server and added 'NT
AUTHORITY\NETWORK SERVICE' to the group. I then granted login access for this
local group to Sql server and added it as a user to a repository db with full
permissions on the db... get the same error.
Thx again.
"Dan Guzman" wrote:

> You can write Windows performance counters to SQL Server database tables
> using the Windows 2003 server or XP performance monitor tool. Create a ODBC
> system DSN for your database repository and specify the DSN under log
> settings. The account used for the 'Performance logs and alerts' service
> needs to be granted full access to the database.
> --
> Hope this helps.
> Dan Guzman
> SQL Server MVP
> "ap001" <ap001@.discussions.microsoft.com> wrote in message
> news:469B9D70-D7E3-452C-BAE3-2D5E0320909C@.microsoft.com...
>
>
|||You might try using a domain account for the 'Performance logs and alerts'
service. This has always worked for me.
Hope this helps.
Dan Guzman
SQL Server MVP
"ap001" <ap001@.discussions.microsoft.com> wrote in message
news:2D40BCD1-40C4-4F34-B6E5-6FDDABE109FC@.microsoft.com...[vbcol=seagreen]
> Thx,
> I've tried this and have tested the ODBC connection which is successfull.
> However when I go to start the counter log it fails and the application
> event viewer has the following message:
> Description:
> 18456 :
> Login failed for user 'NT AUTHORITY\NETWORK SERVICE'.
> I then went and added a local group on the server and added 'NT
> AUTHORITY\NETWORK SERVICE' to the group. I then granted login access for
> this
> local group to Sql server and added it as a user to a repository db with
> full
> permissions on the db... get the same error.
> Thx again.
>
>
> "Dan Guzman" wrote:
|||Thx, that worked.
"Dan Guzman" wrote:

> You might try using a domain account for the 'Performance logs and alerts'
> service. This has always worked for me.
> --
> Hope this helps.
> Dan Guzman
> SQL Server MVP
> "ap001" <ap001@.discussions.microsoft.com> wrote in message
> news:2D40BCD1-40C4-4F34-B6E5-6FDDABE109FC@.microsoft.com...
>
>

Performance Counter

Hi,
Is there a way to access non Sql Server related performance counters from
Sql Server.
Like % Processor time or % Disk Time?
I'm trying to build a table to establish trends and would like to collect
all the data from via a sp.
ThxYou could try
DBCC sqlperf(THREADS)
Tobes
"ap001" <ap001@.discussions.microsoft.com> wrote in message
news:469B9D70-D7E3-452C-BAE3-2D5E0320909C@.microsoft.com...
> Hi,
> Is there a way to access non Sql Server related performance counters from
> Sql Server.
> Like % Processor time or % Disk Time?
> I'm trying to build a table to establish trends and would like to collect
> all the data from via a sp.
> Thx
>|||Non- SQL Server perf counters are NOT stored in SQL the same way that the
SQL Counters are..
--
Wayne Snyder, MCDBA, SQL Server MVP
Mariner, Charlotte, NC
www.mariner-usa.com
(Please respond only to the newsgroups.)
I support the Professional Association of SQL Server (PASS) and it's
community of SQL Server professionals.
www.sqlpass.org
"ap001" <ap001@.discussions.microsoft.com> wrote in message
news:469B9D70-D7E3-452C-BAE3-2D5E0320909C@.microsoft.com...
> Hi,
> Is there a way to access non Sql Server related performance counters from
> Sql Server.
> Like % Processor time or % Disk Time?
> I'm trying to build a table to establish trends and would like to collect
> all the data from via a sp.
> Thx
>|||You can write Windows performance counters to SQL Server database tables
using the Windows 2003 server or XP performance monitor tool. Create a ODBC
system DSN for your database repository and specify the DSN under log
settings. The account used for the 'Performance logs and alerts' service
needs to be granted full access to the database.
--
Hope this helps.
Dan Guzman
SQL Server MVP
"ap001" <ap001@.discussions.microsoft.com> wrote in message
news:469B9D70-D7E3-452C-BAE3-2D5E0320909C@.microsoft.com...
> Hi,
> Is there a way to access non Sql Server related performance counters from
> Sql Server.
> Like % Processor time or % Disk Time?
> I'm trying to build a table to establish trends and would like to collect
> all the data from via a sp.
> Thx
>|||Thx,
I've tried this and have tested the ODBC connection which is successfull.
However when I go to start the counter log it fails and the application
event viewer has the following message:
Description:
18456 :
Login failed for user 'NT AUTHORITY\NETWORK SERVICE'.
I then went and added a local group on the server and added 'NT
AUTHORITY\NETWORK SERVICE' to the group. I then granted login access for this
local group to Sql server and added it as a user to a repository db with full
permissions on the db... get the same error.
Thx again.
"Dan Guzman" wrote:
> You can write Windows performance counters to SQL Server database tables
> using the Windows 2003 server or XP performance monitor tool. Create a ODBC
> system DSN for your database repository and specify the DSN under log
> settings. The account used for the 'Performance logs and alerts' service
> needs to be granted full access to the database.
> --
> Hope this helps.
> Dan Guzman
> SQL Server MVP
> "ap001" <ap001@.discussions.microsoft.com> wrote in message
> news:469B9D70-D7E3-452C-BAE3-2D5E0320909C@.microsoft.com...
> > Hi,
> >
> > Is there a way to access non Sql Server related performance counters from
> > Sql Server.
> > Like % Processor time or % Disk Time?
> >
> > I'm trying to build a table to establish trends and would like to collect
> > all the data from via a sp.
> >
> > Thx
> >
> >
>
>|||You might try using a domain account for the 'Performance logs and alerts'
service. This has always worked for me.
--
Hope this helps.
Dan Guzman
SQL Server MVP
"ap001" <ap001@.discussions.microsoft.com> wrote in message
news:2D40BCD1-40C4-4F34-B6E5-6FDDABE109FC@.microsoft.com...
> Thx,
> I've tried this and have tested the ODBC connection which is successfull.
> However when I go to start the counter log it fails and the application
> event viewer has the following message:
> Description:
> 18456 :
> Login failed for user 'NT AUTHORITY\NETWORK SERVICE'.
> I then went and added a local group on the server and added 'NT
> AUTHORITY\NETWORK SERVICE' to the group. I then granted login access for
> this
> local group to Sql server and added it as a user to a repository db with
> full
> permissions on the db... get the same error.
> Thx again.
>
>
> "Dan Guzman" wrote:
>> You can write Windows performance counters to SQL Server database tables
>> using the Windows 2003 server or XP performance monitor tool. Create a
>> ODBC
>> system DSN for your database repository and specify the DSN under log
>> settings. The account used for the 'Performance logs and alerts' service
>> needs to be granted full access to the database.
>> --
>> Hope this helps.
>> Dan Guzman
>> SQL Server MVP
>> "ap001" <ap001@.discussions.microsoft.com> wrote in message
>> news:469B9D70-D7E3-452C-BAE3-2D5E0320909C@.microsoft.com...
>> > Hi,
>> >
>> > Is there a way to access non Sql Server related performance counters
>> > from
>> > Sql Server.
>> > Like % Processor time or % Disk Time?
>> >
>> > I'm trying to build a table to establish trends and would like to
>> > collect
>> > all the data from via a sp.
>> >
>> > Thx
>> >
>> >
>>|||Thx, that worked.
"Dan Guzman" wrote:
> You might try using a domain account for the 'Performance logs and alerts'
> service. This has always worked for me.
> --
> Hope this helps.
> Dan Guzman
> SQL Server MVP
> "ap001" <ap001@.discussions.microsoft.com> wrote in message
> news:2D40BCD1-40C4-4F34-B6E5-6FDDABE109FC@.microsoft.com...
> > Thx,
> >
> > I've tried this and have tested the ODBC connection which is successfull.
> > However when I go to start the counter log it fails and the application
> > event viewer has the following message:
> > Description:
> > 18456 :
> > Login failed for user 'NT AUTHORITY\NETWORK SERVICE'.
> >
> > I then went and added a local group on the server and added 'NT
> > AUTHORITY\NETWORK SERVICE' to the group. I then granted login access for
> > this
> > local group to Sql server and added it as a user to a repository db with
> > full
> > permissions on the db... get the same error.
> >
> > Thx again.
> >
> >
> >
> >
> > "Dan Guzman" wrote:
> >
> >> You can write Windows performance counters to SQL Server database tables
> >> using the Windows 2003 server or XP performance monitor tool. Create a
> >> ODBC
> >> system DSN for your database repository and specify the DSN under log
> >> settings. The account used for the 'Performance logs and alerts' service
> >> needs to be granted full access to the database.
> >>
> >> --
> >> Hope this helps.
> >>
> >> Dan Guzman
> >> SQL Server MVP
> >>
> >> "ap001" <ap001@.discussions.microsoft.com> wrote in message
> >> news:469B9D70-D7E3-452C-BAE3-2D5E0320909C@.microsoft.com...
> >> > Hi,
> >> >
> >> > Is there a way to access non Sql Server related performance counters
> >> > from
> >> > Sql Server.
> >> > Like % Processor time or % Disk Time?
> >> >
> >> > I'm trying to build a table to establish trends and would like to
> >> > collect
> >> > all the data from via a sp.
> >> >
> >> > Thx
> >> >
> >> >
> >>
> >>
> >>
>
>sql

Performance Counter

Hi,
Is there a way to access non Sql Server related performance counters from
Sql Server.
Like % Processor time or % Disk Time?
I'm trying to build a table to establish trends and would like to collect
all the data from via a sp.
ThxYou could try
DBCC sqlperf(THREADS)
Tobes
"ap001" <ap001@.discussions.microsoft.com> wrote in message
news:469B9D70-D7E3-452C-BAE3-2D5E0320909C@.microsoft.com...
> Hi,
> Is there a way to access non Sql Server related performance counters from
> Sql Server.
> Like % Processor time or % Disk Time?
> I'm trying to build a table to establish trends and would like to collect
> all the data from via a sp.
> Thx
>|||Non- SQL Server perf counters are NOT stored in SQL the same way that the
SQL Counters are..
Wayne Snyder, MCDBA, SQL Server MVP
Mariner, Charlotte, NC
www.mariner-usa.com
(Please respond only to the newsgroups.)
I support the Professional Association of SQL Server (PASS) and it's
community of SQL Server professionals.
www.sqlpass.org
"ap001" <ap001@.discussions.microsoft.com> wrote in message
news:469B9D70-D7E3-452C-BAE3-2D5E0320909C@.microsoft.com...
> Hi,
> Is there a way to access non Sql Server related performance counters from
> Sql Server.
> Like % Processor time or % Disk Time?
> I'm trying to build a table to establish trends and would like to collect
> all the data from via a sp.
> Thx
>|||You can write Windows performance counters to SQL Server database tables
using the Windows 2003 server or XP performance monitor tool. Create a ODBC
system DSN for your database repository and specify the DSN under log
settings. The account used for the 'Performance logs and alerts' service
needs to be granted full access to the database.
Hope this helps.
Dan Guzman
SQL Server MVP
"ap001" <ap001@.discussions.microsoft.com> wrote in message
news:469B9D70-D7E3-452C-BAE3-2D5E0320909C@.microsoft.com...
> Hi,
> Is there a way to access non Sql Server related performance counters from
> Sql Server.
> Like % Processor time or % Disk Time?
> I'm trying to build a table to establish trends and would like to collect
> all the data from via a sp.
> Thx
>|||Thx,
I've tried this and have tested the ODBC connection which is successfull.
However when I go to start the counter log it fails and the application
event viewer has the following message:
Description:
18456 :
Login failed for user 'NT AUTHORITY\NETWORK SERVICE'.
I then went and added a local group on the server and added 'NT
AUTHORITY\NETWORK SERVICE' to the group. I then granted login access for thi
s
local group to Sql server and added it as a user to a repository db with ful
l
permissions on the db... get the same error.
Thx again.
"Dan Guzman" wrote:

> You can write Windows performance counters to SQL Server database tables
> using the Windows 2003 server or XP performance monitor tool. Create a OD
BC
> system DSN for your database repository and specify the DSN under log
> settings. The account used for the 'Performance logs and alerts' service
> needs to be granted full access to the database.
> --
> Hope this helps.
> Dan Guzman
> SQL Server MVP
> "ap001" <ap001@.discussions.microsoft.com> wrote in message
> news:469B9D70-D7E3-452C-BAE3-2D5E0320909C@.microsoft.com...
>
>|||You might try using a domain account for the 'Performance logs and alerts'
service. This has always worked for me.
Hope this helps.
Dan Guzman
SQL Server MVP
"ap001" <ap001@.discussions.microsoft.com> wrote in message
news:2D40BCD1-40C4-4F34-B6E5-6FDDABE109FC@.microsoft.com...[vbcol=seagreen]
> Thx,
> I've tried this and have tested the ODBC connection which is successfull.
> However when I go to start the counter log it fails and the application
> event viewer has the following message:
> Description:
> 18456 :
> Login failed for user 'NT AUTHORITY\NETWORK SERVICE'.
> I then went and added a local group on the server and added 'NT
> AUTHORITY\NETWORK SERVICE' to the group. I then granted login access for
> this
> local group to Sql server and added it as a user to a repository db with
> full
> permissions on the db... get the same error.
> Thx again.
>
>
> "Dan Guzman" wrote:
>|||Thx, that worked.
"Dan Guzman" wrote:

> You might try using a domain account for the 'Performance logs and alerts'
> service. This has always worked for me.
> --
> Hope this helps.
> Dan Guzman
> SQL Server MVP
> "ap001" <ap001@.discussions.microsoft.com> wrote in message
> news:2D40BCD1-40C4-4F34-B6E5-6FDDABE109FC@.microsoft.com...
>
>

Monday, March 12, 2012

Performance : Queries taking very long time (Urgent)

The production_table has ~20 million records and has identity key which is
clustered and has non clustered index on Created_Date.
The following queries taking very long time as the plan using index scan.
Please advise for optimization.
select *
from production_table p -- production_table has ~10 million records
where datediff(dd, p.Created_Date, getdate()) = 7
select *
from production_table p -- -- production_table has ~10 million records
where datediff(mm, p.created_date, getdate()) = 1
Thanks in advance.
Hmm.. How many rows do the queries return ? Theres lots one could advice,
such as do you really need a select * or one could state, creating a
clustered index on Created_Date column or even trying to rewrite the where
clause something like
where created_date = getdate-7 and so on..
I think thats related to SARG and how the optimiser would pick the right
plan.. But with so many records, and with the select * in there, even if
does do an index seek , it would follow with a bookmark lookup to get the
rest of the data which could be expensive as well..
So without knowing a whole lot about ur application, it would be wrong for
me to really give the right solution...
Probably someone else can chime in with something better.
"DKRReddy" <dkrreddy@.hotmail.com> wrote in message
news:%23$XOvrMDFHA.392@.TK2MSFTNGP14.phx.gbl...
> The production_table has ~20 million records and has identity key which
is
> clustered and has non clustered index on Created_Date.
> The following queries taking very long time as the plan using index scan.
> Please advise for optimization.
> select *
> from production_table p -- production_table has ~10 million records
> where datediff(dd, p.Created_Date, getdate()) = 7
>
> select *
> from production_table p -- -- production_table has ~10 million
records
> where datediff(mm, p.created_date, getdate()) = 1
>
> Thanks in advance.
>
|||I would try running UPDATE STATISTICS initially, then try an index hint and
then consider clustering the date column rather than the identity column. You
could also test the use of calculated columns with indexes on the calculated
columns IE: day_number and month_number.
regards,
Mark Baekdal
http://www.dbghost.com
http://www.innovartis.co.uk
+44 (0)208 241 1762
Database change management for SQL Server
"DKRReddy" wrote:

> The production_table has ~20 million records and has identity key which is
> clustered and has non clustered index on Created_Date.
> The following queries taking very long time as the plan using index scan.
> Please advise for optimization.
> select *
> from production_table p -- production_table has ~10 million records
> where datediff(dd, p.Created_Date, getdate()) = 7
>
> select *
> from production_table p -- -- production_table has ~10 million records
> where datediff(mm, p.created_date, getdate()) = 1
>
> Thanks in advance.
>
>
|||Calculate the fixed date range before you query. Always list the
columns you require rather than use SELECT *.
SELECT col1, col2, ...
FROM production_table
WHERE created_date >= '20050131'
AND created_date < '20050201'
SELECT col1, col2, ...
FROM production_table
WHERE created_date >= '20050206'
AND created_date < '20050207'
David Portas
SQL Server MVP
|||David is correct,... when you use a function in a where clause, the
optimizer is unable to use index statistics to determine the best index.
Pre-determining the date ranges as David suggests will give the optimizer
visibility into the index statistics... Performance will then depend on the
number of rows that satisfy the where clause...
Wayne Snyder, MCDBA, SQL Server MVP
Mariner, Charlotte, NC
www.mariner-usa.com
(Please respond only to the newsgroups.)
I support the Professional Association of SQL Server (PASS) and it's
community of SQL Server professionals.
www.sqlpass.org
"DKRReddy" <dkrreddy@.hotmail.com> wrote in message
news:%23$XOvrMDFHA.392@.TK2MSFTNGP14.phx.gbl...
> The production_table has ~20 million records and has identity key which
> is
> clustered and has non clustered index on Created_Date.
> The following queries taking very long time as the plan using index scan.
> Please advise for optimization.
> select *
> from production_table p -- production_table has ~10 million records
> where datediff(dd, p.Created_Date, getdate()) = 7
>
> select *
> from production_table p -- -- production_table has ~10 million
> records
> where datediff(mm, p.created_date, getdate()) = 1
>
> Thanks in advance.
>

Performance : Queries taking very long time (Urgent)

The production_table has ~20 million records and has identity key which is
clustered and has non clustered index on Created_Date.
The following queries taking very long time as the plan using index scan.
Please advise for optimization.
select *
from production_table p -- production_table has ~10 million records
where datediff(dd, p.Created_Date, getdate()) = 7
select *
from production_table p -- -- production_table has ~10 million records
where datediff(mm, p.created_date, getdate()) = 1
Thanks in advance.Hmm.. How many rows do the queries return ? Theres lots one could advice,
such as do you really need a select * or one could state, creating a
clustered index on Created_Date column or even trying to rewrite the where
clause something like
where created_date = getdate-7 and so on..
I think thats related to SARG and how the optimiser would pick the right
plan.. But with so many records, and with the select * in there, even if
does do an index seek , it would follow with a bookmark lookup to get the
rest of the data which could be expensive as well..
So without knowing a whole lot about ur application, it would be wrong for
me to really give the right solution...
Probably someone else can chime in with something better.
"DKRReddy" <dkrreddy@.hotmail.com> wrote in message
news:%23$XOvrMDFHA.392@.TK2MSFTNGP14.phx.gbl...
> The production_table has ~20 million records and has identity key which
is
> clustered and has non clustered index on Created_Date.
> The following queries taking very long time as the plan using index scan.
> Please advise for optimization.
> select *
> from production_table p -- production_table has ~10 million records
> where datediff(dd, p.Created_Date, getdate()) = 7
>
> select *
> from production_table p -- -- production_table has ~10 million
records
> where datediff(mm, p.created_date, getdate()) = 1
>
> Thanks in advance.
>|||I would try running UPDATE STATISTICS initially, then try an index hint and
then consider clustering the date column rather than the identity column. You
could also test the use of calculated columns with indexes on the calculated
columns IE: day_number and month_number.
regards,
Mark Baekdal
http://www.dbghost.com
http://www.innovartis.co.uk
+44 (0)208 241 1762
Database change management for SQL Server
"DKRReddy" wrote:
> The production_table has ~20 million records and has identity key which is
> clustered and has non clustered index on Created_Date.
> The following queries taking very long time as the plan using index scan.
> Please advise for optimization.
> select *
> from production_table p -- production_table has ~10 million records
> where datediff(dd, p.Created_Date, getdate()) = 7
>
> select *
> from production_table p -- -- production_table has ~10 million records
> where datediff(mm, p.created_date, getdate()) = 1
>
> Thanks in advance.
>
>|||Calculate the fixed date range before you query. Always list the
columns you require rather than use SELECT *.
SELECT col1, col2, ...
FROM production_table
WHERE created_date >= '20050131'
AND created_date < '20050201'
SELECT col1, col2, ...
FROM production_table
WHERE created_date >= '20050206'
AND created_date < '20050207'
--
David Portas
SQL Server MVP
--|||David is correct,... when you use a function in a where clause, the
optimizer is unable to use index statistics to determine the best index.
Pre-determining the date ranges as David suggests will give the optimizer
visibility into the index statistics... Performance will then depend on the
number of rows that satisfy the where clause...
--
Wayne Snyder, MCDBA, SQL Server MVP
Mariner, Charlotte, NC
www.mariner-usa.com
(Please respond only to the newsgroups.)
I support the Professional Association of SQL Server (PASS) and it's
community of SQL Server professionals.
www.sqlpass.org
"DKRReddy" <dkrreddy@.hotmail.com> wrote in message
news:%23$XOvrMDFHA.392@.TK2MSFTNGP14.phx.gbl...
> The production_table has ~20 million records and has identity key which
> is
> clustered and has non clustered index on Created_Date.
> The following queries taking very long time as the plan using index scan.
> Please advise for optimization.
> select *
> from production_table p -- production_table has ~10 million records
> where datediff(dd, p.Created_Date, getdate()) = 7
>
> select *
> from production_table p -- -- production_table has ~10 million
> records
> where datediff(mm, p.created_date, getdate()) = 1
>
> Thanks in advance.
>

Performance : Queries taking very long time (Urgent)

The production_table has ~20 million records and has identity key which is
clustered and has non clustered index on Created_Date.
The following queries taking very long time as the plan using index scan.
Please advise for optimization.
select *
from production_table p -- production_table has ~10 million records
where datediff(dd, p.Created_Date, getdate()) = 7
select *
from production_table p -- -- production_table has ~10 million records
where datediff(mm, p.created_date, getdate()) = 1
Thanks in advance.Hmm.. How many rows do the queries return ? Theres lots one could advice,
such as do you really need a select * or one could state, creating a
clustered index on Created_Date column or even trying to rewrite the where
clause something like
where created_date = getdate-7 and so on..
I think thats related to SARG and how the optimiser would pick the right
plan.. But with so many records, and with the select * in there, even if
does do an index seek , it would follow with a bookmark lookup to get the
rest of the data which could be expensive as well..
So without knowing a whole lot about ur application, it would be wrong for
me to really give the right solution...
Probably someone else can chime in with something better.
"DKRReddy" <dkrreddy@.hotmail.com> wrote in message
news:%23$XOvrMDFHA.392@.TK2MSFTNGP14.phx.gbl...
> The production_table has ~20 million records and has identity key which
is
> clustered and has non clustered index on Created_Date.
> The following queries taking very long time as the plan using index scan.
> Please advise for optimization.
> select *
> from production_table p -- production_table has ~10 million records
> where datediff(dd, p.Created_Date, getdate()) = 7
>
> select *
> from production_table p -- -- production_table has ~10 million
records
> where datediff(mm, p.created_date, getdate()) = 1
>
> Thanks in advance.
>|||I would try running UPDATE STATISTICS initially, then try an index hint and
then consider clustering the date column rather than the identity column. Yo
u
could also test the use of calculated columns with indexes on the calculated
columns IE: day_number and month_number.
regards,
Mark Baekdal
http://www.dbghost.com
http://www.innovartis.co.uk
+44 (0)208 241 1762
Database change management for SQL Server
"DKRReddy" wrote:

> The production_table has ~20 million records and has identity key which
is
> clustered and has non clustered index on Created_Date.
> The following queries taking very long time as the plan using index scan.
> Please advise for optimization.
> select *
> from production_table p -- production_table has ~10 million records
> where datediff(dd, p.Created_Date, getdate()) = 7
>
> select *
> from production_table p -- -- production_table has ~10 million record
s
> where datediff(mm, p.created_date, getdate()) = 1
>
> Thanks in advance.
>
>|||Calculate the fixed date range before you query. Always list the
columns you require rather than use SELECT *.
SELECT col1, col2, ...
FROM production_table
WHERE created_date >= '20050131'
AND created_date < '20050201'
SELECT col1, col2, ...
FROM production_table
WHERE created_date >= '20050206'
AND created_date < '20050207'
David Portas
SQL Server MVP
--|||David is correct,... when you use a function in a where clause, the
optimizer is unable to use index statistics to determine the best index.
Pre-determining the date ranges as David suggests will give the optimizer
visibility into the index statistics... Performance will then depend on the
number of rows that satisfy the where clause...
Wayne Snyder, MCDBA, SQL Server MVP
Mariner, Charlotte, NC
www.mariner-usa.com
(Please respond only to the newsgroups.)
I support the Professional Association of SQL Server (PASS) and it's
community of SQL Server professionals.
www.sqlpass.org
"DKRReddy" <dkrreddy@.hotmail.com> wrote in message
news:%23$XOvrMDFHA.392@.TK2MSFTNGP14.phx.gbl...
> The production_table has ~20 million records and has identity key which
> is
> clustered and has non clustered index on Created_Date.
> The following queries taking very long time as the plan using index scan.
> Please advise for optimization.
> select *
> from production_table p -- production_table has ~10 million records
> where datediff(dd, p.Created_Date, getdate()) = 7
>
> select *
> from production_table p -- -- production_table has ~10 million
> records
> where datediff(mm, p.created_date, getdate()) = 1
>
> Thanks in advance.
>

Friday, March 9, 2012

Performance - SQLServer:Buffer Manager Free pages

Hi
Envinnment : Windows 2003 SQLServer 2000 sp3 with AWE enabled
While performance monitoring in the low peak time, I saw the SQLServer:
Buffer Manager - Free pages = 1768 and came down till 500 and back to 1398.
I gathered form sysperfinfo the following
Buffer Cache database pages8360 MB
Free pages10 MB
Procedure Cache Allocated1377 MB
Is any thing wrong with Buffer Manger? because of free space came down? Is
there any lower limit for free pages?
Thanks for looking this issue
Message posted via droptable.com
http://www.droptable.com/Uwe/Forums...erver/200510/1
sorry to add total memory . 12GB
Dedicated to SQL = 10.5 GB
kpxus wrote:
>Hi
>Envinnment : Windows 2003 SQLServer 2000 sp3 with AWE enabled
>While performance monitoring in the low peak time, I saw the SQLServer:
>Buffer Manager - Free pages = 1768 and came down till 500 and back to 1398.
>I gathered form sysperfinfo the following
>Buffer Cache database pages8360 MB
>Free pages10 MB
>Procedure Cache Allocated1377 MB
>Is any thing wrong with Buffer Manger? because of free space came down? Is
>there any lower limit for free pages?
>Thanks for looking this issue
Message posted via droptable.com
http://www.droptable.com/Uwe/Forums...erver/200510/1
|||Hi
To use the Buffer Cache Hit Ratio.
I use that along with SQLServer:MemoryManager\Total Server Memory(KB)
and SQLServer:MemoryManager\Target Server Memory(KB). Total and Target
values should be equal.
Target should never be higher than Total. Target is how much SQL
Server would like to use and Total is how much it currently has.
HTH
From
Doller
|||Total and target memory are same 10.506 MB
doller wrote:
>Hi
>To use the Buffer Cache Hit Ratio.
>I use that along with SQLServer:MemoryManager\Total Server Memory(KB)
>and SQLServer:MemoryManager\Target Server Memory(KB). Total and Target
>values should be equal.
> Target should never be higher than Total. Target is how much SQL
>Server would like to use and Total is how much it currently has.
>HTH
>From
>Doller
Message posted via droptable.com
http://www.droptable.com/Uwe/Forums...erver/200510/1

Performance - SQLServer:Buffer Manager Free pages

Hi
Envinnment : Windows 2003 SQLServer 2000 sp3 with AWE enabled
While performance monitoring in the low peak time, I saw the SQLServer:
Buffer Manager - Free pages = 1768 and came down till 500 and back to 1398.
I gathered form sysperfinfo the following
Buffer Cache database pages 8360 MB
Free pages 10 MB
Procedure Cache Allocated 1377 MB
Is any thing wrong with Buffer Manger? because of free space came down? Is
there any lower limit for free pages?
Thanks for looking this issue
Message posted via droptable.com
http://www.droptable.com/Uwe/Forums...server/200510/1sorry to add total memory . 12GB
Dedicated to SQL = 10.5 GB
kpxus wrote:
>Hi
>Envinnment : Windows 2003 SQLServer 2000 sp3 with AWE enabled
>While performance monitoring in the low peak time, I saw the SQLServer:
>Buffer Manager - Free pages = 1768 and came down till 500 and back to 1398.
>I gathered form sysperfinfo the following
>Buffer Cache database pages 8360 MB
>Free pages 10 MB
>Procedure Cache Allocated 1377 MB
>Is any thing wrong with Buffer Manger? because of free space came down? Is
>there any lower limit for free pages?
>Thanks for looking this issue
Message posted via droptable.com
http://www.droptable.com/Uwe/Forums...server/200510/1|||Hi
To use the Buffer Cache Hit Ratio.
I use that along with SQLServer:MemoryManager\Total Server Memory(KB)
and SQLServer:MemoryManager\Target Server Memory(KB). Total and Target
values should be equal.
Target should never be higher than Total. Target is how much SQL
Server would like to use and Total is how much it currently has.
HTH
From
Doller|||Total and target memory are same 10.506 MB
doller wrote:
>Hi
>To use the Buffer Cache Hit Ratio.
>I use that along with SQLServer:MemoryManager\Total Server Memory(KB)
>and SQLServer:MemoryManager\Target Server Memory(KB). Total and Target
>values should be equal.
> Target should never be higher than Total. Target is how much SQL
>Server would like to use and Total is how much it currently has.
>HTH
>From
>Doller
Message posted via droptable.com
http://www.droptable.com/Uwe/Forums...server/200510/1

Performance - SQLServer:Buffer Manager Free pages

Hi
Envinnment : Windows 2003 SQLServer 2000 sp3 with AWE enabled
While performance monitoring in the low peak time, I saw the SQLServer:
Buffer Manager - Free pages = 1768 and came down till 500 and back to 1398.
I gathered form sysperfinfo the following
Buffer Cache database pages 8360 MB
Free pages 10 MB
Procedure Cache Allocated 1377 MB
Is any thing wrong with Buffer Manger? because of free space came down? Is
there any lower limit for free pages?
Thanks for looking this issue
--
Message posted via SQLMonster.com
http://www.sqlmonster.com/Uwe/Forums.aspx/sql-server/200510/1sorry to add total memory . 12GB
Dedicated to SQL = 10.5 GB
kpxus wrote:
>Hi
>Envinnment : Windows 2003 SQLServer 2000 sp3 with AWE enabled
>While performance monitoring in the low peak time, I saw the SQLServer:
>Buffer Manager - Free pages = 1768 and came down till 500 and back to 1398.
>I gathered form sysperfinfo the following
>Buffer Cache database pages 8360 MB
>Free pages 10 MB
>Procedure Cache Allocated 1377 MB
>Is any thing wrong with Buffer Manger? because of free space came down? Is
>there any lower limit for free pages?
>Thanks for looking this issue
Message posted via SQLMonster.com
http://www.sqlmonster.com/Uwe/Forums.aspx/sql-server/200510/1|||Hi
To use the Buffer Cache Hit Ratio.
I use that along with SQLServer:MemoryManager\Total Server Memory(KB)
and SQLServer:MemoryManager\Target Server Memory(KB). Total and Target
values should be equal.
Target should never be higher than Total. Target is how much SQL
Server would like to use and Total is how much it currently has.
HTH
From
Doller|||Total and target memory are same 10.506 MB
doller wrote:
>Hi
>To use the Buffer Cache Hit Ratio.
>I use that along with SQLServer:MemoryManager\Total Server Memory(KB)
>and SQLServer:MemoryManager\Target Server Memory(KB). Total and Target
>values should be equal.
> Target should never be higher than Total. Target is how much SQL
>Server would like to use and Total is how much it currently has.
>HTH
>From
>Doller
Message posted via SQLMonster.com
http://www.sqlmonster.com/Uwe/Forums.aspx/sql-server/200510/1

Performance -- Plan Cost and Scan

I am looking some query stats and have a query.
Query 1 Plan Cost: 5.312 -- Execution 20.497 seconds -- total time 23.848
seconds --Physical Reads 1,404, Logical Reads, 927,701,Scans 621, Read Ahead
Reads - 3,976
Query 2 Plan Cost 9.469 -- Exection 00.143 seconds -- total time 02.016
seconds -- physical reads 0, logical reads 7146, sans 622 , read ahead reads
100
Query 2 obviously runs around 95% faster and it would seem difference
between plan cost is very little.
What is the downside of having the plan cost go up? Under heavier load
would this start to perform poorly because of that? At what point does the
plan cost become more important than the other items? Or are the logical
reads a better indiciator of which way to go.
Sorry for so many questions -- really starting to use the tools to tweak
queries and want to make sure I am going down right paths.
Thanks!!!Execution plan costs are worth looking at, but must be taken with a
bit of skepticism. The cost in the execution plan is just an
estimate. The actual results can be quite different. You can even
get completely different execution plans sometimes just by updating
statistics.
Roy Harvey
Beacon Falls, CT
On Tue, 28 Feb 2006 10:46:07 -0500, "Brian" <brian@.nospam.com> wrote:

>I am looking some query stats and have a query.
>Query 1 Plan Cost: 5.312 -- Execution 20.497 seconds -- total time 23.848
>seconds --Physical Reads 1,404, Logical Reads, 927,701,Scans 621, Read Ahea
d
>Reads - 3,976
>Query 2 Plan Cost 9.469 -- Exection 00.143 seconds -- total time 02.016
>seconds -- physical reads 0, logical reads 7146, sans 622 , read ahead read
s
>100
>
>Query 2 obviously runs around 95% faster and it would seem difference
>between plan cost is very little.
>What is the downside of having the plan cost go up? Under heavier load
>would this start to perform poorly because of that? At what point does the
>plan cost become more important than the other items? Or are the logical
>reads a better indiciator of which way to go.
>Sorry for so many questions -- really starting to use the tools to tweak
>queries and want to make sure I am going down right paths.
>Thanks!!!
>

Wednesday, March 7, 2012

Performance

I am trying to improve performance on my SQL 2000
Server. In my performance monitoring, my % processor
time hovers around the 35% to 40% mark. The memory seems
to be sufficient to support our needs, but users are
complaining that the system is slow.
I can identify which databases are being used that causes
the slow response time. I would like to add some indexs
on a few tables. Can I add indexes while users are in
the system?
Any other suggestions to decrease the % processor time,
besides upgrading the processors on the server?
Thank you,
JLFlemingHi
%30-40 avg processor time did not show that you have a bottleneck with
processor.
It seems like you have slow running queries or locking-blocking issues..
I think it would be better for you to analyze you're queries with sql
profiler. And start to optimize or re-write them.
Creating index on tables will block users to access to that tables until
index creation is complete.
And my suggestion is to decide which index to create according to your
queries.
You can use SQL Profiler to capture the trace's and Index Tuning Wizard to
analyze them. (You better use ITW against the db on the test server using the
trace file that you capture from production)
Regards..
"JLFleming" wrote:
> I am trying to improve performance on my SQL 2000
> Server. In my performance monitoring, my % processor
> time hovers around the 35% to 40% mark. The memory seems
> to be sufficient to support our needs, but users are
> complaining that the system is slow.
> I can identify which databases are being used that causes
> the slow response time. I would like to add some indexs
> on a few tables. Can I add indexes while users are in
> the system?
> Any other suggestions to decrease the % processor time,
> besides upgrading the processors on the server?
> Thank you,
> JLFleming
>|||You might want to have a looka t these below. Low processor utilization can
mean several things but one of the most common is that they are waiting on
Disk I/O to complete. If you have a disk bottleneck the processors can't do
much. The same is true for blocking. Have you run sp_who2 to see if anyone
is being blocked?
http://www.microsoft.com/sql/techinfo/administration/2000/perftuning.asp
Performance WP's
http://www.swynk.com/friends/vandenberg/perfmonitor.asp Perfmon counters
http://www.sql-server-performance.com/sql_server_performance_audit.asp
Hardware Performance CheckList
http://www.sql-server-performance.com/best_sql_server_performance_tips.asp
SQL 2000 Performance tuning tips
http://www.support.microsoft.com/?id=q224587 Troubleshooting App
Performance
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/adminsql/ad_perfmon_24u1.asp
Disk Monitoring
Andrew J. Kelly SQL MVP
"JLFleming" <anonymous@.discussions.microsoft.com> wrote in message
news:066e01c50eac$063c8340$a401280a@.phx.gbl...
> I am trying to improve performance on my SQL 2000
> Server. In my performance monitoring, my % processor
> time hovers around the 35% to 40% mark. The memory seems
> to be sufficient to support our needs, but users are
> complaining that the system is slow.
> I can identify which databases are being used that causes
> the slow response time. I would like to add some indexs
> on a few tables. Can I add indexes while users are in
> the system?
> Any other suggestions to decrease the % processor time,
> besides upgrading the processors on the server?
> Thank you,
> JLFleming|||I have very little to add to what these guys have said,
however you may also need to look at Disk Raiding and
Network speed.
Other than that is your system used as a data input tool,
reporting tool or a bit of both.
Peter
peternolan67REMOVETHIS@.hotmail.com
"It's true hard work never killed anybody, but I figure,
why take the chance?"
Ronald Reagan
>--Original Message--
>I am trying to improve performance on my SQL 2000
>Server. In my performance monitoring, my % processor
>time hovers around the 35% to 40% mark. The memory seems
>to be sufficient to support our needs, but users are
>complaining that the system is slow.
>I can identify which databases are being used that causes
>the slow response time. I would like to add some indexs
>on a few tables. Can I add indexes while users are in
>the system?
>Any other suggestions to decrease the % processor time,
>besides upgrading the processors on the server?
>Thank you,
>JLFleming
>.
>|||Our system is mostly used as a data input tool. We do
some reporting, but not much.
>--Original Message--
>I have very little to add to what these guys have said,
>however you may also need to look at Disk Raiding and
>Network speed.
>Other than that is your system used as a data input
tool,
>reporting tool or a bit of both.
>Peter
>peternolan67REMOVETHIS@.hotmail.com
>"It's true hard work never killed anybody, but I figure,
>why take the chance?"
>Ronald Reagan
>
>>--Original Message--
>>I am trying to improve performance on my SQL 2000
>>Server. In my performance monitoring, my % processor
>>time hovers around the 35% to 40% mark. The memory
seems
>>to be sufficient to support our needs, but users are
>>complaining that the system is slow.
>>I can identify which databases are being used that
causes
>>the slow response time. I would like to add some
indexs
>>on a few tables. Can I add indexes while users are in
>>the system?
>>Any other suggestions to decrease the % processor time,
>>besides upgrading the processors on the server?
>>Thank you,
>>JLFleming
>>.
>.
>

performance

I have been looking at the performance of my SQL Server 2000.
For %Disk Time I get the following values for 24 hours (3AM to 3AM next day):
MIN MAX AVERAGE
0.494 39244.5 113.686
If I take out the busiest 45 minutes and look at the Statistics for 23 hours
and 15 minutes then I get the following values (3:45 AM to 3 AM next day)
MIN MAX AVERAGE
0.912 4860.8 42.423
Microsoft suggests a threshold value of 90% for this counter. Do these
values look OK? I am not having any performance problems on this machine.
Even the average %Processor Time is less than 3%. Any insight will be
appreciated. Thanks.%Disk Time is almost a useless counter and should be used very sparingly or
even not at all. If it is at 100 or less you can be pretty sure the disks
are not being taxed too much. Anything over that and it becomes pretty
unreliable and hard to correlate. The disk Q's will give a much better
overall indication of how busy they are but there are other disk counters
that will give more detailed insights to how efficient they are. The
fn_virtual_filestats are also a good place to start for disk related info.
Here are some other places:
http://sqlblog.com/blogs/linchi_shea/archive/2007/03/28/don-t-forget-the-business-transaction-throughput-and-response-time.aspx
http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/sqlIObasics.mspx
IO Basics
http://www.microsoft.com/technet/prodtechnol/sql/2005/physdbstor.mspx
--
Andrew J. Kelly SQL MVP
Solid Quality Mentors
"sharman" <sharman@.discussions.microsoft.com> wrote in message
news:159A4413-BD9B-4123-A669-4D4140603888@.microsoft.com...
>I have been looking at the performance of my SQL Server 2000.
> For %Disk Time I get the following values for 24 hours (3AM to 3AM next
> day):
> MIN MAX AVERAGE
> 0.494 39244.5 113.686
> If I take out the busiest 45 minutes and look at the Statistics for 23
> hours
> and 15 minutes then I get the following values (3:45 AM to 3 AM next day)
> MIN MAX AVERAGE
> 0.912 4860.8 42.423
> Microsoft suggests a threshold value of 90% for this counter. Do these
> values look OK? I am not having any performance problems on this machine.
> Even the average %Processor Time is less than 3%. Any insight will be
> appreciated. Thanks.|||In addition to what Andy said, you should also look at the I/O response
counters (Avg Disk sec/read and Avg disk sec/write). These counters tell you
how long in average an I/O read or write take to complete. Together with Avg
Disk bytes/read and Avg Disk bytes/write--which tell you the sizes of your
I/O, you can get a pretty good picture of how your disks are doing. For
instance, if you are doing small I/Os and your avg disk response time is long
(e.g. > 20ms or a smaller threshold), something is up with your disk and you
should take a closer look.
Linchi
"sharman" wrote:
> I have been looking at the performance of my SQL Server 2000.
> For %Disk Time I get the following values for 24 hours (3AM to 3AM next day):
> MIN MAX AVERAGE
> 0.494 39244.5 113.686
> If I take out the busiest 45 minutes and look at the Statistics for 23 hours
> and 15 minutes then I get the following values (3:45 AM to 3 AM next day)
> MIN MAX AVERAGE
> 0.912 4860.8 42.423
> Microsoft suggests a threshold value of 90% for this counter. Do these
> values look OK? I am not having any performance problems on this machine.
> Even the average %Processor Time is less than 3%. Any insight will be
> appreciated. Thanks.

Performance

Is there any difference between:
executing 2 queries in the same database at the same time
or
Executing 2 queries in 2 different databases at the same time
The same conditions for the 2 databases (same query, tables, indexes).
The concrete question is: is there any performance advantages in spliting
one database into 2.
--
Thanks
Regards.
Josema"Josema" <Jestrade@.ocu.org> wrote in message
news:FA30EE5C-0959-456E-BDA6-080382700A51@.microsoft.com...
> Is there any difference between:
> executing 2 queries in the same database at the same time
> or
> Executing 2 queries in 2 different databases at the same time
> The same conditions for the 2 databases (same query, tables, indexes).
> The concrete question is: is there any performance advantages in spliting
> one database into 2.
"It depends."
On to many factors to really give a complete answer w/o knowing your schema,
your hardware, etc.
For example, if the dataset in DB 1 is cached, two queries against it will
be as fast as your memory can provide them. If you have it in two
databases, the chances both datasets will fit in memory drops which means
you increase your likelihood that you have to hit the disk which will
dramactically slow things down.
On the other hand, if both queries require a full table scan and you're
flooding your disk channel, putting the two databases on completely
different disk channels may improve throughput.
Generally though I'd say splitting a database into two pieces strictly for
performance reasons isn't going to help.
>
> --
> Thanks
> Regards.
> Josema

Saturday, February 25, 2012

perform floating point addition in SQL stored procedure

Hi..
Is there any way to add the value of 4 column and at the same time print the result together with another column?

I have this stored procedure:

CREATE PROCEDURE sp_queuelist AS
BEGIN
DECLARE @.temp1 As Decimal
DECLARE @.temp2 As Decimal
DECLARE @.cash As Decimal
DECLARE @.cheque As Decimal
DECLARE @.card As Decimal
DECLARE @.nets As Decimal
DECLARE @.bill As Decimal
DECLARE @.company As Decimal

SELECT
@.cash=Cash,@.cheque=Cheque,@.card=Card,@.nets=Nets,
@.bill=Bill,@.company=Company
FROM QUEUE
ORDER BY QNo

SET @.temp1 = @.cash + @.cheque + @.card + @.nets
SET @.temp2 = @.bill +@.company

Select QNO,
PCNo,
PName,
@.temp1 As totalCash,
@.temp2 As totalContract,
Doctor

FROM QUEUE
ORDER BY QNo
END
GO

Basically I want to add 4 columns: cash+cheque+card+nets into totalCash and bill+company to totalContract.

All the 6 field types are decimal.

I want to calculate temp1 and temp2 and then select the rest of the column to be displayed in the datagrid.

However, this stored procedure gives me 2 problems:
1. It gives the rounding result of the addition of decimal number, not the decimal itself.
2. All the rows in datagrid display the same result for totalCash and totalContract, which is the total from the last row in the table.

I seldom use stored procedure.
Is there any way to solve this problem?
Any suggestion is most welcomed.
Thank you in advanced.
Sincerely

Agustina(1) you can try changing the decimal to float.

(2) in the design of your table, you can set the formula for the column as sum of the other 4 columns. that way you dont need to worry abt doing the addition. anytime you make any change in any of the columns, the computed column is automatically updated.
if you need more help in this approach, let me know.

HTH.

perform all the actions in one pass rather than take serveral call

You're going to spend far less time writing a stored procedure that creates
one account and iterating through your array in your client application
calling the stored proc.
"E B via webservertalk.com" wrote:

> My application has one web form where admin can add as many users as
> desired and at the end when he press SaveChanges I want to add all of them
> to database, There should be some way to pass arrays to stored procedure..
I
> am looking for it or something similar to this...
> I want to pass an array into a stored procedure so that it can perform all
> the actions in one pass rather than take serveral calls to the server.
> I'm working with (SQL Server, .NET)
> --
> Message posted via http://www.webservertalk.com
>Alien2_51 wrote:
> You're going to spend far less time writing a stored procedure that
> creates one account and iterating through your array in your client
> application calling the stored proc.
>
bs
--
Microsoft MVP - ASP/ASP.NET
Please reply to the newsgroup. This email account is my spam trap so I
don't check it very often. If you must reply off-line, then remove the
"NO SPAM"|||to Bob Barrows [MVP] what does it mean ?
Message posted via http://www.webservertalk.com|||E B via webservertalk.com wrote:
> to Bob Barrows [MVP] what does it mean ?
Oh, I'm sorry. it's short for:
I don't believe that what you said is necessarily correct.
Bob Barrows
Microsoft MVP - ASP/ASP.NET
Please reply to the newsgroup. This email account is my spam trap so I
don't check it very often. If you must reply off-line, then remove the
"NO SPAM"|||Ok, however i think i solve the problem , i used the link:
http://www.sommarskog.se/arrays-in-sql.html
what do u think'
Message posted via http://www.webservertalk.com|||E B via webservertalk.com wrote:
> Ok, however i think i solve the problem , i used the link:
> http://www.sommarskog.se/arrays-in-sql.html
> what do u think'
Well, there are several options in that link. Hopefully you satisfied
yourself that you picked the optimum one for your situation.
BTW, my comment was not directed at you. it was directed at the comment that
looping through the array in the client and making multiple calls to a
procedure ... across processes ... would be quicker than making a single
call to a stored procedure.
Microsoft MVP - ASP/ASP.NET
Please reply to the newsgroup. This email account is my spam trap so I
don't check it very often. If you must reply off-line, then remove the
"NO SPAM"