Showing posts with label client. Show all posts

Friday, March 30, 2012

Performance improves with Studio Query window open

Bit of a strange one here. We have a SQL Express instance running with one database which is accessed by a VB6 client application.

Performance between the application and the database isn't great, but bizarrely if you open a Query window from the Management Studio (against the database) the performance dramatically improves - to the extent that it is visually noticeable from within the application.

So I'm thinking that the database connections made by the application are taking longer because of instance name resolution or the like, and that with the Query window open this resolution happens more quickly.

Has anyone come across this situation? I can re-create it on several different computers each with their own Express instance and I've not found anything about this on the net.

Here's the connection string used by the application - I have tried various permutations of values to no avail:

Provider=SQLOLEDB.1;Initial Catalog=SampleDB;Data Source=(local)\sqlexpress;Trusted_Connection=yes

Thanks very much

hi, SQL Express sets a database property by default on its databases that could explain what you're observing..

SQL Express sets the Auto Close database property to true, so that every database with no active connections is shut down to preserve file integrity.. at the next connection the database is re-opened, which requires some start-up overhead.. you can change that database option as required...

this "solution" was probably chosen because SQL Express is expected to run on "client PCs" and not on "server" hardware, which is obviously more "secure" and "stable"...

regards

Yes, thank you - switching the Auto Close to False works nicely.
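For anyone else hitting this, here is a minimal T-SQL sketch of the check and the change (assuming the database name SampleDB from the connection string above):

-- Check the current setting: 1 = Auto Close on, 0 = off
SELECT DATABASEPROPERTYEX('SampleDB', 'IsAutoClose');

-- Keep the database open between connections
ALTER DATABASE SampleDB SET AUTO_CLOSE OFF;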

I note that Auto Close is False by default for databases on the full SQL Server editions, so we only need to watch out for this on databases hosted on SQL Express.

Thanks again.

Performance Improvements: Hardware vs. DB Design

Hi
I recently designed & developed a data warehouse for a client of ours.
The project has been quite a success, but they have since loaded the
database with a substantial amount of (historic) data, and have given
access to the query & reporting system (Business Objects) to many more
users. As a result, the query time on the reports is beginning to
slow.
I need to advise the client on what should be done to improve
performance again.
I'm quite familiar with what can be done on the design side of the
database (i.e. indices, aggregate tables, etc.), but I am not very
sure about the cost/benefits from a hardware perspective. As a
consultancy, the cost of our services (to build aggregate tables, etc)
would be quite expensive. It might be a better option for the client
to improve the spec of the server instead, but I am reluctant to
recommend this option without knowing that there would be guaranteed
benefits. In other words, if they were to double the processing power,
would the query time effectively halve? What about memory - i.e. would
more memory help with multiple-users, but not affect the speed of the
reports? Is 4GB the max memory that Windows 2000 can have?
[I can't remember the exact spec, but the server's got something like
2 Xeon dual-processors, 4GB RAM & Win2k.]
Can anyone recommend a study or white-paper on the performance
improvements in relation to hardware upgrades, or something similar?
Otherwise, what other options do I have? Am I overlooking something
like two load-balanced servers?
Thanks for any help / recommendations!
Sean W.
Sean wrote:
> Hi
> I recently designed & developed a data warehouse for a client of ours.
> The project has been quite a success, but they have since loaded the
> database with a substantial amount of (historic) data, and have given
> access to the query & reporting system (Business Objects) to many more
> users. As a result, the query time on the reports is beginning to
> slow.
> I need to advise the client on what should be done to improve
> performance again.
> I'm quite familiar with what can be done on the design side of the
> database (i.e. indices, aggregate tables, etc.), but I am not very
> sure about the cost/benefits from a hardware perspective. As a
> consultancy, the cost of our services (to build aggregate tables, etc)
> would be quite expensive. It might be a better option for the client
> to improve the spec of the server instead, but I am reluctant to
> recommend this option without knowing that there would be guaranteed
> benefits. In other words, if they were to double the processing power,
> would the query time effectively halve? What about memory - i.e. would
> more memory help with multiple-users, but not affect the speed of the
> reports? Is 4GB the max memory that Windows 2000 can have?
> [I can't remember the exact spec, but the server's got something like
> 2 Xeon dual-processors, 4GB RAM & Win2k.]
> Can anyone recommend a study or white-paper on the performance
> improvements in relation to hardware upgrades, or something similar?
> Otherwise, what other options do I have? Am I overlooking something
> like two load-balanced servers?
> Thanks for any help / recommendations!
> Sean W.
Unfortunately, I think many companies are inclined to add more server
without first trying to tune the database. You've gone ahead and
designed a database for the customer and added the necessary RI and
indexes. But now that you see the SQL executed, someone needs to tune
the queries, revise the indexes if necessary, and verify the reporting
system is executing SQL in an efficient way: For example, is it using
stored procedures? Do users have Ad Hoc access to the database to query
whatever they want? Does the reporting tool bring back large result sets
and filter on the client PC? Do the queries use a READ UNCOMMITTED or
NOLOCK query option to keep locks to a minimum?
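For reference, that hint looks like this in T-SQL (the table and column names here are purely hypothetical):

-- Read without taking shared locks; may return uncommitted ("dirty") rows
SELECT OrderID, OrderDate
FROM dbo.Orders WITH (NOLOCK)
WHERE OrderDate >= '20050101';

-- Equivalent session-wide form
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;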
My point is that without performance tuning the queries, it's really
impossible to know whether adding more hardware will give you anything
but a temporary reprieve of poor performance.
When I hear about problems like this, I like to remember an old client
that had a couple hundred people hitting a SQL Server database which ran
on a dual-CPU 486 with 84 MB of RAM. I know times have changed, but when
I hear about performance problems on quad-Xeon systems with 3GB RAM (as
an example), I get concerned that a company is considering throwing
another $150K at a new server.
Ad Hoc reporting systems are notoriously bad on performance, especially
if users don't run canned reports and can throw whatever SQL the end-user
reporting tool can generate at the server. The amount of data queried
can easily move old data out of cache and force the next query to read
that data from disk, which is really slow. Adding more memory is an
option if plan reuse is the issue. And since memory is cheap, it can't
hurt to add all you can. Adding more hardware will help, but may not
really be a long term solution.
However, nothing beats being able to aptly use an index to generate a
result set. Table/Clustered Index Scans are a killer on big tables.
You may want to consider placing query limits on ad hoc queries using
"SET QUERY_GOVERNOR_COST_LIMIT ". You can also use Profiler to track
those queries consuming excessive CPU and see if the database or the
queries can be tuned.
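As a rough sketch of that query governor option (the value of 300 is an arbitrary illustration, not a recommendation):

-- Refuse to start queries whose estimated cost exceeds 300 on this connection
SET QUERY_GOVERNOR_COST_LIMIT 300;
-- ... run the reporting queries ...
-- Setting it back to 0 removes the limit
SET QUERY_GOVERNOR_COST_LIMIT 0;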
David Gugick
Imceda Software
www.imceda.com

With Business Objects it is worth asking the users to send you copies of
their reports. You can then see if indices need creating on various
tables etc.
Adrian
David G. wrote:

> Sean wrote:
>
> Unfortunately, I think many companies are inclined to add more server
> without first trying to tune the database. You've gone ahead and
> designed a database for the customer and added the necessary RI and
> indexes. But now that you see the SQL executed, someone needs to tune
> the queries, revise the indexes if necessary, and verify the reporting
> system is executing SQL in an efficient way: For example, is it using
> stored procedures? Do users have Ad Hoc access to the database to query
> whatever they want? Does the reporting tool bring back large result sets
> and filter on the client PC? Do the queries use a READ UNCOMMITTED or
> NOLOCK query option to keep locks to a minimum?
> My point is that without performance tuning the queries, it's really
> impossible to know whether adding more hardware will give you anything
> but a temporary reprieve of poor performance.
> When I hear about problems like this, I like to remember an old client
> that had a couple hundred people hitting a SQL Server database which ran
> on a dual-CPU 486 with 84 MB of RAM. I know times have changed, but when
> I hear about performance problems on quad-Xeon systems with 3GB RAM (as
> an example), I get concerned that a company is considering throwing
> another $150K at a new server.
> Ad Hoc reporting systems are notoriously bad on performance, especially
> if users don't run canned reports and can throw what SQL the end-user
> reporting tool can generate at the server. The amount of data queried
> can easily move old data out of cache and force the next query to read
> that data from disk, which is really slow. Adding more memory is an
> option if plan reuse is the issue. And since memory is cheap, it can't
> hurt to add all you can. Adding more hardware will help, but may not
> really be a long term solution.
> However, nothing beats being able to aptly use an index to generate a
> result set. Table/Clustered Index Scans are a killer on big tables.
> You may want to consider placing query limits on ad hoc queries using
> "SET QUERY_GOVERNOR_COST_LIMIT ". You can also use Profiler to track
> those queries cosuming excessive CPU and see if the database or the
> queries can be tuned.
>

Adrian Edwards wrote:
> With Business Objects it is worth asking the users to send you copies
> of their reports. You can then see if indices need creating on various
> tables etc.
> Adrian
>
Good point. If you can get your hands on any canned reports or at least
have a process for quickly rolling out new or updated reports, you may
eliminate users creating queries that don't perform well.
You might want to consider having new reports created against a test
server before moving them into production. Not giving users the rights
to create reports on the production data may be difficult to manage.
Managers generally want fast results. If you can't do that, at least
introduce a review process so there's never a time when too many reports
are created without review.
You can do this passively using Profiler or a server-side trace. You
could trap the SQL:BatchCompleted and RPC:Completed events for CPU over
a certain level (say 1,000ms) and review those SQL Statements
periodically.
David Gugick
Imceda Software
www.imceda.com
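A hedged sketch of such a server-side trace (the output path and the 1,000 ms CPU threshold are illustrative assumptions; event 10 = RPC:Completed, 12 = SQL:BatchCompleted, column 1 = TextData, 18 = CPU):

DECLARE @traceid INT, @maxsize BIGINT, @on BIT
SELECT @maxsize = 50, @on = 1                      -- 50 MB per trace file
EXEC sp_trace_create @traceid OUTPUT, 2, N'C:\Traces\HighCpu', @maxsize, NULL
EXEC sp_trace_setevent @traceid, 10, 1,  @on       -- RPC:Completed, TextData
EXEC sp_trace_setevent @traceid, 10, 18, @on       -- RPC:Completed, CPU
EXEC sp_trace_setevent @traceid, 12, 1,  @on       -- SQL:BatchCompleted, TextData
EXEC sp_trace_setevent @traceid, 12, 18, @on       -- SQL:BatchCompleted, CPU
EXEC sp_trace_setfilter @traceid, 18, 0, 4, 1000   -- keep only events with CPU >= 1000 ms
EXEC sp_trace_setstatus @traceid, 1                -- start the trace

The resulting .trc file can then be opened in Profiler or queried with fn_trace_gettable.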

performance hit after installing Oracle client?

SQL Server 2000 Enterprise Edition clustered on Win 2k Advanced Server
Last week we installed the Oracle client 8.1.6 to allow DTS to extract data from our Oracle Financials system, which has been working fine. Since then users have reported poor performance. Performance monitor showed low (20-30%) CPU usage, 99.9% buffer cache hit ratio, usual disk utilisation amounts, but Pages/sec was averaging around 300. I understand this should be under 20. There is nothing else running on this server. We took the server down and switched to another node which did not have the Oracle client loaded and performance is back to usual. Has anyone else had a similar problem, or is the Oracle software just a red herring?
Scott Doughty

kevmc@.online.microsoft.com (Kevin McDonnell [MSFT]) wrote in message news:<ByFbcLS4DHA.568@.cpmsftngxa07.phx.gbl>...
quote:

> When the performance was poor did you have multiple instances of SQL
> running on the same node?
> Did the performance improve after moving a single instance of SQL to
> another node?
> Thanks,
> Kevin McDonnell
> Microsoft Corporation
> This posting is provided AS IS with no warranties, and confers no rights.

Hi Kevin
We only had one instance running. We switched to another node which
didn't have the Oracle client installed and all was fine. As I say,
the oracle software may not be relevant. If it helps, while the paging
was running high, the amount of free RAM was over 200Mb.
Scott

Monday, March 26, 2012

Performance depending on user rights?

Hello,
I am new to the world of SQL Server. I just installed it (SQL Server 2000)
from cd, added some users, created an ODBC datasource from my client
and am ready to roll. And it works!
When I perform a query from the Query Analyzer some queries are very
fast, while the same query from my client (Access) take much longer. I
noticed in the Profiler differences between the execution paths. I
just can't find out what causes these performance problems.
Someone told me it could have something to do with the user. When I
use Query analyser I am on the server, with admin rights. ODBC uses a
user with as little rights as possible. Can this be the problem?
Hope someone can help me.
Thankz,
Peter

Does the SQL statement look the same (the one from QA and Access)?
--
Tibor Karaszi, SQL Server MVP
Archive at: http://groups.google.com/groups?oi=djq&as_ugroup=microsoft.public.sqlserver
"P. Bikkel" <bikkelp@.hotmail.com> wrote in message
news:f3f99039.0310230343.7f809590@.posting.google.com...
> Hello,
> I am in the world of SQL Server. Just installed it (SQL Server 2000)
> from cd, added some users, created an ODBC datasource from my client
> and am ready to roll. And it works!
> When I perform an query from the Query Analyzer some queries are very
> fast, while the same query from my client (Access) take much longer. I
> noticed in the Profiler differences between the execution paths. I
> just can't find out what causes these performance problems.
> Someone told me it could have something to do with the user. When I
> use Query analyser I am on the server, with admin rights. ODBC uses a
> user with as little rights as possible. Can this be the problem?
> Hope someone can help me.
> Thankz,
> Peter

Yes, the SQL statement is the same.
"Tibor Karaszi" <tibor.please_reply_to_public_forum.karaszi@.cornerstone.se> wrote in message news:<uAOiBNWmDHA.1884@.TK2MSFTNGP09.phx.gbl>...
> Does the SQL statement look the same (the one from QA and Access)?
> --
> Tibor Karaszi, SQL Server MVP
> Archive at: http://groups.google.com/groups?oi=djq&as_ugroup=microsoft.public.sqlserver
>
> "P. Bikkel" <bikkelp@.hotmail.com> wrote in message
> news:f3f99039.0310230343.7f809590@.posting.google.com...
> > Hello,
> >
> > I am in the world of SQL Server. Just installed it (SQL Server 2000)
> > from cd, added some users, created an ODBC datasource from my client
> > and am ready to roll. And it works!
> >
> > When I perform an query from the Query Analyzer some queries are very
> > fast, while the same query from my client (Access) take much longer. I
> > noticed in the Profiler differences between the execution paths. I
> > just can't find out what causes these performance problems.
> >
> > Someone told me it could have something to do with the user. When I
> > use Query analyser I am on the server, with admin rights. ODBC uses a
> > user with as little rights as possible. Can this be the problem?
> >
> > Hope someone can help me.
> >
> > Thankz,
> >
> > > Peter

Could possibly be different SET settings between the QA and ODBC connection...
--
Tibor Karaszi, SQL Server MVP
Archive at: http://groups.google.com/groups?oi=djq&as_ugroup=microsoft.public.sqlserver
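One quick way to check that is to run the statement below from Query Analyzer and again over the Access/ODBC connection and compare the output; ARITHABORT is a setting that commonly differs between the two and can lead to different plans:

-- Lists the SET options in effect for the current session
DBCC USEROPTIONS;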
"P. Bikkel" <bikkelp@.hotmail.com> wrote in message
news:f3f99039.0310250327.4779d0bf@.posting.google.com...
> Yes, the SQL statement is the same.
> "Tibor Karaszi" <tibor.please_reply_to_public_forum.karaszi@.cornerstone.se> wrote in message
news:<uAOiBNWmDHA.1884@.TK2MSFTNGP09.phx.gbl>...
> > Does the SQL statement look the same (the one from QA and Access)?
> >
> > --
> > Tibor Karaszi, SQL Server MVP
> > Archive at: http://groups.google.com/groups?oi=djq&as_ugroup=microsoft.public.sqlserver
> >
> >
> > "P. Bikkel" <bikkelp@.hotmail.com> wrote in message
> > news:f3f99039.0310230343.7f809590@.posting.google.com...
> > > Hello,
> > >
> > > I am in the world of SQL Server. Just installed it (SQL Server 2000)
> > > from cd, added some users, created an ODBC datasource from my client
> > > and am ready to roll. And it works!
> > >
> > > When I perform an query from the Query Analyzer some queries are very
> > > fast, while the same query from my client (Access) take much longer. I
> > > noticed in the Profiler differences between the execution paths. I
> > > just can't find out what causes these performance problems.
> > >
> > > Someone told me it could have something to do with the user. When I
> > > use Query analyser I am on the server, with admin rights. ODBC uses a
> > > user with as little rights as possible. Can this be the problem?
> > >
> > > Hope someone can help me.
> > >
> > > Thankz,
> > >
> > > > Peter

On 23 Oct 2003 04:43:04 -0700, bikkelp@.hotmail.com (P. Bikkel) wrote:
>I am in the world of SQL Server. Just installed it (SQL Server 2000)
>from cd, added some users, created an ODBC datasource from my client
>and am ready to roll. And it works!
I love it when a plan comes together!
>When I perform an query from the Query Analyzer some queries are very
>fast, while the same query from my client (Access) take much longer. I
>noticed in the Profiler differences between the execution paths. I
>just can't find out what causes these performance problems.
>Someone told me it could have something to do with the user. When I
>use Query analyser I am on the server, with admin rights. ODBC uses a
>user with as little rights as possible. Can this be the problem?
Well, how fast is fast, and how much longer is much longer?
If the query returns a lot of records, it could be something as simple
as the network transfer time. What if you run Query Analyzer on the
client, or Access on the server?
Joshua Stern

Tuesday, March 20, 2012

Performance and SQL 2005 - need some assistance

Hi,
I'm trying to decide if our client's SQL 2005 server specs are
adequate.
SERVER SPECS:
---
It currently has 3 gig of ram, (SQL 2005 setup for min 1.5gig / max
2gig).
Dual Xeons (3 GHz CPUs, I think, from memory)
6 drives in a RAID 1+0 config (pairs are mirrored then striped)
The database itself is around 20gig.
The performance numbers below were taken over a 20 minute period where
about 20 people were using our system. The SQL server is dedicated to
our system.
---
The main question the client are asking is if it needs more memory to
improve performance.
I'm pretty new to sql server 2005 performance analysis, so any feedback
would be great.
Here is some performance numbers related to Memory:
Available MBytes free = 900 Mb (average)
Page Faults/sec = 1800 (average)
Pages/sec = 11 (average) / min 0, max 1255 (there was
3 main spikes here)
PageFile usage = 9%
Avg Private Bytes = 1.65 gig (sqlserver)
Working set sqlserver = 1.65 gig
Buffer cache hit ratio = 99%
Checkpoint pages/sec = 0
Lazy write/sec = 16 (max 108)
Page life expectancy = 251 (min 11, max 1200)
Target server memory = 1.61 gig
Total server memory = 1.53 gig
So I am a bit confused here... the available megabytes seems ok (900Mb)
but it seems to be using the page file quite a bit (and thus
contributing to I/O)? Why would this happen?
There are severe spikes (users could be running big queries) but
overall what do you guys think?
I'll also mention the disk I/O. This seems to be an issue from what I
can see.
The database is on d:, which had:
% Disk Time = 744 (min 0, max 13954)
Av. Disk Queue Length = 7 (min 0, max 140)
Avg. Disk sec/Read = 0
Avg Disk sec/Write = 0
The log files are on e:
I am thinking about splitting the page file so it is on c:\ and e:\
(currently only c:).
The tempdb database is also on c:
We seem to be having quite a bit of page splits/sec (which from what I
have read can contribute to I/O):
Page Splits/sec = 355 (min 0, max 1890)
Our indexes have a fill factor of 95 and are rebuilt nightly. Should I
try a fill factor of say 80?
Any suggestions on the above numbers?
(I can also provide other performance stats if required (such as locks
etc), providing I have them)
Thanks for your time!
David

davconts@.gmail.com wrote:
> Hi,
> I'm trying to decide if our client's SQL 2005 server specs are
> adequate.
> SERVER SPECS:
> ---
> It currently has 3 gig of ram, (SQL 2005 setup for min 1.5gig / max
> 2gig).
> Duel xeons (3 gig cpus i think from memory)
> 6 drives in RAID 1 + 0 config (pairs are mirrorred then stripped)
> The database itself is around 20gig.
> The performance numbers below were taken over a 20 minute period where
> about 20 people were using our system. The SQL server is dedicated to
> our system.
> ---
> The main question the client are asking is if it needs more memory to
> improve performance.
> I'm pretty new to sql server 2005 performance analysis, so any feedback
> would be great.
> Here is some performance numbers related to Memory:
> Available MBytes free = 900 Mb (average)
> Page Faults/sec = 1800 (average)
> Pages/sec = 11 (average) / min 0, max 1255 (there was
> 3 main spikes here)
> PageFile usage = 9%
> Avg Private Bytes = 1.65 gig (sqlserver)
> Working set sqlserver = 1.65 gig
> Buffer cache hit ratio = 99%
> Checkpoint pages/sec = 0
> Lazy write/sec = 16 (max 108)
> Page life expectancy = 251 (min 11, max 1200)
> Target server memory = 1.61 gig
> Total server memory = 1.53 gig
> So I am a bit confused here... the available megabytes seems ok (900Mb)
> but it seems to be using the page file quite a bit (and thus
> contributing to I/O)? Why would this happen?
> There are severe spikes (users could be running big queries) but
> overall what do you guys think?
> I'll also mention the disk I/O. This seems to be an issue from what I
> can see.
> The database is on d:, which had:
> % Disk Time = 744 (min 0, max 13954)
> Av. Disk Queue Length = 7 (min 0, max 140)
> Avg. Disk sec/Read = 0
> Avg Disk sec/Write = 0
> The log files are on e:
> I am thinking about splitting the page file so it is on c:\ and e:\
> (currently only c.
> The tempdb database is also on c:
> We seem to be having quite a bit of page splits/sec (which from what I
> have read can contribute to I/O):
> Page Splits/sec = 355 (min 0, max 1890)
> Our indexes have a fill factor of 95 and a rebuilt nightly. Should I
> try a fill factor of say 80?
> Any suggestions on the above numbers?
> (I can also provide other performance stats if required (such as locks
> etc), providing I have them)
> Thanks for your time!
> David
>
What does the Full Scans/sec (under SQL Server:Access Methods) counter
show? As far as fill factor, that depends on how the table is used. A
table that is fairly static should have a high fill factor, while a
table that is frequently inserted into should have a lower fill factor.
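If it helps, a minimal sketch for reading that counter from inside SQL 2005 (the per-second counters are cumulative since the service started, so sample twice and take the difference):

SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Access Methods%'
  AND counter_name IN ('Full Scans/sec', 'Page Splits/sec');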
Tracy McKibben
MCDBA
http://www.realsqlguy.com

Unfortunately I didn't run that counter when doing the benchmarking
(after analysing the results I wish I did though!!)
I can run the counter now, but the db is not being used much at the
moment (some testing by a few people)
Our application hasn't gone "live" yet, but we are expecting quite a
bit of usage, so we want to try and anticipate/sort out any performance
issues before then.

David,
You haven't told us anything about what you were doing during these time
period being monitored. How many transactions and of what type occurred
during this time? How busy were the CPUs? And most importantly what kind of
performance issues are you having? As for disks you can move the files
around all you want onto the different logical drives but if they are all on
the same physical drive array it won't matter one bit. If you are doing a
lot of writes or tempdb activity you may want to think about splitting up
that single raid into several and separating the log files (including
tempdb) from the rest of the files. These might be worth having a look at:
http://www.sql-server-performance.com/sql_server_performance_audit10.asp
Performance Audit
http://www.microsoft.com/technet/prodtechnol/sql/2005/library/operations.mspx
Performance WP's
http://www.swynk.com/friends/vandenberg/perfmonitor.asp Perfmon counters
http://www.sql-server-performance.com/sql_server_performance_audit.asp
Hardware Performance CheckList
http://www.sql-server-performance.com/best_sql_server_performance_tips.asp
SQL 2000 Performance tuning tips
http://www.support.microsoft.com/?id=224587 Troubleshooting App
Performance
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/adminsql/ad_perfmon_24u1.asp
Disk Monitoring
http://sqldev.net/misc/WaitTypes.htm Wait Types
Tempdb in 2005:
http://download.microsoft.com/download/4/f/8/4f8f2dc9-a9a7-4b68-98cb-163482c95e0b/WorkingWithTempDB.doc
Physical Database Storage:
Andrew J. Kelly SQL MVP
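If a separate physical array does become available, moving tempdb onto it is a small one-off change; a sketch (the E:\ paths are assumptions, and the move only takes effect after the SQL Server service is restarted):

-- tempdev and templog are the default logical file names for tempdb
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'E:\SQLData\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'E:\SQLLogs\templog.ldf');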
<davconts@.gmail.com> wrote in message
news:1152490147.388959.285070@.m73g2000cwd.googlegroups.com...
> Hi,
> I'm trying to decide if our client's SQL 2005 server specs are
> adequate.
> SERVER SPECS:
> ---
> It currently has 3 gig of ram, (SQL 2005 setup for min 1.5gig / max
> 2gig).
> Duel xeons (3 gig cpus i think from memory)
> 6 drives in RAID 1 + 0 config (pairs are mirrorred then stripped)
> The database itself is around 20gig.
> The performance numbers below were taken over a 20 minute period where
> about 20 people were using our system. The SQL server is dedicated to
> our system.
> ---
> The main question the client are asking is if it needs more memory to
> improve performance.
> I'm pretty new to sql server 2005 performance analysis, so any feedback
> would be great.
> Here is some performance numbers related to Memory:
> Available MBytes free = 900 Mb (average)
> Page Faults/sec = 1800 (average)
> Pages/sec = 11 (average) / min 0, max 1255 (there was
> 3 main spikes here)
> PageFile usage = 9%
> Avg Private Bytes = 1.65 gig (sqlserver)
> Working set sqlserver = 1.65 gig
> Buffer cache hit ratio = 99%
> Checkpoint pages/sec = 0
> Lazy write/sec = 16 (max 108)
> Page life expectancy = 251 (min 11, max 1200)
> Target server memory = 1.61 gig
> Total server memory = 1.53 gig
> So I am a bit confused here... the available megabytes seems ok (900Mb)
> but it seems to be using the page file quite a bit (and thus
> contributing to I/O)? Why would this happen?
> There are severe spikes (users could be running big queries) but
> overall what do you guys think?
> I'll also mention the disk I/O. This seems to be an issue from what I
> can see.
> The database is on d:, which had:
> % Disk Time = 744 (min 0, max 13954)
> Av. Disk Queue Length = 7 (min 0, max 140)
> Avg. Disk sec/Read = 0
> Avg Disk sec/Write = 0
> The log files are on e:
> I am thinking about splitting the page file so it is on c:\ and e:\
> (currently only c.
> The tempdb database is also on c:
> We seem to be having quite a bit of page splits/sec (which from what I
> have read can contribute to I/O):
> Page Splits/sec = 355 (min 0, max 1890)
> Our indexes have a fill factor of 95 and a rebuilt nightly. Should I
> try a fill factor of say 80?
> Any suggestions on the above numbers?
> (I can also provide other performance stats if required (such as locks
> etc), providing I have them)
> Thanks for your time!
> David
>

Thanks Andrew, I'll try and clarify.
Here is a bit more info.
Our application is a medical software application - think of it as a
radiology management system where a patients reports, bookings,
accounting etc are stored in our system.
So the queries do all sorts of lookups for patients, reports etc.
In the benchmark, we had about 20 people hit the app (and thus SQL 2005
indirectly) running all sorts of scenarios.
Our main concern was general slowness in the application. There were
points were it performs ok, whilst other times it seemed to hang/become
very slow (there are a few out of control queries submitted by users, I
have captured all the "long" queries for more analysis). We have the
same application running at another hospital, which have 30 users on at
one time and they don't experience as much slowness. Their db is
smaller though (10gig vs 20gig). Hence we are investigating how to get
their sql server working as best it can.
Here is some of the workload stats:
% Processor Time (TOTAL) = 59 (average) / (min 1, max 100)
Batch Requests/sec = 224 (average) / (min 0, max 1363)
SQL Compilations/sec = 5 (average) / (min 0, max 44)
SQL Re-Compilations/sec = 0
The CPU usage seemed ok, although there was occasional spikes to 90+
Locks info:
Average Latch Wait Time (ms) = 5 (average) / (min 0, max 57)
Latch Waits/sec = 110 (average) / (min 0, max 533)
Total Latch Wait Time (ms) = 227 (average) / (min 0, max 2348)
Average Wait Time (ms) = 19 (average) / (min 0, max 1813)
Lock Requests/sec = 102986 (average) / (min 0,
max 609802)
Lock Timeouts/sec = 0
Lock Wait Time (ms) = 4 (average) / (min 0, max
372)
Lock Waits/sec = 0
Tempdb info (currently 1 file, with initial size of 10gb):
Log Bytes Flushed/sec = 41 (min 0, max 12287)
Log File(S) size (KB) = 2047992 (min 2047992, max
2047992)
Log File(s) used size (KB) = 1385 (min 1358, max 1414)
Log Flush Waits/sec = 0
Free space in tempdb (KB) = 9600781
Version Cleanup rate (KB/s) = 0
Version Generation rate (KB/s) = 0
Here are some other counters:
Workfiles Created/sec = 6 (min 0, max 50)
Worktables Created/sec = 4 (min 0, max 28)
Worktables from Cache ratio = 28 (min 8, max 41)
Temp Tables Creation Rate = 0
Temp Tables For Destruction = 0
I've also been reading about the /3GB boot.ini switch. Could enabling
this result in SQL Server using more physical memory rather than
paging?
I'm still confused as to why there seems to be a lot of paging when
there is 900mb available? Or should I get the site to bump the memory
up to 4gb and then use the /3GB switch.
One last thing. Andrew, you mention a good point:
"As for disks you can move the files around all you want onto the
different logical drives but if they are all on the same physical drive
array it won't matter one bit."
Is there a way to control this (I personally don't have experience
managing raid disks so sorry if this is a stupid question)? I'll check
the links you mentioned below.
Thanks all for your input so far!
David
Andrew J. Kelly wrote:
> David,
> You haven't told us anything about what you were doing during these time
> period being monitored. How many transactions and of what type occurred
> during this time? How busy were the CPU's? And most importantly what kind of
> performance issues are you having? As for disks you can move the files
> around all you want onto the different logical drives but if they are all on
> the same physical drive array it won't matter one bit. If you are doing a
> lot of writes or tempdb activity you may want to think about splitting up
> that single raid into several and separating the log files (including
> tempdb) from the rest of the files. These might be worth having a look at:
>
> http://www.sql-server-performance.com/sql_server_performance_audit10.asp
> Performance Audit
> http://www.microsoft.com/technet/prodtechnol/sql/2005/library/operations.mspx
> Performance WP's
> http://www.swynk.com/friends/vandenberg/perfmonitor.asp Perfmon counters
> http://www.sql-server-performance.com/sql_server_performance_audit.asp
> Hardware Performance CheckList
> http://www.sql-server-performance.com/best_sql_server_performance_tips.asp
> SQL 2000 Performance tuning tips
> http://www.support.microsoft.com/?id=224587 Troubleshooting App
> Performance
> http://msdn.microsoft.com/library/default.asp?url=/library/en-us/adminsql/ad_perfmon_24u1.asp
> Disk Monitoring
> http://sqldev.net/misc/WaitTypes.htm Wait Types
> Tempdb in 2005:
> http://download.microsoft.com/download/4/f/8/4f8f2dc9-a9a7-4b68-98cb-163482c95e0b/WorkingWithTempDB.doc
> Physical Database Storage:
> --
> Andrew J. Kelly SQL MVP
> <davconts@.gmail.com> wrote in message
> news:1152490147.388959.285070@.m73g2000cwd.googlegroups.com...

David,
Thanks for the extra info. If it is a single physical array then any
activity on one logical drive will potentially affect the other logical
drives as well. This is because it is a single unit in terms of disks and
the number of heads. The only way to avoid that is to use separate physical
drives. Sometimes adding more drives to a raid 10 will help since it spreads
the load more. But logging and tempdb activity and data access are always
counter productive to each other. How much depends on the volume and drive
configurations. My guess is that you have a lot of untuned queries or tables
that are lacking proper indexes. When you run profiler don't just look for
the queries that take the longest time but pay attention to the number of
reads as well. If you have a lot of queries with a lot of reads you will
most likely have to go to disk to satisfy some of them. That will lead to
increased blocking etc. For only 30 users your batch requests seem sort of
high for an app such as this. It sounds like it may be a bit chatty and
could use some optimization as well. The Pages per second is the one you
really want to pay attention to more than the pagefaults since pagefaults
include a lot of soft faults as well. In your case it does seem to be a
little higher than normal if the conditions are as you say but not that
excessive that I would put it top of the priority list. I would find and
tune the poorly performing queries first. And as to adding more ram. What
edition of OS and SQL Server are you running? If it is Standard Edition of
SQL you can only use 2GB anyway so adding an additional GB will not help.
One last thing. Are you sure there are no other apps or services running on
that machine?
Andrew J. Kelly SQL MVP
<davconts@.gmail.com> wrote in message
news:1152498395.129036.45440@.35g2000cwc.googlegroups.com...
> Thanks Andrew, I'll try and clarify.
> Here is a bit more info.
> Our application is a medical software application - think of it as a
> radiology management system where a patients reports, bookings,
> accounting etc are stored in our system.
> So the queries do all sorts of lookups for patients, reports etc.
> In the benchmark, we had about 20 people hit the app (and thus SQL 2005
> indirectly) running all sorts of scenarios.
> Our main concern was general slowness in the application. There were
> points were it performs ok, whilst other times it seemed to hang/become
> very slow (there are a few out of control queries submitted by users, I
> have captured all the "long" queries for more analysis). We have the
> same application running at another hospital, which have 30 users on at
> one time and they don't experience as much slowness. Their db is
> smaller though (10gig vs 20gig). Hence we are investigating how to get
> their sql server working as best it can.
> Here is some of the workload stats:
> % Processor Time (TOTAL) = 59 (average) / (min 1, max 100)
> Batch Requests/sec = 224 (average) / (min 0, max 1363)
> SQL Compilations/sec = 5 (average) / (min 0, max 44)
> SQL Re-Compilations/sec = 0
> The CPU usage seemed ok, although there was occasional spikes to 90+
> Locks info:
> Average Latch Wait Time (ms) = 5 (average) / (min 0, max 57)
> Latch Waits/sec = 110 (average) / (min 0, max 533)
> Total Latch Wait Time (ms) = 227 (average) / (min 0, max 2348)
> Average Wait Time (ms) = 19 (average) / (min 0, max 1813)
> Lock Requests/sec = 102986 (average) / (min 0,
> max 609802)
> Lock Timeouts/sec = 0
> Lock Wait Time (ms) = 4 (average) / (min 0, max
> 372)
> Lock Waits/sec = 0
> Tempdb info (currently 1 file, with initial size of 10gb):
> Log Bytes Flushed/sec = 41 (min 0, max 12287)
> Log File(S) size (KB) = 2047992 (min 2047992, max
> 2047992)
> Log File(s) used size (KB) = 1385 (min 1358, max 1414)
> Log Flush Waits/sec = 0
> Free space in tempdb (KB) = 9600781
> Version Cleanup rate (KB/s) = 0
> Version Generation rate (KB/s) = 0
> Here are some other counters:
> Workfiles Created/sec = 6 (min 0, max 50)
> Worktables Created/sec = 4 (min 0, max 28)
> Worktables from Cache ratio = 28 (min 8, max 41)
> Temp Tables Creation Rate = 0
> Temp Tables For Destruction = 0
> --
> I've also been reading about the /3GB boot.ini switch. Could enabling
> this result in SQL Server using more physical memory rather than
> paging?
> I'm still confused as to why there seems to be a lot of paging when
> there is 900mb available? Or should I get the site to bump the memory
> up to 4gb and then use the /3GB switch.
> One last thing. Andrew, you mention a good point:
> "As for disks you can move the files around all you want onto the
> different logical drives but if they are all on the same physical drive
> array it won't matter one bit."
> Is there a way to control this (I personally don't have experience
> managing raid disks so sorry if this is a stupid question)? I'll check
> the links you mentioned below.
> Thanks all for your input so far!
> David
>
> Andrew J. Kelly wrote:
>

davconts@.gmail.com wrote:
> Unfortunately I didn't run that counter when doing the benchmarking
> (after analysing the results I wish I did though!!)
> I can run the counter now, but the db is not being used much at the
> moment (some testing by a few people)
> Our application hasn't gone "live" yet, but we are expecting quite a
> bit of usage, so we want to try and anticipate/sort out any performance
> issues before then.
>
I'm not sure memory is necessarily your problem. True, more would be
better, but generally poor performance is due to inefficient coding,
lack of proper indexes, basic things like that. SQL does a pretty good
job of managing its resources, but really falls down when given bad code.
If I were in your shoes, I would capture some trace logs while your
system is under load, and identify those processes/queries that generate
the most work. Focus on disk reads, as high disk reads are a good
indicator of table or index scans, a prime opportunity for improvement.
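In SQL 2005 the plan cache DMVs give a quick first cut at that without running a trace; a sketch (it only covers plans still in cache):

SELECT TOP 10
       qs.total_logical_reads,
       qs.execution_count,
       SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
                 (CASE WHEN qs.statement_end_offset = -1
                       THEN DATALENGTH(st.text)
                       ELSE qs.statement_end_offset END
                  - qs.statement_start_offset) / 2 + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_logical_reads DESC;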
Tracy McKibben
MCDBA
http://www.realsqlguy.com

Thanks all for your responses so far... greatly appreciated.
Yes I do think we have some work to do in optimising queries and
indexes.. I captured some traces of queries that took more than 2
seconds (some took 3 mins !!!) for the programmers to start looking
into. Maybe with the extra size of this customers database, it has
exposed that the queries need tuning (whereas our other site with a
much smaller database doesn't have as much of a problem). I/O appears
to be a problem.
One more question though. Andrew mentioned above that the standard
version of SQL server can only use 2gb ram.
I read this somewhere else, but wanted to verify it for SQL 2005.
The following link says for standard edition of SQL Server 2005, that
the RAM is "Operating system maximum" (which is also for Enterprise
version):
http://www.microsoft.com/sql/prodin...e-features.mspx
So it appears that standard version of SQL 2005 does not have a set mem
usage restriction (such as SQL 2005 express)?
Thanks!

I am sorry. I was thinking SQL 2000. Yes SQL 2005 Standard edition can use
up to the OS max. That was one of the benefits of upgrading if you had Std
edition.
Andrew J. Kelly SQL MVP
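If the client does add RAM later, the instance's cap is raised with sp_configure; a sketch (the 3072 MB value is just an illustration for a 4 GB box):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 3072;
RECONFIGURE;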
<davconts@.gmail.com> wrote in message
news:1152571874.284923.92510@.35g2000cwc.googlegroups.com...
> Thanks all for your responses so far... greatly appreciated.
> Yes I do think we have some work to do in optimising queries and
> indexes.. I captured some traces of queries that took more than 2
> seconds (some took 3 mins !!!) for the programmers to start looking
> into. Maybe with the extra size of this customers database, it has
> exposed that the queries need tuning (whereas our other site with a
> much smaller database doesn't have as much of a problem). I/O appears
> to be a problem.
> One more question though. Andrew mentioned above that the standard
> version of SQL server can only use 2gb ram.
> I read this somewhere else, but wanted to verify it for SQL 2005.
> The following link says for standard edition of SQL Server 2005, that
> the RAM is "Operating system maximum" (which is also for Enterprise
> version):
> http://www.microsoft.com/sql/prodin...e-features.mspx
> So it appears that standard version of SQL 2005 does not have a set mem
> usage restriction (such as SQL 2005 express)?
> Thanks!
>

No worries
Glad they decided to lift that memory restriction for 2005 standard!
My boss was asking me about the page splits/sec value.
(Page Splits/sec = 355 (average) / (min 0, max 1890))
He believes that the vast majority of the SQL work done by the application
is reading (rather than inserting). Examples of inserts our app
performs might be user audit trails, and adding reports etc.
Now when you are reading (ie running a select query), you would not be
adding to the index yeah? And so you wouldn't be splitting pages..
right? So why would the page splits/sec be elevated?
Or have I got my understanding of page splits incorrect?
Andrew J. Kelly wrote:
> I am sorry. I was thinking SQL 2000. Yes SQL 2005 Standard edition can use
> up to the OS max. That was one of the benefits of upgrading if you had Std
> edition.
> --
> Andrew J. Kelly SQL MVP

Performance and SQL 2005 - need some assistance

Hi,
I'm trying to decide if our client's SQL 2005 server specs are
adequate.
SERVER SPECS:
---
It currently has 3 gig of ram, (SQL 2005 setup for min 1.5gig / max
2gig).
Duel xeons (3 gig cpus i think from memory)
6 drives in RAID 1 + 0 config (pairs are mirrorred then stripped)
The database itself is around 20gig.
The performance numbers below were taken over a 20 minute period where
about 20 people were using our system. The SQL server is dedicated to
our system.
---
The main question the client are asking is if it needs more memory to
improve performance.
I'm pretty new to sql server 2005 performance analysis, so any feedback
would be great.
Here is some performance numbers related to Memory:
Available MBytes free = 900 Mb (average)
Page Faults/sec = 1800 (average)
Pages/sec = 11 (average) / min 0, max 1255 (there was
3 main spikes here)
PageFile usage = 9%
Avg Private Bytes = 1.65 gig (sqlserver)
Working set sqlserver = 1.65 gig
Buffer cache hit ratio = 99%
Checkpoint pages/sec = 0
Lazy write/sec = 16 (max 108)
Page life expectancy = 251 (min 11, max 1200)
Target server memory = 1.61 gig
Total server memory = 1.53 gig
So I am a bit confused here... the available megabytes seems ok (900Mb)
but it seems to be using the page file quite a bit (and thus
contributing to I/O)? Why would this happen?
There are severe spikes (users could be running big queries) but
overall what do you guys think?
I'll also mention the disk I/O. This seems to be an issue from what I
can see.
The database is on d:, which had:
% Disk Time = 744 (min 0, max 13954)
Av. Disk Queue Length = 7 (min 0, max 140)
Avg. Disk sec/Read = 0
Avg Disk sec/Write = 0
The log files are on e:
I am thinking about splitting the page file so it is on c:\ and e:\
(currently only c:).
The tempdb database is also on c:
We seem to be having quite a bit of page splits/sec (which from what I
have read can contribute to I/O):
Page Splits/sec = 355 (min 0, max 1890)
Our indexes have a fill factor of 95 and a rebuilt nightly. Should I
try a fill factor of say 80?
Any suggestions on the above numbers?
(I can also provide other performance stats if required (such as locks
etc), providing I have them)
Thanks for your time!
Daviddavconts@.gmail.com wrote:
> [original post quoted in full - snipped]
What does the Full Scans/sec (under SQL Server:Access Methods) counter
show? As far as fill factor, that depends on how the table is used. A
table that is fairly static should have a high fill factor, while a
table that is frequently inserted into should have a lower fill factor.
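If you want to check those Access Methods counters from T-SQL rather than Perfmon, something along these lines should work on SQL 2005 (the values in this view are cumulative since the instance started, so sample twice and take the difference to get a per-second rate). The ALTER INDEX statements are only a sketch - the table names are placeholders for your own heavily-inserted and mostly-static tables:

-- Cumulative counter values since SQL Server last started
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Access Methods%'
  AND counter_name IN ('Full Scans/sec', 'Page Splits/sec');

-- Lower fill factor for a table with heavy inserts, higher for a static lookup table
ALTER INDEX ALL ON dbo.Orders     REBUILD WITH (FILLFACTOR = 80);
ALTER INDEX ALL ON dbo.Modalities REBUILD WITH (FILLFACTOR = 95);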
Tracy McKibben
MCDBA
http://www.realsqlguy.com|||Unfortunately I didn't run that counter when doing the benchmarking
(after analysing the results I wish I did though!!)
I can run the counter now, but the db is not being used much at the
moment (some testing by a few people)
Our application hasn't gone "live" yet, but we are expecting quite a
bit of usage, so we want to try and anticipate/sort out any performance
issues before then.|||David,
You haven't told us anything about what you were doing during the time
period being monitored. How many transactions, and of what type, occurred
during this time? How busy were the CPUs? And most importantly, what kind of
performance issues are you having? As for disks, you can move the files
around all you want onto the different logical drives, but if they are all on
the same physical drive array it won't matter one bit. If you are doing a
lot of writes or tempdb activity you may want to think about splitting up
that single RAID into several and separating the log files (including
tempdb) from the rest of the files - see the sketch after the links below.
These might be worth having a look at:
http://www.sql-server-performance.com/sql_server_performance_audit10.asp
Performance Audit
http://www.microsoft.com/technet/prodtechnol/sql/2005/library/operations.mspx
Performance WP's
http://www.swynk.com/friends/vandenberg/perfmonitor.asp Perfmon counters
http://www.sql-server-performance.com/sql_server_performance_audit.asp
Hardware Performance CheckList
http://www.sql-server-performance.com/best_sql_server_performance_tips.asp
SQL 2000 Performance tuning tips
http://www.support.microsoft.com/?id=224587 Troubleshooting App
Performance
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/adminsql/ad_perfmon_24u1.asp
Disk Monitoring
http://sqldev.net/misc/WaitTypes.htm Wait Types
Tempdb in 2005:
http://download.microsoft.com/download/4/f/8/4f8f2dc9-a9a7-4b68-98cb-163482c95e0b/WorkingWithTempDB.doc
Physical Database Storage:
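And, as a rough sketch of what moving tempdb to another drive would look like (the drive letters and paths below are placeholders - and as noted above, it only buys you anything if the target is a separate physical spindle set, not just another logical drive on the same array):

-- Check the current logical file names and locations first
EXEC tempdb..sp_helpfile;

-- Point tempdb's files at the new location; takes effect after the
-- SQL Server service is restarted
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = N'E:\SQLData\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = N'F:\SQLLogs\templog.ldf');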
--
Andrew J. Kelly SQL MVP
<davconts@.gmail.com> wrote in message
news:1152490147.388959.285070@.m73g2000cwd.googlegroups.com...
> [original post quoted in full - snipped]|||Thanks Andrew, I'll try and clarify.
Here is a bit more info.
Our application is a medical software application - think of it as a
radiology management system where a patient's reports, bookings,
accounting etc. are stored in our system.
So the queries do all sorts of lookups for patients, reports etc.
In the benchmark, we had about 20 people hit the app (and thus SQL 2005
indirectly) running all sorts of scenarios.
Our main concern was general slowness in the application. There were
points where it performed OK, whilst at other times it seemed to hang or
become very slow (there are a few out-of-control queries submitted by
users; I have captured all the "long" queries for more analysis). We have
the same application running at another hospital, which has 30 users on
at one time, and they don't experience as much slowness. Their db is
smaller though (10 GB vs 20 GB). Hence we are investigating how to get
their SQL Server working as well as it can.
Here are some of the workload stats:
% Processor Time (TOTAL) = 59 (average) / (min 1, max 100)
Batch Requests/sec = 224 (average) / (min 0, max 1363)
SQL Compilations/sec = 5 (average) / (min 0, max 44)
SQL Re-Compilations/sec = 0
The CPU usage seemed OK, although there were occasional spikes to 90+.
Locks info:
Average Latch Wait Time (ms) = 5 (average) / (min 0, max 57)
Latch Waits/sec = 110 (average) / (min 0, max 533)
Total Latch Wait Time (ms) = 227 (average) / (min 0, max 2348)
Average Wait Time (ms) = 19 (average) / (min 0, max 1813)
Lock Requests/sec = 102986 (average) / (min 0,
max 609802)
Lock Timeouts/sec = 0
Lock Wait Time (ms) = 4 (average) / (min 0, max
372)
Lock Waits/sec = 0
Tempdb info (currently 1 file, with initial size of 10gb):
Log Bytes Flushed/sec = 41 (min 0, max 12287)
Log File(S) size (KB) = 2047992 (min 2047992, max
2047992)
Log File(s) used size (KB) = 1385 (min 1358, max 1414)
Log Flush Waits/sec = 0
Free space in tempdb (KB) = 9600781
Version Cleanup rate (KB/s) = 0
Version Generation rate (KB/s) = 0
Here are some other counters:
Workfiles Created/sec = 6 (min 0, max 50)
Worktables Created/sec = 4 (min 0, max 28)
Worktables from Cache ratio = 28 (min 8, max 41)
Temp Tables Creation Rate = 0
Temp Tables For Destruction = 0
--
I've also been reading about the /3GB boot.ini switch. Could enabling
this result in SQL Server using more physical memory rather than
paging?
I'm still confused as to why there seems to be a lot of paging when
there is 900 MB available. Or should I get the site to bump the memory
up to 4 GB and then use the /3GB switch?
One last thing. Andrew, you mention a good point:
"As for disks you can move the files around all you want onto the
different logical drives but if they are all on the same physical drive
array it won't matter one bit."
Is there a way to control this (I personally don't have experience
managing raid disks so sorry if this is a stupid question)? I'll check
the links you mentioned below.
Thanks all for your input so far!
David
Andrew J. Kelly wrote:
> [Andrew's reply and the quoted original post snipped]|||David,
Thanks for the extra info. If it is a single physical array then any
activity on one logical drive will potentially affect the other logical
drives as well. This is because it is a single unit in terms of disks and
the number of heads. The only way to avoid that is to use separate physical
drives. Sometimes adding more drives to a RAID 10 will help since it spreads
the load more. But logging, tempdb activity and data access are always
counterproductive to each other. How much depends on the volume and drive
configurations. My guess is that you have a lot of untuned queries or tables
that are lacking proper indexes. When you run Profiler, don't just look for
the queries that take the longest time; pay attention to the number of
reads as well (the query below is one quick way to see the heaviest
statements server-side). If you have a lot of queries with a lot of reads you
will most likely have to go to disk to satisfy some of them. That will lead to
increased blocking etc. For only 30 users your batch requests seem sort of
high for an app such as this. It sounds like it may be a bit chatty and
could use some optimization as well. Pages/sec is the one you really want
to pay attention to, more than the page faults, since page faults include a
lot of soft faults as well. In your case it does seem to be a little higher
than normal if the conditions are as you say, but not so excessive that I
would put it at the top of the priority list. I would find and tune the
poorly performing queries first. And as to adding more RAM: what edition of
OS and SQL Server are you running? If it is Standard Edition of SQL you can
only use 2 GB anyway, so adding an additional GB will not help. One last
thing: are you sure there are no other apps or services running on that
machine?
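For the reads, something along these lines against the SQL 2005 DMVs gives a quick ranking of cached statements by logical reads (it only covers plans still in cache, so treat it as a complement to the Profiler trace, not a replacement):

SELECT TOP 20
    qs.total_logical_reads,
    qs.execution_count,
    qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
    st.text AS batch_text   -- text of the whole batch containing the statement
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_logical_reads DESC;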
--
Andrew J. Kelly SQL MVP
<davconts@.gmail.com> wrote in message
news:1152498395.129036.45440@.35g2000cwc.googlegroups.com...
> [David's reply and earlier quoted text snipped]|||davconts@.gmail.com wrote:
> [quoted text snipped]
I'm not sure memory is necessarily your problem. True, more would be
better, but generally poor performance is due to inefficient coding,
lack of proper indexes, basic things like that. SQL does a pretty good
job of managing its resources, but really falls down when given bad code.
If I were in your shoes, I would capture some trace logs while your
system is under load, and identify those processes/queries that generate
the most work. Focus on disk reads, as high disk reads are a good
indicator of table or index scans, a prime opportunity for improvement.
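One quick server-side check, while the trace is running, is the index usage DMV - tables with lots of scans and few seeks are usually the ones worth looking at first (this view resets when the instance restarts, so the numbers only reflect activity since then):

SELECT OBJECT_NAME(us.object_id) AS table_name,
       i.name AS index_name,
       us.user_seeks, us.user_scans, us.user_lookups, us.user_updates
FROM sys.dm_db_index_usage_stats AS us
JOIN sys.indexes AS i
  ON i.object_id = us.object_id AND i.index_id = us.index_id
WHERE us.database_id = DB_ID()
ORDER BY us.user_scans DESC;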
Tracy McKibben
MCDBA
http://www.realsqlguy.com|||Thanks all for your responses so far... greatly appreciated.
Yes, I do think we have some work to do in optimising queries and
indexes. I captured some traces of queries that took more than 2
seconds (some took 3 mins!!!) for the programmers to start looking
into. Maybe the extra size of this customer's database has
exposed that the queries need tuning (whereas our other site, with a
much smaller database, doesn't have as much of a problem). I/O appears
to be a problem.
One more question though. Andrew mentioned above that the Standard
edition of SQL Server can only use 2 GB of RAM.
I read this somewhere else, but wanted to verify it for SQL 2005.
The following link says that for the Standard edition of SQL Server 2005
the RAM limit is "Operating system maximum" (the same as for the Enterprise
edition):
http://www.microsoft.com/sql/prodinfo/features/compare-features.mspx
So it appears that the Standard edition of SQL 2005 does not have a set
memory usage restriction (unlike SQL 2005 Express)?
Thanks!|||I am sorry. I was thinking SQL 2000. Yes SQL 2005 Standard edition can use
up to the OS max. That was one of the benefits of upgrading if you had Std
edition.
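If you do add RAM, remember that the instance is currently capped at 2 GB by its own configuration (the original post mentions min 1.5 GB / max 2 GB), so you would also want to raise max server memory. A sketch - the 2560 figure is just an example value for a 4 GB box:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Raise the cap so SQL Server can actually use the extra RAM
EXEC sp_configure 'max server memory (MB)', 2560;
RECONFIGURE;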
--
Andrew J. Kelly SQL MVP
<davconts@.gmail.com> wrote in message
news:1152571874.284923.92510@.35g2000cwc.googlegroups.com...
> [quoted text snipped]|||No worries :)
Glad they decided to lift that memory restriction for 2005 standard!
My boss was asking me about the page splits/sec value.
(Page Splits/sec = 355 (average) / min 0, max 1890)
He believes that the vast majority of the SQL work done by the application
is reading (rather than inserting). Examples of inserts our app
performs might be user audit trails, adding reports, etc.
Now when you are reading (i.e. running a SELECT query), you would not be
adding to the index, yeah? And so you wouldn't be splitting pages,
right? So why would the page splits/sec be elevated?
Or have I got my understanding of page splits incorrect?
Andrew J. Kelly wrote:
> [quoted text snipped]|||That is a fair amount of page splits for an app that is doing mostly reads.
But oftentimes you will find that a poorly tuned app does more writes than
you think should be happening. Is there any chance you have lots of GUIDs?
Putting indexes on such columns (especially clustered indexes) can cause a lot
of page splits if the fill factors are not correct. A trace should tell you
pretty quickly how many inserts or updates you are doing. And don't forget about
triggers that may be doing extra work you may not be aware of.
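Two quick things to look at from T-SQL (both are just sketches against the 2005 DMVs; the leaf allocation count is only a rough proxy for splits, and it resets when the instance restarts):

-- Which indexes are taking the inserts and the new page allocations
SELECT OBJECT_NAME(ios.object_id) AS table_name,
       i.name AS index_name,
       ios.leaf_insert_count,
       ios.leaf_update_count,
       ios.leaf_allocation_count
FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) AS ios
JOIN sys.indexes AS i
  ON i.object_id = ios.object_id AND i.index_id = ios.index_id
ORDER BY ios.leaf_allocation_count DESC;

-- Any enabled table triggers that might be doing hidden writes
SELECT OBJECT_NAME(parent_id) AS table_name, name AS trigger_name
FROM sys.triggers
WHERE parent_class = 1 AND is_disabled = 0;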
--
Andrew J. Kelly SQL MVP
<davconts@.gmail.com> wrote in message
news:1152583191.214872.61670@.p79g2000cwp.googlegroups.com...
> [quoted text snipped]

Monday, March 12, 2012

Performance and locking problems - urgent

Dear Gurus,

A client has the following problems/requests for their production
databases. What is your professional/practical advice for tackling and
resolving these issues?

1) Number of transactions per day - The current database can currently
handle about 5000 order transactions per day.

2) Table locking - When doing row updates SQL Server will lock the
whole table. For example, after a few instances of a driver updating an
order record it will lock the entire orders table.

3) Reports - When a user runs a report involving big quantities of
order-related data, other processes will suffer occasional timeouts.

4) Archiving - There is no efficient way of archiving historical system
data.

David McGeorge (soalvajavab1@.yahoo.com) writes:
> A Client has the following problems/requests for their Production
> databases, what is your professional/practical advises to tackle and
> resolve these issues:

I'm afraid that your questions are far too open-ended for it to be
possible to give an exhaustive answer in a newsgroup post. Had I been
a consultant, I would have been tempted to refer you to the web site for
my business. Judging from your mail address you're a web programmer,
and from your questions it appears that your client is in need of someone
with SQL expertise.

> 1) Number of transactions per day - The current database can currently
> handle about 5000 order transactions per day

What do you mean by "can"? Is 5000 transactions/day the actual load?
Or is the database at the maximum of its capacity with that rate? I
would assume the former, since 5000 transactions/day is a low number,
unless the transactions are extremely complex.

> 2) Table locking - When doing row updates SQL Server will lock the
> whole table. For example, after a few instances of a driver updating an
> order record it will lock the entire orders table

This sounds like the indexing of the database needs to be improved.
If you say:

UPDATE tbl
SET col = 23
WHERE othercol = 12

And there is no index on othercol, SQL Server will have no choice but
to lock the entire table.
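With an index on the search column, the update can touch only the rows it needs. A minimal sketch, using the same hypothetical table as above:

-- Lets the optimizer seek to othercol = 12 instead of scanning the table
CREATE NONCLUSTERED INDEX ix_tbl_othercol ON tbl (othercol);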

One tool to improve indexing in the database is the Index Tuning Wizard.
You give it a day's workload, and it will suggest suitable indexes to add.

> 3) Reports - When a user runs a report involving big quantities of
> order-related data, other processes will suffer occasional timeouts

Reports are often a problem in OLTP databases, because they need to scan
lots of data. Indexing can help, but if users can build their own reports,
or there are report functions with lots of selection criteria, this
may not be feasible, not least if the users can accept that the
odd report takes 20-30 minutes to run.

Therefore many sites set up a report database, which is a copy of the
live database. Depending on the requirements, the report database can
be loaded from yesterday's backup, or you could keep it updated with
log shipping or replication. The latter is required if users must have
today's data in their reports, but it takes more effort to get there.
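For the simplest variant - reloading the report database from the previous night's backup - the job boils down to something like this (database names, logical file names and paths are placeholders; run RESTORE FILELISTONLY against your backup to get the real logical names):

-- See what logical files the backup contains
RESTORE FILELISTONLY FROM DISK = N'E:\Backups\Orders_full.bak';

-- Overwrite the report copy with last night's full backup
RESTORE DATABASE OrdersReport
FROM DISK = N'E:\Backups\Orders_full.bak'
WITH MOVE N'Orders_Data' TO N'E:\ReportData\OrdersReport.mdf',
     MOVE N'Orders_Log'  TO N'E:\ReportData\OrdersReport_log.ldf',
     REPLACE;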

> 4) Archiving - There is no efficient way of archiving historical system
> data

This point is completely impossible to address without knowing the
business requirements.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp

Friday, March 9, 2012

Performance - Database apps, connections, and C#

I have read that making connections is very expensive, and I have read that keeping connections open is very expensive.
I have a windows client/server App. There will typically be 3-5 client users accessing the system at any given time, all on a local area network. When they are using the system, they might use it for an hour or more. While viewing the data, they will make frequent requests from the database. (In some cases, the screens they are viewing will automatically refresh, by sending a new query to the database.) In other words, each client might request new data every 5 seconds or so.
Also, there is a service running that automatically monitors machines, and puts the recorded data into a database, resulting in a database update once per second from each machine. There are anywhere from 4 to 100 machines.
My question is whether I would be better off to create a database connection at the time the user logs on, and maintain that connection until he logs off, or would it be better to create and use a new connection for every query. Using a new connection for every query, I get the overhead associated with new connections. Keeping the connection alive the whole time, I get the overhead associated with maintaining a connection.
Likewise, with the machine monitoring connections, would I be better off opening a connection for each machine, and keeping it alive as long as the monitoring continued, or would I be better off creating and disposing of it frequently?
In case it matters, this is all using Visual C# with ADO.net.
Generally, most apps are written today to close the connections when you're
done with them. Web applications especially open and close connections
very frequently. The client will use connection pooling to reduce the
overhead of establishing brand new connections so the perf hit isn't so bad.
Most performance gains are achieved by tuning the queries you're sending to
the server, etc., and good database design.
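If the back end is SQL Server 2005 or later, one way to sanity-check the pooling behaviour from the server side is to count sessions per application - with pooling and a 3-5 user LAN app you should see a small, fairly stable number:

SELECT program_name, host_name, COUNT(*) AS sessions
FROM sys.dm_exec_sessions
WHERE is_user_process = 1
GROUP BY program_name, host_name
ORDER BY sessions DESC;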
Thanks,
Kevin McDonnell
Microsoft Corporation
This posting is provided AS IS with no warranties, and confers no rights.