Tuesday, March 20, 2012

Performance and SQL 2005 - need some assistance

Hi,
I'm trying to decide if our client's SQL 2005 server specs are
adequate.
SERVER SPECS:
---
It currently has 3 GB of RAM (SQL 2005 is configured for min 1.5 GB / max
2 GB).
Dual Xeons (3 GHz CPUs, I think, from memory)
6 drives in a RAID 1+0 config (pairs are mirrored, then striped)
The database itself is around 20 GB.
The performance numbers below were taken over a 20-minute period during
which about 20 people were using our system. The SQL server is dedicated to
our system.
---
The main question the client is asking is whether the server needs more
memory to improve performance.
I'm pretty new to sql server 2005 performance analysis, so any feedback
would be great.
Here are some performance numbers related to memory:
Available MBytes free = 900 MB (average)
Page Faults/sec = 1800 (average)
Pages/sec = 11 (average) / min 0, max 1255 (there were 3 main spikes here)
PageFile usage = 9%
Avg Private Bytes = 1.65 GB (sqlserver)
Working set sqlserver = 1.65 GB
Buffer cache hit ratio = 99%
Checkpoint pages/sec = 0
Lazy writes/sec = 16 (max 108)
Page life expectancy = 251 (min 11, max 1200)
Target server memory = 1.61 GB
Total server memory = 1.53 GB
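For what it's worth, the SQL-side counters above can also be pulled from
inside the server via sys.dm_os_performance_counters - something like this
sketch (the OS counters such as Available MBytes and Page Faults/sec aren't
in that view):

-- Sketch: SQL Server-side memory counters (SQL 2005).
-- The "(KB)" counters come back in KB; Buffer cache hit ratio needs its
-- companion "base" counter if you want the true percentage.
SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Buffer cache hit ratio',
                       'Page life expectancy',
                       'Lazy writes/sec',
                       'Checkpoint pages/sec',
                       'Target Server Memory (KB)',
                       'Total Server Memory (KB)');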
So I am a bit confused here... the available memory seems ok (900 MB),
but the server seems to be using the page file quite a bit (and thus
contributing to I/O). Why would this happen?
There are some severe spikes (users could be running big queries), but
overall what do you guys think?
I'll also mention the disk I/O. This seems to be an issue from what I
can see.
The database is on d:, which had:
% Disk Time = 744 (min 0, max 13954)
Av. Disk Queue Length = 7 (min 0, max 140)
Avg. Disk sec/Read = 0
Avg Disk sec/Write = 0
The log files are on e:
I am thinking about splitting the page file so it is on c:\ and e:\
(currently only c:).
The tempdb database is also on c:
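(If we do move tempdb off c:, my understanding is it's just declared ahead of
time and takes effect after a service restart - a sketch, with made-up paths
and the default logical file names:)

-- Sketch: relocate tempdb (paths are made up; assumes the default
-- logical names tempdev/templog). Takes effect at the next service restart.
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'E:\SQLData\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'E:\SQLLogs\templog.ldf');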
We seem to be having quite a lot of page splits (which from what I
have read can contribute to I/O):
Page Splits/sec = 355 (min 0, max 1890)
Our indexes have a fill factor of 95 and are rebuilt nightly. Should I
try a fill factor of, say, 80?
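(For context, the nightly job is essentially an index rebuild, so trying a
lower fill factor would just mean a change along these lines - table and
index names here are made up:)

-- Sketch: rebuild with a lower fill factor (hypothetical names).
ALTER INDEX IX_Patient_Name ON dbo.Patient
REBUILD WITH (FILLFACTOR = 80);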
Any suggestions on the above numbers?
(I can also provide other performance stats if required (such as locks
etc), providing I have them)
Thanks for your time!
David
|||What does the Full Scans/sec (under SQL Server:Access Methods) counter
show? As far as fill factor, that depends on how the table is used. A
table that is fairly static should have a high fill factor, while a
table that is frequently inserted into should have a lower fill factor.
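If setting up Perfmon again is a hassle, you can pull the same counter from
inside SQL Server - a quick sketch (these values are cumulative since
startup, so sample twice and diff them to get a per-second rate):

-- Sketch: Access Methods counters via the DMV (cumulative, not per-second).
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%:Access Methods%'
  AND counter_name IN ('Full Scans/sec', 'Page Splits/sec');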
Tracy McKibben
MCDBA
http://www.realsqlguy.com|||Unfortunately I didn't run that counter when doing the benchmarking
(after analysing the results I wish I did though!!)
I can run the counter now, but the db is not being used much at the
moment (some testing by a few people)
Our application hasn't gone "live" yet, but we are expecting quite a
bit of usage, so we want to try and anticipate/sort out any performance
issues before then.|||David,
You haven't told us anything about what you were doing during the time
period being monitored. How many transactions, and of what type, occurred
during this time? How busy were the CPUs? And most importantly, what kind of
performance issues are you having? As for disks, you can move the files
around all you want onto the different logical drives, but if they are all on
the same physical drive array it won't matter one bit. If you are doing a
lot of writes or tempdb activity you may want to think about splitting up
that single RAID array into several and separating the log files (including
tempdb) from the rest of the files. These might be worth having a look at:
http://www.sql-server-performance.com/sql_server_performance_audit10.asp
Performance Audit
http://www.microsoft.com/technet/prodtechnol/sql/2005/library/operations.mspx
Performance WP's
http://www.swynk.com/friends/vandenberg/perfmonitor.asp Perfmon counters
http://www.sql-server-performance.com/sql_server_performance_audit.asp
Hardware Performance CheckList
http://www.sql-server-performance.com/best_sql_server_performance_tips.asp
SQL 2000 Performance tuning tips
http://www.support.microsoft.com/?id=224587 Troubleshooting App
Performance
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/adminsql/ad_perfmon_24u1.asp
Disk Monitoring
http://sqldev.net/misc/WaitTypes.htm Wait Types
Tempdb in 2005:
http://download.microsoft.com/download/4/f/8/4f8f2dc9-a9a7-4b68-98cb-163482c95e0b/WorkingWithTempDB.doc
Physical Database Storage:
--
Andrew J. Kelly SQL MVP
|||Thanks Andrew, I'll try and clarify.
Here is a bit more info.
Our application is a medical software application - think of it as a
radiology management system where a patient's reports, bookings,
accounting, etc. are stored in our system.
So the queries do all sorts of lookups for patients, reports etc.
In the benchmark, we had about 20 people hit the app (and thus SQL 2005
indirectly) running all sorts of scenarios.
Our main concern was general slowness in the application. There were
points where it performed ok, whilst at other times it seemed to hang/become
very slow (there are a few out-of-control queries submitted by users; I
have captured all the "long" queries for more analysis). We have the
same application running at another hospital, which has 30 users on at
one time, and they don't experience as much slowness. Their db is
smaller though (10 GB vs 20 GB). Hence we are investigating how to get
their SQL server working as best it can.
Here are some of the workload stats:
% Processor Time (TOTAL) = 59 (average) / (min 1, max 100)
Batch Requests/sec = 224 (average) / (min 0, max 1363)
SQL Compilations/sec = 5 (average) / (min 0, max 44)
SQL Re-Compilations/sec = 0
The CPU usage seemed ok, although there were occasional spikes to 90+.
Locks info:
Average Latch Wait Time (ms) = 5 (average) / (min 0, max 57)
Latch Waits/sec = 110 (average) / (min 0, max 533)
Total Latch Wait Time (ms) = 227 (average) / (min 0, max 2348)
Average Wait Time (ms) = 19 (average) / (min 0, max 1813)
Lock Requests/sec = 102986 (average) / (min 0, max 609802)
Lock Timeouts/sec = 0
Lock Wait Time (ms) = 4 (average) / (min 0, max 372)
Lock Waits/sec = 0
Tempdb info (currently 1 file, with an initial size of 10 GB):
Log Bytes Flushed/sec = 41 (min 0, max 12287)
Log File(s) size (KB) = 2047992 (min 2047992, max 2047992)
Log File(s) used size (KB) = 1385 (min 1358, max 1414)
Log Flush Waits/sec = 0
Free space in tempdb (KB) = 9600781
Version Cleanup rate (KB/s) = 0
Version Generation rate (KB/s) = 0
Here are some other counters:
Workfiles Created/sec = 6 (min 0, max 50)
Worktables Created/sec = 4 (min 0, max 28)
Worktables from Cache ratio = 28 (min 8, max 41)
Temp Tables Creation Rate = 0
Temp Tables For Destruction = 0
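(If it would help, I can also pull the cumulative wait stats - I was planning
on something along these lines, just a sketch:)

-- Sketch: top waits since the last restart (SQL 2005).
SELECT TOP 10 wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;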
--
I've also been reading about the /3GB boot.ini switch. Could enabling
this result in SQL Server using more physical memory rather than
paging?
I'm still confused as to why there seems to be a lot of paging when
there is 900 MB available. Or should I get the site to bump the memory
up to 4 GB and then use the /3GB switch?
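(If the site does go to 4 GB, I assume the SQL-side change is just raising
max server memory with sp_configure - a sketch, with the value picked more
or less out of the air:)

-- Sketch: raise max server memory after adding RAM (the value is a guess).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 2560;
RECONFIGURE;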
One last thing. Andrew, you mention a good point:
"As for disks you can move the files around all you want onto the
different logical drives but if they are all on the same physical drive
array it won't matter one bit."
Is there a way to control this (I personally don't have experience
managing RAID disks, so sorry if this is a stupid question)? I'll check
the links you mentioned.
Thanks all for your input so far!
David
|||David,
Thanks for the extra info. If it is a single physical array then any
activity on one logical drive will potentially affect the other logical
drives as well. This is because it is a single unit in terms of disks and
the number of heads. The only way to avoid that is to use separate physical
drives. Sometimes adding more drives to a raid 10 will help since it spreads
the load more. But logging and tempdb activity and data access are always
counter productive to each other. How much depends on the volume and drive
configurations. My guess is that you have a lot of untuned queries or tables
that are lacking proper indexes. When you run profiler don't just look for
the queries that take the longest time but pay attention to the number of
reads as well. If you have a lot of queries with a lot of reads you will
most likely have to go to disk to satisfy some of them. That will lead to
increased blocking etc. For only 30 users your batch requests seem sort of
high for an app such as this. It sounds like it may be a bit chatty and
could use some optimization as well. Pages/sec is the one you
really want to pay attention to, more than Page Faults/sec, since page faults
include a lot of soft faults as well. In your case it does seem to be a
little higher than normal if the conditions are as you say, but not so
excessive that I would put it at the top of the priority list. I would find and
tune the poorly performing queries first. And as to adding more RAM: what
edition of OS and SQL Server are you running? If it is Standard Edition of
SQL you can only use 2GB anyway so adding an additional GB will not help.
One last thing. Are you sure there are no other apps or services running on
that machine?
--
Andrew J. Kelly SQL MVP
|||I'm not sure memory is necessarily your problem. True, more would be
better, but generally poor performance is due to inefficient coding,
lack of proper indexes, basic things like that. SQL does a pretty good
job of managing its resources, but really falls down when given bad code.
If I were in your shoes, I would capture some trace logs while your
system is under load, and identify those processes/queries that generate
the most work. Focus on disk reads, as high disk reads are a good
indicator of table or index scans, a prime opportunity for improvement.
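If a full trace is too heavy to leave running, the 2005 DMVs will get you
most of the way there. Something like this sketch lists the heaviest readers
since the last restart (the SUBSTRING arithmetic is the usual
statement-extraction pattern):

-- Sketch: top statements by logical reads (cumulative since restart).
SELECT TOP 20
    qs.total_logical_reads,
    qs.execution_count,
    SUBSTRING(st.text, qs.statement_start_offset/2 + 1,
        (CASE WHEN qs.statement_end_offset = -1
              THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset END
         - qs.statement_start_offset)/2 + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_logical_reads DESC;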
Tracy McKibben
MCDBA
http://www.realsqlguy.com|||Thanks all for your responses so far... greatly appreciated.
Yes, I do think we have some work to do in optimising queries and
indexes. I captured some traces of queries that took more than 2
seconds (some took 3 mins!!!) for the programmers to start looking
into. Maybe the extra size of this customer's database has
exposed that the queries need tuning (whereas our other site, with a
much smaller database, doesn't have as much of a problem). I/O appears
to be a problem.
One more question though. Andrew mentioned above that the Standard
edition of SQL Server can only use 2 GB of RAM.
I read this somewhere else too, but wanted to verify it for SQL 2005.
The following link says that for Standard edition of SQL Server 2005,
the RAM limit is "Operating system maximum" (which is also the case for
Enterprise edition):
http://www.microsoft.com/sql/prodinfo/features/compare-features.mspx
So it appears that Standard edition of SQL 2005 does not have a set memory
usage restriction (such as SQL 2005 Express has)?
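(For my own sanity I'll also double-check what the site is actually running -
a quick sketch:)

-- Sketch: confirm edition and version on the server in question.
SELECT SERVERPROPERTY('Edition')        AS edition,
       SERVERPROPERTY('ProductVersion') AS product_version,
       SERVERPROPERTY('ProductLevel')   AS product_level;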
Thanks!|||I am sorry. I was thinking SQL 2000. Yes SQL 2005 Standard edition can use
up to the OS max. That was one of the benefits of upgrading if you had Std
edition.
--
Andrew J. Kelly SQL MVP
|||No worries :)
Glad they decided to lift that memory restriction for 2005 standard!
My boss was asking me about the Page Splits/sec value
(Page Splits/sec = 355 (average) / (min 0, max 1890)).
He believes that the vast majority of the SQL work done by the application
is reading (rather than inserting). Examples of inserts our app
performs might be user audit trails, adding reports, etc.
Now when you are reading (i.e. running a SELECT query), you would not be
adding to the index, yeah? And so you wouldn't be splitting pages,
right? So why would the page splits/sec be elevated?
Or have I got my understanding of page splits wrong?
|||That is a fair amount of page splits for an app that is doing mostly reads.
But oftentimes you will find that a poorly tuned app does more writes than
you think should be happening. Is there any chance you have lots of GUIDs?
Putting indexes on such columns (especially clustered indexes) can cause a lot of
page splits if the fill factors are not correct. A trace should tell pretty
quickly how many inserts or updates you are doing. And don't forget about
triggers that may be doing extra work you may not be aware of.
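As a quick check for the GUID question, something along these lines (just a
sketch against the 2005 catalog views) will list any clustered indexes whose
leading key column is a uniqueidentifier:

-- Sketch: clustered indexes led by a uniqueidentifier column, a common
-- cause of page splits when rows arrive in random key order.
SELECT OBJECT_NAME(i.object_id) AS table_name,
       i.name                   AS index_name,
       c.name                   AS leading_column
FROM sys.indexes AS i
JOIN sys.index_columns AS ic
  ON ic.object_id = i.object_id AND ic.index_id = i.index_id AND ic.key_ordinal = 1
JOIN sys.columns AS c
  ON c.object_id = ic.object_id AND c.column_id = ic.column_id
JOIN sys.types AS t
  ON t.user_type_id = c.user_type_id
WHERE i.type = 1
  AND t.name = 'uniqueidentifier';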
--
Andrew J. Kelly SQL MVP