Monday, March 12, 2012

performance + filegroups

My company uses MS SQL Server 2000 and the database is already 45 GB.
Performance has slowed down and we are thinking of using secondary data
files:
in the primary filegroup, system tables
in the SecondaryA filegroup, user tables (these two on the same physical disk)
in the SecondaryB filegroup, the indexes of user tables (on a different
physical disk)
Would this increase performance? (A sketch of the layout we have in mind is
below.)
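For reference, that layout would be created along these lines (a minimal
sketch; the database name, file paths, and sizes are only placeholders):

-- SecondaryA: user tables, on the same physical disk as PRIMARY
ALTER DATABASE MyDb ADD FILEGROUP SecondaryA
ALTER DATABASE MyDb ADD FILE
    (NAME = 'MyDb_SecA1', FILENAME = 'D:\Data\MyDb_SecA1.ndf', SIZE = 20480MB)
TO FILEGROUP SecondaryA

-- SecondaryB: nonclustered indexes, on the second physical disk
ALTER DATABASE MyDb ADD FILEGROUP SecondaryB
ALTER DATABASE MyDb ADD FILE
    (NAME = 'MyDb_SecB1', FILENAME = 'E:\Data\MyDb_SecB1.ndf', SIZE = 20480MB)
TO FILEGROUP SecondaryB
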
> would this increase performance?
Yes, it would.
"nikolakg" <nikolakg@.discussions.microsoft.com> wrote in message
news:B4E07E76-07B8-4DAD-B8D0-03FF6762C662@.microsoft.com...

Is there a way to move user tables from the primary filegroup to SecondaryA,
all together? Because in a test database I did it manually, one table at a
time.
The user "Uri Dimant" wrote:

> Yes, it would.

If you have a clustered index on the user table (a clustered index is the
actual data), recreate the clustered index on the secondary filegroup and
SQL Server will move all the data to that filegroup.
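For a single table, a minimal sketch (the table, index, and column names are
hypothetical; this assumes the clustered index is a plain index rather than a
PRIMARY KEY constraint, which would instead have to be dropped and re-added
with the new filegroup):

-- Rebuilding the clustered index on SecondaryA moves the table's data pages with it
CREATE CLUSTERED INDEX IX_Orders_OrderDate
ON dbo.Orders (OrderDate)
WITH DROP_EXISTING
ON [SecondaryA]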
"nikolakg" <nikolakg@.discussions.microsoft.com> wrote in message
news:DE93DB7C-B961-4F6B-8AEE-37E21B714213@.microsoft.com...
> Is there a way to move user tables from the primary filegroup to SecondaryA,
> all together? Because in a test database I did it manually, one table at a
> time.

We have nearly 400 user tables in the database, and a lot of clustered and
nonclustered indexes, all in the primary filegroup. You suggest that I drop
all clustered indexes and recreate them on the secondary filegroup? But I
would still have to do it for every table.
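One way to at least build the worklist is to enumerate every user table's
clustered index and its key columns from the SQL 2000 system tables (a
sketch; indid = 1 marks the clustered index):

-- One row per (table, clustered index, key column), in key order
SELECT o.name AS table_name,
       i.name AS index_name,
       INDEX_COL(o.name, i.indid, k.keyno) AS key_column,
       k.keyno AS key_position
FROM sysobjects o
JOIN sysindexes i ON i.id = o.id AND i.indid = 1
JOIN sysindexkeys k ON k.id = o.id AND k.indid = i.indid
WHERE o.type = 'U'
ORDER BY o.name, k.keyno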
The user "Uri Dimant" wrote:

> If you have a clustered index on the user table (a clustered index is the
> actual data), recreate the clustered index on the secondary filegroup and
> SQL Server will move all the data to that filegroup.

I wouldn't be so sure. First, the performance slowdown the poster noted
would have to be attributable to physical disk reads on the data files for
this to potentially help. If 95% of the waits causing the apparent slowdown
are CPU or locking-related, then a 100% improvement in physical disk IO
would only result in a 2.5% improvement in overall response time.
Second, in order to increase performance by adding additional filegroups you
have to get the "right" objects moved to the other filegroup. The
difficulty in determining the "right" objects lies in the fact that your
most commonly used database pages will be cached in server memory. So if
you have your indexes on a second disk, you risk idling that disk because
your index reads are heavily cached.
You might consider adding additional files to your primary filegroup and
placing them on the other volume. This will spread IO automatically across
the disks without you having to guess at which objects belong where.
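For example (the file name, path, and size are hypothetical):

-- Add a second file to the PRIMARY filegroup, placed on the other disk
ALTER DATABASE MyDb ADD FILE
    (NAME = 'MyDb_Data2', FILENAME = 'E:\Data\MyDb_Data2.ndf', SIZE = 20480MB)
TO FILEGROUP [PRIMARY]

Note that proportional fill only spreads new allocations; existing pages stay
on the first file until the indexes are rebuilt.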
David
<DIV>"Uri Dimant" <urid@.iscar.co.il> wrote in message
news:Op7SIp$2GHA.1256@.TK2MSFTNGP02.phx.gbl...</DIV>>> would this increase
performance?
> Yes , it would
>
> "nikolakg" <nikolakg@.discussions.microsoft.com> wrote in message
> news:B4E07E76-07B8-4DAD-B8D0-03FF6762C662@.microsoft.com...
>

I second that. Adding a second physical disk and using it for SQL Server
files will increase the available I/O per second (assuming your disk
controller supports parallel I/O). But if that is not your bottleneck,
then you would be wasting money. Check the average disk queue length
during performance problems. If it is consistently over 4 (or at least
over 1), then you are lacking I/O.
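On Windows Server 2003 or later you can also sample that counter from the
command line instead of the Performance Monitor GUI, e.g. every 5 seconds:

typeperf "\PhysicalDisk(_Total)\Avg. Disk Queue Length" -si 5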
If I/O is your primary bottleneck, then there are basically two
approaches to using the two I/O streams most efficiently:
1. SAME: Stripe And Mirror Everything. This would require that the
second disk has the same (or larger) capacity as the first, and you
create a stripeset. This will evenly distribute all data of the first
disk over the two disks.
2. Use the second disk only for a second filegroup, and manually move
all "high priority" objects to it. This will give you more granularity,
more control. But it also means you need in-depth knowledge of the
database usage, it will require a lot of work to move the objects and
test if it actually improved the situation, and it requires checks every
now and then to see if it is still optimal.
Gert-Jan
David Browne wrote:
> I wouldn't be so sure. First, the performance slowdown the poster noted
> would have to be attributable to physical disk reads on the data files
> for this to potentially help.

On Tue, 19 Sep 2006 07:32:01 -0700, nikolakg
<nikolakg@.discussions.microsoft.com> wrote:
>would this increase performance?
Defragmenting the database periodically may have the same effect.
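In SQL 2000 terms that means something like the following (the table and
index names are illustrative; DBCC SHOWCONTIG first tells you whether it is
worth doing):

-- Check fragmentation for all indexes on a table
DBCC SHOWCONTIG ('dbo.Orders') WITH ALL_INDEXES
-- Defragment one index online (0 = current database);
-- DBCC DBREINDEX does a full offline rebuild instead
DBCC INDEXDEFRAG (0, 'dbo.Orders', 'IX_Orders_OrderDate')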
Of course, you already have tons of RAM, right? Plenty of RAM is a
good way to make performance problems go away. The more the (active)
data size exceeds RAM, the more sensitive performance is to physical
data layouts. But, if you start going down that path, you might end
up trying to manually balance dozens of filegroups. It might even be
effective, but it requires a lot of work, experimentation, etc.
Try to compare the logical read volume to the physical read volume
using perfmon.
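The same counters perfmon exposes can also be read from T-SQL through
master.dbo.sysperfinfo; Page lookups/sec is the logical read rate and
Page reads/sec the physical one (the values are cumulative raw counts, so
sample twice and take the difference):

SELECT counter_name, cntr_value
FROM master.dbo.sysperfinfo
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name IN ('Page lookups/sec', 'Page reads/sec')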
J.

Hi,
If you have text columns in your table, then moving the clustered index will
not move the text data. If that is the case, then I think you would have to
create a new table and transfer the data.
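A sketch of that route (the table and column names are hypothetical; in SQL
2000 the text data placement is fixed at CREATE TABLE time via TEXTIMAGE_ON):

-- Replacement table on the new filegroup, with the text data there as well
CREATE TABLE dbo.Docs_new (
    DocId int NOT NULL PRIMARY KEY,
    Title varchar(200) NOT NULL,
    Body  text NULL
) ON [SecondaryA] TEXTIMAGE_ON [SecondaryA]

INSERT INTO dbo.Docs_new (DocId, Title, Body)
SELECT DocId, Title, Body FROM dbo.Docs

DROP TABLE dbo.Docs
EXEC sp_rename 'dbo.Docs_new', 'Docs'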
John
"nikolakg" wrote:
> We have nearly 400 user tables in the database, and a lot of clustered and
> nonclustered indexes, all in the primary filegroup.

Thanks...
I used Performance Monitor and I discovered that although we had set max
server memory to 6 GB (we have 8 GB of RAM and left 2 for the OS), total
server memory and target server memory had the same size, approximately
4.1 GB. I found out that our version of SQL Server
(Microsoft SQL Server 2000 - 8.00.2039 (Intel X86) SP4)
had a bug, and we just fixed it
(FIX: Not all memory is available when AWE is enabled on a computer that is
running a 32-bit version of SQL Server 2000 SP4).
Total server memory and target server memory have increased; they are now
6.1 GB, but they still have the same size.
I read that:
<if the SQLServer:Memory Manager: Total Server Memory (KB) counter is greater
than or equal to the SQLServer:Memory Manager: Target Server Memory (KB)
counter, this indicates that SQL Server may be under memory pressure and
could use access to more physical memory.>
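For what it's worth, the two counters can also be watched from T-SQL
(sysperfinfo is SQL 2000's view over its own perfmon counters; the values
are in KB):

SELECT counter_name, cntr_value
FROM master.dbo.sysperfinfo
WHERE object_name LIKE '%Memory Manager%'
  AND counter_name LIKE '%Server Memory%'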
Do you know anything about this?
Is there a way to change it?
Is this a reason for poor performance?
The user "JXStern" wrote:

> Try to compare the logical read volume to the physical read volume
> using perfmon.
