Friday, March 30, 2012

Performance in Cube generation

During cube generation, I saw that disk utilization on the C drive is 100%.
How can I change the temporary drive for cube generation?
or
can I share the workload across other drives? How? There are lots of tips and tricks mentioned in the AS Operations and
Performance Guides.
Pointers to them are at:
http://www.microsoft.com/sql/evalua.../bianalysis.asp
1) changing the temp folder -- yes. In Analysis Manager, right-click on the
server and select Properties...
2) different folders -- no. Everything in Analysis Services sits under the
same data folder.
You can modify where that is located, but everything sits under that single
folder.
--
Dave Wickert [MSFT]
dwickert@.online.microsoft.com
Program Manager
BI Systems Team
SQL BI Product Unit (Analysis Services)
--
This posting is provided "AS IS" with no warranties, and confers no rights.
"Kam" <Kam@.discussions.microsoft.com> wrote in message
news:613802AE-92ED-4B83-9C5D-0DAB5576B781@.microsoft.com...
> during the cube generation, I saw the disk utilization in C drive is 100%.
> How can I change the temporary drive for the cubes generation?
> or
> can I share the work load on others drives? How?

Performance in big UNION

Before migrating an application to SQL 2005, I am checking its performance and
comparing it with the performance in SQL 2000. I have found a big query that
has 1000 SELECTs of constant values joined by UNION ALL; on its first execution
in SQL 2005 it takes more than 20 seconds to run, whereas in SQL 2000 it takes
less than 3 seconds. Once the plan is in cache, the executions that follow run
in 3 seconds.
You can reproduce it with
select 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25
union all
select 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25
and repeating it till 1000 unions all.
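(As an aside, a minimal sketch, not from the original post, that generates the repro batch so the parse/compile time can be measured; the variable names and loop bound are only illustrative.)
-- Build 1000 constant SELECTs glued with UNION ALL, then execute the batch
-- so the parse and compile time shows up under SET STATISTICS TIME.
DECLARE @row nvarchar(200), @sql nvarchar(max), @i int;
SET @row = N'select 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25';
SET @sql = @row;
SET @i = 1;
WHILE @i < 1000
BEGIN
    SET @sql = @sql + N' union all ' + @row;
    SET @i = @i + 1;
END;
SET STATISTICS TIME ON;
EXEC (@sql);
SET STATISTICS TIME OFF;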
Does anybody know what 2005 is doing in the compilation? and, of course, if
there is any workaround to avoid splitting the query into a subset of queries.
Thanks in advance.
Ariel wrote:
> Before migrating an application to SQL2005, I am checking the performance of
> it and comparing it with the performance in SQL 2000. I have found a big
> query that has 1000 selects of constant values joined by union all; in the
> first execution, it takes more than 20 seconds to run but in sql 2000 it
> takes less than 3 seconds, but when the plan is in cache the executions that
> follow run in 3 seconds.
> You can reproduce it with
> select 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25
> union all
> select 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25
> and repeating it till 1000 unions all.
Why are you comparing the performance of such a hm... exotic query?
Do you use it in your application?
First execution of your query takes several seconds, because parse and
compile step takes a lot of time. Nothing can be done about it. Query
processor has been totally rebuilt in SQL Server 2005, you're
experiencing a side effect of these changes.
> Does anybody know what 2005 is doing in the compilation? and, of course, if
> there is any workaround to avoid splitting the query into a subset of queries.
Why do you need this query?
Best regards,
Marcin Guzowski
http://guzowski.info|||I assume that the values within this 1000 line select are static.
Why not load them into a table and select from the table?
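(A minimal sketch of that suggestion; the table name and column count are only illustrative, the real grid has 25 columns.)
-- One-time load of the static rows; the app then selects from the table
-- instead of compiling the huge UNION ALL each time.
CREATE TABLE dbo.StaticValues (c1 int, c2 int, c3 int);
INSERT INTO dbo.StaticValues (c1, c2, c3) VALUES (1, 2, 3);
INSERT INTO dbo.StaticValues (c1, c2, c3) VALUES (4, 5, 6);
SELECT c1, c2, c3 FROM dbo.StaticValues;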
--
Denny
MCSA (2003) / MCDBA (SQL 2000)
MCTS (SQL 2005 / Microsoft Windows SharePoint Services 3.0: Configuration /
Microsoft Office SharePoint Server 2007: Configuration)
MCITP (dbadmin, dbdev)
"Marcin A. Guzowski" wrote:
> Ariel wrote:
> > Before migrating an application to SQL2005, I am checking the performance of
> > it and comparing it with the performance in SQL 2000. I have found a big
> > query that has 1000 selects of constant values joined by union all; in the
> > first execution, it takes more than 20 seconds to run but in sql 2000 it
> > takes less than 3 seconds, but when the plan is in cache the executions that
> > follow run in 3 seconds.
> >
> > You can reproduce it with
> >
> > select 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25
> > union all
> > select 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25
> >
> > and repeating it till 1000 unions all.
>
> Why are you comparing the performance of such a hm... exotic query?
> Do you use it in your application?
> First execution of your query takes several seconds, because parse and
> compile step takes a lot of time. Nothing can be done about it. Query
> processor has been totally rebuilt in SQL Server 2005, you're
> experiencing a side effect of these changes.
>
> > Does anybody know what 2005 is doing in the compilation? and, of course, if
> > there is any workaround to avoid splitting the query into a subset of queries.
> Why do you need this query?
>
> --
> Best regards,
> Marcin Guzowski
> http://guzowski.info
>|||The query written in this post is only to reproduce the problem; the legacy
code of my application uses similar code to insert data from a grid into
a table. But with the query in the example you can reproduce the problem.
Isn't there a way to make the optimizer run faster?
Six times slower than in 2000 seems to be too much to be a side effect due
to changes in 2005, isn't it?
"mrdenny" wrote:
> I assume that the values within this 1000 line select are static.
> Why not load them into a table and select from the table?
> --
> Denny
> MCSA (2003) / MCDBA (SQL 2000)
> MCTS (SQL 2005 / Microsoft Windows SharePoint Services 3.0: Configuration /
> Microsoft Office SharePoint Server 2007: Configuration)
> MCITP (dbadmin, dbdev)
>
> "Marcin A. Guzowski" wrote:
> > Ariel wrote:
> > > Before migrating an application to SQL2005, I am checking the performance of
> > > it and comparing it with the performance in SQL 2000. I have found a big
> > > query that has 1000 selects of constant values joined by union all; in the
> > > first execution, it takes more than 20 seconds to run but in sql 2000 it
> > > takes less than 3 seconds, but when the plan is in cache the executions that
> > > follow run in 3 seconds.
> > >
> > > You can reproduce it with
> > >
> > > select 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25
> > > union all
> > > select 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25
> > >
> > > and repeating it till 1000 unions all.
> >
> >
> > Why are you comparing the performance of such a hm... exotic query?
> > Do you use it in your application?
> >
> > First execution of your query takes several seconds, because parse and
> > compile step takes a lot of time. Nothing can be done about it. Query
> > processor has been totally rebuilt in SQL Server 2005, you're
> > experiencing a side effect of these changes.
> >
> >
> > > Does anybody know what 2005 is doing in the compilation? and, of course, if
> > > there is any workaround to avoid splitting the query into a subset of queries.
> >
> > Why do you need this query?
> >
> >
> > --
> > Best regards,
> > Marcin Guzowski
> > http://guzowski.info
> >|||Ariel wrote:
> The query written in this post is only to reproduce the problem, the legacy
> code of my application is using a similar code to insert data from a grid to
> a table. But with the query of the example you can reproduce the problem.
> Isn't there a way to make the optimizer run faster?
Of course there is a way - faster server.
> Six times slower than in 2000 seems to be too much to be a side effect due
> to changes in 2005 , isn't it?
No, it's not too much.
Use the right method for your task and performance won't be a bottleneck.
I would choose one of two ways to insert data from app to a table.
First (classic, many inserts in one batch, all in one transaction):
BEGIN TRAN
INSERT INTO table (cols..) VALUES (.....)
INSERT INTO table (cols..) VALUES (.....)
INSERT INTO table (cols..) VALUES (.....)
...
COMMIT TRAN
Second (dynamic SQL - very fast):
INSERT table
EXEC
('
select 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25
select 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25
select 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25
');
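(A hedged, runnable version of the two patterns above; the table and column names are hypothetical, and the real grid has 25 columns rather than 3.)
-- Hypothetical staging table for the grid rows.
CREATE TABLE dbo.GridStage (c1 int, c2 int, c3 int);
-- Classic: many small INSERTs sent as one batch inside one transaction.
BEGIN TRAN;
INSERT INTO dbo.GridStage (c1, c2, c3) VALUES (1, 2, 3);
INSERT INTO dbo.GridStage (c1, c2, c3) VALUES (4, 5, 6);
COMMIT TRAN;
-- Dynamic SQL: INSERT ... EXEC appends every result set the batch returns,
-- and the simple SELECTs compile far more cheaply than one huge UNION ALL.
INSERT INTO dbo.GridStage (c1, c2, c3)
EXEC ('select 1, 2, 3  select 4, 5, 6  select 7, 8, 9');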
Best regards,
Marcin Guzowski
http://guzowski.info|||Since it is sql 2005, you can actually force the plan. See BOL: Plan
Forcing Scenario: Create a Plan Guide That Uses a USE PLAN Query Hint
Not sure if that will help in this scenario.
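(A hedged sketch of that BOL scenario; the statement text here is shortened and the ShowPlan XML is a placeholder that would come from running the query with SET SHOWPLAN_XML ON.)
-- Attach a USE PLAN hint to a batch via a plan guide (SQL Server 2005).
EXEC sp_create_plan_guide
    @name            = N'BigUnionGuide',
    @stmt            = N'select 1,2,3 union all select 1,2,3',
    @type            = N'SQL',
    @module_or_batch = NULL,
    @params          = NULL,
    @hints           = N'OPTION (USE PLAN N''<ShowPlanXML ...></ShowPlanXML>'')';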
You mention a grid as the actual source of the problem and this fabricated
massive union all as just a demo. Can you post what the grid is doing, or
fix the ROOT cause there yourself?
TheSQLGuru
President
Indicium Resources, Inc.
"Ariel" <Ariel@.discussions.microsoft.com> wrote in message
news:9E2E9715-4C9E-449F-9A9E-06DD932AFD53@.microsoft.com...
> The query written in this post is only to reproduce the problem, the
> legacy
> code of my application is using a similar code to insert data from a grid
> to
> a table. But with the query of the example you can reproduce the problem.
> Isn't there a way to make the optimizer run faster?
> Six times slower than in 2000 seems to be too much to be a side effect due
> to changes in 2005 , isn't it?
>
> "mrdenny" wrote:
>> I assume that the values within this 1000 line select are static.
>> Why not load them into a table and select from the table?
>> --
>> Denny
>> MCSA (2003) / MCDBA (SQL 2000)
>> MCTS (SQL 2005 / Microsoft Windows SharePoint Services 3.0: Configuration
>> /
>> Microsoft Office SharePoint Server 2007: Configuration)
>> MCITP (dbadmin, dbdev)
>>
>> "Marcin A. Guzowski" wrote:
>> > Ariel wrote:
>> > > Before migrating an application to SQL2005, I am checking the
>> > > performance of
>> > > it and comparing it with the performance in SQL 2000. I have found a
>> > > big
>> > > query that has 1000 selects of constant values joined by union all;
>> > > in the
>> > > first execution, it takes more than 20 seconds to run but in sql 2000
>> > > it
>> > > takes less than 3 seconds, but when the plan is in cache the
>> > > executions that
>> > > follow run in 3 seconds.
>> > >
>> > > You can reproduce it with
>> > >
>> > > select
>> > > 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25
>> > > union all
>> > > select
>> > > 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25
>> > >
>> > > and repeating it till 1000 unions all.
>> >
>> >
>> > Why are you comparing the performance of such a hm... exotic query?
>> > Do you use it in your application?
>> >
>> > First execution of your query takes several seconds, because parse and
>> > compile step takes a lot of time. Nothing can be done about it. Query
>> > processor has been totally rebuilt in SQL Server 2005, you're
>> > experiencing a side effect of these changes.
>> >
>> >
>> > > Does anybody know what 2005 is doing in the compilation? and, of
>> > > course, if
>> > > there is any workaround to avoid splitting the query into a subset of
>> > > queries.
>> >
>> > Why do you need this query?
>> >
>> >
>> > --
>> > Best regards,
>> > Marcin Guzowski
>> > http://guzowski.info
>> >|||Thanks for your ideas, I was just trying to avoid changing code and learn
what the optimizer is doing during those 20 seconds. I'll use your ideas to
change the code. I have tried to split the query into others with 'only' 15
UNIONs, and the results are good enough for the app, although they are still
worse than in 2000.
Do you know of any source where I can find information about this
kind of 'surprise' with the query processor?
"TheSQLGuru" wrote:
> Since it is sql 2005, you can actually force the plan. See BOL: Plan
> Forcing Scenario: Create a Plan Guide That Uses a USE PLAN Query Hint
> Not sure if that will help in this scenario.
> You mention a grid as the actual source of the problem and this fabricated
> massive union all as just a demo. Can you post what the grid is doing, or
> fix the ROOT cause there yourself'
>
> --
> TheSQLGuru
> President
> Indicium Resources, Inc.
> "Ariel" <Ariel@.discussions.microsoft.com> wrote in message
> news:9E2E9715-4C9E-449F-9A9E-06DD932AFD53@.microsoft.com...
> > The query written in this post is only to reproduce the problem, the
> > legacy
> > code of my application is using a similar code to insert data from a grid
> > to
> > a table. But with the query of the example you can reproduce the problem.
> >
> > Isn't there a way to make the optimizer run faster?
> >
> > Six times slower than in 2000 seems to be too much to be a side effect due
> > to changes in 2005 , isn't it?
> >
> >
> >
> > "mrdenny" wrote:
> >
> >> I assume that the values within this 1000 line select are static.
> >>
> >> Why not load them into a table and select from the table?
> >> --
> >> Denny
> >> MCSA (2003) / MCDBA (SQL 2000)
> >> MCTS (SQL 2005 / Microsoft Windows SharePoint Services 3.0: Configuration
> >> /
> >> Microsoft Office SharePoint Server 2007: Configuration)
> >> MCITP (dbadmin, dbdev)
> >>
> >>
> >> "Marcin A. Guzowski" wrote:
> >>
> >> > Ariel wrote:
> >> > > Before migrating an application to SQL2005, I am checking the
> >> > > performance of
> >> > > it and comparing it with the performance in SQL 2000. I have found a
> >> > > big
> >> > > query that has 1000 selects of constant values joined by union all;
> >> > > in the
> >> > > first execution, it takes more than 20 seconds to run but in sql 2000
> >> > > it
> >> > > takes less than 3 seconds, but when the plan is in cache the
> >> > > executions that
> >> > > follow run in 3 seconds.
> >> > >
> >> > > You can reproduce it with
> >> > >
> >> > > select
> >> > > 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25
> >> > > union all
> >> > > select
> >> > > 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25
> >> > >
> >> > > and repeating it till 1000 unions all.
> >> >
> >> >
> >> > Why are you comparing the performance of such a hm... exotic query?
> >> > Do you use it in your application?
> >> >
> >> > First execution of your query takes several seconds, because parse and
> >> > compile step takes a lot of time. Nothing can be done about it. Query
> >> > processor has been totally rebuilt in SQL Server 2005, you're
> >> > experiencing a side effect of these changes.
> >> >
> >> >
> >> > > Does anybody know what 2005 is doing in the compilation? and, of
> >> > > course, if
> >> > > there is any workaround to avoid splitting the query into a subset of
> >> > > queries.
> >> >
> >> > Why do you need this query?
> >> >
> >> >
> >> > --
> >> > Best regards,
> >> > Marcin Guzowski
> >> > http://guzowski.info
> >> >
>
>

Performance improves with Studio Query window open

Bit of a strange one here. We have a SQL Express instance running with one database which is accessed by a VB6 client application.

Performance between the application and the database isn't great, but bizarrely if you open a Query window from the Management Studio (against the database) the performance dramatically improves - to the extent that it is visually noticeable from within the application.

So, I'm thinking that the database connections being made by the application are taking longer because of instance name resolution or the like and with the Query window open this resolution is performed quicker.

Has anyone come across this situation? I can re-create it on several different computers each with their own Express instance and I've not found anything about this on the net.

Here's the connection string used by the application - I have tried various permutations of values to no avail:

Provider=SQLOLEDB.1;Initial Catalog=SampleDB;Data Source=(local)\sqlexpress;Trusted_Connection=yes

Thanks very much

hi, SQL Express by default sets on its databases a database property that could be involved in your observation..

SQL Express sets the auto close database property to true, so that every database with no active connection is shut down to preserve file integrity.. at the next connection, the database will be re-opened, requiring initial overhead to start it.. you can modify that database option as required...
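(As an example, and assuming the SampleDB name from the connection string above, the option can be turned off and verified like this:)
-- Turn off auto close for the database, then confirm the setting.
ALTER DATABASE SampleDB SET AUTO_CLOSE OFF;
SELECT name, is_auto_close_on FROM sys.databases WHERE name = 'SampleDB';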

this "solution" was probably taken out of the box becouse SQLEXpress is expected to execute on "client pcs" and not on "server" hardware that obviously are more "secure" and "stable"...

regards|||

Yes, thank you - switching the Auto Close to False works nicely.

I note that Auto Close is False by default for databases on the full SQL Server, so we only need to watch out for databases created on SQL Express.

Thanks again.

Performance Improvements: Hardware vs. DB Design

Hi
I recently designed & developed a data warehouse for a client of ours.
The project has been quite a success, but they have since loaded the
database with a substantial amount of (historic) data, and have given
access to the query & reporting system (Business Objects) to many more
users. As a result, the query time on the reports is beginning to
slow.
I need to advise the client on what should be done to improve
performance again.
I'm quite familiar with what can be done on the design side of the
database (i.e. indices, aggregate tables, etc.), but I am not very
sure about the cost/benefits from a hardware perspective. As a
consultancy, the cost of our services (to build aggregate tables, etc)
would be quite expensive. It might be a better option for the client
to improve the spec of the server instead, but I am reluctant to
recommend this option without knowing that there would be guaranteed
benefits. In other words, if they were to double the processing power,
would the query time effectively halve? What about memory - i.e. would
more memory help with multiple-users, but not affect the speed of the
reports? Is 4GB the max memory that Windows 2000 can have?
[I can't remember the exact spec, but the server's got something like
2 Xeon dual-processors, 4GB RAM & Win2k.]
Can anyone recommend a study or white-paper on the performance
improvements in relation to hardware upgrades, or something similar?
Otherwise, what other options do I have? Am I overlooking something
like two load-balanced servers?
Thanks for any help / recommendations!
Sean W.
Sean wrote:
> Hi
> I recently designed & developed a data warehouse for a client of ours.
> The project has been quite a success, but they have since loaded the
> database with a substantial amount of (historic) data, and have given
> access to the query & reporting system (Business Objects) to many more
> users. As a result, the query time on the reports is beginning to
> slow.
> I need to advise the client on what should be done to improve
> performance again.
> I'm quite familiar with what can be done on the design side of the
> database (i.e. indices, aggregate tables, etc.), but I am not very
> sure about the cost/benefits from a hardware perspective. As a
> consultancy, the cost of our services (to build aggregate tables, etc)
> would be quite expensive. It might be a better option for the client
> to improve the spec of the server instead, but I am reluctant to
> recommend this option without knowing that there would be guaranteed
> benefits. In other words, if they were to double the processing power,
> would the query time effectively halve? What about memory - i.e. would
> more memory help with multiple-users, but not affect the speed of the
> reports? Is 4GB the max memory that Windows 2000 can have?
> [I can't remember the exact spec, but the server's got something like
> 2 Xeon dual-processors, 4GB RAM & Win2k.]
> Can anyone recommend a study or white-paper on the performance
> improvements in relation to hardware upgrades, or something similar?
> Otherwise, what other options do I have? Am I overlooking something
> like two load-balanced servers?
> Thanks for any help / recommendations!
> Sean W.
Unfortunately, I think many companies are inclined to add more server
without first trying to tune the database. You've gone ahead and
designed a database for the customer and added the necessary RI and
indexes. But now that you see the SQL executed, someone needs to tune
the queries, revise the indexes if necessary, and verify the reporting
system is executing SQL in an efficient way: For example, is it using
stored procedures? Do users have Ad Hoc access to the database to query
whatever they want? Does the reporting tool bring back large result sets
and filter on the client PC? Do the queries use a READ UNCOMMITTED or
NOLOCK query option to keep locks to a minimum?
My point is that without performance tuning the queries, it's really
impossible to know whether adding more hardware will give you anything
but a temporary reprieve of poor performance.
When I hear about problems like this, I like to remember an old client
that had a couple hundred people hitting a SQL Server database which ran
on a dual-CPU 486 with 84 MB of RAM. I know times have changed, but when
I hear about performance problems on quad-Xeon systems with 3GB RAM (as
an example), I get concerned that a company is considering throwing
another $150K at a new server.
Ad Hoc reporting systems are notoriously bad on performance, especially
if users don't run canned reports and can throw what SQL the end-user
reporting tool can generate at the server. The amount of data queried
can easily move old data out of cache and force the next query to read
that data from disk, which is really slow. Adding more memory is an
option if plan reuse is the issue. And since memory is cheap, it can't
hurt to add all you can. Adding more hardware will help, but may not
really be a long term solution.
However, nothing beats being able to aptly use an index to generate a
result set. Table/Clustered Index Scans are a killer on big tables.
You may want to consider placing query limits on ad hoc queries using
"SET QUERY_GOVERNOR_COST_LIMIT ". You can also use Profiler to track
those queries cosuming excessive CPU and see if the database or the
queries can be tuned.
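(A hedged sketch of both forms of that option; the limit value of 300 is arbitrary.)
-- Per session: queries whose estimated cost exceeds the limit are not started.
SET QUERY_GOVERNOR_COST_LIMIT 300;
-- Or server wide, as an advanced sp_configure option.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'query governor cost limit', 300;
RECONFIGURE;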
David Gugick
Imceda Software
www.imceda.com
|||With Business Objects it is worth asking the users to send you copies of
their reports. You can then see if indices need creating on various
tables etc.
Adrian
David G. wrote:

> Sean wrote:
>
> Unfortunately, I think many companies are inclined to add more server
> without first trying to tune the database. You've gone ahead and
> designed a database for the customer and added the necessary RI and
> indexes. But now that you see the SQL executed, someone needs to tune
> the queries, revise the indexes if necessary, and verify the reporting
> system is executing SQL in an efficient way: For example, is it using
> stored procedures? Do users have Ad Hoc access to the database to query
> whatever they want? Does the reporting tool bring back large result sets
> and filter on the client PC? Do the queries use a READ UNCOMMITTED or
> NOLOCK query option to keep locks to a minimum?
> My point is that without performance tuning the queries, it's really
> impossible to know whether adding more hardware will give you anything
> but a temporary reprieve of poor performance.
> When I hear about problems like this, I like to remember an old client
> that had a couple hundred people hitting a SQL Server database which ran
> on a dual-CPU 486 with 84 MB of RAM. I know times have changed, but when
> I hear about performance problems on quad-Xeon systems with 3GB RAM (as
> an example), I get concerned that a company is considering throwing
> another $150K at a new server.
> Ad Hoc reporting systems are notoriously bad on performance, especially
> if users don't run canned reports and can throw what SQL the end-user
> reporting tool can generate at the server. The amount of data queried
> can easily move old data out of cache and force the next query to read
> that data from disk, which is really slow. Adding more memory is an
> option if plan reuse is the issue. And since memory is cheap, it can't
> hurt to add all you can. Adding more hardware will help, but may not
> really be a long term solution.
> However, nothing beats being able to aptly use an index to generate a
> result set. Table/Clustered Index Scans are a killer on big tables.
> You may want to consider placing query limits on ad hoc queries using
> "SET QUERY_GOVERNOR_COST_LIMIT ". You can also use Profiler to track
> those queries cosuming excessive CPU and see if the database or the
> queries can be tuned.
>
|||Adrian Edwards wrote:
> With Business Objects it is worth asking the users to send you copies
> of their reports. You can then see if indices need creating on various
> tables etc.
> Adrian
>
Good point. If you can get your hands on any canned reports or at least
have a process for quickly rolling out new or updated reports, you may
eliminate users creating queries that don't perform well.
You might want to consider having new reports created against a test
server before moving them into production. Not giving users the rights
to create reports on the production data may be difficult to manage.
Managers generally want fast results. If you can't do that, at least
introduce a review process so there's never a time when too many reports
are created without review.
You can do this passively using Profiler or a server-side trace. You
could trap the SQL:BatchCompleted and RPC:Completed events for CPU over
a certain level (say 1,000ms) and review those SQL Statements
periodically.
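(For example, once such a trace has been written to a file, the expensive statements could be pulled back with fn_trace_gettable; the file path here is hypothetical.)
-- Review completed RPCs and batches that used more than 1,000 ms of CPU.
SELECT TextData, CPU, Duration, Reads, Writes, StartTime
FROM fn_trace_gettable(N'C:\Traces\ExpensiveQueries.trc', DEFAULT)
WHERE EventClass IN (10, 12)   -- 10 = RPC:Completed, 12 = SQL:BatchCompleted
  AND CPU > 1000
ORDER BY CPU DESC;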
David Gugick
Imceda Software
www.imceda.com

Performance improvements

Any suggestions on improving this query? It is currently timing out after 5 mins.

With
  Set [Account] As {
    [Account].&[17253],
    Descendants([Account].&[17253], 1, Leaves),
    Descendants([Account].&[17253], 2, Leaves),
    Descendants([Account].&[17253], 3, Leaves),
    Descendants([Account].&[17253], 4, Leaves),
    Descendants([Account].&[17253], 5, Leaves),
    Descendants([Account].&[17253], 6, Leaves),
    Descendants([Account].&[17253], 7, Leaves)
  }
  Set [Year] As { [Years].&[2005], [Years].&[2006], [Years].&[2007] }
Select
  CrossJoin(
    { CrossJoin({[Organization].&[1]},
                {CrossJoin({[Years].[Years].members}, {[Period].[Period Name].members})}) },
    { [Measures].[Current vs. Prior], [Measures].[Value] }
  ) On Columns,
  CrossJoin([Account], [Scenarios].&[1]) On Rows
From TestCube

We're trying to get all the descendants of account #17253. Just using the Descendants function causes even worse performance.

thanks,

Andrew

First try looking at the performance guide and see what the recommendations are for improving query performance.

Please also take a look at NON EMPTY, NONEMPTY, and NonEmptyCrossJoin, and at the NON_EMPTY_BEHAVIOR property.

Here some more information

http://blogs.msdn.com/bi_systems/articles/162852.aspx

http://www.sql-server-performance.com/wp_msas_9.asp

HTH

Edward.
--
This posting is provided "AS IS" with no warranties, and confers no rights.

Performance Improvement in Analysis Services 2005

Hi,

I am new to Analysis Services and was wondering if someone could
assist me with the issues I am having.

Basically we have one single table on which I need to build a cube so
that management can view the data in Excel with good performance. I
started with AS 2000 but ran into the issue of having more than 64,000
members for a dimension level which were not unique. I tried various
suggestions given on the forum but none worked.

So I installed AS 2005, migrated the database from AS 2000, and built the cube. However, I am running into major performance issues. What I have are 2 dimensions with hierarchies, Products and Time. The lowest level in Products is something called Instrument Id, and it is the 4th level in my hierarchy; the Time dimension has 3 levels, month being the lowest.

I read the articles which described the difference between hierarchies and attribute relationships in AS 2000 and AS 2005, so I defined attributes in my cube design for the above-mentioned dimensions. However, for the lowest level of detail (Instrument Id across months) the query takes forever. I have the aggregation set at 70%.

I know there is an issue with my design, but with my limited knowledge of Analysis Services I am unable to focus in the right direction. Can someone help me with this?

Thanks

For clues on how to deal with performance problems, and for better design ideas, take a look at the showcase study, Project REAL:

http://www.microsoft.com/technet/prodtechnol/sql/2005/projreal.mspx

Particularly in your case I can think of some reasons for slower performance.

For one, the query you are trying to run is going to scan your entire fact table and is probably going to bring back a very big result. Try to make sure you provide a slice for your query. Select only a few Instrument Ids, not the entire level.

Second, try to partition your measure group. Having several partitions would help speed up your query, because Analysis Server can detect which partitions hold the data for the particular Instrument Ids you query for and will only scan those partitions.

Edward.
--
This posting is provided "AS IS" with no warranties, and confers no rights.

|||

Edward,

Thanks for the input, just a couple of quick follow up questions

1. When you say "make sure you provide a slice for your query. Select only a few Instrument Ids, not the entire level," I am not sure how I would go about doing it.

2. As far as partitions go, I was trying to create partitions for each month of data using the query binding feature. I am not supposed to create a partition against a dimension, so how would I go about creating partitions across a measure group?

Thanks

|||Dear All,

I'm having a problem with performance in SSAS 2005. This is the server specification:
OLAP Server and ORACLE Data Warehouse Server (64 bit): IBM xSeries 460, Windows 2003 EE x64 SP1, SQL Server Analysis Services 2005 x64 SP1 + Hotfix, ORACLE 10g R2, 8x Xeon 3 GHz, 8 GB memory. The Data Warehouse is located on drive D (400 GB) and the OLAP files are on drive E (300 GB); both are external disks on a SAN (Storage Area Network) using RAID 5.
Application Server (32 bit): 52 IBM xSeries 366, Windows 2003 R2 SP1, BI Serv Dev, SQL Server 2005, 8x Xeon 3 GHz, 8 GB memory.

For the Data Warehouse I'm using ORACLE 10g R2 64 bit, and for the OLAP I'm using SSAS 2005 64 bit. I have already installed the Oracle client. For the front end, I'm using ProClarity.

I have problems with performance when querying the OLAP using ProClarity or SSMS (Management Studio). Response time is very poor, yet CPU utilization is below 10% and memory usage is not more than 50%. When I check the performance monitor, I/O access is very high, continuously at 100% utilization, while I am running queries from ProClarity Analytics Server. To improve performance I have applied aggregation designs for each cube. However, an aggregation design cannot be made on several cubes that have a lot of data and a lot of measures and dimensions (the number of fields in the table is more or less 150); they return a 0% optimization level when I run the aggregation design.

FYI, I have tried several ways of optimizing the performance, such as:
1. Using MOLAP storage mode with 100% design aggregation, except for the several cubes where it cannot be made.
2. Cube design follows best practice, e.g. using key members.
3. Separation of physical storage for data warehouse and OLAP files onto different disks.
4. I also used usage-based optimization, but it did not work.

My key points are how to improve the performance and how to fix the aggregation problem on the cubes where an aggregation design cannot be made. I am looking forward to hearing from you soon.

Thanks in advance.


performance implications of collation changes

for the most part, my (sql server 2005) application prefers to be case
insensitive, so my default database collation is case insensitive. However
there are a few columns which I really want to be case-sensitive. Is there
any significant performance impact to setting some columns to have a
different collation from the database as a whole?
...Mike
"Mike Kraley" <mkraley@.community.nospam> wrote in message
news:B7D9FD39-82D5-4256-BCCA-A11CD3A693D8@.microsoft.com...
> for the most part, my (sql server 2005) application prefers to be case
> insensitive, so my default database collation is case insensitive. However
> there are a few columns which I really want to be case-sensitive. Is there
> any significant performance impact to setting some columns to have a
> different collation from the database as a whole?
> --
> ...Mike
No, in fact being case-sensitive would have a minor performance boost.
(Even better would be to use binary collation.)
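To make the column-level option concrete, here is a minimal T-SQL sketch (table, column, and collation names are only examples); the database default stays case-insensitive while one column is declared case-sensitive:

-- One case-sensitive column in an otherwise case-insensitive database
CREATE TABLE dbo.Accounts
(
    AccountName varchar(50),                               -- uses the case-insensitive database default
    AccessToken varchar(50) COLLATE Latin1_General_CS_AS   -- case-sensitive column
);

-- Comparisons on AccessToken are case-sensitive:
SELECT AccountName FROM dbo.Accounts WHERE AccessToken = 'AbC';   -- will not match 'abc'

-- A case-insensitive column can still be compared case-sensitively ad hoc,
-- at the cost of losing an index seek on that predicate:
SELECT AccountName FROM dbo.Accounts
WHERE AccountName COLLATE Latin1_General_CS_AS = 'AbC';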
|||Hello Mike,
I agree with Michael that a case-sensitive collation can give a minor performance
boost, though the improvement is likely to be small. You may want to refer to
the following link for more discussion on this topic:
How to perform case sensitive searches in SQL Server?
http://vyaskn.tripod.com/case_sensitive_search_in_sql_server.htm
Please feel free to let's know if you have any further comments or
concerns. Thank you.
Best Regards,
Peter Yang
MCSE2000/2003, MCSA, MCDBA
Microsoft Online Community Support
==================================================
Get notification to my posts through email? Please refer to
http://msdn.microsoft.com/subscriptions/managednewsgroups/default.aspx#notifications
<http://msdn.microsoft.com/subscriptions/managednewsgroups/default.aspx>.
Note: The MSDN Managed Newsgroup support offering is for non-urgent issues
where an initial response from the community or a Microsoft Support
Engineer within 1 business day is acceptable. Please note that each follow
up response may take approximately 2 business days as the support
professional working with you may need further investigation to reach the
most efficient resolution. The offering is not appropriate for situations
that require urgent, real-time or phone-based interactions or complex
project analysis and dump analysis issues. Issues of this nature are best
handled working with a dedicated Microsoft Support Engineer by contacting
Microsoft Customer Support Services (CSS) at
<http://msdn.microsoft.com/subscriptions/support/default.aspx>.
==================================================
This posting is provided "AS IS" with no warranties, and confers no rights.

Performance impact of Transactional Push Replication

Hi,
I have transactional push replication from our publisher on server MM1, with the subscriber and distributor on server MM2. We have had performance slowdowns on the MM1 publisher server, and we removed the replication to see if this was causing the problem. I believe the only replication process on the publisher is the sp_replcmds call made by the Log Reader Agent. Are there any statistics that I can use to determine whether replication was causing the slowdowns? Any help would be appreciated.
Thanks
Don
Don,
you could use Performance Monitor and correlate the
SQLServer:Replication Dist. and
SQLServer:Replication Logreader objects
(Dist:Delivered Cmds/sec, Dist:Delivered Trans/sec)
with Process:%Processor Time for the sqlserver and sqlserver agent
processes.
Ideally you'd measure the threads used in the replication process and their
contribution, but the above technique should show any large trends.
HTH,
Paul Ibison
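Besides Performance Monitor, the distribution database keeps per-run history for the Log Reader Agent that can be checked with a query along these lines (a sketch only; it assumes the default distribution database name and a made-up published database name):

-- Recent Log Reader Agent throughput, read from the distributor
USE distribution;

SELECT TOP 20
    h.[time],
    h.delivered_transactions,
    h.delivered_commands,
    h.delivery_rate,        -- commands delivered per second
    h.delivery_latency      -- latency from publisher log to distribution database
FROM dbo.MSlogreader_history AS h
JOIN dbo.MSlogreader_agents  AS a ON a.id = h.agent_id
WHERE a.publisher_db = N'MyPublishedDb'   -- hypothetical name of the published database
ORDER BY h.[time] DESC;

A sustained high delivery rate during the slowdowns would at least show the Log Reader was busy at the time; a near-idle history would point the investigation elsewhere.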

Performance impact of tracing?

I have set up a trace using sp_trace_create and it works fine for most of my
SQL 2000 SP3 servers. However, on one server (which is no different from the
others) it has a huge impact on performance, to the point where I cannot run
tracing. The only difference between servers is that this particular server
has a database which uses a huge number of UDFs.
Anyone had a similar problem?
Since UDFs run for every row processed, they can cause a lot of trace events
if you are tracing at the statement level. Try tracing only the Batch
Completed, SP Completed and RPC Completed events instead. You can always
filter down once you find the offending statements and then open up the trace. Also
make sure you are sending the results to a LOCAL disk drive that is not used
by the data files or, especially, the log file.
Andrew J. Kelly SQL MVP
"Malarb" <Malarb@.discussions.microsoft.com> wrote in message
news:1C0D4DF0-D62D-4258-A658-F44EB85B798F@.microsoft.com...
>I have set up a trace using sp_trace_create and it works fine for most of
>my
> SQL 2k SP 3 servers. However on one server (which is no different to the
> others) it has a huge impact on performance to the point where i can not
> run
> tracing. The only difference between servers is that this particular
> server
> has a database which uses a huge amount of UDF's?
> Anyone had a similar problem?
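Following the advice above, a server-side trace limited to the completed events can be set up roughly as follows (the file path is an example; trace events 10 = RPC:Completed, 12 = SQL:BatchCompleted, 43 = SP:Completed):

-- Minimal server-side trace capturing only completed events, written to a local file
DECLARE @traceid int, @maxsize bigint, @on bit;
SET @maxsize = 50;   -- MB per trace file before rollover
SET @on = 1;

-- Create the trace with file rollover (option 2); the .trc extension is added automatically
EXEC sp_trace_create @traceid OUTPUT, 2, N'C:\Traces\completed_only', @maxsize;

-- Capture TextData (1), SPID (12) and Duration (13) for the three completed events
EXEC sp_trace_setevent @traceid, 10, 1,  @on;
EXEC sp_trace_setevent @traceid, 10, 12, @on;
EXEC sp_trace_setevent @traceid, 10, 13, @on;
EXEC sp_trace_setevent @traceid, 12, 1,  @on;
EXEC sp_trace_setevent @traceid, 12, 12, @on;
EXEC sp_trace_setevent @traceid, 12, 13, @on;
EXEC sp_trace_setevent @traceid, 43, 1,  @on;
EXEC sp_trace_setevent @traceid, 43, 12, @on;
EXEC sp_trace_setevent @traceid, 43, 13, @on;

-- Start the trace and note the id so it can be stopped (status 0) and closed (status 2) later
EXEC sp_trace_setstatus @traceid, 1;
SELECT @traceid AS traceid;

Once an offending batch or procedure shows up, the trace can be reopened with statement-level events filtered down to that object or SPID.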
