Friday, March 30, 2012

Performance in Cube generation

During cube generation, I saw that disk utilization on the C drive is 100%.
How can I change the temporary drive used for cube generation?
Or can I share the workload across other drives? How?
|||
There are lots of tips and tricks mentioned in the AS Operations and
Performance Guides.
Pointers to them are at:
http://www.microsoft.com/sql/evalua.../bianalysis.asp
1) changing the temp folder -- yes. In Analysis Manager, right-click on the
server and select Properties...
2) different folders -- no. Everything in Analysis Services sits under the
same data folder.
You can modify where that is located, but everything sits under that single
folder.
--
Dave Wickert [MSFT]
dwickert@.online.microsoft.com
Program Manager
BI Systems Team
SQL BI Product Unit (Analysis Services)
--
This posting is provided "AS IS" with no warranties, and confers no rights.
"Kam" <Kam@.discussions.microsoft.com> wrote in message
news:613802AE-92ED-4B83-9C5D-0DAB5576B781@.microsoft.com...
> during the cube generation, I saw the disk utilization in C drive is 100%.
> How can I change the temporary drive for the cubes generation?
> or
> can I share the work load on other drives? How?

Performance in big UNION

Before migrating an application to SQL2005, I am checking the performance of
it and comparing it with the performance in SQL 2000. I have found a big
query that has 1000 selects of constant values joined by union all; in the
first execution, it takes more than 20 seconds to run but in sql 2000 it
takes less than 3 seconds, but when the plan is in cache the executions that
follow run in 3 seconds.
You can reproduce it with
select 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25
union all
select 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25
and repeating it till 1000 unions all.
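A quick way to build that repro without pasting 1000 lines by hand is a small dynamic-SQL loop; this is a sketch, with the branch text taken from the query above:

```sql
-- Build the 1000-branch UNION ALL batch described above, then run it.
DECLARE @sql nvarchar(max), @n int;
SELECT @sql = N'select 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25',
       @n = 1;
WHILE @n < 1000
BEGIN
    SET @sql = @sql + N' union all select 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25';
    SET @n = @n + 1;
END;
EXEC (@sql);
```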
Does anybody know what 2005 is doing in the compilation? and, of course, if
there is any workaround to avoid splitting the query into a subset of queries.
Thanks in advance.
|||
Ariel wrote:
> Before migrating an application to SQL2005, I am checking the performance of
> it and comparing it with the performance in SQL 2000. I have found a big
> query that has 1000 selects of constant values joined by union all; in the
> first execution, it takes more than 20 seconds to run but in sql 2000 it
> takes less than 3 seconds, but when the plan is in cache the executions that
> follow run in 3 seconds.
> You can reproduce it with
> select 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25
> union all
> select 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25
> and repeating it till 1000 unions all.
Why are you comparing the performance of such a hm... exotic query?
Do you use it in your application?
The first execution of your query takes several seconds because the parse
and compile step takes a lot of time. Nothing can be done about it. The
query processor was totally rebuilt in SQL Server 2005; you're
experiencing a side effect of these changes.
> Does anybody know what 2005 is doing in the compilation? and, of course, if
> there is any workaround to avoid splitting the query into a subset of queries.
Why do you need this query?
Best regards,
Marcin Guzowski
http://guzowski.info
|||
I assume that the values within this 1000 line select are static.
Why not load them into a table and select from the table?
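A minimal sketch of that approach (table and column names are illustrative, not from the original post):

```sql
-- Load the constant rows once...
CREATE TABLE StaticValues (c1 int, c2 int, c3 int);  -- one column per value
INSERT INTO StaticValues (c1, c2, c3) VALUES (1, 2, 3);
-- ...repeat the INSERT per grid row, then read them back with a plain SELECT:
SELECT c1, c2, c3 FROM StaticValues;
```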
--
Denny
MCSA (2003) / MCDBA (SQL 2000)
MCTS (SQL 2005 / Microsoft Windows SharePoint Services 3.0: Configuration /
Microsoft Office SharePoint Server 2007: Configuration)
MCITP (dbadmin, dbdev)
"Marcin A. Guzowski" wrote:
> Ariel wrote:
> > Before migrating an application to SQL2005, I am checking the performance of
> > it and comparing it with the performance in SQL 2000. I have found a big
> > query that has 1000 selects of constant values joined by union all; in the
> > first execution, it takes more than 20 seconds to run but in sql 2000 it
> > takes less than 3 seconds, but when the plan is in cache the executions that
> > follow run in 3 seconds.
> >
> > You can reproduce it with
> >
> > select 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25
> > union all
> > select 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25
> >
> > and repeating it till 1000 unions all.
>
> Why are you comparing the performance of such a hm... exotic query?
> Do you use it in your application?
> First execution of your query takes several seconds, because parse and
> compile step takes a lot of time. Nothing can be done about it. Query
> processor has been totally rebuilt in SQL Server 2005, you're
> experiencing a side effect of these changes.
>
> > Does anybody know what 2005 is doing in the compilation? and, of course, if
> > there is any workaround to avoid splitting the query into a subset of queries.
> Why do you need this query?
>
> --
> Best regards,
> Marcin Guzowski
> http://guzowski.info
>|||The query written in this post is only to reproduce the problem, the legacy
code of my application is using a similar code to insert data from a grid to
a table. But with the query of the example you can reproduce the problem.
Isn't there a way to make the optimizer run faster?
Six times slower than in 2000 seems to be too much to be a side effect due
to changes in 2005 , isn't it?
"mrdenny" wrote:
> I assume that the values within this 1000 line select are static.
> Why not load them into a table and select from the table?
> --
> Denny
> MCSA (2003) / MCDBA (SQL 2000)
> MCTS (SQL 2005 / Microsoft Windows SharePoint Services 3.0: Configuration /
> Microsoft Office SharePoint Server 2007: Configuration)
> MCITP (dbadmin, dbdev)
>
> "Marcin A. Guzowski" wrote:
> > Ariel wrote:
> > > Before migrating an application to SQL2005, I am checking the performance of
> > > it and comparing it with the performance in SQL 2000. I have found a big
> > > query that has 1000 selects of constant values joined by union all; in the
> > > first execution, it takes more than 20 seconds to run but in sql 2000 it
> > > takes less than 3 seconds, but when the plan is in cache the executions that
> > > follow run in 3 seconds.
> > >
> > > You can reproduce it with
> > >
> > > select 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25
> > > union all
> > > select 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25
> > >
> > > and repeating it till 1000 unions all.
> >
> >
> > Why are you comparing the performance of such a hm... exotic query?
> > Do you use it in your application?
> >
> > First execution of your query takes several seconds, because parse and
> > compile step takes a lot of time. Nothing can be done about it. Query
> > processor has been totally rebuilt in SQL Server 2005, you're
> > experiencing a side effect of these changes.
> >
> >
> > > Does anybody know what 2005 is doing in the compilation? and, of course, if
> > > there is any workaround to avoid splitting the query into a subset of queries.
> >
> > Why do you need this query?
> >
> >
> > --
> > Best regards,
> > Marcin Guzowski
> > http://guzowski.info
> >
|||
Ariel wrote:
> The query written in this post is only to reproduce the problem, the legacy
> code of my application is using a similar code to insert data from a grid to
> a table. But with the query of the example you can reproduce the problem.
> Isn't there a way to make the optimizer run faster?
Of course there is a way - a faster server.
> Six times slower than in 2000 seems to be too much to be a side effect due
> to changes in 2005 , isn't it?
No, it's not too much.
Use the right method for your task and performance won't be a bottleneck.
I would choose one of two ways to insert data from app to a table.
First (classic, many inserts in one batch, all in one transaction):
BEGIN TRAN
INSERT INTO table (cols..) VALUES (.....)
INSERT INTO table (cols..) VALUES (.....)
INSERT INTO table (cols..) VALUES (.....)
...
COMMIT TRAN
Second (dynamic SQL - very fast):
INSERT table
EXEC
('
select 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25
select 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25
select 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25
');
Best regards,
Marcin Guzowski
http://guzowski.info
|||
Since it is SQL 2005, you can actually force the plan. See BOL: Plan
Forcing Scenario: Create a Plan Guide That Uses a USE PLAN Query Hint.
Not sure if that will help in this scenario.
You mention a grid as the actual source of the problem and this fabricated
massive union all as just a demo. Can you post what the grid is doing, or
fix the ROOT cause there yourself?
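A hedged sketch of the plan guide approach mentioned above, using SQL 2005's sp_create_plan_guide; the guide name and statement text are placeholders, and the full XML showplan required inside USE PLAN is elided as '...':

```sql
EXEC sp_create_plan_guide
    @name = N'BigUnionGuide',                     -- placeholder name
    @stmt = N'select 1,2 union all select 1,2',   -- must match the batch text exactly
    @type = N'SQL',
    @module_or_batch = NULL,
    @params = NULL,
    @hints = N'OPTION (USE PLAN N''...'')';       -- '...' = full XML showplan
```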
TheSQLGuru
President
Indicium Resources, Inc.
"Ariel" <Ariel@.discussions.microsoft.com> wrote in message
news:9E2E9715-4C9E-449F-9A9E-06DD932AFD53@.microsoft.com...
> The query written in this post is only to reproduce the problem, the
> legacy
> code of my application is using a similar code to insert data from a grid
> to
> a table. But with the query of the example you can reproduce the problem.
> Isn't there a way to make the optimizer run faster?
> Six times slower than in 2000 seems to be too much to be a side effect due
> to changes in 2005 , isn't it?
>
> "mrdenny" wrote:
>> I assume that the values within this 1000 line select are static.
>> Why not load them into a table and select from the table?
>> --
>> Denny
>> MCSA (2003) / MCDBA (SQL 2000)
>> MCTS (SQL 2005 / Microsoft Windows SharePoint Services 3.0: Configuration
>> /
>> Microsoft Office SharePoint Server 2007: Configuration)
>> MCITP (dbadmin, dbdev)
>>
>> "Marcin A. Guzowski" wrote:
>> > Ariel wrote:
>> > > Before migrating an application to SQL2005, I am checking the
>> > > performance of
>> > > it and comparing it with the performance in SQL 2000. I have found a
>> > > big
>> > > query that has 1000 selects of constant values joined by union all;
>> > > in the
>> > > first execution, it takes more than 20 seconds to run but in sql 2000
>> > > it
>> > > takes less than 3 seconds, but when the plan is in cache the
>> > > executions that
>> > > follow run in 3 seconds.
>> > >
>> > > You can reproduce it with
>> > >
>> > > select
>> > > 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25
>> > > union all
>> > > select
>> > > 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25
>> > >
>> > > and repeating it till 1000 unions all.
>> >
>> >
>> > Why are you comparing the performance of such a hm... exotic query?
>> > Do you use it in your application?
>> >
>> > First execution of your query takes several seconds, because parse and
>> > compile step takes a lot of time. Nothing can be done about it. Query
>> > processor has been totally rebuilt in SQL Server 2005, you're
>> > experiencing a side effect of these changes.
>> >
>> >
>> > > Does anybody know what 2005 is doing in the compilation? and, of
>> > > course, if
>> > > there is any workaround to avoid splitting the query into a subset of
>> > > queries.
>> >
>> > Why do you need this query?
>> >
>> >
>> > --
>> > Best regards,
>> > Marcin Guzowski
>> > http://guzowski.info
> >
|||
Thanks for your ideas. I was just trying to avoid changing code and to learn
what the optimizer is doing during those 20 seconds. I'll use your ideas to
change the code. I have tried splitting the query into others with 'only' 15
UNIONs, and the results are good enough for the app, although they are still
worse than in 2000.
Do you know any information source where I can find information about this
kind of 'surprises' with the query processor?
"TheSQLGuru" wrote:
> Since it is sql 2005, you can actually force the plan. See BOL: Plan
> Forcing Scenario: Create a Plan Guide That Uses a USE PLAN Query Hint
> Not sure if that will help in this scenario.
> You mention a grid as the actual source of the problem and this fabricated
> massive union all as just a demo. Can you post what the grid is doing, or
> fix the ROOT cause there yourself'
>
> --
> TheSQLGuru
> President
> Indicium Resources, Inc.
> "Ariel" <Ariel@.discussions.microsoft.com> wrote in message
> news:9E2E9715-4C9E-449F-9A9E-06DD932AFD53@.microsoft.com...
> > The query written in this post is only to reproduce the problem, the
> > legacy
> > code of my application is using a similar code to insert data from a grid
> > to
> > a table. But with the query of the example you can reproduce the problem.
> >
> > Isn't there a way to make the optimizer run faster?
> >
> > Six times slower than in 2000 seems to be too much to be a side effect due
> > to changes in 2005 , isn't it?
> >
> >
> >
> > "mrdenny" wrote:
> >
> >> I assume that the values within this 1000 line select are static.
> >>
> >> Why not load them into a table and select from the table?
> >> --
> >> Denny
> >> MCSA (2003) / MCDBA (SQL 2000)
> >> MCTS (SQL 2005 / Microsoft Windows SharePoint Services 3.0: Configuration
> >> /
> >> Microsoft Office SharePoint Server 2007: Configuration)
> >> MCITP (dbadmin, dbdev)
> >>
> >>
> >> "Marcin A. Guzowski" wrote:
> >>
> >> > Ariel wrote:
> >> > > Before migrating an application to SQL2005, I am checking the
> >> > > performance of
> >> > > it and comparing it with the performance in SQL 2000. I have found a
> >> > > big
> >> > > query that has 1000 selects of constant values joined by union all;
> >> > > in the
> >> > > first execution, it takes more than 20 seconds to run but in sql 2000
> >> > > it
> >> > > takes less than 3 seconds, but when the plan is in cache the
> >> > > executions that
> >> > > follow run in 3 seconds.
> >> > >
> >> > > You can reproduce it with
> >> > >
> >> > > select
> >> > > 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25
> >> > > union all
> >> > > select
> >> > > 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25
> >> > >
> >> > > and repeating it till 1000 unions all.
> >> >
> >> >
> >> > Why are you comparing the performance of such a hm... exotic query?
> >> > Do you use it in your application?
> >> >
> >> > First execution of your query takes several seconds, because parse and
> >> > compile step takes a lot of time. Nothing can be done about it. Query
> >> > processor has been totally rebuilt in SQL Server 2005, you're
> >> > experiencing a side effect of these changes.
> >> >
> >> >
> >> > > Does anybody know what 2005 is doing in the compilation? and, of
> >> > > course, if
> >> > > there is any workaround to avoid splitting the query into a subset of
> >> > > queries.
> >> >
> >> > Why do you need this query?
> >> >
> >> >
> >> > --
> >> > Best regards,
> >> > Marcin Guzowski
> >> > http://guzowski.info
> >> >
>
>

Performance improves with Studio Query window open

Bit of a strange one here. We have a SQL Express instance running with one database which is accessed by a VB6 client application.

Performance between the application and the database isn't great, but bizarrely if you open a Query window from the Management Studio (against the database) the performance dramatically improves - to the extent that it is visually noticeable from within the application.

So, I'm thinking that the database connections being made by the application are taking longer because of instance name resolution or the like, and that with the Query window open this resolution happens more quickly.

Has anyone come across this situation? I can re-create it on several different computers each with their own Express instance and I've not found anything about this on the net.

Here's the connection string used by the application - I have tried various permutations of values to no avail:

Provider=SQLOLEDB.1;Initial Catalog=SampleDB;Data Source=(local)\sqlexpress;Trusted_Connection=yes

Thanks very much

hi, SQL Express by default sets a database property on its databases that could explain your observation.

SQL Express sets the auto close database property to true, so that every database with no active connections is shut down to preserve file integrity. At the next connection, the database is re-opened, which incurs some startup overhead. You can change that database option as required.

This default was probably chosen because SQL Express is expected to run on "client PCs" rather than on "server" hardware, which is generally more secure and stable.

regards
|||
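For reference, that auto close default can be switched off per database with a one-line ALTER; the database name here is taken from the connection string earlier in the thread:

```sql
ALTER DATABASE SampleDB SET AUTO_CLOSE OFF;
```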

Yes, thank you - switching the Auto Close to False works nicely.

I note that Auto Close is False by default on databases on full SQL Server, so we only need to watch out for databases set up on SQL Express.

Thanks again.

Performance improves with Studio Query window open

Bit of a strange one here. We have a SQL Express instance running with one database which is accessed by a VB6 client application.

Performance between the application and the database isn't great, but bizarrely if you open a Query window from the Management Studio (against the database) the performance dramatically improves - to the extent that it is visually noticeable from within the application.

So, I'm thinking that the database connections being made by the application are taking longer because of instance name resolution or the like and with the Query window open this resolution is performed quicker.

Has anyone come across this situation? I can re-create it on several different computers each with their own Express instance and I've not found anything about this on the net.

Here's the connection string used by the application - I have tried various permutations of values to no avail:

Provider=SQLOLEDB.1;Initial Catalog=SampleDB;Data Source=(local)\sqlexpress;Trusted_Connection=yes

Thanks very much

hi, SQLExpress sets by default it's related databases a database propery that could be involved in your observation..

SQLExpress sets the auto close database property to true, so that every database with no active connection is shut down to preserve file integrity.. at next database connection, the database will be re-opened, requiring initial overhead to start it.. you can modify that database option as required...

this "solution" was probably taken out of the box becouse SQLEXpress is expected to execute on "client pcs" and not on "server" hardware that obviously are more "secure" and "stable"...

regards|||

Yes, thank you - switching the Auto Close to False works nicely.

I note that Auto Close is False by default on databases on the full SQL Server, so we only need to watch out for those databases put up on SQL Express.

Thanks again.

Performance Improvements: Hardware vs. DB Design

Hi
I recently designed & developed a data warehouse for a client of ours.
The project has been quite a success, but they have since loaded the
database with a substantial amount of (historic) data, and have given
access to the query & reporting system (Business Objects) to many more
users. As a result, the query time on the reports is beginning to
slow.
I need to advise the client on what should be done to improve
performance again.
I'm quite familiar with what can be done on the design side of the
database (i.e. indices, aggregate tables, etc.), but I am not very
sure about the cost/benefits from a hardware perspective. As a
consultancy, the cost of our services (to build aggregate tables, etc)
would be quite expensive. It might be a better option for the client
to improve the spec of the server instead, but I am reluctant to
recommend this option without knowing that there would be guaranteed
benefits. In other words, if they were to double the processing power,
would the query time effectively halve? What about memory - i.e. would
more memory help with multiple-users, but not affect the speed of the
reports? Is 4GB the max memory that Windows 2000 can have?
[I can't remember the exact spec, but the server's got something like
2 Xeon dual-processors, 4GB RAM & Win2k.]
Can anyone recommend a study or white-paper on the performance
improvements in relation to hardware upgrades, or something similar?
Otherwise, what other options do I have? Am I overlooking something
like two load-balanced servers?
Thanks for any help / recommendations!
Sean W.
Unfortunately, I think many companies are inclined to add more server
hardware without first trying to tune the database. You've gone ahead and
designed a database for the customer and added the necessary RI and
indexes. But now that you see the SQL executed, someone needs to tune
the queries, revise the indexes if necessary, and verify the reporting
system is executing SQL in an efficient way: For example, is it using
stored procedures? Do users have Ad Hoc access to the database to query
whatever they want? Does the reporting tool bring back large result sets
and filter on the client PC? Do the queries use a READ UNCOMMITTED or
NOLOCK query option to keep locks to a minimum?
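For reference, the READ UNCOMMITTED / NOLOCK options mentioned above can be applied either per table or per session (the Sales table and columns here are purely illustrative):

```sql
-- Per-table hint: read without requesting shared locks
SELECT OrderID, Amount
FROM Sales WITH (NOLOCK);

-- Session-level equivalent: all subsequent reads in the session are dirty reads
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT OrderID, Amount
FROM Sales;
```

Note that both trade consistency for concurrency: dirty reads can return uncommitted data, which is usually acceptable for reporting but not for transactional work.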
My point is that without performance tuning the queries, it's really
impossible to know whether adding more hardware will give you anything
but a temporary reprieve of poor performance.
When I hear about problems like this, I like to remember an old client
that had a couple hundred people hitting a SQL Server database which ran
on a dual-CPU 486 with 84 MB of RAM. I know times have changed, but when
I hear about performance problems on quad-Xeon systems with 3GB RAM (as
an example), I get concerned that a company is considering throwing
another $150K at a new server.
Ad Hoc reporting systems are notoriously bad on performance, especially
if users don't run canned reports and can throw what SQL the end-user
reporting tool can generate at the server. The amount of data queried
can easily move old data out of cache and force the next query to read
that data from disk, which is really slow. Adding more memory is an
option if plan reuse is the issue. And since memory is cheap, it can't
hurt to add all you can. Adding more hardware will help, but may not
really be a long term solution.
However, nothing beats being able to aptly use an index to generate a
result set. Table/Clustered Index Scans are a killer on big tables.
You may want to consider placing query limits on ad hoc queries using
"SET QUERY_GOVERNOR_COST_LIMIT ". You can also use Profiler to track
those queries cosuming excessive CPU and see if the database or the
queries can be tuned.
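As a sketch, the query governor can be applied per session or server-wide (the threshold of 300 cost units is an arbitrary example):

```sql
-- Per session: refuse to run queries whose estimated cost exceeds 300
SET QUERY_GOVERNOR_COST_LIMIT 300;

-- Server-wide default (requires sysadmin; a value of 0 means no limit)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'query governor cost limit', 300;
RECONFIGURE;
```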
David Gugick
Imceda Software
www.imceda.com
|||With Business Objects it is worth asking the users to send you copies of
their reports. You can then see if indices need creating on various
tables etc.
Adrian
|||Adrian Edwards wrote:
> With Business Objects it is worth asking the users to send you copies
> of their reports. You can then see if indices need creating on various
> tables etc.
> Adrian
>
Good point. If you can get your hands on any canned reports or at least
have a process for quickly rolling out new or updated reports, you may
eliminate users creating queries that don't perform well.
You might want to consider having new reports created against a test
server before moving them into production. Not giving users the rights
to create reports on the production data may be difficult to manage.
Managers generally want fast results. If you can't do that, at least
introduce a review process so there's never a time when too many reports
are created without review.
You can do this passively using Profiler or a server-side trace. You
could trap the SQL:BatchCompleted and RPC:Completed events for CPU over
a certain level (say 1,000 ms) and review those SQL statements
periodically.
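A server-side trace along those lines might look like the following sketch (event 12 is SQL:BatchCompleted, column 1 is TextData, column 18 is CPU in milliseconds; the output file path is a made-up example):

```sql
DECLARE @TraceID int, @maxfilesize bigint, @on bit;
SET @maxfilesize = 50;
SET @on = 1;

-- Create the trace, writing to a file on the server
EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\slow_sql', @maxfilesize, NULL;

-- Capture TextData (column 1) and CPU (column 18) for SQL:BatchCompleted (event 12)
EXEC sp_trace_setevent @TraceID, 12, 1, @on;
EXEC sp_trace_setevent @TraceID, 12, 18, @on;

-- Only keep batches using 1,000 ms of CPU or more
-- (logical operator 0 = AND, comparison operator 4 = greater than or equal)
EXEC sp_trace_setfilter @TraceID, 18, 0, 4, 1000;

-- Start the trace
EXEC sp_trace_setstatus @TraceID, 1;
```

The trace file can later be opened in Profiler, or queried with fn_trace_gettable, to review the offending statements.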
David Gugick
Imceda Software
www.imceda.com