Friday, March 30, 2012
Performance in big UNION
Before migrating an application to SQL 2005, I am checking its performance and
comparing it with the performance in SQL 2000. I have found a big query that
has 1000 selects of constant values joined by UNION ALL; on the first
execution it takes more than 20 seconds to run, whereas in SQL 2000 it takes
less than 3 seconds. Once the plan is in cache, the executions that follow run in 3 seconds.
You can reproduce it with
select 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25
union all
select 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25
and repeating it until there are 1000 UNION ALLs.
Does anybody know what 2005 is doing during compilation? And, of course, is
there any workaround that avoids splitting the query into a set of smaller queries?
Thanks in advance.
Ariel wrote:
> Before migrating an application to SQL2005, I am checking the performance of
> it and comparing it with the performance in SQL 2000. I have found a big
> query that has 1000 selects of constant values joined by union all; in the
> first execution, it takes more than 20 seconds to run but in sql 2000 it
> takes less than 3 seconds, but when the plan is in cache the executions that
> follow run in 3 seconds.
> You can reproduce it with
> select 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25
> union all
> select 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25
> and repeating it till 1000 unions all.
Why are you comparing the performance of such a hm... exotic query?
Do you use it in your application?
The first execution of your query takes many seconds because the parse and
compile step takes a lot of time. Nothing can be done about it. The query
processor has been totally rebuilt in SQL Server 2005; you're experiencing a
side effect of those changes.
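You can watch this yourself: SET STATISTICS TIME ON reports the parse and compile time separately from the execution time. A minimal sketch, wrapped around a shortened version of your repro:
SET STATISTICS TIME ON;
select 1,2,3 union all
select 1,2,3;
SET STATISTICS TIME OFF;
-- The Messages output shows "SQL Server parse and compile time" and
-- "SQL Server Execution Times"; on a cold run of the 1000-branch batch,
-- almost all of the elapsed time is parse and compile time.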
> Does anybody know what 2005 is doing in the compilation? and, of course, if
> there is any workaround to avoid splitting the query into a subset of queries.
Why do you need this query?
Best regards,
Marcin Guzowski
http://guzowski.info
I assume that the values within this 1000-line select are static.
Why not load them into a table and select from the table?
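A minimal sketch of that approach (table and column names are hypothetical, trimmed to 3 of the 25 columns):
-- One-time load of the static values
CREATE TABLE StaticValues (c1 int, c2 int, c3 int)
INSERT INTO StaticValues (c1, c2, c3) VALUES (1, 2, 3)
-- ...repeat for the remaining rows
-- The application then reads them back with a trivial, cheap-to-compile plan:
SELECT c1, c2, c3 FROM StaticValues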
--
Denny
MCSA (2003) / MCDBA (SQL 2000)
MCTS (SQL 2005 / Microsoft Windows SharePoint Services 3.0: Configuration /
Microsoft Office SharePoint Server 2007: Configuration)
MCITP (dbadmin, dbdev)
The query written in this post is only there to reproduce the problem; the
legacy code of my application uses similar code to insert data from a grid
into a table. But with the example query you can reproduce the problem.
Isn't there a way to make the optimizer run faster?
Six times slower than in 2000 seems too much to be a side effect of changes
in 2005, doesn't it?
Ariel wrote:
> The query written in this post is only to reproduce the problem, the legacy
> code of my application is using a similar code to insert data from a grid to
> a table. But with the query of the example you can reproduce the problem.
> Isn't there a way to make the optimizer run faster?
Of course there is a way - faster server.
> Six times slower than in 2000 seems to be too much to be a side effect due
> to changes in 2005 , isn't it?
No, it's not too much.
Use the right method for your task and performance won't be a bottleneck.
I would choose one of two ways to insert data from the app into a table.
First (classic, many inserts in one batch, all in one transaction):
BEGIN TRAN
-- 'GridData' and its columns are placeholders for your real target table;
-- one INSERT per grid row
INSERT INTO GridData (c1, c2, c3) VALUES (1, 2, 3)
INSERT INTO GridData (c1, c2, c3) VALUES (4, 5, 6)
INSERT INTO GridData (c1, c2, c3) VALUES (7, 8, 9)
-- ...
COMMIT TRAN
Second (dynamic SQL - very fast):
INSERT table
EXEC
('
select 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25
select 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25
select 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25
');
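A tiny runnable version of the second method, as a sketch (the table name GridDemo is hypothetical; each SELECT inside the EXEC must match the target table's column count):
CREATE TABLE GridDemo (c1 int, c2 int, c3 int)
INSERT INTO GridDemo
EXEC
('
select 1, 2, 3
select 4, 5, 6
select 7, 8, 9
')
-- INSERT ... EXEC captures every result set the batch returns, so each
-- one-row SELECT lands in GridDemo, and each small statement compiles quickly.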
Best regards,
Marcin Guzowski
http://guzowski.info
Since it is SQL 2005, you can actually force the plan. See BOL: Plan
Forcing Scenario: Create a Plan Guide That Uses a USE PLAN Query Hint
Not sure if that will help in this scenario.
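For reference, a sketch of the plan-guide call (all names here are hypothetical; @stmt must match the application's statement text exactly, and the XML placeholder must be replaced with real SET SHOWPLAN_XML ON output):
EXEC sp_create_plan_guide
    @name = N'BigUnionGuide',
    @stmt = N'select 1,2,3 union all select 1,2,3',
    @type = N'SQL',
    @module_or_batch = NULL,
    @params = NULL,
    @hints = N'OPTION (USE PLAN N''<ShowPlanXML ...>...</ShowPlanXML>'')'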
You mention a grid as the actual source of the problem and this fabricated
massive UNION ALL as just a demo. Can you post what the grid is doing, or
fix the root cause there yourself?
TheSQLGuru
President
Indicium Resources, Inc.
Thanks for your ideas; I was just trying to avoid changing code and to learn
what the optimizer is doing during those 20 seconds. I'll use your ideas to
change the code. I have tried splitting the query into several with 'only' 15
UNIONs each, and the results are good enough for the app, although they are
still worse than in 2000.
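The shape of that workaround, as a sketch (table name and values are hypothetical):
-- Instead of one 1000-branch UNION ALL, send many small statements:
INSERT INTO GridData (c1, c2, c3)
select 1, 2, 3 union all
select 4, 5, 6 union all
select 7, 8, 9 -- ...up to ~15 branches per statement
-- then repeat with the next chunk of rows; each small statement compiles quickly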
Do you know of any information source where I can read about this kind of
'surprise' with the query processor?
Performance improves with Studio Query window open
Bit of a strange one here. We have a SQL Express instance running with one database which is accessed by a VB6 client application.
Performance between the application and the database isn't great, but bizarrely if you open a Query window from the Management Studio (against the database) the performance dramatically improves - to the extent that it is visually noticeable from within the application.
So I'm thinking that the database connections being made by the application are taking longer because of instance name resolution or the like, and that with the Query window open this resolution is performed more quickly.
Has anyone come across this situation? I can re-create it on several different computers each with their own Express instance and I've not found anything about this on the net.
Here's the connection string used by the application - I have tried various permutations of values to no avail:
Provider=SQLOLEDB.1;Initial Catalog=SampleDB;Data Source=(local)\sqlexpress;Trusted_Connection=yes
Thanks very much
Hi. SQL Express sets, by default, a database property on its databases that could explain your observation: the auto close property is set to true, so every database with no active connection is shut down to preserve file integrity. At the next connection, the database is re-opened, which requires some startup overhead. You can modify that database option as required.
This default was probably chosen because SQL Express is expected to run on "client PCs" rather than on "server" hardware, which is obviously more "secure" and "stable"...
regards
Yes, thank you - switching Auto Close to False works nicely.
I note that Auto Close is False by default for databases on the full SQL Server, so we only need to watch out for databases put up on SQL Express.
Thanks again.
performance implications of collation changes
For the most part, my (SQL Server 2005) application prefers to be case
insensitive, so my default database collation is case insensitive. However,
there are a few columns which I really want to be case-sensitive. Is there
any significant performance impact to setting some columns to have a
different collation from the database as a whole?
...Mike
"Mike Kraley" <mkraley@.community.nospam> wrote in message
news:B7D9FD39-82D5-4256-BCCA-A11CD3A693D8@.microsoft.com...
> for the most part, my (sql server 2005) application prefers to be case
> insensitive, so my default database collation is case insensitive. However
> there are a few columns which I really want to be case-sensitive. Is there
> any significant performance impact to setting some columns to have a
> different collation from the database as a whole?
> --
> ...Mike
No, in fact being case-sensitive would have a minor performance boost.
(Even better would be to use binary collation.)
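A minimal sketch of a per-column collation (table and column names are hypothetical):
CREATE TABLE Accounts (
    name  varchar(50) COLLATE SQL_Latin1_General_CP1_CI_AS, -- case-insensitive
    token varchar(50) COLLATE Latin1_General_CS_AS          -- case-sensitive
)
-- Comparisons against token are case-sensitive by default:
SELECT name FROM Accounts WHERE token = 'AbC'
-- An ad hoc case-sensitive test against any column can use COLLATE inline:
SELECT name FROM Accounts WHERE name COLLATE Latin1_General_CS_AS = 'AbC'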
Hello Mike,
I agree with Michael that a case-sensitive collation may give a minor
performance boost. You may want to refer to the following link for more
discussion on this topic:
How to perform case sensitive searches in SQL Server?
http://vyaskn.tripod.com/case_sensitive_search_in_sql_server.htm
Please feel free to let us know if you have any further comments or
concerns. Thank you.
Best Regards,
Peter Yang
MCSE2000/2003, MCSA, MCDBA
Microsoft Online Community Support
Performance Help Help....
I have the following setup -
Database Server hosting (Report Server database + application database)
Hardware config - 4 CPUs (1.6 GHz), 8 GB RAM, about 15 GB RAID 5 drive (75 GB
disk space)
Software - Windows 2000 Advanced Server SP4, SQL Server 2000 SP3
IIS Server (Report Server, application website)
Hardware config - 2 CPUs (1130 MHz), 1 GB RAM
Software - Windows 2000 Server SP4, Reporting Services with SP1
Flow of the application - a user visits the application web site and queries
the data (2.3 million records for a month); internally, the application passes
this query to the Report Server through a URL and the data comes back to the user.
This works fine as long as the user queries 2-3 days of data, but it
starts throwing "Server not available..." if they go beyond that. (I saw
that on the web server aspnet_wp.exe starts crashing.)
Any suggestions or solutions are welcome. I'm thinking of putting the Report
Server web site on the database server, but I want to avoid that as I don't
want to run IIS alongside SQL 2000 (maybe I'm wrong).
Thanks in advance,
Suhaib
Do you have the /3GB flag set in c:\boot.ini? Take a look here for some common
things:
http://blogs.msdn.com/tudortr/archive/2004/06/28/167969.aspx
Another thing to do would be to monitor the Process | Private Bytes
performance counter for the aspnet_wp.exe process.
--
Tudor Trufinescu
Dev Lead
Sql Server Reporting Services
How many rows are returned to the report?
Do you have some aggregated data? Is it SQL Server Standard or Enterprise
Edition?
Have you tried using OLAP cubes?
What is the size of the result set? How many pages? Also, what are you
rendering it as (HTML, PDF, etc.)? How long does it take from Query Analyzer?
Are you sending lots of data to RS and letting it filter, or letting the
query itself filter out the data before it arrives at RS?
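If the report is doing the filtering, a sketch of pushing it into the source query instead (table, column, and parameter names are hypothetical):
-- Let the database return only the requested window, not the whole month:
SELECT col1, col2, col3
FROM SalesDetail
WHERE trade_date >= @StartDate
  AND trade_date < @EndDate
-- RS then renders a small result set instead of receiving millions of rows.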
Bruce L-C
How many rows are returned to the report?
For 5 days - 430,000.
Do you have some aggregated data?
Yes, but we are not using any filter option on the Report Server side.
Is it SQL Server Standard or Enterprise Edition?
Enterprise.
Have you tried using OLAP cubes?
Not now.
We are upgrading our current reports from pure HTML using ASP to ASP.NET
with Reporting Services. We used to allow users to query data for 3 months;
these months were not predefined, so users could select Jan to March or Feb
to April. With Reporting Services we cannot cross more than 3 days, forget
about going across 3 months.
The database is partitioned based on Month, Vendor, and Year ID. (Currently
we just have one vendor and one month of data, so I can't blame my partitioning.)
Suhaib
430,000 rows?
Wow!!!! That's very, very big!!!
Why do you return so much detail in your report?
If you use aggregate functions in RS, the time to process the report will be
huge!!! In that case, I recommend creating multiple datasets (one for the
detail, one for the aggregated data, with a SQL statement that does the
calculations (sums, GROUP BY...)).
Maybe you can schedule the report and save it as a snapshot report (or save
it as a PDF in a network folder).
Do a report in Query Analyzer and return 430,000 rows. Now wait, and wait
some more, and wait some more. If your customer needs a data extract (for
instance, to load their own data mart) then you should use DTS. Otherwise, you
should have reports that provide drill-through, which allows the customer to
get to the data they are interested in. No human is going to browse 430,000
rows. I suggest you look at how to do drill-through. RS supports drill-through
reports very cleanly and it is the way to go.
As far as scheduling a PDF goes: this would be a 5,000-page PDF report, which
is not reasonable to expect a system to produce. I strongly suggest
re-analyzing your requirements and solutions. Nobody will have a speedy
solution for a 5,000-page report. Again, if you are extracting data for
someone's data mart then use DTS.
Bruce L-C
Wednesday, March 28, 2012
Performance effect of using maxOccurs="unbounded"
In our application we have a few elements in our schema where we have not set limits on the number of occurrences. For these particular elements, the maxOccurs attribute is set to "unbounded".
Does this have any performance effect on any XML indexes that we may add later to improve performance? Does it slow anything down internally in SQL Server when performing any DML operations on our stored XML?
It does not make a difference to XML indexing whether maxOccurs is set to unbounded or a fixed value (> 1, say 6). A separate entry is created in the XML index for each occurrence of the element, so index lookups and other optimizations will occur the same way. There should be no performance difference between the two choices.
If maxOccurs is 1, then you should set it to 1.
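For reference, a minimal sketch of adding an XML index later (table and column names are hypothetical; the table needs a clustered primary key first):
CREATE TABLE Docs (
    id  int IDENTITY PRIMARY KEY, -- clustered PK is required for XML indexes
    doc xml NOT NULL
)
CREATE PRIMARY XML INDEX PXML_Docs_doc ON Docs (doc)
-- Each occurrence of a repeating element gets its own row in the index,
-- whether the schema says maxOccurs="6" or maxOccurs="unbounded".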
Thank you,
Shankar Pal, Program Manager, SQL Server
Performance effect if SQL2000 and SQL2005 on same box
Hi everyone,
I am going to upgrade SQL 2000 to SQL 2005 because a 3rd-party application
requires SQL 2005. My plan is to upgrade my current database from SQL 2000 to
SQL 2005, while keeping SQL 2000 as a fallback plan.
Is there any impact on performance if I have SQL2005 and SQL2000 running on
the same machine?
Thanks a lot for any help in advance.
Han.
You will obviously have resource usage if you have both services started at the same time. But having 2000 installed but not started will not degrade the performance of a 2005 install on that machine.
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://sqlblog.com/blogs/tibor_karaszi
The issues are no different from running two SQL 2000 instances or two
SQL 2005 instances.
Linchi
Monday, March 26, 2012
Performance diff when sp executed from apps and from Query Analyzer
Hi,
I'm noticing a performance difference when I execute a proc from the
application: the duration shown in Profiler is higher than when I execute the
same proc from Query Analyzer.
Does anyone have any clue why this difference occurs? What could be causing
the overhead?
Thanks.
Arzan wrote:
> Hi,
> Im noticing a performance diff when i execute a proc from the
> application the duration shown in the profiler is higher than when i
> execute the same proc from query analyze.
> Does anyone have any clue on why this difference is occuring. What
> could be causing the overhead.
> Thanks.
Duration is sometimes an application issue. The duration for a query is
determined by how long a query takes to execute plus how long the
application takes to fetch all the results in the result set. QA fetches
rows immediately and performs little to no processing on the returned
data, so durations are as fast as they can be. OTOH, an application may
be performing row processing on the result set or not fetching all at
once causing delays in the final duration figure.
You can test this from SQL EM and Profiler.
Run Profiler with the SQL:StmtStarting and SQL:StmtCompleted events.
Find a table with a lot of rows and right-click and select Return All
Rows.
You'll see the SQL:StmtStarting event with the Select * From...
Go back to SQL EM and CTRL + END to move to the end of the result set
Now you see the SQL:StmtCompleted in Profiler.
I'm not saying this is your problem. Can you provide more details about
your app and what it's doing with the results, if any?
David Gugick
Imceda Software
www.imceda.com
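One rough way to separate server execution time from client fetch time is to time the call entirely on the server, e.g. from Query Analyzer. A sketch, where dbo.MyProc is a hypothetical stand-in for the actual procedure:
-- Time the procedure server-side; compare this with the duration the app reports
DECLARE @start datetime;
SET @start = GETDATE();
EXEC dbo.MyProc;  -- hypothetical procedure name
SELECT DATEDIFF(ms, @start, GETDATE()) AS server_side_ms;
If the server-side number is small but the app still reports long durations, the time is going into result-set processing on the client, as described above.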
Friday, March 23, 2012
Performance Counters Missing
After installing a SharePoint application onto my SQL Server 2000, all the performance counters for SQL went missing.
How do I get them back and would I need to restart SQL/Reboot afterwards?
On NT machines I have heard about executing
diskperf -y at the command prompt and rebooting to get a few performance counters
enabled...
Thanks,
Sree
"Woo" wrote:
> After installing Sharepoint application on to my SQL Server 2000, all the
> performance counters for SQL went missing.
> How do I get them back and would I need to restart SQL/Reboot afterwards?
>
|||The server is Windows 2003.
"Sreejith G" wrote:
> In NT machine heard about executing,
> diskperf -y in command prompt and reboot to get few performance counter
> enabled...
> Thanks,
> Sree
> "Woo" wrote:
Performance counters disappeared
I have an application written in C++ that queries all
available performance counters via the RegQueryValueEx() API
at a 10 second interval on some Windows 2000 computers that
have SQL Server 7.0 installed. After the application has
run for some period of time (several days or longer), the
SQL Server related performance counters disappeared. They
are disabled by the OS. The knowledge base article
Q248993 mentions that the performance library of Windows
2000 disables any performance DLL that returns data in the
incorrect format, causes an unhandled program fault, or
takes too long to return the performance data (this is
usually the cause of a library being disabled). It is
obvious that the incorrect-format case does not apply here,
since it would have caused the problem right after the application
started, not several days later. Does anyone know what causes
the problem?
Did you see Q812915?
"Tianxiang Zhang" <tianxiang_zhang@.bmc.com> wrote in message
news:013701c344b5$7f9caed0$a501280a@.phx.gbl...
> I have an application written in C++ that querys all
> available performance counters via RegQueryValueEx() API
> at 10 second interval on some Windows 2000 computers that
> have SQL Server 7.0 installed. After the application have
> run for some period of time (several days or longer), the
> SQL Server related performance counters disappeared. They
> are disabeld by the OS. The knowledge base article
> Q248993 mentions that the performance library of Windows
> 2000 disables any performance DLL that returns data in the
> incorrect format, causes an unhandled program fault, or
> takes too long to return the performance data (this is
> usually the cause of a library being disabled). It is
> obvious the incorrect format does not apply to my case
> since it would have caused the problem right after it
> started not several days later. Anyone knows what causes
> the problem?|||I did. They don't seem to be the same issue. Thanks for
the help.
>--Original Message--
>Did you see Q812915?
>
>"Tianxiang Zhang" <tianxiang_zhang@.bmc.com> wrote in message
>news:013701c344b5$7f9caed0$a501280a@.phx.gbl...
>> I have an application written in C++ that queries all
>> available performance counters via RegQueryValueEx() API
>> at 10 second interval on some Windows 2000 computers that
>> have SQL Server 7.0 installed. After the application has
>> run for some period of time (several days or longer), the
>> SQL Server related performance counters disappeared. They
>> are disabled by the OS. The knowledge base article
>> Q248993 mentions that the performance library of Windows
>> 2000 disables any performance DLL that returns data in the
>> incorrect format, causes an unhandled program fault, or
>> takes too long to return the performance data (this is
>> usually the cause of a library being disabled). It is
>> obvious the incorrect format does not apply to my case
>> since it would have caused the problem right after it
>> started not several days later. Anyone knows what causes
>> the problem?
>
>.
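Per the mechanism described in Q248993, when Windows disables a performance DLL it records that fact under the service's Performance registry key. Checking and clearing that flag is one way to recover without a reboot. A sketch for the default instance (the key path is an assumption; verify it on your system):
rem Check whether the OS has flagged the SQL Server counter DLL as disabled
reg query "HKLM\SYSTEM\CurrentControlSet\Services\MSSQLServer\Performance" /v "Disable Performance Counters"
rem A value of 1 means disabled; set it back to 0 to re-enable the library
reg add "HKLM\SYSTEM\CurrentControlSet\Services\MSSQLServer\Performance" /v "Disable Performance Counters" /t REG_DWORD /d 0 /f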
Wednesday, March 21, 2012
Performance cost of joins
The main purpose is to get high performance from the system. The system is supposed to produce reports that contain information from about 100,000 records from 5 or 6 related tables.
Which of these approaches (a normalized schema queried with joins, or a single denormalized table) is better from the performance point of view? What is the performance cost of using joins over getting data from one table?
Generally, one should not blindly assume that it's always 'normalized and slow' vs. 'de-normalized and fast'. Normalized databases typically perform well when reading data too, provided that the optimizer is able to benefit from useful indexes for the frequent joins. Having said that, there are obviously corner cases where de-normalization does improve things for certain data access patterns, especially when de-normalization allows you to get rid of some indexes and hence reduce the update costs. But, again, it all depends on your particular logical database schema and the prevalent data access/update patterns.|||
It is generally better to normalize your tables for better data integrity and to reduce duplicate data. After you have a good normalized foundation, then you can think about denormalizing with rollup tables for exceptional cases. For an OLTP workload, you may see some performance issues if you have frequently executed queries that have joins to more than four or five tables, since the query optimizer has to use heuristics to come up with a plan.
Your indexing strategy will have more effect on performance than anything else. You have to analyze how volatile your tables are and what kind of workload you have in order to determine what indexes to create. There are good DMV queries that will let you easily see which queries are being executed the most and which indexes are being used. Here is an example:
-- Get Top 200 executed SPs ordered by calls per minute
SELECT TOP 200 qt.text AS 'SP Name', qs.execution_count AS 'Execution Count',
    qs.total_worker_time/ISNULL(qs.execution_count, 1) AS 'AvgWorkerTime',
    qs.total_worker_time AS 'TotalWorkerTime',
    qs.total_elapsed_time/ISNULL(qs.execution_count, 1) AS 'AvgElapsedTime',
    qs.max_logical_reads, qs.max_logical_writes, qs.creation_time,
    DATEDIFF(MINUTE, qs.creation_time, GETDATE()) AS 'Age in Cache',
    -- NULLIF guards against divide-by-zero for plans cached less than a minute/second ago
    qs.execution_count/NULLIF(DATEDIFF(MINUTE, qs.creation_time, GETDATE()), 0) AS 'Calls/Minute',
    qs.execution_count/NULLIF(DATEDIFF(SECOND, qs.creation_time, GETDATE()), 0) AS 'Calls/Second'
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
WHERE qt.dbid = 5 -- change this to the database ID you are interested in
ORDER BY qs.execution_count/NULLIF(DATEDIFF(SECOND, qs.creation_time, GETDATE()), 0) DESC
It's a question of trade-offs: for faster DML you need normalized structures, while for reports a denormalized design is generally faster, so you have to decide where you actually want to optimize performance.
For me a normalized database does the trick, as it may prove to be more helpful in the long run, i.e. you always have greater integrity of data.
For reports you may use views to avoid repeating the joins.
So, optimal for me = normalized + views.
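To make the "normalized + views" suggestion concrete, here is a sketch with hypothetical table and column names: a reporting view that keeps the joins in one place so report queries stay simple.
-- Hypothetical reporting view: reports select from one object, the joins live here
CREATE VIEW dbo.vOrderReport
AS
SELECT o.OrderID, o.OrderDate,
       c.CustomerName,
       p.ProductName,
       d.Quantity, d.UnitPrice
FROM dbo.Orders       AS o
JOIN dbo.OrderDetails AS d ON d.OrderID    = o.OrderID
JOIN dbo.Customers    AS c ON c.CustomerID = o.CustomerID
JOIN dbo.Products     AS p ON p.ProductID  = d.ProductID;
Reports then simply SELECT from dbo.vOrderReport; if the join ever becomes a measured bottleneck, such a view is also a natural candidate for an indexed view.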
Performance collapse on writing to database
We have an application that stores online measurement data in an MSDE
database.
Some statistics: 20 x 1kb per minute.
We had no problems for several months at many locations.
On some of our systems we saw that SQL Server took a very large share of the
memory, and some of our customers complained about this. So we limited the
maximum server memory of SQL Server to half of the available RAM
(most of the machines had 1GB RAM).
After SQL Server had then been running for a while, we saw that writing to the
database got slower and slower; in the end it took about 30 seconds or
more for a single INSERT. After stopping and restarting SQL Server,
everything worked fine again.
The time for reading from the database was never affected.
Can anybody explain this to me?
Is there a way to avoid it?
I hope on many useful answers!
Thanks!
What is the SQL Server instance doing in those 30 seconds? Is it waiting on I/O? Or
blocked by something? A select from sysprocesses is a good start (a sketch follows
after the quoted message below). There is some good information on the web for
analyzing blocking issues.
--
Wei Xiao [MSFT]
SQL Server Storage Engine Development
http://blogs.msdn.com/weix
"Frank Esser" <Mistral@.nurfuerspam.de> wrote in message
news:OGWumWSgFHA.2700@.TK2MSFTNGP15.phx.gbl...
> Hello!
> We have an application that stores online measurement data in a MSDE
> database.
> Some statistics: 20 x 1kb per minute.
> We had no problems for several months at many locations.
> On some of our systems we saw that the SQL Server took very much of the
> memory and some of the customers complained about this. So we limited the
> maximum server memory of SQL Server to half of the available RAM memory
> (most of the machines had 1GB RAM).
> After running the SQL Server then for a while we saw that the writing to the
> database got slower and slower and in the end it took about 30 seconds and
> more for a single INSERT. After stopping and restarting the SQL Server
> everything worked fine again.
> The time for reading out of the database was never influenced.
> Can anybody explain this to me?
> Is there a way to avoid it?
> I hope on many useful answers!
> Thanks!
>
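As a concrete starting point for the sysprocesses suggestion above, run something like this from a second connection while an INSERT is being slow:
-- Show sessions that are blocked or currently waiting
SELECT spid, blocked, waittime, lastwaittype, cmd, status
FROM master.dbo.sysprocesses
WHERE blocked <> 0 OR waittime > 0;
A nonzero blocked column points at the blocking spid; a large waittime with a lastwaittype such as WRITELOG or PAGEIOLATCH_* suggests the instance is waiting on I/O.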