Friday, March 30, 2012
Performance Improvements: Hardware vs. DB Design
Hi
I recently designed & developed a data warehouse for a client of ours.
The project has been quite a success, but they have since loaded the
database with a substantial amount of (historic) data, and have given
access to the query & reporting system (Business Objects) to many more
users. As a result, the query time on the reports is beginning to
slow.
I need to advise the client on what should be done to improve
performance again.
I'm quite familiar with what can be done on the design side of the
database (i.e. indices, aggregate tables, etc.), but I am not very
sure about the cost/benefits from a hardware perspective. As a
consultancy, the cost of our services (to build aggregate tables, etc)
would be quite expensive. It might be a better option for the client
to improve the spec of the server instead, but I am reluctant to
recommend this option without knowing that there would be guaranteed
benefits. In other words, if they were to double the processing power,
would the query time effectively halve? What about memory - i.e. would
more memory help with multiple-users, but not affect the speed of the
reports? Is 4GB the max memory that Windows 2000 can have?
[I can't remember the exact spec, but the server's got something like
2 Xeon dual-processors, 4GB RAM & Win2k.]
Can anyone recommend a study or white-paper on the performance
improvements in relation to hardware upgrades, or something similar?
Otherwise, what other options do I have? Am I overlooking something
like two load-balanced servers?
Thanks for any help / recommendations!
Sean W.
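[Editor's illustration, not part of Sean's post: one form the "aggregate
tables" option could take in SQL Server T-SQL. The fact table, columns, and
grain below are hypothetical.]

-- Hypothetical summary table: pre-aggregate the fact table by month and
-- product so reports hit a few thousand rows instead of the full history.
CREATE TABLE dbo.SalesSummaryMonthly (
    CalendarMonth datetime NOT NULL,   -- first day of the month
    ProductKey    int      NOT NULL,
    TotalQuantity int      NOT NULL,
    TotalAmount   money    NOT NULL,
    CONSTRAINT PK_SalesSummaryMonthly PRIMARY KEY (CalendarMonth, ProductKey)
)

-- Refreshed by the nightly load:
INSERT INTO dbo.SalesSummaryMonthly
    (CalendarMonth, ProductKey, TotalQuantity, TotalAmount)
SELECT DATEADD(month, DATEDIFF(month, 0, f.SaleDate), 0),  -- truncate to month
       f.ProductKey,
       SUM(f.Quantity),
       SUM(f.Amount)
FROM dbo.FactSales AS f
GROUP BY DATEADD(month, DATEDIFF(month, 0, f.SaleDate), 0), f.ProductKey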

Sean wrote:
> Hi
> I recently designed & developed a data warehouse for a client of ours.
> The project has been quite a success, but they have since loaded the
> database with a substantial amount of (historic) data, and have given
> access to the query & reporting system (Business Objects) to many more
> users. As a result, the query time on the reports is beginning to
> slow.
> I need to advise the client on what should be done to improve
> performance again.
> I'm quite familiar with what can be done on the design side of the
> database (i.e. indices, aggregate tables, etc.), but I am not very
> sure about the cost/benefits from a hardware perspective. As a
> consultancy, the cost of our services (to build aggregate tables, etc)
> would be quite expensive. It might be a better option for the client
> to improve the spec of the server instead, but I am reluctant to
> recommend this option without knowing that there would be guaranteed
> benefits. In other words, if they were to double the processing power,
> would the query time effectively halve? What about memory - i.e. would
> more memory help with multiple-users, but not affect the speed of the
> reports? Is 4GB the max memory that Windows 2000 can have?
> [I can't remember the exact spec, but the server's got something like
> 2 Xeon dual-processors, 4GB RAM & Win2k.]
> Can anyone recommend a study or white-paper on the performance
> improvements in relation to hardware upgrades, or something similar?
> Otherwise, what other options do I have? Am I overlooking something
> like two load-balanced servers?
> Thanks for any help / recommendations!
> Sean W.
Unfortunately, I think many companies are inclined to add more server
without first trying to tune the database. You've gone ahead and
designed a database for the customer and added the necessary RI and
indexes. But now that you see the SQL executed, someone needs to tune
the queries, revise the indexes if necessary, and verify the reporting
system is executing SQL in an efficient way: For example, is it using
stored procedures? Do users have Ad Hoc access to the database to query
whatever they want? Does the reporting tool bring back large result sets
and filter on the client PC? Do the queries use a READ UNCOMMITTED or
NOLOCK query option to keep locks to a minimum?
My point is that without performance tuning the queries, it's really
impossible to know whether adding more hardware will give you anything
but a temporary reprieve of poor performance.
When I hear about problems like this, I like to remember an old client
that had a couple hundred people hitting a SQL Server database which ran
on a dual-CPU 486 with 84 MB of RAM. I know times have changed, but when
I hear about performance problems on quad-Xeon systems with 3GB RAM (as
an example), I get concerned that a company is considering throwing
another $150K at a new server.
Ad Hoc reporting systems are notoriously bad on performance, especially
if users don't run canned reports and can throw what SQL the end-user
reporting tool can generate at the server. The amount of data queried
can easily move old data out of cache and force the next query to read
that data from disk, which is really slow. Adding more memory is an
option if plan reuse is the issue. And since memory is cheap, it can't
hurt to add all you can. Adding more hardware will help, but may not
really be a long term solution.
However, nothing beats being able to aptly use an index to generate a
result set. Table/Clustered Index Scans are a killer on big tables.
You may want to consider placing query limits on ad hoc queries using
"SET QUERY_GOVERNOR_COST_LIMIT ". You can also use Profiler to track
those queries consuming excessive CPU and see if the database or the
queries can be tuned.
--
David Gugick
Imceda Software
www.imceda.com
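[Editor's illustration, not from David's post: a minimal sketch of two of the
options he mentions, in SQL Server 2000 T-SQL. The cost limit value and the
table name are made up.]

-- Refuse to start any query on this connection whose estimated cost exceeds
-- 300 (the unit is roughly estimated seconds on a reference machine):
SET QUERY_GOVERNOR_COST_LIMIT 300

-- Read without taking shared locks (dirty reads are possible), so a long
-- report query does not block or get blocked by writers:
SELECT CustomerKey, SUM(Amount) AS TotalAmount
FROM dbo.FactSales WITH (NOLOCK)   -- same effect as READ UNCOMMITTED here
GROUP BY CustomerKey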

With Business Objects it is worth asking the users to send you copies of
their reports. You can then see if indices need creating on various
tables etc.
Adrian
David G. wrote:
> Sean wrote:
>>Hi
>>I recently designed & developed a data warehouse for a client of ours.
>>The project has been quite a success, but they have since loaded the
>>database with a substantial amount of (historic) data, and have given
>>access to the query & reporting system (Business Objects) to many more
>>users. As a result, the query time on the reports is beginning to
>>slow.
>>I need to advise the client on what should be done to improve
>>performance again.
>>I'm quite familiar with what can be done on the design side of the
>>database (i.e. indices, aggregate tables, etc.), but I am not very
>>sure about the cost/benefits from a hardware perspective. As a
>>consultancy, the cost of our services (to build aggregate tables, etc)
>>would be quite expensive. It might be a better option for the client
>>to improve the spec of the server instead, but I am reluctant to
>>recommend this option without knowing that there would be guaranteed
>>benefits. In other words, if they were to double the processing power,
>>would the query time effectively halve? What about memory - i.e. would
>>more memory help with multiple-users, but not affect the speed of the
>>reports? Is 4GB the max memory that Windows 2000 can have?
>>[I can't remember the exact spec, but the server's got something like
>>2 Xeon dual-processors, 4GB RAM & Win2k.]
>>Can anyone recommend a study or white-paper on the performance
>>improvements in relation to hardware upgrades, or something similar?
>>Otherwise, what other options do I have? Am I overlooking something
>>like two load-balanced servers?
>>Thanks for any help / recommendations!
>>Sean W.
>
> Unfortunately, I think many companies are inclined to add more server
> without first trying to tune the database. You've gone ahead and
> designed a database for the customer and added the necessary RI and
> indexes. But now that you see the SQL executed, someone needs to tune
> the queries, revise the indexes if necessary, and verify the reporting
> system is executing SQL in an efficient way: For example, is it using
> stored procedures? Do users have Ad Hoc access to the database to query
> whatever they want? Does the reporting tool bring back large result sets
> and filter on the client PC? Do the queries use a READ UNCOMMITTED or
> NOLOCK query option to keep locks to a minimum?
> My point is that without performance tuning the queries, it's really
> impossible to know whether adding more hardware will give you anything
> but a temporary reprieve of poor performance.
> When I hear about problems like this, I like to remember an old client
> that had a couple hundred people hitting a SQL Server database which ran
> on a dual-CPU 486 with 84 MB of RAM. I know times have changed, but when
> I hear about performance problems on quad-Xeon systems with 3GB RAM (as
> an example), I get concerned that a company is considering throwing
> another $150K at a new server.
> Ad Hoc reporting systems are notoriously bad on performance, especially
> if users don't run canned reports and can throw what SQL the end-user
> reporting tool can generate at the server. The amount of data queried
> can easily move old data out of cache and force the next query to read
> that data from disk, which is really slow. Adding more memory is an
> option if plan reuse is the issue. And since memory is cheap, it can't
> hurt to add all you can. Adding more hardware will help, but may not
> really be a long term solution.
> However, nothing beats being able to aptly use an index to generate a
> result set. Table/Clustered Index Scans are a killer on big tables.
> You may want to consider placing query limits on ad hoc queries using
> "SET QUERY_GOVERNOR_COST_LIMIT ". You can also use Profiler to track
> those queries consuming excessive CPU and see if the database or the
> queries can be tuned.

Adrian Edwards wrote:
> With Business Objects it is worth asking the users to send you copies
> of their reports. You can then see if indices need creating on various
> tables etc.
> Adrian
>
Good point. If you can get your hands on any canned reports or at least
have a process for quickly rolling out new or updated reports, you may
prevent users from creating queries that don't perform well.
You might want to consider having new reports created against a test
server before moving them into production. Not giving users the rights
to create reports on the production data may be difficult to manage.
Managers generally want fast results. If you can't do that, at least
introduce a review process so there's never a time when too many reports
are created without review.
You can do this passively using Profiler or a server-side trace. You
could trap the SQL:BatchCompleted and RPC:Completed events for CPU over
a certain level (say 1,000ms) and review those SQL Statements
periodically.
David Gugick
Imceda Software
www.imceda.com
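[Editor's sketch of the server-side trace David describes, using the SQL
Server 2000 sp_trace_* procedures; the file path, file size, and 1,000 ms CPU
threshold are assumptions.]

-- Create a server-side trace capturing RPC:Completed (event 10) and
-- SQL:BatchCompleted (event 12) where the CPU column (18) is >= 1,000 ms.
DECLARE @TraceID int, @maxfilesize bigint, @on bit, @cpu int
SET @maxfilesize = 50            -- MB per rollover file (assumption)
SET @on = 1
SET @cpu = 1000                  -- CPU threshold in milliseconds

EXEC sp_trace_create @TraceID OUTPUT, 2, N'C:\Traces\ExpensiveQueries',
     @maxfilesize

-- Capture TextData (1), Duration (13) and CPU (18) for both events
EXEC sp_trace_setevent @TraceID, 10, 1,  @on
EXEC sp_trace_setevent @TraceID, 10, 13, @on
EXEC sp_trace_setevent @TraceID, 10, 18, @on
EXEC sp_trace_setevent @TraceID, 12, 1,  @on
EXEC sp_trace_setevent @TraceID, 12, 13, @on
EXEC sp_trace_setevent @TraceID, 12, 18, @on

-- Keep only rows where CPU (column 18) is >= (operator 4) the threshold
EXEC sp_trace_setfilter @TraceID, 18, 0, 4, @cpu

EXEC sp_trace_setstatus @TraceID, 1   -- start the trace
SELECT @TraceID AS TraceID            -- note it so the trace can be stopped later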
Performance Help Help....
I have the following setup -
Database Server having (Report Server Database + Application Database)
Hardware Config - 4 CPU 1.6 ghz, 8 GB RAM, About 15 GB RAID x5 drive (75 gb
disk space)
Software - Windows 2K Advance Server SP4, SQL 2000 sp3
IIS Server ( Report Server, Application Website)
Hardware config - 2CPU (1130 mhz), 1 gb RAM
Software - Windows 2K Server SP4, Reporting Services with SP1
Flow of application - User visits the application web site and queries the
data (2.3 million records for a month); internally the application passes
this query to Report Server through a URL and the data comes back to the user.
This thing works fine as long as the user queries 2-3 days of data, but it
starts throwing "Server not available..." if it goes beyond that. (I saw
that on the web server aspnet_wp.exe starts crashing)
Any suggestion or solutions are welcome. I'm thinking of putting Report
Server web site on the Database Server, but I want to avoid that as I don't
want to run IIS with SQL 2000 (may be I'm wrong).
Thanks in advance,
Suhaib

Do you have the /3GB flag set in c:\boot.ini? Take a look at the link below
for some common things:
http://blogs.msdn.com/tudortr/archive/2004/06/28/167969.aspx
Another thing to do would be to monitor the Process | Privates bytes
performance counter for the aspnet_wp.exe process.
--
Tudor Trufinescu
Dev Lead
Sql Server Reporting Services
This posting is provided "AS IS" with no warranties, and confers no rights.
"Suhaib Khan" <skhan@.faicorp.com> wrote in message
news:ebyDrUgXEHA.716@.TK2MSFTNGP11.phx.gbl...
> I have the following setup -
> Database Server having (Report Server Database + Application Database)
> Hardware Config - 4 CPU 1.6 ghz, 8 GB RAM, About 15 GB RAID x5 drive (75
gb
> disk space)
> Software - Windows 2K Advance Server SP4, SQL 2000 sp3
>
> IIS Server ( Report Server, Application Website)
> Hardware config - 2CPU (1130 mhz), 1 gb RAM
> Software - Windows 2K Server SP4, Reporting Services with SP1
> Flow of application - User visits the application web site, queries (data
> size 2.3 million records for a month) the and internally application
passes
> this query to Report Server through URL and data comes back to the User.
> This thing works fine as long user queries for 2-3 days of data, but it
> starts throwing "Server not available..." if it goes beyond that. (I saw
> that on the web server aspnet_wp.exe starts crashing)
> Any suggestion or solutions are welcome. I'm thinking of putting Report
> Server web site on the Database Server, but I want to avoid that as I
don't
> want to run IIS with SQL 2000 (may be I'm wrong).
>
> Thanks in advance,
> Suhaib

how many rows are returned to the report?
have you some aggregated data? does it SQL Server standard or enterprise
edition?
Have you try to use OLAP Cubes?
"Suhaib Khan" <skhan@.faicorp.com> a écrit dans le message de
news:ebyDrUgXEHA.716@.TK2MSFTNGP11.phx.gbl...
> I have the following setup -
> Database Server having (Report Server Database + Application Database)
> Hardware Config - 4 CPU 1.6 ghz, 8 GB RAM, About 15 GB RAID x5 drive (75
gb
> disk space)
> Software - Windows 2K Advance Server SP4, SQL 2000 sp3
>
> IIS Server ( Report Server, Application Website)
> Hardware config - 2CPU (1130 mhz), 1 gb RAM
> Software - Windows 2K Server SP4, Reporting Services with SP1
> Flow of application - User visits the application web site, queries (data
> size 2.3 million records for a month) the and internally application
passes
> this query to Report Server through URL and data comes back to the User.
> This thing works fine as long user queries for 2-3 days of data, but it
> starts throwing "Server not available..." if it goes beyond that. (I saw
> that on the web server aspnet_wp.exe starts crashing)
> Any suggestion or solutions are welcome. I'm thinking of putting Report
> Server web site on the Database Server, but I want to avoid that as I
don't
> want to run IIS with SQL 2000 (may be I'm wrong).
>
> Thanks in advance,
> Suhaib

What is the size of the result set? How many pages? Also, what are you
rendering it as (html, pdf, etc.)? How long does it take from Query Analyzer?
Are you sending lots of data to RS and letting it filter or letting the
query itself filter out the data before it arrives at RS?
Bruce L-C
"Suhaib Khan" <skhan@.faicorp.com> wrote in message
news:ebyDrUgXEHA.716@.TK2MSFTNGP11.phx.gbl...
> I have the following setup -
> Database Server having (Report Server Database + Application Database)
> Hardware Config - 4 CPU 1.6 ghz, 8 GB RAM, About 15 GB RAID x5 drive (75
gb
> disk space)
> Software - Windows 2K Advance Server SP4, SQL 2000 sp3
>
> IIS Server ( Report Server, Application Website)
> Hardware config - 2CPU (1130 mhz), 1 gb RAM
> Software - Windows 2K Server SP4, Reporting Services with SP1
> Flow of application - User visits the application web site, queries (data
> size 2.3 million records for a month) the and internally application
passes
> this query to Report Server through URL and data comes back to the User.
> This thing works fine as long user queries for 2-3 days of data, but it
> starts throwing "Server not available..." if it goes beyond that. (I saw
> that on the web server aspnet_wp.exe starts crashing)
> Any suggestion or solutions are welcome. I'm thinking of putting Report
> Server web site on the Database Server, but I want to avoid that as I
don't
> want to run IIS with SQL 2000 (may be I'm wrong).
>
> Thanks in advance,
> Suhaib
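[Editor's illustration of Bruce's point about filtering in the query rather
than in RS; the table, columns, and date values are hypothetical.]

-- Let the database do the filtering: the report dataset accepts the date
-- range as parameters instead of pulling the whole month and filtering in RS.
-- Parameters are declared here so the sketch runs standalone in Query Analyzer.
DECLARE @StartDate datetime, @EndDate datetime
SELECT @StartDate = '20040601', @EndDate = '20040603'

SELECT t.TransactionID, t.TransactionDate, t.VendorID, t.Amount
FROM dbo.Transactions AS t
WHERE t.TransactionDate >= @StartDate
  AND t.TransactionDate <  DATEADD(day, 1, @EndDate)   -- inclusive end date
ORDER BY t.TransactionDate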

how many rows are returned to the report?
for 5 days - 430,000.
have you some aggregated data?
Yes, but we are not using any filter option on Report Server side
does it SQL Server standard or enterprise edition?
enterprise
Have you try to use OLAP Cubes?
Not now.
We are upgrading our current reports from pure HTML using ASP to ASP.NET
with Reporting Services. We used to allow users to query data for 3
months; these months were not predefined, so users were allowed to select
Jan to March or Feb to April. With Reporting Services we are not able to
cross more than 3 days, forget about going across 3 months.
Database is partitioned based on Month, Vendor, and Year ID (currently we
just have one vendor and one month of data, so I can't blame my partition)
Suhaib
"Jéjé" <willgart@._A_hAotmail_A_.com> wrote in message
news:%23MZgb%23gXEHA.1048@.tk2msftngp13.phx.gbl...
> how many rows are returned to the report?
> have you some aggregated data? does it SQL Server standard or enterprise
> edition?
> Have you try to use OLAP Cubes?
> "Suhaib Khan" <skhan@.faicorp.com> a écrit dans le message de
> news:ebyDrUgXEHA.716@.TK2MSFTNGP11.phx.gbl...
> > I have the following setup -
> >
> > Database Server having (Report Server Database + Application Database)
> > Hardware Config - 4 CPU 1.6 ghz, 8 GB RAM, About 15 GB RAID x5 drive (75
> gb
> > disk space)
> > Software - Windows 2K Advance Server SP4, SQL 2000 sp3
> >
> >
> > IIS Server ( Report Server, Application Website)
> > Hardware config - 2CPU (1130 mhz), 1 gb RAM
> > Software - Windows 2K Server SP4, Reporting Services with SP1
> >
> > Flow of application - User visits the application web site, queries
(data
> > size 2.3 million records for a month) the and internally application
> passes
> > this query to Report Server through URL and data comes back to the User.
> >
> > This thing works fine as long user queries for 2-3 days of data, but it
> > starts throwing "Server not available..." if it goes beyond that. (I saw
> > that on the web server aspnet_wp.exe starts crashing)
> >
> > Any suggestion or solutions are welcome. I'm thinking of putting Report
> > Server web site on the Database Server, but I want to avoid that as I
> don't
> > want to run IIS with SQL 2000 (may be I'm wrong).
> >
> >
> > Thanks in advance,
> >
> > Suhaib
> >
> >

430,000 rows?
wow!!!!
it's very very big!!!
why do you return so many details in your report?
If you use aggregated functions in RS, then the time to process the report
will be huge!!! In this case, I recommend creating multiple datasets (1 for
the detail, 1 for aggregated data with an SQL statement which does the
calculations (sums, group by...))
Maybe you can schedule the report and save it as a snapshot report. (or save
it as PDF in a network folder)
"Suhaib Khan" <skhan@.faicorp.com> a écrit dans le message de
news:%23hIXORiXEHA.1656@.TK2MSFTNGP09.phx.gbl...
> how many rows are returned to the report?
> for 5 days - 430,000.
> have you some aggregated data?
> Yes, but we are not using any filter option on Report Server side
> does it SQL Server standard or enterprise edition?
> enterprise
> Have you try to use OLAP Cubes?
> Not now.
> We are upgrading our current Reports from pure html using ASP to ASP.NET
> with Reporting Services. We used to allow user's to query data for 3
> months, again these months were not prefined, user's were allowed to
select
> Jan to March or Feb to April, with Reporting Services we are not able
cross
> more than 3 days forget about going across 3 months.
>
> Database is partitioned based on Month, Vendor, and Year ID (Currently we
> just have one vendor and one month of data, so I can't balme my partiton)
>
> Suhaib
>
> "Jéjé" <willgart@._A_hAotmail_A_.com> wrote in message
> news:%23MZgb%23gXEHA.1048@.tk2msftngp13.phx.gbl...
> > how many rows are returned to the report?
> > have you some aggregated data? does it SQL Server standard or enterprise
> > edition?
> > Have you try to use OLAP Cubes?
> >
> > "Suhaib Khan" <skhan@.faicorp.com> a écrit dans le message de
> > news:ebyDrUgXEHA.716@.TK2MSFTNGP11.phx.gbl...
> > > I have the following setup -
> > >
> > > Database Server having (Report Server Database + Application Database)
> > > Hardware Config - 4 CPU 1.6 ghz, 8 GB RAM, About 15 GB RAID x5 drive
(75
> > gb
> > > disk space)
> > > Software - Windows 2K Advance Server SP4, SQL 2000 sp3
> > >
> > >
> > > IIS Server ( Report Server, Application Website)
> > > Hardware config - 2CPU (1130 mhz), 1 gb RAM
> > > Software - Windows 2K Server SP4, Reporting Services with SP1
> > >
> > > Flow of application - User visits the application web site, queries
> (data
> > > size 2.3 million records for a month) the and internally application
> > passes
> > > this query to Report Server through URL and data comes back to the
User.
> > >
> > > This thing works fine as long user queries for 2-3 days of data, but
it
> > > starts throwing "Server not available..." if it goes beyond that. (I
saw
> > > that on the web server aspnet_wp.exe starts crashing)
> > >
> > > Any suggestion or solutions are welcome. I'm thinking of putting
Report
> > > Server web site on the Database Server, but I want to avoid that as I
> > don't
> > > want to run IIS with SQL 2000 (may be I'm wrong).
> > >
> > >
> > > Thanks in advance,
> > >
> > > Suhaib
> > >
> > >
> >
> >
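[Editor's sketch of the separate aggregated dataset Jéjé suggests, reusing the
hypothetical table from the earlier example: the server returns daily totals
per vendor instead of 430,000 detail rows.]

-- Summary dataset: one row per vendor per day comes back to Reporting
-- Services; the detail dataset stays behind drill-through.
DECLARE @StartDate datetime, @EndDate datetime
SELECT @StartDate = '20040601', @EndDate = '20040630'

SELECT t.VendorID,
       CONVERT(char(10), t.TransactionDate, 120) AS TransactionDay,
       COUNT(*)      AS TransactionCount,
       SUM(t.Amount) AS TotalAmount
FROM dbo.Transactions AS t
WHERE t.TransactionDate >= @StartDate
  AND t.TransactionDate <  DATEADD(day, 1, @EndDate)
GROUP BY t.VendorID, CONVERT(char(10), t.TransactionDate, 120)
ORDER BY t.VendorID, TransactionDay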

Do a report in Query Analyzer and return 430,000 rows. Now wait and wait
some more and wait some more. If your customer needs a data extract (for
instance to load their own datamart) then you should use DTS. Otherwise, you
should have reports that provide drill through that allows the customer to
get to the data they are interested in. No human is going to browse 430,000
rows. I suggest you look at how to do drill through. RS supports drill
through reports very cleanly and it is the way to go.
As far as scheduling a PDF goes, this would be a 5,000-page PDF report. This is
not reasonable to expect a system to do. I strongly suggest re-analyzing
your requirements and solutions. Nobody will have a solution for a 5,000
page report that is speedy. Again, if you are extracting data for someone's
datamart then use DTS.
Bruce L-C
"Jéjé" <willgart@._A_hAotmail_A_.com> wrote in message
news:umOPgHrXEHA.2844@.TK2MSFTNGP12.phx.gbl...
> 430 000 rows'
> wow!!!!
> its very very big!!!
> why do you return so many details in your report?
> If you use aggregated function in RS, then the time to process the report
> will be huge!!! in this case, I recommend to create multiple datasets (1
for
> the detail, 1 for aggregated data with an SQL statement which do the
> calculations (sums, group by...))
> Maybe you can schedule the report and save it as a snapshot report. (or
save
> it as PDF in a network folder)
> "Suhaib Khan" <skhan@.faicorp.com> a écrit dans le message de
> news:%23hIXORiXEHA.1656@.TK2MSFTNGP09.phx.gbl...
> > how many rows are returned to the report?
> > for 5 days - 430,000.
> >
> > have you some aggregated data?
> > Yes, but we are not using any filter option on Report Server side
> >
> > does it SQL Server standard or enterprise edition?
> > enterprise
> >
> > Have you try to use OLAP Cubes?
> > Not now.
> >
> > We are upgrading our current Reports from pure html using ASP to ASP.NET
> > with Reporting Services. We used to allow user's to query data for 3
> > months, again these months were not prefined, user's were allowed to
> select
> > Jan to March or Feb to April, with Reporting Services we are not able
> cross
> > more than 3 days forget about going across 3 months.
> >
> >
> > Database is partitioned based on Month, Vendor, and Year ID (Currently
we
> > just have one vendor and one month of data, so I can't balme my
partiton)
> >
> >
> > Suhaib
> >
> >
> > "Jéjé" <willgart@._A_hAotmail_A_.com> wrote in message
> > news:%23MZgb%23gXEHA.1048@.tk2msftngp13.phx.gbl...
> > > how many rows are returned to the report?
> > > have you some aggregated data? does it SQL Server standard or
enterprise
> > > edition?
> > > Have you try to use OLAP Cubes?
> > >
> > > "Suhaib Khan" <skhan@.faicorp.com> a écrit dans le message de
> > > news:ebyDrUgXEHA.716@.TK2MSFTNGP11.phx.gbl...
> > > > I have the following setup -
> > > >
> > > > Database Server having (Report Server Database + Application
Database)
> > > > Hardware Config - 4 CPU 1.6 ghz, 8 GB RAM, About 15 GB RAID x5 drive
> (75
> > > gb
> > > > disk space)
> > > > Software - Windows 2K Advance Server SP4, SQL 2000 sp3
> > > >
> > > >
> > > > IIS Server ( Report Server, Application Website)
> > > > Hardware config - 2CPU (1130 mhz), 1 gb RAM
> > > > Software - Windows 2K Server SP4, Reporting Services with SP1
> > > >
> > > > Flow of application - User visits the application web site, queries
> > (data
> > > > size 2.3 million records for a month) the and internally application
> > > passes
> > > > this query to Report Server through URL and data comes back to the
> User.
> > > >
> > > > This thing works fine as long user queries for 2-3 days of data, but
> it
> > > > starts throwing "Server not available..." if it goes beyond that. (I
> saw
> > > > that on the web server aspnet_wp.exe starts crashing)
> > > >
> > > > Any suggestion or solutions are welcome. I'm thinking of putting
> Report
> > > > Server web site on the Database Server, but I want to avoid that as
I
> > > don't
> > > > want to run IIS with SQL 2000 (may be I'm wrong).
> > > >
> > > >
> > > > Thanks in advance,
> > > >
> > > > Suhaib
> > > >
> > > >
> > >
> > >
> >
> >
>
Database Server having (Report Server Database + Application Database)
Hardware Config - 4 CPU 1.6 ghz, 8 GB RAM, About 15 GB RAID x5 drive (75 gb
disk space)
Software - Windows 2K Advance Server SP4, SQL 2000 sp3
IIS Server ( Report Server, Application Website)
Hardware config - 2CPU (1130 mhz), 1 gb RAM
Software - Windows 2K Server SP4, Reporting Services with SP1
Flow of application - User visits the application web site, queries (data
size 2.3 million records for a month) the and internally application passes
this query to Report Server through URL and data comes back to the User.
This thing works fine as long user queries for 2-3 days of data, but it
starts throwing "Server not available..." if it goes beyond that. (I saw
that on the web server aspnet_wp.exe starts crashing)
Any suggestion or solutions are welcome. I'm thinking of putting Report
Server web site on the Database Server, but I want to avoid that as I don't
want to run IIS with SQL 2000 (may be I'm wrong).
Thanks in advance,
SuhaibDo you have the /3GB flag set in c:\boot.ini? Take a look at for some common
things
http://blogs.msdn.com/tudortr/archive/2004/06/28/167969.aspx
Another thing to do would be to monitor the Process | Privates bytes
performance counter for the aspnet_wp.exe process.
--
Tudor Trufinescu
Dev Lead
Sql Server Reporting Services
This posting is provided "AS IS" with no warranties, and confers no rights.
"Suhaib Khan" <skhan@.faicorp.com> wrote in message
news:ebyDrUgXEHA.716@.TK2MSFTNGP11.phx.gbl...
> I have the following setup -
> Database Server having (Report Server Database + Application Database)
> Hardware Config - 4 CPU 1.6 ghz, 8 GB RAM, About 15 GB RAID x5 drive (75
gb
> disk space)
> Software - Windows 2K Advance Server SP4, SQL 2000 sp3
>
> IIS Server ( Report Server, Application Website)
> Hardware config - 2CPU (1130 mhz), 1 gb RAM
> Software - Windows 2K Server SP4, Reporting Services with SP1
> Flow of application - User visits the application web site, queries (data
> size 2.3 million records for a month) the and internally application
passes
> this query to Report Server through URL and data comes back to the User.
> This thing works fine as long user queries for 2-3 days of data, but it
> starts throwing "Server not available..." if it goes beyond that. (I saw
> that on the web server aspnet_wp.exe starts crashing)
> Any suggestion or solutions are welcome. I'm thinking of putting Report
> Server web site on the Database Server, but I want to avoid that as I
don't
> want to run IIS with SQL 2000 (may be I'm wrong).
>
> Thanks in advance,
> Suhaib
>|||how many rows are returned to the report?
have you some aggregated data? does it SQL Server standard or enterprise
edition?
Have you try to use OLAP Cubes?
"Suhaib Khan" <skhan@.faicorp.com> a écrit dans le message de
news:ebyDrUgXEHA.716@.TK2MSFTNGP11.phx.gbl...
> I have the following setup -
> Database Server having (Report Server Database + Application Database)
> Hardware Config - 4 CPU 1.6 ghz, 8 GB RAM, About 15 GB RAID x5 drive (75
gb
> disk space)
> Software - Windows 2K Advance Server SP4, SQL 2000 sp3
>
> IIS Server ( Report Server, Application Website)
> Hardware config - 2CPU (1130 mhz), 1 gb RAM
> Software - Windows 2K Server SP4, Reporting Services with SP1
> Flow of application - User visits the application web site, queries (data
> size 2.3 million records for a month) the and internally application
passes
> this query to Report Server through URL and data comes back to the User.
> This thing works fine as long user queries for 2-3 days of data, but it
> starts throwing "Server not available..." if it goes beyond that. (I saw
> that on the web server aspnet_wp.exe starts crashing)
> Any suggestion or solutions are welcome. I'm thinking of putting Report
> Server web site on the Database Server, but I want to avoid that as I
don't
> want to run IIS with SQL 2000 (may be I'm wrong).
>
> Thanks in advance,
> Suhaib
>|||What is the size of the result set? How many pages? Also, what are you
rending it as (html, pdf, etc?). How long does it take from query analyzer?
Are you sending lots of data to RS and letting it filter or letting the
query itself filter out the data before it arrives at RS?
Bruce L-C
"Suhaib Khan" <skhan@.faicorp.com> wrote in message
news:ebyDrUgXEHA.716@.TK2MSFTNGP11.phx.gbl...
> I have the following setup -
> Database Server having (Report Server Database + Application Database)
> Hardware Config - 4 CPU 1.6 ghz, 8 GB RAM, About 15 GB RAID x5 drive (75
gb
> disk space)
> Software - Windows 2K Advance Server SP4, SQL 2000 sp3
>
> IIS Server ( Report Server, Application Website)
> Hardware config - 2CPU (1130 mhz), 1 gb RAM
> Software - Windows 2K Server SP4, Reporting Services with SP1
> Flow of application - User visits the application web site, queries (data
> size 2.3 million records for a month) the and internally application
passes
> this query to Report Server through URL and data comes back to the User.
> This thing works fine as long user queries for 2-3 days of data, but it
> starts throwing "Server not available..." if it goes beyond that. (I saw
> that on the web server aspnet_wp.exe starts crashing)
> Any suggestion or solutions are welcome. I'm thinking of putting Report
> Server web site on the Database Server, but I want to avoid that as I
don't
> want to run IIS with SQL 2000 (may be I'm wrong).
>
> Thanks in advance,
> Suhaib
>|||how many rows are returned to the report?
for 5 days - 430,000.
have you some aggregated data?
Yes, but we are not using any filter option on Report Server side
does it SQL Server standard or enterprise edition?
enterprise
Have you try to use OLAP Cubes?
Not now.
We are upgrading our current Reports from pure html using ASP to ASP.NET
with Reporting Services. We used to allow user's to query data for 3
months, again these months were not prefined, user's were allowed to select
Jan to March or Feb to April, with Reporting Services we are not able cross
more than 3 days forget about going across 3 months.
Database is partitioned based on Month, Vendor, and Year ID (Currently we
just have one vendor and one month of data, so I can't balme my partiton)
Suhaib
"Jéjé" <willgart@._A_hAotmail_A_.com> wrote in message
news:%23MZgb%23gXEHA.1048@.tk2msftngp13.phx.gbl...
> how many rows are returned to the report?
> have you some aggregated data? does it SQL Server standard or enterprise
> edition?
> Have you try to use OLAP Cubes?
> "Suhaib Khan" <skhan@.faicorp.com> a écrit dans le message de
> news:ebyDrUgXEHA.716@.TK2MSFTNGP11.phx.gbl...
> > I have the following setup -
> >
> > Database Server having (Report Server Database + Application Database)
> > Hardware Config - 4 CPU 1.6 ghz, 8 GB RAM, About 15 GB RAID x5 drive (75
> gb
> > disk space)
> > Software - Windows 2K Advance Server SP4, SQL 2000 sp3
> >
> >
> > IIS Server ( Report Server, Application Website)
> > Hardware config - 2CPU (1130 mhz), 1 gb RAM
> > Software - Windows 2K Server SP4, Reporting Services with SP1
> >
> > Flow of application - User visits the application web site, queries
(data
> > size 2.3 million records for a month) the and internally application
> passes
> > this query to Report Server through URL and data comes back to the User.
> >
> > This thing works fine as long user queries for 2-3 days of data, but it
> > starts throwing "Server not available..." if it goes beyond that. (I saw
> > that on the web server aspnet_wp.exe starts crashing)
> >
> > Any suggestion or solutions are welcome. I'm thinking of putting Report
> > Server web site on the Database Server, but I want to avoid that as I
> don't
> > want to run IIS with SQL 2000 (may be I'm wrong).
> >
> >
> > Thanks in advance,
> >
> > Suhaib
> >
> >
>|||430 000 rows'
wow!!!!
its very very big!!!
why do you return so many details in your report?
If you use aggregated function in RS, then the time to process the report
will be huge!!! in this case, I recommend to create multiple datasets (1 for
the detail, 1 for aggregated data with an SQL statement which do the
calculations (sums, group by...))
Maybe you can schedule the report and save it as a snapshot report. (or save
it as PDF in a network folder)
"Suhaib Khan" <skhan@.faicorp.com> a écrit dans le message de
news:%23hIXORiXEHA.1656@.TK2MSFTNGP09.phx.gbl...
> how many rows are returned to the report?
> for 5 days - 430,000.
> have you some aggregated data?
> Yes, but we are not using any filter option on Report Server side
> does it SQL Server standard or enterprise edition?
> enterprise
> Have you try to use OLAP Cubes?
> Not now.
> We are upgrading our current Reports from pure html using ASP to ASP.NET
> with Reporting Services. We used to allow user's to query data for 3
> months, again these months were not prefined, user's were allowed to
select
> Jan to March or Feb to April, with Reporting Services we are not able
cross
> more than 3 days forget about going across 3 months.
>
> Database is partitioned based on Month, Vendor, and Year ID (Currently we
> just have one vendor and one month of data, so I can't balme my partiton)
>
> Suhaib
>
> "Jéjé" <willgart@._A_hAotmail_A_.com> wrote in message
> news:%23MZgb%23gXEHA.1048@.tk2msftngp13.phx.gbl...
> > how many rows are returned to the report?
> > have you some aggregated data? does it SQL Server standard or enterprise
> > edition?
> > Have you try to use OLAP Cubes?
> >
> > "Suhaib Khan" <skhan@.faicorp.com> a écrit dans le message de
> > news:ebyDrUgXEHA.716@.TK2MSFTNGP11.phx.gbl...
> > > I have the following setup -
> > >
> > > Database Server having (Report Server Database + Application Database)
> > > Hardware Config - 4 CPU 1.6 ghz, 8 GB RAM, About 15 GB RAID x5 drive
(75
> > gb
> > > disk space)
> > > Software - Windows 2K Advance Server SP4, SQL 2000 sp3
> > >
> > >
> > > IIS Server ( Report Server, Application Website)
> > > Hardware config - 2CPU (1130 mhz), 1 gb RAM
> > > Software - Windows 2K Server SP4, Reporting Services with SP1
> > >
> > > Flow of application - User visits the application web site, queries
> (data
> > > size 2.3 million records for a month) the and internally application
> > passes
> > > this query to Report Server through URL and data comes back to the
User.
> > >
> > > This thing works fine as long user queries for 2-3 days of data, but
it
> > > starts throwing "Server not available..." if it goes beyond that. (I
saw
> > > that on the web server aspnet_wp.exe starts crashing)
> > >
> > > Any suggestion or solutions are welcome. I'm thinking of putting
Report
> > > Server web site on the Database Server, but I want to avoid that as I
> > don't
> > > want to run IIS with SQL 2000 (may be I'm wrong).
> > >
> > >
> > > Thanks in advance,
> > >
> > > Suhaib
> > >
> > >
> >
> >
>|||Do a report in Query Analyzer and return 430,000 rows. Now wait and wait
somemore and wait somemore. If your customer needs a data extract (for
instance to load their own datamart) then you should use DTS. Otherwise, you
should have reports that provide drill through that allows the customer to
get to the data they are interested in. No human is going to browse 430,000
rows. I suggest you look at how to do drill through. RS supports drill
through reports very cleanly and it is the way to go.
As far as scheduling a PDF. This would be a 5,000 page PDF report. This is
not reasonable to expect a system to do. I strongly suggest re-analyzing
your requirements and solutions. Nobody will have a solution for a 5,000
page report that is speedy. Again, if you are extracting data for someone's
datamart then use DTS.
Bruce L-C
"Jéjé" <willgart@._A_hAotmail_A_.com> wrote in message
news:umOPgHrXEHA.2844@.TK2MSFTNGP12.phx.gbl...
> 430,000 rows?
> Wow!!!!
> That's very, very big!!!
> Why do you return so many details in your report?
> If you use aggregate functions in RS, then the time to process the report
> will be huge!!! In this case, I recommend creating multiple datasets (1 for
> the detail, 1 for aggregated data, with an SQL statement which does the
> calculations (sums, group by...)).
> Maybe you can schedule the report and save it as a snapshot report (or save
> it as a PDF in a network folder).
> "Suhaib Khan" <skhan@.faicorp.com> a écrit dans le message de
> news:%23hIXORiXEHA.1656@.TK2MSFTNGP09.phx.gbl...
> > How many rows are returned to the report?
> > For 5 days - 430,000.
> >
> > Do you have some aggregated data?
> > Yes, but we are not using any filter option on the Report Server side.
> >
> > Is it SQL Server Standard or Enterprise edition?
> > Enterprise.
> >
> > Have you tried using OLAP cubes?
> > Not now.
> >
> > We are upgrading our current Reports from pure HTML using ASP to ASP.NET
> > with Reporting Services. We used to allow users to query data for 3
> > months; again, these months were not predefined, users were allowed to
> > select Jan to March or Feb to April. With Reporting Services we are not
> > able to cross more than 3 days, forget about going across 3 months.
> >
> >
> > Database is partitioned based on Month, Vendor, and Year ID (currently we
> > just have one vendor and one month of data, so I can't blame my partition).
> >
> >
> > Suhaib
> >
> >
> > "Jéjé" <willgart@._A_hAotmail_A_.com> wrote in message
> > news:%23MZgb%23gXEHA.1048@.tk2msftngp13.phx.gbl...
> > > How many rows are returned to the report?
> > > Do you have some aggregated data? Is it SQL Server Standard or Enterprise
> > > edition?
> > > Have you tried using OLAP cubes?
> > >
> > > "Suhaib Khan" <skhan@.faicorp.com> a écrit dans le message de
> > > news:ebyDrUgXEHA.716@.TK2MSFTNGP11.phx.gbl...
> > > > I have the following setup -
> > > >
> > > > Database Server having (Report Server Database + Application Database)
> > > > Hardware Config - 4 CPU 1.6 GHz, 8 GB RAM, about 15 GB RAID x5 drive
> > > > (75 GB disk space)
> > > > Software - Windows 2K Advanced Server SP4, SQL 2000 SP3
> > > >
> > > >
> > > > IIS Server (Report Server, Application Website)
> > > > Hardware config - 2 CPU (1130 MHz), 1 GB RAM
> > > > Software - Windows 2K Server SP4, Reporting Services with SP1
> > > >
> > > > Flow of application - User visits the application web site, queries
> > > > (data size is 2.3 million records for a month) and internally the
> > > > application passes this query to Report Server through a URL and the
> > > > data comes back to the user.
> > > >
> > > > This thing works fine as long as the user queries for 2-3 days of data,
> > > > but it starts throwing "Server not available..." if it goes beyond that
> > > > (I saw that on the web server aspnet_wp.exe starts crashing).
> > > >
> > > > Any suggestions or solutions are welcome. I'm thinking of putting the
> > > > Report Server web site on the Database Server, but I want to avoid that
> > > > as I don't want to run IIS alongside SQL 2000 (maybe I'm wrong).
> > > >
> > > >
> > > > Thanks in advance,
> > > >
> > > > Suhaib
> > > >
> > > >
> > >
> > >
> >
> >
>
Monday, March 26, 2012
Performance Developer vs. Std
We have a series of SPs, poorly written, but that's not the issue here.
Two machines, hardware identical, except mostly SAN (Prod) vs. Local Dasd
(Dev), dual P3, 2gig ram. Pretty much just DB servers.
But we've even tried the local Dasd on the Prod (Std) box.
The process runs 2 1/2 to 3 hours on Dev, 9 to 10 on Prod. On Dev that's
even everything on a single RAID drive. Prod at least has db and log
separate, more at times.
We've checked everything, and can't find any differences. Running on Prod
Off hours when it's not even breaking a sweat.
The proc does nothing special, except a cursor for Update (Ooouugghh).
Any ideas what to look at? We compared everything we can think of - no
differences in SQL, Win2000... No errors in the event log..., query plans
match, even cost estimates.
Could Developer (which is Enterprise, right?) provide that much of a boost vs.
Std, even with nothing, that I know of, that would be using EE features?
Note that to date everything I've written, even some larger procs (10 to 20
million row processes), seems to run as expected - slightly faster on Prod.
KlK, MCSE
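As an aside: even though the SP quality is explicitly not the question here,
the "cursor for Update" mentioned above is usually the first thing to rewrite
once the proc itself gets tuned, since a single set-based statement lets the
optimizer do the work in one pass. A minimal sketch of the set-based
equivalent, with the tables and columns (dbo.Detail, dbo.StagingSource,
StatusCode, ProcessedFlag) invented purely for illustration:

-- The cursor pattern being replaced would loop over dbo.Detail and issue
-- UPDATE ... WHERE CURRENT OF for each row. The set-based equivalent:
UPDATE d
SET    d.StatusCode    = s.StatusCode,
       d.ProcessedFlag = 1
FROM   dbo.Detail AS d                      -- placeholder tables/columns
JOIN   dbo.StagingSource AS s
       ON s.DetailID = d.DetailID
WHERE  d.ProcessedFlag = 0

Whether that explains the Dev/Prod gap is a separate question; the replies
below point at disk speed rather than the proc.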
Hi
As a good benchmark, run the SQL IO stress tools on your Dev and Production
server. It will indicate if your SAN is considerably slower than the local
drives.
Look at the links off http://msmvps.com/epprecht/archive/2004/07/24/10591.aspx
EE does have some performance enhancements over standard (pre-fetch) but the
Disk might be your issue.
Regards
Mike
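Alongside a full I/O stress run, a quick first check is to execute the same
proc on both boxes with I/O and timing statistics switched on and compare the
physical reads and elapsed time. This is only a sketch: the proc name and
parameter are placeholders, and DBCC DROPCLEANBUFFERS requires sysadmin and
should only be run off-hours:

-- Run on Dev and on Prod against the same data, then compare the
-- 'physical reads' and 'elapsed time' figures from the Messages output.
DBCC DROPCLEANBUFFERS          -- start from a cold cache (sysadmin only)
SET STATISTICS IO ON
SET STATISTICS TIME ON

EXEC dbo.usp_NightlyProcess @BatchDate = '20040301'   -- placeholder proc

SET STATISTICS TIME OFF
SET STATISTICS IO OFF

If Prod shows several times the physical reads, or a much longer elapsed time
for the same plan, that points back at the SAN rather than at the edition.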
"KevinK" wrote:
> We have a series of SPs, poorly written, but that's not the issue here.
> Two machines, hardware identical, except mostly SAN (Prod) vs. Local Dasd
> (Dev), dual P3, 2gig ram. Pretty much just DB servers.
> But we've even tried the local Dasd on the Prod (Std) box.
> The process runs 2 1/2 to 3 hours on Dev, 9 to 10 on Prod. On Dev that's
> even everything on a single RAID drive. Prod at least has db and log
> separate, more at times.
> We've checked everything, and can't find any differences. Running on Prod
> Off hours when it's not even breaking a sweat.
> The proc does nothing special, except a cursor for Update (Ooouugghh).
> Any ideas what to look at? We compared everything we can think of - no
> differences in SQL, Win2000... No errors in the event log..., query plans
> match, even cost estimates.
> Could Developer (which is Enterprise, right?) provide that much of a boost vs.
> Std, even with nothing, that I know of, that would be using EE features?
> Note that to date everything I've written, even some larger procs (10 to 20
> million row processes), seems to run as expected - slightly faster on Prod.
>
> --
> KlK, MCSE
|||Thanks Mike, unfortunately we have tested it using only the local drive on
Prod.
We have a 70 GB local RAID drive that is normally used for backups, etc.
It didn't help.
We do know the SAN drives are slower than local ones, but it isn't that
significant - maybe 20% (I think 12 to 15% was the official number).
But when we run it on the SAN we are using multiple drives, log and data.
I am trying to get them to do a DBHammer test.
"Mike Epprecht (SQL MVP)" wrote:
> Hi
> As a good benchmark, run the SQL IO stress tools on your Dev and Production
> server. It will indicate if your SAN is considerably slower than the local
> drives.
> Look at the links off http://msmvps.com/epprecht/archive/2004/07/24/10591.aspx
> EE does have some performance enhancements over standard (pre-fetch) but the
> Disk might be your issue.
> Regards
> Mike
> "KevinK" wrote:
Two machines, hardware identical, except mostly SAN (Prod) vs. Local Dasd
(Dev), dual P3, 2gig ram. Pretty much just DB servers.
But we've even tried the local Dasd on the Prod (Std) box.
The process runs 2 1/2 to 3 hours on Dev, 9 to 10 on Prod. On Dev that's
even everything on a single RAID drive. Prod at least has db and log
separate, more at times.
We've checked everything, and can't find any differences. Running on Prod
Off hours when it's not even breaking a sweat.
The proc does nothing special, except a cursor for Update (Ooouugghh).
Any ideas what to look at ? We compared everything we can think of, no
differences SQL, Win2000... No errors in the event log ..., Query plans
match even cost estimates.
Could Developer (which is Enterprise, right) provide that much boost vs.
Std, even with nothing, that I know of that would be using EE features.
Note to date everything I've wrritten, even some larger procs (10 to 20
million row processes) seem to run as expected. Slightly faster on Prod.
KlK, MCSE
Hi
As a good benchmark, run the SQL IO stress tools on your Dev and Production
server. It will indicate if your SAN is considerably slower than the local
drives.
Look at the links off http://msmvps.com/epprecht/archive/2.../24/10591.aspx
EE does have some performance enhancements over standard (pre-fetch) but the
Disk might be your issue.
Regards
Mike
"KevinK" wrote:
> We have a series of SPs, poorly written, but that's not the issue here.
> Two machines, hardware identical, except mostly SAN (Prod) vs. Local Dasd
> (Dev), dual P3, 2gig ram. Pretty much just DB servers.
> But we've even tried the local Dasd on the Prod (Std) box.
> The process runs 2 1/2 to 3 hours on Dev, 9 to 10 on Prod. On Dev that's
> even everything on a single RAID drive. Prod at least has db and log
> separate, more at times.
> We've checked everything, and can't find any differences. Running on Prod
> Off hours when it's not even breaking a sweat.
> The proc does nothing special, except a cursor for Update (Ooouugghh).
> Any ideas what to look at ? We compared everything we can think of, no
> differences SQL, Win2000... No errors in the event log ..., Query plans
> match even cost estimates.
> Could Developer (which is Enterprise, right) provide that much boost vs.
> Std, even with nothing, that I know of that would be using EE features.
> Note to date everything I've wrritten, even some larger procs (10 to 20
> million row processes) seem to run as expected. Slightly faster on Prod.
>
> --
> KlK, MCSE
|||Thanks Mike, unfortunately we have tested it using only the local drive on
Prod.
We have a 70gig local raid drive that is normally used for Backups etc.
It didn't help.
We do know the SAN drives are slower than locals, but it isn't that
significant, maybe 20% (I think 12 to 15 was the official number.
But when we run it on the SAN we are using multiple drives, log and data.
I am trying to get them to do a DBHammer
"Mike Epprecht (SQL MVP)" wrote:
[vbcol=seagreen]
> Hi
> As a good benchmark, run the SQL IO stress tools on your Dev and Production
> server. It will indicate if your SAN is considerably slower than the local
> drives.
> Look at the links off http://msmvps.com/epprecht/archive/2.../24/10591.aspx
> EE does have some performance enhancements over standard (pre-fetch) but the
> Disk might be your issue.
> Regards
> Mike
> "KevinK" wrote:
Performance Developer vs. Std
We have a series of SPs, poorly written, but that's not the issue here.
Two machines, hardware identical, except mostly SAN (Prod) vs. Local Dasd
(Dev), dual P3, 2gig ram. Pretty much just DB servers.
But we've even tried the local Dasd on the Prod (Std) box.
The process runs 2 1/2 to 3 hours on Dev, 9 to 10 on Prod. On Dev that's
even everything on a single RAID drive. Prod at least has db and log
separate, more at times.
We've checked everything, and can't find any differences. Running on Prod
Off hours when it's not even breaking a sweat.
The proc does nothing special, except a cursor for Update (Ooouugghh).
Any ideas what to look at ' We compared everything we can think of, no
differences SQL, Win2000... No errors in the event log ..., Query plans
match even cost estimates.
Could Developer (which is Enterprise, right) provide that much boost vs.
Std, even with nothing, that I know of that would be using EE features.
Note to date everything I've wrritten, even some larger procs (10 to 20
million row processes) seem to run as expected. Slightly faster on Prod.
--
KlK, MCSEHi
As a good benchmark, run the SQL IO stress tools on your Dev and Production
server. It will indicate if your SAN is considerably slower than the local
drives.
Look at the links off http://msmvps.com/epprecht/archive/2004/07/24/10591.aspx
EE does have some performance enhancements over standard (pre-fetch) but the
Disk might be your issue.
Regards
Mike
"KevinK" wrote:
> We have a series of SPs, poorly written, but that's not the issue here.
> Two machines, hardware identical, except mostly SAN (Prod) vs. Local Dasd
> (Dev), dual P3, 2gig ram. Pretty much just DB servers.
> But we've even tried the local Dasd on the Prod (Std) box.
> The process runs 2 1/2 to 3 hours on Dev, 9 to 10 on Prod. On Dev that's
> even everything on a single RAID drive. Prod at least has db and log
> separate, more at times.
> We've checked everything, and can't find any differences. Running on Prod
> Off hours when it's not even breaking a sweat.
> The proc does nothing special, except a cursor for Update (Ooouugghh).
> Any ideas what to look at ' We compared everything we can think of, no
> differences SQL, Win2000... No errors in the event log ..., Query plans
> match even cost estimates.
> Could Developer (which is Enterprise, right) provide that much boost vs.
> Std, even with nothing, that I know of that would be using EE features.
> Note to date everything I've wrritten, even some larger procs (10 to 20
> million row processes) seem to run as expected. Slightly faster on Prod.
>
> --
> KlK, MCSE|||Thanks Mike, unfortunately we have tested it using only the local drive on
Prod.
We have a 70gig local raid drive that is normally used for Backups etc.
It didn't help.
We do know the SAN drives are slower than locals, but it isn't that
significant, maybe 20% (I think 12 to 15 was the official number.
But when we run it on the SAN we are using multiple drives, log and data.
I am trying to get them to do a DBHammer
"Mike Epprecht (SQL MVP)" wrote:
> Hi
> As a good benchmark, run the SQL IO stress tools on your Dev and Production
> server. It will indicate if your SAN is considerably slower than the local
> drives.
> Look at the links off http://msmvps.com/epprecht/archive/2004/07/24/10591.aspx
> EE does have some performance enhancements over standard (pre-fetch) but the
> Disk might be your issue.
> Regards
> Mike
> "KevinK" wrote:
> > We have a series of SPs, poorly written, but that's not the issue here.
> >
> > Two machines, hardware identical, except mostly SAN (Prod) vs. Local Dasd
> > (Dev), dual P3, 2gig ram. Pretty much just DB servers.
> >
> > But we've even tried the local Dasd on the Prod (Std) box.
> >
> > The process runs 2 1/2 to 3 hours on Dev, 9 to 10 on Prod. On Dev that's
> > even everything on a single RAID drive. Prod at least has db and log
> > separate, more at times.
> >
> > We've checked everything, and can't find any differences. Running on Prod
> > Off hours when it's not even breaking a sweat.
> >
> > The proc does nothing special, except a cursor for Update (Ooouugghh).
> >
> > Any ideas what to look at ' We compared everything we can think of, no
> > differences SQL, Win2000... No errors in the event log ..., Query plans
> > match even cost estimates.
> >
> > Could Developer (which is Enterprise, right) provide that much boost vs.
> > Std, even with nothing, that I know of that would be using EE features.
> >
> > Note to date everything I've wrritten, even some larger procs (10 to 20
> > million row processes) seem to run as expected. Slightly faster on Prod.
> >
> >
> >
> > --
> > KlK, MCSEsql
Two machines, hardware identical, except mostly SAN (Prod) vs. Local Dasd
(Dev), dual P3, 2gig ram. Pretty much just DB servers.
But we've even tried the local Dasd on the Prod (Std) box.
The process runs 2 1/2 to 3 hours on Dev, 9 to 10 on Prod. On Dev that's
even everything on a single RAID drive. Prod at least has db and log
separate, more at times.
We've checked everything, and can't find any differences. Running on Prod
Off hours when it's not even breaking a sweat.
The proc does nothing special, except a cursor for Update (Ooouugghh).
Any ideas what to look at ' We compared everything we can think of, no
differences SQL, Win2000... No errors in the event log ..., Query plans
match even cost estimates.
Could Developer (which is Enterprise, right) provide that much boost vs.
Std, even with nothing, that I know of that would be using EE features.
Note to date everything I've wrritten, even some larger procs (10 to 20
million row processes) seem to run as expected. Slightly faster on Prod.
--
KlK, MCSEHi
As a good benchmark, run the SQL IO stress tools on your Dev and Production
server. It will indicate if your SAN is considerably slower than the local
drives.
Look at the links off http://msmvps.com/epprecht/archive/2004/07/24/10591.aspx
EE does have some performance enhancements over standard (pre-fetch) but the
Disk might be your issue.
Regards
Mike
"KevinK" wrote:
> We have a series of SPs, poorly written, but that's not the issue here.
> Two machines, hardware identical, except mostly SAN (Prod) vs. Local Dasd
> (Dev), dual P3, 2gig ram. Pretty much just DB servers.
> But we've even tried the local Dasd on the Prod (Std) box.
> The process runs 2 1/2 to 3 hours on Dev, 9 to 10 on Prod. On Dev that's
> even everything on a single RAID drive. Prod at least has db and log
> separate, more at times.
> We've checked everything, and can't find any differences. Running on Prod
> Off hours when it's not even breaking a sweat.
> The proc does nothing special, except a cursor for Update (Ooouugghh).
> Any ideas what to look at ' We compared everything we can think of, no
> differences SQL, Win2000... No errors in the event log ..., Query plans
> match even cost estimates.
> Could Developer (which is Enterprise, right) provide that much boost vs.
> Std, even with nothing, that I know of that would be using EE features.
> Note to date everything I've wrritten, even some larger procs (10 to 20
> million row processes) seem to run as expected. Slightly faster on Prod.
>
> --
> KlK, MCSE|||Thanks Mike, unfortunately we have tested it using only the local drive on
Prod.
We have a 70gig local raid drive that is normally used for Backups etc.
It didn't help.
We do know the SAN drives are slower than locals, but it isn't that
significant, maybe 20% (I think 12 to 15 was the official number.
But when we run it on the SAN we are using multiple drives, log and data.
I am trying to get them to do a DBHammer
"Mike Epprecht (SQL MVP)" wrote:
> Hi
> As a good benchmark, run the SQL IO stress tools on your Dev and Production
> server. It will indicate if your SAN is considerably slower than the local
> drives.
> Look at the links off http://msmvps.com/epprecht/archive/2004/07/24/10591.aspx
> EE does have some performance enhancements over standard (pre-fetch) but the
> Disk might be your issue.
> Regards
> Mike
> "KevinK" wrote:
> > We have a series of SPs, poorly written, but that's not the issue here.
> >
> > Two machines, hardware identical, except mostly SAN (Prod) vs. Local Dasd
> > (Dev), dual P3, 2gig ram. Pretty much just DB servers.
> >
> > But we've even tried the local Dasd on the Prod (Std) box.
> >
> > The process runs 2 1/2 to 3 hours on Dev, 9 to 10 on Prod. On Dev that's
> > even everything on a single RAID drive. Prod at least has db and log
> > separate, more at times.
> >
> > We've checked everything, and can't find any differences. Running on Prod
> > Off hours when it's not even breaking a sweat.
> >
> > The proc does nothing special, except a cursor for Update (Ooouugghh).
> >
> > Any ideas what to look at ' We compared everything we can think of, no
> > differences SQL, Win2000... No errors in the event log ..., Query plans
> > match even cost estimates.
> >
> > Could Developer (which is Enterprise, right) provide that much boost vs.
> > Std, even with nothing, that I know of that would be using EE features.
> >
> > Note to date everything I've wrritten, even some larger procs (10 to 20
> > million row processes) seem to run as expected. Slightly faster on Prod.
> >
> >
> >
> > --
> > KlK, MCSEsql
Performance Developer vs. Std
We have a series of SPs, poorly written, but that's not the issue here.
Two machines, hardware identical, except mostly SAN (Prod) vs. Local Dasd
(Dev), dual P3, 2gig ram. Pretty much just DB servers.
But we've even tried the local Dasd on the Prod (Std) box.
The process runs 2 1/2 to 3 hours on Dev, 9 to 10 on Prod. On Dev that's
even everything on a single RAID drive. Prod at least has db and log
separate, more at times.
We've checked everything, and can't find any differences. Running on Prod
Off hours when it's not even breaking a sweat.
The proc does nothing special, except a cursor for Update (Ooouugghh).
Any ideas what to look at ' We compared everything we can think of, no
differences SQL, Win2000... No errors in the event log ..., Query plans
match even cost estimates.
Could Developer (which is Enterprise, right) provide that much boost vs.
Std, even with nothing, that I know of that would be using EE features.
Note to date everything I've wrritten, even some larger procs (10 to 20
million row processes) seem to run as expected. Slightly faster on Prod.
KlK, MCSEHi
As a good benchmark, run the SQL IO stress tools on your Dev and Production
server. It will indicate if your SAN is considerably slower than the local
drives.
Look at the links off [url]http://msmvps.com/epprecht/archive/2004/07/24/10591.aspx[/ur
l]
EE does have some performance enhancements over standard (pre-fetch) but the
Disk might be your issue.
Regards
Mike
"KevinK" wrote:
> We have a series of SPs, poorly written, but that's not the issue here.
> Two machines, hardware identical, except mostly SAN (Prod) vs. Local Dasd
> (Dev), dual P3, 2gig ram. Pretty much just DB servers.
> But we've even tried the local Dasd on the Prod (Std) box.
> The process runs 2 1/2 to 3 hours on Dev, 9 to 10 on Prod. On Dev that's
> even everything on a single RAID drive. Prod at least has db and log
> separate, more at times.
> We've checked everything, and can't find any differences. Running on Prod
> Off hours when it's not even breaking a sweat.
> The proc does nothing special, except a cursor for Update (Ooouugghh).
> Any ideas what to look at ' We compared everything we can think of, no
> differences SQL, Win2000... No errors in the event log ..., Query plans
> match even cost estimates.
> Could Developer (which is Enterprise, right) provide that much boost vs.
> Std, even with nothing, that I know of that would be using EE features.
> Note to date everything I've wrritten, even some larger procs (10 to 20
> million row processes) seem to run as expected. Slightly faster on Prod.
>
> --
> KlK, MCSE|||Thanks Mike, unfortunately we have tested it using only the local drive on
Prod.
We have a 70gig local raid drive that is normally used for Backups etc.
It didn't help.
We do know the SAN drives are slower than locals, but it isn't that
significant, maybe 20% (I think 12 to 15 was the official number.
But when we run it on the SAN we are using multiple drives, log and data.
I am trying to get them to do a DBHammer
"Mike Epprecht (SQL MVP)" wrote:
[vbcol=seagreen]
> Hi
> As a good benchmark, run the SQL IO stress tools on your Dev and Productio
n
> server. It will indicate if your SAN is considerably slower than the local
> drives.
> Look at the links off [url]http://msmvps.com/epprecht/archive/2004/07/24/10591.aspx[/
url]
> EE does have some performance enhancements over standard (pre-fetch) but t
he
> Disk might be your issue.
> Regards
> Mike
> "KevinK" wrote:
>
Two machines, hardware identical, except mostly SAN (Prod) vs. Local Dasd
(Dev), dual P3, 2gig ram. Pretty much just DB servers.
But we've even tried the local Dasd on the Prod (Std) box.
The process runs 2 1/2 to 3 hours on Dev, 9 to 10 on Prod. On Dev that's
even everything on a single RAID drive. Prod at least has db and log
separate, more at times.
We've checked everything, and can't find any differences. Running on Prod
Off hours when it's not even breaking a sweat.
The proc does nothing special, except a cursor for Update (Ooouugghh).
Any ideas what to look at ' We compared everything we can think of, no
differences SQL, Win2000... No errors in the event log ..., Query plans
match even cost estimates.
Could Developer (which is Enterprise, right) provide that much boost vs.
Std, even with nothing, that I know of that would be using EE features.
Note to date everything I've wrritten, even some larger procs (10 to 20
million row processes) seem to run as expected. Slightly faster on Prod.
KlK, MCSEHi
As a good benchmark, run the SQL IO stress tools on your Dev and Production
server. It will indicate if your SAN is considerably slower than the local
drives.
Look at the links off [url]http://msmvps.com/epprecht/archive/2004/07/24/10591.aspx[/ur
l]
EE does have some performance enhancements over standard (pre-fetch) but the
Disk might be your issue.
Regards
Mike
"KevinK" wrote:
> We have a series of SPs, poorly written, but that's not the issue here.
> Two machines, hardware identical, except mostly SAN (Prod) vs. Local Dasd
> (Dev), dual P3, 2gig ram. Pretty much just DB servers.
> But we've even tried the local Dasd on the Prod (Std) box.
> The process runs 2 1/2 to 3 hours on Dev, 9 to 10 on Prod. On Dev that's
> even everything on a single RAID drive. Prod at least has db and log
> separate, more at times.
> We've checked everything, and can't find any differences. Running on Prod
> Off hours when it's not even breaking a sweat.
> The proc does nothing special, except a cursor for Update (Ooouugghh).
> Any ideas what to look at ' We compared everything we can think of, no
> differences SQL, Win2000... No errors in the event log ..., Query plans
> match even cost estimates.
> Could Developer (which is Enterprise, right) provide that much boost vs.
> Std, even with nothing, that I know of that would be using EE features.
> Note to date everything I've wrritten, even some larger procs (10 to 20
> million row processes) seem to run as expected. Slightly faster on Prod.
>
> --
> KlK, MCSE|||Thanks Mike, unfortunately we have tested it using only the local drive on
Prod.
We have a 70gig local raid drive that is normally used for Backups etc.
It didn't help.
We do know the SAN drives are slower than locals, but it isn't that
significant, maybe 20% (I think 12 to 15 was the official number.
But when we run it on the SAN we are using multiple drives, log and data.
I am trying to get them to do a DBHammer
"Mike Epprecht (SQL MVP)" wrote:
[vbcol=seagreen]
> Hi
> As a good benchmark, run the SQL IO stress tools on your Dev and Productio
n
> server. It will indicate if your SAN is considerably slower than the local
> drives.
> Look at the links off [url]http://msmvps.com/epprecht/archive/2004/07/24/10591.aspx[/
url]
> EE does have some performance enhancements over standard (pre-fetch) but t
he
> Disk might be your issue.
> Regards
> Mike
> "KevinK" wrote:
>