
Friday, March 30, 2012

Performance hit for cross-database partitioned view (on same server)

What would be the performance difference between a view spanning multiple
tables in separate filegroups within the same database, and the same view
spanning tables in different databases (all on the same server)?
Since the execution plan is the same, I would think there would be no
difference, but some of the tests I have done seem to indicate a performance
hit of about 10% for the cross-database view.
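For context, the two layouts being compared might look like the sketch below (all database, table, and view names are made-up placeholders):

-- Same-database variant: member tables placed on separate filegroups.
CREATE VIEW dbo.SalesAll AS
SELECT * FROM dbo.Sales2010   -- table created ON filegroup FG2010
UNION ALL
SELECT * FROM dbo.Sales2011;  -- table created ON filegroup FG2011
GO

-- Cross-database variant: member tables live in separate databases on the
-- same instance and are referenced by three-part names.
CREATE VIEW dbo.SalesAllCrossDb AS
SELECT * FROM Sales2010Db.dbo.Sales2010
UNION ALL
SELECT * FROM Sales2011Db.dbo.Sales2011;
GO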
If there are updates, then there are costs related to committing the
transaction across databases.
You might want to use SET STATISTICS IO to see more details of the query
execution.
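A minimal way to do that comparison, continuing the hypothetical view names above:

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- Run the same query against each variant and compare the logical/physical
-- reads and CPU/elapsed times reported in the Messages tab.
SELECT COUNT(*) FROM dbo.SalesAll;
SELECT COUNT(*) FROM dbo.SalesAllCrossDb;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;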
--
Wei Xiao [MSFT]
SQL Server Storage Engine Development
http://blogs.msdn.com/weix
This posting is provided "AS IS" with no warranties, and confers no rights.
"DBA72" <DBA72@.discussions.microsoft.com> wrote in message
news:C764B4BA-4C3E-41B4-BC41-E85A894255EC@.microsoft.com...
> What would be the performance difference for a view spanning multiple
tables
> in separate filegroups but in the same database as opposed to the same
view
> spanning tables in different databases (all on the same server).
> Since the execution plan is the same, I would think that there is no
> difference but some of the tests I have done seem to indicuate a
performance
> hit of about 10% for the cross-database view.
|||If the databases are on the same server and running under the same instance
of SQL Server then the difference should be very small. Running under
different instances (even on the same server) requires cross-process
marshalling and incurs a performance impact.
"DBA72" wrote:

> What would be the performance difference for a view spanning multiple tables
> in separate filegroups but in the same database as opposed to the same view
> spanning tables in different databases (all on the same server).
> Since the execution plan is the same, I would think that there is no
> difference but some of the tests I have done seem to indicate a performance
> hit of about 10% for the cross-database view.


Monday, March 26, 2012

Performance Degradation

We have a view in production that has been working fine. Recently, its performance has changed significantly. The view is a "union" (not "union all") of select statements on four other views. In the past, it would take a few minutes to return the result set, but now it's taking 30+ minutes.

The individual select statements take only 1 minute, 2 minutes, 4 minutes and 9 minutes respectively. But when you run the overall select statement unioning the 4 together, it takes 30+ minutes, which suggests that the tempdb resources needed to execute the statement are being taxed. When we changed the union to union all, the statement took only 13 minutes to run.

Over the weekend, a decimal field was widened from decimal(9,2) to decimal(11,2). Replication was turned off before the field change and then turned back on after the change. (I hope I have the replication part explained right; I'm not familiar with replication as a process.) There was mention that any custom indexes might have been overwritten or lost due to the replication.

The DBA reindexed all the underlying tables for the views tonight.

My question is: if the reindexing doesn't improve the performance, where else can we check? What else can we do?

Check statistics? Check the transaction log/drive? Does calling a view have an impact on the transaction log? Another thought would be to place indexes on the views; we don't have any in place at the moment.

Unfortunately, I can't post the TSQL due to company rules.

Any ideas to improve the performance would be greatly appreciated.

Ken

Reindexing is a good start. Next you might want to take a look at the query plan. Perhaps you do not have an appropriate index.
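If reindexing alone doesn't help, refreshing statistics on the underlying tables is a cheap next check (a sketch; the table name is a placeholder):

-- Rebuild the statistics on one underlying table with a full scan.
UPDATE STATISTICS dbo.UnderlyingTable WITH FULLSCAN;

-- Or refresh any out-of-date statistics across the whole database.
EXEC sp_updatestats;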

Also, if you do not need duplicate removal, consider using 'union all' instead of just 'union'. When you specify only 'union', the system has to filter out duplicate rows, which increases the overhead.
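The difference is easy to see side by side (hypothetical view names):

-- UNION must deduplicate: the plan adds a sort or hash distinct over the
-- combined rows, which can spill into tempdb for large result sets.
SELECT col1, col2 FROM dbo.View1
UNION
SELECT col1, col2 FROM dbo.View2;

-- UNION ALL simply concatenates the inputs with no deduplication step.
SELECT col1, col2 FROM dbo.View1
UNION ALL
SELECT col1, col2 FROM dbo.View2;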

Friday, March 23, 2012

Performance Dashboard Reports Error

All,

I'm using the new Performance Dashboard Reports tool, and I'm getting an error when trying to view the main dashboard report. It is:

Error: Difference of two datetime columns caused overflow at runtime.

Has anyone seen this error before and know how to correct it?

Thanks,

Eric

I applied the fix from this blog, and it corrected the error.

http://blogs.msdn.com/sqlrem/archive/2007/03/07/Performance-Dashboard-Reports-Now-Available.aspx
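For what it's worth, the likely root cause is that DATEDIFF returns an int, so a millisecond-granularity difference overflows once the two datetimes are more than about 24.8 days apart. A quick illustration:

DECLARE @start datetime, @end datetime;
SET @start = '2012-01-01';
SET @end   = '2012-03-01';

SELECT DATEDIFF(ms, @start, @end); -- fails: ~5.2 billion ms overflows an int
SELECT DATEDIFF(ss, @start, @end); -- works: the second count fits in an int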

Eric


Wednesday, March 21, 2012

Performance Comparison - Code vs SqlDataSource, Gridview etc vs PlainControl

There are so many ways to use a database in ASP.NET/ADO.NET that I'm a bit confused about how they differ from a performance point of view.

So apparently SqlDataSource in DataReader mode is faster than in DataSet mode, at the cost of losing some of the built-in bolt-on functions.

What about SqlDataSource in DataReader mode vs manual binding in code? Say, creating a SqlDataSource ds1 and setting "DataSourceID" on the GridView, vs manually creating the SqlConnection, SqlCommand and SqlDataReader objects and manually binding the reader object to the GridView with the DataBind() method.

Also, GridView is a very convenient control for many basic tasks, but for more complex scenarios it requires lots of customization and modification. Now if I do not use GridView at all and instead build the entire thing from scratch with basic web controls such as table and label controls, manually reading and displaying everything from a DataReader object, how would the performance compare to the GridView databinding route?

I don't have a tested answer for you, just an opinion. Regarding your own connection vs SqlDataSource: there should be no difference. The SqlDataSource has to do everything you do, so I don't see any performance implication on a single screen. With that said, though, I could see some potential connection pooling issues, since minor differences in connection strings prevent ASP.NET from reusing the same connection. Without going into details, I think that a well-written connection object (one that gets reused) is more likely to allow pooling than typing your connection details into wizards all the time. In any case, I wouldn't worry about it at this stage of the game.

Regarding GridView vs writing all that stuff yourself: if you use all, most, or even some of that functionality, I don't think you should attempt to rewrite it -- it's just not worth it. You may even make performance worse if you write it wrong, and frankly, programmer time is worth more than CPU time.

If you're interested in pursuing the issue, Farhan Muhammed wrote a book, Real World ASP.NET Best Practices, which gives some fairly detailed numbers comparing different access methods and different controls -- he did real performance comparisons: http://www.amazon.com/Real-World-ASP-NET-Best-Practices/dp/1590591003

|||

Thanks a lot for your opinion. By "well-written connection object that gets reused", do you mean creating a SqlConnection object once and trying to re-use that same connection for as many command objects (select, insert, update tasks, etc.) as possible within the same scope?

|||

ilovecats:

do you mean creating a SqlConnection object once and trying to re-use that same connection for as many command objects (select, insert, update tasks, etc.) as possible within the same scope

Well, what I really mean is that you have a common object used by all pages and modules in your application that manages connections for you, along the lines of an Application Block like this one: http://msdn2.microsoft.com/en-us/library/aa480458.aspx. Using something like this not only hides the details of the connection from the programmer, it goes a long way towards assuring consistency in connection strings -- which is something you need if ASP.NET is to be able to pool connections (i.e., getting a connection is very expensive, so ASP.NET keeps a pool of connections around for reuse; if you request a connection and one is available from the pool with an identical connection string, you get that one instead of creating a new one).

Now, I'm sure that MS uses good programming when they obtain connections, but it used to be the case -- and I think still is largely the case -- that any differences in your connection string (even cosmetic ones, like case and extra white space) prevent sharing (pooling) connections. So if 2 people put in the same connection strings but with different case, they could not share connections in the pool. I don't know anymore how true this is; I seem to recall reading that leading and trailing spaces don't matter anymore, but I'm not sure. A data connection application block would typically get the connection info from a config file, which is to say that everyone uses the same config file, which is to say everyone who connects to sqlserver1.mydatabase has an identical connection string. OTOH, if you use the SqlDataSource, every programmer is entering the connection info (isn't that right? Perhaps I'm missing something because I haven't used them very much, but I think that's how it has to work -- if I'm wrong I hope someone will correct me), raising the possibility that the connection strings will be a little different, hence they cannot share connections.

I think I'll post a question on this topic to see if my understanding is current.

However, I frankly wouldn't worry about it too much. If you get into it, fine, but unless you're working on a high volume application it won't make a meaningful difference.

In my shop, we do use a data access application block that someone else wrote, but that didn't stop me from using the "no programming" features of the ASP.NET 2.0 SqlDataSource, because we just aren't a high volume app.

Tuesday, March 20, 2012

Performance baselines

Hi,
I'm currently looking into moving a table to its own filegroup with
a view to improving performance.
I want some way to prove that there is a performance improvement, so I
want to take a performance baseline of current activity to compare
against once I've moved the table. I'm going to use
sys.dm_io_virtual_file_stats to get an overview of what IO has been
taken away from the main filegroup, but I would also like to get some
table-level stats. The number of times the table is accessed should
not change, so I was wondering what table-level data is available for
me to show an improvement. That is, if there is an improvement!
Cheers
Simon

I think you can do one thing here:
Just note down the time taken (duration) for the queries that get fired
against this table with the existing setup, during off-peak or peak hours as
appropriate for your environment, and then run them (queries or procedures)
again once you are done with the move and compare the stats.
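For the file-level baseline, a before-and-after snapshot of sys.dm_io_virtual_file_stats might look like this (a minimal sketch; run it before the move, save the output, run it again afterwards, and diff the counters):

-- Cumulative IO per file of the current database since the last restart.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.name                  AS logical_file_name,
       vfs.num_of_reads,
       vfs.num_of_bytes_read,
       vfs.io_stall_read_ms,
       vfs.num_of_writes,
       vfs.num_of_bytes_written,
       vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(DB_ID(), NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id;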
Manu
"simon.pope.public@.gmail.com" wrote:
> Hi,
> I'm currently looking into moving a table to its own filegroup with
> a view to improving performance.
> I want some way to prove that there is a performance improvement so
> want to take a performance baseline of current activity to compare to
> once I've moved the table. I'm going to use
> sys.dm_io_virtual_file_stats to get an overview of what IO has been
> taken away from the main filegroup but would also like to get some
> table level stats. The number of times the table is accessed should
> not change so I was wondering what table level data is available for
> me to show an improvement. That is, if there is an improvement!
> Cheers
> Simon
|||On 13 Sep, 20:40, manu <m...@.discussions.microsoft.com> wrote:
> I think you can do one thing here:
> Just note down the time taken (duration) for the queries that get fired
> against this table with the existing setup, during off-peak or peak hours as
> appropriate for your environment, and then run them (queries or procedures)
> again once you are done with the move and compare the stats.
> Manu
>
> "simon.pope.pub...@.gmail.com" wrote:
> > Hi,
> > I'm currently looking into moving a table to its own filegroup with
> > a view to improving performance.
> > I want some way to prove that there is a performance improvement so
> > want to take a performance baseline of current activity to compare to
> > once I've moved the table. I'm going to use
> > sys.dm_io_virtual_file_stats to get an overview of what IO has been
> > taken away from the main filegroup but would also like to get some
> > table level stats. The number of times the table is accessed should
> > not change so I was wondering what table level data is available for
> > me to show an improvement. That is, if there is an improvement!
> > Cheers
> > Simon
Cheers Manu, I think you're right. It is a performance enhancement
after all, and so time taken to query data in the table is as good an
indicator as any.

Wednesday, March 7, 2012

performance

Hi All:
When I design the report using VB.NET and view it with the
Preview tab, it takes about 1 minute to get the result. However, once I deploy
the report and view it through Report Manager, it takes about an hour
to get the result. Why does this happen? Can anyone help me with this?
Thanks,
Kent

Kent,
When the report is previewed in the Report Designer, the RS web facades are
bypassed. Instead, the Report Designer invokes the Report Server report
rendering functions directly. Perhaps your performance issue is network
related. Does the Report Manager come up quickly when you navigate from one
folder to another? If you believe that this is a report rendering issue, look
at the Execution Log to find out how much time the Report Server spends
querying, processing and rendering the report.
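The Execution Log can be queried straight from the Report Server catalog database (a sketch; ReportServer is the default catalog name and may differ in your installation):

-- Break recent executions down into query, processing and rendering time (ms).
SELECT TOP 20
       ReportID,
       TimeStart,
       TimeDataRetrieval, -- time spent running the dataset queries
       TimeProcessing,    -- time spent processing the report data
       TimeRendering,     -- time spent rendering the output format
       Status,
       ByteCount,
       [RowCount]
FROM ReportServer.dbo.ExecutionLog
ORDER BY TimeStart DESC;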
--
Hope this helps.
---
Teo Lachev, MVP [SQL Server], MCSD, MCT
Author: "Microsoft Reporting Services in Action"
Publisher website: http://www.manning.com/lachev
Buy it from Amazon.com: http://shrinkster.com/eq
Home page and blog: http://www.prologika.com/
---
"kent kent" <kentkent@.discussions.microsoft.com> wrote in message
news:06DE6559-7DDB-4FD5-B13F-1262466ACC1B@.microsoft.com...
> Hi All:
> When I design the report using VB.NET and view it with the Preview tab, it
> takes about 1 minute to get the result. However, once I deploy the report
> and view it through Report Manager, it takes about an hour to get the
> result. Why does this happen? Can anyone help me with this?
> Thanks,
> Kent
|||In addition to what Teo suggested, I think that 1 minute is mighty long.
Many of the tables I report against have between 1 and 10 million rows. Most
of my reports take just a few seconds. How many records are shown in the
final report? How many records are in the base table? Are you using query
parameters or filters? Filters bring all the data over and then filter it.
Query parameters perform the filtering at the server and only bring down the
resulting data.
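In dataset terms, the difference looks like this (hypothetical table and parameter names):

-- Query parameter: the WHERE clause runs at the SQL Server, so only the
-- matching rows travel back to the Report Server.
SELECT OrderID, OrderDate, Amount
FROM dbo.Orders
WHERE Region = @Region;

-- A report filter on the same dataset would instead pull every row of
-- dbo.Orders to the Report Server and discard the non-matching rows there.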
--
Bruce Loehle-Conger
MVP SQL Server Reporting Services
"kent kent" <kentkent@.discussions.microsoft.com> wrote in message
news:06DE6559-7DDB-4FD5-B13F-1262466ACC1B@.microsoft.com...
> Hi All:
> When I design the report using VB.NET and view it with the Preview tab, it
> takes about 1 minute to get the result. However, once I deploy the report
> and view it through Report Manager, it takes about an hour to get the
> result. Why does this happen? Can anyone help me with this?
> Thanks,
> Kent