
Wednesday, March 28, 2012

Performance Difference between SQL Server 2005 Standard Edition and Enterprise Edition

Dear All,

We have a database which contains many tables with millions of
records. When we attach the database to a MS SQL Server 2005 Standard
Edition server and run some queries (having joins, filters, etc.), they
take a very long time to execute, while when we execute the same queries
on Enterprise Edition they run 10 times faster than on Standard
Edition.

Our database does not use any features which are present in Enterprise
Edition and not present in Standard Edition. We want to know what the
differences are between Standard Edition and Enterprise Edition for
performance. Why should we go for Enterprise Edition when Standard
Edition has all the features required?

We are presently using evaluation versions of SQL Server 2005 Standard
and Enterprise Editions.

Thanks and regards,
Nishant Saini
http://www.simplyjava.com|||Nishant Saini (nishant.saini@.gmail.com) writes:

Quote:

Originally Posted by

We have a database which contains many tables which have millions of
records. When We attach the database with MS SQL Server 2005 Standard
Edition Server and run some queries (having joins, filters etc.) then
they take very long time to execute while when We execute same queries
on Enterprise Edition then they run 10 times faster than on standard
edition.
>
Our database does not use any features which are present in Enterprise
Edition and not present in Standard Edition. We want to know what are
the differences between Standard Edition and Enterprise Edition for
performance. Why should we go for Enterprise Edition when Standard
Edition has all the features required.


There is a topic that covers this in Books Online. Off the top of my head
I don't recall anything specific, but I'm off to other things right now,
and don't want to look around.

Could you post one of your queries you've been testing with and the
query plan on Standard and Enterprise?

I presume that you are running the two editions on the same hardware?

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx|||Nishant Saini wrote:

Quote:

Originally Posted by

Dear All,
>
We have a database which contains many tables which have millions of
records. When We attach the database with MS SQL Server 2005 Standard
Edition Server and run some queries (having joins, filters etc.) then
they take very long time to execute while when We execute same queries
on Enterprise Edition then they run 10 times faster than on standard
edition.
>
Our database does not use any features which are present in Enterprise
Edition and not present in Standard Edition. We want to know what are
the differences between Standard Edition and Enterprise Edition for
performance. Why should we go for Enterprise Edition when Standard
Edition has all the features required.
>


The Enterprise Edition offers improvements over the Standard Edition,
especially on parallel operations and caching. If you have a
significant amount of data you will definitely notice better
performance with this edition.

Regards,
lucm|||Are there any indexed views in the database? From Books Online
(BOL):

"Indexed views can be created in any edition of SQL Server 2005. In
SQL Server 2005 Enterprise Edition, the query optimizer automatically
considers the indexed view. To use an indexed view in all other
editions, the NOEXPAND table hint must be used."

So if there were indexed views, but no references to the table hint,
performance could certainly be far better using the Enterprise
edition.
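As a sketch (with a hypothetical view name), this is what referencing an indexed view with that hint looks like on non-Enterprise editions:

```sql
-- On Standard Edition the optimizer ignores the view's index unless the
-- query names the view with NOEXPAND. vSalesSummary is a hypothetical
-- indexed view.
SELECT v.OrderYear, SUM(v.TotalDue) AS TotalDue
FROM dbo.vSalesSummary AS v WITH (NOEXPAND)
WHERE v.OrderYear = 2006
GROUP BY v.OrderYear;
```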

Roy Harvey
Beacon Falls, CT


|||Although Enterprise Edition is more aggressive when it comes to
read-aheads, and has some features that can improve performance in some
very specific situations (such as parallel index creation, Advanced
Scanning, etc.), if both systems have the same amount of memory, then I
would not expect a 10-fold performance difference.

How much memory does the system have, and have you assigned this memory
to SQL Server?
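For reference, a sketch of how that memory assignment can be checked or capped with sp_configure (3072 MB is just an example value):

```sql
-- Enable advanced options, then set an upper bound on SQL Server's memory.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 3072;  -- example value
RECONFIGURE;
```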

You could copy the "fast" database to the "slow" configuration with
detach/attach and see if that makes a difference. This way you can rule
out any database differences (such as one database with up-to-date
statistics and another with unusable or no statistics).

HTH,
Gert-Jan


|||"Nishant Saini" <nishant.saini@.gmail.comwrote in message
news:1166180480.672397.223250@.73g2000cwn.googlegro ups.com...

Quote:

Originally Posted by

Dear All,
>
We have a database which contains many tables which have millions of
records. When We attach the database with MS SQL Server 2005 Standard
Edition Server and run some queries (having joins, filters etc.) then
they take very long time to execute while when We execute same queries
on Enterprise Edition then they run 10 times faster than on standard
edition.


Assuming identical hardware, a 10x speed difference is usually the
difference between memory and disk access. Clear the procedure cache and
buffer cache prior to side-by-side testing.
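A sketch of clearing both caches before each timed run (requires sysadmin rights; not something to run on a production server):

```sql
CHECKPOINT;              -- write dirty pages to disk first
DBCC DROPCLEANBUFFERS;   -- empty the data (buffer) cache
DBCC FREEPROCCACHE;      -- empty the plan (procedure) cache
```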

Quote:

Originally Posted by

>
Our database does not use any features which are present in Enterprise
Edition and not present in Standard Edition. We want to know what are
the differences between Standard Edition and Enterprise Edition for
performance. Why should we go for Enterprise Edition when Standard
Edition has all the features required.


I only consider EE if the hardware requires it.


|||On Fri, 15 Dec 2006 15:54:07 -0600, "Russ Rose" <russrose@.hotmail.com>
wrote:

Quote:

Originally Posted by

>Assuming identical hardware, a 10x speed difference is usually the
>difference between memory and disk access.


My experience, for what it is worth, has been that extreme performance
differences - better or worse - are usually the result of different
execution plans.

Roy Harvey
Beacon Falls, CT|||Thanks for the responses...

Yes, the execution plans are different in the two databases.
Why are the execution plans so different in the two versions of SQL
Server 2005? Can we control the execution plans?
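One way to capture the actual plan on each edition, so the two can be compared side by side (the query shown is a hypothetical stand-in):

```sql
-- Returns the actual execution plan as XML along with the results.
SET STATISTICS XML ON;

SELECT o.OrderID, c.CustomerName
FROM dbo.Orders AS o
JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID
WHERE o.OrderDate >= '20061201';

SET STATISTICS XML OFF;
```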


If there are no indexed views in the database and data is selected from
tables directly, then should there be any difference in performance?

Quote:

Originally Posted by

>"Indexed views can be created in any edition of SQL Server 2005. In
>SQL Server 2005 Enterprise Edition, the query optimizer automatically
>considers the indexed view. To use an indexed view in all other
>editions, the NOEXPAND table hint must be used."


Thanks and regards,
Nishant Saini
http://www.simplyjava.com|||Nishant Saini (nishant.saini@.gmail.com) writes:

Quote:

Originally Posted by

Thanks for the responses...
>
Yes, The execution plans are different in both the databases.
Why the execution plans are so different in both the versions of SQL
Server 2005?


There could be many reasons for that. Roy Harvey mentioned indexed views
for instance.

If you want a better answer, please post the query and the two plans,
so that we know what we are talking about.

Quote:

Originally Posted by

Can we control the execution plans?


Yes. SQL 2005 actually permits you to specify the plan exactly by using
plan guides. This is definitely an advanced feature, and nothing you
should use on a whim.
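A minimal plan-guide sketch (the name, statement, and hint are hypothetical):

```sql
-- Attaches an OPTIMIZE FOR hint to any matching parameterized statement
-- without touching the application's SQL.
EXEC sp_create_plan_guide
    @name            = N'GuideOrderLookup',
    @stmt            = N'SELECT * FROM dbo.Orders WHERE CustomerID = @cust',
    @type            = N'SQL',
    @module_or_batch = NULL,
    @params          = N'@cust int',
    @hints           = N'OPTION (OPTIMIZE FOR (@cust = 1000))';
```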

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx|||As Nishant said we are not using any special features of Enterprise
Edition. Indexed views are not being used and data is taken directly
from tables.

Regarding the execution plan, it may be different due to different
sizes of both databases. We are right now trying to check execution
plan for similar databases on both editions. We will post our findings
as soon as we get them.

Right now we need to know:

1. Does Enterprise Edition have its own query execution mechanism (one
that is different from, and faster than, the Standard Edition mechanism)
or not?
2. If it does, what kind of performance differences will we see while
using Standard Edition?
3. Is there a specific database size (or table size) up to which Standard
Edition is fine, beyond which one should go for Enterprise?

Thanks in advance
Nitin Goyal


|||<nitin.goel@.daffodildb.comwrote in message
news:1166429585.200709.298530@.80g2000cwy.googlegro ups.com...

Quote:

Originally Posted by

As Nishant said we are not using any special features of Enterprise
Edition. Indexed views are not being used and data is taken directly
from tables.
>
Regarding the execution plan, it may be different due to different
sizes of both databases. We are right now trying to check execution
plan for similar databases on both editions. We will post our findings
as soon as we get them.
>
Right now we need to know
>
1. Enterprise Edition has its own Query running mechanism ( which is
different and faster than Standard Edition mechanism) or NOT ?


The query engine is the same.

Quote:

Originally Posted by

2. If it is special, what kind of performance differences we will get
while using Std Edition.
3. Is there any specific DB size ( or table sizes ) for which Standard
Edition is fine and after that one should go for Enterprise.


Not specifically.

However, Enterprise Edition supports more physical RAM, which can make
some queries faster.

In addition, it can support things like parallelized index builds.

In general EE is more for specific features (like clustering beyond 2
nodes) than pure size.

If you have a mission-critical database that's 1 GB, but you want an N+2
architecture, you'll want EE.

If you have a 500 GB data warehouse that can be down for periods of
time during restores, etc., Standard Edition may be fine.

Check out MS's page on feature differences.


|||

Quote:

Originally Posted by

Right now we need to know
>
1. Enterprise Edition has its own Query running mechanism ( which is
different and faster than Standard Edition mechanism) or NOT ?
2. If it is special, what kind of performance differences we will get
while using Std Edition.
3. Is there any specific DB size ( or table sizes ) for which Standard
Edition is fine and after that one should go for Enterprise.
>
Thanks in advance
Nitin Goyal


The engine is not the same. Have a look here:
http://www.microsoft.com/sql/prodin...e-features.mspx
especially in "Scalability and Performance" and "High Availability"
sections.

As for choosing Standard or Enterprise, you should think about
concurrent users and operations, not database size.

Regards,
lucm|||lucm wrote:

Quote:

Originally Posted by

>

Quote:

Originally Posted by

Right now we need to know

1. Enterprise Edition has its own Query running mechanism ( which is
different and faster than Standard Edition mechanism) or NOT ?
2. If it is special, what kind of performance differences we will get
while using Std Edition.
3. Is there any specific DB size ( or table sizes ) for which Standard
Edition is fine and after that one should go for Enterprise.

Thanks in advance
Nitin Goyal


>
The engine is not the same. Have a look here:
http://www.microsoft.com/sql/prodin...e-features.mspx
especially in "Scalability and Performance" and "High Availability"
sections.
>
As for choosing Standard or Enterprise, you should think about
concurrent users and operations, not database size.
>
Regards,
lucm


Although the storage engine is different (or at least exposes more
features), the query optimizer is the same. So if you are not using
indexed views, can live with 4 CPUs, and do not need the advanced OLAP
tools, then you can expect the same query performance. The only
documented exception is Advanced Scan, which could improve performance
in OLAP-type situations.

From a manageability and high-availability point of view, there are many
reasons to choose EE, but IMO not from a performance point of view
(given the restrictions above).

If you disagree with me, then please mention a specific feature that
will cause EE to outperform SE (on regular DML statements).

Gert-Jan|||(nitin.goel@.daffodildb.com) writes:

Quote:

Originally Posted by

As Nishant said we are not using any special features of Enterprise
Edition. Indexed views are not being used and data is taken directly
from tables.
>
Regarding the execution plan, it may be different due to different
sizes of both databases.


If you want to compare Standard and Enterprise, you must of course
use the same database on the same hardware. Otherwise the test is
meaningless.

Quote:

Originally Posted by

1. Enterprise Edition has its own Query running mechanism ( which is
different and faster than Standard Edition mechanism) or NOT ?
2. If it is special, what kind of performance differences we will get
while using Std Edition.
3. Is there any specific DB size ( or table sizes ) for which Standard
Edition is fine and after that one should go for Enterprise.


Rather than asking questions on the net and hoping that people will look
up the answers for you, why not look in this place in Books Online
instead:
ms-help://MS.SQLCC.v9/MS.SQLSVR.v9.en/instsql9/html/81f3e917-884a-4cc8-aca2-0a5fea89f355.htm

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx|||Here is the H/W and database configuration in this case.

System: HP ProLiant ML370 G4 Server (Intel Xeon 3.2 GHz on Intel
7520 chipset; dual-CPU capable)
RAM: 4 GB DDR2 ECC
HDD: 2 * 73 GB SCSI (15,000 RPM), RAID 0
DB size: 20 GB
Downtime: 2 hours a day, i.e. 5 to 7 AM.

Parallel queries: parallel query processing runs only on multiple
processors. In our case only one processor is available to SQL Server.

RAM: it is an OS issue and not a SQL Server one. There is no RAM support
specification in SQL Server (EE / SE). By the way, the OS used by us is
Win 2003 Server EE.

FYI, we have searched MS SQL Server Books Online, seen the differences
between SE and EE given by MS, and asked our local vendor, but he knows
nothing. The queries run by us are select queries with joins on around
5-10 tables. So the picture is still murky and Microsoft is not
answering.

And Mr Sommarskog, we do not want anyone to search for us. We are
looking for very specific answers and we will be thankful if someone
can help us (we feel others have also faced the same problems).

Thanks
Nitin Goyal


|||

Quote:

Originally Posted by

System: - HP Proliant ML 370 G4 Server ( Intel Xeon 3.2 GH on Intel
7520 Chipset; Dual CPU capable )
RAM: - 4 GB DDR2 ECC
HDD: - 2 * 73 GB SCSI ( 15000 RPM ) RAID 0
DB size: - 20 GB
Downtime: - 2 hours a day i.e. morning 5 to 7 AM.
>
Parallel queries: Parallel Query processing runs only on multiple
processors. In our case only one processor is available to SQL Server.
>
RAM: it is an OS issue and not of SQL Server. There is no RAM support
specification in SQL Server (EE / SE). By the way, OS used by us is Win
2003 Server EE.
>


First of all, RAID-0 is a no-go. If this is a production server you
need at least a mirror (RAID-1).

Now, do you *need* EE or not? Without a load average it is difficult to
say. Since you purchased a server with a single CPU and a small amount
of RAM, you probably don't have too many concurrent users.

One of my customers has up to 100 concurrent users; he has weaker
hardware than yours and a bigger database, and he is very happy with SE.
They make quite extensive use of the server, and they can afford the
downtime when they restore lost data or rebuild indexes.

Regards,
lucm|||nitin.goel@.daffodildb.com wrote:

Quote:

Originally Posted by

>
Here is the H/W and database configuration in this case.
>
System: - HP Proliant ML 370 G4 Server ( Intel Xeon 3.2 GH on Intel
7520 Chipset; Dual CPU capable )
RAM: - 4 GB DDR2 ECC
HDD: - 2 * 73 GB SCSI ( 15000 RPM ) RAID 0
DB size: - 20 GB
Downtime: - 2 hours a day i.e. morning 5 to 7 AM.
[snip]


I am not sure you are aware of this, but EE is approximately 4 times
as expensive as SE (when using CPU-based licensing). There is no way you
can make a business case that justifies spending an additional $15,000
on EE for performance reasons. If you were to spend another x dollars on
performance it would definitely be in the hardware, in whatever area is
limiting performance (in your case most likely the I/O area). But if
you don't have to spend the money now, then you can postpone that
decision and see how it runs with the current hardware.

As for your maintenance window: it should be more than sufficient to
handle any problems with a database of just 20 GB, so from that point of
view you don't need EE either.

Gert-Jan|||(nitin.goel@.daffodildb.com) writes:

Quote:

Originally Posted by

Here is the H/W and database configuration in this case.
>
System: - HP Proliant ML 370 G4 Server ( Intel Xeon 3.2 GH on Intel
7520 Chipset; Dual CPU capable )
RAM: - 4 GB DDR2 ECC
HDD: - 2 * 73 GB SCSI ( 15000 RPM ) RAID 0
DB size: - 20 GB
Downtime: - 2 hours a day i.e. morning 5 to 7 AM.
>
Parallel queries: Parallel Query processing runs only on multiple
processors. In our case only one processor is available to SQL Server.


Does this mean that there are more CPUs in the box, but other
applications get to use those? Or is there only one CPU
in the box, and SQL Server is the only app on the machine?

The main reason I ask is that if there are other apps on the machine,
it becomes even more difficult to assess your questions.

Quote:

Originally Posted by

FYI, we have searched MS SQL Server books online + seen the differences
of SE and EE given by MS + asked our local vendor but he knows nothing.
And the queries run by us are select queries with joins on around 5-10
tables. So the picture is still murky and Microsoft is not answering.


And since you never post any of the queries, and nor the query plans,
you are not giving us any chances to straighten things out.

But I would like to agree with what Gert-Jan said. If you have extreme
performance requirements, you may want to cough up the extra money for
Enterprise to get whatever small fraction you can win. But if you
had those requirements, you would not settle for a single-CPU box.

So go for Standard. Keep in mind that if your business expands and
you will want Enterprise one day, you can always upgrade at that point.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx|||<nitin.goel@.daffodildb.comwrote in message
news:1166597810.015478.291810@.a3g2000cwd.googlegro ups.com...

Quote:

Originally Posted by

Here is the H/W and database configuration in this case.
>
System: - HP Proliant ML 370 G4 Server ( Intel Xeon 3.2 GH on Intel
7520 Chipset; Dual CPU capable )
RAM: - 4 GB DDR2 ECC
HDD: - 2 * 73 GB SCSI ( 15000 RPM ) RAID 0


RED ALERT!!!

Switch to RAID 1 or add at least one more disk and go RAID 5 or kiss your
data (and job?) goodbye.|||Thank you for replies. Some more information from my side:

1. Sorry I have made a mistake, our DB size is 50 GB not 20 GB (typing
mistake).
2. There are around 100 concurrent users accessing this db from 30
locations.
3. There is only one CPU in the box and only SQL Server runs on it.
4. Regarding I/O performance, what more can be done in I/O area (
that's a very novice Q but I have never worked in this field ).
5. We will post some queries by Monday morning.

BTW I know price of both SE and EE but we will spend only if we see any
gain EE. And we are also ready to spend on I/O if it is good for us.

Thanks again
Nitin Goyal|||&lt;nitin.goel@.daffodildb.comwrote in message
news:1166681115.492154.131080@.79g2000cws.googlegro ups.com...

Quote:

Originally Posted by

Thank you for replies. Some more information from my side:
>
1. Sorry I have made a mistake, our DB size is 50 GB not 20 GB (typing
mistake).
2. There are around 100 concurrent users accessing this db from 30
locations.


Again, the TYPE of query really matters: whether each query takes 100
milliseconds or 10 seconds.

Quote:

Originally Posted by

3. There is only one CPU in the box and only SQL Server runs on it.


Well, you may or may not be CPU bound; again, it depends a lot on what
you're doing here.

Quote:

Originally Posted by

4. Regarding I/O performance, what more can be done in I/O area (
that's a very novice Q but I have never worked in this field ).


Add disks. Lots of disks. Fast disks.

You're better off with ten 40 GB disks than one 400 GB disk, for example.

(Of course that's ignoring RAID and doing things like putting your logs
on their own physical disk group.)

Just as a point of reference, we had a system handling 14 million
selects and inserts a day running on a 6-year-old 4-way Xeon machine
until recently. We recently moved this to a modern 2-way box AND added a
bunch of other queries... and if anything it's faster than before.


|||
Quote:

Originally Posted by

1. Sorry I have made a mistake, our DB size is 50 GB not 20 GB (typing
mistake).
2. There are around 100 concurrent users accessing this db from 30
locations.
3. There is only one CPU in the box and only SQL Server runs on it.
4. Regarding I/O performance, what more can be done in I/O area (
that's a very novice Q but I have never worked in this field ).
BTW I know price of both SE and EE but we will spend only if we see any
gain EE. And we are also ready to spend on I/O if it is good for us.


Basic rules for I/O on a database server: a RAID-10 array using as
many disks as possible (an even number) is the best scenario for the
databases, while a 2-disk RAID-1 array is great for transaction logs.

If you use temporary tables, having a distinct 2-disk RAID-1 array for
tempdb is excellent (even better: a 4-disk RAID-10). As for the disks
themselves, the smaller and faster, the better; your 73 GB 15k drives
are excellent.

If you can afford it, having one RAID controller for the databases
(RAID-10) and another one for the transaction logs and tempdb (RAID-1 or
RAID-10) is optimal.

Also don't be shy to add some RAM, as much as you can.
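For tempdb, a sketch of moving its files onto a dedicated array (the paths are hypothetical; tempdev and templog are the default logical file names; the move takes effect after the SQL Server service restarts):

```sql
ALTER DATABASE tempdb MODIFY FILE
    (NAME = tempdev, FILENAME = N'F:\TempDB\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE
    (NAME = templog, FILENAME = N'G:\TempDBLog\templog.ldf');
```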

Regards,
lucm

Friday, March 9, 2012

Performance

In a table with 400,000 rows there is an ntext field that contains HTML
code of about 15-20 KB for each row.
Do you think that if I set <field> = '' it would improve performance
for new inserts, or do I have to delete rows?
Any help appreciated.
Regards.
Fabri
(Lattepiù chi può darti di più?)|||Hi
No, leave it as NULL until you use it. With NULL, there is no pointer
allocated for the NTEXT data. As soon as you supply some value, the pointer
needs to be allocated against a page.
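A small sketch of the difference (hypothetical table):

```sql
CREATE TABLE dbo.Pages
(
    PageID int IDENTITY PRIMARY KEY,
    Html   ntext NULL
);

INSERT dbo.Pages (Html) VALUES (NULL);  -- no text pointer, no text page
INSERT dbo.Pages (Html) VALUES (N'');   -- pointer plus an allocated text page
```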
Regards
Mike
"Fabri" wrote:

> In a table with 400.000 rows there is a ntext field that contains html
> code of about 15/20 k for each row.
> Do you think that if I set <field> = '' should I increase performance
> for new insert or I have to delete rows?
> Any help appreciated.
> Regards.
> --
> Fabri
> (Lattepiù chi può darti di più?)
>|||Mike Epprecht (SQL MVP) wrote:
> Hi
> No, leave it as NULL until you use it. With NULL, there is no pointer
> allocated for the NTEXT data. As soon as you supply some value, the
> pointer needs to be allocated against a page.
Thx Mike, how can I "give" the space to the OS?
I tried so:
dbcc shrinkdatabase ('<db>', truncateonly)
but it seems to not give space to os...
dbcc shrinkdatabase ('<db>') results in a very long transaction, and I
would rather avoid that solution...
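A narrower alternative worth trying (the logical file name is hypothetical): shrink only the data file and release just the unused space at its end, which avoids moving pages and the long transaction:

```sql
-- Releases free space at the end of the file to the OS without
-- relocating any pages.
DBCC SHRINKFILE (N'MyDb_Data', TRUNCATEONLY);
```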
Regards.
Fabri
(Lattepiù chi può darti di più?)|||Hi
If your data is all on the same day, you could default the date when
you store it, and then you would not need to use CONVERT; otherwise you
may want to think about storing the time separately.
John
"vanitha" wrote:

> hi,
> my query
> Select A.COL1,B.COL1 from
> A,B
> where convert(varchar(8),A.COL3,108)= convert(varchar(8),B.COL3,108)
> provided COL3 is part of the key in both the tables.
> So if we try to convert the index of the table to time alone the
> performance of the query is affected and it's taking a longer time to
> execute...
> how to solve this?
> thanks
> vanitha
>
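
One way to apply John's suggestion is to materialize the time portion in its own indexed column, so the join no longer has to wrap the key in CONVERT on every row. A sketch, using the table names from the post (the COL3_Time column and index names are made up):

```sql
-- Store the time-of-day once per row instead of CONVERTing COL3 per join.
ALTER TABLE A ADD COL3_Time char(8) NULL;
ALTER TABLE B ADD COL3_Time char(8) NULL;

UPDATE A SET COL3_Time = CONVERT(char(8), COL3, 108);
UPDATE B SET COL3_Time = CONVERT(char(8), COL3, 108);

CREATE INDEX IX_A_Col3Time ON A (COL3_Time);
CREATE INDEX IX_B_Col3Time ON B (COL3_Time);

-- The join is now index-friendly:
SELECT A.COL1, B.COL1
FROM A
JOIN B ON A.COL3_Time = B.COL3_Time;
```

The new columns must be kept in sync on insert/update (a trigger or the inserting code can do that); the payoff is that the optimizer can seek on the indexes instead of scanning and converting both tables.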

Monday, February 20, 2012

Percentage/Proportion

I'm new to Analysis Services 2005 and MDX. I'm trying to create a calculated member that contains the proportion of cases (using [Measures].No) for each dimension in the cube. The following code gives me the percentage of the total number of members in each dimension. This is what I need for attribute hierarchies (single-level), but for user-defined hierarchies (multi-level) I want the percentage of the level above. For example, in a time dimension, I want the proportion of cases in each year (this works) and the proportion of cases in each quarter of a given year (this doesn't work).

CALCULATE;

CREATE MEMBER CURRENTCUBE.[MEASURES].PercentCases
AS CASE
    WHEN IsEmpty([Measures].No) THEN NULL
    ELSE ((
        [DimLocation].[DimLocation].CurrentMember,
        [DimLevel].[DimLevel].CurrentMember,
        [DimDischargeYear].[DimDischargeYear].CurrentMember,
        [DimDiagnosis].[DimDiagnosis].CurrentMember,
        [DimICUAdmSource].[DimICUAdmSource].CurrentMember,
        [DimSeverity].[DimSeverity].CurrentMember,
        [DimGlucose].[DimGlucose].CurrentMember,
        [DimMeanGlucose].[DimMeanGlucose].CurrentMember,
        [DimAlbumin].[DimAlbumin].CurrentMember,
        [DimAdmissions].[DimAdmissions].CurrentMember,
        [DimDischargeHour].[DimDischargeHour].CurrentMember,
        [DimUnitType].[DimUnitType].CurrentMember,
        [Measures].No) /
    (Root([DimLocation]), Root([DimLevel]), Root([DimDischargeYear]),
     Root([DimDiagnosis]), Root([DimICUAdmSource]), Root([DimSeverity]),
     Root([DimGlucose]), Root([DimMeanGlucose]), Root([DimAlbumin]),
     Root([DimAdmissions]), Root([DimDischargeHour]), Root([DimUnitType]),
     [Measures].No))
END,
FORMAT_STRING = "#,0.0%",
VISIBLE = 1;

In addition, I want the user to be able to get cross tabulations among the dimensions. For example, they may want the proportion of cases in each year at a given location, or the proportion of cases in each year by product. In this case, I'd need a generalized formula because I'll never be sure which dimension(s) they may cross-tab.

Can anyone help? Any suggestions would be greatly appreciated.

SAP2006

Hi. To get the percentage of the level above, use the MDX Parent function in the denominator of your calculation: <dim>.<hier>.CurrentMember.Parent. However, there is a hazard in using the .Parent function when CurrentMember is at the highest level in the hierarchy. In that case you reference a space outside the cube which is null, or 0, and you get a divide-by-zero error. We can avoid the divide-by-zero error, but it adds a lot of extra checking to our calculated member.

There are two sets of checks in the MDX below which avoid the divide by zero error.

The first set of checks executes before the calculation is done, just as your ISEMPTY() check on the measure does. First we check whether <dim>.<hier>.CurrentMember.Level.Ordinal is <> 0; the Level.Ordinal property identifies whether we're at the highest level in the hierarchy. If we're not at the highest level (.Ordinal <> 0), then we check the tuple (<dim>.<hier>.CurrentMember.Parent, Measures.<desired measure>) for the EMPTY condition.

IIF([Product].[Product Categories].CurrentMember.Level.Ordinal <> 0 AND

ISEMPTY(([Product].[Product Categories].CurrentMember.Parent, Measures.[Customer Count]))

If both conditions are met we return NULL and avoid the divide by zero error.

The second set of checks occurs in the tuple which defines the denominator of our division. Each <dim>.<hier>.CurrentMember is checked to see if we're at the highest level in the hierarchy:

IIF([Product].[Product Categories].CurrentMember.Level.Ordinal = 0

If we are at the highest level then we use the DefaultMember value in the hierarchy. If not at the highest level then we use <dim>.<hier>.CurrentMember.Parent in the denominator calculation.

A major drawback to this technique is the detailed checking of each <dim>.<hier> within the nested IIF(). When you have a lot of hierarchies to include, the construction and maintenance of the nested IIF() is tedious. However, if you're diligent you can accomplish it.

The example below uses a subset of hierarchies from the Adventure Works DW sample database. I didn't reference EVERY hierarchy in Adventure Works because it would have been very long and tedious, so my percentage calculations are ONLY valid across the <dim>.<hier> I reference in the calculated member Measures.[Percent Customer Count].

WITH MEMBER Measures.[Percent Customer Count] AS
'
IIF(ISEMPTY([Measures].[Customer Count]), NULL,
IIF([Product].[Product Categories].CurrentMember.Level.Ordinal <> 0 AND
    ISEMPTY(([Product].[Product Categories].CurrentMember.Parent, Measures.[Customer Count])), NULL,
IIF([Date].[Calendar].CurrentMember.Level.Ordinal <> 0 AND
    ISEMPTY(([Date].[Calendar].CurrentMember.Parent, Measures.[Customer Count])), NULL,
IIF([Customer].[Customer Geography].CurrentMember.Level.Ordinal <> 0 AND
    ISEMPTY(([Customer].[Customer Geography].CurrentMember.Parent, Measures.[Customer Count])), NULL,
// Do the calculation
([Product].[Product Categories].CurrentMember
 ,[Date].[Calendar].CurrentMember
 ,[Customer].[Customer Geography].CurrentMember
 ,[Measures].[Customer Count])
/
(IIF([Product].[Product Categories].CurrentMember.Level.Ordinal = 0, [Product].[Product Categories].DefaultMember, [Product].[Product Categories].CurrentMember.Parent)
 ,IIF([Date].[Calendar].CurrentMember.Level.Ordinal = 0, [Date].[Calendar].DefaultMember, [Date].[Calendar].CurrentMember.Parent)
 ,IIF([Customer].[Customer Geography].CurrentMember.Level.Ordinal = 0, [Customer].[Customer Geography].DefaultMember, [Customer].[Customer Geography].CurrentMember.Parent)
 ,[Measures].[Customer Count])
))))
' ,SOLVE_ORDER=10, FORMAT_STRING="#,0.0%"

SELECT
{[Measures].[Customer Count], Measures.[Percent Customer Count]} ON COLUMNS
,{[Product].[Product Categories].[All]
 ,DESCENDANTS([Product].[Product Categories].[All], [Product].[Product Categories].[Category])} ON ROWS
FROM [Adventure Works]
WHERE (
 [Date].[Calendar].[Calendar Year].&[2003]
 ,[Customer].[Customer Geography].[Country].&[United States]
)

My code copy into this window didn't carry over very well. If you'd like the .MDX file with the query then drop me a note at paul_goldy@.hotmail.com

Good Luck.

PGoldy

|||

PGoldy,

Thank you...that worked!!! I do have a few questions though.

1) My understanding was that the solve order was no longer necessary in AS2005 because it relies on the order of the MDX script. Therefore, I did not include the solve order=10 because I wasn't sure if 10 was the correct number given the other calculations in the cube. Is it OK to eliminate it?

2) I'm not sure how to ask this question, so please bear with me if it doesn't make sense. Since the calculated percent is for the current dimension the end user is browsing by.... what happens if the first clause of the IIF in the first dimension is true? Does it return NULL and then not move on to the other dimensions? For example, if the following conditions are both met

[Product].[Product Categories].CurrentMember.Level.Ordinal <> 0 AND

ISEMPTY(([Product].[Product Categories].CurrentMember.Parent, Measures.[Customer Count]))

will it return a NULL value for the % when viewing the Product dimension and then STOP or will it continue to evaluate all the other conditions/dimensions as well.

Thanks again.

SAP2006

|||

Hi. Glad to be of help, and thanks for the questions.

(1) You are correct about the SOLVE_ORDER, it is not required and you can eliminate it. I'm afraid I used it out of habit.

(2) Your second question makes sense. Yes, if one of the initial IIF() conditions is met (.Ordinal <> 0 AND ISEMPTY()) then the expression returns NULL as the result. I'm afraid this is required because if ANY of the denominator values are empty then you get a divide-by-zero error, even though the user may only "see" the specific dimension they are drilling up/down on. Sorry if that's bad news, but that's how the calculations work.

Good Luck.

PGoldy

|||

I think we will be OK with the nested IIFs. I just wanted to make sure that I understood what the code was doing. Thanks again for all your help.

SAP2006

|||

Hi PGoldy,

The code you provided previously works perfectly if we are only viewing one dimension at a time (column percentages). However, when I do a cross-tab of two or more dimensions, the code is giving me column percentages and what I really need are row percentages. I know I need to change the denominator, but I'm unsure how to write the MDX code given that the denominator could be any dimension in the cube and therefore will be different depending on which dimension the user is browsing. Is there a way to do this?

[DimLocation].[DimLocation].CurrentMember,[Measures].No

/

?

where ? could be any of the following:

[DimDischargeYear].[DimDischargeYear].CurrentMember, [Measures].No

[DimAdmissions].[DimAdmissions].CurrentMember, [Measures].No

.........

[DimLevel].[DimLevel].CurrentMember, [Measures].No

Any help would be appreciated.

Thanks, SAP2006

|||

Sorry, but I don't understand why the percentage calculation only shows correctly on columns. The calculation is not specific to rows or columns; it only adheres to the relative position of members in a hierarchy (children, parents). Perhaps you could provide a complete MDX query which shows the problem?

Also, I'm a little confused what you mean by a "...cross tab of two or more dimensions...". Do you mean you have nested dimensions on the rows and columns, or do you use the PAGE Axis and have multiple tabs of data?

PGoldy

|||

Sorry, I didn't explain myself better. You are correct. Currently, BIDS is calculating a percentage relative to the position of members in the hierarchy (see Table 1), for example, 3849/23343 = 16.5%. What I would like is a percentage of the row (see Table 2), for example 3849/7440 = 51.7%.

By cross-tab, I simply meant a query using two or more dimensions. For example in the query below, I have Location on rows and Discharge Year on columns. There may be instances where I have nested dimensions as well, for example, Location and Admissions on rows and Discharge Year on columns. I hope this clarifies things.

TABLE 1 (percentage of grand total)

Location      2004: N / PercentCases   2005: N / PercentCases   Grand Total: N / PercentCases
A             3849  / 16.5%            3591  / 15.4%            7440  / 31.9%
B             2887  / 12.4%            2841  / 12.2%            5728  / 24.5%
C             5103  / 21.9%            5072  / 21.7%            10175 / 43.6%
Grand Total   11839 / 50.7%            11504 / 49.3%            23343 / 100.0%

TABLE 2 (percentage of row total)

Location      2004: N / PercentCases   2005: N / PercentCases   Grand Total: N / PercentCases
A             3849  / 51.7%            3591  / 48.3%            7440  / 31.9%
B             2887  / 50.4%            2841  / 49.6%            5728  / 24.5%
C             5103  / 50.2%            5072  / 49.8%            10175 / 43.6%
Grand Total   11839 / 50.7%            11504 / 49.3%            23343 / 100.0%
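
The difference between the two tables is only which total each count is divided by. As a plain T-SQL sketch of the arithmetic (the data is hard-coded from the tables above; this illustrates the two denominators, it is not the cube calculation):

```sql
SELECT Location, Yr, N,
       -- Table 1: divide by the grand total of all cells
       CAST(100.0 * N / SUM(N) OVER ()                      AS decimal(5,1)) AS PctOfTotal,
       -- Table 2: divide by the total for this row (location)
       CAST(100.0 * N / SUM(N) OVER (PARTITION BY Location) AS decimal(5,1)) AS PctOfRow
FROM (SELECT 'A' AS Location, 2004 AS Yr, 3849 AS N
      UNION ALL SELECT 'A', 2005, 3591
      UNION ALL SELECT 'B', 2004, 2887
      UNION ALL SELECT 'B', 2005, 2841
      UNION ALL SELECT 'C', 2004, 5103
      UNION ALL SELECT 'C', 2005, 5072) AS t;
-- Location A, 2004: PctOfTotal = 16.5, PctOfRow = 51.7
```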

Thanks, SAP2006

|||

Hi, and thanks for the clarification. I understand what you're trying to do. If we define what you're looking for...you'd like to get the percentage contribution for unrelated members. In other words, you want to know how much Location A is contributing to Total Years (3849/7440 = 51.7%). This is a tough problem which is usually solved by a client side tool which takes apart the query result cell set and provides the % calcs as a client side activity without using MDX. Tools like Analyzer2005, and ProClarity demonstrate this capability well. I'm sure there are other tools that do as well, but I'm not as familiar with them.

It may be possible to do what you'd like as an MDX calculation, but it involves retrieving the current tuples defined on the axes (rows or columns). A function which allows retrieval of tuples from the axes is the MDX AXIS() function. I haven't taken a look at the AXIS() function since AS2005 came out because it only worked in limited cases in AS2000. You can find more information on the AXIS() function here:

http://msdn2.microsoft.com/en-us/library/ms145531.aspx

I'll also spend some time tinkering with it to see if we can retrieve current tuples on the axes and post any results here, but don't hold your breath. This is a tough problem usually solved by client side applications operating on the resulting cell-set.

PGoldy

|||

PGoldy,

We are using the ProClarity client tool and although it calculates and displays column, row and grid percentages, it graphs the values (N) rather than the percentages. Therefore, I was trying to create the percentages in SSAS. I reviewed the documentation you posted and included AXIS(1) in the denominator. The following message was displayed when I put the cursor on the #VALUE result in the cell:

"The function expects a string or numeric expression for the argument. A tuple set expression was used."

The MDX code I processed was

IIF(IsEmpty([Measures].No),NULL,

([DimLocation].[DimLocation].CurrentMember,[Measures].No) /

(AXIS(1),[Measures].No))

Any ideas if this can be changed? If you don't have time to look at it, I understand. I appreciate all the help you've given.

Thanks, SAP2006


|||

Hello,

I am new to MDX and I have a similar situation as discussed above i.e. finding % contribution to totals. I am using AS2005. Any idea of how this can be done?

Thanks,

Kulathu Sarma

|||

Hi,

Unfortunately, we haven't figured it out yet. However, in January, we have a consultant coming onsite to help with various issues. I'll post the solution if we find it.

SAP

|||

SAP,

It looks like you're working on a health-related application. We've also just started using SSAS2005 for cancer services data and are currently grappling with the same problem - creating row and column percentage measures that are not tied to specific dimensions. I've tried the Axis function without success; it returns a set that's the same as the current member or cell, so the percentage measure based on it is always 100%. This is probably not how it's supposed to work. Would be grateful for any advice in this regard...

QCCAT

|||

QCCAT,

The following MDX returns row percentages. Good Luck!!

//Begin - Percent Cases
CREATE MEMBER CURRENTCUBE.MEASURES.[Percent Cases]
AS
IIF(Axis(0)(0).Count < 2,
    (Axis(0)(0).Dimension.CurrentMember, Measures.N)
    /
    Sum(Filter(Axis(0),
               InStr(Axis(0)(0).Dimension.CurrentMember.Name, "Total") = 0),
        Measures.N),
    NULL),
FORMAT_STRING = "0.0%",
VISIBLE = 1;
//End - Percent Cases
