Saturday, February 25, 2012

Perform upgrade method

hi

I have two versions of the same dataflow component. I need to use the new version of the component in packages created with the old version. How can I do this using the PerformUpgrade method?

Thanks

Mani

An explanation of PerformUpgrade and sample code can be found in Books Online or at:

http://msdn2.microsoft.com/ko-kr/library/microsoft.sqlserver.dts.pipeline.pipelinecomponent.performupgrade.aspx

Hope this helps,
Ovidiu Burlacu

|||

hi

Thanks, I got the solution. The PerformUpgrade method only runs if the CurrentVersion property stored in the package is less than the CurrentVersion property of the component on the local computer.

Regards

Mani

|||

hi

If I remove the old DLL from the GAC, I get errors while opening a package saved with the old version of the DLL.

These are the errors:

Error 1 Validation error. Data Flow Task: DTS.Pipeline: The component metadata for "component "Oracle Destination" (67)" could not be upgraded to the newer version of the component. The PerformUpgrade method failed. Package1.dtsx 0 0
Error 2 Error loading Package1.dtsx: The "component "Oracle Destination" (67)" failed to cache the component metadata object and returned error code 0x80131600. d:\testproject\integration services project5\Package1.dtsx 1 1
Error 3 Error loading Package1.dtsx: Component "component "Oracle Destination" (67)" could not be created and returned error code 0xC0047067. Make sure that the component is registered correctly. d:\testproject\integration services project5\Package1.dtsx 1 1
Error 4 Error loading Package1.dtsx: The component metadata for "component "Oracle Destination" (67)" could not be upgraded to the newer version of the component. The PerformUpgrade method failed. d:\testproject\integration services project5\Package1.dtsx 1 1

How can I correct these errors?

thanks

Mani

|||

You should not remove the old DLL from the GAC, because then the component can no longer be instantiated. The new DLL should be able to live in the GAC alongside the old DLL because they have different versions (file or assembly version, or even the PublicKeyToken).

Thanks,
Ovidiu

|||

Ovidiu, I don't see how you can do side by side with SSIS components in the GAC. File versions do not differentiate between assemblies, so you cannot use those to install side by side. If you change the assembly version or the key (and therefore the public key token) you can run them side by side, but then you are stuffed, since SSIS stores the full strong name in the package. Any change in the strong name means you get the nice error about not being able to load the assembly. Hacking the XML would seem to be the only solution; shame it is not a supported one :)

|||Indeed, you are right. I think it is time for me to go on vacation. |||

In fact you can automatically upgrade a dataflow component's metadata when you upgrade the component's assembly version.

Here's how:

1. Remove the old assembly from GAC.

2. Make sure your new component's CurrentVersion argument of the DtsPipelineComponent attribute is greater than the one saved in the old metadata. That'll ensure that PerformUpgrade of the new component will be called when SSIS opens the package.

3. In the new component override PerformUpgrade and in it make sure to include the following:

ComponentMetaData.CustomPropertyCollection["UserComponentTypeName"].Value = this.GetType().AssemblyQualifiedName;

This changes the metadata's type reference to refer to the new type and version. It ensures that things like double-clicking the component will still work if, for example, you're using the UITypeName argument of the DtsPipelineComponent attribute.
Of course, here you should also upgrade any metadata properties from the old version to the new one, including any UITypeEditor properties of custom properties that have their own custom editors.

4. Install the new assembly to the GAC.

5. Create a policy assembly redirecting the old version to the new one and install it in the GAC. For an example of how to do that, see: http://samples.gotdotnet.com/quickstart/howto/doc/pubpolicy.aspx

Now when you open packages saved with the old version, the pattern above will automatically upgrade them to the new version.

Milen

Perform several operations in 1 ALTER TABLE?

Hi there. Does anyone know of a way to perform several operations on a table within only one ALTER TABLE statement? I haven't found anything to date and don't even know if it's possible.
Thanks,
Angel|||What's the point? Are you worried about performance? You shouldn't be, because you shouldn't be altering your tables in production at all, other than to fix bugs or release new versions of a product. It shouldn't be an everyday thing.|||I'm almost certain that this is a homework question. If you post a URL to the assignment, or a scanned image of the handout you got in class, we can provide you with better help.

Yes, you can make more than one change in a single ALTER TABLE statement. No, you shouldn't do it because of the reasons cited by Jezemine and your present uncertainty in the process.

-PatP
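
To illustrate the point above, here is a minimal sketch against a hypothetical table (the table, column, and constraint names are assumptions, not anything from the thread): a single ALTER TABLE statement can add several columns, and a constraint for one of them, in one ADD clause.

-- One ALTER TABLE, several additions (hypothetical names)
ALTER TABLE dbo.Customers
    ADD MiddleName varchar(50) NULL,
        LastOrderDate datetime NULL
            CONSTRAINT DF_Customers_LastOrderDate DEFAULT (GETDATE())
GO

Note that, at least on SQL Server 2000, you still cannot mix different kinds of changes (for example an ADD and a DROP COLUMN) in the same statement; each ALTER TABLE performs one kind of operation.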

perform floating point addition in SQL stored procedure

Hi..
Is there any way to add the values of 4 columns and at the same time return the result together with other columns?

I have this stored procedure:

CREATE PROCEDURE sp_queuelist AS
BEGIN
DECLARE @temp1 As Decimal
DECLARE @temp2 As Decimal
DECLARE @cash As Decimal
DECLARE @cheque As Decimal
DECLARE @card As Decimal
DECLARE @nets As Decimal
DECLARE @bill As Decimal
DECLARE @company As Decimal

SELECT
@cash=Cash,@cheque=Cheque,@card=Card,@nets=Nets,
@bill=Bill,@company=Company
FROM QUEUE
ORDER BY QNo

SET @temp1 = @cash + @cheque + @card + @nets
SET @temp2 = @bill + @company

Select QNO,
PCNo,
PName,
@temp1 As totalCash,
@temp2 As totalContract,
Doctor

FROM QUEUE
ORDER BY QNo
END
GO

Basically I want to add 4 columns: cash+cheque+card+nets into totalCash and bill+company to totalContract.

All the 6 field types are decimal.

I want to calculate temp1 and temp2 and then select the rest of the columns to be displayed in the datagrid.

However, this stored procedure gives me 2 problems:
1. It gives the rounded result of the addition of the decimal numbers, not the decimal value itself.
2. All the rows in datagrid display the same result for totalCash and totalContract, which is the total from the last row in the table.

I seldom use stored procedures.
Is there any way to solve this problem?
Any suggestion is most welcomed.
Thank you in advance.
Sincerely

Agustina|||(1) You can try changing the decimal to float.

(2) In the design of your table, you can set the formula for the column as the sum of the other 4 columns. That way you don't need to worry about doing the addition; any time you make a change in any of the columns, the computed column is automatically updated.
If you need more help with this approach, let me know.

HTH.
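
For what it's worth, both symptoms described above have simple explanations: DECLARE @temp1 As Decimal without a precision and scale defaults to DECIMAL(18,0), which is why the sums come back rounded, and assigning variables from a SELECT leaves them holding values from only one row, which is why every row in the datagrid shows the same totals. A minimal sketch of the per-row alternative, assuming the QUEUE columns shown above, is:

CREATE PROCEDURE sp_queuelist2 AS   -- hypothetical name, to avoid clashing with the original
BEGIN
    SELECT QNo,
           PCNo,
           PName,
           Cash + Cheque + Card + Nets AS totalCash,    -- per-row sum, no rounding to scale 0
           Bill + Company              AS totalContract,
           Doctor
    FROM QUEUE
    ORDER BY QNo
END
GO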

Perform calculation using sql script

Is it possible to multiply a variable and a set column and update a table with
new data? I need to estimate future net income using a predetermined
percentage against the previous year's net income numbers. I use this to run
projection calculations against estimated annual net income. The formula
would look like this:
Projected Net Income = Previous Net Income + (Previous Net Income * Percent
Increase)
Table1: INCOME
Fields: NetIncome, DataYear, ProjectedNetIncome
Table2: PROJECTION
ProjectedYr, PercentIncrease
I would like to use PercentIncrease as a variable (this number can change
year to year and gives me the ability to change the percentage).
Below is what I have been trying... I know the code is incorrect and it's at
the SET statement (multiplying the variable against the NetIncome field).
This is logically what I want to do but I know the code is all off...
Any suggestions on how to correct it?
/* Declare Variables */
DECLARE @CurrentYr Numeric(9)
DECLARE @IncomeIncrease Numeric(9)
SET @CurrentYr=(SELECT ProjectedYr FROM Projections)
SET @IncomeIncrease=(SELECT PercentIncrease FROM Projections)
/* Calculate Projected Income Amt for Year Entered into Projected Year Field
*/
UPDATE Income
SET ProjectedNetIncome = ((NetIncome)*((NetIncome)*(@IncomeIncrease)))
WHERE DataYear = @CurrentYr
thanks in advance for any help
rob
Any suggestions would be great.
If you could post DDL and a little sample data + your expected results, it
would help immensely. This assumes that the INCOME and PROJECTION Tables
are related via the DataYear and ProjectYear columns. If the INCOME table
has a row for DataYear = 2003, it will only update if the PROJECTION table
has a row for DataYear = 2003.
UPDATE INCOME SET ProjectedNetIncome = i.NetIncome + (i.NetIncome *
p.PercentIncrease)
FROM INCOME i, PROJECTION p
WHERE i.DataYear = p.ProjectYear
But it's hard to be certain without knowing where you're starting or where
you want to end up...
"Rob" <temp@.dstek.com> wrote in message
news:eOm9mEDTFHA.3188@.TK2MSFTNGP09.phx.gbl...
> Is it possible to multiply a vaiable and a set column and update table
> with
> new data? I need to estimate future net income using a predetermined
> percentage against the previous years net income numbers. I use this to
> run
> projection calculations against estimated annual net income. The formula
> would look like this:
> Projected Net Income = Previous Net Income + (Previous Net Income *
> Percent
> Increase)
> Table1: INCOME
> Fields: NetIncome, DataYear, ProjectedNetIncome
> Table2: PROJECTION
> ProjectedYr, PercentIncrease
> I would like to use PercentIncrease as a variable (this number can change
> year to year and give me the ability to change percentage.)
> Below is what i have been trying ... i know the code is incorrect and its
> at
> the set statement (multipling variable against netincome field).
> This is logically what I want to do but I know the code is all off...
> any suggestions on how to correct?
> /* Declare Variables */
> DECLARE @.CurrentYr Numeric(9)
> DECLARE @.IncomeIncrease Numeric(9)
> SET @.CurrentYr=(SELECT ProjectedYr FROM Projections)
> SET @.IncomeIncrease=(SELECT PercentIncrease FROM Projections)
> /* Calculate Projected Income Amt for Year Entered into Projected Year
> Field
> */
> UPDATE Income
> SET ProjectedNetIncome = ((NetIncome)*((NetIncome)*(@.IncomeIncrease)))
> WHERE DataYear = @.CurrentYr
>
> thanks in advance for any help
> rob
> Any suggestions would be great.
>
>
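
A hedged variant of the UPDATE suggested above, written with explicit JOIN syntax and with the alias as the UPDATE target; it uses the ProjectedYr column name from the original post rather than the ProjectYear name in the reply, so adjust the join column to whatever actually relates the two tables:

UPDATE i
SET    ProjectedNetIncome = i.NetIncome + (i.NetIncome * p.PercentIncrease)
FROM   INCOME i
       INNER JOIN PROJECTION p
               ON i.DataYear = p.ProjectedYr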

perform calculation for only desired rows

Hi all, I wasn't quite sure where to look to answer my particular problem, so I am posting this thread in the hope that someone can point me in the right direction. In my report, I am showing sales figures for an area. I added a table to display sales from this year, sales from last year, and then the comparable percentage ('this' divided by 'last', minus one). To account for some people who didn't have sales last year, I was able to use an IIF expression to return "N/A" in the textbox. My problem is that I want the compared percentage to show for the area total, but without including the people who were an 'N/A'. I am assuming that some sort of expression will be needed for the area total row, i.e. I want the area total to sum up this year and last year and then get the percentage, but I need to filter out the individuals who didn't have data for last year.

as an example

nsty nsly percentage

total: 15 7 X

A: 5 4 25%

B: 5 3 66%

C: 5 0 N/A

The percentage for the total, X, should be (10/7)-1, with p0 as the format; right now it is summing rows A, B, and C. How can I exclude row C from the calculation?

Am I on the right track by researching filters and expressions, or is this a matter for the query?

thanks in advance for any help

Hi

An expression will be able to handle this fine.
Use the IIF function to only include nsty if nsly is greater than zero.
For your total % sum use something like this:

= (Sum(IIF(Fields!nsly.value > 0,Fields!nsty.value,0))/Sum(Fields!nsly.Value)) -1

Hope this helps

Cheers
Mark
|||

Thanks for the response; I think I still need to expand on the expression a little bit, but at least I know how to go about it now.
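
If it turns out to be easier to handle in the dataset query rather than in the report expression, the same guard can be written with a CASE expression. This is only a sketch with assumed column and table names (nsty, nsly, dbo.AreaSales), not part of the answer above:

SELECT SUM(CASE WHEN nsly > 0 THEN nsty ELSE 0 END) * 1.0
       / NULLIF(SUM(nsly), 0) - 1 AS TotalPercentage   -- excludes the N/A rows from the numerator
FROM dbo.AreaSales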

Perform an expression in design mode

Hi all,

In SSIS, I would like to perform/validate an expression in design mode. In particular, I would like to see the values of the variables before running the whole package. I tried several times in the Command Window and the Immediate Window, unsuccessfully. Can those windows only be used while the package is running? Or how else can I do this?

Thanks

Where are you performing the expression? In the Control Flow or Data Flow.

This BOL entry describes how to evaluate an expression at design time in the Control Flow: http://msdn2.microsoft.com/en-us/library/ms141698.aspx

Expressions in the data flow are validated automatically at design time and any errors will be raised - but they can not be evaluated at design time.

Debug windows are only available when the package is running in the debugger.

Donald Farmer

|||

That means I must create variables in the Control Flow or Data Flow even though I just want to see the result of 1+1? Can't it be like the VB or C# IDE?

|||

I'm still not clear where you are using the expression, so it's difficult to help in detail. You can evaluate Property Expressions, but not expressions in the data flow.

Donald

perform all the actions in one pass rather than take several calls

You're going to spend far less time writing a stored procedure that creates
one account and iterating through your array in your client application
calling the stored proc.
"E B via webservertalk.com" wrote:

> My application has one web form where admin can add as many users as
> desired and at the end when he press SaveChanges I want to add all of them
> to database, There should be some way to pass arrays to stored procedure..
I
> am looking for it or something similar to this...
> I want to pass an array into a stored procedure so that it can perform all
> the actions in one pass rather than take serveral calls to the server.
> I'm working with (SQL Server, .NET)
> --
> Message posted via http://www.webservertalk.com
>Alien2_51 wrote:
> You're going to spend far less time writing a stored procedure that
> creates one account and iterating through your array in your client
> application calling the stored proc.
>
bs
--
Microsoft MVP - ASP/ASP.NET
Please reply to the newsgroup. This email account is my spam trap so I
don't check it very often. If you must reply off-line, then remove the
"NO SPAM"|||to Bob Barrows [MVP] what does it mean ?
Message posted via http://www.webservertalk.com|||E B via webservertalk.com wrote:
> to Bob Barrows [MVP] what does it mean ?
Oh, I'm sorry. it's short for:
I don't believe that what you said is necessarily correct.
Bob Barrows
Microsoft MVP - ASP/ASP.NET
Please reply to the newsgroup. This email account is my spam trap so I
don't check it very often. If you must reply off-line, then remove the
"NO SPAM"|||Ok, however i think i solve the problem , i used the link:
http://www.sommarskog.se/arrays-in-sql.html
what do u think'
Message posted via http://www.webservertalk.com|||E B via webservertalk.com wrote:
> Ok, however i think i solve the problem , i used the link:
> http://www.sommarskog.se/arrays-in-sql.html
> what do u think'
Well, there are several options in that link. Hopefully you satisfied
yourself that you picked the optimum one for your situation.
BTW, my comment was not directed at you. it was directed at the comment that
looping through the array in the client and making multiple calls to a
procedure ... across processes ... would be quicker than making a single
call to a stored procedure.
Microsoft MVP - ASP/ASP.NET
Please reply to the newsgroup. This email account is my spam trap so I
don't check it very often. If you must reply off-line, then remove the
"NO SPAM"

Perform aggregate functions on uniqueidentifiers

For some reason, [on sql2k] one cannot perform "Count(X)" where X is of type
uniqueidentifier. Will future versions of sql server suffer from this
limitation? 2003 or 2005?
We came across this problem when we had to execute a query with multiple
table joins.

Hasani,
The workaround that I use is to store them as BINARY(16).
"Hasani (remove nospam from address)" <hblackwell@.n0sp4m.popstick.com> wrote
in message news:%233sMM$flEHA.3564@.TK2MSFTNGP14.phx.gbl...
> For some reason, [on sql2k] one cannot perform "Count(X)" where X is of
type
> uniqueidentifier. Will future versions of sql server suffer from this
> limitation? 2003 or 2005?
> We came across this problem when we had to execute a query with multiple
> table joins.
>|||Clever, I'll tell my supervisor tomorrow.
"Adam Machanic" <amachanic@.hotmail._removetoemail_.com> wrote in message
news:Oc23mQglEHA.592@.TK2MSFTNGP11.phx.gbl...
> Hasani,
> The workaround that I use is to store them as BINARY(16).
>
> "Hasani (remove nospam from address)" <hblackwell@.n0sp4m.popstick.com>
> wrote
> in message news:%233sMM$flEHA.3564@.TK2MSFTNGP14.phx.gbl...
>> For some reason, [on sql2k] one cannot perform "Count(X)" where X is of
> type
>> uniqueidentifier. Will future versions of sql server suffer from this
>> limitation? 2003 or 2005?
>> We came across this problem when we had to execute a query with multiple
>> table joins.
>>
>|||"Hasani (remove nospam from address)" <hblackwell@.n0sp4m.popstick.com> wrote
in message news:%23gbFEiglEHA.2892@.tk2msftngp13.phx.gbl...
> clever, i'll tell my supervisor tomorrow.
If you want to get even trickier, you can experiment with doing something
like this when you store the GUID:
SELECT CONVERT(BINARY(6), GETDATE()) + CONVERT(BINARY(10), NEWID()) AS
DateGUID
This reduces the uniqueness a bit (removes 6 of the 16 bytes), but not
too much because there are only so many rows you can insert every 3
milliseconds. The upside is that you can now cluster on your GUID column
without destroying INSERT performance.|||Will sql server allow binary columntypes as primary keys?
"Adam Machanic" <amachanic@.hotmail._removetoemail_.com> wrote in message
news:eb$oqkglEHA.3712@.TK2MSFTNGP15.phx.gbl...
> "Hasani (remove nospam from address)" <hblackwell@.n0sp4m.popstick.com>
> wrote
> in message news:%23gbFEiglEHA.2892@.tk2msftngp13.phx.gbl...
>> clever, i'll tell my supervisor tomorrow.
> If you want to get even tricker, you can experiment with doing
> something
> like this when you store the GUID:
> SELECT CONVERT(BINARY(6), GETDATE()) + CONVERT(BINARY(10), NEWID()) AS
> DateGUID
> This reduces the uniqueness a bit (removes 6 of the 16 bytes), but not
> too much because there are only so many rows you can insert every 3
> milliseconds. The upside is that you can now cluster on your GUID column
> without destroying INSERT performance.
>|||"Hasani (remove nospam from address)" <hblackwell@.n0sp4m.popstick.com> wrote
in message news:eP7D5uglEHA.712@.TK2MSFTNGP09.phx.gbl...
> Will sql server allow binary columntypes as primary keys?
Yes. When I have used GUIDs as primary keys (rarely, I don't think it's
a great idea most of the time), I have used the BINARY(16) technique. More
recently I've used the date concatenation technique in a project and it
worked out very well.|||What are you reasons for not using a guid as a primary key?
We currently use integers as a primary key, but we use a stored procedure to
generate a unqiue random non-sequential integer, and we store this value in
a table to stop duplicates. In that scenario, I'm arguing that we should
just use uniqueidentifier types because we seem to just be reinventing the
wheel, but then someone mention the aggregate function thing with
uniqueidentifier types. I'm not aware of any penalties associated with using
uniqueidentifier types though, other than, it will require more bytes per
column, than an int.
"Adam Machanic" <amachanic@.hotmail._removetoemail_.com> wrote in message
news:%23OvcnxglEHA.596@.tk2msftngp13.phx.gbl...
> "Hasani (remove nospam from address)" <hblackwell@.n0sp4m.popstick.com>
> wrote
> in message news:eP7D5uglEHA.712@.TK2MSFTNGP09.phx.gbl...
>> Will sql server allow binary columntypes as primary keys?
> Yes. When I have used GUIDs as primary keys (rarely, I don't think
> it's
> a great idea most of the time), I have used the BINARY(16) technique.
> More
> recently I've used the date concatenation technique in a project and it
> worked out very well.
>|||Hasani (remove nospam from address) wrote:
> What are you reasons for not using a guid as a primary key?
> We currently use integers as a primary key, but we use a stored
> procedure to generate a unqiue random non-sequential integer, and we
> store this value in a table to stop duplicates. In that scenario, I'm
> arguing that we should just use uniqueidentifier types because we
> seem to just be reinventing the wheel, but then someone mention the
> aggregate function thing with uniqueidentifier types. I'm not aware
> of any penalties associated with using uniqueidentifier types though,
> other than, it will require more bytes per column, than an int.
You're right in that it's a lot more bytes per row using a UID as
opposed to an INT IDENTITY. Four times the storage, which translates to
a much larger index when using a uniqueidentifier. And as Adam
eloquently mentioned, using a UID as a clustered key does not work well
because you get a lot of page splitting and head movement on the drives.
Adding a date component as a prefix to the UID prevents much of the page
splitting, increasing insert performance. However, using a UID as
clustered key means propagating that key to all non-clustered indexes,
making them much larger as well.
If you can, I would stick with an INT IDENTITY column for a PK.
David G.|||"Hasani (remove nospam from address)" <hblackwell@.n0sp4m.popstick.com> wrote
in message news:OUyrc5glEHA.2892@.tk2msftngp13.phx.gbl...
> What are you reasons for not using a guid as a primary key?
> We currently use integers as a primary key, but we use a stored procedure
to
> generate a unqiue random non-sequential integer, and we store this value
in
> a table to stop duplicates. In that scenario, I'm arguing that we should
> just use uniqueidentifier types because we seem to just be reinventing the
> wheel, but then someone mention the aggregate function thing with
> uniqueidentifier types. I'm not aware of any penalties associated with
using
> uniqueidentifier types though, other than, it will require more bytes per
> column, than an int.
I think David G pointed out most of the issues in his post, so I'll
instead refer to the only times I have had to use a GUID, which is when the
application itself was responsible for creating the key. Applications
cannot reliably create unique integers, so GUIDs are pretty much the only
choice (or natural primary keys, if there's one available).
Also, why would you want to use a non-sequential random integer instead
of an IDENTITY?|||Maybe I contradicted myself when I said non-sequential random...
We essentially need a random number generator to use as a primary key value.
I don't know if sql supports it. All I've seen is a unique number generator
that increments by one on every insert. It's unique but not random. The
problem is, this value is going to be made public and we don't want to
make it obvious that it's just an incrementing value (think cookies and
websessions).
What we currently do (sometimes) is have 2 columns, I that's an
autoincrementing int that's a primary key, and the other is a
uniqueidentifer column that isn't a primary key (but may have a constraint
to make sure there are no duplicates), and we would make the uniqueidentifer
value public so in a cookie, it would always look random.
I don't feel comfortable in the scenario because you have 2 columns that are
doing the same thing (preserving/ensuring uniqueness). So I'm trying to look
at all the tradeoffs of using a uniqueidentifier instead of an int, and vice
versa.
"Adam Machanic" <amachanic@.hotmail._removetoemail_.com> wrote in message
news:%23IUly9mlEHA.1652@.TK2MSFTNGP09.phx.gbl...
> "Hasani (remove nospam from address)" <hblackwell@.n0sp4m.popstick.com>
> wrote
> in message news:OUyrc5glEHA.2892@.tk2msftngp13.phx.gbl...
>> What are you reasons for not using a guid as a primary key?
>> We currently use integers as a primary key, but we use a stored procedure
> to
>> generate a unqiue random non-sequential integer, and we store this value
> in
>> a table to stop duplicates. In that scenario, I'm arguing that we should
>> just use uniqueidentifier types because we seem to just be reinventing
>> the
>> wheel, but then someone mention the aggregate function thing with
>> uniqueidentifier types. I'm not aware of any penalties associated with
> using
>> uniqueidentifier types though, other than, it will require more bytes per
>> column, than an int.
> I think David G pointed out most of the issues in his post, so I'll
> instead refer to the only times I have had to use a GUID, which is when
> the
> application itself was responsible for creating the key. Applications
> cannot reliably create unique integers, so GUIDs are pretty much the only
> choice (or natural primary keys, if there's one available).
> Also, why would you want to use a non-sequential random integer instead
> of an IDENTITY?
>|||"Hasani (remove nospam from address)" <hblackwell@.n0sp4m.popstick.com> wrote
in message news:%23Aar%23RnlEHA.1356@.TK2MSFTNGP09.phx.gbl...
> Maybe I contradicted myself when I said non-sequential random...
> We essentially need a random number generator to use as a primary key
value.
> I don't know if sql supports it. All I've seen is a unique number
generator
> that increments by one on every insert. It's unique but not random. The
> problem is is, this value is going to be made public and we don't want to
> make it obvious that it's just an incrementing value (think cookies and
> websessions).
If you're only generating one at a time, why not just use RAND()?|||Well, there's a stored procedure someone created that uses RAND to create a
unique integer, by storing all values created by the stored proc in a table,
to stop duplicates, but, unfortunately, when a record is deleted that has a
value generated by the stored procedure, it doesn't remove the generated
value from the lookup table used by the stored procedure. That's the only
reason why I was using RAND, but I can modify the code to make sure deleted
records 'release' the generated RAND value. But I do like the uid because
it's alphanumeric, which is more secure in a cookie, at least in cracking time,
than an all-numeric cookie.
"Adam Machanic" <amachanic@.hotmail._removetoemail_.com> wrote in message
news:OGss9inlEHA.748@.TK2MSFTNGP15.phx.gbl...
> "Hasani (remove nospam from address)" <hblackwell@.n0sp4m.popstick.com>
> wrote
> in message news:%23Aar%23RnlEHA.1356@.TK2MSFTNGP09.phx.gbl...
>> Maybe I contradicted myself when I said non-sequential random...
>> We essentially need a random number generator to use as a primary key
> value.
>> I don't know if sql supports it. All I've seen is a unique number
> generator
>> that increments by one on every insert. It's unique but not random. The
>> problem is is, this value is going to be made public and we don't want to
>> make it obvious that it's just an incrementing value (think cookies and
>> websessions).
> If you're only generating one at a time, why not just use RAND()?
>|||Hasani (remove nospam from address) wrote:
> We'll there's a stored procedure someone created that uses RAND to
> create a unique integer, by storing all values created by the stored
> proc in a table, to stop duplicates, but, unfortunately, when a
> record is deleted that has a value generated by the stored procedure,
> it doesn't remove the generated value from the lookup table used by
> the stored procedure. That's the only reason why I'm was RAND, but I
> can modify the code to make sure deleted records 'release' the
> generated RAND value. But I do like the uid because it's
> alphanumeric, which is secure more in a cookie, well in cracking
> time, than an all numeric cookie.
>
You could add a computed column to the table to do the same thing (which
would eliminate the overhead of using a uniqueidentifier altogether).
And you can start the identity value higher if you don't want it to
start at 0.
Something like:
Create Table #test (
ID INT IDENTITY NOT NULL,
SomeText nvarchar(10),
"CookieID" as N'ALPHA-STUFF' + RIGHT(N'0000000000' + CAST(ID as
NVARCHAR(10)), 10))
Insert into #test values ('ABC')
Insert into #test values ('ABC')
Insert into #test values ('ABC')
Insert into #test values ('ABC')
Insert into #test values ('ABC')
Select * from #test
David G.
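
To tie the thread together, here is a small sketch of the binary(16) storage workaround Adam describes; every object name below is made up for the example. The key is stored as binary(16), defaulted from NEWID(), and aggregates such as COUNT can then be used against the column.

CREATE TABLE dbo.Widgets
(
    WidgetKey binary(16) NOT NULL
        CONSTRAINT PK_Widgets PRIMARY KEY
        CONSTRAINT DF_Widgets_WidgetKey DEFAULT (CONVERT(binary(16), NEWID())),
    WidgetName varchar(50) NOT NULL
)
GO

SELECT COUNT(WidgetKey) AS widget_count
FROM dbo.Widgets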

perform aggregate function & group by

Hello, can anyone help?
The SQL below works in Sybase but fails in SQL 2000. SQL 2000 shows errors on the SUM
of netweight (cannot perform aggregate function) and on GROUP BY commodity_code
(invalid column name!)
SELECT
(select HTS
from bur_inv_item_list
where item = material
and plantcode = plant) as commodity_code,
(select DESCRIPTION
from bur_inv_item_list
where item = material
and plantcode = plant) as description,
uom as uom,
(select Orig
from bur_inv_item_list
where item = material
and plantcode = plant) as coo,
netweight =
sum(case when plant = '0014' or plant = '0024' then
round(qty * (select weight
from bur_inv_item_list
where item = material
and plantcode = plant), 2)
else
round(qty * (select weight
from bur_inv_item_list
where item = material
and plantcode = plant) /2.20462, 2)
end),
sum(ext_cost) as cost
from bur_inv_cntr_list
group by
commodity_code,
description,
uom,
coo
order by
commodity_code,
description,
uom,
coo
;The logical evaluation order of a SELECT statement is (top to bottom):
FROM
WHERE
GROUP BY
HAVING
SELECT
ORDER BY
TOP
As you see, the SELECT hasn't happened yet when the GROUP BY is performed. This means that you cannot
refer to any column alias name in the GROUP BY clause. Some products divert from the ANSI SQL
standard behavior; SQL Server does not (in this regard). So push the expressions into a derived table
and work against that.
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"frankie lo" <frankiecblo@.hotmail.com> wrote in message
news:emgGyQOaGHA.1196@.TK2MSFTNGP03.phx.gbl...
> Hello, anyone can help?
>
> below SQL works in sybase but fail in sql2000. sql2000 show error on the s
um of netweight (cannot
> perform aggregate function) and group by commodity_code (invalid column na
me !!)
>
> SELECT
> (select HTS
> from bur_inv_item_list
> where item = material
> and plantcode = plant) as commodity_code,
> (select DESCRIPTION
> from bur_inv_item_list
> where item = material
> and plantcode = plant) as description,
> uom as uom,
> (select Orig
> from bur_inv_item_list
> where item = material
> and plantcode = plant) as coo,
> netweight =
> sum(case when plant = '0014' or plant = '0024' then
> round(qty * (select weight
> from bur_inv_item_list
> where item = material
> and plantcode = plant), 2)
> else
> round(qty * (select weight
> from bur_inv_item_list
> where item = material
> and plantcode = plant) /2.20462, 2)
> end),
> sum(ext_cost) as cost
> from bur_inv_cntr_list
> group by
> commodity_code,
> description,
> uom,
> coo
> order by
> commodity_code,
> description,
> uom,
> coo
> ;
|||Hi Tibor,
thanks for your info. Do you have a sample/case? I want to see a sample
so I can modify the script below.
"Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com> wrote in
message news:u3R1jsPaGHA.504@.TK2MSFTNGP03.phx.gbl...
> The logical evaluation order of a SELECT statement is (top to bottom):
> FROM
> WHERE
> GROUP BY
> HAVING
> SELECT
> ORDER BY
> TOP
> As you see, the SELECT hasn't happened yet when the GROUP BY is performed.
> This mean that you cannot refer to any column alias name in the GROUP BY
> clause. Some product diverts from the ANSI SQL standard behavior, SQL
> Server does not (in this regard). So push the expressions in a derived
> table and work against that.
>
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
>
> "frankie lo" <frankiecblo@.hotmail.com> wrote in message
> news:emgGyQOaGHA.1196@.TK2MSFTNGP03.phx.gbl...
>|||Here's a very simple example where a derived table is used so you don't have
to repeat the DATEPART
expression:
USE pubs
SELECT DATEPART(mm, pubdate) AS pub_month, COUNT(*) AS no_of_titles
FROM titles
GROUP BY DATEPART(mm, pubdate)
SELECT pub_month, COUNT(*) AS titles
FROM
(
SELECT DATEPART(mm, pubdate) AS pub_month
FROM titles
) AS i
GROUP BY pub_month
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"frankie lo" <frankiecblo@.hotmail.com> wrote in message
news:OqUKBdRaGHA.2368@.TK2MSFTNGP03.phx.gbl...
> hi tibor,
> thanks for your info. do you have any sample/case. I want to see the sampl
e to modify below
> script.
> "Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com> wrote i
n message
> news:u3R1jsPaGHA.504@.TK2MSFTNGP03.phx.gbl...
>|||Hi Tibor,
Many Thanks.
"Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com> wrote in
message news:ebF1yHSaGHA.1196@.TK2MSFTNGP03.phx.gbl...
> Here's a very simple example where a derived table is used so you don't
> have to repeat the DATEPART expression:
> USE pubs
> SELECT DATEPART(mm, pubdate) AS pub_month, COUNT(*) AS no_of_titles
> FROM titles
> GROUP BY DATEPART(mm, pubdate)
> SELECT pub_month, COUNT(*) AS titles
> FROM
> (
> SELECT DATEPART(mm, pubdate) AS pub_month
> FROM titles
> ) AS i
> GROUP BY pub_month
>
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
>
> "frankie lo" <frankiecblo@.hotmail.com> wrote in message
> news:OqUKBdRaGHA.2368@.TK2MSFTNGP03.phx.gbl...
>
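
Applying Tibor's advice back to the original query, a sketch of the derived-table form looks like this (same tables and columns as in the first post; untested, so treat it as a starting point rather than a verified rewrite):

SELECT commodity_code, description, uom, coo,
       SUM(netweight) AS netweight,
       SUM(ext_cost) AS cost
FROM
(
    SELECT (SELECT HTS FROM bur_inv_item_list
            WHERE item = material AND plantcode = plant) AS commodity_code,
           (SELECT DESCRIPTION FROM bur_inv_item_list
            WHERE item = material AND plantcode = plant) AS description,
           uom,
           (SELECT Orig FROM bur_inv_item_list
            WHERE item = material AND plantcode = plant) AS coo,
           CASE WHEN plant IN ('0014', '0024')
                THEN ROUND(qty * (SELECT weight FROM bur_inv_item_list
                                  WHERE item = material AND plantcode = plant), 2)
                ELSE ROUND(qty * (SELECT weight FROM bur_inv_item_list
                                  WHERE item = material AND plantcode = plant) / 2.20462, 2)
           END AS netweight,
           ext_cost
    FROM bur_inv_cntr_list
) AS d
GROUP BY commodity_code, description, uom, coo
ORDER BY commodity_code, description, uom, coo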

Perform aggregate against group value

Is there a way that I can perform my aggregate function against the value in
the group row instead of the details row?
Because I do not show numeric values in the details, only in the group row.
--
SevDer
http://www.sevder.com
A new .NET Source For .NET Developers

Did you try the Previous aggregate function? Just place it into the group
header like =Previous(Fields!Country.Value) and it should work. Note: the
previous function has only one argument.
-- Robert
This posting is provided "AS IS" with no warranties, and confers no rights.
"SevDer" <sevder@.newsgroup.nospam> wrote in message
news:uEOFqzm2FHA.1188@.TK2MSFTNGP12.phx.gbl...
> Is there a way that I can perform my aggregate function agains the value
> in group row instead of details row?
> Because I do not show numeric values in the details but group row.
> --
> SevDer
> http://www.sevder.com
> A new .NET Source For .NET Developers
>
>|||Hi Robert,
I tried Previous as you suggested, but this time I end up with an empty
data cell.
However, please excuse me that I was not clear enough previously, I want to
perform this aggregate against the group in the footer. So I tried to use
the full previous function as described in the help "Previous(Expression,
AggFunction, PreviousScope, AggScope)" but it basically fails as you've
mentioned.
Anyway, do you have a solution for me?
I would like to sum my group values in the footer.
--
SevDer
http://www.sevder.com
A new .NET Source For .NET Developers
"Robert Bruckner [MSFT]" <robruc@.online.microsoft.com> wrote in message
news:uzeghIq2FHA.3244@.tk2msftngp13.phx.gbl...
> Did you try the Previous aggregate function? Just place it into the group
> header like =Previous(Fields!Country.Value) and it should work. Note: the
> previous function has only one argument.
> -- Robert
> This posting is provided "AS IS" with no warranties, and confers no
> rights.
>
> "SevDer" <sevder@.newsgroup.nospam> wrote in message
> news:uEOFqzm2FHA.1188@.TK2MSFTNGP12.phx.gbl...
>> Is there a way that I can perform my aggregate function agains the value
>> in group row instead of details row?
>> Because I do not show numeric values in the details but group row.
>> --
>> SevDer
>> http://www.sevder.com
>> A new .NET Source For .NET Developers
>>
>|||> I would like to sum my group values in the footer.
Maybe I'm missing something, but assuming you group on
=Fields!FieldName.Value, just adding an expression like
=Sum(Fields!FieldName.Value) in the table footer should sum the group
values.
-- Robert
This posting is provided "AS IS" with no warranties, and confers no rights.
"SevDer" <sevder@.newsgroup.nospam> wrote in message
news:%23rqJeUw2FHA.1184@.TK2MSFTNGP12.phx.gbl...
> Hi Robert,
> I tried Previous as you suggested but this time I endup with empty
> datacell..
> However, please excuse me that I was not clear enough previously, I want
> to perform this aggregate against the group in the footer. So I tried to
> use the full previous function as described in the help
> "Previous(Expression, AggFunction, PreviousScope, AggScope)" but it
> basically fails as you've mentioned.
> Anyway, do you have a solution for me?
> I would like to sum my group values in the footer.
> --
> SevDer
> http://www.sevder.com
> A new .NET Source For .NET Developers
>
> "Robert Bruckner [MSFT]" <robruc@.online.microsoft.com> wrote in message
> news:uzeghIq2FHA.3244@.tk2msftngp13.phx.gbl...
>> Did you try the Previous aggregate function? Just place it into the group
>> header like =Previous(Fields!Country.Value) and it should work. Note: the
>> previous function has only one argument.
>> -- Robert
>> This posting is provided "AS IS" with no warranties, and confers no
>> rights.
>>
>> "SevDer" <sevder@.newsgroup.nospam> wrote in message
>> news:uEOFqzm2FHA.1188@.TK2MSFTNGP12.phx.gbl...
>> Is there a way that I can perform my aggregate function agains the value
>> in group row instead of details row?
>> Because I do not show numeric values in the details but group row.
>> --
>> SevDer
>> http://www.sevder.com
>> A new .NET Source For .NET Developers
>>
>>
>

Perform a select asynchronously with ADO (C++)

I'm working with ADO 2.8 in C++ with Visual Studio 2005. I want to perform a "select" in asynchronous mode. I don't really understand the logic of the recordset events. For example, I received a number of MoveComplete events higher than the number of rows in my recordset. It is really not clear to me ...

Does someone know where I can find a good example of C++ (or VB) code to manage select statements in asynchronous mode?

Thanks in advance for your help.

François.

ADO Code Examples in Visual C++

other links:

WillMove and MoveComplete Events (ADO)
With Further ADO
Asynchronous Processing (OLE DB)

Performance tuning

Dear friends
We have a problem in our existing system and are expecting some expert comment on it. We have a core banking system with an MS SQL Server back end and an IIS server. Our system is always very slow at peak transaction times. We are planning to optimize this on a short timescale, so please give some suggestions that our DBA team can implement in a short time with SQL Server 2000.

Thanks in Advance
Filson

A possible course of action is this:

Try to check your IIS's and SQL's processor during the peak times to see if the problem is with your SQL or IIS

If your problem is not in IIS:

Try to see if your SQL Server's processor is at or near 100%. If it is, then your queries are probably too heavy - if it is not, then you possibly have locking problems.

Try to find out if there are specific queries that are taking longer. You could use SQL Profiler for this, by specifying a minimum duration threshold for the events - for instance, only show queries that take more than 5 seconds (warning: this might deteriorate your performance even further while the trace is active). After finding the worst queries, try to understand why they are running slowly - you could use SQL Query Analyzer for this, looking at the SPs' execution plans.
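
If it does look like locking, a quick way to see it on SQL Server 2000 is to query master..sysprocesses during a slow spell. The sketch below is only an illustration added here for clarity - the CPU threshold and the spid passed to DBCC INPUTBUFFER are placeholders:

-- Sessions that are blocked, plus heavy CPU/IO consumers
SELECT spid, blocked, waittime, lastwaittype, cpu, physical_io,
       DB_NAME(dbid) AS dbname, loginame, program_name, status
FROM   master..sysprocesses
WHERE  blocked <> 0        -- currently waiting on another session
   OR  cpu > 10000         -- placeholder threshold for heavy sessions
ORDER BY blocked DESC, cpu DESC

-- For a suspicious spid, see the last batch it submitted
DBCC INPUTBUFFER (53)      -- 53 is a placeholder spid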

performance related questions

Why would a stored procedure take longer to run in an OLTP environment
although %processor time doesn't seem to be pegged (30%), %disk time is
around 10% and pages/sec is 0? It takes around 10 secs to run every time on
average but runs faster on development boxes, around 1 sec. It has
the exact same execution plan in both environments. My question is what else
do I need to look into, since it doesn't seem to be hardware bound here. Using
SQL 2000. CPU and reads in the trace are also higher than in our dev
environment, but the perfmon for h/w looks decent or at least not that bad.
Any other counters to look at or suggestions?|||Do you have the same amount of data on both servers? For example if you
have a cursor that runs through all the records in one table and in the
development database you have only 100 records but in the production server
you have 100000 records, then I'd expect that the stored procedure would
take longer to run on the production server. Also if you have lots of
users that are working with the database in the production, you might have
locking problems, that slow the execution.
Adi
"Hassan" <fatima_ja@.hotmail.com> wrote in message
news:eY8NNr7YDHA.2524@.TK2MSFTNGP09.phx.gbl...
> Why would a stored procedure take longer to run on an OLTP environment
> although %processor time doesnt seem to be pegged ( 30%), %disk time is
> around 10% and pages/sec is 0. It takes around 10 secs to run everytime on
> an average but runs faster on development boxes like around 1 sec. It has
> the same exact execution plan on both environments. My question is what
else
> do i need to look into since it doesnt seem to be hardware bound here.
Using
> SQL 2000. CPU and reads in trace are also higher than in our dev
> environment. but the perfmon for h/w looks decent or atleast not that bad
.
> Any other counters to look at or suggestions
>
>|||perfmon CPU is the instantaneous CPU usage; Profiler CPU
is the cumulative CPU for the query, regardless of how
long it took.
Unless a query takes several CPU secs, and the difference
between Profiler duration and CPU is not large, you may
not even notice the CPU in perfmon.
>--Original Message--
>Assuming the stored procedure was only selecting , why
would the CPU from
>perfmon do not show it as busy yet consume more time in
CPU in profiler to
>execute the sproc. There is no parallelism. There are
some temp tables
>..There are some nested joins.
>"joe chang" <jchang6@.yahoo.com> wrote in message
>news:000d01c36401$27d19f60$a101280a@.phx.gbl...
>> what specifically are the production and development
>> systems?
>> processor, frequency, cache, #of cpu, memory, # of
disks,
>> status of hyper-threading?
>> what is in the execution plan of your stored proc ?
>> joins - what type, loop, hash, merge, row count for each
>> table, is there a temp table involved? is a parallel
plan
>> involved? if so, disabled HT or try OPTION (MAXDOP 1)
>> sometimes the dev env is JBOD (no raid) the prod env is
>> raid5, with absolutely horrible write performance, so
>> anything that involves writes will performance much
worse
>> on the $40k production system than the $1k dev system
>> >--Original Message--
>> >Why would a stored procedure take longer to run on an
>> OLTP environment
>> >although %processor time doesnt seem to be pegged (
30%),
>> %disk time is
>> >around 10% and pages/sec is 0. It takes around 10 secs
to
>> run everytime on
>> >an average but runs faster on development boxes like
>> around 1 sec. It has
>> >the same exact execution plan on both environments. My
>> question is what else
>> >do i need to look into since it doesnt seem to be
>> hardware bound here. Using
>> >SQL 2000. CPU and reads in trace are also higher than
in
>> our dev
>> >environment. but the perfmon for h/w looks decent or
>> atleast not that bad .
>> >Any other counters to look at or suggestions
>> >
>> >
>> >
>> >.
>> >
>
>.
>
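
One way to follow up on the points above is to run the procedure on both boxes with statement-level statistics switched on, and to test whether a serial plan behaves differently. This is only a sketch for illustration - the procedure and table names are placeholders:

SET STATISTICS TIME ON     -- reports CPU and elapsed time per statement
SET STATISTICS IO ON       -- reports logical and physical reads per table
EXEC dbo.usp_MyReport @CustomerId = 1   -- placeholder procedure and parameter
SET STATISTICS IO OFF
SET STATISTICS TIME OFF

-- If the production plan goes parallel and the dev plan does not,
-- forcing a serial plan is a cheap test:
SELECT COUNT(*)
FROM dbo.Orders            -- placeholder table
OPTION (MAXDOP 1)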

Performance Questions

I have a few questions.
First: We have a quad Xeon 500MHz server running SQL2K
and Win2K. All SPs are applied. Sometimes the
performance becomes an issue. My boss wants me to go
through and kill some processes when that happens. Are
there any adverse reactions to that?
Second: We also have a Terminal Server with Win2k3
running. Does anyone know of any performance issues with
this?
Thanks.
Don|||Don,
If the solution is to kill processes, you need a better solution.
Killing a process naturally affects the user of that process. Depending on
how the client and T-SQL code is written this can range from harmless but
annoying to leaving data in a logically incomplete state. (It will not
physically corrupt the database, but logical corruption is just as
troublesome.)
The thing to do is investigate the processes that are candidates for
killing, figure out what is wrong, and help the developer/user to correct
the problems. (Famous bad query: multi-table cartesian product to get just
a few rows. Done by people who do not understand joining.)
Russell Fields
"Don" <donolwert@.hotmail.com> wrote in message
news:0d3301c35c30$a7ecb780$a401280a@.phx.gbl...
> I have a few questions.
> First: We have a quad Xeon 500MHz server running SQL2K
> and Win2K. All SPs are applied. Sometimes the
> performance becomes an issue. My boss wants me to go
> through and kill some processes when that happens. Are
> there any adverse reactions to that?
> Second: We also have a Terminal Server with Win2k3
> running. Does anyone know of any performance issues with
> this?
> Thanks.
> Don|||One time I puked because a process was killed.. I couldn't stand it.
MS
"chris" <chrisr@.fingps.com> wrote in message
news:057901c35c45$0c1de350$a101280a@.phx.gbl...
> My boss wants me to go
> >through and kill some processes when that happens. Are
> >there any adverse reactions to that?
> Other than users losing the work they were doing, no.
>
> >--Original Message--
> >I have a few questions.
> >
> >First: We have a quad Xeon 500MHz server running SQL2K
> >and Win2K. All SPs are applied. Sometimes the
> >performance becomes an issue. My boss wants me to go
> >through and kill some processes when that happens. Are
> >there any adverse reactions to that?
> >
> >Second: We also have a Terminal Server with Win2k3
> >running. Does anyone know of any performance issues with
> >this?
> >
> >Thanks.
> >Don
> >.
> >|||Russell,
Thanks for the good detailed information.
Don
>--Original Message--
>Don,
>If the solution is to kill processes, you need a better
solution.
>Killing a process naturally affects the user of that
process. Depending on
>how the client and T-SQL code is written this can range
from harmless but
>annoying to leaving data in a logically incomplete
state. (It will not
>physically corrupt the database, but logical corruption
is just as
>troublesome.)
>The thing to do is investigate the processes that are
candidates for
>killing, figure out what is wrong, and help the
developer/user to correct
>the problems. (Famous bad query: multi-table cartesian
product to get just
>a few rows. Done by people who do not understand
joining.)
>Russell Fields
>"Don" <donolwert@.hotmail.com> wrote in message
>news:0d3301c35c30$a7ecb780$a401280a@.phx.gbl...
>> I have a few questions.
>> First: We have a quad Xeon 500MHz server running SQL2K
>> and Win2K. All SPs are applied. Sometimes the
>> performance becomes an issue. My boss wants me to go
>> through and kill some processes when that happens. Are
>> there any adverse reactions to that?
>> Second: We also have a Terminal Server with Win2k3
>> running. Does anyone know of any performance issues
with
>> this?
>> Thanks.
>> Don
>
>.
>
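
To follow Russell's advice of investigating a process before (or instead of) killing it, something along these lines works on SQL 2000; the spid is a placeholder and KILL should stay a last resort for the reasons given above:

EXEC sp_who2               -- list sessions, their status, blocking and resource use
DBCC INPUTBUFFER (72)      -- placeholder spid: see what that session last submitted
-- Only once you understand what it is doing and accept the consequences:
-- KILL 72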

Performance Question

I look after a database which is part of a third party CRM product. The
users of the product complain of intermittent poor performance; the
suspicion is that some more senior users are running their own queries
(the product allows users to do this). I've been asked by the
development team to try to capture the details of long running queries.

I've looked at the events listed in profiler and can't see one that
would be useful. Ideally I want to know who is running which query that
is taking longer than x seconds.

Any suggestions

TIA

Laurence|||Personally, I would say 4-6 seconds is too long for a query. You may have a
different expectation.
Run the Profiler for a period of time - I aim for at least 3 hrs - depends on
server traffic etc.

Events to capture: Stored Procedures--RPC:Completed &&
TSQL--SQL:BatchCompleted
(all SPs and T-SQL statements)

The critical columns to capture are: Duration and TextData. Others as well
for whatever other analysis you may need.

Use the "Duration" filter, and filter by database ID.

Jack Vamvas
___________________________________
Receive free SQL tips - www.ciquery.com/sqlserver.htm
___________________________________

"Laurence Breeze" <i.l.breeze@.open.ac.uk> wrote in message
news:448ED88F.1000104@.open.ac.uk...
> I look after a database which is part of a third party CRM product. The
> users of the product complain of intermittant poor performance, the
> suspicion is that some more senior users are running their own queries
> (the product allows users to do this). I've been asked by the
> development team to try to capture the details of long running queries.
> I've looked at the events listed in profiler and can't see one that
> would be useful. Ideally I want to know who is running which query that
> is taking longer than x seconds.
> Any suggestions
> TIA
> Laurence|||Thanks Jack,

I'll give that a go.

Laurence

Jack Vamvas wrote:
> Personally, I would say 4-6 seconds is to long for a query. You may have a
> different expectation.
> Run the Profiler for a period of time _ i aim for at least 3 hrs - depnds on
> server traffic etc
> Events to capture : Stored Procedures--RPC:Completed &&
> TSQL--SQL:BatchCompleted
> (all sps and t-sql statement)
> The critical columns to capture are: Duration and textdata . Others as well
> for whatever other analysis you may need
> Use the "duration" filter , do it by db id.
>
> Jack Vamvas
> ___________________________________
> Receive free SQL tips - www.ciquery.com/sqlserver.htm
> ___________________________________
>
>
> "Laurence Breeze" <i.l.breeze@.open.ac.uk> wrote in message
> news:448ED88F.1000104@.open.ac.uk...
>>I look after a database which is part of a third party CRM product. The
>>users of the product complain of intermittant poor performance, the
>>suspicion is that some more senior users are running their own queries
>>(the product allows users to do this). I've been asked by the
>>development team to try to capture the details of long running queries.
>>
>>I've looked at the events listed in profiler and can't see one that
>>would be useful. Ideally I want to know who is running which query that
>>is taking longer than x seconds.
>>
>>Any suggestions
>>
>>TIA
>>
>>Laurence
>>
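
For a long-running capture on a production box, a server-side trace is lighter than the Profiler GUI. The sketch below uses the sp_trace_* procedures documented for SQL Server 2000 (event 10 = RPC:Completed, event 12 = SQL:BatchCompleted, column 1 = TextData, column 11 = LoginName, column 13 = Duration); the file path and the 5-second threshold are placeholders:

DECLARE @traceid int, @maxsize bigint, @on bit, @duration bigint
SET @maxsize = 100                      -- MB per trace file
SET @on = 1
SET @duration = 5000                    -- Duration is in milliseconds in SQL Server 2000

EXEC sp_trace_create @traceid OUTPUT, 2, N'C:\Traces\LongQueries', @maxsize
EXEC sp_trace_setevent @traceid, 10, 1,  @on    -- RPC:Completed, TextData
EXEC sp_trace_setevent @traceid, 10, 11, @on    -- RPC:Completed, LoginName
EXEC sp_trace_setevent @traceid, 10, 13, @on    -- RPC:Completed, Duration
EXEC sp_trace_setevent @traceid, 12, 1,  @on    -- SQL:BatchCompleted, TextData
EXEC sp_trace_setevent @traceid, 12, 11, @on    -- SQL:BatchCompleted, LoginName
EXEC sp_trace_setevent @traceid, 12, 13, @on    -- SQL:BatchCompleted, Duration
EXEC sp_trace_setfilter @traceid, 13, 0, 4, @duration   -- keep only events >= threshold
EXEC sp_trace_setstatus @traceid, 1             -- start the trace
SELECT @traceid                                 -- note the id; stop it later with status 0

The resulting .trc file can be opened in Profiler or read with fn_trace_gettable.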

Performance monitor

Hi all,
I am wondering how bad it can be if I have Performance Monitor and SQL
Profiler running against my production SQL Server. Do they affect the performance
a lot? Is there a better way for me to do a similar thing?
Thanks.|||See if this helps:
Automating Server Side Tracing in SQL Server
http://vyaskn.tripod.com/server_side_tracing_in_sql_server.htm
Tips On Using the SQL Server Profiler
http://www.sql-server-performance.com/sql_server_profiler_tips.asp
Tips for Using Performance Monitor
http://www.sql-server-performance.com/performance_monitor_tips.asp
AMB
"Catelin Wang" wrote:
> Hi all,
> I am thinking how bad it can be if I have perfomance monitor and sql
> profiler running for my production SQL server? do they affact the perfomance
> a lot? If there a better way for me to do the samilar thing?
> Thanks.|||Very good information. Thanks a lot.
"Alejandro Mesa" wrote:
> See if this helps:
> Automating Server Side Tracing in SQL Server
> http://vyaskn.tripod.com/server_side_tracing_in_sql_server.htm
> Tips On Using the SQL Server Profiler
> http://www.sql-server-performance.com/sql_server_profiler_tips.asp
> Tips for Using Performance Monitor
> http://www.sql-server-performance.com/performance_monitor_tips.asp
>
> AMB
> "Catelin Wang" wrote:
> > Hi all,
> > I am thinking how bad it can be if I have perfomance monitor and sql
> > profiler running for my production SQL server? do they affact the perfomance
> > a lot? If there a better way for me to do the samilar thing?
> >
> > Thanks.

Performance issue

Looking for suggestions about what I might look for..
During periods of time when no INSERTS are taking place, SELECT statement
durations are around 16-30ms for hundreds of requests per minute.
When there is a batch of INSERTS that are sent to the table via SP that
number in the many thousands..for a while both INSERTS and SELECTS coexist at
around the same duration of 16-30 ms..watching this with Profiler.
Then, for some reason..some SELECT and few INSERTS will go up to the 4000 -
5000 ms range. Then things will quiet down, then go back up to 4000ms. These
don't have to be on the same table and frequently are not...INSERTS are going
to 1 table while the majority are going to another table in the same DB. This
happens on all the servers we have so it's not specific to one server. They
all have the same hardware and software.
I've looked for deadlocks and there are none. I've set some of the common
things in PERFMON such as BUFFER CACHE HIT RATIO. Looks ok. There's 4GB of
RAM on the server and there's 1GB available.
The MDF has its own separate drive and the LDF has its own drive.
It almost looks like there's some sort of contention between the INSERT and
the SELECT, but since they're in separate tables...I have no idea what to look
for.
any help appreciated.
The servers were recently upgraded from SQL 7 .. I'm not sure if this was
happening under SQL 7. I don't think so. I've updated statistics on all
servers and recreated the clustered indexes.
SQL 2000 with SP4.
thanks,
Don|||Try updating stats with FULLSCAN. Also, trace the Lock Escalation event.
Tom
----
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
SQL Server MVP
Columnist, SQL Server Professional
Toronto, ON Canada
www.pinpub.com
.
"donsql22222" <donsql22222@.discussions.microsoft.com> wrote in message
news:776E5217-2EDB-40FF-BFA6-E8A3E1ADF980@.microsoft.com...
Looking for suggestions about what I might look for..
During periods of time when no INSERTS are taking place, SELECT statement
durations are around 16-30ms for hundreds of requests per minute.
When there is a batch of INSERTS that are sent to the table via SP that
number in the many thousands..for a while both INSERTS and SELECTS coexist
at
around the same duration of 16-30 ms..watching this with Profiler.
Then, for some reason..some SELECT and few INSERTS will go up to the 4000 -
5000 ms range. Then things will quiet down, then go back up to 4000ms. These
don't have to be on the same table and frequently are not...INSERTS are
going
to 1 table while the majority are going to another table in the same DB.
This
happens on all the servers we have so it's not specific to one server. They
all have the same hardware and software.
I've looked for deadlocks and there are none. I've set some of the common
things in PERFMON such as CACHE BUFFER HIT RATIO. Looks ok. There's 4GB of
RAM on the server and there's 1GB available.
The MDF has it's own separate drive and the LDF has it's own drive.
It's almost looks like theres some sort of contention between the INSERT and
the SELECT, but since it's in separate tables...i have no idea what to look
for.
any help appreciated.
The servers were recently upgraded from SQL 7 .. I'm not sure if this was
happening under SQL 7. I don't think so. I've updated statistics on all
servers and recreated the clustered indexes.
SQL 2000 with SP4.
thanks,
Don|||On Mon, 13 Feb 2006 16:38:26 -0800, "donsql22222"
<donsql22222@.discussions.microsoft.com> wrote:
>It's almost looks like theres some sort of contention between the INSERT and
>the SELECT, but since it's in separate tables...i have no idea what to look
>for.
Of course there's contention, for LRU pages in memory, the more so
because they are different tables!
Run the queries during the slow period from QA with SET STATISTICS IO
ON, and look at the physical reads numbers going from 0 to whatever.
J.
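
Putting the two suggestions above into T-SQL, roughly (the table and the query are placeholders for the real objects involved):

-- Rebuild statistics with a full scan instead of the default sample
UPDATE STATISTICS dbo.Orders WITH FULLSCAN      -- placeholder table name

-- Re-run one of the slow SELECTs during a slow period and compare reads
SET STATISTICS IO ON
SELECT COUNT(*) FROM dbo.Orders WHERE OrderDate >= '20060101'   -- placeholder query
SET STATISTICS IO OFF
-- A jump in "physical reads" from 0 suggests the insert batch pushed the
-- SELECT's pages out of the buffer cache, as described above.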

Performance Enhancement through proper database designing

Dear Reader

I am trying to design a database. How can I make the best judgement that the indexing (which I am trying to fix during the diagram designing process) is OK?

I am able to identify the best candidate for the indexing.

Below is the details I want to understand:

Area
ZIP
City
County
District
State/Province
Country

Now I want to optimize data retrieval through indexes. (You can suggest another idea, also.)

Entities Area,..., Country have independent tables.
Example:

Area_Table
AreaID (PK)
Area
They have a one-to-many relationship if you go from Country to Area.

There is one more table:

Location_Table
LocationID (PK)
AreaID
ZIPID
CityID
CountyID
DistrictID
State/ProvinceID
CountryID

(Location_ID is further related to the Address of the contact.)

The GUI has a single form to enter these details. On a save command, details in all the tables - Area to Country - are inserted (individually),
and simultaneously a row is also inserted into the Location_Table with the details.

Following is the situation of being queried these tables:

(1) A GUI user can select an Area, then the related details of ZIP, ..., up to Country etc. should be loaded automatically (if they were previously stored by the user in the database).

(2) Contacts have to be retrieved on the basis of Area, ZIP, ..., County. (Necessary groupings are required.)

Example:
If Contacts are queried country-wise then the display should be
Country1
  State1
    District1
      County1
        City1
          ZIP1
            Area1
            Area2
          ZIP2
        City2
      County2
    District2
Country2

Please Guide.

SuryaPrakash

|||SuryaPrakash Patel via SQLMonster.com (forum@.SQLMonster.com) writes:
> I am trying to design a database. How can I make best Judgement that
> Indexing (which I am trying to fix during Diagram Desingning process)is
> ok.

I was not really able to understand exactly what the queries would look
like. While a trained database designer certainly puts in indexes already
at the early design stage, it's better to focus on getting the data model
right to support the functional requirements first. Once we have the
queries and the tables, it may be easier to say what would be the
best indexes.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||On Tue, 09 Nov 2004 04:32:02 GMT, SuryaPrakash Patel via SQLMonster.com
wrote:

>Dear Reader
>I am trying to design a database.
(snip)

Hi SuryaPrakash,

Further to Erland's comments: you should also not think about GUI or
report formats when designing a database. Database design should be driven
by the structure of the data only.

Only when you have a properly normalized database that will store all data
the application needs and that will reject all modifications that would
hurt data integrity comes the time to think about the user interface (both
for in- and output screens and for reports) and about adding extra indexes
for performance enhancement.

Best, Hugo
--

(Remove _NO_ and _SPAM_ to get my e-mail address)|||Dear Erland & Hugo

Thanks

I got your suggestions.

I have specific needs for certain kinds of output, so it is natural that I incorporate those needs while doing the database design.

But from now on I will try to stick to the database structure only, and leave the GUI part and the indexing part for later stages of development.

A point to be noted is that I will not have any chance to deploy any index after the software has been installed at the user's end. Any comments?

Thanks again

SuryaPrakash

|||On Wed, 10 Nov 2004 00:59:23 GMT, SuryaPrakash Patel via SQLMonster.com
wrote:

>Point to be noted is that I will not have any chance to deploy any index after the software has been installed at user's end. Any comments.

Hi SuryaPrakash,

That's a logical and quite common scenario. That's why you should only
install the software when it's completely finished. I'd recommend building
the software first, then testing and debugging until it works as desired, then
doing performance tests and tweaking things (adding indexes,
rewriting queries, etc.) until the speed is as desired. Then do a final
test to check that the performance tweaking didn't break functionality.

Installing at the user's end should be postponed until all these stages
are done and the product is completely finished.

Best, Hugo
--

(Remove _NO_ and _SPAM_ to get my e-mail address)|||SuryaPrakash Patel via SQLMonster.com (forum@.SQLMonster.com) writes:
> Point to be noted is that I will not have any chance to deploy any index
> after the software has been installed at user's end. Any comments.

So you need to test carefully with real-world data before you ship.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp
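
To make the discussion concrete: once the queries are known, one possible shape for the tables described in the original post is sketched below. The identifier names (StateProvinceID and so on) and the choice of indexes are illustrative assumptions, not a recommendation from the thread:

CREATE TABLE dbo.Area_Table (
    AreaID int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    Area   varchar(100) NOT NULL
)

CREATE TABLE dbo.Location_Table (
    LocationID      int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    AreaID          int NOT NULL REFERENCES dbo.Area_Table (AreaID),
    ZIPID           int NOT NULL,
    CityID          int NOT NULL,
    CountyID        int NOT NULL,
    DistrictID      int NOT NULL,
    StateProvinceID int NOT NULL,
    CountryID       int NOT NULL
)

-- One composite index supports country-wise grouping down to Area;
-- a narrow index supports lookups that start from an Area
CREATE INDEX IX_Location_Hierarchy
    ON dbo.Location_Table (CountryID, StateProvinceID, DistrictID, CountyID, CityID, ZIPID, AreaID)
CREATE INDEX IX_Location_Area
    ON dbo.Location_Table (AreaID)

As Erland and Hugo point out above, whether these indexes are the right ones can only be judged against the real queries and data volumes.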