
Wednesday, March 28, 2012

Performance difference between 3027 and 3033

Hi,

I have found major differences in performance between the two versions - especially with queries that should be answered from cache when I execute them multiple times...

Test query - after the first execution it takes 1 second on 9.0.3027 and 12 seconds on 9.0.3033.

Is this a bug in the last CTP - does this change back in the released SP2? Or is there something bad in my models for the latest version?

We used two identical machines - processed the database on 9.0.3033 and restored the backup on 9.0.3027.

The query looks like:

select
{[4WochenSet] * [Zeit].[Tag].[Tag].Members} on axis(0),
Descendants([Immobilien].[Geographie].[Bundesland].&[Wien]&[AUT], [Immobilien].[Geographie].[PLZ], SELF) on axis(1)
from Angebotsanalyse
where ([Measures].[Kaufpreis je Wohnflaeche MW], [Plattform].[Plattform].&[156], [Immobilien].[Immobilientyp].&[5])

LG, HANNES

Are there any calculations used by the query? How is the set [4WochenSet] defined? If you capture the performance counters under MSAS 2005:MDX during the query, do you see any significant difference in performance counter values between the two builds? For example, do you see a much larger increase in the value of the [Total calculation covers] performance counter in build 3033? If there are calculations involved in the query, such as IIF, try the connection string property "Cache Policy=9" to see if there is any difference in performance.
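
For example, a connection string might look like the line below (the server and catalog names are placeholders; only the Cache Policy property comes from the suggestion above):

Provider=MSOLAP;Data Source=myserver;Initial Catalog=MyCatalog;Cache Policy=9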

Jeffrey

|||

There are some calculations; the measure [Kaufpreis je Wohnflaeche MW] is defined as

AVG(existing({[Immobilien].[Immobilie].[Immobilie].Members}), [Measures].[Kaufpreis je Wohnflaeche])

and the set is defined as

{Tail(except(
exists({[Zeit].[Zeit nach Wochen].[Woche].Members}, [Zeit].[Abgeschlossen].&[True]),
exists({[Zeit].[Zeit nach Wochen].[Woche].Members}, [Zeit].[Abgeschlossen].&[False])
), 4)}

By this definition there are no IIF calculations, but the base measure [Measures].[Kaufpreis je Wohnflaeche] uses the LastNonEmpty aggregation.

At the moment I am testing only with Management Studio - later I will test it with your suggestions.

Is this further optimized in the released version of SP2, so that we do not need the setting?

LG, HANNES

|||

It's hard to tell what caused the performance degradation. But since there is no IIF or other conditional calculation, "Cache Policy=9" shouldn't make any difference.

Saturday, February 25, 2012

Perform upgrade method

hi

I have two versions of the same data flow component, and I need to use the new version of the component in packages created with the old version. How can I do this using the PerformUpgrade method?

Thanks

Mani

An explanation of PerformUpgrade and sample code can be found in Books Online or at:

http://msdn2.microsoft.com/ko-kr/library/microsoft.sqlserver.dts.pipeline.pipelinecomponent.performupgrade.aspx

Hope this helps,
Ovidiu Burlacu

|||

hi

Thanks, I found the solution: the PerformUpgrade method only works if the CurrentVersion property stored in the package is less than the CurrentVersion property of the component on the local computer.

Regards

Mani

|||

hi

If I remove the old DLL from the GAC, I get errors while opening a package saved with the old version of the DLL.

These are the errors...

Error 1 Validation error. Data Flow Task: DTS.Pipeline: The component metadata for "component "Oracle Destination" (67)" could not be upgraded to the newer version of the component. The PerformUpgrade method failed. Package1.dtsx 0 0
Error 2 Error loading Package1.dtsx: The "component "Oracle Destination" (67)" failed to cache the component metadata object and returned error code 0x80131600. d:\testproject\integration services project5\Package1.dtsx 1 1
Error 3 Error loading Package1.dtsx: Component "component "Oracle Destination" (67)" could not be created and returned error code 0xC0047067. Make sure that the component is registered correctly. d:\testproject\integration services project5\Package1.dtsx 1 1
Error 4 Error loading Package1.dtsx: The component metadata for "component "Oracle Destination" (67)" could not be upgraded to the newer version of the component. The PerformUpgrade method failed. d:\testproject\integration services project5\Package1.dtsx 1 1

How can I correct these errors?

thanks

Mani

|||

You should not remove the old DLL from the GAC, because then the component can no longer be instantiated. The new DLL should be able to live in the GAC alongside the old DLL, because they have different versions (file or assembly version, or even the PublicKeyToken).

Thanks,
Ovidiu

|||

Ovidiu, I don't see how you can do side-by-side with SSIS components in the GAC. File versions do not differentiate between assemblies, so you cannot use them to install side by side. If you change the assembly version or the key (and therefore the public key token) you can run them side by side, but then you are stuck, since SSIS stores the full strong name in the package. Any change in the strong name means you get the nice error about not being able to load the assembly. Hacking the XML would seem to be the only solution; shame it is not a supported one :)

|||Indeed, you are right. I think it's time for me to go on vacation. |||

In fact you can automatically upgrade a dataflow component's metadata when you upgrade the component's assembly version.

Here's how:

1. Remove the old assembly from the GAC.

2. Make sure your new component's CurrentVersion argument of the DtsPipelineComponent attribute is greater than the one saved in the old metadata. That'll ensure that PerformUpgrade of the new component will be called when SSIS opens the package.

3. In the new component, override PerformUpgrade and make sure it includes the following (a consolidated sketch appears after this list):

ComponentMetaData.CustomPropertyCollection["UserComponentTypeName"].Value = this.GetType().AssemblyQualifiedName;

This changes the metadata's type reference to point at the new type and version. This ensures that things like double-clicking the component still work if you're using, for example, the UITypeName argument of the DtsPipelineComponent attribute.
Of course, here you should also upgrade any metadata properties from the old version to the new one, including any UITypeEditor properties of custom properties that have their own custom editors.

4. Install the new assembly to the GAC.

5. Create a policy assembly redirecting the old version to the new one and install it in the GAC (a sketch of such a policy file follows the note below). For an example of how to do that, see: http://samples.gotdotnet.com/quickstart/howto/doc/pubpolicy.aspx
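
Putting steps 2 and 3 together, a minimal sketch of what the upgraded component might look like (illustrative only - the class and display names are hypothetical, and the custom property migration is elided):

using Microsoft.SqlServer.Dts.Pipeline;

[DtsPipelineComponent(
    ComponentType = ComponentType.DestinationAdapter,
    DisplayName = "Oracle Destination",  // hypothetical component
    CurrentVersion = 2)]                 // greater than the version saved in old packages
public class OracleDestination : PipelineComponent
{
    public override void PerformUpgrade(int pipelineVersion)
    {
        // Repoint the saved metadata at the new type and assembly version,
        // so the designer can resolve the component (and its UI) again.
        ComponentMetaData.CustomPropertyCollection["UserComponentTypeName"].Value =
            this.GetType().AssemblyQualifiedName;

        // ...migrate any custom properties from the old layout here...

        // Record that the stored metadata is now at the current version.
        ComponentMetaData.Version = 2;
    }
}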

Now when you open packages with the old version the pattern above will automatically upgrade to the new version.
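
As for the policy assembly in step 5, it is built from a config file along these lines (a sketch only - the assembly name, versions, and public key token are placeholders) and linked into a DLL with al.exe:

<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="OracleDestination" publicKeyToken="0123456789abcdef" culture="neutral" />
        <bindingRedirect oldVersion="1.0.0.0" newVersion="2.0.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>

al.exe /link:policy.config /out:policy.1.0.OracleDestination.dll /keyfile:key.snk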

Milen

Perform aggregate functions on uniqueidentifiers

For some reason, [on SQL 2000] one cannot perform "Count(X)" where X is of type uniqueidentifier. Will future versions of SQL Server suffer from this limitation? 2003 or 2005? We came across this problem when we had to execute a query with multiple table joins.

|||

Hasani,

The workaround that I use is to store them as BINARY(16).
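
A minimal sketch of that workaround (the table and column names are made up): the GUID is stored as BINARY(16), so the aggregate that SQL 2000 rejects against a uniqueidentifier column goes through.

-- Hypothetical table: the GUID is stored as BINARY(16) instead of uniqueidentifier
CREATE TABLE Orders (
    OrderGuid BINARY(16) NOT NULL DEFAULT (CONVERT(BINARY(16), NEWID())),
    Amount MONEY NOT NULL
)

-- COUNT over the BINARY(16) column is accepted
SELECT COUNT(OrderGuid) AS OrderCount FROM Orders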
"Hasani (remove nospam from address)" <hblackwell@.n0sp4m.popstick.com> wrote
in message news:%233sMM$flEHA.3564@.TK2MSFTNGP14.phx.gbl...
> For some reason, [on sql2k] one cannot perform "Count(X)" where X is of
type
> uniqueidentifier. Will future versions of sql server suffer from this
> limitation? 2003 or 2005?
> We came across this problem when we had to execute a query with multiple
> table joins.
>|||clever, i'll tell my supervisor tomorrow.
"Adam Machanic" <amachanic@.hotmail._removetoemail_.com> wrote in message
news:Oc23mQglEHA.592@.TK2MSFTNGP11.phx.gbl...
> Hasani,
> The workaround that I use is to store them as BINARY(16).
>
> "Hasani (remove nospam from address)" <hblackwell@.n0sp4m.popstick.com>
> wrote
> in message news:%233sMM$flEHA.3564@.TK2MSFTNGP14.phx.gbl...
>> For some reason, [on sql2k] one cannot perform "Count(X)" where X is of
> type
>> uniqueidentifier. Will future versions of sql server suffer from this
>> limitation? 2003 or 2005?
>> We came across this problem when we had to execute a query with multiple
>> table joins.
>>
>|||"Hasani (remove nospam from address)" <hblackwell@.n0sp4m.popstick.com> wrote
in message news:%23gbFEiglEHA.2892@.tk2msftngp13.phx.gbl...
> clever, i'll tell my supervisor tomorrow.

|||

If you want to get even trickier, you can experiment with doing something like this when you store the GUID:

SELECT CONVERT(BINARY(6), GETDATE()) + CONVERT(BINARY(10), NEWID()) AS DateGUID

This reduces the uniqueness a bit (removes 6 of the 16 bytes), but not too much, because there are only so many rows you can insert every 3 milliseconds. The upside is that you can now cluster on your GUID column without destroying INSERT performance.
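
For instance, a table clustered on such a date-prefixed value might look like this sketch (names are made up; the default reuses the expression above):

CREATE TABLE Sessions (
    SessionKey BINARY(16) NOT NULL
        DEFAULT (CONVERT(BINARY(6), GETDATE()) + CONVERT(BINARY(10), NEWID())),
    StartedAt DATETIME NOT NULL DEFAULT (GETDATE()),
    CONSTRAINT PK_Sessions PRIMARY KEY CLUSTERED (SessionKey)
)

-- New keys are roughly ascending in time, so inserts land at the end of the
-- clustered index instead of splitting pages throughout it.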
"Adam Machanic" <amachanic@.hotmail._removetoemail_.com> wrote in message
news:eb$oqkglEHA.3712@.TK2MSFTNGP15.phx.gbl...
> "Hasani (remove nospam from address)" <hblackwell@.n0sp4m.popstick.com>
> wrote
> in message news:%23gbFEiglEHA.2892@.tk2msftngp13.phx.gbl...
>> clever, i'll tell my supervisor tomorrow.
> If you want to get even tricker, you can experiment with doing
> something
> like this when you store the GUID:
> SELECT CONVERT(BINARY(6), GETDATE()) + CONVERT(BINARY(10), NEWID()) AS
> DateGUID
> This reduces the uniqueness a bit (removes 6 of the 16 bytes), but not
> too much because there are only so many rows you can insert every 3
> milliseconds. The upside is that you can now cluster on your GUID column
> without destroying INSERT performance.
>|||"Hasani (remove nospam from address)" <hblackwell@.n0sp4m.popstick.com> wrote
in message news:eP7D5uglEHA.712@.TK2MSFTNGP09.phx.gbl...
> Will sql server allow binary columntypes as primary keys?
Yes. When I have used GUIDs as primary keys (rarely; I don't think it's a great idea most of the time), I have used the BINARY(16) technique. More recently I've used the date concatenation technique in a project and it worked out very well.

|||

What are your reasons for not using a GUID as a primary key?
We currently use integers as a primary key, but we use a stored procedure to generate a unique random non-sequential integer, and we store this value in a table to stop duplicates. In that scenario, I'm arguing that we should just use uniqueidentifier types, because we seem to just be reinventing the wheel, but then someone mentioned the aggregate function thing with uniqueidentifier types. I'm not aware of any penalties associated with using uniqueidentifier types, though, other than that it will require more bytes per column than an int.
"Adam Machanic" <amachanic@.hotmail._removetoemail_.com> wrote in message
news:%23OvcnxglEHA.596@.tk2msftngp13.phx.gbl...
> "Hasani (remove nospam from address)" <hblackwell@.n0sp4m.popstick.com>
> wrote
> in message news:eP7D5uglEHA.712@.TK2MSFTNGP09.phx.gbl...
>> Will sql server allow binary columntypes as primary keys?
> Yes. When I have used GUIDs as primary keys (rarely, I don't think
> it's
> a great idea most of the time), I have used the BINARY(16) technique.
> More
> recently I've used the date concatenation technique in a project and it
> worked out very well.
>|||Hasani (remove nospam from address) wrote:
> What are you reasons for not using a guid as a primary key?
> We currently use integers as a primary key, but we use a stored
> procedure to generate a unqiue random non-sequential integer, and we
> store this value in a table to stop duplicates. In that scenario, I'm
> arguing that we should just use uniqueidentifier types because we
> seem to just be reinventing the wheel, but then someone mention the
> aggregate function thing with uniqueidentifier types. I'm not aware
> of any penalties associated with using uniqueidentifier types though,
> other than, it will require more bytes per column, than an int.

|||

You're right in that it's a lot more bytes per row using a UID as opposed to an INT IDENTITY. Four times the storage, which translates to a much larger index when using a uniqueidentifier. And as Adam eloquently mentioned, using a UID as a clustered key does not work well, because you get a lot of page splitting and head movement on the drives. Adding a date component as a prefix to the UID prevents much of the page splitting, increasing insert performance. However, using a UID as the clustered key means propagating that key to all non-clustered indexes, making them much larger as well.

If you can, I would stick with an INT IDENTITY column for a PK.

David G.

|||

I think David G pointed out most of the issues in his post, so I'll instead refer to the only times I have had to use a GUID, which is when the application itself was responsible for creating the key. Applications cannot reliably create unique integers, so GUIDs are pretty much the only choice (or natural primary keys, if there's one available).

Also, why would you want to use a non-sequential random integer instead of an IDENTITY?

|||

Maybe I contradicted myself when I said non-sequential random...
We essentially need a random number generator to use as a primary key value. I don't know if SQL supports it. All I've seen is a unique number generator that increments by one on every insert. It's unique but not random. The problem is, this value is going to be made public, and we don't want to make it obvious that it's just an incrementing value (think cookies and web sessions).

What we currently do (sometimes) is have 2 columns: one that's an auto-incrementing int and the primary key, and the other a uniqueidentifier column that isn't a primary key (but may have a constraint to make sure there are no duplicates), and we make the uniqueidentifier value public, so in a cookie it always looks random.

I don't feel comfortable with this scenario, because you have 2 columns that are doing the same thing (preserving/ensuring uniqueness). So I'm trying to look at all the tradeoffs of using a uniqueidentifier instead of an int, and vice versa.
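
A minimal sketch of that two-column pattern (table and column names are made up): the int identity stays the internal primary key, and only the uniqueidentifier is exposed in the cookie.

CREATE TABLE WebSessions (
    SessionID INT IDENTITY(1, 1) NOT NULL PRIMARY KEY,
    PublicToken UNIQUEIDENTIFIER NOT NULL DEFAULT (NEWID()) UNIQUE,
    UserName NVARCHAR(50) NOT NULL
)

-- The cookie carries PublicToken; SessionID never leaves the server.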
"Adam Machanic" <amachanic@.hotmail._removetoemail_.com> wrote in message
news:%23IUly9mlEHA.1652@.TK2MSFTNGP09.phx.gbl...
> "Hasani (remove nospam from address)" <hblackwell@.n0sp4m.popstick.com>
> wrote
> in message news:OUyrc5glEHA.2892@.tk2msftngp13.phx.gbl...
>> What are you reasons for not using a guid as a primary key?
>> We currently use integers as a primary key, but we use a stored procedure
> to
>> generate a unqiue random non-sequential integer, and we store this value
> in
>> a table to stop duplicates. In that scenario, I'm arguing that we should
>> just use uniqueidentifier types because we seem to just be reinventing
>> the
>> wheel, but then someone mention the aggregate function thing with
>> uniqueidentifier types. I'm not aware of any penalties associated with
> using
>> uniqueidentifier types though, other than, it will require more bytes per
>> column, than an int.
> I think David G pointed out most of the issues in his post, so I'll
> instead refer to the only times I have had to use a GUID, which is when
> the
> application itself was responsible for creating the key. Applications
> cannot reliably create unique integers, so GUIDs are pretty much the only
> choice (or natural primary keys, if there's one available).
> Also, why would you want to use a non-sequential random integer instead
> of an IDENTITY?
>|||"Hasani (remove nospam from address)" <hblackwell@.n0sp4m.popstick.com> wrote
in message news:%23Aar%23RnlEHA.1356@.TK2MSFTNGP09.phx.gbl...
> Maybe I contradicted myself when I said non-sequential random...
> We essentially need a random number generator to use as a primary key
value.
> I don't know if sql supports it. All I've seen is a unique number
generator
> that increments by one on every insert. It's unique but not random. The
> problem is is, this value is going to be made public and we don't want to
> make it obvious that it's just an incrementing value (think cookies and
> websessions).

|||

If you're only generating one at a time, why not just use RAND()?

|||

Well, there's a stored procedure someone created that uses RAND to create a unique integer, storing all values created by the stored proc in a table to stop duplicates. Unfortunately, when a record that has a value generated by the stored procedure is deleted, the generated value is not removed from the lookup table used by the stored procedure. That's the only reason I'm wary of RAND, but I can modify the code to make sure deleted records 'release' the generated RAND value. But I do like the UID because it's alphanumeric, which is more secure in a cookie - well, in cracking time - than an all-numeric cookie.
"Adam Machanic" <amachanic@.hotmail._removetoemail_.com> wrote in message
news:OGss9inlEHA.748@.TK2MSFTNGP15.phx.gbl...
> "Hasani (remove nospam from address)" <hblackwell@.n0sp4m.popstick.com>
> wrote
> in message news:%23Aar%23RnlEHA.1356@.TK2MSFTNGP09.phx.gbl...
>> Maybe I contradicted myself when I said non-sequential random...
>> We essentially need a random number generator to use as a primary key
> value.
>> I don't know if sql supports it. All I've seen is a unique number
> generator
>> that increments by one on every insert. It's unique but not random. The
>> problem is is, this value is going to be made public and we don't want to
>> make it obvious that it's just an incrementing value (think cookies and
>> websessions).
> If you're only generating one at a time, why not just use RAND()?
>|||Hasani (remove nospam from address) wrote:
> We'll there's a stored procedure someone created that uses RAND to
> create a unique integer, by storing all values created by the stored
> proc in a table, to stop duplicates, but, unfortunately, when a
> record is deleted that has a value generated by the stored procedure,
> it doesn't remove the generated value from the lookup table used by
> the stored procedure. That's the only reason why I'm was RAND, but I
> can modify the code to make sure deleted records 'release' the
> generated RAND value. But I do like the uid because it's
> alphanumeric, which is secure more in a cookie, well in cracking
> time, than an all numeric cookie.
>

|||

You could add a computed column to the table to do the same thing (which would eliminate the overhead of using a uniqueidentifier altogether). And you can start the identity value higher if you don't want it to start at 0.

Something like:

CREATE TABLE #test (
    ID INT IDENTITY NOT NULL,
    SomeText NVARCHAR(10),
    CookieID AS N'ALPHA-STUFF' + RIGHT(N'0000000000' + CAST(ID AS NVARCHAR(10)), 10)
)

INSERT INTO #test VALUES ('ABC')
INSERT INTO #test VALUES ('ABC')
INSERT INTO #test VALUES ('ABC')
INSERT INTO #test VALUES ('ABC')
INSERT INTO #test VALUES ('ABC')

SELECT * FROM #test

David G.