Monday, March 26, 2012
Performance degradation after 450 user connections.
I am seeing a strange behavior with my server. After the number of concurrent connections reaches 450 and above, the performance of the server drops significantly. 450 seems to be a magic number; anything below it doesn't affect performance.
We have a large SQL 2000 Enterprise environment that should be able to handle a lot more users. The server has 16 processors, 7 GB of RAM dedicated to SQL, and a fast EMC SAN. The cache hit ratio is high, I/O is normal, and CPU utilization is at 35%.
Locking on the tables is minimal: all the queries use the "nolock" hint and we have no ad-hoc updates/inserts.
The application that connects to the database gets very slow, however, and connecting (opening a new window) through Query Analyzer takes up to a minute.
None of the connections uses a significant amount of resources, and connections are closed properly.
What else should I be looking at?|||I am certain SQL Server ought to handle at least 700-800 connections.
What might be happening in your case is that so much memory is allocated to buffers in the lower 3 GB of address space (memory above 4 GB is handled by PAE) that not enough address space is left for network connections.
Try -g100 (where 100 is in MB), or some other value, for the SQL Server startup parameter. This can be done in Computer Management -> Services, on the properties page for MSSQLSERVER, at the bottom; you will need to stop the service first.
|||I noticed that I have a lot of context switching: 5,000/sec. The minimum query cost threshold for switching to a parallel execution plan is 5, and I suspect this could be the cause of all the context switching; 5 seems very low for our queries.
I will try switching to fibers to see if this helps, but is there a formula to figure out the parallel execution threshold based on the CPU speed?
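That threshold is the "cost threshold for parallelism" server option, and it is expressed in estimated seconds of serial execution on a reference machine, not derived from CPU speed, so there is no CPU-based formula. A minimal sketch of checking and raising it via sp_configure (the value 25 is only an illustration, not a recommendation):
-- 'cost threshold for parallelism' is an advanced option
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Default is 5; plans whose estimated serial cost exceeds it may go parallel
EXEC sp_configure 'cost threshold for parallelism';
-- Illustrative value only: raise it so cheap queries stay serial
EXEC sp_configure 'cost threshold for parallelism', 25;
RECONFIGURE;
Raising it means only genuinely expensive plans are parallelized, which can reduce the context switching caused by over-parallelizing small queries.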
|||Another thing to try is to see what the "max worker threads" setting is, via
sp_configure (it is an advanced option). The default setting of this option
is 255, in order to prevent thrashing. If your CPU utilization is pretty
low, you can try increasing this setting and see if it makes a difference.
Typically, if response time suffers while throughput does not, it may be because requests are waiting for a thread to execute.
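A minimal sketch of inspecting and changing that setting (the value 512 is purely an example, not a recommendation):
-- 'max worker threads' is an advanced option; default is 255 on SQL Server 2000
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max worker threads';
-- Example value only: enlarge the thread pool if requests queue for threads
EXEC sp_configure 'max worker threads', 512;
RECONFIGURE;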
Something that was not clear from your mail is whether the slowdown happened after all the connections were established, while just running queries, or whether the slowdown was just in connect time. If the slowdown was just during connect time, do you have NT authentication turned on? There was a bug in SQL Server (which has since been fixed) where, if the DC was not responding quickly, SQL Server would unnecessarily hold threads while doing the logon. If you're not running on the latest QFE rollup, that would be another thing to consider doing, and it should fix this particular issue if that's what you're having.
S. Ashwin
--
This posting is provided "AS IS" with no warranties, and confers no rights.
Use of included script samples are subject to the terms specified at
http://www.microsoft.com/info/cpyright.htm.
"Eugene" <kyevgeniy@.hotmail.com> wrote in message
news:71740f98.0308150650.612af5e3@.posting.google.com...
> I noticed that I have a lot of context switching, 5000/sec. The
> minimum query plan threshold for switching to parrallel execution is
> 5, I suspect that this could be the cause of all the context
> switching. 5 seems to be very low based on our queries.
> I will try switching to fibers to see if this helps, but is there a
> formula to figure out the parallel execution threshold based on the
> CPU speed?
Monday, March 12, 2012
performance and #of users
Originally posted by ricka49:
If we have an SQL server with 20 concurrent connections, how does it fail as it runs out of RAM and starts swapping? Does it slow down enough in its ability to service connections that it would appear you need to add more concurrent connections? And if you add those connections, will the server as a whole start to slow down as the system uses more and more disk (DASD) to emulate the RAM it needs?
SQL Server 7.0 and 2000 use approximately 50 KB of server RAM per connection. Note that a user can have multiple connections, even with connection pooling. A good rule of thumb here is one connection per 100 records returned, so 50 KB of server memory per 100 records.
I cannot cite an article on this; I got the information from an MS developer when I was looking at licensing options a couple of years back.|||You can check the number of users logged in from Enterprise Manager. You can also run sp_who and/or sp_who2. There is another place to check on user connections (and also connection memory):
Look at the sysperfinfo table in master:
SELECT * FROM master..sysperfinfo
WHERE
(object_name = 'SQLServer:General Statistics' and counter_name = 'User Connections') OR
(object_name = 'SQLServer:Memory Manager' and counter_name = 'Connection Memory (KB)')
I make it a general habit to snapshot this data to a history table on an hourly basis; it sometimes helps me track usage and performance issues.
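A minimal sketch of that snapshot habit, assuming a simple history table (the table and column names below are my own, not from the original post):
-- One-time setup: history table for the counter snapshots
CREATE TABLE dbo.PerfCounterHistory (
    snapshot_time datetime NOT NULL DEFAULT GETDATE(),
    object_name   nchar(128) NOT NULL,
    counter_name  nchar(128) NOT NULL,
    cntr_value    int NOT NULL
);
GO
-- Run hourly, e.g. as a SQL Agent job step
INSERT INTO dbo.PerfCounterHistory (object_name, counter_name, cntr_value)
SELECT object_name, counter_name, cntr_value
FROM master..sysperfinfo
WHERE (object_name = 'SQLServer:General Statistics' AND counter_name = 'User Connections')
   OR (object_name = 'SQLServer:Memory Manager' AND counter_name = 'Connection Memory (KB)');
Snapshotting on a schedule gives you a cheap baseline to compare against when usage or performance questions come up later.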
Regards,
Hugh Scott
Friday, March 9, 2012
Performance - Database apps, connections, and C#
I have a Windows client/server app. There will typically be 3-5 client users accessing the system at any given time, all on a local area network. When they are using the system, they might use it for an hour or more. While viewing the data, they will make frequent requests from the database. (In some cases, the screens they are viewing will automatically refresh, by sending a new query to the database.) In other words, each client might request new data every 5 seconds or so.
Also, there is a service running that automatically monitors machines and puts the recorded data into a database, resulting in a database update once per second from each machine. There are anywhere from 4 to 100 machines.
My question is whether I would be better off creating a database connection at the time the user logs on and maintaining that connection until he logs off, or whether it would be better to create and use a new connection for every query. Using a new connection for every query, I get the overhead associated with new connections. Keeping the connection alive the whole time, I get the overhead associated with maintaining a connection.
Likewise, with the machine-monitoring connections, would I be better off opening a connection for each machine and keeping it alive as long as the monitoring continued, or would I be better off creating and disposing of it frequently?
In case it matters, this is all using Visual C# with ADO.NET.
Generally, most apps today are written to close the connection when you're done with it; web applications especially open and close connections very frequently. The client will use connection pooling to reduce the overhead of establishing brand-new connections, so the performance hit isn't so bad.
Most performance gains are achieved by tuning the queries you're sending to the server and by good database design.
Thanks,
Kevin McDonnell
Microsoft Corporation
This posting is provided AS IS with no warranties, and confers no rights.
|||Keeping connections open is very expensive.
Performance
My question is: I might have an implementation with over 350 concurrent users using the application at the same time. How can I verify that SQL Server will not face any performance issues? Are there any particular specifications for the server that SQL Server will be installed on? Should I use multiple filegroups, or separate the log file onto one hard disk and the data file onto another?
Please, if you have any documentation that provides real-world performance testing of SQL Server with more than 350 users, it would be a great help.
P.S.: My database is small (about 130 tables); most of them are only listings (users, permissions...), and mainly I have about 5 tables which will accumulate data incrementally.
Thanks all,
Regards,
Kamel Balquis|||There is no stock answer that can be applied to your situation; there are too many factors that can affect your system.|||This is exactly what I want to know: what are these factors? Server performance? RAM size? Hard disk speed? Multiple filegroups?
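Not a definitive answer, but a minimal sketch of the file layout the question asks about, with the log on a separate disk from the data and a dedicated filegroup for the few fast-growing tables (all paths, names, and sizes here are assumptions):
-- Data, fast-growing data, and log on three assumed physical disks
CREATE DATABASE AppDb
ON PRIMARY
    (NAME = AppDb_data, FILENAME = 'D:\SQLData\AppDb_data.mdf', SIZE = 500MB),
FILEGROUP FG_GROWING
    (NAME = AppDb_growing, FILENAME = 'E:\SQLData\AppDb_growing.ndf', SIZE = 500MB)
LOG ON
    (NAME = AppDb_log, FILENAME = 'F:\SQLLogs\AppDb_log.ldf', SIZE = 200MB);
GO
-- Hypothetical fast-growing table placed on its own filegroup
CREATE TABLE dbo.Measurements (
    measurement_id int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    recorded_at    datetime NOT NULL,
    reading        float NOT NULL
) ON FG_GROWING;
Separating the log from the data files helps because log writes are sequential while data access is random; whether it matters for 350 users depends on the factors the replies mention.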
Monday, February 20, 2012
Perf Mon
SQL Server 2005 SP1.
The object General Statistics: User Connections has a value of 1400.
The object General Statistics: Logical Connections has a value of 1000.
Does anyone know why there is a difference in the values?
Thanks,
Yan|||Hi
Is this applicable?
http://support.microsoft.com/kb/922118
John
"Yan" <yaniv.etrogi@.gmail.com> wrote in message
news:eLbfLtibIHA.4196@.TK2MSFTNGP04.phx.gbl...
> Hi,
> SQL Server 2005 sp1
>
> The object General Statistics: User Connections has a value of 1400
> The object General Statistics:Logical Connections has avalue of 1000
>
> Does anyone know why there is a differenece in the values?
>
> Thanks,
> Yan
|||You may also want to check if this is occurring!
http://support.microsoft.com/kb/937745
John
"John Bell" <jbellnewsposts@.hotmail.com> wrote in message
news:eNN0sgybIHA.4652@.TK2MSFTNGP06.phx.gbl...
> Hi
> Is this applicable?
> http://support.microsoft.com/kb/922118
> John
>
> "Yan" <yaniv.etrogi@.gmail.com> wrote in message
> news:eLbfLtibIHA.4196@.TK2MSFTNGP04.phx.gbl...
>> Hi,
>> SQL Server 2005 sp1
>>
>> The object General Statistics: User Connections has a value of 1400
>> The object General Statistics:Logical Connections has avalue of 1000
>>
>> Does anyone know why there is a differenece in the values?
>>
>> Thanks,
>> Yan
>
Perf Mon
SQL Server 2005 sp1
The object General Statistics: User Connections has a value of 1400
The object General Statistics:Logical Connections has avalue of 1000
Does anyone know why there is a differenece in the values?
Thanks,
Yan
Hi
Is this applicable?
http://support.microsoft.com/kb/922118
John
"Yan" <yaniv.etrogi@.gmail.com> wrote in message
news:eLbfLtibIHA.4196@.TK2MSFTNGP04.phx.gbl...
> Hi,
> SQL Server 2005 sp1
>
> The object General Statistics: User Connections has a value of 1400
> The object General Statistics:Logical Connections has avalue of 1000
>
> Does anyone know why there is a differenece in the values?
>
> Thanks,
> Yan
>
|||You may also want to check if this is occuring!
http://support.microsoft.com/kb/937745
John
"John Bell" <jbellnewsposts@.hotmail.com> wrote in message
news:eNN0sgybIHA.4652@.TK2MSFTNGP06.phx.gbl...
> Hi
> Is this applicable?
> http://support.microsoft.com/kb/922118
> John
>
> "Yan" <yaniv.etrogi@.gmail.com> wrote in message
> news:eLbfLtibIHA.4196@.TK2MSFTNGP04.phx.gbl...
>