Sunday, March 25, 2012
Clean Caches command vs recompile
Before I do a performance check on a stored procedure, I always clear the
caches using the following commands:
DBCC DROPCLEANBUFFERS
Go
DBCC FREEPROCCACHE
GO
But sometimes I think it's annoying, as it basically purges all the caches
from the SQL Server.
So if I just test one procedure's performance, I can just add the WITH
RECOMPILE keyword at the end, right?
Is that the same thing, since this way the stored procedure will not use the
caches even if they exist?|||No, it's not the same thing. Both WITH RECOMPILE and DBCC FREEPROCCACHE
will force a recompile of the procedure, but DBCC DROPCLEANBUFFERS will
purge the data cache (which is the reason it's not advisable to run it on a
production server).
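For the narrower test, a hedged sketch (procedure and table names here are made up): WITH RECOMPILE forces a fresh plan for just this procedure, while the DBCC commands affect the whole server.

```sql
-- Recompile only this procedure on every execution,
-- instead of flushing the server-wide caches.
CREATE PROCEDURE dbo.usp_GetOrders   -- hypothetical name
    @CustomerId INT
WITH RECOMPILE
AS
    SELECT OrderId, OrderDate
    FROM dbo.Orders                  -- hypothetical table
    WHERE CustomerId = @CustomerId
GO

-- Alternatively, force recompilation for a single call only:
EXEC dbo.usp_GetOrders @CustomerId = 1 WITH RECOMPILE
```

Note that this still reads data pages from the buffer cache, which is exactly the difference described above: a fresh plan, but not cold data.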
dean
"Britney" <britneychen_2001@.yahoo.com> wrote in message
news:%23Ut4V3EFFHA.1084@.tk2msftngp13.phx.gbl...
>
> Before I do performance check for stored procedure, I always clean the
> caches using the following commands.
>
>
> DBCC DROPCLEANBUFFERS
> Go
> DBCC FREEPROCCACHE
> GO
>
> But sometimes I think it's annoying as it basically purges all the caches
> from the SQL server.
> So if I just test one procedure's performance, I can just add 'WITH
> recompile' keyword in the end, right?
> is that the same thing since this way stored procedure will not use the
> caches even caches exist.
>
>
>
>
Monday, March 19, 2012
Chosen datatype for Primary key field and performance questions?
Hi there,
I have been hired for a couple of weeks to investigate the performance of a SQL Server 2000 system.
One of the things that strikes me is that all the primary key (identity) fields use decimal(18,0) as their datatype.
A decimal with a precision of 18,0 takes 9 bytes per column, while an int takes only 4 bytes and a bigint 8 bytes.
Many tables aren't that big, so the values will fit in an int datatype.
1. Is it a good option to change the decimal columns to int columns?
2. Many of these columns are indexed by a clustered index. Can the decimal datatype be a performance issue?
3. Sometimes they have deadlocks due to page splits. Can this be reduced by changing the data types, since more data fits into a page?
Thanks in advance,
Greetz,
Patrick de Jong
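A minimal sketch of the change in question 1, with made-up table and column names; in practice the primary key constraint (and any foreign keys referencing it) must be dropped and recreated around the ALTER:

```sql
-- Hypothetical table: shrink the identity key from decimal(18,0) (9 bytes)
-- to int (4 bytes). Drop the PK first, alter the column, then recreate it.
ALTER TABLE dbo.Orders DROP CONSTRAINT PK_Orders
ALTER TABLE dbo.Orders ALTER COLUMN OrderId INT NOT NULL
ALTER TABLE dbo.Orders ADD CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderId)
```

Since the column is the clustered index key, this rewrites the whole table, so it is worth testing on a restored copy first.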
To me decimal(18,0) does not make sense at all... if you need to store larger numbers you could have gone for bigint. Of course the index size increases. To reduce page splits, changing the datatype alone may not be sufficient; you may need to revisit the fill factor on the index.
Madhu
|||Using int for an identity column is the most common solution, but just be aware of the roughly 2.1 billion upper limit for that data type. If 2.1 billion is not large enough, most people use bigint.
Changing your fill factor can help minimize page splits, at the cost of making your index larger. Your deadlocks are probably not caused by page splits. It's more likely that they are caused by missing or improper indexes. You might try running this query to see if you are seeing any blocking:
-- Detect blocking
SELECT blocked_query.session_id AS blocked_session_id,
       blocking_query.session_id AS blocking_session_id,
       sql_text.text AS blocked_text,
       sql_btext.text AS blocking_text,
       waits.wait_type AS blocking_resource
FROM sys.dm_exec_requests AS blocked_query
INNER JOIN sys.dm_exec_requests AS blocking_query
    ON blocked_query.blocking_session_id = blocking_query.session_id
CROSS APPLY sys.dm_exec_sql_text(blocking_query.sql_handle) AS sql_btext
CROSS APPLY sys.dm_exec_sql_text(blocked_query.sql_handle) AS sql_text
INNER JOIN sys.dm_os_waiting_tasks AS waits
    ON waits.session_id = blocking_query.session_id
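The fill factor suggestion above can be sketched with SQL Server 2000 syntax; the table and index names are made up:

```sql
-- Rebuild the clustered index leaving 20% free space per leaf page,
-- so subsequent inserts cause fewer page splits (SQL Server 2000 syntax).
DBCC DBREINDEX ('dbo.Orders', 'PK_Orders', 80)
```

A lower fill factor trades larger indexes (more pages to read) for fewer splits, so it is worth measuring both before and after.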
Choosing Performance measurements from a profiler workload
I'm creating a performance testing server. I'll describe my approach,
maybe the group gurus can review it and suggest improvements. Please
tell me if this is a sound approach.
I want to have global performance measurements. They should guide us
in improving the database's overall responsiveness. So, if we add a
complex clustered index that speeds up a query but slows a frequent
update, it should appear in the measurements.
Here are the steps I followed:
1) Back up the production database.
2) Record a representative SQL Profiler trace file of the production
   database using the TSQL_Replay template. It will be used as my base
   workload.
3) Restore the backup to a Microsoft Virtual Server instance
   (http://www.microsoft.com/windowsserversystem/virtualserver/).
4) Waste a lot of time configuring login permissions and making the
   database have the same id number as the original one so I can replay
   the trace file.
5) Turn on the "Enable Undo disks" option in the Virtual Hard Disk
   properties of the Virtual Server instance.
6) Replay the saved trace file, saving another trace file to gather
   performance data.
7) Load the new trace file into a table and run some queries to measure
   the database performance.
8) Discard all the modifications of the virtual server instance, tweak
   the database, and go back to step 6 to gather new data.
Is it a sound approach? Are there better ways to do it?
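Step 7 can be sketched with fn_trace_gettable (the file path here is an assumption):

```sql
-- Load the replay-output trace file into a table for analysis
-- (SQL Server 2000 syntax; DEFAULT reads all rollover files).
SELECT *
INTO trace
FROM ::fn_trace_gettable('C:\traces\replay_output.trc', DEFAULT)
```

After this, the aggregate queries below can be run directly against the `trace` table.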
Now here is my doubt: which measurements should I take from the saved
profile?
There are a lot of possible events to record in the Profiler trace,
and in each one the fields mean something different.
Here is my initial list of queries on the trace table:
This is probably my most important measure. It gives me the total
duration and CPU spent:
SELECT sum(Duration) as total_time, sum(CPU) as total_cpu,
sum(Duration - CPU) as total_wait FROM trace WHERE EventClass = 12 -- 12 is SQL:BatchCompleted
If instead of SQL:BatchCompleted, I use SQL:StmtCompleted I get a
smaller value for sum(duration). If I choose to sum the Duration
column of the SQLTransaction EventClass (number 50) with the commit
EventSubClass (1) the sum(duration) is bigger. Which one gives me the
better measurement?
Here is the committed transactions' total, average, max and standard
deviation duration:
SELECT sum(Duration) as trans_total_time , AVG(Duration) as
average_time, max(Duration) as max_transaction_time, stdev(Duration)
as standard_deviation
FROM trace WHERE EventClass = 50 and EventSubClass =1
This is the time wasted rolling back transactions:
SELECT sum(Duration) FROM trace
WHERE EventClass = 50 and EventSubClass = 2 -- duration of rollback
transactions
Time spent waiting for locks:
SELECT sum(Duration) FROM trace
WHERE EventClass = 24 -- 24 is Lock:Acquired event
Number of deadlocks:
SELECT MAX(IntegerData) FROM trace WHERE EventClass=59 --
Lock:Deadlock Chain
How do I measure the time wasted when a deadlock happens and a query
is canceled? Is it the duration of the Lock:Cancel event class?
Do the gurus here have any idea of more good data to collect?
BTW, I'm using SQL Server 2000
kind regards,
Paulo Eduardo Neves
http://NaoTemMosquito.blogspot.com|||Hi Paulo
You will need to start your replay trace before you take the backup, to make
sure it contains all changes between the backup finishing and the trace
starting. You can remove the backup commands afterwards, as well as other
steps you don't want. You will need to match database ids with the new
system (or change the trace). If you create the corresponding logins on the
new server you should be able to match the SIDs using sp_change_users_login,
so mapping users should not be a big deal!
For analysis you can use Bill Graziano's cleartrace at
www.cleardata.biz/cleartrace/download
If you are tuning then you should look at statements: frequency as well as
duration, and I/O may also be important.
John
"Paulo Eduardo Neves" wrote:
> I'm creating a performance testing server. I'll describe my approach,
> maybe the group gurus can review it and suggest improvements. Please
> tell me if this is a sound approach.
> I want to have global performance measurements. They should guide us
> in improving the database overall responsiveness. So, if we add a
> complex clustered index that speed a query, but slows a frequent
> update, it should appear in the measurements.
> Here are the steps I followed:
> 1) Backup of the production database.
> 2) Recording of a representative SQL Profiler trace file of the
> production database
> using the TSQL_Replay template. It will be used as my base
> workload.
> 3) Restoring the backup to a Microsoft Virtual Server instance
> (http://www.microsoft.com/windowsserversystem/virtualserver/ )
> 4) Waste a lot of time configuring login permissions and making the
> database have the same
> Id number of the original one so I can replay the trace file.
> 5) Turning on the "Enable Undo disks" option in the Virtual Hard Disk
> Properties of the Virtual
> server instance.
> 6) Replay the saved trace file, saving another trace file to gather
> performance data.
> 7) Load the new trace file to a table and run some queries to measure
> the database
> performance.
> 8) Discard all the modifications of the virtual server instances,
> Tweak the database and go to
> step 6 to gather new data.
> Is it a sound approach? Are there better ways to do it?
> Now is my doubt: which measurements should I take from the saved
> profile?
> There are a lot of possible events to record in the Profiler trace,
> and in each one the fields mean something different.
> Here is my initial list of queries on the trace table:
> This is probably my most important measure. It gives me the total
> duration and CPU spent:
> SELECT sum(Duration) as total_time , sum(CPU) total_cpu,
> sum(Duration - CPU) as total_wait FROM trace WHERE EventClass = 12 -- 12 is SQL:BatchCompleted
> If instead of SQL:BatchCompleted, I use SQL:StmtCompleted I get a
> smaller value for sum(duration). If I choose to sum the Duration
> column of the SQLTransaction EventClass (number 50) with the commit
> EventSubClass (1) the sum(duration) is bigger. Which one gives me the
> better measurement?
> Here is the commited transactions total, average, max and standard
> deviation duration:
> SELECT sum(Duration) as trans_total_time , AVG(Duration) as
> average_time, max(Duration) as max_transaction_time, stdev(Duration)
> as standard_deviation
> FROM trace WHERE EventClass = 50 and EventSubClass =1
> This is the time wasted rolling back transactions:
> SELECT sum(Duration) FROM trace
> WHERE EventClass = 50 and EventSubClass =2 -- duration of roolback
> transactions
> Time spent waiting for locks:
> SELECT sum(Duration) FROM trace
> WHERE EventClass = 24 -- 24 is Lock:Acquired event
> Number of deadlocks:
> SELECT MAX(IntegerData) FROM trace WHERE EventClass=59 --
> Lock:Deadlock Chain
> How do I measure the time wasted when a deadlock happens and a query
> is canceled? Is it the duration of the Lock:Cancel event class?
> Do the gurus here have any idea of more good data to collect?
> BTW, I'm using SQL Server 2000
> kind regards,
> Paulo Eduardo Neves
> http://NaoTemMosquito.blogspot.com
>|||On Wed, 03 Oct 2007 12:22:03 -0700, Paulo Eduardo Neves
<pauloneves@.gmail.com> wrote:
>Do the gurus here have any idea of more good data to collect?
Looks like you've got the idea, but the basic structure I always
follow does a group by on the command text. Now, this can be tricky,
I think SQL 2005 has added some features to help there ... but you're
on 2K, so never mind that!
Anyway, my preferred figure of merit is reads, not duration. They
tend to correspond heavily, but reads is logical reads and is somewhat
independent of load, and tells you what it is costing to run that
command.
So,

SELECT TOP 20
    SUBSTRING(textdata, 1, 30) AS cmd,
    SUM(reads) AS sumreads,
    AVG(reads) AS avgreads,
    COUNT(*) AS cnt
FROM dbo.yourtracetable
WHERE eventclass IN (10, 12)  -- 10 = RPC:Completed, 12 = SQL:BatchCompleted
GROUP BY SUBSTRING(textdata, 1, 30)
ORDER BY 2 DESC  -- biggest readers first
Now, if you're tuning an entire system, you need to create a total
picture, you need to come up with a way to baseline your performance
and measure the quantitative components and improvements you make. I
guess clocktime through a trace is a good overall measure.
... but this also assumes a lot of stuff about your IO system not
being saturated, and such. When the hardware starts to run out, these
softer statistics become hard to interpret. And replaying it on a VM,
boy, I dunno about that. But in any case, probably the most robust
and significant figure is logical reads, in my ever so humble (or not
so humble) opinion.
Josh|||On Oct 4, 6:17 am, John Bell <jbellnewspo...@.hotmail.com> wrote:
> You will need to start your replay trace before yuo take the backup to make
> sure it contains all changes between the backup finishing and the trace
> starting. You can remove the backup commands after as well as other steps you
> don't want.
I did a backup just a little before. I don't mind if some queries fail
later during the replay. They fail and the replay continues. They
would be insignificant when I calculate the total time.
> You will need to match database ids with the new system (or
> change the trace). If you create the corresponding logins on the new server
> you should be able to match the SIDs using sp_change_users_login, so mapping
> users should not be a great deal!
And the default database of the replay users should be the one you are
testing.
> For analysis you can use Bill Graziano's cleartrace at www.cleardata.biz/cleartrace/download
I've already seen ClearTrace. It looks very useful. The problem is
that I currently only have access to a SQL Server 2000 installation,
and ClearTrace doesn't run against the free version of SQL Server 2005.
It also looks like ClearTrace doesn't give me global measurements; it just
aggregates the statements.
Thanks for your tips.
regards,
Paulo Eduardo Neves
http://NaoTemMosquito.blogspot.com|||On Oct 5, 2:33 am, JXStern <JXSternChange...@.gte.net> wrote:
> On Wed, 03 Oct 2007 12:22:03 -0700, Paulo Eduardo Neves
> <paulone...@.gmail.com> wrote:
> >Do the gurus here have any idea of more good data to collect?
>
...
> Anyway, my preferred figure of merit is reads, not duration. They
> tend to correspond heavily, but reads is logical reads and is somewhat
> independent of load, and tells you what it is costing to run that
> command.
Yes. But I believe that the problems on my hands are caused by
excessive locking in the database. Measuring reads and writes won't
show the locking problems.
> So,
> select
> top 20
> substring(textdata,1,30) as cmd,
> sum(reads) as sumreads,
> avg(reads) as avgreads,
> count(*) as cnt
> from dbo.yourtractable
> where eventclass in (rpc done, batch done)
> group by substring(textdata,1,30)
> order by 2
I'll try to use the cleartrace tool to get the statement aggregate
data.
> Now, if you're tuning an entire system, you need to create a total
> picture, you need to come up with a way to baseline your performance
> and measure the quantitative components and improvements you make. I
> guess clocktime through a trace is a good overall measure.
Will the clocktime value be different from the total transactions
duration?
> ... but this also assumes a lot of stuff about your IO system not
> being saturated, and such. When the hardware starts to run out, these
> softer statistics become hard to interpret. And replaying it on a VM,
> boy, I dunno about that. But in any case, probably the most robust
> and significant figure is logical reads, in my ever so humble (or not
> so humble) opinion.
First I'm working on a proof-of-concept server. After showing its
utility, I'll lobby for a dedicated performance testing machine. If I
guarantee that there's nothing else running on the machine, and put
the replay trace file and the performance recording trace on a different
hard disk than the one with the database file, I believe that I will
have consistent performance measurements.
I've had a nice idea about the setup: filtering out the trace events where
Duration = 0 will record just the interesting events.
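That Duration filter can be set on a server-side trace with sp_trace_setfilter; a sketch, assuming an existing trace id of 1 (column 13 is Duration, logical operator 0 is AND, comparison operator 2 is "greater than"):

```sql
-- Keep only events whose Duration is greater than 0.
DECLARE @dur bigint
SET @dur = 0
EXEC sp_trace_setfilter @traceid = 1, @columnid = 13,
     @logical_operator = 0, @comparison_operator = 2, @value = @dur
```

Filtering at capture time keeps the trace file small, at the cost of losing counts of the zero-duration events.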
Thanks for the good ideas!
Paulo
http://NaoTemMosquito.blogspot.com|||Hi Paulo
Replaying traces can potentially highlight problems and is a very useful
facility, but it is not guaranteed to always find your problems. Check out
Books Online to see the restrictions when replaying a trace. You may want to
consider using a test tool such as Microsoft Team Suite, LoadRunner or
Rational Performance Tester, etc.
If you have locking problems, look at using some of the locking events to
check what your system is doing, rather than relying on time/reads/writes etc.
John
"Paulo Eduardo Neves" wrote:
> On Oct 5, 2:33 am, JXStern <JXSternChange...@.gte.net> wrote:
> > On Wed, 03 Oct 2007 12:22:03 -0700, Paulo Eduardo Neves
> >
> > <paulone...@.gmail.com> wrote:
> > >Do the gurus here have any idea of more good data to collect?
> >
> ...
> > Anyway, my preferred figure of merit is reads, not duration. They
> > tend to correspond heavily, but reads is logical reads and is somewhat
> > independent of load, and tells you what it is costing to run that
> > command.
> Yes. But I believe that the problem in my hands are caused due to
> excessive locking in the database. Measuring Reads and Writes won't
> display the locking problems.
> > So,
> >
> > select
> > top 20
> > substring(textdata,1,30) as cmd,
> > sum(reads) as sumreads,
> > avg(reads) as avgreads,
> > count(*) as cnt
> > from dbo.yourtractable
> > where eventclass in (rpc done, batch done)
> > group by substring(textdata,1,30)
> > order by 2
> I'll try to use the cleartrace tool to get the statement aggregate
> data.
> >
> > Now, if you're tuning an entire system, you need to create a total
> > picture, you need to come up with a way to baseline your performance
> > and measure the quantitative components and improvements you make. I
> > guess clocktime through a trace is a good overall measure.
> Will the clocktime value be different from the total transactions
> duration?
> >
> > ... but this also assumes a lot of stuff about your IO system not
> > being saturated, and such. When the hardware starts to run out, these
> > softer statistics become hard to interpret. And replaying it on a VM,
> > boy, I dunno about that. But in any case, probably the most robust
> > and significant figure is logical reads, in my ever so humble (or not
> > so humble) opinion.
> First I'm working in a proof of concept server. After showing its
> utility, I'll lobby for a dedicated performance testing machine. If I
> guarantee that there's nothing else running in the machine, and put
> the replay trace file and the performance recording trace in other
> harddisk than the one with the database file, I believe that I will
> have consistent performance measurements.
> I've had a nice idea about the setup. Filtering the trace events where
> Duration != 0, will record just the interesting log events.
> Thanks for the good ideas!
> Paulo
> http://NaoTemMosquito.blogspot.com
>
>
Choosing DB Edition (Std vs Ent)
I need to decide between Standard and Enterprise Edition (Cost is a
criteria - but it's secondary to performance - <!--and I am not paying for
it myself-->)
The server spec under consideration: Dual Xeon, 1GB RAM, 36GB - RAID 1
(Dell PowerEdge 1850).
Application: Windows 2003 Std Server, ASP.NET, MS SQL Server 2000 based
data driven web application.
Approximately 25 simultaneous clients. Peak activity would probably be 50
transactions/activities per second (2 per second per client). I expect
the database size to grow up to 4GB in 1 year.
The application would use only basic OLAP features (if at all)...so
feature set wise I believe that standard edition is good enough.
What I am concerned about is when MS documentation says that Standard
Edition is for "organization that do not require the advanced scalability,
availability, performance, or analysis features of the SQL Server 2000
Enterprise Edition"
Is there a difference in performance between Std and Ent editions? In
terms of number of transactions per second that can be serviced?
What other criteria should I be aware of before deciding to go one way or
the other?
Any ideas?|||"Jonas Hei" <maps_263@.hotmail.com> wrote in message
news:opsehfbjyzr0m89z@.fx1025...
>I need to decided between Standard and Enterprise Edition (Cost is a
>criteria - but its secondary to performance - <!--and I am not paying for
>it myself-->)
> The server spec under consideration: Dual Xeon, 1GB RAM, 36GB - RAID 1
> (Dell PowerEdge 1850).
> Application: Windows 2003 Std Server, ASP.NET, MS SQL Server 2000 based
> data driven web application.
> Approximately 25 simultaneous clients. Peak activity would probably be 50
> transactions/activities per second (2 per second per client). I expect
> the database size to grow up to 4GB in 1 year.
> The application would use only basic OLAP features (if at all)...so
> feature set wise I believe that standard edition is good enough.
> What I am concerned about is when MS documentation says that Standard
> Edition is for "organization that do not require the advanced scalability,
> availability, performance, or analysis features of the SQL Server 2000
> Enterprise Edition"
> Is there a difference in performance between Std and Ent editions? In
> terms of number of transactions per second that can be serviced?
> What other criteria should I be aware of before deciding to go one way or
> the other?
> Any ideas?
I'd guess that they're referring to features in the section you quoted, even
though it makes it sound like the Enterprise Edition is inherently faster
than the Standard Edition. That's simply not the case. For example,
clustering is a high-availability option that is only available in the
Enterprise Edition. You can get more information about features by Edition,
and about choosing a particular Edition, here:
http://www.microsoft.com/sql/evalua...es/choosing.asp.
There is nothing in either the Standard or Enterprise Edition engine that
I'm aware of that throttles performance based on the Edition you're using.
The only Edition that has a performance throttle is MSDE.
If the Standard Edition contains the features your application needs, my
guess is that it will run it just fine. Of course, without testing that's
impossible to know for sure.
--
Sincerely,
Stephen Dybing
This posting is provided "AS IS" with no warranties, and confers no rights.|||Hi
The engine is the same for both editions; the Enterprise edition has
additional features, such as failover clustering, built-in log shipping and
automatic use of indexed views.
See
http://msdn.microsoft.com/library/d..._ar_ts_1cdv.asp
If you don't want to use these features or if you are happy to "manually"
implement the features or can provide your own solutions, then standard
edition should be ok. All editions should be supported on your hardware.
John
"Jonas Hei" <maps_263@.hotmail.com> wrote in message
news:opsehfbjyzr0m89z@.fx1025...
> I need to decided between Standard and Enterprise Edition (Cost is a
> criteria - but its secondary to performance - <!--and I am not paying for
> it myself-->)
> The server spec under consideration: Dual Xeon, 1GB RAM, 36GB - RAID 1
> (Dell PowerEdge 1850).
> Application: Windows 2003 Std Server, ASP.NET, MS SQL Server 2000 based
> data driven web application.
> Approximately 25 simultaneous clients. Peak activity would probably be 50
> transactions/activities per second (2 per second per client). I expect
> the database size to grow up to 4GB in 1 year.
> The application would use only basic OLAP features (if at all)...so
> feature set wise I believe that standard edition is good enough.
> What I am concerned about is when MS documentation says that Standard
> Edition is for "organization that do not require the advanced scalability,
> availability, performance, or analysis features of the SQL Server 2000
> Enterprise Edition"
> Is there a difference in performance between Std and Ent editions? In
> terms of number of transactions per second that can be serviced?
> What other criteria should I be aware of before deciding to go one way or
> the other?
> Any ideas?|||Stephen Dybing [MSFT] (stephd@.online.microsoft.com) writes:
> There is nothing in either the Standard or Enterprise Edition engine
> that I'm aware of that throttles performance based on the Edition that
> you're using. The only Edition that has a performance throttle based on
> the Edition is MSDE.
There are however features in Enterprise Edition that may help to
improve performance. One such feature is indexed views. You can use
indexed views in Std Edition too, but there are situations where the
optimizer will not consider the view.
Then again, if you are not using indexed views, this will not make a
difference.
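A hedged sketch of that caveat (view, table and column names are made up): in Standard Edition the optimizer does not match an indexed view automatically, so a query must name the view and use the NOEXPAND hint.

```sql
-- Hypothetical indexed view over an Orders table (Amount assumed NOT NULL).
CREATE VIEW dbo.vOrderTotals WITH SCHEMABINDING AS
    SELECT CustomerId, COUNT_BIG(*) AS cnt, SUM(Amount) AS total
    FROM dbo.Orders
    GROUP BY CustomerId
GO
CREATE UNIQUE CLUSTERED INDEX IX_vOrderTotals ON dbo.vOrderTotals (CustomerId)
GO
-- Standard Edition: reference the view directly and force its index.
SELECT CustomerId, total
FROM dbo.vOrderTotals WITH (NOEXPAND)
```

In Enterprise Edition, a query against dbo.Orders itself could be rewritten by the optimizer to use this index; in Standard Edition only the NOEXPAND form benefits.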
--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||True, but in my defense, it's listed on the Features page I pointed
everybody at. :-)
--
Sincerely,
Stephen Dybing
This posting is provided "AS IS" with no warranties, and confers no rights.
"Erland Sommarskog" <esquel@.sommarskog.se> wrote in message
news:Xns956828A66F7CYazorman@.127.0.0.1...
> Stephen Dybing [MSFT] (stephd@.online.microsoft.com) writes:
>> There is nothing in either the Standard or Enterprise Edition engine
>> that I'm aware of that throttles performance based on the Edition that
>> you're using. The only Edition that has a performance throttle based on
>> the Edition is MSDE.
> There are however features in Enterprise Edition that may help to
> improve performance. One such features in indexed views. You can use
> indexed views in Std Edition too, but there situations where the optimizer
> will not consider the view.
> Then again, if you are not using indexed views, this will not make a
> difference.
>
> --
> Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
> Books Online for SQL Server SP3 at
> http://www.microsoft.com/sql/techin.../2000/books.asp
Wednesday, March 7, 2012
CHECKSUM_AGG and BINARY_CHECKSUM performance problems
Gentlemen,
I am using the following query to get a list of grouped checksum data.
SELECT CAST(Field0_datetime AS INT),
    CHECKSUM_AGG(BINARY_CHECKSUM(Field1_bigint, Field2_datetime,
        Field3_datetime, Field4_bigint, Field5_bigint, CAST(Field6_float AS
        Decimal(38,6)), Field7_datetime))
FROM Table1
WHERE Field0_datetime BETWEEN '2003-01-01' AND '2003-01-20'
GROUP BY CAST(Field0_datetime AS INT)
Please note the filter used: from January 1 to January 20.
That query takes about 6 minutes to return the data. The result is 18
records.
However, when I execute the same query filtering BETWEEN '2003-01-01' and
'2003-01-10', this time it takes only 1 second to return data.
When I execute the query filtering BETWEEN '2003-01-10' and '2003-01-20' the
query takes another 1 second to return data.
So why 6 minutes to process them together?
The table has an index on Field0_datetime.
It contains about 1.5 million records total, using around 1.7GB of
disk space, indexes included.
From 2003-01-01 to 2003-01-20, there are 11401 records selected. That
doesn't look like much.
The situation is repeatable; I mean, if I execute the queries again and
again, they take about the same amount of time to execute, so I don't
think this problem is related to caching or something like that.
I would appreciate any advice about what might be wrong with my situation.
Thanks a lot and kind regards,
Orly Junior
IT Professional|||By using the profiler, I found that while executing the first query (20-day
span), the system doesn't use the index. How is that possible?
A simpler version of the query that causes the same problem is:
select checksum_agg(binary_checksum([dc])) from [table1] where [dc] between
'2003-01-01' and '2003-01-20'
The profiler reports it will be using a clustered index scan, which is
unacceptable since the table has a lot of records.
Why the hell is it not using the [dc] index? If I tighten the criteria to
a 10-day span it uses the index correctly.
Do you have any idea why that is happening?
Thanks in advance and best regards,
Orly Junior
IT Professional
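One way to test whether the plan choice itself is the problem is to force the nonclustered index with a table hint (the index name here is an assumption, since the post never names it):

```sql
-- Force the hypothetical nonclustered index on [dc] and compare timings
-- against the unhinted query.
SELECT CHECKSUM_AGG(BINARY_CHECKSUM([dc]))
FROM [table1] WITH (INDEX (ix_table1_dc))   -- hypothetical index name
WHERE [dc] BETWEEN '2003-01-01' AND '2003-01-20'
```

If the hinted query runs in about a second, the optimizer's cost estimate, not the index, is at fault.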
"Orly Junior" <nomail@.nomail.com> wrote in message
news:42b0c9e6$0$32014$a729d347@.news.telepac.pt...
> Gentlemen,
> I am using the following query to get a list of grouped checksum data.
> SELECT CAST(Field0_datetime AS INT),
> CHECKSUM_AGG(BINARY_CHECKSUM(Field1_bigint, Field2_datetime,
> Field3_datetime, Field4_bigint, Field5_bigint, CAST(Field6_float
> Decimal(38,6)), Field7_datetime))
> FROM Table1
> WHERE Field0_datetime BETWEEN '2003-01-01' AND '2003-01-20'
> GROUP BY CAST(Field0_datetime AS INT)
> Please notice the used filter: from January 1 to January 20.
> That query takes about 6 minutes do return the data. The result is 18
> records.
> However, when I execute the same query filtering BETWEEN '2003-01-01' and
> '2003-01-10', this time it takes only 1 second to return data.
> When I execute the query filtering BETWEEN '2003-01-10' and '2003-01-20'
> the query takes another 1 second to return data.
> So why 6 minutes to process them together??
> The table have an index by Field0_datetime.
> It contains about 1.5 millions records total, using around 1.7Gb of
> diskspace, indexes included.
> From 2003-01-01 and 2003-01-20, there are 11401 records selected. Don't
> look like that much.
> The situation is repeatable, I mean, if I execute the queries back and
> again, they takes the about the same ammount of time to execute, so I
> don't think this problem is related to cache or something like that.
> I would appreciate any advice about what might be wrong with my situation.
> Thanks a lot and kind regards,
> Orly Junior
> IT Professional|||Orly Junior (nomail@.nomail.com) writes:
> By using the profiler, I found that while executing the first query (20
> days span), the system don't use the index. How it possible?
When you have a non-clustered index that can be used to compute a query,
SQL Server cannot always use this index blindly. If the selection is
small, the index is fine. If the selection is large, the index spells
disaster. This is because every hit in the index pages requires an access to
the data pages. This can end up with more page reads than scanning the
table once.
Now, in your case, there are 11401 rows that match the WHERE clause.
The table is 1.7 GB, which is about 207,000 pages. Even if some of those
1.7 GB are indexes, the table scan is obviously more expensive.
But SQL Server does not build query plans from full knowledge, but from
statistics it has saved about the table. If these statistics are inaccurate
for some reason, the estimate may be incorrect. By default, SQL Server
only samples data for its statistics.
You can try "UPDATE STATISTICS tbl WITH FULLSCAN" and see if this
has any effect. SQL Server will now look at all rows. However, it
saves the data in a histogram, so you may still lose accuracy. DBCC
SHOW_STATISTICS may give some information.
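The two commands above, spelled out against the poster's table (the statistics/index name in the second command is an assumption):

```sql
-- Recompute statistics from every row instead of a sample.
UPDATE STATISTICS [table1] WITH FULLSCAN

-- Inspect the histogram the optimizer actually uses; the second
-- argument is the index or statistics name (hypothetical here).
DBCC SHOW_STATISTICS ('table1', 'ix_table1_dc')
```

Comparing the histogram steps around 2003-01-01..2003-01-20 against the real row counts should show whether the estimate is off.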
--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp
Saturday, February 25, 2012
Checkpoint and performance
While profiling our stored procs that write data to the database, we notice
that while the database is checkpointing, the performance of these stored
procs gets worse. What usually takes 100 ms takes 400 to 700 ms. Once
the checkpoint is done, they are back to their normal runtime.
Is this expected behavior?
BTW this is a quad P4 Xeon 2.8GHz box with 12 gig of RAM running Windows
Server 2003/SQL Server 2000 Enterprise SP3a.|||With direct attached SCSI I saw the same behavior. Basically, your
checkpoint is saturating the IO bandwidth of your controller card. I saw
this with high-end RAID controllers with 128MB RAM when the checkpoint was
large enough to flood the on-board cache. This went away with a SAN since
FC is full duplex and the SAN has 3GB of write cache, easily enough to hold
a checkpoint.
--
Geoff N. Hiten
Microsoft SQL Server MVP
Senior Database Administrator
Careerbuilder.com
"Kevin Jackson" <kjackson@.powerwayinc.com> wrote in message
news:ONLYmSpuDHA.2304@.TK2MSFTNGP12.phx.gbl...
> While profiling our stored procs that write data to the database, we
notice
> that while the database is checkpointing, the performance of these stored
> procs gets worse. What usually will takes 100 ms takes 400 to 700 ms.
Once
> the checkpoint is done, they are back to their normal runtime
> Is this expected behavior?
> BTW this is a quad P4 xeon 2.8ghz box with 12 gig of ram running Windows
> Server 2003/SQL Server 2000 Enterprise SP3a.
>|||I forgot to add we are using attached to an EMC storage system with 1
terabyte of storage... The controller card is fiber to the EMC system and
the sustained throughput during a checkpoint is 8 MBps, and that is
nothing compared to what we can actually push to that system. During
database restores we can get over 50 MBps!!! So while it may be an I/O
issue, we are not even reaching the saturation point of our attached I/O
system.
"Geoff N. Hiten" <SRDBA@.Careerbuilder.com> wrote in message
news:OCuhqipuDHA.3220@.tk2msftngp13.phx.gbl...
> With direct attached SCSI I saw the same behavior. Basically, your
> checkpoint is saturating the IO bandwidth of your controller card. I saw
> this with high-end RAID controllers with 128MB RAM when the checkpoint was
> large enough to flood the on-board cache. This went away with a SAN since
> FC is full duplex and the SAN has 3GB of write cache, easily enough to
hold
> a checkpoint.
> --
> Geoff N. Hiten
> Microsoft SQL Server MVP
> Senior Database Administrator
> Careerbuilder.com
>
>
> "Kevin Jackson" <kjackson@.powerwayinc.com> wrote in message
> news:ONLYmSpuDHA.2304@.TK2MSFTNGP12.phx.gbl...
> > While profiling our stored procs that write data to the database, we
> notice
> > that while the database is checkpointing, the performance of these
stored
> > procs gets worse. What usually will takes 100 ms takes 400 to 700 ms.
> Once
> > the checkpoint is done, they are back to their normal runtime
> >
> > Is this expected behavior?
> >
> > BTW this is a quad P4 xeon 2.8ghz box with 12 gig of ram running Windows
> > Server 2003/SQL Server 2000 Enterprise SP3a.
> >
> >
>|||I am also using an EMC SAN (CX-600) with 4 fiber cards from each cluster
host to the SAN. The real key is write cache. Mine is set to max at 3GB.
If you overfill your write cache, you go to direct write mode and lose a lot
of the benefits of your high-end IO system.
There may be CPU issues as well. Have you monitored processor time on a
per-CPU basis? Also, do you have Hyperthreading on or off?
Geoff N. Hiten
Microsoft SQL Server MVP
Senior Database Administrator
Careerbuilder.com
"Kevin Jackson" <softwiz@.covad.net> wrote in message
news:u6qIPUquDHA.2308@.TK2MSFTNGP11.phx.gbl...
> I forgot to add we are using attached to an EMC storage system with 1
> terabyte of storage... The controller card is fiber to the EMC system and
> the sustained throughput during a checkpoint is 8 MBps and the that is
> nothing compared to what we can actually push to that system. During
> database restores we can get over 50 MBps!!! So while it may be an I/O
> issue, we are not even reaching the saturation point of our attached I/O
> system.
> "Geoff N. Hiten" <SRDBA@.Careerbuilder.com> wrote in message
> news:OCuhqipuDHA.3220@.tk2msftngp13.phx.gbl...
> > With direct attached SCSI I saw the same behavior. Basically, your
> > checkpoint is saturating the IO bandwidth of your controller card. I
saw
> > this with high-end RAID controllers with 128MB RAM when the checkpoint
was
> > large enough to flood the on-board cache. This went away with a SAN
since
> > FC is full duplex and the SAN has 3GB of write cache, easily enough to
> hold
> > a checkpoint.
> >
> > --
> > Geoff N. Hiten
> > Microsoft SQL Server MVP
> > Senior Database Administrator
> > Careerbuilder.com
> >
> >
> >
> >
> > "Kevin Jackson" <kjackson@.powerwayinc.com> wrote in message
> > news:ONLYmSpuDHA.2304@.TK2MSFTNGP12.phx.gbl...
> > > While profiling our stored procs that write data to the database, we
> > notice
> > > that while the database is checkpointing, the performance of these
> stored
> > > procs gets worse. What usually will takes 100 ms takes 400 to 700 ms.
> > Once
> > > the checkpoint is done, they are back to their normal runtime
> > >
> > > Is this expected behavior?
> > >
> > > BTW this is a quad P4 xeon 2.8ghz box with 12 gig of ram running
Windows
> > > Server 2003/SQL Server 2000 Enterprise SP3a.
> > >
> > >
> >
> >
>|||I have seen minimally configured SAN systems saturate at around 50MB/sec.
are you seeing high disk queue lengths? if so, then how much bandwidth do
you have to the storage subsystem? How many spindles, what raid level, how
much write cache?
--
Kevin Connell, MCDBA
----
The views expressed here are my own
and not of my employer.
----
"Kevin Jackson" <softwiz@.covad.net> wrote in message
news:u6qIPUquDHA.2308@.TK2MSFTNGP11.phx.gbl...
> I forgot to add we are using attached to an EMC storage system with 1
> terabyte of storage... The controller card is fiber to the EMC system and
> the sustained throughput during a checkpoint is 8 MBps and the that is
> nothing compared to what we can actually push to that system. During
> database restores we can get over 50 MBps!!! So while it may be an I/O
> issue, we are not even reaching the saturation point of our attached I/O
> system.
> "Geoff N. Hiten" <SRDBA@.Careerbuilder.com> wrote in message
> news:OCuhqipuDHA.3220@.tk2msftngp13.phx.gbl...
> > With direct attached SCSI I saw the same behavior. Basically, your
> > checkpoint is saturating the IO bandwidth of your controller card. I
saw
> > this with high-end RAID controllers with 128MB RAM when the checkpoint
was
> > large enough to flood the on-board cache. This went away with a SAN
since
> > FC is full duplex and the SAN has 3GB of write cache, easily enough to
> hold
> > a checkpoint.
> >
> > --
> > Geoff N. Hiten
> > Microsoft SQL Server MVP
> > Senior Database Administrator
> > Careerbuilder.com
> >
> >
> >
> >
> > "Kevin Jackson" <kjackson@.powerwayinc.com> wrote in message
> > news:ONLYmSpuDHA.2304@.TK2MSFTNGP12.phx.gbl...
> > > While profiling our stored procs that write data to the database, we
> > notice
> > > that while the database is checkpointing, the performance of these
> stored
> > > procs gets worse. What usually will takes 100 ms takes 400 to 700 ms.
> > Once
> > > the checkpoint is done, they are back to their normal runtime
> > >
> > > Is this expected behavior?
> > >
> > > BTW this is a quad P4 xeon 2.8ghz box with 12 gig of ram running
Windows
> > > Server 2003/SQL Server 2000 Enterprise SP3a.
> > >
> > >
> >
> >
>|||Very short disk queue lengths.
4 ea FC2 HBAs per host computer. Theoretical max 800MB/sec. Box maxes out
at 1100 MB/sec manufacturer's spec. Real limit seems to be around
200MB/sec.
Most RAID sets are 16 spindles RAID 1+0 Some are fewer spindles but all are
1+0.
Total system has 3GB write cache.
Front end is 4-node 3 instance cluster (Windows Server 2003) 8x2.8GHz procs
and 32GB RAM /box. (Unisys ES-7000)
Obviously NOT a minimally configured system. I was able to get 50MB/sec
from direct attached SCSI. Again, the problem I ran into was cache
saturation and half-duplex data pathing.
Then again, there is the flip-side problem where the IO system can be too
fast.
FIX: The Checkpoint Process Can Delay SQL Server Database Activity and Does
Not Yield Scheduler Correctly Causing Error: 17883 to Occur
http://support.microsoft.com/default.aspx?scid=kb;en-us;815056&Product=sql2k
Geoff N. Hiten
Microsoft SQL Server MVP
Senior Database Administrator
Careerbuilder.com
"Kevin" <ReplyTo@.Newsgroups.only> wrote in message
news:u2X1wUsuDHA.1512@.TK2MSFTNGP10.phx.gbl...
> I have seen minimally configured SAN systems saturate at around 50MB/sec.
> are you seeing high disk queue lengths? if so, then how much bandwidth do
> you have to the storage subsystem? How many spindles, what raid level,
how
> much write cache?
> --
> Kevin Connell, MCDBA
> ----
> The views expressed here are my own
> and not of my employer.
> ----
> "Kevin Jackson" <softwiz@.covad.net> wrote in message
> news:u6qIPUquDHA.2308@.TK2MSFTNGP11.phx.gbl...
> > I forgot to add we are using attached to an EMC storage system with 1
> > terabyte of storage... The controller card is fiber to the EMC system
and
> > the sustained throughput during a checkpoint is 8 MBps and the that is
> > nothing compared to what we can actually push to that system. During
> > database restores we can get over 50 MBps!!! So while it may be an I/O
> > issue, we are not even reaching the saturation point of our attached I/O
> > system.
> >
> > "Geoff N. Hiten" <SRDBA@.Careerbuilder.com> wrote in message
> > news:OCuhqipuDHA.3220@.tk2msftngp13.phx.gbl...
> > > With direct attached SCSI I saw the same behavior. Basically, your
> > > checkpoint is saturating the IO bandwidth of your controller card. I
> saw
> > > this with high-end RAID controllers with 128MB RAM when the checkpoint
> was
> > > large enough to flood the on-board cache. This went away with a SAN
> since
> > > FC is full duplex and the SAN has 3GB of write cache, easily enough to
> > hold
> > > a checkpoint.
> > >
> > > --
> > > Geoff N. Hiten
> > > Microsoft SQL Server MVP
> > > Senior Database Administrator
> > > Careerbuilder.com
> > >
> > >
> > >
> > >
> > > "Kevin Jackson" <kjackson@.powerwayinc.com> wrote in message
> > > news:ONLYmSpuDHA.2304@.TK2MSFTNGP12.phx.gbl...
> > > > While profiling our stored procs that write data to the database, we
> > > notice
> > > > that while the database is checkpointing, the performance of these
> > stored
> > > > procs gets worse. What usually will takes 100 ms takes 400 to 700
ms.
> > > Once
> > > > the checkpoint is done, they are back to their normal runtime
> > > >
> > > > Is this expected behavior?
> > > >
> > > > BTW this is a quad P4 xeon 2.8ghz box with 12 gig of ram running
> Windows
> > > > Server 2003/SQL Server 2000 Enterprise SP3a.
> > > >
> > > >
> > >
> > >
> >
> >
>|||16 spindles is not very much - 200MB/sec is actually really quite good for
16 15KRPM spindles.
--
Kevin Connell, MCDBA
----
The views expressed here are my own
and not of my employer.
----
"Geoff N. Hiten" <SRDBA@.Careerbuilder.com> wrote in message
news:Os3suQ1uDHA.2340@.TK2MSFTNGP12.phx.gbl...
> Very short disk queue lengths.
> 4 ea FC2 HBAs per host computer. Theoretical max 800MB/sec. Box maxes
out
> at 1100 MB/sec manufacturer's spec. Real limit seems to be aroudn
> 200MB/sec.
> Most RAID sets are 16 spindles RAID 1+0 Some are fewer spindles but all
are
> 1+0.
> Total system has 3GB write cache.
> Front end is 4-node 3 instance cluster (Windows Server 2003) 8x2.8GHz
procs
> and 32GB RAM /box. (Unisys ES-7000)
> Obviously NOT a minimally configured system. I was able to get 50MB/sec
> from direct attached SCSI. Again, the problem I ran into was cache
> saturation and half-duplex data pathing.
> Then again, there is the flip-side problem where the IO system can be too
> fast.
> FIX: The Checkpoint Process Can Delay SQL Server Database Activity and
Does
> Not Yield Scheduler Correctly Causing Error: 17883 to Occur
>
http://support.microsoft.com/default.aspx?scid=kb;en-us;815056&Product=sql2k
>
> --
> Geoff N. Hiten
> Microsoft SQL Server MVP
> Senior Database Administrator
> Careerbuilder.com
>
>
> "Kevin" <ReplyTo@.Newsgroups.only> wrote in message
> news:u2X1wUsuDHA.1512@.TK2MSFTNGP10.phx.gbl...
> > I have seen minimally configured SAN systems saturate at around
50MB/sec.
> > are you seeing high disk queue lengths? if so, then how much bandwidth
do
> > you have to the storage subsystem? How many spindles, what raid level,
> how
> > much write cache?
> >
> > --
> > Kevin Connell, MCDBA
> > ----
> > The views expressed here are my own
> > and not of my employer.
> > ----
> > "Kevin Jackson" <softwiz@.covad.net> wrote in message
> > news:u6qIPUquDHA.2308@.TK2MSFTNGP11.phx.gbl...
> > > I forgot to add we are using attached to an EMC storage system with 1
> > > terabyte of storage... The controller card is fiber to the EMC system
> and
> > > the sustained throughput during a checkpoint is 8 MBps and the that is
> > > nothing compared to what we can actually push to that system. During
> > > database restores we can get over 50 MBps!!! So while it may be an
I/O
> > > issue, we are not even reaching the saturation point of our attached
I/O
> > > system.
> > >
> > > "Geoff N. Hiten" <SRDBA@.Careerbuilder.com> wrote in message
> > > news:OCuhqipuDHA.3220@.tk2msftngp13.phx.gbl...
> > > > With direct attached SCSI I saw the same behavior. Basically, your
> > > > checkpoint is saturating the IO bandwidth of your controller card.
I
> > saw
> > > > this with high-end RAID controllers with 128MB RAM when the
checkpoint
> > was
> > > > large enough to flood the on-board cache. This went away with a SAN
> > since
> > > > FC is full duplex and the SAN has 3GB of write cache, easily enough
to
> > > hold
> > > > a checkpoint.
> > > >
> > > > --
> > > > Geoff N. Hiten
> > > > Microsoft SQL Server MVP
> > > > Senior Database Administrator
> > > > Careerbuilder.com
> > > >
> > > >
> > > >
> > > >
> > > > "Kevin Jackson" <kjackson@.powerwayinc.com> wrote in message
> > > > news:ONLYmSpuDHA.2304@.TK2MSFTNGP12.phx.gbl...
> > > > > While profiling our stored procs that write data to the database,
we
> > > > notice
> > > > > that while the database is checkpointing, the performance of these
> > > stored
> > > > > procs gets worse. What usually will takes 100 ms takes 400 to 700
> ms.
> > > > Once
> > > > > the checkpoint is done, they are back to their normal runtime
> > > > >
> > > > > Is this expected behavior?
> > > > >
> > > > > BTW this is a quad P4 xeon 2.8ghz box with 12 gig of ram running
> > Windows
> > > > > Server 2003/SQL Server 2000 Enterprise SP3a.
> > > > >
> > > > >
> > > >
> > > >
> > >
> > >
> >
> >
>
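A side note on the thread above: besides raw I/O bandwidth, checkpoint impact on SQL Server 2000 is influenced by the "recovery interval" option, which governs roughly how much dirty data each checkpoint must flush. A hedged sketch of experimenting with it (the value 5 is purely illustrative, and changing it trades checkpoint size against recovery time):

```sql
-- "recovery interval" is an advanced option, so expose it first.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO

-- 0 (the default) targets automatic recovery in about a minute;
-- a larger value means fewer but bigger checkpoints. 5 is illustrative.
EXEC sp_configure 'recovery interval', 5
RECONFIGURE WITH OVERRIDE
GO

-- Issue a manual checkpoint while watching the Buffer Manager
-- "Checkpoint pages/sec" counter in Performance Monitor.
CHECKPOINT
GO
```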
Friday, February 24, 2012
Checking performance problem
I am experiencing some performance problems, and I need to check which
database is creating a large load on my server. Which counters should I
add to Performance Monitor to check this?
Atenciosamente
Leandro Loureiro dos Santos|||SQL Profiler -- you can run a trace on your server and include the columns (TextData, HostName, Duration, DatabaseID, DatabaseName, CPU, Reads, Writes), then run it for the events TSQL:StmtCompleted and/or Stored Procs SP:Completed. I like to set the trace to log to a SQL Server table; then you can query the average CPU, reads, and writes by database ID. You can also use TextData to find out which procedures are the most resource intensive.
Or a simpler, but less exact, method is to run sp_who2... This will show you the connections, which database each is connected to, and cumulative CPU time, reads, and writes.|||Stored Procs SP:Completed does not report CPU or Reads;
you will need Stored Procs-RPC:Completed for that
|||If I run this trace on the host (the SQL Server machine), will it impact the server's
performance?
Would it be better to run it from a client machine?
Thanks
"joe chang" <anonymous@.discussions.microsoft.com> escreveu na mensagem
news:084501c3ad24$24b9f360$a501280a@.phx.gbl...
> Stored Procs-SP:Completed does not report CPU or Reads,
> you will need Stored Procs-RPC:Completed for that
> >--Original Message--
> >Sql Profiler -- You can run a trace on your server, and
> include the fields (TextData, HostName, Duration,
> DatabaseID, DatabaseName, CPU, Reads, Writes) Then run it
> for events TSQL:StmtComplete and/or Stored Procs-
> SP:Completed. I like to set the trade to log to a sql
> server table, then you can query the avg cpu, reads, and
> writes by database id... You can also use TextData to find
> out what procedures are the most resource intensive.
> >
> >Or, a simpler, but less exact method, is to run
> sp_who2... This will show you connections, what database
> it is connected to, and accumlative cpu time, reads and
> writes.
> >.
> >|||the most critical item to avoid adverse performance impact
is:
DO NOT save the Profiler trace to a table on the
production server.
My preference is to save to a file; it doesn't matter too
much where, but probably not on the server's log disk.
The reason for this is that single-row inserts into a
database are a CPU-intensive task, and I believe Profiler
issues each row separately.
When saving to a file, I believe Profiler buffers 64 KB
before writing to disk.
Also, don't add too many events to the trace; start with the ones
recommended earlier. If the captures exceed 100-200
events/sec, you might think about filtering out the less
interesting items.
|||On Mon, 17 Nov 2003 15:22:02 -0300, "Leandro Loureiro dos Santos"
<leandro@.email.com> wrote:
>If i run this trace on the Host(SQLSERVER) this will impact the server
>performance?
Yes, there is a small impact.
Of course, this hurts the most when you already have a performance
problem!
>Would be better to execute in a client machine?
Of course.
But as Joe says, you can run the trace on the host, and save the
results to a database or file on another machine, to split the impact.
J.
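Joe's advice (trace to a file, not a table, with a small set of events) can also be followed with a server-side trace instead of the Profiler GUI. The sketch below uses the documented SQL Server 2000 trace procedures; the file path and the 50 MB size are assumptions, so adjust them before use:

```sql
-- Create a server-side trace that buffers output and writes to a file.
-- N'C:\traces\perf' is a placeholder path (SQL Server appends .trc);
-- @maxsize is the maximum file size in MB.
DECLARE @traceid int, @maxsize bigint, @on bit
SET @maxsize = 50
SET @on = 1
EXEC sp_trace_create @traceid OUTPUT, 0, N'C:\traces\perf', @maxsize

-- Capture RPC:Completed (event 10) and SQL:BatchCompleted (event 12)
-- with TextData (1), DatabaseID (3), Duration (13), Reads (16),
-- Writes (17) and CPU (18).
EXEC sp_trace_setevent @traceid, 10, 1, @on
EXEC sp_trace_setevent @traceid, 10, 3, @on
EXEC sp_trace_setevent @traceid, 10, 13, @on
EXEC sp_trace_setevent @traceid, 10, 16, @on
EXEC sp_trace_setevent @traceid, 10, 17, @on
EXEC sp_trace_setevent @traceid, 10, 18, @on
EXEC sp_trace_setevent @traceid, 12, 1, @on
EXEC sp_trace_setevent @traceid, 12, 3, @on
EXEC sp_trace_setevent @traceid, 12, 13, @on
EXEC sp_trace_setevent @traceid, 12, 16, @on
EXEC sp_trace_setevent @traceid, 12, 17, @on
EXEC sp_trace_setevent @traceid, 12, 18, @on

-- Start the trace (status 1 = start, 0 = stop).
EXEC sp_trace_setstatus @traceid, 1
GO
```

Afterwards the captured file can be loaded and aggregated, even on another machine, with `SELECT DatabaseID, AVG(CPU) FROM ::fn_trace_gettable(N'C:\traces\perf.trc', DEFAULT) GROUP BY DatabaseID`.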
Thursday, February 16, 2012
checking bandwidth of network
How can I check whether the bandwidth I currently have is enough for
the operation? Can I use Performance Monitor or Profiler (and which counter
should I choose)?
tks & rdgs
What operation do you have in mind? Performance Monitor can show you network
bandwidth consumption. Even Task Manager can do it.
Wei Xiao [MSFT]
SQL Server Storage Engine Development
http://weblogs.asp.net/weix
This posting is provided "AS IS" with no warranties, and confers no rights.
"maxzsim" <maxzsim@.discussions.microsoft.com> wrote in message
news:FCEFEFEB-F6E0-4FB3-AB78-80C8E3DC6076@.microsoft.com...
|||Hi,
I am thinking of monitoring the SQL Server status from a client PC, and the
operations I meant earlier are just some normal updates and select queries through
a client interface.
tks & rdgs
"wei xiao" wrote:
> what operation do you have in mind? performance monitor can show you network
> bandwidth consumption. Even the task manager can do it.
> --
> Wei Xiao [MSFT]
> SQL Server Storage Engine Development
> http://weblogs.asp.net/weix
> This posting is provided "AS IS" with no warranties, and confers no rights.
> "maxzsim" <maxzsim@.discussions.microsoft.com> wrote in message
> news:FCEFEFEB-F6E0-4FB3-AB78-80C8E3DC6076@.microsoft.com...
>
>
CHECKDB Error
I was asked to look at a database that is having serious performance
problems. I ran a DBCC CHECKDB and got a slew of issues. They have been
having the problem for too long to restore/recover. Is there anything I can
do to clear this issue up? I did try a repair with data loss and the
problem did not go away.
Thanks.
Richard
Executed as user: NT AUTHORITY\SYSTEM. ...e GAM, but no SGAM or IAM has
allocated it. [SQLSTATE 42000] (Error 8905) Extent (1:14040) in database ID
7 is marked allocated in the GAM, but no SGAM or IAM has allocated it.
[SQLSTATE 42000] (Error 8905) Extent (1:14352) in database ID 7 is marked
allocated in the GAM, but no SGAM or IAM has allocated it. [SQLSTATE 42000]
(Error 8905) Extent (1:14448) in database ID 7 is marked allocated in the
GAM, but no SGAM or IAM has allocated it. [SQLSTATE 42000] (Error 8905)
Extent (1:14488) in database ID 7 is marked allocated in the GAM, but no
SGAM or IAM has allocated it. [SQLSTATE 42000] (Error 8905) Extent
(1:14704) in database ID 7 is marked allocated in the GAM, but no SGAM or
IAM has allocated it. [SQLSTATE 42000] (Error 8905) Extent (1:15176) in
database ID 7 is marked allocated in the GAM, but no SGAM or IAM has
allocated it. [SQLSTATE 42000] (Error 8905) Extent (1:15232) in database ID
7 is marked allocated in the G|||Why would you run a CheckDB for performance issues? What are the messages it
is generating?
"Richard Douglass" <RDouglass@.arisinc.com> wrote in message
news:uLEUTFGUHHA.1180@.TK2MSFTNGP05.phx.gbl...
>I was asked to look at a database that is having serious performance
>problems. I ran a DBCC CHECKDB and get a slew of issues. They have
>been having the problem for too long to restore/recover. Is there anything
>I can do to clear this issue up? I did try to do a repair with data loss
>and the problem did not go away.
> Thanks.
> Richard
>
> Executed as user: NT AUTHORITY\SYSTEM. ...e GAM, but no SGAM or IAM has
> allocated it. [SQLSTATE 42000] (Error 8905) Extent (1:14040) in database
> ID 7 is marked allocated in the GAM, but no SGAM or IAM has allocated it.
> [SQLSTATE 42000] (Error 8905) Extent (1:14352) in database ID 7 is marked
> allocated in the GAM, but no SGAM or IAM has allocated it. [SQLSTATE
> 42000] (Error 8905) Extent (1:14448) in database ID 7 is marked allocated
> in the GAM, but no SGAM or IAM has allocated it. [SQLSTATE 42000] (Error
> 8905) Extent (1:14488) in database ID 7 is marked allocated in the GAM,
> but no SGAM or IAM has allocated it. [SQLSTATE 42000] (Error 8905) Extent
> (1:14704) in database ID 7 is marked allocated in the GAM, but no SGAM or
> IAM has allocated it. [SQLSTATE 42000] (Error 8905) Extent (1:15176) in
> database ID 7 is marked allocated in the GAM, but no SGAM or IAM has
> allocated it. [SQLSTATE 42000] (Error 8905) Extent (1:15232) in database
> ID 7 is marked allocated in the G
>|||I ran the check as part of a bigger picture maintenance plan. Reindexing,
purging stale data, etc.
When I run the CHECKDB I get error 8905 over and over.
Is there something I can do to make the database healthy and happy again?
Thanks!
Richard
"ChrisR" <ChrisR@.foo.com> wrote in message
news:OdOQHDHUHHA.920@.TK2MSFTNGP05.phx.gbl...
> Why would you run a CheckDB for performance issues? What are the messages
> it is generating?
>
> "Richard Douglass" <RDouglass@.arisinc.com> wrote in message
> news:uLEUTFGUHHA.1180@.TK2MSFTNGP05.phx.gbl...
>>I was asked to look at a database that is having serious performance
>>problems. I ran a DBCC CHECKDB and get a slew of issues. They have
>>been having the problem for too long to restore/recover. Is there
>>anything I can do to clear this issue up? I did try to do a repair with
>>data loss and the problem did not go away.
>> Thanks.
>> Richard
>>
>> Executed as user: NT AUTHORITY\SYSTEM. ...e GAM, but no SGAM or IAM has
>> allocated it. [SQLSTATE 42000] (Error 8905) Extent (1:14040) in database
>> ID 7 is marked allocated in the GAM, but no SGAM or IAM has allocated it.
>> [SQLSTATE 42000] (Error 8905) Extent (1:14352) in database ID 7 is
>> marked allocated in the GAM, but no SGAM or IAM has allocated it.
>> [SQLSTATE 42000] (Error 8905) Extent (1:14448) in database ID 7 is
>> marked allocated in the GAM, but no SGAM or IAM has allocated it.
>> [SQLSTATE 42000] (Error 8905) Extent (1:14488) in database ID 7 is marked
>> allocated in the GAM, but no SGAM or IAM has allocated it. [SQLSTATE
>> 42000] (Error 8905) Extent (1:14704) in database ID 7 is marked
>> allocated in the GAM, but no SGAM or IAM has allocated it. [SQLSTATE
>> 42000] (Error 8905) Extent (1:15176) in database ID 7 is marked
>> allocated in the GAM, but no SGAM or IAM has allocated it. [SQLSTATE
>> 42000] (Error 8905) Extent (1:15232) in database ID 7 is marked
>> allocated in the G
>|||Perhaps this applies to you? Not very encouraging, though...
http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=61284
Also, here's some good technical info, which mentions 8905:
http://blogs.msdn.com/sqlserverstorageengine/archive/2006/07/18/670341.aspx
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://sqlblog.com/blogs/tibor_karaszi
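For reference, the repair path the poster alludes to has a standard shape on SQL Server 2000: CHECKDB repair options require the database in single-user mode, and REPAIR_ALLOW_DATA_LOSS can deallocate damaged pages, so it is a last resort after a backup. The database name below is a placeholder:

```sql
-- Back up the database before attempting any repair.
ALTER DATABASE MyDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE
GO

-- Report-only pass: list every error without attempting repair.
DBCC CHECKDB ('MyDb') WITH ALL_ERRORMSGS, NO_INFOMSGS
GO

-- Last-resort repair pass; may discard data to restore consistency.
DBCC CHECKDB ('MyDb', REPAIR_ALLOW_DATA_LOSS)
GO

ALTER DATABASE MyDb SET MULTI_USER
GO
```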
"Richard Douglass" <RDouglass@.arisinc.com> wrote in message
news:e7g4G9HUHHA.2212@.TK2MSFTNGP02.phx.gbl...
>I ran the check as part of a bigger picture maintenance plan. Reindexing, purging stale data, etc.
> When I run the CHECKDB I get error 8905 over and over.
> Is there something I can do to make the database healthy and happy again'
> Thanks!
> Richard
>
> "ChrisR" <ChrisR@.foo.com> wrote in message news:OdOQHDHUHHA.920@.TK2MSFTNGP05.phx.gbl...
>> Why would you run a CheckDB for performance issues? What are the messages it is generating?
>>
>> "Richard Douglass" <RDouglass@.arisinc.com> wrote in message
>> news:uLEUTFGUHHA.1180@.TK2MSFTNGP05.phx.gbl...
>>I was asked to look at a database that is having serious performance problems. I ran a DBCC
>>CHECKDB and get a slew of issues. They have been having the problem for too long to
>>restore/recover. Is there anything I can do to clear this issue up? I did try to do a repair
>>with data loss and the problem did not go away.
>> Thanks.
>> Richard
>>
>> Executed as user: NT AUTHORITY\SYSTEM. ...e GAM, but no SGAM or IAM has allocated it. [SQLSTATE
>> 42000] (Error 8905) Extent (1:14040) in database ID 7 is marked allocated in the GAM, but no
>> SGAM or IAM has allocated it. [SQLSTATE 42000] (Error 8905) Extent (1:14352) in database ID 7
>> is marked allocated in the GAM, but no SGAM or IAM has allocated it. [SQLSTATE 42000] (Error
>> 8905) Extent (1:14448) in database ID 7 is marked allocated in the GAM, but no SGAM or IAM has
>> allocated it. [SQLSTATE 42000] (Error 8905) Extent (1:14488) in database ID 7 is marked
>> allocated in the GAM, but no SGAM or IAM has allocated it. [SQLSTATE 42000] (Error 8905) Extent
>> (1:14704) in database ID 7 is marked allocated in the GAM, but no SGAM or IAM has allocated it.
>> [SQLSTATE 42000] (Error 8905) Extent (1:15176) in database ID 7 is marked allocated in the GAM,
>> but no SGAM or IAM has allocated it. [SQLSTATE 42000] (Error 8905) Extent (1:15232) in database
>> ID 7 is marked allocated in the G
>>
>|||Can you email me the complete output from CHECKDB? (Send me email through my
blog below)
--
Paul Randal
Principal Lead Program Manager
Core Storage Engine, Microsoft SQL Server Team
http://blogs.msdn.com/sqlserverstorageengine/default.aspx
"Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com> wrote in
message news:e472SBIUHHA.4744@.TK2MSFTNGP02.phx.gbl...
> Perhaps this applies to you? Not very encouraging, though...
> http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=61284
> Also, here's some good techninfo, which mentions 8905:
> http://blogs.msdn.com/sqlserverstorageengine/archive/2006/07/18/670341.aspx
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://sqlblog.com/blogs/tibor_karaszi
>
> "Richard Douglass" <RDouglass@.arisinc.com> wrote in message
> news:e7g4G9HUHHA.2212@.TK2MSFTNGP02.phx.gbl...
>>I ran the check as part of a bigger picture maintenance plan. Reindexing,
>>purging stale data, etc.
>> When I run the CHECKDB I get error 8905 over and over.
>> Is there something I can do to make the database healthy and happy
>> again'
>> Thanks!
>> Richard
>>
>> "ChrisR" <ChrisR@.foo.com> wrote in message
>> news:OdOQHDHUHHA.920@.TK2MSFTNGP05.phx.gbl...
>> Why would you run a CheckDB for performance issues? What are the
>> messages it is generating?
>>
>> "Richard Douglass" <RDouglass@.arisinc.com> wrote in message
>> news:uLEUTFGUHHA.1180@.TK2MSFTNGP05.phx.gbl...
>>I was asked to look at a database that is having serious performance
>>problems. I ran a DBCC CHECKDB and get a slew of issues. They have
>>been having the problem for too long to restore/recover. Is there
>>anything I can do to clear this issue up? I did try to do a repair with
>>data loss and the problem did not go away.
>> Thanks.
>> Richard
>>
>> Executed as user: NT AUTHORITY\SYSTEM. ...e GAM, but no SGAM or IAM has
>> allocated it. [SQLSTATE 42000] (Error 8905) Extent (1:14040) in
>> database ID 7 is marked allocated in the GAM, but no SGAM or IAM has
>> allocated it. [SQLSTATE 42000] (Error 8905) Extent (1:14352) in
>> database ID 7 is marked allocated in the GAM, but no SGAM or IAM has
>> allocated it. [SQLSTATE 42000] (Error 8905) Extent (1:14448) in
>> database ID 7 is marked allocated in the GAM, but no SGAM or IAM has
>> allocated it. [SQLSTATE 42000] (Error 8905) Extent (1:14488) in
>> database ID 7 is marked allocated in the GAM, but no SGAM or IAM has
>> allocated it. [SQLSTATE 42000] (Error 8905) Extent (1:14704) in
>> database ID 7 is marked allocated in the GAM, but no SGAM or IAM has
>> allocated it. [SQLSTATE 42000] (Error 8905) Extent (1:15176) in
>> database ID 7 is marked allocated in the GAM, but no SGAM or IAM has
>> allocated it. [SQLSTATE 42000] (Error 8905) Extent (1:15232) in
>> database ID 7 is marked allocated in the G
>>
>>
>