Tuesday, March 27, 2012
Clear/Purge Log files
The MDF file is 800 MB but the LDF file is 5 GB. I ran "shrink database",
but this does not seem to reduce the MDF or LDF file size.
What I did was detach the database, delete the LDF file, and re-attach the
database to create a new LDF file. If I do not do so, the application cannot
work (it hangs) because the LDF file is too huge and it takes ages to commit a
transaction. Is there a "better" way to control the LDF file, like
auto-purging? Should I restrict the log file to a specific size,
like 500 MB? Does this mean it will auto-purge each time the LDF file
reaches 500 MB?
Thanks|||The way you manage your log file is driven by your database recovery plan.
If your recovery plan is to restore from your last full backup and not apply
transaction log backups, then change your database recovery model to SIMPLE.
This will keep your log size reasonable by removing committed transactions
from the log. The log will still need to be large enough to accommodate
your largest single transaction. If you want to reduce potential data loss,
you should use the BULK_LOGGED or FULL recovery model and backup your log
periodically.
The proper way to shrink files is with DBCC SHRINKFILE. See the Books
Online for details. You should not need to do this as part of routine
maintenance.
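The advice above can be sketched in T-SQL (a minimal sketch; the database name, logical log file name, and backup path are placeholders):

```sql
-- Option 1: no point-in-time recovery needed -> SIMPLE model keeps the log small
ALTER DATABASE YourDb SET RECOVERY SIMPLE;

-- Option 2: keep FULL (or BULK_LOGGED) and back up the log periodically,
-- which marks committed transactions for reuse inside the file
BACKUP LOG YourDb TO DISK = 'D:\backup\YourDb_log.trn';

-- One-off shrink of the physical file after the log has been truncated
DBCC SHRINKFILE ('YourDb_log', 500);  -- target size in MB
```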
Hope this helps.
Dan Guzman
SQL Server MVP
"Carlos" <wt_know@.hotmail.com> wrote in message
news:OueiitTPEHA.620@.TK2MSFTNGP10.phx.gbl...
>|||Hi,
Instead of detaching the database, deleting the LDF file, and re-attaching it, you
should have tried the steps below:
Alter database <dbname> set single_user with rollback immediate
go
backup log <dbname> to disk='d:\backup\dbname.trn1'
go
dbcc shrinkfile('logical_log_name', truncateonly)
go
Alter database <dbname> set multi_user
After executing the above, you can run the command below to check the log
file size and usage:
dbcc sqlperf(logspace)
As Dan suggested, go for the SIMPLE recovery model if your data is not critical
or you do not require transaction-log-based (point-in-time) recovery.
Thanks
Hari
MCDBA
"Dan Guzman" <danguzman@.nospam-earthlink.net> wrote in message
news:uTGdG0TPEHA.2256@.TK2MSFTNGP10.phx.gbl...
>|||Thanks for the advice! :-)
"Hari" <hari_prasad_k@.hotmail.com> wrote in message
news:%23MWo0lUPEHA.3328@.TK2MSFTNGP09.phx.gbl...
Clear SQL Server logs by running xp_cmdshell
(my user in SQL Server is an admin).
I don't want to use Enterprise Manager; I want to write a query and use xp_cmdshell.
Thanks in advance.|||What logs are you talking about? SQL Server has error logs that you can cycle; the server keeps 5 logs active.
Or, for the transaction logs, you can use T-SQL to back up the transaction logs and truncate them.
But do you actually use the transaction logs, i.e. are you using them for disaster recovery or such?
You can also turn transaction logging down. This is done by changing the database's recovery model from Full (all transactions logged) to Bulk-Logged (bulk operations minimally logged) or Simple (the log is truncated automatically, so no log-based recovery).
There are also articles on SQLServerCentral.com and MSDN on using T-SQL to shrink the logs and reduce wasted space.
Remember that Enterprise Manager is just a front end for T-SQL commands to the database; all commands can be run from code.
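For the error-log part, this can be done in T-SQL without xp_cmdshell at all (a sketch; sp_cycle_errorlog is a documented system procedure, while the database name and backup path are placeholders):

```sql
-- Close the current SQL Server error log and start a new one;
-- the old log moves into the numbered archive set
EXEC master.dbo.sp_cycle_errorlog;

-- For the transaction log: back it up, which truncates the inactive portion
BACKUP LOG YourDb TO DISK = 'D:\backup\YourDb_log.trn';
```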
Sunday, March 25, 2012
Clean up old stored procedures
Is there a way to find out if a stored procedure has not been run in a
while?
Mike W wrote:
> Is there a way to find out if a stored procedure has not been run in a
> while?
Not directly. The best you can do is create a server-side trace that
traps the SP:Starting event. You do not need to bind the TextData, only
the ObjectID, DatabaseID. Let it run until you feel you'll get the best
collection of procedures. You can resolve the object names against the
sysobjects table in each database or use object_name() for the currently
selected database.
I would obviously test the removal in the dev environment before
removing any procedures from production.
David Gugick
Imceda Software
www.imceda.com
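The server-side trace described above can be sketched with the SQL Trace system procedures (a sketch under stated assumptions: event class 42 is SP:Starting, trace columns 3 and 22 are DatabaseID and ObjectID; the output path is a placeholder):

```sql
DECLARE @TraceID int;
DECLARE @maxsize bigint;
SET @maxsize = 100;  -- max trace file size in MB

-- Create a file-based trace on the server
EXEC sp_trace_create @TraceID OUTPUT, 0, N'D:\traces\sp_usage', @maxsize;

-- Capture only DatabaseID (3) and ObjectID (22) for SP:Starting (42)
EXEC sp_trace_setevent @TraceID, 42, 3, 1;
EXEC sp_trace_setevent @TraceID, 42, 22, 1;

-- Start the trace; stop it later with sp_trace_setstatus @TraceID, 0
EXEC sp_trace_setstatus @TraceID, 1;
```

Once the trace has run long enough, the captured ObjectIDs can be resolved with object_name() as David describes.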
clean up data - when process runs next time
In an integration project I am moving data from A to B.
The first time is fine, since table B is empty.
However, the next time I run the process, I would like to delete all records in B before I run the project again.
What is the best way to delete / clean up data when you re-run the process?
Cheers, T
Issue a DELETE or TRUNCATE from an Execute SQL Task.
-Jamie
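The Execute SQL Task would simply run a statement like one of these (dbo.B stands in for the destination table):

```sql
-- Fast and minimally logged, but not allowed if B is referenced by a foreign key
TRUNCATE TABLE dbo.B;

-- Fully logged alternative that honors foreign keys and fires delete triggers
-- DELETE FROM dbo.B;
```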
Thursday, March 22, 2012
ClassNotFoundException
I am new to java. I have installed SQL Server JDBC Driver SP3 on my
computer.
When I run the statement Class.forName("..."); , I get the
ClassNotFoundException.
I have read the following article
http://support.microsoft.com/default...;en-us;313100.
And I am sure that I have my CLASSPATH variable is like the following:
CLASSPATH=.;c:\program files\Microsoft SQL Server 2000 Driver for
JDBC\lib\msbase.jar;c:\program files\Microsoft SQL Server 2000 Driver for
JDBC\lib\msutil.jar;c:\program files\Microsoft SQL Server 2000 Driver for
JDBC\lib\mssqlserver.jar
But the exception still occurs. Even when I copied the 3 jar files into
c:\j2sdk\lib, it also didn't work.
Anyone can help me?
Thank you!
Hi Charlie,
Are you sure that the driver classes are accessible from your calling class,
i.e., have you imported the driver package?
Ian
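One quick way to see whether the driver class is actually reachable at runtime is a tiny check program (a hypothetical helper, not from the thread; the class name is the one shipped in mssqlserver.jar with the SQL Server 2000 JDBC driver):

```java
public class DriverCheck {
    public static void main(String[] args) {
        // Driver class shipped in mssqlserver.jar (SQL Server 2000 JDBC driver)
        String driver = "com.microsoft.jdbc.sqlserver.SQLServerDriver";
        try {
            Class.forName(driver);
            System.out.println("driver found on classpath");
        } catch (ClassNotFoundException e) {
            System.out.println("driver NOT on classpath: " + driver);
        }
    }
}
```

Run it with the same CLASSPATH the failing program uses; if it prints the NOT-found line, the jars are not actually being picked up (for example, the CLASSPATH change was made after the command prompt was opened).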
|||It works now.
Thanks Ian!
"Ian555" <Ian555@.discussions.microsoft.com> wrote in message
news:CFED38BD-C33C-4336-97E3-624D5C5513DD@.microsoft.com...
Classic ASP in Report Manager
I'm implementing a multi-value select report which works fine using a
standard HTML page. I want to source my values from a database, however,
and am therefore using ASP. The ASP page will display in Report
Manager but the code will not run.
Thoughts?
Thanks!|||I had uploaded the ASP page to the report manager. Keeping the page
outside report manager did the trick.
-J|||You would need to add the correct script mappings to the Report Manager
virtual directory. We do not add .asp mappings by default.
--
Brian Welcker
Group Program Manager
Microsoft SQL Server Reporting Services
This posting is provided "AS IS" with no warranties, and confers no rights.
"Jim" <emailjeffp@.yahoo.com> wrote in message
news:1114279221.384407.35000@.z14g2000cwz.googlegroups.com...
>I had uploaded the asp page to the report manager. Keeping the page
> outside report manager did the trick.
> -J
>
Class not registered error
When I try to run a DTS package in SQL 2000, it reports an error: Class not registered.
What is the solution to this problem?
You might try the DTS forum - http://groups.google.com/group/microsoft.public.sqlserver.dts?lnk=srg
This forum is for SSIS.
Tuesday, March 20, 2012
ChunkData table is growing!
I have a report that refers to lots of external images (200+ each
run). I've been taking a snapshot of this report so that it's quicker
for users. I've limited the history to 1 snapshot. However, it appears
that the images are being cached in the ChunkData table (in the
ReportServer database) and that the data isn't automatically purged
once the snapshot no longer exists. Is there any way to manually purge
unused data?
Thanks,
Camel|||If you look in the Reporting Services log files you should see a line like
this:
w3wp!library!1!10/2/2007-09:50:47:: i INFO: Cleaned 0 batch records, 0
policies, 0 sessions, 0 cache entries, 0 snapshots, 0 chunks, 0 running
jobs, 0 persisted streams
That's the process that runs that will clean up those tables. If those items
are no longer needed then you'll see "15 chunks" cleaned or some such in
that log message. If they are necessary then they will not be cleaned.
Most of the stuff in the tables in the TempDB database can be cleaned out
without hurting anything, though, so if you want you can shut down the SRS
service and wipe those tables clean and you should be OK.
--
Chris Alton, Microsoft Corp.
SQL Server Developer Support Engineer
This posting is provided "AS IS" with no warranties, and confers no rights.
>|||Hi Chris,
Thanks for your reply. I've checked the log out and found the
following exception:
w3wp!dbcleanup!1e68!8/10/2007-16:30:20:: e ERROR: Sql Error in
CleanOrphanedSnapshots: System.Data.SqlClient.SqlException: Timeout
expired. The timeout period elapsed prior to completion of the
operation or the server is not responding.
at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior
cmdBehavior, RunBehavior runBehavior, Boolean returnStream)
at System.Data.SqlClient.SqlCommand.ExecuteReader()
at
Microsoft.ReportingServices.Library.InstrumentedSqlCommand.ExecuteReader()
at
Microsoft.ReportingServices.Library.DatabaseSessionStorage.CleanOrphanedSnapshots(Int32&
chunksCleaned)
w3wp!library!1e68!8/10/2007-16:30:20:: i INFO: Cleaned 0 batch
records, 0 policies, 0 sessions, 0 cache entries, 0 snapshots, 0
chunks, 0 running jobs
Any ideas why this would be the case?
Camel
|||That usually means you are having a performance issue on the SQL Server
database that is causing it not to return data in the 30 second command
timeout window. It could also be blocking on the SQL Server as well.
Check and see if there are any SPIDs out there that are blocking other
processes.
I'd check your SQL Server Activity and see if there is anything that could
be causing this.
--
Chris Alton, Microsoft Corp.
SQL Server Developer Support Engineer
This posting is provided "AS IS" with no warranties, and confers no rights.
>|||Hi Chris,
I ran CleanOrphanedSnapshots manually with an Actual Execution Plan.
The query takes between 5 and 6 minutes. The bottleneck is in the
second query:
DELETE ChunkData FROM ChunkData INNER JOIN #tempSnapshot ON
ChunkData.SnapshotDataID = #tempSnapshot.SnapshotDataID
The query takes 91% of the total execution time. 31% of this statement
is spent on a Sort, 31% on a Distinct Sort and 29% on a Clustered
Index Seek. It's deleting 22000+ records. Can you make any suggestions
to improve the performance of deleting records from the ChunkData
table?
Camel
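One common mitigation for a long-running delete like the one above (an editorial sketch, not advice given in the thread; it assumes SQL Server 2005+ syntax and uses the table and column names from the query quoted above) is to delete in small batches so each transaction stays short:

```sql
-- Delete matching ChunkData rows 1000 at a time until none remain
WHILE 1 = 1
BEGIN
    DELETE TOP (1000) C
    FROM ChunkData AS C
    INNER JOIN #tempSnapshot AS T
        ON C.SnapshotDataID = T.SnapshotDataID;

    IF @@ROWCOUNT = 0 BREAK;
END
```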
>|||Unfortunately SQL Performance isn't my area of expertise. Try the regular
SQL Database newsgroup and see if they can give you any pointers there.
--
Chris Alton, Microsoft Corp.
SQL Server Developer Support Engineer
This posting is provided "AS IS" with no warranties, and confers no rights.
>|||I'll see what I can find out. Thanks for your help.
Chunkdata
Help is needed on managing the Chunkdata table in the ReportServerTempDB
database. It grows too large after we run reports with a lot of images. I
wonder if it is safe to truncate the Chunkdata table if we don't care about
the caching of the report images, the report history or snapshots.
Any help on this issue is greatly appreciated!
Tommy|||Did you manage to resolve this issue? I am getting the same problem, as are
other people.
I have tried TRUNCATE TABLE ChunkData on a nightly basis, which initially
appeared to work; however, I think it may have caused some problems with some
specific streamed graphs and the like. To resolve this I RESTOREd an older
ReportServerTempDB, which got it all going again.
I am going to try to RESTORE a ReportServerTempDB from when the
database was small, each night, to try to limit this growth.
I can only assume that this is some sort of bug in the cleanup routines, as
18 GB (the entire disk) is an enormous amount of caching for what are
generally only 4- or 5-page reports.
Anyway, if you got anywhere with this, I would be grateful to know.
Thanks in advance.
"Tommy Tao" wrote:
> Help is needed on managing the Chunkdata table in the ReportServerTempDB
> database. It grows too large after we run reports with a lot of images. I
> wonder if it is save to truncate the Chunkdata table if we don't care about
> the caching of the report images, the report history or snapshots.
> Any help on this issue is greatly appreciated!
> Tommy
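The nightly truncation approach discussed in this thread can be scripted as a SQL Agent job step. This is only a sketch: it assumes the default ReportServerTempDB database name, and truncating ChunkData discards all cached report data (cached images and snapshots in the temp cache), so use it only if, as above, you don't care about that cache.

```sql
-- Sketch: nightly cleanup of the Reporting Services chunk cache.
-- Assumes the default ReportServerTempDB name; clears cached report data.
USE ReportServerTempDB;
GO
TRUNCATE TABLE dbo.ChunkData;
GO
-- Optionally reclaim the disk space the table had grown into:
DBCC SHRINKDATABASE (ReportServerTempDB, 10); -- target 10% free space
GO
```

Note that, as the poster above found, clearing the cache can upset in-flight cached items, so schedule this when the report server is idle.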
Thursday, March 8, 2012
Child Package Fails when called from parent
So I have a parent package that calls another package using the Execute Package Task. When I run the child package on its own it runs fine, but when I run it from the parent I get this message. Any ideas?
Error: 0xC00220E4 at Execute VR Account Load: Error 0xC0012050 while preparing to load the package. Package failed validation from the ExecutePackage task. The package cannot run.
Please take a look at this post to see whether the investigations and solutions there help. http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=241941&SiteID=1
thanks
wenyang
|||Wenyang,
Thanks for the reply, but the solution did not work. I might have found a bug here: the child package uses package configurations, and even though I have disabled them on the child, when I execute it from the parent the output window says that it is trying to load the package configurations. Not sure if this has anything to do with it.
Information: 0x40016040 at VR Load Account: The package is attempting to configure from SQL Server using the configuration string ""localhost.CALLMIS";"[dbo].[SSIS Configurations]";"MISLoads_ServerName";".
Thx
|||Disregard my last message. It appears the problem was a bad connection GUID or something like that still hanging around in the child package. I recreated the package and it appears to be working now...
Not my idea of fun...
CHECKING SQL AGENT IN A LAN
Hi, we have a LAN of 400 stations and sometimes the agent stops on several
machines, causing jobs not to run, so I have to check manually whether the agent
is running. The question is: is there a fast way to check if all the agents in
my network are started?
* my version is SQL Server 2000 with SP4
Thanks
Have a look at "Multiserver Administration" in BOL.
"amejiae" wrote:
> Hi we have a LAN of 400 stations and sometimes the agent stop in several
> machines, causing jobs not to run, so i have to check manually if the agent
> is running. The question is: There's a fast way to check if all the agents in
> my network are started?
> * my version is sql server2000 with SP4
> Thanks
|||You might benefit greatly from a product available at:
http://www.sqlsentry.net/
Andrew J. Kelly SQL MVP
"amejiae" <amejiae@.discussions.microsoft.com> wrote in message
news:3B73EBF5-D28D-4DCA-9672-35BD27BA26B9@.microsoft.com...
> Hi we have a LAN of 400 stations and sometimes the agent stop in several
> machines, causing jobs not to run, so i have to check manually if the
> agent
> is running. The question is: There's a fast way to check if all the agents
> in
> my network are started?
> * my version is sql server2000 with SP4
> Thanks
|||You could write a script with SQLDMO.
"amejiae" <amejiae@.discussions.microsoft.com> wrote in message
news:3B73EBF5-D28D-4DCA-9672-35BD27BA26B9@.microsoft.com...
> Hi we have a LAN of 400 stations and sometimes the agent stop in several
> machines, causing jobs not to run, so i have to check manually if the
> agent
> is running. The question is: There's a fast way to check if all the agents
> in
> my network are started?
> * my version is sql server2000 with SP4
> Thanks
|||Or you can take a look at IDERA's Diagnostic Manager.
"amejiae" wrote:
> Hi we have a LAN of 400 stations and sometimes the agent stop in several
> machines, causing jobs not to run, so i have to check manually if the agent
> is running. The question is: There's a fast way to check if all the agents in
> my network are started?
> * my version is sql server2000 with SP4
> Thanks
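One lightweight way to script this check on SQL Server 2000, without buying a monitoring product, is the undocumented xp_servicecontrol procedure. Since it is undocumented and unsupported, treat this as a sketch rather than a guaranteed interface:

```sql
-- Sketch: query the SQL Server Agent service state on one instance
-- (SQL Server 2000; xp_servicecontrol is undocumented/unsupported).
EXEC master.dbo.xp_servicecontrol N'QUERYSTATE', N'SQLServerAgent';
-- Returns a single-row result such as 'Running.' or 'Stopped.'
```

Run this against each registered server (for example from osql in a loop, or from a SQL-DMO script as suggested above) and flag any agents that are not running.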
checktable with repair_rebuild
Hi! Could anyone tell me how much free space is needed to run DBCC
Checktable with the repair_rebuild option? I have a table with more than 100
million records and a size of around 120 GB, and when I ran the command it failed
because of a space issue. I only had around 50 GB of free space left on that
drive.
Does it require the same free space as DBCC DBReindex (1.2 * table size)? Is there
any command I can use to make the rebuild process use tempdb instead of
the database space?
I appreciate your answer.
If you're running 2000 then you can specify the ESTIMATEONLY option to see
how much space you need in tempdb. Check out BOL for more details.
--
Andrew J. Kelly SQL MVP
"james" <kush@.brandes.com> wrote in message
news:%23u9f%23UNJEHA.2604@.tk2msftngp13.phx.gbl...
> Hi! Could anyone tell me how much free space is needed to run DBCC
> Checktable with repair_rebuild option. I have a table with more than 100
> million record with size around 120 G and when I ran the command it failed
> because of space issue. I only had around 50 G. of free space left on that
> drive.
> Does it require same free space as in DBCC DBReindex (1.2 * Table)? Is
there
> any command that I can use to use Tempdb for the rebuild process instead
of
> using the database space?
> I appreciate your answer.
>
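The space estimate Andrew describes can be requested like this (the table name is a placeholder; SQL Server 2000 syntax):

```sql
-- Sketch: ask SQL Server how much tempdb space the check itself will need.
-- 'dbo.BigTable' is a placeholder for the 120 GB table in question.
DBCC CHECKTABLE ('dbo.BigTable') WITH ESTIMATEONLY;
-- Note: this estimates space for running the check only; any
-- REPAIR_REBUILD index rebuild still needs free space in the database
-- itself, comparable to DBCC DBREINDEX on the same index.
```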
>
|||However the ESTIMATEONLY option only works out how much space is required to
run the check - it does not know what repairs may be necessary and how much
space they will require. For an index rebuild, you will need the same amount
of free space as if you were running DBCC DBREINDEX on the same index.
BTW, if you have determined that there is an integrity problem, it is in
your best interests to do root-cause analysis of the problem and see why it
happened (almost certainly hardware). Examine your event logs, the SQL
Server error logs and run any hardware diagnostics you can - the odds are it
will happen again.
Regards.
--
Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine
This posting is provided "AS IS" with no warranties, and confers no rights.
"Andrew J. Kelly" <sqlmvpnoooospam@.shadhawk.com> wrote in message
news:OAWZVyOJEHA.2508@.TK2MSFTNGP10.phx.gbl...
> If your running 2000 then you can specify the ESTIMATE_ONLY option to see
> how much space you need in tempdb. Check out BOL for more details.
> --
> Andrew J. Kelly SQL MVP
>
> "james" <kush@.brandes.com> wrote in message
> news:%23u9f%23UNJEHA.2604@.tk2msftngp13.phx.gbl...
> > Hi! Could anyone tell me how much free space is needed to run DBCC
> > Checktable with repair_rebuild option. I have a table with more than 100
> > million record with size around 120 G and when I ran the command it
failed
> > because of space issue. I only had around 50 G. of free space left on
that
> > drive.
> > Does it require same free space as in DBCC DBReindex (1.2 * Table)? Is
> there
> > any command that I can use to use Tempdb for the rebuild process instead
> of
> > using the database space?
> >
> > I appreciate your answer.
> >
> >
> >
>
|||Thanks for the answer.
In order to find what may have caused this, our network guys ran some online
hardware diagnostics and didn't see any problems. Now we are planning to do
offline diagnostics. I have one question on this:
Is there any specific hardware (disk I/O, CPU, memory, etc.) we need to
focus on? In other words, what has been the main culprit on the hardware side
causing database corruption, in your experience?
I appreciate your answer.
"Paul S Randal [MS]" <prandal@.online.microsoft.com> wrote in message
news:eYh248UJEHA.3436@.tk2msftngp13.phx.gbl...
> However the ESTIMATEONLY option only works out how much space is required
to
> run the check - it does not know what repairs may be necessary and how
much
> space they will require. For an index rebuild, you will need the same
amount
> of free space as if you were running DBCC DBREINDEX on the same index.
> BTW, if you have determined that there is an integrity problem, it is in
> your best interestes to do root-cause analysis of the problem and see why
it
> happened (almost certainly hardware). Examine your event logs, the SQL
> Server error logs and run any hardware diagnostics you can - the odds are
it
> will happen again.
> Regards.
> --
> Paul Randal
> Dev Lead, Microsoft SQL Server Storage Engine
> This posting is provided "AS IS" with no warranties, and confers no
rights.
> "Andrew J. Kelly" <sqlmvpnoooospam@.shadhawk.com> wrote in message
> news:OAWZVyOJEHA.2508@.TK2MSFTNGP10.phx.gbl...
> > If your running 2000 then you can specify the ESTIMATE_ONLY option to
see
> > how much space you need in tempdb. Check out BOL for more details.
> >
> > --
> > Andrew J. Kelly SQL MVP
> >
> >
> > "james" <kush@.brandes.com> wrote in message
> > news:%23u9f%23UNJEHA.2604@.tk2msftngp13.phx.gbl...
> > > Hi! Could anyone tell me how much free space is needed to run DBCC
> > > Checktable with repair_rebuild option. I have a table with more than
100
> > > million record with size around 120 G and when I ran the command it
> failed
> > > because of space issue. I only had around 50 G. of free space left on
> that
> > > drive.
> > > Does it require same free space as in DBCC DBReindex (1.2 * Table)? Is
> > there
> > > any command that I can use to use Tempdb for the rebuild process
instead
> > of
> > > using the database space?
> > >
> > > I appreciate your answer.
> > >
> > >
> > >
> >
> >
>
|||In my experience, the major culprits have been bad drives, cables and
occasional problems with controllers. Make sure you're on the latest
software rev for all your controllers etc.
--
Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine
This posting is provided "AS IS" with no warranties, and confers no rights.
"james" <kush@.brandes.com> wrote in message
news:uSAp727JEHA.2452@.TK2MSFTNGP09.phx.gbl...
> Thanks for the answer.
> In order to find what may have caused this, our Network guys did some
online
> hardware diognistic and didn't see any problems. Now we are planning to do
> offline diognistic. I have one question on this:
> Is there any specifc hardware (disk i/o, cpu, memory etc) do we need to
> focus on? In other words what has been the main culprit from hardware side
> causing database corruption, in your experience?
> I appreciate your answer.
>
> "Paul S Randal [MS]" <prandal@.online.microsoft.com> wrote in message
> news:eYh248UJEHA.3436@.tk2msftngp13.phx.gbl...
> > However the ESTIMATEONLY option only works out how much space is
required
> to
> > run the check - it does not know what repairs may be necessary and how
> much
> > space they will require. For an index rebuild, you will need the same
> amount
> > of free space as if you were running DBCC DBREINDEX on the same index.
> >
> > BTW, if you have determined that there is an integrity problem, it is in
> > your best interestes to do root-cause analysis of the problem and see
why
> it
> > happened (almost certainly hardware). Examine your event logs, the SQL
> > Server error logs and run any hardware diagnostics you can - the odds
are
> it
> > will happen again.
> >
> > Regards.
> >
> > --
> > Paul Randal
> > Dev Lead, Microsoft SQL Server Storage Engine
> >
> > This posting is provided "AS IS" with no warranties, and confers no
> rights.
> >
> > "Andrew J. Kelly" <sqlmvpnoooospam@.shadhawk.com> wrote in message
> > news:OAWZVyOJEHA.2508@.TK2MSFTNGP10.phx.gbl...
> > > If your running 2000 then you can specify the ESTIMATE_ONLY option to
> see
> > > how much space you need in tempdb. Check out BOL for more details.
> > >
> > > --
> > > Andrew J. Kelly SQL MVP
> > >
> > >
> > > "james" <kush@.brandes.com> wrote in message
> > > news:%23u9f%23UNJEHA.2604@.tk2msftngp13.phx.gbl...
> > > > Hi! Could anyone tell me how much free space is needed to run DBCC
> > > > Checktable with repair_rebuild option. I have a table with more than
> 100
> > > > million record with size around 120 G and when I ran the command it
> > failed
> > > > because of space issue. I only had around 50 G. of free space left
on
> > that
> > > > drive.
> > > > Does it require same free space as in DBCC DBReindex (1.2 * Table)?
Is
> > > there
> > > > any command that I can use to use Tempdb for the rebuild process
> instead
> > > of
> > > > using the database space?
> > > >
> > > > I appreciate your answer.
> > > >
> > > >
> > > >
> > >
> > >
> >
> >
>
|||> In order to find what may have caused this, our Network guys did some
online
> hardware diognistic and didn't see any problems. Now we are planning to do
> offline diognistic.
--
Hi James,
You should also consider SQLIOStress tool in your battery of tests:
HOW TO: Use the SQLIOStress Utility to Stress a Disk Subsystem Such As SQL
Server
http://support.microsoft.com/?id=231619
Hope this helps,
--
Eric Cárdenas
SQL Server senior support professional
Saturday, February 25, 2012
Checkpoint and TempDB
We are running SQL 2005, SP1, on Windows 2003.
Occasionally we have a TempDB log that starts growing exponentially. We run
DBCC OpenTran across all databases, we look for high rowcounts on tempdb..
sysindexes for objects like '#%', but nothing shows its face. This happens
very infrequently, and since nothing definitive is found, we reluctantly stop
and restart services.
One thing I would like information on is the checkpoint process, suspecting
that the automatic checkpoint is not occurring in TempDB, and thus, all of a
sudden we have rapid growth.
Is there any way to verify at the time, if checkpoint is on/off, working/not
working?
--
Message posted via SQLMonster.com
http://www.sqlmonster.com/Uwe/Forums.aspx/sql-server/200701/1
I have a similar problem. We have some in-house databases and some
that we have unfortunately inherited. One of them pulls millions of
records into temporary tables using #tablename. Our company is
not going to pay to change the application at this point, so I have to
deal with it. What I did is create a simple job that checks the size of
the tempdb file. I am on 2000 so I used sysfiles. When the size in our
case exceeds 15 gigs, I just use msdb.dbo.sp_start_job to kick off a job
to shrink the tempdb. I think it probably runs about 3X a week, but
has been working to keep everything under control. I also set a max
size of 20 gigs so it doesn't eat up all the space it has on its drive.
Maybe these ideas will help. At least you wouldn't have to stop/start
services and manually intervene. My shrinks generally occur at 2-4 a.m.
when I am sleeping :)
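The monitoring job described above might look roughly like this. This is a sketch for SQL Server 2000; the 15 GB threshold and the 'Shrink tempdb' job name are examples taken from the description, not fixed names:

```sql
-- Sketch: if tempdb's data files exceed ~15 GB, start a shrink job.
DECLARE @size_mb int;
SELECT @size_mb = SUM(size) * 8 / 1024   -- sysfiles.size is in 8 KB pages
FROM tempdb.dbo.sysfiles
WHERE groupid <> 0;                      -- groupid 0 = the log file
IF @size_mb > 15 * 1024
    EXEC msdb.dbo.sp_start_job @job_name = N'Shrink tempdb';
```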
cbrichards via SQLMonster.com wrote:
> We are running SQL 2005, SP1, on Windows 2003.
> Occasionally we have a TempDB log that starts growing exponentially. We run
> DBCC OpenTran across all databases, we look for high rowcounts on tempdb..
> sysindexes for objects like '#%', but nothing shows its face. This happens
> very infrequently, and since nothing definitive is found, we reluctantly stop
> and restart services.
> One thing I would like information on is the checkpoint process, suspecting
> that the automatic checkpoint is not occurring in TempDB, and thus, all of a
> sudden we have rapid growth.
> Is there any way to verify at the time, if checkpoint is on/off, working/not
> working?
> --
> Message posted via SQLMonster.com
> http://www.sqlmonster.com/Uwe/Forums.aspx/sql-server/200701/1
|||Thanks Kristina, but that does not address my issue, as I suspect the
Checkpoint is not working. Since I suspect that the checkpoint process is not
occurring on TempDB, I need to know if there is a way to verify my suspicions
at the time the crisis is occurring.
This Thread is not closed. Please HELP!!
--
Message posted via http://www.sqlmonster.com
|||To capture CHECKPOINT you need to run Profiler.
"cbrichards via SQLMonster.com" <u3288@.uwe> wrote in message
news:6c32652e33427@.uwe...
> Thanks Kristina, but that does not address my issue, as I suspect the
> Checkpoint is not working. Since I suspect that the checkpoint process is
> not
> occurring on TempDB, I need to know if there is a way to verify my
> suspicions
> at the time the crisis is occurring.
> This Thread is not closed. Please HELP!!
> --
> Message posted via http://www.sqlmonster.com
>
|||So I am in the middle of a crisis, my tempdb log is growing at about 2 gig a
minute, and the quickest way to determine if my CHECKPOINT is working is to
run Profiler?
What EventClass and Columns would I use?
Within those EventClasses and Columns you recommend, what am I looking for?
I take it I would filter the Profiler on DatabaseID = 2?
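For what it's worth, before reaching for Profiler there are a few quick, documented checks that show whether an open transaction, rather than a missing checkpoint, is preventing the tempdb log from truncating (the sys.databases query is SQL Server 2005 only):

```sql
-- Log size and percent used for every database, including tempdb:
DBCC SQLPERF (LOGSPACE);
-- Oldest active transaction, if any, holding up tempdb log truncation:
DBCC OPENTRAN ('tempdb');
-- SQL Server 2005: why the log cannot be reused right now:
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'tempdb';
```

If log_reuse_wait_desc shows ACTIVE_TRANSACTION or CHECKPOINT, that points at the culprit directly.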
Uri Dimant wrote:
>To capture CHECKPOINT you need to run profiler.
>> Thanks Kristina, but that does not address my issue, as I suspect the
>> Checkpoint is not working. Since I suspect that the checkpoint process is
>[quoted text clipped - 4 lines]
>> This Thread is not closed. Please HELP!!
--
Message posted via SQLMonster.com
http://www.sqlmonster.com/Uwe/Forums.aspx/sql-server/200701/1
|||Hi
You can restart SQL Server and it will create a new tempdb database
"cbrichards via SQLMonster.com" <u3288@.uwe> wrote in message
news:6c576f9f33a54@.uwe...
> So I am in the middle of a crisis, my tempdb log is growing at about 2 gig
> a
> minute, and the quickest way to determine if my CHECKPOINT is working is
> to
> run Profiler?
> What EventClass and Columns would I use?
> Within those EventClasses and Columns you recommend, what am I looking
> for?
> I take it I would filter the Profiler on DatabaseID = 2?
> Uri Dimant wrote:
>>To capture CHECKPOINT you need to run profiler.
>> Thanks Kristina, but that does not address my issue, as I suspect the
>> Checkpoint is not working. Since I suspect that the checkpoint process
>> is
>>[quoted text clipped - 4 lines]
>> This Thread is not closed. Please HELP!!
> --
> Message posted via SQLMonster.com
> http://www.sqlmonster.com/Uwe/Forums.aspx/sql-server/200701/1
>|||Wow!
Am I not making sense!!
Please somebody...address my questions.
This case is not closed!!!
Uri Dimant wrote:
>Hi
>You can restart SQL Server and it will create a new tempdb database
>> So I am in the middle of a crisis, my tempdb log is growing at about 2 gig
>> a
>[quoted text clipped - 16 lines]
>> This Thread is not closed. Please HELP!!
|||Does this help?
http://support.microsoft.com/kb/317375/
If this is urgent you should open a support case.
--
This posting is provided "AS IS" with no warranties, and confers no rights.
Use of included script samples are subject to the terms specified at
http://www.microsoft.com/info/cpyright.htm
"cbrichards via SQLMonster.com" <u3288@.uwe> wrote in message
news:6c60e0873cbc4@.uwe...
> Wow!
> Am I not making sense!!
> Please somebody...address my questions.
> This case is not closed!!!
> Uri Dimant wrote:
>>Hi
>>You can restart SQL Server and it will create a new tempdb database
>> So I am in the middle of a crisis, my tempdb log is growing at about 2
>> gig
>> a
>>[quoted text clipped - 16 lines]
>> This Thread is not closed. Please HELP!!
> --
> Message posted via http://www.sqlmonster.com
>|||While that is a good link, and I have it bookmarked, my initial question that
started this thread is still not being addressed:
*******************************************************************************************************************
One thing I would like information on is the checkpoint process, suspecting
that the automatic checkpoint is not occurring in TempDB, and thus, all of a
sudden we have rapid growth.
Is there any way to verify at the time of the crisis, if checkpoint is on/off,
working/not
working?
*******************************************************************************************************************
Roger Wolter[MSFT] wrote:
>Does this help?
>http://support.microsoft.com/kb/317375/
>If this is urgent you should open a support case.
>> Wow!
>[quoted text clipped - 14 lines]
>> This Thread is not closed. Please HELP!!
|||Hi
> Is there any way to verify at the time of the crisis, if checkpoint is
> on/off,
> working/not
> working?
1)
You can monitor the number of pages flushed by a checkpoint using
PerfMon: the SQLServer:Buffer Manager object, Checkpoint pages/sec counter.
2)
You can start SQL Server with trace flag 3502. With this trace flag on,
whenever a checkpoint occurs, it will be recorded in the SQL Server error
log, along with the time of the checkpoint.
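Point 2 should not require a restart: trace flag 3502 can also be enabled on a running server with DBCC TRACEON (sysadmin required). A sketch:

```sql
-- Enable trace flag 3502 server-wide so each checkpoint (and its
-- time) is written to the SQL Server error log.
DBCC TRACEON (3502, -1);

-- Verify the flag is active.
DBCC TRACESTATUS (3502, -1);

-- Force a checkpoint in tempdb, then read the error log to confirm
-- an entry appears.
USE tempdb;
CHECKPOINT;
```

If a manual CHECKPOINT is logged but automatic ones never appear, that supports the suspicion that automatic checkpoints are not firing.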
"cbrichards via SQLMonster.com" <u3288@.uwe> wrote in message
news:6c62ef5fc5f05@.uwe...
> While that is a good link, and I have it bookmarked, my initial question
> that
> started this thread is still not being addressed:
> *******************************************************************************************************************
> One thing I would like information on is the checkpoint process,
> suspecting
> that the automatic checkpoint is not occurring in TempDB, and thus, all of
> a
> sudden we have rapid growth.
> Is there any way to verify at the time of the crisis, if checkpoint is
> on/off,
> working/not
> working?
> *******************************************************************************************************************
> Roger Wolter[MSFT] wrote:
>>Does this help?
>>http://support.microsoft.com/kb/317375/
>>If this is urgent you should open a support case.
>> Wow!
>>[quoted text clipped - 14 lines]
>>>
>>> This Thread is not closed. Please HELP!!
> --
> Message posted via SQLMonster.com
> http://www.sqlmonster.com/Uwe/Forums.aspx/sql-server/200701/1
>|||Thanks Uri. I appreciate the info.
Uri Dimant wrote:
>Hi
>> Is there any way to verify at the time of the crisis, if checkpoint is
>> on/off,
>> working/not
>> working?
>1)
>You can monitor the number of pages flushed by a checkpoint
>using PerfMon - SQLServer:Buffer Manager object, Checkpoint
>pages/sec counter
>2)
>You can start SQL Server with the
>traceflag 3502. With this trace flag on, whenever a checkpoint occurs, it
>will be recorded in the SQL Server error log, along with the time of the
>checkpoint.
>> While that is a good link, and I have it bookmarked, my initial question
>> that
>[quoted text clipped - 22 lines]
>>>
>>> This Thread is not closed. Please HELP!!