Showing posts with label shrink. Show all posts

Tuesday, March 27, 2012


Clear/Purge Log files

The MDF file is 800 MB but the LDF file is 5 GB. I ran "shrink database",
but it does not seem to reduce the MDF or LDF file size.
What I did was detach the database, delete the LDF file, and re-attach the
database to create a new LDF file. If I do not do so, the application cannot
work (it hangs!) because the LDF file is so huge that it takes ages to commit
a transaction. Is there a "better" way to control the LDF file, like
auto-purging? Should I restrict the log file to a specific size,
like 500 MB? Does that mean it will auto-purge each time the LDF file
reaches 500 MB?
Thanks
|||The way you manage your log file is driven by your database recovery plan.
If your recovery plan is to restore from your last full backup and not apply
transaction log backups, then change your database recovery model to SIMPLE.
This will keep your log size reasonable by removing committed transactions
from the log. The log will still need to be large enough to accommodate
your largest single transaction. If you want to reduce potential data loss,
you should use the BULK_LOGGED or FULL recovery model and back up your log
periodically.
The proper way to shrink files is with DBCC SHRINKFILE. See the Books
Online for details. You should not need to do this as part of routine
maintenance.
--
Hope this helps.
Dan Guzman
SQL Server MVP
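Dan's two options can be sketched in T-SQL (the database name MyDb, the logical file name MyDb_Log, and the backup path below are placeholders for illustration, not names from the original post):

```sql
-- Option 1: no point-in-time recovery needed -> SIMPLE recovery model.
-- Committed transactions are removed from the log at each checkpoint.
ALTER DATABASE MyDb SET RECOVERY SIMPLE
GO

-- Option 2: point-in-time recovery needed -> FULL (or BULK_LOGGED) model,
-- with periodic log backups to keep the log from growing without bound.
ALTER DATABASE MyDb SET RECOVERY FULL
GO
BACKUP LOG MyDb TO DISK = 'd:\backup\MyDb.trn'
GO

-- One-off shrink after the log has been emptied (not routine maintenance):
DBCC SHRINKFILE('MyDb_Log', 500)  -- target size in MB
GO
```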
"Carlos" <wt_know@.hotmail.com> wrote in message
news:OueiitTPEHA.620@.TK2MSFTNGP10.phx.gbl...
> The MDF file is 800 MB but the LDF file is 5 GB. I run the "shrink database"
> but this does not seems to reduce the mdf or ldf filesize.
> What I did is detach the database, delete the ldf file, re-attach the
> database to create a new ldf file. If I do not do so, the application cannot
> work (hang!) because the ldf file is too huge and it takes ages to commit a
> transaction. Is there a "better" way to control the ldf file like
> auto-purging ? Should I restrict the log file size to a specific filesize
> like 500MB ? Does this mean it will auto-purge each time it reach 500MB for
> the ldf file ?
> Thanks
>
>|||Hi,
Instead of detaching the database, deleting the LDF file, and re-attaching
it, you should have tried the steps below:
ALTER DATABASE <dbname> SET SINGLE_USER WITH ROLLBACK IMMEDIATE
GO
-- back up the log so the inactive portion can be truncated
BACKUP LOG <dbname> TO DISK = 'd:\backup\dbname.trn1'
GO
-- release the unused space at the end of the log file
DBCC SHRINKFILE('logical_log_name', TRUNCATEONLY)
GO
ALTER DATABASE <dbname> SET MULTI_USER
After executing the above, you can run the command below to check the log
file size and usage:
DBCC SQLPERF(LOGSPACE)
As Dan suggested, go for the SIMPLE recovery model if your data is not
critical or you do not require transaction-log-based (point-in-time) recovery.
Thanks
Hari
MCDBA
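One detail the script above leaves as a placeholder is 'logical_log_name'. A sketch of how to look it up (MyDb below is a hypothetical database name):

```sql
-- List the files of the current database; the "name" column of the
-- row whose physical filename ends in .ldf is the logical log name.
USE MyDb
GO
EXEC sp_helpfile
GO

-- Then check log size and percent used across all databases:
DBCC SQLPERF(LOGSPACE)
GO
```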
"Dan Guzman" <danguzman@.nospam-earthlink.net> wrote in message
news:uTGdG0TPEHA.2256@.TK2MSFTNGP10.phx.gbl...
> The way you manage your log file is driven by your database recovery plan.
> If your recovery plan is to restore from your last full backup and not apply
> transaction log backups, then change your database recovery model to SIMPLE.
> This will keep your log size reasonable by removing committed transactions
> from the log. The log will still need to be large enough to accommodate
> your largest single transaction. If you want to reduce potential data loss,
> you should use the BULK_LOGGED or FULL recovery model and backup your log
> periodically.
> The proper way to shrink files is with DBCC SHRINKFILE. See the Books
> Online for details. You should not need to do this as part of routine
> maintenance.
> --
> Hope this helps.
> Dan Guzman
> SQL Server MVP
> "Carlos" <wt_know@.hotmail.com> wrote in message
> news:OueiitTPEHA.620@.TK2MSFTNGP10.phx.gbl...
> > The MDF file is 800 MB but the LDF file is 5 GB. I run the "shrink
> > database"
> > but this does not seems to reduce the mdf or ldf filesize.
> >
> > What I did is detach the database, delete the ldf file, re-attach the
> > database to create a new ldf file. If I do not do so, the application
> > cannot
> > work (hang!) because the ldf file is too huge and it takes ages to commit a
> > transaction. Is there a "better" way to control the ldf file like
> > auto-purging ? Should I restrict the log file size to a specific filesize
> > like 500MB ? Does this mean it will auto-purge each time it reach 500MB
> > for
> > the ldf file ?
> >
> > Thanks
> >
>
>|||Thanks for the advice! :-)
"Hari" <hari_prasad_k@.hotmail.com> wrote in message
news:%23MWo0lUPEHA.3328@.TK2MSFTNGP09.phx.gbl...
> Hi,
> Instead of detaching , delete the LDF file, Attach the database ytou should
> have tried the below steps:-
> Alter database <dbname> set single_user with rollback immediate
> go
> backup log <dbname> to disk='d:\backup\dbname.trn1'
> go
> dbcc shrinkfile('logical_log_name',truncateonly)
> go
> Alter database <dbname> set multi_user
> After executing the above you can execute the below command check the log
> file size and usage,
> dbcc sqlperf(logspace)
> Like Dan suggested go for SIMPLE recovery model if your data is not critical
> or you not require a transaction log based recovery (POINT_IN_TIME).
> Thanks
> Hari
> MCDBA
>
> "Dan Guzman" <danguzman@.nospam-earthlink.net> wrote in message
> news:uTGdG0TPEHA.2256@.TK2MSFTNGP10.phx.gbl...
> > The way you manage your log file is driven by your database recovery plan.
> > If your recovery plan is to restore from your last full backup and not
> > apply
> > transaction log backups, then change your database recovery model to
> > SIMPLE.
> > This will keep your log size reasonable by removing committed transactions
> > from the log. The log will still need to be large enough to accommodate
> > your largest single transaction. If you want to reduce potential data
> > loss,
> > you should use the BULK_LOGGED or FULL recovery model and backup your log
> > periodically.
> >
> > The proper way to shrink files is with DBCC SHRINKFILE. See the Books
> > Online for details. You should not need to do this as part of routine
> > maintenance.
> >
> > --
> > Hope this helps.
> >
> > Dan Guzman
> > SQL Server MVP
> >
> > "Carlos" <wt_know@.hotmail.com> wrote in message
> > news:OueiitTPEHA.620@.TK2MSFTNGP10.phx.gbl...
> > > The MDF file is 800 MB but the LDF file is 5 GB. I run the "shrink
> > > database"
> > > but this does not seems to reduce the mdf or ldf filesize.
> > >
> > > What I did is detach the database, delete the ldf file, re-attach the
> > > database to create a new ldf file. If I do not do so, the application
> > > cannot
> > > work (hang!) because the ldf file is too huge and it takes ages to
> > > commit a
> > > transaction. Is there a "better" way to control the ldf file like
> > > auto-purging ? Should I restrict the log file size to a specific
> > > filesize
> > > like 500MB ? Does this mean it will auto-purge each time it reach 500MB
> > > for
> > > the ldf file ?
> > >
> > > Thanks
> > >
> >
>

Tuesday, March 20, 2012

circular transaction log

Hi,
I have two important questions:
Question 1:
I need to know the importance of the transaction log file in MSSQL. If we
shrink our transaction log regularly, would there be any negative effect or
loss of any useful feature in MSSQL?
Question 2:
In our scenario, the transaction log grows very fast because of heavy DB
operations, so we need to shrink the transaction log file regularly. I need a
way to create a circular log file in MSSQL so that it will automatically
overwrite the transaction log whenever it reaches its maximum size. I know
Oracle and DB2 have this feature, but I could not find any satisfactory
Microsoft resource that tells me how to create a circular log file. Please
help me out.
Best Regards,
Abdul-Rahman
"Abdul-Rahman" <rahman.mahmood@.pk.softecheww.com> wrote in message
news:%23AgZWGg3EHA.2600@.TK2MSFTNGP09.phx.gbl...
> Hi,
> I have two important questions :
> Question 1:
>
> I need to know what the is the importance of transaction log file in
> MSSQL.
> If we shrink our transaction log regularly, then would there be any
> negative
> effect or loss of any useful feature in MSSQL.
>
The transaction log is used for recovery. Without a transaction log you
will only be able to restore your database from a full backup. For instance,
if you take full backups every night, any changes made since the previous
night's backup would be lost.
Here's a description of Sql Server recovery models.
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/adminsql/ad_bkprst_60s9.asp
Under the Simple Recovery model your log file will be truncated at every
database checkpoint, and is only used to roll back failed transactions.
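Switching models is a one-line change (MyDb below is a hypothetical database name; verifying the current setting first is a sketch of good practice, not part of the original post):

```sql
-- Check the current recovery model (SQL Server 2000 and later):
SELECT DATABASEPROPERTYEX('MyDb', 'Recovery')
GO

-- Switch to the Simple model; the log is then truncated at each checkpoint:
ALTER DATABASE MyDb SET RECOVERY SIMPLE
GO
```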
> Question 2:
>
> In our scenario, the transaction log grows very fast because of the heavy
> DB
> operations, so we need to shrink the transaction file regularly. I need a
> way to create a circular log file in MSSQL so that it will automatically
> overwrite the transaction log file when ever it reaches the maximum limit.
> I
> know in oracle and DB2 this feature is there, but I could not find any
> satisfactory Microsoft resource which tells can inform me how to create a
> circular log file. Please help me out.
>
The equivalent of Oracle's NOARCHIVELOG mode is the Simple Recovery model. In
the full recovery model, with heavy use you will need to run BACKUP LOG
regularly to move the log somewhere safe and truncate it.
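The backup-then-reuse cycle in the full model can be sketched as follows (the database name and backup path are placeholders); each log backup marks the backed-up portion of the log as reusable, which is what keeps the file from growing:

```sql
-- Run on a schedule (e.g. every 15-30 minutes under heavy load):
BACKUP LOG MyDb TO DISK = 'd:\backup\MyDb_log.trn'
GO
```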
David
|||"Abdul-Rahman" <rahman.mahmood@.pk.softecheww.com> wrote in message
news:%23AgZWGg3EHA.2600@.TK2MSFTNGP09.phx.gbl...
> Hi,
> I have two important questions :
> Question 1:
>
> I need to know what the is the importance of transaction log file in
> MSSQL.
> If we shrink our transaction log regularly, then would there be any
> negative
> effect or loss of any useful feature in MSSQL.
Shrinking the transaction log is generally a bad idea.
For one thing you risk getting disk level fragmentation. You're better off
creating a full size transaction log on a newly formatted disk and not
shrinking it.
In addition, if you keep shrinking it, you'll have to keep expanding it.
This takes time and can slow down your database.
Also, how are you clearing it out?
For production databases, you should probably be doing backups as often
as necessary.
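The advice above about creating the log at full size rather than repeatedly shrinking and regrowing it can be sketched like this (the names and size are hypothetical):

```sql
-- Create the log at its full working size up front so it never has to
-- autogrow (and fragment the disk) during normal operation:
ALTER DATABASE MyDb
MODIFY FILE (NAME = 'MyDb_Log', SIZE = 500MB)
GO
```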
>
> Question 2:
>
> In our scenario, the transaction log grows very fast because of the heavy
> DB
> operations, so we need to shrink the transaction file regularly. I need a
> way to create a circular log file in MSSQL so that it will automatically
> overwrite the transaction log file when ever it reaches the maximum limit.
> I
> know in oracle and DB2 this feature is there, but I could not find any
> satisfactory Microsoft resource which tells can inform me how to create a
> circular log file. Please help me out.
>
Use the Bulk Logged or Simple recovery model. Books Online goes into more
detail.
However, keep in mind your decisions affect your recovery options.
>
>
> Best Regrads,
> Abdul-Rahman
>


circular log

Hi,
I have 2 questions:
Question 1:
I need to know what the importance of the transaction log file is. If we
shrink our transaction log regularly, would there be any negative effect or
loss of any useful feature in MSSQL?
Question 2:
In our scenario, the transaction log grows very fast because of heavy DB
operations, so we need to shrink the transaction log file regularly. I need a
way to create a circular log file in MSSQL so that it will automatically
overwrite the transaction log whenever it reaches its maximum size. I know
Oracle and DB2 have this feature, but I could not find any satisfactory
Microsoft resource that tells us how to create a circular log file. Please
help me out.
Best Regards,
Abdul-Rahman.
> Hi,
> I have 2 questions:
>
> Question 1:
> I need to know what the importance of the transaction log file is.
--
The transaction log allows point-in-time data recovery.
> If we shrink our transaction log regularly, then would there be any
> negative
> effect or loss of any useful feature in MSSQL.
--
It is safe to truncate the transaction log by backing it up regularly.
> Question 2:
> In our scenario, the transaction log grows very fast because of the heavy
> DB
> operations, so we need to shrink the transaction file regularly. I need a
> way to create a circular log file in MSSQL so that it will automatically
> overwrite the transaction log file when ever it reaches the maximum
> limit. I
> know in oracle and DB2 this feature is there, but I could not find any
> satisfactory Microsoft resource which tells us how to create a circular
> log
> file. Please help me out
--
There is no circular log in SQL Server. Do you need point-in-time recovery?
If not, you can use the SIMPLE recovery model for your database. Or you can
keep the FULL recovery model and back up your transaction logs more
frequently. Backing up your transaction log will truncate it.
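The two options above can be sketched side by side (MyDb and the backup path are placeholders):

```sql
-- No point-in-time recovery needed: let SQL Server reuse log space
-- automatically at each checkpoint.
ALTER DATABASE MyDb SET RECOVERY SIMPLE
GO

-- Point-in-time recovery needed: stay in FULL and back up the log
-- frequently; each backup truncates the inactive portion of the log.
BACKUP LOG MyDb TO DISK = 'd:\backup\MyDb_log.trn'
GO
```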
Hope this helps,
--
Eric Cárdenas
Support professional
This posting is provided "AS IS" with no warranties, and confers no rights.