Tuesday, March 27, 2012

Clear/Purge Log files

The MDF file is 800 MB but the LDF file is 5 GB. I ran "shrink database",
but this does not seem to reduce the MDF or LDF file size.
What I did was detach the database, delete the LDF file, and re-attach the
database to create a new LDF file. If I do not do so, the application cannot
work (it hangs!) because the LDF file is too huge and it takes ages to commit
a transaction. Is there a "better" way to control the LDF file, like
auto-purging? Should I restrict the log file to a specific size, like 500 MB?
Does this mean it will auto-purge each time the LDF file reaches 500 MB?
Thanks|||The way you manage your log file is driven by your database recovery plan.
If your recovery plan is to restore from your last full backup and not apply
transaction log backups, then change your database recovery model to SIMPLE.
This will keep your log size reasonable by removing committed transactions
from the log. The log will still need to be large enough to accommodate
your largest single transaction. If you want to reduce potential data loss,
you should use the BULK_LOGGED or FULL recovery model and back up your log
periodically.
The proper way to shrink files is with DBCC SHRINKFILE. See the Books
Online for details. You should not need to do this as part of routine
maintenance.
Hope this helps.
Dan Guzman
SQL Server MVP
"Carlos" <wt_know@.hotmail.com> wrote in message
news:OueiitTPEHA.620@.TK2MSFTNGP10.phx.gbl...
> The MDF file is 800 MB but the LDF file is 5 GB. I run the "shrink
database"
> but this does not seems to reduce the mdf or ldf filesize.
> What I did is detach the database, delete the ldf file, re-attach the
> database to create a new ldf file. If I do not do so, the application
cannot
> work (hang!) because the ldf file is too huge and it takes ages to commit
a
> transaction. Is there a "better" way to control the ldf file like
> auto-purging ? Should I restrict the log file size to a specific filesize
> like 500MB ? Does this mean it will auto-purge each time it reach 500MB
for
> the ldf file ?
> Thanks
>
>|||Hi,
Instead of detaching the database, deleting the LDF file, and re-attaching
it, you should have tried the steps below:
Alter database <dbname> set single_user with rollback immediate
go
backup log <dbname> to disk='d:\backup\dbname.trn1'
go
dbcc shrinkfile('logical_log_name', truncateonly)
go
Alter database <dbname> set multi_user
After executing the above, you can run the command below to check the log
file size and usage:
dbcc sqlperf(logspace)
Like Dan suggested, go for the SIMPLE recovery model if your data is not
critical or you do not require transaction-log-based (point-in-time) recovery.
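On the question of restricting the log file to 500 MB: you can cap the file,
but a cap does not auto-purge anything. A minimal sketch, again using the
hypothetical MyDb / MyDb_Log names:
-- Cap the log file at 500 MB (hypothetical names). Once the log fills,
-- new transactions fail with a log-full error (9002) until the log is
-- backed up or the recovery model allows the space to be reused.
ALTER DATABASE MyDb
MODIFY FILE (NAME = MyDb_Log, MAXSIZE = 500MB)
GO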
Thanks
Hari
MCDBA
"Dan Guzman" <danguzman@.nospam-earthlink.net> wrote in message
news:uTGdG0TPEHA.2256@.TK2MSFTNGP10.phx.gbl...
> The way you manage your log file is driven by your database recovery plan.
> If your recovery plan is to restore from your last full backup and not
apply
> transaction log backups, then change your database recovery model to
SIMPLE.
> This will keep your log size reasonable by removing committed transactions
> from the log. The log will still need to be large enough to accommodate
> your largest single transaction. If you want to reduce potential data
loss,
> you should use the BULK_LOGGED or FULL recovery model and backup your log
> periodically.
> The proper way to shrink files is with DBCC SHRINKFILE. See the Books
> Online for details. You should not need to do this as part of routine
> maintenance.
> --
> Hope this helps.
> Dan Guzman
> SQL Server MVP
> "Carlos" <wt_know@.hotmail.com> wrote in message
> news:OueiitTPEHA.620@.TK2MSFTNGP10.phx.gbl...
> database"
> cannot
commit[vbcol=seagreen]
> a
filesize[vbcol=seagreen]
> for
>|||Thanks for the advices ! :-)
"Hari" <hari_prasad_k@.hotmail.com> wrote in message
news:%23MWo0lUPEHA.3328@.TK2MSFTNGP09.phx.gbl...
> Hi,
> Instead of detaching , delete the LDF file, Attach the database ytou
should
> have tried the below steps:-
> Alter database <dbname> set single_user with rollback immediate
> go
> backup log <dbname> to disk='d:\backup\dbname.trn1'
> go
> dbcc shrinkfile('logical_log_name',truncateon
ly)
> go
> Alter database <dbname> set multi_user
> After executing the above you can execute the below command check the log
> file size and usage,
> dbcc sqlperf(logspace)
> Like Dan suggested go for SIMPLE recovery model if your data is not
critical
> or you not require a transaction log based recovery (POINT_IN_TIME).
> Thanks
> Hari
> MCDBA
>
> "Dan Guzman" <danguzman@.nospam-earthlink.net> wrote in message
> news:uTGdG0TPEHA.2256@.TK2MSFTNGP10.phx.gbl...
plan.[vbcol=seagreen]
> apply
> SIMPLE.
transactions[vbcol=seagreen]
> loss,
log[vbcol=seagreen]
> commit
> filesize
500MB[vbcol=seagreen]
>

Sunday, March 25, 2012

cleaning out my transaction db (*.ldf)

I have a SQL database which is about 400 MB, but the transaction log is over 2 GB. How can I bring the size down? Does Enterprise Manager have some sort of shrink, compress, or delete tool that I can use?
Thank you,
Thomas|||Try DBCC SHRINKFILE.|||Read up on DBCC SHRINKFILE in the Holy Book (SQL Server Books Online)

Also, here is a reference:
http://www.dbforums.com/showthread.php?threadid=861115|||You need to back it up! After it has been backed up once or twice, you can use DBCC SHRINKFILE to reduce its size.|||If you still have a problem shrinking the log, please post back.
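For example, a minimal sketch of the backup-then-shrink sequence, assuming a
hypothetical database MyDb whose logical log file is MyDb_Log:
-- Back up the log so the inactive portion becomes reusable (hypothetical
-- names and path). This may need to be run more than once if the active
-- part of the log sits at the end of the file.
BACKUP LOG MyDb TO DISK = 'd:\backup\MyDb_log.trn'
GO
-- Then shrink the physical log file, here targeting 100 MB.
DBCC SHRINKFILE ('MyDb_Log', 100)
GO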

Sunday, March 11, 2012

Choices of creating database files

Working on a database structure on a SQL 2000 server, I have the MDF and LDF created.
I need to create NDF files to use 5 logical drives on the server. All
logical drives are located in SAN storage with RAID 10. When I create the NDF
files, should I create one file on each drive OR multiple files on
each drive? Which way is better for SQL Server performance?
Thanks for any recommendation!
Mike,
Like so many other concepts, it depends. What is the architecture of your
SAN? If it is of later technology, then your disk allocation may be
"virtualized" anyway. This means that you really don't have physical
control over which drives your data goes to (within the LUN) - the SAN will
determine that. For example, the HP EVA might span a RAID 10 configuration
across 100 drives in 2K blocks if so configured, even if you are allocating
only 10 GB. If you map 5 logical drives, the data corresponding to all 5
drives will be interleaved and spread out over the same 100 drives.
There may be other reasons for separating the NDF files (file backups,
process isolation, etc.). You may want to isolate the log and data for
snapshots, or other reasons, but I think any performance gain would be
nominal.
Also, allocating additional space is usually a straightforward process, but
deleting space usually requires deleting and re-adding the configured space.
If you break up your data files, undoubtedly some will be a larger size than
others, and this could result in a maintenance issue. You may be in a
situation where you want to take space from drive A and add it to drive B.
There could be some nominal performance gain from having multiple drives, due
to SQL Server having more I/O buffers, but it probably would not be noticeable.
Unless you want to have a more sophisticated file backup scheme, I would
start with two drives (for isolation purposes), one for the log and one for
the data, indexes and tempdb.
If your architecture is not of later technology, then it depends (again!).
What kind of SAN are you working with?
-- Bill
"Mike Torry" <MikeTorry@.discussions.microsoft.com> wrote in message
news:A2FD70DA-F8C1-4446-AB9F-4C5D73BB60EE@.microsoft.com...
> Working on a database structure on a SQL 2000 server, I have the MDF and LDF
> created.
> I need to create NDF files to use 5 logical drives on the server. All
> logical drives are located in SAN storage with RAID 10. When I create the
> NDF
> files, should I create one file on each drive OR multiple files on
> each drive? Which way is better for SQL Server performance?
> Thanks for any recommendation!