Tuesday, March 27, 2012

Clear/Purge Log files

The MDF file is 800 MB but the LDF file is 5 GB. I ran "shrink database",
but this does not seem to reduce the mdf or ldf file size.
What I did was detach the database, delete the ldf file, and re-attach the
database to create a new ldf file. If I do not do so, the application cannot
work (it hangs!) because the ldf file is so huge that it takes ages to commit
a transaction. Is there a "better" way to control the ldf file, like
auto-purging? Should I restrict the log file to a specific size, like 500 MB?
Does this mean it will auto-purge each time the ldf file reaches 500 MB?
Thanks
The way you manage your log file is driven by your database recovery plan.
If your recovery plan is to restore from your last full backup and not apply
transaction log backups, then change your database recovery model to SIMPLE.
This will keep your log size reasonable by removing committed transactions
from the log. The log will still need to be large enough to accommodate
your largest single transaction. If you want to reduce potential data loss,
you should use the BULK_LOGGED or FULL recovery model and back up your log
periodically.
The proper way to shrink files is with DBCC SHRINKFILE. See the Books
Online for details. You should not need to do this as part of routine
maintenance.
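For example, a minimal sketch of this route (assuming a database named MyDB whose logical log file name is MyDB_Log; both names are placeholders, and sp_helpfile shows the real logical name):
-- Switch to SIMPLE recovery so committed transactions are cleared from the log at checkpoints
ALTER DATABASE MyDB SET RECOVERY SIMPLE
GO
USE MyDB
GO
-- One-time shrink of the oversized log file to roughly 500 MB (target size is in MB)
DBCC SHRINKFILE ('MyDB_Log', 500)
GO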
Hope this helps.
Dan Guzman
SQL Server MVP
"Carlos" <wt_know@.hotmail.com> wrote in message
news:OueiitTPEHA.620@.TK2MSFTNGP10.phx.gbl...
> The MDF file is 800 MB but the LDF file is 5 GB. I run the "shrink
database"
> but this does not seems to reduce the mdf or ldf filesize.
> What I did is detach the database, delete the ldf file, re-attach the
> database to create a new ldf file. If I do not do so, the application
cannot
> work (hang!) because the ldf file is too huge and it takes ages to commit
a
> transaction. Is there a "better" way to control the ldf file like
> auto-purging ? Should I restrict the log file size to a specific filesize
> like 500MB ? Does this mean it will auto-purge each time it reach 500MB
for
> the ldf file ?
> Thanks
>
>|||Hi,
Instead of detaching , delete the LDF file, Attach the database ytou should
have tried the below steps:-
Alter database <dbname> set single_user with rollback immediate
go
backup log <dbname> to disk='d:\backup\dbname.trn1'
go
dbcc shrinkfile('logical_log_name', truncateonly)
go
Alter database <dbname> set multi_user
After executing the above, you can run the command below to check the log
file size and usage:
dbcc sqlperf(logspace)
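A quick way to find the logical log file name that the shrinkfile step needs (an aside; sp_helpfile is a standard system procedure, and <dbname> is the same placeholder as above):
use <dbname>
go
-- the 'name' column of the output is the logical file name to pass to dbcc shrinkfile
exec sp_helpfile
go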
Like Dan suggested, go for the SIMPLE recovery model if your data is not critical
or you do not require transaction-log-based (point-in-time) recovery.
Thanks
Hari
MCDBA
"Dan Guzman" <danguzman@.nospam-earthlink.net> wrote in message
news:uTGdG0TPEHA.2256@.TK2MSFTNGP10.phx.gbl...
> The way you manage your log file is driven by your database recovery plan.
> If your recovery plan is to restore from your last full backup and not
apply
> transaction log backups, then change your database recovery model to
SIMPLE.
> This will keep your log size reasonable by removing committed transactions
> from the log. The log will still need to be large enough to accommodate
> your largest single transaction. If you want to reduce potential data
loss,
> you should use the BULK_LOGGED or FULL recovery model and backup your log
> periodically.
> The proper way to shrink files is with DBCC SHRINKFILE. See the Books
> Online for details. You should not need to do this as part of routine
> maintenance.
> --
> Hope this helps.
> Dan Guzman
> SQL Server MVP
> "Carlos" <wt_know@.hotmail.com> wrote in message
> news:OueiitTPEHA.620@.TK2MSFTNGP10.phx.gbl...
> database"
> cannot
commit[vbcol=seagreen]
> a
filesize[vbcol=seagreen]
> for
|||Thanks for the advice! :-)
"Hari" <hari_prasad_k@.hotmail.com> wrote in message
news:%23MWo0lUPEHA.3328@.TK2MSFTNGP09.phx.gbl...
> Hi,
> Instead of detaching , delete the LDF file, Attach the database ytou
should
> have tried the below steps:-
> Alter database <dbname> set single_user with rollback immediate
> go
> backup log <dbname> to disk='d:\backup\dbname.trn1'
> go
> dbcc shrinkfile('logical_log_name',truncateon
ly)
> go
> Alter database <dbname> set multi_user
> After executing the above you can execute the below command check the log
> file size and usage,
> dbcc sqlperf(logspace)
> Like Dan suggested go for SIMPLE recovery model if your data is not
critical
> or you not require a transaction log based recovery (POINT_IN_TIME).
> Thanks
> Hari
> MCDBA
>
> "Dan Guzman" <danguzman@.nospam-earthlink.net> wrote in message
> news:uTGdG0TPEHA.2256@.TK2MSFTNGP10.phx.gbl...
plan.[vbcol=seagreen]
> apply
> SIMPLE.
transactions[vbcol=seagreen]
> loss,
log[vbcol=seagreen]
> commit
> filesize
500MB[vbcol=seagreen]
>

Clear the transaction log

Can someone tell me the procedure to clear out the transaction log so it
doesn't crash my server again in the future?
Thanks!
Jon
How did it "crash your server"?
Please post DDL, sample data and desired results.
See http://www.aspfaq.com/5006 for info.
"Jon Glazer" <jglazer.delete.me@.adconn.com> wrote in message
news:LuA2e.456$UA4.181@.fe1.columbus.rr.com...
> Can someone tell me the procedure to clear out the transaction log so it
> doesn't crash my server again in the future?
> Thanks!
> Jon
>
|||Well, the procedure is to back it up on a regular basis so the space can be
reused. Or set the database to Simple recovery mode if you don't want or need
log backups (a rough sketch follows the links below). These might be of interest:
http://www.nigelrivett.net/Transact...FileGrows_1.html Log File issues
http://www.support.microsoft.com/?id=317375 Log File Grows too big
http://www.support.microsoft.com/?id=110139 Log file filling up
http://www.mssqlserver.com/faq/logs-shrinklog.asp Shrink File
http://www.support.microsoft.com/?id=315512 Considerations for Autogrow and AutoShrink
http://www.support.microsoft.com/?id=256650 INF: How to Shrink the SQL Server 7.0 Tran Log
http://www.support.microsoft.com/?id=272318 INF: Shrinking Log in SQL Server 2000 with DBCC SHRINKFILE
http://www.karaszi.com/SQLServer/info_dont_shrink.asp Shrinking considerations
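As a rough sketch of the first suggestion above (the database name and backup path are placeholders, not from the thread), a regularly scheduled log backup looks like:
-- Back up the transaction log so SQL Server can reuse the space inside it
BACKUP LOG MyDB TO DISK = 'd:\backup\MyDB_log.trn'
GO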
Andrew J. Kelly SQL MVP
"Jon Glazer" <jglazer.delete.me@.adconn.com> wrote in message
news:LuA2e.456$UA4.181@.fe1.columbus.rr.com...
> Can someone tell me the procedure to clear out the transaction log so it
> doesn't crash my server again in the future?
> Thanks!
> Jon
>

Clear the resultset during batch processing

Hello SQL Experts,
I have a batch of Select statements which runs in a while loop, and I use
@@rowcount to get the rows returned for further manipulation.
The problem is that after some time, in the middle of the process, I get
the error below:
"The query has exceeded the maximum number of result sets that can be
displayed in the results grid. Only the first 100 result sets are
displayed in the grid."
Is there a way I can stop the results from being displayed in the message
tab, or clear the displayed result sets as the error message says? After
the error message, the task stops on its own.
I don't know if there is anything for this using a SET statement.
/soni
hi,
Does this mean, essentially, that you do not need the results generated by the SQLs?
If that is the case, can you send some sample SQLs so something close to your
needs can be suggested?
rgds
Falik
<chandresh.x.soni@sonyericsson.com> wrote in message
news:1164947463.012890.94570@f1g2000cwa.googlegroups.com...
> Hello SQL Experts,
> I have a batch of Select statements which runs in a while loop, and I use
> @@rowcount to get the rows returned for further manipulation.
> The problem is that after some time, in the middle of the process, I get
> the error below:
>
> "The query has exceeded the maximum number of result sets that can be
> displayed in the results grid. Only the first 100 result sets are
> displayed in the grid."
> Is there a way I can stop the results from being displayed in the message
> tab, or clear the displayed result sets as the error message says? After
> the error message, the task stops on its own.
> I don't know if there is anything for this using a SET statement.
> /soni
>
|||Hello Falik,
Thanks for lending a helping hand :)
Something like this:
Select T1.issue, T2.Project
from Issue T1, Project T2...
Where T1.Rank = 'A'
-- this will return 2100 rows
Select @ARank = @@rowcount
Select T1.issue, T2.Project
from Issue T1, Project T2...
Where T1.Rank = 'B'
-- this will return 1000 rows
Select @BRank = @@rowcount
Select T1.issue, T2.Project
from Issue T1, Project T2...
Where T1.Rank = 'C'
-- this will return 1000 rows
Select @CRank = @@rowcount
This code runs in a while loop. I cannot use the count()
function, as there is a data mismatch because of duplicate records in
some tables.
/soni
Falik Sher wrote:
> hi,
> Does this mean, essentially, that you do not need the results generated by the SQLs?
> If that is the case, can you send some sample SQLs so something close to your
> needs can be suggested?
> rgds
> Falik
>
> <chandresh.x.soni@sonyericsson.com> wrote in message
> news:1164947463.012890.94570@f1g2000cwa.googlegroups.com...
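One hedged sketch of a way around the grid limit, staying close to soni's pattern (the @dummy variable, its type, and the join condition are assumptions, not from the thread): assigning the select into a variable produces no result set, yet @@rowcount still reports the rows processed.
Declare @dummy int -- assumes Issue.issue fits an int; adjust to the real column type
-- No result set reaches the grid, but @@rowcount is still set
Select @dummy = T1.issue
from Issue T1, Project T2
Where T1.ProjectId = T2.ProjectId -- assumed join condition
and T1.Rank = 'A'
Select @ARank = @@rowcount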
|||Hi Soni,
Can you provide the sample data, expected results, and SQLs (you may use the pubs
database for this)? I can't understand why count cannot be used, or why group
by can't be used for this.
Down the line, we need to stop the unwanted result sets generated by the
selects.
thnx
Falik
<chandresh.x.soni@sonyericsson.com> wrote in message
news:1164955286.981884.286430@j44g2000cwa.googlegroups.com...
> Hello Falik,
> Thanks for lending a helping hand :)
> Something like this:
> Select T1.issue, T2.Project
> from Issue T1, Project T2...
> Where T1.Rank = 'A'
> -- this will return 2100 rows
> Select @ARank = @@rowcount
> Select T1.issue, T2.Project
> from Issue T1, Project T2...
> Where T1.Rank = 'B'
> -- this will return 1000 rows
> Select @BRank = @@rowcount
> Select T1.issue, T2.Project
> from Issue T1, Project T2...
> Where T1.Rank = 'C'
> -- this will return 1000 rows
> Select @CRank = @@rowcount
> This code runs in a while loop. I cannot use the count()
> function, as there is a data mismatch because of duplicate records in
> some tables.
> /soni
> Falik Sher wrote:
>
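To illustrate the count/group by question above (a hedged sketch with an assumed join condition, not soni's actual query): one grouped select returns all three counts in a single result set instead of three.
Select T1.Rank, count(*) as RowsFound
from Issue T1, Project T2
Where T1.ProjectId = T2.ProjectId -- assumed join condition
and T1.Rank in ('A', 'B', 'C')
Group by T1.Rank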

Clear SQL Server logs by running xp_cmdshell

How can I clear SQL Server logs by running the xp_cmdshell stored procedure?
(My SQL Server user is an admin.)
I don't want to use Enterprise Manager; I want to write a query and use xp_cmdshell.
Thanks in advance.
What logs are you talking about? SQL Server has error logs that you can cycle; the server keeps 5 logs active.

Or, for the transaction logs, you can use T-SQL to back up the transaction logs and truncate them.

But do you use the transaction logs, i.e. are you using them for disaster recovery or such?
You cannot turn the transaction log off entirely, but you can turn logging down. This is done by changing the database's recovery model from Full (all transactions logged) to Bulk-Logged (bulk operations minimally logged) or Simple (the log is truncated automatically at checkpoints, so it stays small).

There are also articles on SQLServerCentral.com and MSDN on using T-SQL to shrink the logs and reduce wasted space.

Remember that all Enterprise Manager is, is a front end that issues T-SQL commands to the database; all of its commands can be run from code.
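As a hedged sketch of both options in T-SQL (sp_cycle_errorlog is a standard system procedure; the database name and backup path are placeholders):
-- Close the current SQL Server error log and start a fresh one
EXEC master..sp_cycle_errorlog
GO
-- For the transaction log, a log backup lets SQL Server reuse the space
BACKUP LOG MyDB TO DISK = 'd:\backup\MyDB_log.trn'
GO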