Tuesday, March 27, 2012

Clear/Purge Log files

The MDF file is 800 MB but the LDF file is 5 GB. I ran "shrink database",
but this does not seem to reduce the mdf or ldf file size.
What I did was detach the database, delete the ldf file, and re-attach the
database to create a new ldf file. If I do not do so, the application cannot
work (it hangs!) because the ldf file is so huge that it takes ages to commit a
transaction. Is there a "better" way to control the ldf file, like
auto-purging? Should I restrict the log file to a specific size,
like 500 MB? Does this mean it will auto-purge each time the ldf file
reaches 500 MB?
Thanks

|||The way you manage your log file is driven by your database recovery plan.
If your recovery plan is to restore from your last full backup and not apply
transaction log backups, then change your database recovery model to SIMPLE.
This will keep your log size reasonable by removing committed transactions
from the log. The log will still need to be large enough to accommodate
your largest single transaction. If you want to reduce potential data loss,
you should use the BULK_LOGGED or FULL recovery model and backup your log
periodically.
The proper way to shrink files is with DBCC SHRINKFILE. See the Books
Online for details. You should not need to do this as part of routine
maintenance.
Hope this helps.
Dan Guzman
SQL Server MVP
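To make Dan's advice concrete, here is a minimal T-SQL sketch (the database name, logical log file name, and the 500 MB cap are placeholders, with the cap addressing the original question about restricting log size):

```sql
-- Assumption: the database is named MyDb and its log file's logical name is MyDb_Log.
-- Switch to SIMPLE recovery so committed transactions are cleared from the log
-- automatically at checkpoints (no point-in-time recovery after this).
ALTER DATABASE MyDb SET RECOVERY SIMPLE;
GO

-- One-off shrink of the (now mostly empty) log file down to ~500 MB.
DBCC SHRINKFILE (MyDb_Log, 500);
GO

-- Optionally cap future growth of the log file at 500 MB.
-- Note: this does NOT auto-purge; if the cap is reached while the log is full,
-- transactions that need more log space will fail until space is freed.
ALTER DATABASE MyDb
MODIFY FILE (NAME = MyDb_Log, MAXSIZE = 500MB);
GO
```

Capping the file answers the "restrict to 500 MB" question literally, but it is the SIMPLE model (or regular log backups under FULL) that actually keeps the log from growing.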
"Carlos" <wt_know@.hotmail.com> wrote in message
news:OueiitTPEHA.620@.TK2MSFTNGP10.phx.gbl...
> The MDF file is 800 MB but the LDF file is 5 GB. I run the "shrink
database"
> but this does not seems to reduce the mdf or ldf filesize.
> What I did is detach the database, delete the ldf file, re-attach the
> database to create a new ldf file. If I do not do so, the application
cannot
> work (hang!) because the ldf file is too huge and it takes ages to commit
a
> transaction. Is there a "better" way to control the ldf file like
> auto-purging ? Should I restrict the log file size to a specific filesize
> like 500MB ? Does this mean it will auto-purge each time it reach 500MB
for
> the ldf file ?
> Thanks
>
|||Hi,
Instead of detaching the database, deleting the LDF file, and re-attaching it, you should
have tried the steps below:
Alter database <dbname> set single_user with rollback immediate
go
backup log <dbname> to disk = 'd:\backup\dbname.trn1'
go
dbcc shrinkfile('logical_log_name', truncateonly)
go
Alter database <dbname> set multi_user
After executing the above, you can run the command below to check the log
file size and usage:
dbcc sqlperf(logspace)
As Dan suggested, go for the SIMPLE recovery model if your data is not critical
or you do not require transaction-log-based (point-in-time) recovery.
Thanks
Hari
MCDBA
"Dan Guzman" <danguzman@.nospam-earthlink.net> wrote in message
news:uTGdG0TPEHA.2256@.TK2MSFTNGP10.phx.gbl...
> The way you manage your log file is driven by your database recovery plan.
> If your recovery plan is to restore from your last full backup and not
apply
> transaction log backups, then change your database recovery model to
SIMPLE.
> This will keep your log size reasonable by removing committed transactions
> from the log. The log will still need to be large enough to accommodate
> your largest single transaction. If you want to reduce potential data
loss,
> you should use the BULK_LOGGED or FULL recovery model and backup your log
> periodically.
> The proper way to shrink files is with DBCC SHRINKFILE. See the Books
> Online for details. You should not need to do this as part of routine
> maintenance.
> --
> Hope this helps.
> Dan Guzman
> SQL Server MVP
> "Carlos" <wt_know@.hotmail.com> wrote in message
> news:OueiitTPEHA.620@.TK2MSFTNGP10.phx.gbl...
> database"
> cannot
commit[vbcol=seagreen]
> a
filesize[vbcol=seagreen]
> for
|||Thanks for the advice! :-)
"Hari" <hari_prasad_k@.hotmail.com> wrote in message
news:%23MWo0lUPEHA.3328@.TK2MSFTNGP09.phx.gbl...
> Hi,
> Instead of detaching , delete the LDF file, Attach the database ytou
should
> have tried the below steps:-
> Alter database <dbname> set single_user with rollback immediate
> go
> backup log <dbname> to disk='d:\backup\dbname.trn1'
> go
> dbcc shrinkfile('logical_log_name',truncateon
ly)
> go
> Alter database <dbname> set multi_user
> After executing the above you can execute the below command check the log
> file size and usage,
> dbcc sqlperf(logspace)
> Like Dan suggested go for SIMPLE recovery model if your data is not
critical
> or you not require a transaction log based recovery (POINT_IN_TIME).
> Thanks
> Hari
> MCDBA
>
> "Dan Guzman" <danguzman@.nospam-earthlink.net> wrote in message
> news:uTGdG0TPEHA.2256@.TK2MSFTNGP10.phx.gbl...
plan.[vbcol=seagreen]
> apply
> SIMPLE.
transactions[vbcol=seagreen]
> loss,
log[vbcol=seagreen]
> commit
> filesize
500MB[vbcol=seagreen]
>


Clear the transaction log

Can someone tell me the procedure to clear out the transaction log so it
doesn't crash my server again in the future?
Thanks!
Jon

|||How did it "crash your server"?
Please post DDL, sample data and desired results.
See http://www.aspfaq.com/5006 for info.
"Jon Glazer" <jglazer.delete.me@.adconn.com> wrote in message
news:LuA2e.456$UA4.181@.fe1.columbus.rr.com...
> Can someone tell me the procedure the clear out the transaction log so it
> doesn't crash my server again in the future?
> Thanks!
> Jon
|||Well, the procedure is to back it up on a regular basis so the log can reuse the
space, or set the database to the SIMPLE recovery model if you don't want or need log
backups. These might be of interest:
http://www.nigelrivett.net/Transact...ileGrows_1.html Log File issues
http://www.support.microsoft.com/?id=317375 Log File Grows too big
http://www.support.microsoft.com/?id=110139 Log file filling up
http://www.mssqlserver.com/faq/logs-shrinklog.asp Shrink File
http://www.support.microsoft.com/?id=315512 Considerations for Autogrow and AutoShrink
http://www.support.microsoft.com/?id=256650 INF: How to Shrink the SQL Server 7.0 Tran Log
http://www.support.microsoft.com/?id=272318 INF: Shrinking Log in SQL Server 2000 with DBCC SHRINKFILE
http://www.karaszi.com/SQLServer/info_dont_shrink.asp Shrinking considerations
Andrew J. Kelly SQL MVP
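The "back it up on a regular basis" advice boils down to a scheduled job along these lines (the database name and backup path are placeholders):

```sql
-- Assumption: database MyDb is in FULL recovery and d:\backup exists.
-- Run this on a schedule (e.g. every 15-30 minutes via SQL Server Agent).
-- Each log backup truncates the inactive portion of the log, so the file
-- can reuse its space instead of growing until it fills the disk.
BACKUP LOG MyDb
TO DISK = 'd:\backup\MyDb_log.trn'
WITH NOINIT;  -- append to the backup set rather than overwrite it
GO
```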
"Jon Glazer" <jglazer.delete.me@.adconn.com> wrote in message
news:LuA2e.456$UA4.181@.fe1.columbus.rr.com...
> Can someone tell me the procedure the clear out the transaction log so it
> doesn't crash my server again in the future?
> Thanks!
> Jon
>

Clear the resultset during batch processing

Hello SQL Experts,
I have a batch of SELECT statements which runs in a while loop, and I use
@@rowcount to get the number of rows returned for further processing.
The problem is that after some time, in the middle of the process, I get the
error below:
"The query has exceeded the maximum number of result sets that can be
displayed in the results grid. Only the first 100 result sets are
displayed in the grid."
Is there a way I can stop the results being displayed, or clear the
displayed result sets as the error message suggests? After the error
message the task stops on its own.
I don't know if there is anything using a SET statement.
/soni
hi,
This means essentially you do not need the results generated by these SQL
statements? Is this the case? Can you send some sample SQL so something can be
suggested close to your needs.
rgds
Falik
|||Hello Falik,
Thanks for showing a helping hand.
Something like this:
Select T1.issue, T2.Project
from Issue T1, Project T2...
Where T1.Rank = 'A'
-- this will return 2100 rows
Select @ARank = @@rowcount
Select T1.issue, T2.Project
from Issue T1, Project T2...
Where T1.Rank = 'B'
-- this will return 1000 rows
Select @BRank = @@rowcount
Select T1.issue, T2.Project
from Issue T1, Project T2...
Where T1.Rank = 'C'
-- this will return 1000 rows
Select @CRank = @@rowcount
This code runs inside a while loop. I cannot use the count()
function because there is a data mismatch caused by duplicate records in
some tables.
/soni
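A common workaround for the "too many result sets" error is to keep the @@ROWCOUNT bookkeeping while suppressing the grids entirely, by assigning inside the SELECT instead of returning rows. A sketch against the (assumed) tables from the fragment above:

```sql
-- Assumption: Issue/Project tables and join condition as in the fragment above.
DECLARE @dummy INT, @ARank INT;

-- Assigning to a variable produces no client result set,
-- but @@ROWCOUNT still reflects the number of qualifying rows.
SELECT @dummy = 1
FROM Issue T1, Project T2   -- join condition elided in the original post
WHERE T1.Rank = 'A';

SET @ARank = @@ROWCOUNT;
```

With no result sets emitted, the 100-result-set limit in the query grid never comes into play, and the loop can run to completion.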
|||Hi Soni,
Can you produce the sample data, expected result and SQL (you may use the pubs
database for this)? I can't understand why count or group
by can't be used for this.
Down the line we need to stop the unwanted result sets generated by the
selects.
thnx
Falik

Clear SQL Server logs by running xp_cmdshell

How can I clear the SQL Server logs by running the xp_cmdshell stored procedure?
(My SQL Server user is an admin.)
I don't want to use Enterprise Manager; I want to write a query and use xp_cmdshell.
Thanks in advance.

|||What logs are you talking about? SQL Server has error logs that you can cycle; the server keeps 5 logs active.

Or, for the transaction logs, you can use T-SQL to back up the transaction logs and truncate them.

But do you use the transaction logs, i.e. are you using them for disaster recovery or such?
You can also turn transaction logging down. This is done by changing the database's recovery model from Full (all transactions logged) to Bulk-Logged (bulk operations minimally logged) or Simple (committed transactions are cleared from the log automatically; note that even Simple is not "no logging").

There are also articles on SQLServerCentral.com and MSDN on using T-SQL to shrink the logs and reduce wasted space.

Remember that Enterprise Manager is just a front end for T-SQL commands to the database; all commands can be run from code.
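For the error logs specifically, cycling can be done entirely from a query window, with no xp_cmdshell needed; a minimal sketch (the database name and path are placeholders):

```sql
-- Close the current SQL Server error log and start a new one.
-- The old log is kept as ERRORLOG.1; the server retains the last few by number.
EXEC sp_cycle_errorlog;
GO

-- For the transaction log, back it up so the inactive portion can be reused.
-- Assumption: a database named MyDb in FULL recovery.
BACKUP LOG MyDb TO DISK = 'd:\backup\MyDb_log.trn';
GO
```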

Clear SQL error

I am using SQL 2000 and calling an SP from VB.NET code.
When an error occurs in the SP, I want to do some processing in the SP and do
not want to throw that error to the front end. How can I achieve that? Even after
checking @@error and doing the processing, the error still gets thrown to the front end.

|||Have a look at
http://www.sommarskog.se/error-handling-II.html
Roji. P. Thomas
Net Asset Management
http://toponewithties.blogspot.com
"Vikram" <aa@.aa> wrote in message
news:%23iBslxSBGHA.740@.TK2MSFTNGP12.phx.gbl...
>i am using sql 2000 and calling a sp from vb.net code.
> I want when a error occur in sp i want to do some processing in sp and do
> not want to throw that error to front end. how can i achieve that as after
> checking @.@.error and doing processing, still error gets thrown to front
> end
>|||Hi
http://www.sommarskog.se/error-handling-II.html
"Vikram" <aa@.aa> wrote in message
news:%23iBslxSBGHA.740@.TK2MSFTNGP12.phx.gbl...
>i am using sql 2000 and calling a sp from vb.net code.
> I want when a error occur in sp i want to do some processing in sp and do
> not want to throw that error to front end. how can i achieve that as after
> checking @.@.error and doing processing, still error gets thrown to front
> end
>|||You can configure this in your web.config file
regards
Thakkudu
Vikram wrote:
> i am using sql 2000 and calling a sp from vb.net code.
> I want when a error occur in sp i want to do some processing in sp and do
> not want to throw that error to front end. how can i achieve that as after
> checking @.@.error and doing processing, still error gets thrown to front end[/color
]|||:)
Roji. P. Thomas
Net Asset Management
http://toponewithties.blogspot.com
"Uri Dimant" <urid@.iscar.co.il> wrote in message
news:O4JapKTBGHA.3488@.TK2MSFTNGP10.phx.gbl...
> Hi
> http://www.sommarskog.se/error-handling-II.html
>
>
> "Vikram" <aa@.aa> wrote in message
> news:%23iBslxSBGHA.740@.TK2MSFTNGP12.phx.gbl...
>|||There's a setting in web.config that automagically adds error-handling to
stored procedures? :)
ML
http://milambda.blogspot.com/|||Also note that when you get a chance to upgrade to SQL Server 2005 this gets
much better because SQL Server 2005 introduces TRY...CATCH for Transact-SQL,
and also the ability to retrieve the error information (not just the error
number in @.@.ERROR) in Transact-SQL.
http://msdn2.microsoft.com/en-us/library/ms179465(en-US,SQL.90).aspx
Alan Brewer [MSFT]
Content Architect, SQL Server Documentation Team
SQL Server Developer Center: http://msdn.microsoft.com/sql
SQL Server TechCenter: http://technet.microsoft.com/sql/
This posting is provided "AS IS" with no warranties, and confers no rights.|||> There's a setting in web.config that automagically adds error-handling to
> stored procedures? :)
yep:
[BestPractices]
SafeProgramming=True

Clear Old Server Names Out Of SQL Server Service Manager

How can I Clear Old Server Names Out Of SQL Server Service
Manager. They are ghosts."Jay Kusch" <Jay.Kusch@.mm-games.com> wrote in message
news:067601c3c578$b3a92520$a001280a@.phx.gbl...
> How can I Clear Old Server Names Out Of SQL Server Service
> Manager. They are ghosts.
You mean the list of servers that display for selection to connect? I do not
believe that you can remove those... If you are talking about something
different, please respond with more detail.
Steve|||This list is kept in the registry
HKEY_CURRENT_USER\Software\Microsoft\MSSQLServer\Client\PrefServers
"Jay Kusch" <Jay.Kusch@.mm-games.com> wrote in message
news:01a601c3c584$82991010$a101280a@.phx.gbl...
> >--Original Message--
> >"Jay Kusch" <Jay.Kusch@.mm-games.com> wrote in message
> >news:067601c3c578$b3a92520$a001280a@.phx.gbl...
> >> How can I Clear Old Server Names Out Of SQL Server
> Service
> >> Manager. They are ghosts.
> >
> >You mean the list of servers that display for selection
> to connect? I do not
> >believe that you can remove those... If you are talking
> about something
> >different, please respond with more detail.
> >
> >Steve
> >
> >
> >
> >
> >.
> >
> Thanks Steve ...
> That was the answer that covers it all. We have had so
> many servers in and out of test and production that my
> server list is just unmanagable. Wish there were some way
> to clear the ghosts. Thanks and Happy Holidays!
> J. Kusch

Clear log after backup

Hello everybody,
I set up a backup routine to one of my SQL-Server databases.
It's a full backup, and I don't know why, but the log file is not cleared
after the backup. I thought the full backup, when executed, would clear the
database log. Is there something missing ?
EvandroFull backups do not truncate the log. You need to issue a log backup for
that or set the recovery mode to SIMPLE.
--
Andrew J. Kelly
SQL Server MVP
"Evandro Braga" <evandro_braga@.hotmail.com> wrote in message
news:uGI4TGn9DHA.2640@.TK2MSFTNGP10.phx.gbl...
> Hello everybody,
> I set up a backup routine to one of my SQL-Server databases.
> It's a full backup, and I don't know why, but the log file it's not
cleared
> after the backup. I thought the full backup, when executed, would clear
the
> database log. Is there something missing ?
>
> Evandro
>
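Andrew's two options, sketched in T-SQL (the database name and path are placeholders):

```sql
-- Option 1: back up the log periodically; each log backup truncates it
BACKUP LOG Sales TO DISK = N'D:\Backups\Sales_log.trn';

-- Option 2: switch to SIMPLE recovery so the log self-truncates at checkpoints
ALTER DATABASE Sales SET RECOVERY SIMPLE;
```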


clear identical submissions

i want to clear from my db the doubles and trebles submisions and keep a unique.my db looks like:

a.a code lastname firstname phone
----------------
1 101 smith john 23452345
2 101 smith john 23452345
3 123 black mary 57645756
4 654 white peter 45634564

i want to look like

a.a code lastname firstname phone
----------------
1 101 smith john 23452345
3 123 black mary 57645756
4 654 white peter 45634564

thank you .angeloOracle 9i,

delete from table
where NOT (rowid = (select min(rowid) from table group by KEY));

KEY is defined as a candidate key that is not enforced.|||dear sir
thank you for your reply.

message returned: Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression.
The statement has been terminated.

thank you in advance|||Select distinct columns into #temp from tableA
delete from tableA
insert into tableA select * from #temp
drop table #temp
alter table tableA add primary key (columns)|||I'd use something like:DELETE FROM table
WHERE EXISTS (SELECT *
FROM table AS b
WHERE b.id < table.id
AND b.code = table.code
AND b.lastname = table.lastname
AND b.firstname = table.firstname
AND b.phone = table.phone)

Note that I used the php tag to get a monospaced font, this is just standard SQL-92 syntax that should run on any SQL based product.

-PatP
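On SQL Server 2005 and later, the same de-duplication can be written with ROW_NUMBER(). A sketch against a hypothetical contacts table, using the columns from the example above:

```sql
WITH numbered AS (
    SELECT aa,
           ROW_NUMBER() OVER (
               PARTITION BY code, lastname, firstname, phone
               ORDER BY aa) AS rn
    FROM contacts
)
-- Deleting through the CTE removes every row but the first of each group
DELETE FROM numbered WHERE rn > 1;
```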

Clear error in for each loop container

Hi,

I have a "Data Flow Task" inside a "For Each Loop" container. The Data Flow Task processes files and updates the DB.

If one of the files is corrupted I want to move it to an error folder and continue with the next file. I have wired the red (failure) arrow to a Script Task which moves the file to the error folder, but the loop does not continue with the next file. How can I do that?

Any help

Set the Max Error count to 0 for the container. 0 = unlimited.|||Thanks Crispin, it works.

clear DB fields at midnight

I have a database named pricesDB. I would like to clear the prices column every day at midnight.
How should I do that?A scheduled task using either the Windows Scheduler or a SQL Server Agent job
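A sketch of the SQL Server Agent approach, assuming a dbo.prices table with a price column (both names are hypothetical); the statement below would be the job step, scheduled daily at 00:00:

```sql
-- Job step: clear the prices column for every row
UPDATE dbo.prices SET price = NULL;
```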

Clear data out of multiple tables.

I need to load a database with new data from an existing parallel system, but the database schema has changed and I did not make a script of the changes. Sure wish I had. So, now I would like to copy the data from the existing parallel system into the new SQL database that has the correct schema. I have built the new database from the existing changed database. Now I would like to know if there is an easy way to clear the data out of all the tables, then copy all the data from the old schema into the new schema's tables.
Thanks!
Vic(1) To create a blank DB from an existing DB:
generate a full script from the existing DB and run it. It is that easy.
(2) To populate data:
use DTS. Just a few clicks and you are done. All matching fields will be copied automatically. You can even map fields manually for non-matching fields if you feel like it.
(3) Reporting differences:
there are tools that can compare two DBs and generate a report of the differences.|||Thank you for your reply. Because I am quite new to SQL Server, I really don't know where to go to do those "few clicks." Could you please give me a little bit more direction? I'm using SQL Server 2005.
Thanks,|||Sorry, I assumed that you were on SQL 2K. I do not have much knowledge about SQL 2005. All I can say is that it certainly has options to generate scripts, and that it replaced DTS with Integration Services (legacy DTS packages can still be run from Management Studio). Others might help you with the details.|||Why not just drop this new database and attach/detach or backup/restore the version you want to move? After the move you want the schema and data to be identical in both databases, right? Do it in one go rather than schema then data.|||In SQL Server Management Studio:
Management >> Legacy >> Data Transformation Services.
They're fairly self-explanatory...
Add connections for both data sources and then add a Transform Data Task (black arrow/cog icon). Right-click your task and go to Properties.
You can write a simple SQL SELECT statement in the first tab, then select its destination in the 2nd, and then map the columns in the 3rd.

Hope that's of some help. Feel free to ask away!|||I just went back and re-read my original post and I see where I need to clarify a few points.

I'm working in SQL Server 2005. Both databases are in 2005. Older SQL Servers have not been involved.
Database one (SAC) was developed, then the need for a separate, but the same database was identified (we are opening another office in another state) So, we now have added a new database, VGS.

Some major modifications have been made to SAC, and VGS has been left alone, except that data has been being added for 6 to 8 weeks now. I should have made scripts of all the modifications to SAC, but did not. I now know better! :o

So, the challenge before me is to make a copy of SAC but with VGS data in it. I know what I want, but sure don't know my way around SQL Server enough to do it without some help.

Thanks in advance for helping.|||to generate scripts for all objects, you might try this free app I wrote, using SMO. The source is available so you can tweak it if it's not exactly what you are looking for:

http://www.elsasoft.org/tools.htm

it will also script all the data out (optionally) using bcp.exe.

I am guessing you are not using source control. you should be. scriptdb.exe will help you with that because it generates a separate file for each object, which you can then check in to your favorite source control system.

Clear connections to DB

Hello,
In enterprise manager when detaching DB, there is an option to clear open
connections without actually detaching a database.
I'm looking for an sp with the ability to clear connections.
I failed to find it in Books Online.
I'm using SQL 2000
I would be grateful for any help
Thanks,
GaryGary
KILL command
"Gary" <Gary@.discussions.microsoft.com> wrote in message
news:6E40C618-73B0-4BB0-9B8E-FB4A83BD5199@.microsoft.com...
> Hello,
> In enterprise manager when detaching DB, there is an option to clear open
> connections without actually detaching a database.
> I'm looking for the sp with the ability of clearing cconnections.
> I failed looking it up in books on line.
> I'm using SQL 2000
> I would be grateful for any help
> Thanks,
> Gary|||If the suggested KILL is too time-consuming as you have to kill all the
connections you can use:
ALTER DATABASE <Name> SET SINGLE_USER WITH ROLLBACK IMMEDIATE
HTH, jens Suessmeyer.|||Gary,
if you want to drop all the connections and prevent new ones:
"alter database YOURDB SET single_user with rollback immediate"
If you just want to drop all the existing connections without preventing new
ones, then you can cursor through master.dbo.sysprocesses, filter on
database and use exec to run the KILL command. If you want to go down this
route, have a go at the cursor and post it up.
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)|||Hello Uri,
Kill terminates a single process; I need to terminate all connections to a specific
database.
Thanks
Gary
"Uri Dimant" wrote:
> Gary
> KILL command
>
> "Gary" <Gary@.discussions.microsoft.com> wrote in message
> news:6E40C618-73B0-4BB0-9B8E-FB4A83BD5199@.microsoft.com...
> > Hello,
> >
> > In enterprise manager when detaching DB, there is an option to clear open
> > connections without actually detaching a database.
> > I'm looking for the sp with the ability of clearing cconnections.
> > I failed looking it up in books on line.
> >
> > I'm using SQL 2000
> >
> > I would be grateful for any help
> >
> > Thanks,
> > Gary
>
>|||Gary
KILL can be used to terminate a normal connection, which internally
terminates the transactions associated with the given SPID
"Gary" <Gary@.discussions.microsoft.com> wrote in message
news:B9D5EA96-C63D-452B-8082-62CFDF5CBC96@.microsoft.com...
> Hello Uri,
> Kill terminates process, I need to terminate all connections to the
> specific
> databse.
> Thanks
> Gary
> "Uri Dimant" wrote:
>> Gary
>> KILL command
>>
>> "Gary" <Gary@.discussions.microsoft.com> wrote in message
>> news:6E40C618-73B0-4BB0-9B8E-FB4A83BD5199@.microsoft.com...
>> > Hello,
>> >
>> > In enterprise manager when detaching DB, there is an option to clear
>> > open
>> > connections without actually detaching a database.
>> > I'm looking for the sp with the ability of clearing cconnections.
>> > I failed looking it up in books on line.
>> >
>> > I'm using SQL 2000
>> >
>> > I would be grateful for any help
>> >
>> > Thanks,
>> > Gary
>>
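For completeness, the cursor route Paul describes might look roughly like this on SQL 2000 (the database name is a placeholder; note that KILL needs dynamic SQL to accept a variable):

```sql
DECLARE @spid int, @cmd varchar(20);
DECLARE conns CURSOR FOR
    SELECT spid FROM master.dbo.sysprocesses
    WHERE dbid = DB_ID(N'YourDb') AND spid <> @@SPID;  -- skip our own connection
OPEN conns;
FETCH NEXT FROM conns INTO @spid;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @cmd = 'KILL ' + CAST(@spid AS varchar(10));
    EXEC(@cmd);  -- drop this connection
    FETCH NEXT FROM conns INTO @spid;
END
CLOSE conns;
DEALLOCATE conns;
```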



Clear Connections command

Scenario: You have a database that is in use and you want to detach it. You
use enterprise manager to Detach Database. A dialog box appears that lists
the Database status. It says there are 3 Connections using this database.
Next to the number of connections is a Clear button that will drop the active
connections so that you can continue the process of detaching the database.
Does anyone know the command equivalent to pressing the Clear button?
I don't want to use the graphical interface and I assume there must be some
way to issue a command that would do the same thing as pressing the Clear
button in the GUI.
I don't actually want to detach the database - I am just trying to clear all
connections other than my own. Once I finish clearing connections, I want to
issue a command to temporarily put the database into single user mode.
Alter Database FOO set SINGLE_USER with Rollback Immediate
Everything you want in one command. See the ALTER DATABASE command in BOL
for details.
To set it back to 'normal'
Alter Database FOO set MULTI_USER
Geoff N. Hiten
Microsoft SQL Server MVP
Senior Database Administrator
Careerbuilder.com
I support the Professional Association for SQL Server
www.sqlpass.org
"cathedr@.wa.state.gov" <cathedrwastategov@.discussions.microsoft.com> wrote
in message news:7EA72915-DF9B-437B-9E08-AE54D210AB9C@.microsoft.com...
> Scenario: You have a database that is in use and you want to detach it.
You
> use enterprise manager to Detach Database. A dialog box appears that
lists
> the Database status. It says there are 3 Connections using this database.
> Next to the number of connections is a Clear button that will drop the
active
> connections so that you can continue the process of detaching the
database.
> Does anyone know the command equivalent to pressing the Clear button?
> I don't want to use the graphical interface and I assume there must be
some
> way to issue a command that would do the same thing as pressing the Clear
> button in the GUI.
> I don't actually want to detach the database - I am just trying to clear
all
> connections other than my own. Once I finish clearing connections, I want
to
> issue a command to temporarily put the database into single user mode.

Clear connection history

Hello
Does anybody know how to clear the logon/connection history in SQL
Management Studio 2005.
Many thanks
DarrenDo you mean in the SQL Server Logs? You can use sp_cycle_errorlog to cycle
the log into Archive logs...
Mohit K. Gupta
B.Sc. CS, Minor Japanese
MCTS: SQL Server 2005
"Darren.Ratcliffe@.gmail.com" wrote:
> Hello
> Does anybody know how to clear the logon/connection history in SQL
> Management Studio 2005.
> Many thanks
> Darren
>



clear Cached reports in Visual studio

I am frequently going back and forth between making changes to my reports and previewing those changes, all from within Visual Studio, without publishing the reports each time.

The problem is that frequently changes that I make are not reflected in the preview window unless I either close out of visual studio completely or I wait some length of time, (I'm not sure how long exactly, but 1/2 an hour seems to always do the trick.)

Is there a way to clear any cache and force visual studio to completely reprocess a report?

At least when it is formatting changes I can identify whether the change has stuck, but when I'm fixing bugs in the code, I can't tell if I didn't fix it or if the change just hasn't taken effect.

The simplest method is to rebuild the report; after that, hit the refresh button.

Clear Cache via Trigger?

I have a dimension table that gets updated nightly. The dimension table is used by a ROLAP cube.

If you wanted to Clear Cache through the use of a trigger once the dimension table is updated, how would you do it? Is there an easy way to execute the XMLA ClearCache from within T-SQL?

If you have a way to call an external process from your procedure, you can use the ascmd utility to send any XMLA command to Analysis Server. (http://msdn2.microsoft.com/en-us/ms365187.aspx)

Edward.
--
This posting is provided "AS IS" with no warranties, and confers no rights.
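For reference, the XMLA that ascmd would send looks roughly like this (the database ID is a placeholder):

```xml
<ClearCache xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>MyOlapDb</DatabaseID>
  </Object>
</ClearCache>
```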

Clear Cache sqlrs.ReportingServices2005

I'm using the Reporting Service web service to display my reports to the user.
I've deleted some test reports from the server. I've also gone to the
server, searched the test report names, and deleted those.
When I display the available reports using the Reporting Services web
service, the test reports are still there.
How do I remove Reporting Service reports from the server?
Thanks
--
RandyGo to Report Manager, click on report, properties, delete button.
--
Bruce Loehle-Conger
MVP SQL Server Reporting Services
"randy1200" <randy1200@.newsgroups.nospam> wrote in message
news:F873554A-B145-4800-B93F-65B3F6AFC955@.microsoft.com...
> I'm using the Reporting Service web service to display my reports to the
> user.
> I've deleted some test reports from the server. I've also gone to the
> server, searched the test report names, and deleted those.
> When I display the available reports using the Reporting Services web
> service, the test reports are still there.
> How do I remove Reporting Service reports from the server?
> Thanks
> --
> Randy

Clear all the tables of a database

Is there a simple method to clear all the tables in a db, or to make a copy of the mdf file without the data?

I tried to use TRUNCATE TABLE but it doesn't work with foreign keys ?

Thanks.

You probably will have to write your own script for this to reflect the dependencies. As you already stated, TRUNCATE won't work with foreign keys, and an undocumented procedure like sp_msforeachtable won't care about the order of processing.

HTH, Jens K. Suessmeyer.

http://www.sqlserver2005.de
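One common script-free workaround is the undocumented sp_MSforeachtable procedure: disable all foreign keys first, delete, then re-enable with checking. Disabling the constraints up front sidesteps the ordering problem Jens mentions, though being undocumented it is unsupported, and DELETE logs every row, unlike TRUNCATE:

```sql
-- Disable all FK constraints, empty every table, then re-enable with validation
EXEC sp_MSforeachtable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL';
EXEC sp_MSforeachtable 'DELETE FROM ?';
EXEC sp_MSforeachtable 'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL';
```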

cleanup the tempdb

Is there currently a way to cleanup the tempdb?
Tried DBCC SHRINKFILE, but no luck.
Can't restart the server, because it's in production.
What makes tempdb big?
And tempdb is in Simple mode, but that is not truncating it.
ThanksDid you post to enough groups<g>? Tempdb is used for all sorts <pun
intended> of things during the normal operation of the database. What makes
it so big (by the way, how big is it?) depends on what you're doing, but it is
most likely the result of a large join or sort operation. You might want to
check to see if you have a long running open tran as well. If it got that
big once it is most likely going to get that big again (unless it was a
mistake) so you shouldn't bother to shrink it until you know for sure. Next
time you see a lot of activity in it you can investigate to see what is
happening at the time with profiler, sp_who etc.
--
Andrew J. Kelly
SQL Server MVP
"Abraham" <binu_ca@.yahoo.com> wrote in message
news:eADnDxBQDHA.1556@.TK2MSFTNGP10.phx.gbl...
> Is there currently a way to cleanup the tempdb?
> Tried dbcc shrinkfile , but no luck.
> Can't restart the server ,because it's on production.
> What makes tempDB big ?
> and tempDB is in Simple mode , but not truncating it .
> Thanks
>
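Checking for the long-running open transaction Andrew mentions is one line:

```sql
DBCC OPENTRAN (tempdb);  -- reports the oldest active transaction, if any
```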

Cleanup Task / xp_delete_file

Hello together,
I found some posts but not really a solution. Environment: SQL 2005 EE, SP1,
english version but german regional settings
The following code does nothing in the file system, but always states
success:
EXECUTE master.dbo.xp_delete_file 0,N'D:\test\',N'.bak',N'06/03/2006 15:06:29'
nor does it work with 'bak', 'trn', '*.trn' or '*.*', or the setting to
include subfolders.
Any ideas ?
MichaelOne problem is that xp_delete_file belongs to the sys schema
not the dbo schema.
-Sue
On Tue, 6 Jun 2006 06:17:01 -0700, Michael Roedeske
<MichaelRoedeske@.discussions.microsoft.com> wrote:

>Hello together,
>I found some posts but not really a solution. Environment: SQL 2005 EE, SP1
,
>english version but german regional settings
>The following code does nothing in the file system, but allways states
>success:
>EXECUTE master.dbo.xp_delete_file 0,N'D:\test',N'.bak',N'06/03/2006 15:06:
29'
>nor does it work with 'bak" ,'trn, '*.trn' or '*.*' or the seeting to
>include subfolders.
>Any ideas ?
>Michael
>|||Dear Sue,
even when I call the sp with the sys schema, nothing happens in the
filesystem.
Any other idea?
Regards
Michael
"Sue Hoegemeier" wrote:

> One problem is that xp_delete_file belongs to the sys schema
> not the dbo schema.
> -Sue
> On Tue, 6 Jun 2006 06:17:01 -0700, Michael Roedeske
> <MichaelRoedeske@.discussions.microsoft.com> wrote:
>
>|||Others have had the same issue - and others do not. I
haven't seen anyone post what solved the issue.
Try removing the dot from the extension - for example have
it list as just BAK without the dot before BAK. Make sure
the extension is exactly what you are using for your
backups.
It was supposed to be fixed in SP1, but you may also want to
try creating a cleanup task for each subdirectory.
People sometimes spend too much time troubleshooting
maintenance plans when they could just write scripts that have
more flexibility, in less time than they spend addressing
maintenance plan issues.
-Sue
On Wed, 7 Jun 2006 01:06:02 -0700, Michael Roedeske
<MichaelRoedeske@.discussions.microsoft.com> wrote:
[vbcol=seagreen]
>Dear Sue,
>even when I call the sp with the sys schema it happes nothing in the
>filesystem.
>Any other idea?
>Regards
>Michael
>"Sue Hoegemeier" wrote:
>
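Putting the suggestions above together, a call using the sys schema and an extension argument without the leading dot might look like this (the path and cutoff date are placeholders; xp_delete_file is undocumented, so parameter behavior may vary by build):

```sql
EXECUTE master.sys.xp_delete_file
    0,                       -- 0 = backup files, 1 = report files
    N'D:\test\',             -- folder to scan
    N'bak',                  -- extension without the dot
    N'2006-06-03T15:06:29';  -- delete files older than this
```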

Cleanup Task / xp_delete_file

Hello together,
I found some posts but not really a solution. Environment: SQL 2005 EE, SP1,
english version but german regional settings
The following code does nothing in the file system, but allways states
success:
EXECUTE master.dbo.xp_delete_file 0,N'D:\test\',N'.bak',N'06/03/2006 15:06:29'
nor does it work with 'bak" ,'trn, '*.trn' or '*.*' or the seeting to
include subfolders.
Any ideas ?
MichaelOne problem is that xp_delete_file belongs to the sys schema
not the dbo schema.
-Sue
On Tue, 6 Jun 2006 06:17:01 -0700, Michael Roedeske
<MichaelRoedeske@.discussions.microsoft.com> wrote:
>Hello together,
>I found some posts but not really a solution. Environment: SQL 2005 EE, SP1,
>english version but german regional settings
>The following code does nothing in the file system, but allways states
>success:
>EXECUTE master.dbo.xp_delete_file 0,N'D:\test\',N'.bak',N'06/03/2006 15:06:29'
>nor does it work with 'bak" ,'trn, '*.trn' or '*.*' or the seeting to
>include subfolders.
>Any ideas ?
>Michael
>|||Dear Sue,
even when I call the sp with the sys schema it happes nothing in the
filesystem.
Any other idea?
Regards
Michael
"Sue Hoegemeier" wrote:
> One problem is that xp_delete_file belongs to the sys schema
> not the dbo schema.
> -Sue
> On Tue, 6 Jun 2006 06:17:01 -0700, Michael Roedeske
> <MichaelRoedeske@.discussions.microsoft.com> wrote:
> >Hello together,
> >
> >I found some posts but not really a solution. Environment: SQL 2005 EE, SP1,
> >english version but german regional settings
> >
> >The following code does nothing in the file system, but allways states
> >success:
> >
> >EXECUTE master.dbo.xp_delete_file 0,N'D:\test\',N'.bak',N'06/03/2006 15:06:29'
> >
> >nor does it work with 'bak" ,'trn, '*.trn' or '*.*' or the seeting to
> >include subfolders.
> >
> >Any ideas ?
> >
> >Michael
> >
>|||Others have had the same issue - and others have not. I
haven't seen anyone post what solved the issue.
Try removing the dot from the extension - for example, have
it listed as just BAK, without the dot before BAK. Make sure
the extension is exactly what you are using for your
backups.
It was supposed to be fixed in SP1, but you may also want to
try creating a cleanup task for each subdirectory.
People sometimes spend too much time troubleshooting
maintenance plans when they could write scripts that have
more flexibility in less time than they spend addressing
maintenance plan issues.
-Sue
On Wed, 7 Jun 2006 01:06:02 -0700, Michael Roedeske
<MichaelRoedeske@.discussions.microsoft.com> wrote:
>Dear Sue,
>even when I call the sp with the sys schema it happes nothing in the
>filesystem.
>Any other idea?
>Regards
>Michael
>"Sue Hoegemeier" wrote:
>> One problem is that xp_delete_file belongs to the sys schema
>> not the dbo schema.
>> -Sue
>> On Tue, 6 Jun 2006 06:17:01 -0700, Michael Roedeske
>> <MichaelRoedeske@.discussions.microsoft.com> wrote:
>> >Hello together,
>> >
>> >I found some posts but not really a solution. Environment: SQL 2005 EE, SP1,
>> >english version but german regional settings
>> >
>> >The following code does nothing in the file system, but allways states
>> >success:
>> >
>> >EXECUTE master.dbo.xp_delete_file 0,N'D:\test\',N'.bak',N'06/03/2006 15:06:29'
>> >
>> >nor does it work with 'bak" ,'trn, '*.trn' or '*.*' or the seeting to
>> >include subfolders.
>> >
>> >Any ideas ?
>> >
>> >Michael
>> >
>>
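Pulling the fixes from this thread together, a call that is more likely to work uses the sys schema, the extension without a leading dot, and an unambiguous ISO date. This is only a sketch: xp_delete_file is undocumented, and the parameter meanings in the comments are assumptions based on this thread.

```sql
-- Delete old backup files (undocumented procedure; use with care).
-- Assumed parameters: file type (0 = backup file), folder,
-- extension (no leading dot), cutoff date, include-subfolders flag.
EXECUTE master.sys.xp_delete_file
    0,                        -- file type: backup
    N'D:\test\',              -- folder to clean
    N'bak',                   -- extension without the dot
    N'2006-06-03T15:06:29',   -- ISO 8601 cutoff, unambiguous under German regional settings
    1;                        -- include subfolders
```

Another reported cause of silent no-ops is that xp_delete_file only removes files it recognizes as SQL Server backups or maintenance-plan reports, regardless of their extension.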

Cleanup of unused database objects

Hi,
I've inherited a database with lots of unused objects (tables and
SPs). What I want to do is determine which objects have not been used
for the last 30 days and remove them from the database.
Is there a way to determine the last time a table was accessed or a
Stored Procedure was run. I've looked at the system tables but haven't
seen anything that indicates this. I can put a trace on, but that
would consume too many cycles of the machine.
Any ideas or help would be greatly appreciated.
Ed|||The way I have handled this is to run a trace (SQL Profiler) to see which
objects are accessed.
Store the Object_id's ... you can store the trace in a table if you want and
select them later ... this will give you a list of objects that are being
used.
-Lars
"Edward Roepe" <edward@.roepe.com> wrote in message
news:d383b36b.0401271358.3ac36806@.posting.google.com...
> Hi,
> I've inherited a database with lots of unused objects (tables and
> SPs). What I want to do is determine wich objects have not been used
> for the last 30 days and remove them from the database.
> Is there a way to determine the last time a table was accessed or a
> Stored Procedure was run. I've looked at the system tables but haven't
> seen anything that indicates this. I can put a trace on, but that
> would consume too many cycles of the machine.
> Any ideas or help would be greatly appreciated.
> Ed
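On SQL Server 2005 and later, a lighter-weight alternative to running a trace is the index-usage DMV, which records the last time each table or indexed view was read or written. A sketch; note the DMV is cleared when the instance restarts, so it only proves usage since the last restart:

```sql
-- Last read/write activity per table since the instance last restarted.
SELECT OBJECT_NAME(s.object_id) AS table_name,
       MAX(s.last_user_seek)   AS last_seek,
       MAX(s.last_user_scan)   AS last_scan,
       MAX(s.last_user_update) AS last_update
FROM sys.dm_db_index_usage_stats AS s
WHERE s.database_id = DB_ID()
GROUP BY s.object_id
ORDER BY table_name;
```

Tables that never appear in the DMV have not been touched at all since the restart; stored-procedure usage still needs a trace (or, on 2008 and later, sys.dm_exec_procedure_stats).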

Cleanup of Expired Snapshots

Do you have anonymous subscriptions? If so, the snapshots
are kept for the retention period of the subscription to
give anonymous subscriptions time to synchronize.
HTH,
Paul Ibison
Bill,
sp_helpdistributor will confirm the value you report (default is 72 hours)
so it might be worth running it just to check.
If the directorynames/filenames have changed, or if the system clock has
altered then I can understand it.
Regards,
Paul Ibison
|||Paul,
From sp_helpdistributor the max distrib retention is 720
and the min is set to 0. This is 720 hours (30 days). I
believe the system clock has not been altered.
I think the retention is working perfectly related to the
distribution database transactions, so I don't think the
retention value is the issue. It is just the snapshot
files that are not being cleaned up.
Do you know which stored procedure removes the snapshot
files and how it invokes the OS to delete the files?
Bill

>--Original Message--
>Bill,
>sp_helpdistributor will confirm the value you report
>(default is 72 hours)
>so it might be worth running it just to check.
>If the directorynames/filenames have changed, or if the
>system clock has
>altered then I can understand it.
>Regards,
>Paul Ibison
>
>.
>
|||Bill,
I believe the procedure you're looking for is sp_MSDelete_publisherdb_trans.
You can follow the logic from sp_MSdistribution_cleanup and
sp_MSdistribution_delete.
HTH,
Paul Ibison

Cleanup / Foreign key

I have an address table I set up so that I can start our address campaign.
There is a free-entry field called Business Type that I allowed the workers
to enter to describe the business type. Now that the entry is complete,
I'm in the cleanup phase. Now, my question is: I know I need to make Business
Type a foreign key and select distinct to insert them into their own table,
but should I do that before I clean up the inconsistencies or after. There
are about 2000 different business types, some are minor like - Car Sales -
Used and Car Sales _ Used. I'm creating a system that allows users to
search based on business type.
Before or after, why?|||Before!
Why? Do not allow bad data into the database in the first place, it is a
bitch getting it out.
Jay
<programmingcodeATjards.com> wrote in message
news:uVedQl9%23HHA.320@.TK2MSFTNGP04.phx.gbl...
>I have an address table I set up so that I can start our address campaign.
>There is a field (open entry) called Business Type that I allowed the works
>to enter to describe the business type. Now that the entry is complete ,
>I'm in cleanup phase. Now, my question is: I know I need to make Business
>Type a foreign key and select distinct to insert them into their own table,
>but should I do that before I clean up the inconsistencies or after. There
>are about 2000 different business types, some are minor like - Car Sales -
>Used and Car Sales _ Used. I'm creating a system that allows users to
>search based on business type.
> Before or after, why?
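Jay's "before" answer can be sketched as a migration. The table name Address and the constraint names below are made up for illustration; merging near-duplicates such as "Car Sales - Used" vs "Car Sales _ Used" still has to happen first, by hand or with UPDATE passes.

```sql
-- 1. Build a lookup table from the cleaned free-text values.
CREATE TABLE BusinessType (
    BusinessTypeId int IDENTITY(1,1) PRIMARY KEY,
    Name varchar(100) NOT NULL UNIQUE
);

INSERT INTO BusinessType (Name)
SELECT DISTINCT BusinessType FROM Address;

-- 2. Point the address table at the lookup and enforce it.
ALTER TABLE Address ADD BusinessTypeId int NULL;

UPDATE a
SET a.BusinessTypeId = bt.BusinessTypeId
FROM Address a
JOIN BusinessType bt ON bt.Name = a.BusinessType;

ALTER TABLE Address
    ADD CONSTRAINT FK_Address_BusinessType
    FOREIGN KEY (BusinessTypeId) REFERENCES BusinessType (BusinessTypeId);

-- 3. After verifying, drop the old free-text column.
-- ALTER TABLE Address DROP COLUMN BusinessType;
```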

cleansing data

I have a table with 1.5 million rows. Each field has a text qualifier of " (double quote). Could you please tell me if and how I can write a script to remove the quotes from each field?
Thank you. ;)|||Using which database engine? The string operations aren't very standard between engines, so which engine makes a considerable difference.

-PatP|||Using MS SQL Server 2000|||Hi mary10k, try this
update tablename set fieldname = replace(fieldname, '"', '')

Madhivanan

cleaning wrong PK FK

Hello

my 2 tables in MS SQL 2000

Report :
Report_id (PK)
name

Product :
Product_id (PK)
Report_id (FK)
name

the Foreign Key and Primary Key have been added later (when the tables were already full)
Product has a few million rows and Report a few tens of thousands

now I want to clean the 2 tables and remove all the lines which are not connected by PK > FK or FK > PK

i am trying :

DELETE FROM Product WHERE (Product.Report_id NOT IN (SELECT Report.Report_id FROM Report))

DELETE FROM Report WHERE (Report.Report_id NOT IN (SELECT Product.Report_id FROM Product))

but the query fails: timeout!

how can I do it ?

thank you|||Here are two alternate methods:
--Method #1: EXISTS
delete
from Product
where not exists (select * from Report where Report.Report_id = Product.Report_id)

--Method #2: LEFT OUTER JOIN
delete
from Product
left outer join Report on Product.Report_id = Report.Report_id
where Report.Report_id is null
In either method, make sure Report_id is indexed in both tables.|||thank you BlindMan

on the 2nd method i am getting :
Incorrect syntax near the keyword 'left'.|||Post your code.|||that one

delete
from Product
left outer join Report on Product.Report_id = Report.Report_id
where Report.Report_id is null

Incorrect syntax near the keyword 'left'

you said : in either method, make sure Report_id is indexed in both tables.

they are PK to FK but they are not indexed
how can I do it when the tables are already full?

ALTER TABLE create index ?

thank's a lot|||I'm assuming that you defined the PK and FK in your head, but haven't done anything with the database. A PK that doesn't exist using a PRIMARY KEY definition is only a good intention from my perspective. ;) I would suggest using something like:CREATE INDEX dropme01 ON Report (Report_Id)
CREATE INDEX dropme02 ON Product (Report_Id)

DELETE FROM Report
WHERE NOT EXISTS (SELECT *
FROM Product
WHERE Product.Report_Id = Report.Report_Id)

DELETE FROM Product
WHERE NOT EXISTS (SELECT *
FROM Report
WHERE Report.Report_id = Product.Report_Id)

DROP INDEX Report.dropme01
DROP INDEX Product.dropme02

ALTER TABLE Report
ADD CONSTRAINT XPKReport
PRIMARY KEY (Report_Id)

ALTER TABLE Product
ADD CONSTRAINT XPKProduct
PRIMARY KEY (Product_Id)

ALTER TABLE Product
ADD CONSTRAINT XFK01Report
FOREIGN KEY (Report_Id)
REFERENCES Report (Report_Id)
-PatP|||Pat, FK and PK are already in the tables|||Pat, FK and PK are already in the tables
Are you having any problem now?|||with your first method it works very well
i was just wondering why it doesnt with the second method

but it works ...

thanks a lot|||Public kya Time pass karne aati hai kya idhar?|||Public kya Time pass karne aati hai kya idhar?

No Hindi, man. I think English would be more appropriate for anything that you post here.

He was saying, "Do people come here only to pass time?"|||No Hindi, man. I think English would be more appropriate for anything that you post here.

He was saying, "Do people come here only to pass time?"|||Lol - well I think we all know the answer to that.

Joydeep - you are becoming the SQL Server Forum Official Translator (Asian Languages Division) :)|||Lol - well I think we all know the answer to that.

Joydeep - you are becoming the SQL Server Forum Official Translator (Asian Languages Division) :)
........;)|||Sorry. There was a syntax error in my second example. This should work:
delete Product
from Product
left outer join Report on Product.Report_id = Report.Report_id
where Report.Report_id is null|||i try it thank you

Sunday, March 25, 2012

cleaning up system objects left by a merge repl.

Hi,
After disabling publishing on my server, there were
numerous merge replication related objects.
How do I clean them up. I tried to drop them, but I get a
message saying I am trying to drop system objects, and the
effort fails.
Thanks,
Sang
Sang,
try sp_removedbreplication (assuming the database no longer contains any
publications/subscriptions).
Hilary Cotter sent me a link to a script he created at http://www.ava.co.uk
(technical resouces section) that you might want to look at, if the above
stored proc doesn't remove all the objects.
HTH,
Paul Ibison

Cleaning up server logins

I have a host of server logins, NT and SQL access level on my server. I'd
like to iterate through the databases and find users who have no logins to
non-system databases. Using master.dbo.sysdatabases, I can retrieve a list
of all databases on the server, but not the type (System or User), unless a
SID of 0x01 is an appropriate measure of a system database. After obtaining
a list of databases, how can I iterate through them with T-SQL code?|||Using cursors.
Example:
use northwind
go
declare @db sysname

declare databases_cursor cursor
    local static read_only
for
    select [name]
    from master..sysdatabases
    where dbid > 6  -- skip system databases (master=1..msdb=4) and the samples pubs and Northwind
    order by [name]

open databases_cursor
while 1 = 1
begin
    fetch next from databases_cursor into @db
    if @@error <> 0 or @@fetch_status <> 0 break
    print @db
end
close databases_cursor
deallocate databases_cursor
go
"Kevin Bowker" wrote:

> I have a host of server logins, NT and SQL access level on my server. I'd
> like to iterate through the databases and find users who have no logins to
> non-system databases. Using master.dbo.sysdatabases, I can retrieve a list
> of all databases on the server, but not the type (System or User), unless a
> SID of 0x01 is an appropriate measure of a system database. After obtaining
> a list of databases, how can I iterate through them with T-SQL code?
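On SQL Server 2005 and later, the catalog view sys.databases makes this simpler than sysdatabases: the first four database_id values are always the system databases, so a plain query (a sketch) replaces both the SID heuristic and, for listing purposes, the cursor:

```sql
-- User databases only: master=1, tempdb=2, model=3, msdb=4 are the system databases.
SELECT name
FROM sys.databases
WHERE database_id > 4
ORDER BY name;
```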

Cleaning up merge metadata manually is not working

Hi all,

I'm trying delete metadata of a sql 2005 sp1 subscriber from a sql 2005 sp1 merge publication, but is not working, the "retention" parameter for the publication is 999 and this is the code I'm using:

declare @num_genhistory_rows int,
        @num_contents_rows int,
        @num_tombstone_rows int
declare @retcode smallint

--select count(*) from msmerge_contents
-- records before 2,633,848

exec @retcode = sys.sp_mergemetadataretentioncleanup
     @num_genhistory_rows OUTPUT, @num_contents_rows OUTPUT, @num_tombstone_rows OUTPUT

select retcode = @retcode
select num_genhistory_rows = @num_genhistory_rows
select num_contents_rows = @num_contents_rows
select num_tombstone_rows = @num_tombstone_rows

--select count(*) from msmerge_contents
-- records after 2,633,8

Results:

retcode              0
num_genhistory_rows  0
num_contents_rows    0
num_tombstone_rows   0

Does somebody have any idea why this is not working?

I did check "sp_mergemetadataretentioncleanup " and I note that is using a function to calculate the limit date, but I could not testing because it give me the below error :

declare @curdate datetime, @cutoffdate datetime
select @curdate = getdate()
select @cutoffdate = null

-- find max retention of all pubs the article belongs to.
-- add some safety margin to compensate for different clock speeds
select @cutoffdate = min(sys.fn_subtract_units_from_date(isnull(retention,0), retention_period_unit, @curdate))
from dbo.sysmergepublications
where pubid in (select pubid from dbo.sysmergearticles where nickname = 5088000)

select @cutoffdate

and this is the message error:

Msg 4121, Level 16, State 1, Line 7

Cannot find either column "sys" or the user-defined function or aggregate "sys.fn_subtract_units_from_date", or the name is ambiguous.

I looked this function but I didn't find it.

any help will be appreciated !

Firstly, if your retention is 999 days, it means that only metadata that is 1000 days old will be cleaned up (automatically or manually). So do you have data that old in the first place?

If you want to cleanup metadata, set the retention to a lower value and you will start seeing the metadata getting cleaned up automatically when merge anget runs.

The sys.xxx functions are internal functions that reside in the system resource database and hence cannot be called explicitly by a user.

|||

Thanks for your response Mahesh,

Here is the complete story about 999

http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=1148225&SiteID=1

I suspect that if I change the "retention" parameter the subscriptions will expire, is this possible with sql 2005 SP1 ?

|||

the retention cleanup will take time on the first call because it will need to do some real cleanup.

However, after this, every time the merge agent runs, this proc will be called. But the delta to clean up will be much smaller, so it should not take that long, unless of course your data load every day is very huge.

So now, if you set it to 999 and not enough metadata is generated, you will see nothing, or less metadata, cleaned up.


Cleaning up hardcode sections

Hey,

Does anyone know of a neat and easy way to modify this section of hardcode:

SELECT CASE dbo.requestsbyyeartemp.themonth when '1' then 'January' when '2' then 'February' when '3' then 'March'
when '4' then 'April' when '5' then 'May' when '6' then 'June' when '7' then 'July' when '8' then 'August'
when '9' then 'September' when '10' then 'October' when '11' then 'November' when '12' then 'December' end as 'themonth', etc...

Requestsbyyeartemp is a table where the numbers corresponding to the months are stored. This statement is used to make a new table where the months are stored with the proper names and not numbers.

I would sooner not have any hardcode at all if there is a simple way to do it.

Thanks|||SELECT DATENAME(month, dbo.requestsbyyeartemp.themonth) AS 'themonth'|||Would this work if the <date> portion of DATENAME(<datepart>, <date>) is only a single number?

I ask this because I keep getting January no matter what number 'themonth' is?|||Any more suggestions anyone?|||How about:
SELECT DateName(month, DateAdd(month, dbo.requestsbyyeartemp.themonth, '1950-12-01')) AS themonth
-PatP
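The reason the first suggestion always returns January: when an integer is passed where a datetime is expected, SQL Server treats it as that many days after 1900-01-01, so the values 1 through 12 all land in January 1900. Pat's version instead adds the month number to a December base date (the base date is the one from the post), for example:

```sql
-- themonth = 3: '1950-12-01' + 3 months = 1951-03-01, so DATENAME returns 'March'
SELECT DATENAME(month, DATEADD(month, 3, '19501201')) AS themonth;
```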

Cleaning up goldmine Data

Hi,
We are running version 6.5 of Goldmine, which I am migrating to CRM
3.0 - I used access to get the data into SQL, but am now stuck with a
Last Name Column and a Name Column. To get the data into CRM Server
(which the boss is very keen to have done ASAP) I need to have First
Name and Last Name.
Is there an easy (!?) way to get the SQL server to remove the last
name data from name data? For example as it stands at the moment I
would have two cells:
Last Name: Smith
Name: John Smith
What I want to end up with is:
First Name: John
Last Name: Smith
Any suggestions gratefully received!
Many thanks
BlairIf there are only ever two names (one first name followed by one last name,
separated by a space or other distinct charactere) you can simply use
SUBSTRING and CHARINDEX (maybe even PATINDEX). Books Online has several
examples.
ML
--
Matija Lah, SQL Server MVP
http://milambda.blogspot.com/
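ML's SUBSTRING/CHARINDEX suggestion can be sketched like this, assuming the Name column always holds exactly one first name, a space, and the last name (the table name Contacts is made up for illustration):

```sql
-- First name = everything before the first space;
-- names without a space fall back to the whole value.
SELECT
    Name,
    CASE WHEN CHARINDEX(' ', Name) > 0
         THEN LEFT(Name, CHARINDEX(' ', Name) - 1)
         ELSE Name
    END AS FirstName,
    LastName
FROM Contacts;
```

Middle names, suffixes, and multi-word surnames will still need manual review.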

cleaning up duplicates

I have a table with column a, b, c, and d
column b has several duplicates and I want to clean this up.
a b c d
=============
1 name1 aa gg
2 name2 bb hh
3 name3 cc ii
4 name3 dd jj
5 name4 ee kk
6 name5 ff ll
etc.
I'm guessing I would need to select all the unique rows from the table,
put them into temp_table1, select the rows with the duplicates as
single rows into temp_table2, and join those two back into the original
table.
would this be the right idea? and if so,
exactly how would I accomplish this?
any help would be greatly appreciated!|||create table #test
(
col1 int not null primary key,
col2 varchar(10) not null,
col3 char(1) not null
)
insert into #test values (1,'name1','a')
insert into #test values (2,'name2','b')
insert into #test values (3,'name2','y')
insert into #test values (4,'name3','c')
insert into #test values (5,'name4','f')
select * from #test
where col1=(select max(col1) from #test t where t.col2=#test.col2)
order by col1
<guilesf2@.hotmail.com> wrote in message
news:1141542444.740452.81000@.e56g2000cwe.googlegroups.com...
>I have a table with column a, b, c, and d
> column b has several duplicates and I want to clean this up.
> a b c d
> =============
> 1 name1 aa gg
> 2 name2 bb hh
> 3 name3 cc ii
> 4 name3 dd jj
> 5 name4 ee kk
> 6 name5 ff ll
> etc.
>
> I'm guessing I would need to select all the unique rows from the table,
> put them into temp_table1, select the rows with the duplicates as
> single rows into temp_table2, and join those two back into the original
> table.
> would this be the right idea? and if so,
> exactly how would I accomplish this?
> any help would be greatly appreciated!
>|||thanks a lot! just for my understanding, what would be the reason the
table
needs that alias-type thing in order to work? just curious and trying
to learn.
again, thanks a lot|||If your goal is to clean up only column b, then you probably need to
normalize the table further. You will need to store unique names in a
separate table called NAMES (id, name) and store the id in column b.
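The suggested normalization, combined with the earlier keep-the-highest-col1 rule, might look like this sketch (the NAMES layout follows the post; everything else is illustrative):

```sql
-- 1. Remove duplicates, keeping the row with the highest col1 per col2.
--    The alias t is what lets the subquery refer to the outer row's col2.
DELETE t
FROM #test t
WHERE t.col1 < (SELECT MAX(col1) FROM #test x WHERE x.col2 = t.col2);

-- 2. Move the now-unique names into their own table...
CREATE TABLE NAMES (id int IDENTITY(1,1) PRIMARY KEY,
                    name varchar(10) NOT NULL UNIQUE);
INSERT INTO NAMES (name) SELECT DISTINCT col2 FROM #test;

-- 3. ...and replace col2 with the id.
ALTER TABLE #test ADD name_id int NULL;
UPDATE t SET t.name_id = n.id
FROM #test t JOIN NAMES n ON n.name = t.col2;
```

The alias comment in step 1 also answers the "alias-type thing" question above: without the alias, the subquery could not correlate against the outer reference to the same table.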