
Tuesday, March 27, 2012

Cleanup Task / xp_delete_file

Hello everyone,
I found some posts but not really a solution. Environment: SQL 2005 EE, SP1,
English version but German regional settings.
The following code does nothing in the file system, but always reports
success:
EXECUTE master.dbo.xp_delete_file 0,N'D:\test\',N'.bak',N'06/03/2006 15:06:29'
Nor does it work with 'bak', 'trn', '*.trn' or '*.*', or with the setting to
include subfolders.
Any ideas?
Michael
|||One problem is that xp_delete_file belongs to the sys schema,
not the dbo schema.
-Sue
On Tue, 6 Jun 2006 06:17:01 -0700, Michael Roedeske
<MichaelRoedeske@.discussions.microsoft.com> wrote:

>Hello everyone,
>I found some posts but not really a solution. Environment: SQL 2005 EE, SP1,
>English version but German regional settings.
>The following code does nothing in the file system, but always reports
>success:
>EXECUTE master.dbo.xp_delete_file 0,N'D:\test\',N'.bak',N'06/03/2006 15:06:29'
>Nor does it work with 'bak', 'trn', '*.trn' or '*.*', or with the setting to
>include subfolders.
>Any ideas?
>Michael
>
|||Dear Sue,
even when I call the SP with the sys schema, nothing happens in the
file system.
Any other ideas?
Regards
Michael
"Sue Hoegemeier" wrote:

> One problem is that xp_delete_file belongs to the sys schema
> not the dbo schema.
> -Sue
> On Tue, 6 Jun 2006 06:17:01 -0700, Michael Roedeske
> <MichaelRoedeske@.discussions.microsoft.com> wrote:
>
>
|||Others have had the same issue - and others do not. I
haven't seen anyone post what solved it.
Try removing the dot from the extension - for example, have
it listed as just BAK, without the dot before BAK. Make sure
the extension is exactly what you are using for your
backups.
It was supposed to be fixed in SP1, but you may also want to
try creating a cleanup task for each subdirectory.
People sometimes spend too much time troubleshooting
maintenance plans when they could just write scripts that have
more flexibility, in less time than they spend addressing
maintenance plan issues.
-Sue
On Wed, 7 Jun 2006 01:06:02 -0700, Michael Roedeske
<MichaelRoedeske@.discussions.microsoft.com> wrote:
>Dear Sue,
>even when I call the SP with the sys schema, nothing happens in the
>file system.
>Any other ideas?
>Regards
>Michael
>"Sue Hoegemeier" wrote:
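For reference, a sketch of the call with the suggestions from this thread applied: the sys schema, the extension without the leading dot, and an unambiguous date literal. The trailing subfolder flag is an assumption based on what the maintenance plan cleanup task generates, and xp_delete_file is undocumented, so verify the behaviour on a test folder first. It also reportedly checks file headers and skips files that are not genuine SQL Server backups, which alone could explain "success" with nothing deleted.

EXECUTE master.sys.xp_delete_file
    0,                       -- file type: 0 = backup files
    N'D:\test\',             -- folder to clean (trailing backslash)
    N'bak',                  -- extension without the leading dot
    N'2006-06-03T15:06:29',  -- delete matching files older than this cutoff
    1;                       -- include first-level subfolders (assumption)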

Monday, March 19, 2012

Choosing the PK

If you have two fields that together form a unique identifier, would it be
better to use them as PK or is it better to create another field (ex. Auto
Increment) and use that.
Especially if there are many tables that contain a FK to this table. It seems
a bit impractical to keep several copies of these fields in several tables,
even though they are used as a PK<->FK.
(Assuming the two fields cannot be null)
What are your thoughts from experience?
Hi,
I would go with an identity column with a clustered index on it. This will
give you a cleaner data model and better data retrieval.
Thanks
Hari
SQL Server MVP
"Ash" wrote:

> If you have two fields that together form a unique identifier, would it be
> better to use them as PK or is it better to create another field (ex. Auto
> Increment) and use that.
> Especially if there are many tables that contain a FK to this table. It seems
> a bit impractical to keep several copies of these fields in several tables,
> even though they are used as a PK<->FK.
> (Assuming the two fields cannot be null)
> What are your thoughts from experience?
>
|||Ash,
in most cases I would say that using an identity column as the PK suits your
situation well. Consider an extreme case: you have two columns as the
components of the key, each char(4000). In order to do a join, you use the
index that's created by setting up the PK, but each index page you read
tells you only whether the next read should go up or down in the tree. On
the other hand, if you use an identity column, a bigint key is only 8 bytes,
so your index search is much shorter. Real life is not so extreme, but this
illustrates the point.
The same principle applies where you do not want to use an identity key column.
If your key components are all narrow, and there are not many columns, AND
the values of those columns are frequently used in the child tables' access,
don't you think you would rather have them in those tables?
DB design and performance go hand in hand, and their optimization depends on
your situation.
hth
"Ash" <Ash@.discussions.microsoft.com> wrote in message
news:F3C186BB-70D3-4316-A17F-B676D674CC25@.microsoft.com...
> If you have two fields that together form a unique identifier, would it be
> better to use them as PK or is it better to create another field (ex. Auto
> Increment) and use that.
> Especially if there are many tables that contain a FK to this table. It
> seems
> a bit impractical to keep several copies of these fields in several
> tables,
> even though they are used as a PK<->FK.
> (Assuming the two fields cannot be null)
> What are your thoughts from experience?
>
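To put rough numbers on the width argument, here is a small demo (the tables below are made up; note that SQL Server caps an index key at 900 bytes, so the char(4000) case above is rhetorical and char(400) is used to make the same point):

-- Wide composite key: each key entry is roughly 800 bytes, so an 8 KB index
-- page holds only about ten keys and the B-tree gets deep quickly.
CREATE TABLE dbo.WideKeyDemo (
    K1 char(400) NOT NULL,
    K2 char(400) NOT NULL,
    Payload char(10) NULL,
    CONSTRAINT PK_WideKeyDemo PRIMARY KEY CLUSTERED (K1, K2)
);

-- Narrow surrogate key: an 8-byte bigint lets hundreds of entries fit on each
-- index page, so seeks and joins touch far fewer pages. The natural key is
-- still enforced, just once, by a unique constraint.
CREATE TABLE dbo.NarrowKeyDemo (
    ID bigint IDENTITY(1,1) NOT NULL
        CONSTRAINT PK_NarrowKeyDemo PRIMARY KEY CLUSTERED,
    K1 char(400) NOT NULL,
    K2 char(400) NOT NULL,
    Payload char(10) NULL,
    CONSTRAINT UQ_NarrowKeyDemo UNIQUE (K1, K2)
);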
|||I see your point.
So in which scenario would the join run faster (I'm talking about > 2,000,000
records in each table)?
1) ID is the PK; F1 & F2 are unique and indexed
table1              table2
ID <IDENTITY>       ID <IDENTITY>
F1 <CHAR(10)>       F1 <CHAR(10)>
F2 <CHAR(10)>       F2 <CHAR(10)>
F3 <CHAR(10)>       F4 <CHAR(10)>
SELECT * FROM table1 t1, table2 t2 WHERE t1.ID = t2.ID AND t1.F1 = 'Something'
AND t1.F2 = 'Something Else'
2) F1 & F2 together are the PK and the index
table1              table2
F1 <CHAR(10)>       F1 <CHAR(10)>
F2 <CHAR(10)>       F2 <CHAR(10)>
F3 <CHAR(10)>       F4 <CHAR(10)>
SELECT * FROM table1 t1, table2 t2 WHERE t1.F1 = t2.F1 AND t1.F2 = t2.F2
AND t1.F1 = 'Something' AND t1.F2 = 'Something Else'
Thanks..
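For what it's worth, here is how the two scenarios would typically be indexed for that query (names follow the post, everything else is an assumption; with keys this narrow the honest answer is to load the 2,000,000 rows and compare the execution plans):

-- Scenario 1: the clustered PK on ID supports the join, but the filter on
-- F1/F2 needs its own unique index, so expect a seek on this index followed
-- by a join to table2 on ID.
CREATE UNIQUE NONCLUSTERED INDEX UQ_table1_F1F2 ON table1 (F1, F2);
CREATE UNIQUE NONCLUSTERED INDEX UQ_table2_F1F2 ON table2 (F1, F2);

SELECT *
FROM table1 AS t1
JOIN table2 AS t2 ON t2.ID = t1.ID
WHERE t1.F1 = 'Something' AND t1.F2 = 'Something Else';

-- Scenario 2: the composite clustered PK (F1, F2) serves both the filter and
-- the join, so this particular query needs no additional index.
SELECT *
FROM table1 AS t1
JOIN table2 AS t2 ON t2.F1 = t1.F1 AND t2.F2 = t1.F2
WHERE t1.F1 = 'Something' AND t1.F2 = 'Something Else';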

Choosing replication type

Hello!
I'm trying to put together replication for a small application that
works like this: The central database contains data that is to be
copied to several laptops. The laptops are going to be used on
inspections of real-estate all over this city. When the user is
finished with his work, he wants his data transferred/updated to the
main database. This updating only concerns a few columns in a few
tables. NOT the whole database.
I figured Snapshot Replication was a good idea, but it only works one
way - to the laptop, with the database structure and all the data.
When data is going back to the central database, snapshot replication
will only try to add rows, as it doesn't know which rows to update.
Merge Replication will update all of the data both ways, and that is
not "allowed" in this context.
Any good suggestions?
Thanx!
Dagfinn Rosnes
Use Merge replication - only replicate the tables and columns that the
laptops will be changing.
"Dagfinn Rosnes" wrote:

> Hello!
> I'm trying to put together replication for a small application that
> works like this: The central database contains data that is to be
> copied to several laptops. The laptops are going to be used on
> inspections of real-estate all over this city. When the user is
> finished with his work, he wants his data transferred/updated to the
> main database. This updating only concerns a few columns in a few
> tables. NOT the whole database.
> I figured Snapshot Replication was a good idea, but it only works one
> way - to the laptop, with the database structure and all the data.
> When data is going back to the central database, snapshot replication
> will only try to add rows, as it doesn't know which rows to update.
> Merge Replication will update all of the data both ways, and that is
> not "allowed" in this context.
> Any good suggestions?
> Thanx!
> Dagfinn Rosnes
>
|||To clarify my previous post:
Use a "one-way" merge replication
http://www.microsoft.com/sql/techinf...t/mergerep.asp
"Dagfinn Rosnes" wrote:

> Hello!
> I'm trying to put together replication for a small application that
> works like this: The central database contains data that is to be
> copied to several laptops. The laptops are going to be used on
> inspections of real-estate all over this city. When the user is
> finished with his work, he wants his data transferred/updated to the
> main database. This updating only concerns a few columns in a few
> tables. NOT the whole database.
> I figured Snapshot Replication was a good idea, but it only works one
> way - to the laptop, with the database structure and all the data.
> When data is going back to the central database, snapshot replication
> will only try to add rows, as it doesn't know which rows to update.
> Merge Replication will update all of the data both ways, and that is
> not "allowed" in this context.
> Any good suggestions?
> Thanx!
> Dagfinn Rosnes
>
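In case it helps, a rough sketch of what such a publication might look like in T-SQL on SQL Server 2005. All database, table, and column names here are made up, and the parameter values (in particular @subscriber_upload_options for download-only articles) should be checked against Books Online before use:

-- Enable the database for merge publication.
EXEC sp_replicationdboption
    @dbname = N'Inspections', @optname = N'merge publish', @value = N'true';

-- Create the merge publication the laptops will subscribe to.
EXEC sp_addmergepublication
    @publication = N'InspectionsPub',
    @description = N'Merge publication for the inspection laptops',
    @retention = 14;    -- days a subscription may go unsynchronized

-- Reference data the laptops only read: publish it download-only so changes
-- never flow back to the central database.
EXEC sp_addmergearticle
    @publication = N'InspectionsPub',
    @article = N'Property',
    @source_owner = N'dbo',
    @source_object = N'Property',
    @subscriber_upload_options = 2;   -- 2 = subscriber changes not permitted

-- The table the inspectors actually edit: publish only the key plus the few
-- columns they change, using a vertical partition.
EXEC sp_addmergearticle
    @publication = N'InspectionsPub',
    @article = N'InspectionResult',
    @source_owner = N'dbo',
    @source_object = N'InspectionResult',
    @vertical_partition = N'true';    -- start with only the required columns

EXEC sp_mergearticlecolumn
    @publication = N'InspectionsPub', @article = N'InspectionResult',
    @column = N'Status', @operation = N'add';
EXEC sp_mergearticlecolumn
    @publication = N'InspectionsPub', @article = N'InspectionResult',
    @column = N'InspectedOn', @operation = N'add';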