
Sunday, March 25, 2012

Cleaning up merge metadata manually is not working

Hi all,

I'm trying to delete the metadata of a SQL 2005 SP1 subscriber from a SQL 2005 SP1 merge publication, but it is not working. The "retention" parameter for the publication is 999, and this is the code I'm using:

declare @num_genhistory_rows int,
        @num_contents_rows int,
        @num_tombstone_rows int
declare @retcode smallint

--select count(*) from msmerge_contents
-- records before 2,633,848

exec @retcode = sys.sp_mergemetadataretentioncleanup
        @num_genhistory_rows OUTPUT,
        @num_contents_rows OUTPUT,
        @num_tombstone_rows OUTPUT

select retcode = @retcode
select num_genhistory_rows = @num_genhistory_rows
select num_contents_rows = @num_contents_rows
select num_tombstone_rows = @num_tombstone_rows

--select count(*) from msmerge_contents
-- records after 2,633,8

Results:

retcode: 0
num_genhistory_rows: 0
num_contents_rows: 0
num_tombstone_rows: 0

Does anybody have any idea why this is not working?

I did check "sp_mergemetadataretentioncleanup" and noticed that it uses a function to calculate the cutoff date, but I could not test it because it gives me the error below:

declare @curdate datetime, @cutoffdate datetime
select @curdate = getdate()
select @cutoffdate = null

-- find max retention of all pubs the article belongs to.
-- add some safety margin to compensate for different clock speeds
select @cutoffdate = min(sys.fn_subtract_units_from_date(isnull(retention,0), retention_period_unit, @curdate))
from dbo.sysmergepublications
where pubid in (select pubid from dbo.sysmergearticles where nickname = 5088000)

select @cutoffdate

and this is the message error:

Msg 4121, Level 16, State 1, Line 7

Cannot find either column "sys" or the user-defined function or aggregate "sys.fn_subtract_units_from_date", or the name is ambiguous.

I looked for this function but could not find it.

Any help will be appreciated!

Firstly, if your retention is 999 days, it means that only metadata more than 999 days old will be cleaned up (automatically or manually). So do you have metadata that old in the first place?
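A quick way to check, as a sketch (this assumes the standard MSmerge_genhistory.coldate column; verify the names on your build), is to look at the age of the oldest merge generation:

```sql
-- How old is the oldest merge generation, and how many rows
-- actually fall outside the 999-day retention window?
SELECT MIN(coldate) AS oldest_generation,
       DATEDIFF(day, MIN(coldate), GETDATE()) AS age_in_days
FROM dbo.MSmerge_genhistory;

SELECT COUNT(*) AS rows_past_retention
FROM dbo.MSmerge_genhistory
WHERE coldate < DATEADD(day, -999, GETDATE());
```

If the second count is zero, the cleanup proc has nothing it is allowed to remove, which would match the 0/0/0 output above.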

If you want to clean up metadata, set the retention to a lower value and you will start seeing the metadata getting cleaned up automatically when the merge agent runs.

The sys.xxx functions are internal functions that reside in the system resource database and hence cannot be called explicitly by a user.

|||

Thanks for your response Mahesh,

Here is the complete story about 999

http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=1148225&SiteID=1

I suspect that if I change the "retention" parameter the subscriptions will expire, is this possible with sql 2005 SP1 ?

|||

The retention cleanup will take time on the first call, because it will need to do some real cleanup.

However, after this the proc will be called every time the merge agent runs, and the delta to clean up will be much smaller, so it should not take that long, unless of course your daily data load is very large.

So if you set it to 999 and not enough old metadata has been generated, you will see little or no metadata cleaned up.
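If you do decide to lower it, a hedged sketch on the publisher (the publication name is hypothetical; make sure all subscribers have synchronized within the new window first, or they can be marked expired):

```sql
-- Run in the publication database. Lowers merge retention to 14 days.
EXEC sp_changemergepublication
    @publication = N'MyMergePub',  -- hypothetical name
    @property    = N'retention',
    @value       = N'14';
```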


cleaning up backups

I recently took responsibility for a SharePoint 2003 server & corresponding
SQL 2000 server. The backups are working through a maintenance plan, but
I'd like to implement a richer plan and am having difficulty developing
the best approach. Understand, I've not done much care & feeding of SQL in
my life (lots of other infrastructure work, though). I have read the
TechNet articles on SQL 2000 backup & restore and the Pocket Consultant:
database backup & recovery. Perhaps you can refer me to a better book?
I'd like to implement a weekly full db backup,
nightly differentials,
nightly transaction log backups.
All of this would be done to a file server.
When I tried to do this in the maintenance plan wizard, it didn't offer the
differential option. So then I looked at creating a backup job directly, and
there is no (apparent) option to limit the number of backup files kept in
the file system. Am I missing something?
Then on the transaction logs, I see that when I configure the job through the
backup tool, it allows me to select "remove inactive entries from
transaction log". This option is not available when I create the backup
through a maintenance plan. Do I need to do this, and why can't I do it
through both tools?
Thanks in advance.
\\Greg
> Perhaps you can refer me to a better book?
SQL Server Books Online (the documentation that comes with the product)? It is very good.
> I'd like to implement a weekly full db backup,
> nightly differentials
> nightly transaction log backups.
Hmm, why only nightly log backups? For above, I'd expect something like hourly or every 10 minutes
or so.
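As a sketch, the schedule above maps onto three BACKUP commands (database name and UNC paths are placeholders; on SQL 2000 you would wrap these in scheduled Agent jobs):

```sql
-- Weekly full backup
BACKUP DATABASE MyDb TO DISK = N'\\fileserver\sql\MyDb_full.bak' WITH INIT;

-- Nightly differential (everything changed since the last full)
BACKUP DATABASE MyDb TO DISK = N'\\fileserver\sql\MyDb_diff.bak'
    WITH DIFFERENTIAL, INIT;

-- Log backup, ideally run far more often than nightly
BACKUP LOG MyDb TO DISK = N'\\fileserver\sql\MyDb_log.trn' WITH INIT;
```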
> When I tried to do this in the maintenance plan wizard, it didn't offer the differential option.
Correct.
> So then I looked at creating a backup job directly and there the option no (apparent) to limit the
> number of backup files kept in the file system. Am I missing something?
Nope, you are correct again. You would have to roll your own. Google is your friend, and you are
likely to find code "out there" for this. For instance
http://www.karaszi.com/SQLServer/util_backup_script_like_MP.asp, which is for 2005 (shouldn't be too
hard to adapt to 2000) and does not include code to remove old backup files.
> Then on the transaction logs, I see when I configure the job through the backup tool, it allows me
> to select "remove inactive entries from transaction log". This option is not available when I
> create the back through a maintenance plan. Do I need to do this and why can't I do it through
> both tools.
The GUI is confusing and badly designed. Un-checking this option will add the NO_TRUNCATE option to
your backup command. This option is badly named and should have been named
ALLOW_LOG_BACKUP_OF_A_CORRUPT_DATABASE. As you can imagine, this option is *not* something you want to
specify except in extreme cases. Leaving it checked in the GUI is the same as not specifying the
option, which is the same as what the Maint Plan does. Some more reading:
http://www.karaszi.com/SQLServer/info_restore_no_truncate.asp
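For completeness, the option corresponds to something like this, which is strictly an emergency, tail-of-the-log command:

```sql
-- Emergency only: back up the log of a damaged database.
-- NO_TRUNCATE reads the log even if the data files are inaccessible.
BACKUP LOG MyDb TO DISK = N'C:\backup\MyDb_tail.trn' WITH NO_TRUNCATE;
```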
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://sqlblog.com/blogs/tibor_karaszi
"uSlackr" <gmartin@.gmartin.org> wrote in message news:%23jJDy2xeIHA.4164@.TK2MSFTNGP05.phx.gbl...
>I recently took responsibility for a sharepoint 2003 server & corresponding SQL 2000 server. The
>backups are working through a maintenance plan, but I'd like to implement a richer plan and am
>having difficulties developing the best approach. understand I've not done much care & feeding of
>SQL in my life. (lots of other infrastructure work, through). I have read the technet articles on
>sql 2000 backup & restore and the pocket consultant: database backup & recovery. Perhaps you can
>refer me to a better book?
> I'd like to implement a weekly full db backup,
> nightly differentials
> nightly transaction log backups.
> All of this would be done to a file server
> When I tried to do this in the maintenance plan wizard, it didn't offer the differential option.
> So then I looked at creating a backup job directly and there the option no (apparent) to limit the
> number of backup files kept in the file system. Am I missing something?
> Then on the transaction logs, I see when I configure the job through the backup tool, it allows me
> to select "remove inactive entries from transaction log". This option is not available when I
> create the back through a maintenance plan. Do I need to do this and why can't I do it through
> both tools.
> Thanks in advance.
> \\Greg
|||
Greg,
Why read books on how to use the native tools?
Go to our website www.avepoint.com and see if our backup solutions are
sufficient.
Give me a call or drop me an email if you have any questions.
Thanks,
John Hohenadel
312-558-1694
john.hohenadel@.avepoint.com


Cleaning large tables MSdistribution_history and MSrepl_commands

Hi
I noticed that my transactional replication is working slower as the MSdistribution_history and MSrepl_commands tables on the other side get bigger and bigger. Replication still works OK, but slower. The cleanup agents work OK and there are no errors, so I don't understand why the MSdistribution_history table is so big. I would understand if there were some undelivered transactions, but everything looks fine.
Is there any way I can delete some data in MSdistribution_history (600 MB) and MSrepl_commands (200 MB) to decrease the table sizes and regain performance, so replication is as fast as it was 2 months ago?
Frank,
You must rely on the distribution cleanup agent to remove these records,
although the default schedule is every 10 mins so that is not likely to be
the issue here.
Do you have anonymous subscriptions? If so, the transactions will hang
around until the timeout (3 days by default). The same applies if you have
a subscriber who doesn't synchronize often, which is worth checking.
You can also investigate this using sp_browsereplcmds to have a look at the
commands in readable format to check against the data on the subscribers.
HTH,
Paul Ibison
|||Hi Paul
I don't have anonymous subscriptions, and there is no subscriber which has not recently synchronized with the publisher.
I also noticed that anything I do on the distribution database (querying etc.) takes a lot of time. I am running a maintenance plan periodically, and I don't see any problems with the database.
sp_browsereplcmds returned 725005 records and I see "data" for all articles (8) and the related stored procedures (sp_insert, sp_delete).
Just one thing: I almost forgot the most important part. I "recently" (1 week ago) changed "transaction retention" through EM from the default value of "at least 0 days but no more than 3 days" to "at least 3 days but no more than 5 days". But yesterday, when I noticed that replication was working slower, I changed this property back to the default value. I did not notice the database size being reduced, though.
Thank you Paul for any kind of help.
|||I would strongly advise against running sp_browsereplcmds. It is a very expensive and time-consuming procedure to run when you have large numbers of rows in the msrepl_commands and msrepl_transactions tables.
You might want to look up sp_browsereplcmds in BOL, where it documents some switches on this command that let you limit the result set returned.
If I were you, I would stop the SQL Server Agent on the Publisher/Distributor and then run the distribution cleanup agent.
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
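With the Agent stopped, the cleanup that the job normally performs can be invoked directly; as a sketch (these are the default retention values, in hours):

```sql
-- This is what the "Distribution clean up: distribution" job executes.
USE distribution;
EXEC dbo.sp_MSdistribution_cleanup
    @min_distretention = 0,   -- hours
    @max_distretention = 72;  -- hours (3 days)
```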
|||Frank,
I was considering the use of sp_browsereplcmds to see if you have a definite
record that has been there for several days and has been distributed to all
subscribers. I know its a lot of transactions, but selecting a few at random
from the top of the table should confirm this.
The maintenance plan, if it is doing index maintenance, should be disabled, as
it takes out a table lock (DBCC DBREINDEX).
Also, in the current activity is there any evidence of blocking problems?
HTH,
Paul Ibison
|||Thank you Hillary
I did that, but there were no records to delete from the tables.
Here is the result:
Removed 0 history records from MSmerge_history.
Removed 0 history records from MSsnapshot_history.
Removed 0 history records from MSlogreader_history.
Removed 0 history records from MSdistribution_history.
Removed 0 history records from MSqreader_history.
Removed 0 history records from MSrepl_errors.
Removed 0 history records from sysreplicationalerts.
Removed 0 replication history records in 17.923 seconds (0 row/secs).
Would I damage the replication system if I manually delete data from the MSdistribution_history table? I am almost sure that this is the main problem, because when this table was small, replication worked much faster than it does now.
Thank you
|||I already looked at the beginning for blocking on the database, but there is none.
|||you can safely whack the contents of msdistribution_history table.
The other tables I'd think twice or thrice about.
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
|||Hi Hillary
I will truncate the table and we will see if this gains any performance in replication speed.
I will report any progress.
Thank you and Paul for your help.

Monday, March 19, 2012

Chr(13) not working as intended

When I use Chr(13) in my textbox expression, it does not produce a carriage return in the text. For example, if I use "Test1" & Chr(13) & "Test2", Test1 and Test2 come out on the same line, but if I use Chr(10) in place of Chr(13), it works as intended. In Crystal Reports, Chr(13) does produce a carriage return and the text comes out on 2 different lines.

Thanks,

Shyam

It is a common issue here, and you are right that the difference from Crystal is apparent.

You have to use a line feed, Chr(10), to get an actual line break. The carriage return, Chr(13), just emits a symbol for systems to translate as they see fit; the HTML report viewer will not act on a Chr(13), whereas the Crystal viewer does.
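For reference, assuming a standard SSRS textbox expression, either of these produces the line break (vbCrLf is the VB constant for Chr(13) & Chr(10)):

```vb
="Test1" & vbCrLf & "Test2"   ' CR+LF pair
="Test1" & Chr(10) & "Test2"  ' bare LF, which the HTML viewer honors
```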

Sunday, March 11, 2012

Choices of creating database files

Working on a database structure on a SQL 2000 server, I have the MDF and LDF created.
I need to create NDF files to use 5 logical drives on the server. All
logical drives are located in SAN storage with RAID 10. When I create the NDF
files, should I create one file on each drive OR create multiple files on
each drive? Which way is better for SQL Server performance?
Thanks for any recommendation!
Mike,
Like so many other concepts, it depends. What is the architecture of your
SAN? If it is of later technology, then your disk allocation may be
"virtualized" anyway. This means that you really don't have physical
control over which drives your data goes to (within the LUN) - the SAN will
determine that. For example, the HP EVA might span a RAID 10 configuration
across 100 drives in 2K blocks if so configured, even if you are allocating
only 10 GB. If you map 5 logical drives the data corresponding to all 5
drives will be interleaved and spread out over the same 100 drives.
There may be other reasons for separating the NDF files (file backups,
process isolation, etc.). You may want to isolate the log and data for
snapshots, or other reasons, but I think any performance gain would be
nominal.
Also, allocating additional space is usually a straightforward process, but
deleting space usually requires deleting and re-adding the configured space.
If you break up your data files, undoubtedly some will be a larger size than
others, and this could result in a maintenance issue. You may end up in a
situation where you want to take space from drive A and add it to drive B.
There could be some nominal performance benefit to having multiple drives, due to SQL
Server having more I/O buffers, but it probably would not be noticeable.
Unless you want to have a more sophisticated file backup scheme, I would
start with two drives (for isolation purposes), one for the log and one for
the data, indexes and tempdb.
If your architecture is not of later technology, then it depends (again!).
What kind of SAN are you working with?
-- Bill
"Mike Torry" <MikeTorry@.discussions.microsoft.com> wrote in message
news:A2FD70DA-F8C1-4446-AB9F-4C5D73BB60EE@.microsoft.com...
> Working on a database structure on SQL 2000 server, I have MDF and LDF
> created.
> I need to create NDF files to use 5 logical drives on the server. All
> logical drives are located in SAN storage with RAID10. When I create the
> NDF
> files , should I create one file on each drive OR create multiple files on
> each drive? Which way is better for SQL server performance?
> Thanks for recommendation!
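For the two-drive starting point Bill suggests, adding the files is a one-time ALTER DATABASE; a sketch with placeholder drive letters and sizes:

```sql
-- One NDF per logical drive, in a shared filegroup so SQL Server
-- stripes allocations proportionally across the files.
ALTER DATABASE MyDb ADD FILEGROUP Data2;
ALTER DATABASE MyDb ADD FILE
    (NAME = MyDb_Data2, FILENAME = 'E:\SQLData\MyDb_Data2.ndf', SIZE = 2048MB),
    (NAME = MyDb_Data3, FILENAME = 'F:\SQLData\MyDb_Data3.ndf', SIZE = 2048MB)
TO FILEGROUP Data2;
```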


Wednesday, March 7, 2012

checksum_agg on field list not working

Help! This was working; then I moved on to another DB.
Whenever I use CHECKSUM_AGG on the result of BINARY_CHECKSUM with a
field list, it always returns zero.
When I try it with * (all columns), it works.
I cannot use all columns; it's too slow.
select distinct binary_checksum('sp1,sp2,sp3,sp4,sp5,sp6') from myTable
===> 1949676592 (result)
select checksum_agg(binary_checksum('sp1,sp2,sp3,sp4,sp5,sp6')) from myTable
===> 0 (result)
select checksum_agg(1949676592)
===> 1949676592 (result)
select checksum_agg(binary_checksum(*)) from myTable
===> -2019434987 (result)
select checksum_agg(binary_checksum(-2019434987))
===> -2019434987 (result)
I have no idea.
|||Sorted. The field list should not be in quotes.
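The reason the quoted form returns 0: BINARY_CHECKSUM('sp1,sp2,...') hashes the string literal, so every row contributes the same value, and CHECKSUM_AGG (an XOR-style aggregate) cancels identical pairs out. Side by side:

```sql
-- Wrong: checksums the literal string, identical for every row
SELECT CHECKSUM_AGG(BINARY_CHECKSUM('sp1,sp2,sp3,sp4,sp5,sp6')) FROM myTable;

-- Right: checksums the actual column values of each row
SELECT CHECKSUM_AGG(BINARY_CHECKSUM(sp1, sp2, sp3, sp4, sp5, sp6)) FROM myTable;
```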

Saturday, February 25, 2012

Checking which tables were modified by a program?

Is it possible to know which tables were modified when running an application?
I'm working on a program that has several different components, but I'm only
testing one. I would like to see which tables were affected by this one
component (either by updating or inserting data into a table or several tables).
Thanks.
You can use SQL Server Profiler to show all the commands sent to the
database. I would filter the commands at least down to a specific user if
you are running Profiler on a multi-user system.
"VMI" wrote:

> Is it possible to know what tables were modified when running an application?
> I'm working on a program that has several different components, but I'm only
> testing one. I would like to see what tables were affected by this one
> component (either by updating or inserting data to a table or several tables).
> Thanks.|||You can also filter by ApplicationName in Profiler. It's part of the selected
data in the default trace template.
--
-Omnibuzz (The SQL GC)
http://omnibuzz-sql.blogspot.com/
"VMI" wrote:

> Is it possible to know what tables were modified when running an application?
> I'm working on a program that has several different components, but I'm only
> testing one. I would like to see what tables were affected by this one
> component (either by updating or inserting data to a table or several tables).
> Thanks.

Checking to see if values are in a table or not -- if not then inserting the values.

I'm trying to check my production table table_a against a working table table_b (which I'm downloading data into).
Here are the columns I have in table_a and table_b:
Description | FundID (this is not my PK) | Money
I'm running an update if there is already a value in the Money column. I check to see if table_a matches table_b; if not, I update table_a with table_b's values where the FundIDs match up.
What I'm having trouble with is when there is no record in table_a but there is a record in table_b. How can I insert that record into table_a? I would like to do all of this (the update and the insert statement) in one stored proc, if possible.
If anyone has the answer, please let me know.
Thanks,
RB

You'll have to check this before you use it, but this should give you all rows in b that are not in a:
select b.FundID, a.FundID
from b
left join a on a.FundID = b.FundID
where a.FundID is null
Then if this is right, you can write a simple insert statement before it, like this:
INSERT INTO a
select b.Description, b.FundID, b.Money
from b
left join a on a.FundID = b.FundID
where a.FundID is null
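Putting both statements into one stored procedure, as a sketch using the column names from the post:

```sql
CREATE PROCEDURE dbo.usp_SyncFunds
AS
BEGIN
    -- Step 1: update rows that already exist in table_a
    UPDATE a
    SET    a.Money = b.Money
    FROM   table_a a
    JOIN   table_b b ON a.FundID = b.FundID;

    -- Step 2: insert rows present in table_b but missing from table_a
    INSERT INTO table_a (Description, FundID, Money)
    SELECT b.Description, b.FundID, b.Money
    FROM   table_b b
    LEFT JOIN table_a a ON a.FundID = b.FundID
    WHERE  a.FundID IS NULL;
END
```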

|||The select statement looks good; however, I'm trying to insert into table b, so wouldn't it be
INSERT INTO b
...
Just clarify that for me if you could.
Thanks for the help
RB|||Oh, if you are inserting INTO b, then you'll just have to reverse everything I wrote. I thought you were inserting into A what was in B but not in A.

Sunday, February 19, 2012

Checking if user has a login

Greetings -
I'm working with a SQL Server 2000 database via an Access front-end and I'd
like to check if the user has a login (via a pass through query I imagine) on
the database before attempting to pass the connection string. The user
login name will be the same as their PC login name which I can pick up. Any
assistance is appreciated.
Thanks,
-b
Why not just try to login? If the user doesn't have a login, your login
attempt will fail and that will tell you all you want to know. You just have
to catch the error in the Access front-end.
It wouldn't be very secure if you could find out information about login
credentials without actually being logged in.
Jacco Schalkwijk
SQL Server MVP
"Robert Pfister" <rpfister@.hotmail.com> wrote in message
news:bYOdnSXvu-J5v2vcRVn-2w@.giganews.com...
> Greetings -
> I'm working with a SQL Server 2000 database via an Access front-end and
> I'd
> like to check if the user has a login (via a pass through query I imagine)
> on
> the database before attempting to pass the connection string. The user
> login name will be the same as their PC login name which I can pick up.
> Any
> assistance is appreciated.
> Thanks,
> -b
>


Sunday, February 12, 2012

Check value type before inserting into a table

Is there a way to check the type of a value before inserting the value into
the table? I am working with sql 7 and I need to verify the input values to
a table before inserting them.
For example...
table A, field1 is an integer field type
default value=0;
value//input value from application
if value (future value) is an integer to be inserted into field1 then
insert(value)
else
insert(default value)
This is just a simple logic procedure I want the trigger to check before
inserting or updating field1.

You could use isnumeric, e.g.
DECLARE @def INT
SET @def = 0
INSERT tbl(col) SELECT CASE WHEN ISNUMERIC(@param) = 1 THEN @param ELSE @def END
Though mind that IsPerfect(IsNumeric())=0.
http://www.aspfaq.com/2390
http://www.aspfaq.com/
(Reverse address to reply.)
"Carlos Aguero" <carlos986@hotmail.com> wrote in message
news:C1FDEFB3-88D3-4EA7-8A24-3B663EEF71EE@microsoft.com...
> Is there a way to check the type of a value before inserting the value
into the table? I am working with sql 7 and I need to verify the input
values to a table before inserting them.
> For example...
> table A, field1 is an integer field type
> default value=0;
> value//input value from application
> if value (future value) is an integer to be inserted into field1 then
> insert(value)
> else
> insert(default value)
> This is just a simple logic procedure I want the trigger to check before
> inserting or updating field1

You could use the CASE statement in an insert statement:
create table #test(c char(1), i int)
declare @x varchar(10)
set @x = 1
insert into #test values('a', case when isnumeric(@x) = 1 then @x else 0 end)
set @x = 'xxx'
insert into #test values('a', case when isnumeric(@x) = 1 then @x else 0 end)
select * from #test
drop table #test
--
Need SQL Server examples? Check out my website at
http://www.geocities.com/sqlserverexamples
"Carlos Aguero" <carlos986@hotmail.com> wrote in message
news:C1FDEFB3-88D3-4EA7-8A24-3B663EEF71EE@microsoft.com...
> Is there a way to check the type of a value before inserting the value
into the table? I am working with sql 7 and I need to verify the input
values to a table before inserting them.
> For example...
> table A, field1 is an integer field type
> default value=0;
> value//input value from application
> if value (future value) is an integer to be inserted into field1 then
> insert(value)
> else
> insert(default value)
> This is just a simple logic procedure I want the trigger to check before
inserting or updating field1
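As the "IsPerfect(IsNumeric())=0" quip above hints, ISNUMERIC accepts many strings that will not convert to int, so the check can pass and the insert can still fail. A small sketch of the well-known problem cases:

```sql
-- ISNUMERIC returns 1 for currency signs, exponents, and thousands
-- separators, none of which survive a cast to int.
SELECT ISNUMERIC('$')      -- returns 1, yet CAST('$' AS int) raises an error
SELECT ISNUMERIC('1e4')    -- returns 1; casts to float, but not to int
SELECT ISNUMERIC('1,000')  -- returns 1; casts to money, but not to int
```

If the target column is strictly int, a tighter check than ISNUMERIC (for example, rejecting any character outside 0-9 with a LIKE pattern such as `NOT @x LIKE '%[^0-9]%'`) avoids these edge cases.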
