Showing posts with label creating. Show all posts

Tuesday, March 20, 2012

Circular dependencies from constraints

Hey!

I am creating a kind of file browser for an application of mine. The principle is quite straightforward: it consists of folders and files. Each folder can contain other folders and files, and so on. The twist, however, is that I need a special root entity called a site. A site is very much like a folder but has some additional properties. A site can contain folders and files.

To achieve this I've created the following tables (truncated for clarity):

###################
# Sites #
###################
# ID [Int] #
# ... #
###################

###################
# Folders #
###################
# ID [Int] #
# SiteID [Int] #
# FolderID [Int] #
# ... #
###################

###################
# Files #
###################
# ID [Int] #
# SiteID [Int] #
# FolderID [Int] #
# ... #
###################

Both the Folders table and the Files table have a check constraint that ensures that exactly one of SiteID and FolderID is NULL. Each row WILL be part of EITHER a folder or a site, not both!
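That XOR rule can be expressed as check constraints roughly like this (a sketch only; the constraint names are assumptions, not taken from the actual schema):

```sql
ALTER TABLE Folders ADD CONSTRAINT CK_Folders_OneParent CHECK (
    (SiteID IS NOT NULL AND FolderID IS NULL) OR
    (SiteID IS NULL AND FolderID IS NOT NULL)
);

ALTER TABLE Files ADD CONSTRAINT CK_Files_OneParent CHECK (
    (SiteID IS NOT NULL AND FolderID IS NULL) OR
    (SiteID IS NULL AND FolderID IS NOT NULL)
);
```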

Then i set up the foreign constraints as follows:
Folders.FolderID -> Folders.ID
Folders.SiteID -> Sites.ID
Files.FolderID -> Folders.ID
Files.SiteID -> Sites.ID

All constraints have ON DELETE CASCADE, and therefore the last of them cannot be created, as it would introduce a circular cascade path. (Which it won't in this case, but theoretically it's possible.)

I know WHY this raises an error. But how can I work around it? Or would you suggest another design for the tables?

Hi Supermajs,

Firstly, I would remove the constraints.

Secondly, remove the SiteID and FolderID fields from the Folders and Files tables.

Thirdly, add the fields ParentID and ParentTypeID to the Folders and Files tables.

Then add a new table called tblParentType with the following records: -

ParentTypeID ParentType

1 Site

2 Folder

Then add a constraint between ParentType.ParentTypeID and Folders.ParentTypeID, and between ParentType.ParentTypeID and Files.ParentTypeID.

Now a record within Folders will look like this: -

ID ParentID ParentTypeID

1 3 1 this folder is a folder under site id 3

2 14 2 this folder is a folder under folder id 14

The same applies for the Files table.

By doing it this way, you can easily add new levels without needing extra fields that always contain NULL.

For example, you could add a new layer above site called server. The ParentType table would be as follows: -

ParentTypeID ParentType

1 Server

2 Site

3 Folder
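The suggested redesign might be sketched in T-SQL roughly as follows (a sketch only; column types and names beyond those mentioned above are assumptions). Note the trade-off: the generic ParentID column itself cannot carry a foreign key, because its target table depends on ParentTypeID, which is exactly how the cascade cycle is avoided:

```sql
CREATE TABLE tblParentType (
    ParentTypeID int NOT NULL PRIMARY KEY,
    ParentType   varchar(50) NOT NULL
);

INSERT INTO tblParentType (ParentTypeID, ParentType) VALUES (1, 'Site');
INSERT INTO tblParentType (ParentTypeID, ParentType) VALUES (2, 'Folder');

CREATE TABLE Folders (
    ID           int NOT NULL PRIMARY KEY,
    ParentID     int NOT NULL,  -- no FK possible: target table depends on ParentTypeID
    ParentTypeID int NOT NULL
        REFERENCES tblParentType (ParentTypeID)
);
```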

|||

In relational modeling this is files and an association between them. I think you have designed flat files, and that can be a problem because DRI (declarative referential integrity) comes with restrictions. If you have 50 files that are associated, they belong together; that is how you can start with 100 files and end up with 5 tables. Try the normalization tutorial and the free data models to clean up your design, and post again so I can help you with the DRI cascade operation, because it comes with the fixed requirement that if A references B then B must exist. Hope this helps.

http://www.utexas.edu/its/windows/database/datamodeling/rm/rm7.html

http://www.databaseanswers.org/data_models/

Monday, March 19, 2012

Choosing Performance measurements from a profiler workload

I'm creating a performance testing server. I'll describe my approach,
maybe the group gurus can review it and suggest improvements. Please
tell me if this is a sound approach.
I want to have global performance measurements. They should guide us
in improving the database's overall responsiveness. So, if we add a
complex clustered index that speeds up a query but slows a frequent
update, it should appear in the measurements.
Here are the steps I followed:
1) Backup of the production database.
2) Recording of a representative SQL Profiler trace file of the production database, using the TSQL_Replay template. It will be used as my base workload.
3) Restoring the backup to a Microsoft Virtual Server instance (http://www.microsoft.com/windowsserversystem/virtualserver/).
4) Wasting a lot of time configuring login permissions and making the database have the same ID number as the original one so I can replay the trace file.
5) Turning on the "Enable Undo disks" option in the Virtual Hard Disk Properties of the Virtual Server instance.
6) Replaying the saved trace file, saving another trace file to gather performance data.
7) Loading the new trace file into a table and running some queries to measure the database performance.
8) Discarding all the modifications of the virtual server instance, tweaking the database, and going back to step 6 to gather new data.
Is it a sound approach? Are there better ways to do it?
Now for my doubt: which measurements should I take from the saved profile?
There are a lot of possible events to record in the Profiler trace,
and in each one the fields mean something different.
Here is my initial list of queries on the trace table:
This is probably my most important measure. It gives me the total
duration and CPU spent:
SELECT sum(Duration) as total_time , sum(CPU) total_cpu,
sum(Duration - CPU) as total_wait FROM trace WHERE EventClass = 12 -- 12 is SQL:BatchCompleted
If instead of SQL:BatchCompleted, I use SQL:StmtCompleted I get a
smaller value for sum(duration). If I choose to sum the Duration
column of the SQLTransaction EventClass (number 50) with the commit
EventSubClass (1) the sum(duration) is bigger. Which one gives me the
better measurement?
Here is the committed transactions' total, average, max and standard deviation duration:
SELECT sum(Duration) as trans_total_time , AVG(Duration) as
average_time, max(Duration) as max_transaction_time, stdev(Duration)
as standard_deviation
FROM trace WHERE EventClass = 50 and EventSubClass =1
This is the time wasted rolling back transactions:
SELECT sum(Duration) FROM trace
WHERE EventClass = 50 and EventSubClass = 2 -- duration of rollback
transactions
Time spent waiting for locks:
SELECT sum(Duration) FROM trace
WHERE EventClass = 24 -- 24 is Lock:Acquired event
Number of deadlocks:
SELECT MAX(IntegerData) FROM trace WHERE EventClass=59 --
Lock:Deadlock Chain
How do I measure the time wasted when a deadlock happens and a query
is canceled? Is it the duration of the Lock:Cancel event class?
Do the gurus here have any idea of more good data to collect?
BTW, I'm using SQL Server 2000
kind regards,
Paulo Eduardo Neves
http://NaoTemMosquito.blogspot.com

|||

Hi Paulo
You will need to start your replay trace before you take the backup, to make
sure it contains all changes between the backup finishing and the trace
starting. You can remove the backup commands afterwards, as well as other steps you
don't want. You will need to match database IDs with the new system (or
change the trace). If you create the corresponding logins on the new server,
you should be able to match the SIDs using sp_change_users_login, so mapping
users should not be a great deal!
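For reference, remapping an orphaned database user to a server login can be sketched like this (the user name is a placeholder, not one from the thread):

```sql
-- Report orphaned users in the restored database
EXEC sp_change_users_login 'Report';

-- Remap a database user to the login of the same name
EXEC sp_change_users_login 'Update_One', 'app_user', 'app_user';
```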
For analysis you can use Bill Graziano's cleartrace at
www.cleardata.biz/cleartrace/download
If you are tuning then you should look at statements, frequency as well as
duration and I/O may also be important.
John
|||

On Wed, 03 Oct 2007 12:22:03 -0700, Paulo Eduardo Neves
<pauloneves@.gmail.com> wrote:
>Do the gurus here have any idea of more good data to collect?
Looks like you've got the idea, but the basic structure I always
follow does a group by on the command text. Now, this can be tricky,
I think SQL 2005 has added some features to help there ... but you're
on 2K, so never mind that!
Anyway, my preferred figure of merit is reads, not duration. They
tend to correspond heavily, but reads is logical reads and is somewhat
independent of load, and tells you what it is costing to run that
command.
So,

select top 20
    substring(TextData, 1, 30) as cmd,
    sum(Reads) as sumreads,
    avg(Reads) as avgreads,
    count(*) as cnt
from dbo.yourtracetable
where EventClass in (10, 12) -- 10 = RPC:Completed, 12 = SQL:BatchCompleted
group by substring(TextData, 1, 30)
order by sumreads desc
Now, if you're tuning an entire system, you need to create a total
picture, you need to come up with a way to baseline your performance
and measure the quantitative components and improvements you make. I
guess clocktime through a trace is a good overall measure.
... but this also assumes a lot of stuff about your IO system not
being saturated, and such. When the hardware starts to run out, these
softer statistics become hard to interpret. And replaying it on a VM,
boy, I dunno about that. But in any case, probably the most robust
and significant figure is logical reads, in my ever so humble (or not
so humble) opinion.
Josh

|||

On Oct 4, 6:17 am, John Bell <jbellnewspo...@.hotmail.com> wrote:
> You will need to start your replay trace before you take the backup to make
> sure it contains all changes between the backup finishing and the trace
> starting. You can remove the backup commands after as well as other steps you
> don't want.
I did a backup just a little before. I don't mind if some queries fail
later during the replay. They fail and the replay continues. They
would be insignificant when I calculate the total time.
> You will need to match database ids with the new system (or
> change the trace). If you create the corresponding logins on the new server
> you should be able to match the SIDs using sp_change_users_login, so mapping
> users should not be a great deal!
And the default database of the replay users should be the one you are
testing.
> For analysis you can use Bill Graziano's cleartrace at www.cleardata.biz/cleartrace/download
I've already seen ClearTrace. It looks very useful. The problem is
that I currently only have access to a SQL Server 2000 installation,
and ClearTrace doesn't run on the free version of SQL Server 2005. It
also looks like ClearTrace doesn't give me global measurements; it
just aggregates the statements.
Thanks for your tips.
regards,
Paulo Eduardo Neves
http://NaoTemMosquito.blogspot.com

|||

On Oct 5, 2:33 am, JXStern <JXSternChange...@.gte.net> wrote:
> On Wed, 03 Oct 2007 12:22:03 -0700, Paulo Eduardo Neves
> <paulone...@.gmail.com> wrote:
> >Do the gurus here have any idea of more good data to collect?
>
...
> Anyway, my preferred figure of merit is reads, not duration. They
> tend to correspond heavily, but reads is logical reads and is somewhat
> independent of load, and tells you what it is costing to run that
> command.
Yes. But I believe the problems I'm seeing are caused by
excessive locking in the database. Measuring Reads and Writes won't
show the locking problems.
> So,
> select
> top 20
> substring(textdata,1,30) as cmd,
> sum(reads) as sumreads,
> avg(reads) as avgreads,
> count(*) as cnt
> from dbo.yourtractable
> where eventclass in (rpc done, batch done)
> group by substring(textdata,1,30)
> order by 2
I'll try to use the cleartrace tool to get the statement aggregate
data.
> Now, if you're tuning an entire system, you need to create a total
> picture, you need to come up with a way to baseline your performance
> and measure the quantitative components and improvements you make. I
> guess clocktime through a trace is a good overall measure.
Will the clocktime value be different from the total transactions
duration?
> ... but this also assumes a lot of stuff about your IO system not
> being saturated, and such. When the hardware starts to run out, these
> softer statistics become hard to interpret. And replaying it on a VM,
> boy, I dunno about that. But in any case, probably the most robust
> and significant figure is logical reads, in my ever so humble (or not
> so humble) opinion.
First I'm working on a proof-of-concept server. After showing its
utility, I'll lobby for a dedicated performance testing machine. If I
guarantee that there's nothing else running on the machine, and put
the replay trace file and the performance recording trace on a different
hard disk than the one with the database file, I believe that I will
have consistent performance measurements.
I've had a nice idea about the setup: filtering the trace events where
Duration != 0 will record just the interesting events.
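On SQL Server 2000, that filter can be applied to a server-side trace roughly like this (a sketch; the trace ID shown is a placeholder for whatever sp_trace_create returned):

```sql
-- Server-side trace filter: column 13 = Duration,
-- logical operator 0 = AND, comparison operator 2 = greater than
DECLARE @TraceID int, @dur bigint
SET @TraceID = 1  -- placeholder: use the ID returned by sp_trace_create
SET @dur = 0
EXEC sp_trace_setfilter @TraceID, 13, 0, 2, @dur
```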
Thanks for the good ideas!
Paulo
http://NaoTemMosquito.blogspot.com

|||

Hi Paulo
Replaying traces will potentially highlight problems and is a very useful
facility, but it is not guaranteed that it will always find your problems. Check out
Books Online to see the restrictions when replaying a trace. You may want to
consider using a test tool such as Microsoft Team Suite, LoadRunner or
Rational Performance Tester etc.
If you have locking problems, look at using some of the locking events to
check what your system is doing, rather than relying on time/reads/writes etc.
John

Sunday, March 11, 2012

Choices of creating database files

Working on a database structure on SQL 2000 server, I have MDF and LDF created.
I need to create NDF files to use 5 logical drives on the server. All
logical drives are located in SAN storage with RAID10. When I create the NDF
files, should I create one file on each drive OR create multiple files on
each drive? Which way is better for SQL Server performance?
Thanks for the recommendation!

|||

Mike,
Like so many other concepts, it depends. What is the architecture of your
SAN? If it is of later technology, then your disk allocation may be
"virtualized" anyway. This means that you really don't have physical
control over which drives your data goes to (within the LUN) - the SAN will
determine that. For example, the HP EVA might span a RAID 10 configuration
across 100 drives in 2K blocks if so configured, even if you are allocating
only 10 GB. If you map 5 logical drives the data corresponding to all 5
drives will be interleaved and spread out over the same 100 drives.
There may be other reasons for separating the NDF files (file backups,
process isolation, etc.). You may want to isolate the log and data for
snapshots, or other reasons, but I think any performance gain would be
nominal.
Also, allocating additional space is usually a straightforward process, but
deleting space usually requires deleting and re-adding the configured space.
If you break up your data files, undoubtedly some will be a larger size than
others and this could result in a maintenance issue. You may be in a
situation where you want to take space from drive A and add it to drive B.
There could be some nominal performance gain from having multiple drives due to SQL
Server having more I/O buffers, but it probably would not be noticeable.
Unless you want to have a more sophisticated file backup scheme, I would
start with two drives (for isolation purposes), one for the log and one for
the data, indexes and tempdb.
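If you do decide to add NDF files on separate drives, the mechanics look roughly like this (a sketch only; the database name, file names, paths, and sizes are all placeholders):

```sql
ALTER DATABASE MyDb ADD FILEGROUP SecondaryData;

ALTER DATABASE MyDb
ADD FILE (
    NAME = MyDb_Data2,
    FILENAME = 'F:\SQLData\MyDb_Data2.ndf',
    SIZE = 1024MB,
    FILEGROWTH = 256MB
) TO FILEGROUP SecondaryData;
```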
If your architecture is not of later technology, then it depends (again!).
What kind of SAN are you working with?
-- Bill

Change default location where new DBs are created?

Well, I changed the default location as advised (to F:
drive). I also restarted Sql Server. But the Access ADP
is still creating the Sql DB's on the C drive. Do I need
to reboot the server computer for the default location
change to take effect?

|||

Perhaps Access doesn't respect SQL Server's default setting?
Try below from Query Analyzer or OSQL.EXE:
CREATE DATABASE filepathtest
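To see where the files actually landed, you can query sysfiles afterwards (a SQL Server 2000 system table):

```sql
USE filepathtest
SELECT name, filename FROM dbo.sysfiles
```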
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
http://www.sqlug.se/
|||

>Perhaps Access doesn't respect SQL Server's default setting?

>Try below from Query Analyzer or OSQL.EXE:

>CREATE DATABASE filepathtest
Good idea. Well, filepathtest went to the correct drive,
F instead of C. But when I create a new Access.adp either
on the same machine as Sql Server or from a remote
workstation, the new DB still ends up on the C drive. I
think Access does not respect Sql Server's default
settings.
Is there something that I could override in the .adp DDL
maybe? It would be a real nightmare if I had to manually
create a bunch of new adhoc DB's for Access2002 users
(since I would have to restrict them from doing it from
their .adp's).

|||

The default database creation location set under Enterprise Manager is
saved in the local computer registry and ONLY applies to databases
created via EM on that computer.
If you create a database without using EM and without specifying a
location, the location will be the same as either the master or model
database files (I do not remember which db controls)
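Either way, a CREATE DATABASE that specifies the file locations explicitly sidesteps the default entirely; a sketch (the database name and paths here are placeholders):

```sql
CREATE DATABASE adhocdb
ON PRIMARY
    (NAME = adhocdb_data, FILENAME = 'F:\SQLData\adhocdb.mdf')
LOG ON
    (NAME = adhocdb_log, FILENAME = 'F:\SQLData\adhocdb_log.ldf')
```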
Carl Federl
Please post DDL (create table) with datatypes, primary and foreign keys.
|||

> Is there something that I could override in the .adp DDL
> maybe?
I don't know, don't do Access... :-)
I'd ask this in an Access group; perhaps they have some hints about a setting inside Access.
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
http://www.sqlug.se/

Thursday, March 8, 2012

checksums and data types

I'm creating a checksum column in a table that is to be calculated over several columns within the table.

One of the columns to be included in the checksum formula has a data type ntext.

But on trying to complete this new table design (or similarly using ALTER TABLE in Query Analyzer), both return an error stating that the data type is invalid for the CHECKSUM function.

This happens for both ntext and text data types.

Can anyone tell me if there is a way around this without having to change the data type - or what the valid data types are that can be used with the CHECKSUM function?

Also reasons why would be helpful!

Thanks

|||

please ignore - http://www.dbforums.com/t989557.html shows this not to be possible...

nevermind

|||

you might want to try something like so:

SELECT checksum(col1,col2,CAST(CAST(col3 as varchar(1)) as int))
FROM testTable

I am not sure if this totally works. Text and ntext are meant to hold large amounts of text data, like a notes field in a customer service application. There is no implicit data conversion between int and ntext/text in SQL Server because that is just one of the rules, and it wouldn't make much sense to do so. To tell the truth, it sounds like your problem is a design issue. However, you can explicitly convert data types as shown above, but please keep in mind that if you try to cast character data in col3 above to an int, you will receive an error. So you might have to add an IsNumeric in there as well.

|||

CHECKSUM works with non-numeric data, so there is no need to recast as INT in your formula.

Though I'm still not sure that is going to give him what he needs...

|||

Why are you doing this? To enforce data integrity upon INSERT/UPDATE? Or to support some business rule? Either way you're already using a database, so the answer should be in design, not checksum-based tricks.

|||

hey trotsky!!
calm down.

|||

I'd love to hear the reason behind this... I can't for the life of me figure out why you might want/need to do it. I'm also with rdjabarov, and think that this smells very strongly of a high GQ (geek quotient) workaround for a case of poor relational design!

-PatP

|||

that's twice that you have agreed with RDjabarov.
hmmmmmmm is the feud over?

:D

|||

Feud? Did I miss a meeting?

-PatP

|||

must have been the coma. When I first got here you guys would go at it like turtles and bunnies.

Friday, February 24, 2012

checking the database existence in sql server

hi to everybody

I'm a new member of this site and I don't know where a good place for this topic would be, so pardon any probable mistake.

I need to attach and detach a database to/from SQL Server in VB.NET with hard-coded paths. I can do this without using a stored procedure, via SQL-DMO, but I can't check for the database's existence before attaching it. How can I check whether the database exists, so I can prevent the attach operation?

please help me

thanks a lot

Hi,

using SMO you can use the following script:

new Server("SomeServer").Databases.Contains("SomeDatabase")

HTH, Jens K. Suessmeyer.

http://www.sqlserver2005.de

|||

hi and thanks for your answer

where can I write that script?

can you explain more with an example?

thanks a lot

|||Hi,

sure a small application would be (with no extensive exception handling)

using System;

using Microsoft.SqlServer.Management.Smo;

using Microsoft.Win32;

namespace SMOProject

{

class Program

{

static void Main(string[] args)

{

Server s = new Server(args[0]);

if (s.Databases.Contains(args[1]))

Console.WriteLine(string.Format("Database {0} exists",args[1]));

}

}

}

HTH, Jens K. Suessmeyer.

http://www.sqlserver2005.de

|||

that is brilliant,

Sir, could you tell me how to attach "database.dbf"?

Thanks

|||

Hi,

if it is a non-SQL Server database file you will have to import from the file using the DTS Import wizard or any other tools which offer a driver for the dbf files.

HTH, Jens K. Suessmeyer.


http://www.sqlserver2005.de

|||

it is a database file

Thanks

|||

But a non-SQL Server one, right? Or is it just a file that was named with the extension .dbf instead of .mdf or .ndf?

Jens K. Suessmeyer.


http://www.sqlserver2005.de

|||

mmmmm sorry sir, I really forgot:

it is ".mdf"

Don't mind it

Thanks

If you have the log file you can use sp_attach_db or the appropriate GUI task; if you don't have the log file you can use sp_attach_single_file_db. But this will only work if the database was closed correctly.
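A sketch of the single-file variant, using the database name and path that appear later in this thread (adjust to your installation):

```sql
EXEC sp_attach_single_file_db
    @dbname   = 'nwd',
    @physname = 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\nwd.mdf'
```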

HTH, jens K. Suessmeyer.

http://www.sqlserver2005.de

|||

Yes Sir, problem solved by the following script:

sp_attach_db 'nwd','C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\nwd.mdf',

'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\nwd_log.LDF'

Checking performance problem

Hello everybody,
I am experiencing some performance problems and I need to check which database is creating a large load on my server. What analysis parameters should I add to Performance Monitor to check this?
Best regards
Leandro Loureiro dos Santos|||SQL Profiler -- you can run a trace on your server and include the columns (TextData, HostName, Duration, DatabaseID, DatabaseName, CPU, Reads, Writes), then run it for the events TSQL:StmtCompleted and/or Stored Procs-SP:Completed. I like to set the trace to log to a SQL Server table; then you can query the average CPU, reads, and writes by database ID. You can also use TextData to find out which procedures are the most resource intensive.
Or, a simpler but less exact method is to run sp_who2. This will show you the connections, what database each is connected to, and cumulative CPU time, reads, and writes.|||Stored Procs-SP:Completed does not report CPU or Reads;
you will need Stored Procs-RPC:Completed for that
|||If I run this trace on the host (SQL Server), will this impact the server's performance?
Would it be better to execute it on a client machine?
Thanks
"joe chang" <anonymous@.discussions.microsoft.com> wrote in message
news:084501c3ad24$24b9f360$a501280a@.phx.gbl...
> Stored Procs-SP:Completed does not report CPU or Reads;
> you will need Stored Procs-RPC:Completed for that|||The most critical item to avoid adverse performance impact is:
DO NOT save the Profiler trace to a table on the production server.
My preference is to save to a file; it doesn't matter too much where, but probably not the server's log disk.
The reason for this is that single-row inserts to a database are a CPU-intensive task, and I believe Profiler may be issuing each row separately; when saving to a file, I believe Profiler buffers 64 KB before writing to disk.
Also, don't add too many events to the trace; start with the ones recommended earlier. If the captures exceed 100-200 events/sec, you might think about filtering out the less interesting items.
|||On Mon, 17 Nov 2003 15:22:02 -0300, "Leandro Loureiro dos Santos"
<leandro@.email.com> wrote:
>If i run this trace on the Host(SQLSERVER) this will impact the server
>performance?
Yes, there is a small impact.
Of course, this hurts the most when you already have a performance problem!
>Would be better to execute in a client machine?
Of course.
But as Joe says, you can run the trace on the host, and save the
results to a database or file on another machine, to split the impact.
J.
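Once a trace has been saved to a file, it can be loaded back into a table for analysis on any machine with fn_trace_gettable (a sketch; the file path is a placeholder, and it assumes the trace captured the DatabaseName, CPU, Reads, and Writes columns):

Code Snippet

SELECT DatabaseName,
       COUNT(*) AS events,
       AVG(CPU) AS avg_cpu,
       AVG(Reads) AS avg_reads,
       AVG(Writes) AS avg_writes
FROM ::fn_trace_gettable('C:\traces\mytrace.trc', default)
GROUP BY DatabaseName
ORDER BY avg_cpu DESC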

Tuesday, February 14, 2012

Checking a Column Data Type

Here is the issue,

I was given the task of creating a datamart by combining information from several different database servers. While doing this I ran into an interesting issue that I can't seem to figure out.

I have two tables: one has a code value pointing to the corresponding lookup table. The lookup table, for whatever reason (I didn't build the application or the database; yay, legacy support!), has two lookup columns: one is the standard incrementing number and the other is a series of letters. The problem is that the code in the first table is a char(1) and can be either a letter or a number, while the lookup table has one column of type int and one of type char, so I run into a simple conversion problem. Here is my datamart SELECT statement, which fails because it cannot convert a char to an int. My question is: what would be an easy way to test which value type the char(1) holds and use the appropriate left outer join? I tried several different approaches but just can't seem to figure it out.

SELECT *

FROM PERMIT

LEFT OUTER JOIN PERMIT_STATUS_LU

ON

(

(PERMIT.endorse_status_cd = PERMIT_STATUS_LU.PSLU_PERMIT_STATUS_NM) OR

(PERMIT.endorse_status_cd = PERMIT_STATUS_LU.pslu_cd)

)

The above of course fails due to type conversion: endorse_status_cd is the char(1), PSLU_PERMIT_STATUS_NM is the char, and pslu_cd is the int. I have no clue why they set it up this way, but I get to figure out how to deal with it.

Confused,

-Andrew

Try to use ISNUMERIC:

declare @c char(1)
set @c = '5'
select isnumeric(@c)

Result: 1

declare @c2 char(1)
set @c2 = 'a'
select isnumeric(@c2)

Result: 0

|||

Yeah, I thought about that, but then the question is: how do you use IF/ELSE logic inside a JOIN statement?

Can you even do something like this? (I just tried it and got parse errors.) So I guess the question now is: how do you put conditional logic inside a JOIN ... ON clause?

LEFT OUTER JOIN PERMIT_STATUS_LU

ON

(

IF ( isnumeric(PERMIT.endorse_status_cd) = 1)

BEGIN

(PERMIT.endorse_status_cd = PERMIT_STATUS_LU.PSLU_PERMIT_STATUS_NM)

END

ELSE
BEGIN

(PERMIT.endorse_status_cd = PERMIT_STATUS_LU.pslu_cd)

END

)

|||

Use CASE instead.

Also, isnumeric, under some situations, can be unreliable. Refer to this article for details.

isnumeric -What is wrong?
http://www.aspfaq.com/show.asp?id=2390
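As the article explains, ISNUMERIC accepts several strings that cannot actually be converted to int; a few single-character cases that could show up in a char(1) column (worth verifying on your server version):

Code Snippet

select isnumeric('$'), isnumeric('.'), isnumeric(',')

All three return 1, yet none of them converts to int.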

|||

Good idea using CASE. I am still getting errors, though; I am still unsure whether it is even possible to put a CASE statement inside the JOIN's ON clause. For example I tried:

....

FROM PERMIT

CASE isnumeric(PERMIT.endorse_status_cd)

WHEN 0 THEN LEFT OUTER JOIN PERMIT_STATUS_LU ON PERMIT.endorse_status_cd = PERMIT_STATUS_LU.PSLU_PERMIT_STATUS_NM

WHEN 1 THEN LEFT OUTER JOIN PERMIT_STATUS_LU ON PERMIT.endorse_status_cd = PERMIT_STATUS_LU.pslu_cd

END

That didn't work, so I tried:

....

FROM PERMIT

LEFT OUTER JOIN PERMIT_STATUS_LU ON

(

CASE isnumeric(PERMIT.endorse_status_cd)

WHEN 0 THEN PERMIT.endorse_status_cd = PERMIT_STATUS_LU.PSLU_PERMIT_STATUS_NM

WHEN 1 THEN PERMIT_STATUS_LU ON PERMIT.endorse_status_cd = PERMIT_STATUS_LU.pslu_cd

END

)

That failed as well.

Does anyone have a definitive answer to the above question?

|||

I created two separate queries: one that contains only the rows with a numeric value for endorse_status_cd, and another for the non-numeric values. I used a UNION to merge them into one single result set. Here is the query:

Code Snippet

SELECT *

FROM PERMIT

LEFT OUTER JOIN PERMIT_STATUS_LU

ON PERMIT.endorse_status_cd = PERMIT_STATUS_LU.PSLU_PERMIT_STATUS_NM

WHERE IsNumeric(PERMIT.endorse_status_cd) = 0

UNION

SELECT *

FROM PERMIT

LEFT OUTER JOIN PERMIT_STATUS_LU

ON PERMIT.endorse_status_cd = PERMIT_STATUS_LU.pslu_cd

WHERE IsNumeric(PERMIT.endorse_status_cd) = 1

Based on the article that Arnie included in his reply, it might be better to create your own function (check the article) instead of IsNumeric.
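The same result can also be obtained with a single join by moving the ISNUMERIC test into the ON clause itself (a sketch; the CAST assumes the pslu_cd values fit in one character):

Code Snippet

SELECT *
FROM PERMIT
LEFT OUTER JOIN PERMIT_STATUS_LU
ON (IsNumeric(PERMIT.endorse_status_cd) = 1
    AND PERMIT.endorse_status_cd = CAST(PERMIT_STATUS_LU.pslu_cd AS char(1)))
OR (IsNumeric(PERMIT.endorse_status_cd) = 0
    AND PERMIT.endorse_status_cd = PERMIT_STATUS_LU.PSLU_PERMIT_STATUS_NM)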

I hope this answers your question.

Best regards,

Sami Samir