Tuesday, March 27, 2012
Cleanup Task / xp_delete_file
I found some posts but not really a solution. Environment: SQL 2005 EE, SP1,
English version but German regional settings.
The following code does nothing in the file system, but always states
success:
EXECUTE master.dbo.xp_delete_file 0,N'D:\test\',N'.bak',N'06/03/2006 15:06:29'
Nor does it work with 'bak', 'trn', '*.trn' or '*.*', or with the setting to
include subfolders.
Any ideas?
Michael
|||One problem is that xp_delete_file belongs to the sys schema,
not the dbo schema.
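For illustration, a hedged sketch of the schema-qualified call (the folder, extension and cutoff date are placeholders; xp_delete_file is undocumented, so the argument order shown here is as commonly reported, not guaranteed):

```sql
-- Hypothetical example: delete backup files under D:\test\ older than
-- the cutoff date, calling the procedure through the sys schema.
EXECUTE master.sys.xp_delete_file 0, N'D:\test\', N'bak', N'2006-06-03T15:06:29';
```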
-Sue
On Tue, 6 Jun 2006 06:17:01 -0700, Michael Roedeske
<MichaelRoedeske@.discussions.microsoft.com> wrote:
>Hello together,
>I found some posts but not really a solution. Environment: SQL 2005 EE, SP1,
>english version but german regional settings
>The following code does nothing in the file system, but allways states
>success:
>EXECUTE master.dbo.xp_delete_file 0,N'D:\test',N'.bak',N'06/03/2006 15:06:29'
>nor does it work with 'bak" ,'trn, '*.trn' or '*.*' or the seeting to
>include subfolders.
>Any ideas ?
>Michael
|||Dear Sue,
even when I call the sp with the sys schema, nothing happens in the
file system.
Any other idea?
Regards
Michael
"Sue Hoegemeier" wrote:
> One problem is that xp_delete_file belongs to the sys schema
> not the dbo schema.
> -Sue
> On Tue, 6 Jun 2006 06:17:01 -0700, Michael Roedeske
> <MichaelRoedeske@.discussions.microsoft.com> wrote:
>
|||Others have had the same issue, and others have not. I
haven't seen anyone post what solved it.
Try removing the dot from the extension: for example, list it
as just BAK, without the dot. Make sure
the extension is exactly what you are using for your
backups.
It was supposed to be fixed in SP1, but you may also want to
try creating a cleanup task for each subdirectory.
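A hedged sketch of that advice (all values are placeholders; the trailing argument is the commonly reported, undocumented subfolder flag, so verify it against your build before relying on it):

```sql
-- Extension listed as plain BAK, with no leading dot; the final 1 asks
-- the procedure to recurse into first-level subfolders instead of
-- requiring one cleanup task per directory.
EXECUTE master.sys.xp_delete_file 0, N'D:\test\', N'bak', N'2006-06-03T15:06:29', 1;
```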
People sometimes spend too much time troubleshooting maintenance plans when
they could just write scripts that have more flexibility, in less time than
they spend addressing maintenance plan issues.
-Sue
On Wed, 7 Jun 2006 01:06:02 -0700, Michael Roedeske
<MichaelRoedeske@.discussions.microsoft.com> wrote:
>Dear Sue,
>even when I call the sp with the sys schema it happes nothing in the
>filesystem.
>Any other idea?
>Regards
>Michael
>"Sue Hoegemeier" wrote:
Sunday, March 25, 2012
Clean Caches command vs recompile
Before I do a performance check on a stored procedure, I always clear the
caches using the following commands.
DBCC DROPCLEANBUFFERS
Go
DBCC FREEPROCCACHE
GO
But sometimes I think it's annoying, as it basically purges all the caches
from the SQL server.
So if I just test one procedure's performance, I can just add the 'WITH
RECOMPILE' keyword at the end, right?
Is that the same thing, since this way the stored procedure will not use the
cache even if cached plans exist?
|||No, it's not the same thing. Both 'WITH RECOMPILE' and 'DBCC FREEPROCCACHE'
will force a recompile of the procedure, but 'DBCC DROPCLEANBUFFERS' will
purge the data cache (which is the reason it's not advisable to do it on a
production server).
dean
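A side-by-side sketch of the two approaches contrasted above (MyProc is a placeholder procedure name):

```sql
-- Server-wide: empties the plan cache AND the clean pages of the data cache.
DBCC DROPCLEANBUFFERS;
DBCC FREEPROCCACHE;
GO

-- Per-call: compiles a fresh plan for this one procedure only; the data
-- cache and every other cached plan are left untouched.
EXECUTE MyProc WITH RECOMPILE;
```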
"Britney" <britneychen_2001@.yahoo.com> wrote in message
news:%23Ut4V3EFFHA.1084@.tk2msftngp13.phx.gbl...
>
> Before I do performance check for stored procedure, I always clean the
> caches using the following commands.
>
>
> DBCC DROPCLEANBUFFERS
> Go
> DBCC FREEPROCCACHE
> GO
>
> But sometimes I think it's annoying as it basically purges all the caches
> from the SQL server.
> So if I just test one procedure's performance, I can just add 'WITH
> recompile' keyword in the end, right?
> is that the same thing since this way stored procedure will not use the
> caches even caches exist.
Thursday, March 22, 2012
Classifieds Starter Kit in SQL Server 2005
Hello,
I am trying to get the Classifieds Starter Kit to work in SQL Server 2005. I did the following:
1. Created the DBs; the SQL scripts came with the starter kit in the App_Data folder.
2. To create aspnetdb: aspnet_regsql.exe -E -S localhost -A mr
3. Open SQL Server Configuration Manager. Select "SQL Server 2005 Network Configuration | Protocols for MSSQLSERVER" then enable the protocols 'TCP/IP' and possibly 'Named Pipes'
4. My connections string <add name="classifiedsConnection" connectionString="Data Source=MyServerName;;User ID=user;Password=pass;Initial Catalog=ClassAds" providerName="System.Data.SqlClient" />
I still get the following error when I debug the application:
An error has occurred while establishing a connection to the server. When connecting to SQL Server 2005, this failure may be caused by the fact that under the default settings SQL Server does not allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified)
How can I resolve this problem?
Looking at the steps you have gone through: you have set up the system with the database already attached to the engine, but have configured the location of the database files to use SQL Express as a user instance. To fix it, you will have to either attach the database files to the engine using the command line or SQL Express Management Studio, or change your connection string to attach the files at run time.
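For example, a hedged sketch of attaching the starter kit's files from a query window; the database name and file paths are assumptions, so substitute the actual locations of the .mdf/.ldf under App_Data:

```sql
-- Attach the existing data and log files to the SQL Server 2005 instance.
CREATE DATABASE ClassAds
ON (FILENAME = N'C:\Inetpub\Classifieds\App_Data\ClassAds.mdf'),
   (FILENAME = N'C:\Inetpub\Classifieds\App_Data\ClassAds_log.ldf')
FOR ATTACH;
```

Once attached, the connection string can keep pointing at the server with `Initial Catalog` rather than attaching files at run time.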
|||Thanks for the reply. I am new at this; can you give me an example of how I can do it?
Class not registered
when I am trying to open a table from Microsoft Visual Studio 2005,
I get the following error:
Class Not Registered. Looking for object with classID: ......
How do I resolve this?
Hi,
What is the class ID that it complains about? This seems more like a Visual Studio setup/installation issue than a SQL Server Data Access issue.
Thanks
Waseem
Clarification requested for 'Estimating the Size of a Table' (Estimating the Size of a Heap)
CREATE TABLE [picaweb].[temp_response] (
[ResponseID] [varchar] (30) NOT NULL ,
[Term] [varchar] (5) NOT NULL ,
[Subject] [varchar] (4) NOT NULL ,
[Course] [varchar] (4) NOT NULL ,
[Sect] [varchar] (3) NOT NULL ,
[MidEndFlag] [varchar] (3) NOT NULL ,
[SID] [varchar] (9) NOT NULL ,
[TemplateID] [varchar] (30) NOT NULL ,
[LastModified] [datetime] NULL ,
[College] [varchar] (30) NULL ,
[Classification] [varchar] (10) NULL ,
[CourseRequired] [varchar] (3) NULL ,
[ExpectedGrade] [varchar] (10) NULL ,
[Sex] [varchar] (6) NULL ,
[ItemAnswer1] [varchar] (1) NULL ,
[ItemComments1] [text] NULL ,
[ItemAnswer2] [varchar] (1) NULL ,
[ItemComments2] [text] NULL ,
[ItemAnswer3] [varchar] (1) NULL ,
[ItemComments3] [text] NULL ,
[ItemAnswer4] [varchar] (1) NULL ,
[ItemComments4] [text] NULL ,
[ItemAnswer5] [varchar] (1) NULL ,
[ItemComments5] [text] NULL ,
[ItemAnswer6] [varchar] (1) NULL ,
[ItemComments6] [text] NULL ,
[ItemAnswer7] [varchar] (1) NULL ,
[ItemComments7] [text] NULL ,
[ItemAnswer8] [varchar] (1) NULL ,
[ItemComments8] [text] NULL ,
[ItemAnswer9] [varchar] (1) NULL ,
[ItemComments9] [text] NULL ,
[ItemAnswer10] [varchar] (1) NULL ,
[ItemComments10] [text] NULL ,
[ItemAnswer11] [varchar] (1) NULL ,
[ItemComments11] [text] NULL ,
[ItemAnswer12] [varchar] (1) NULL ,
[ItemComments12] [text] NULL ,
[ItemAnswer13] [varchar] (1) NULL ,
[ItemComments13] [text] NULL ,
[ItemAnswer14] [varchar] (1) NULL ,
[ItemComments14] [text] NULL ,
[ItemAnswer15] [varchar] (1) NULL ,
[ItemComments15] [text] NULL ,
[ItemAnswer16] [varchar] (1) NULL ,
[ItemComments16] [text] NULL ,
[ItemAnswer17] [varchar] (1) NULL ,
[ItemComments17] [text] NULL ,
[ItemAnswer18] [varchar] (1) NULL ,
[ItemComments18] [text] NULL ,
[ItemAnswer19] [varchar] (1) NULL ,
[ItemComments19] [text] NULL ,
[ItemAnswer20] [varchar] (1) NULL ,
[ItemComments20] [text] NULL ,
[EssayQuestionAnswer1] [text] NULL ,
[EssayQuestionAnswer2] [text] NULL ,
[EssayQuestionAnswer3] [text] NULL ,
[EssayQuestionAnswer4] [text] NULL ,
[EssayQuestionAnswer5] [text] NULL
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
I would appreciate any insight anyone might be able to provide. My calculations were wrong somewhere, and I believe it was on the Variable_Data_Size variable determination.
Best,
B.
Have you checked in the 2K5 version of Books Online? I'm not sure if it was updated...
http://msdn2.microsoft.com/en-us/library/ms187445.aspx
|||Yes, it's pretty much the same as for 2000. Now that I've really thought about it, I suppose the size of the data to be stored "in row" should be treated like a fixed-length field, especially since it will reduce row density per page. I think that's what I'll do. Question answered.
Best,
B.
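As a cross-check on any hand estimate, one can load a representative sample of rows and ask the server what it actually allocated (a sketch; assumes the table above has been created and populated):

```sql
-- Reports the row count plus reserved, data, index_size and unused space
-- for the heap, including the allocation for the [text] columns.
EXEC sp_spaceused N'picaweb.temp_response';
```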
Monday, March 19, 2012
Choosing Lesser of Evils
I have the following SP that is the first step of cleaning bulk inserted
files before matching records against existing ones in the database:
UPDATE SampleSourceArchive
SET CompanyName = LTRIM(RTRIM(ISNULL(REPLACE(CompanyName, '"', ''), ''))),
    CompanyPhoneArea = LTRIM(RTRIM(ISNULL(REPLACE(CompanyPhoneArea, '"', ''), ''))),
    CompanyPhoneNumber = LTRIM(RTRIM(ISNULL(REPLACE(CompanyPhoneNumber, '"', ''), ''))),
    CompanyAddress1 = LTRIM(RTRIM(ISNULL(REPLACE(CompanyAddress1, '"', ''), ''))),
    CompanyAddress2 = LTRIM(RTRIM(ISNULL(REPLACE(CompanyAddress2, '"', ''), ''))),
    CompanyAddress3 = LTRIM(RTRIM(ISNULL(REPLACE(CompanyAddress3, '"', ''), ''))),
    City = LTRIM(RTRIM(ISNULL(REPLACE(City, '"', ''), ''))),
    StateProvince = LTRIM(RTRIM(ISNULL(REPLACE(StateProvince, '"', ''), ''))),
    PostalCode = LTRIM(RTRIM(ISNULL(REPLACE(PostalCode, '"', ''), ''))),
    Country = LTRIM(RTRIM(ISNULL(REPLACE(Country, '"', ''), ''))),
    SubIndustryCode = LTRIM(RTRIM(ISNULL(REPLACE(SubIndustryCode, '"', ''), ''))),
    SicCode = LTRIM(RTRIM(ISNULL(REPLACE(SicCode, '"', ''), ''))),
    RdhType = LTRIM(RTRIM(ISNULL(REPLACE(RdhType, '"', ''), ''))),
    RdhID = LTRIM(RTRIM(ISNULL(REPLACE(RdhID, '"', ''), ''))),
    WWCustomerNumber = LTRIM(RTRIM(ISNULL(REPLACE(WWCustomerNumber, '"', ''), ''))),
    DunsNumber = LTRIM(RTRIM(ISNULL(REPLACE(DunsNumber, '"', ''), ''))),
    EstablishmentSize = LTRIM(RTRIM(ISNULL(REPLACE(EstablishmentSize, '"', ''), ''))),
    ContactTitle = LTRIM(RTRIM(ISNULL(REPLACE(ContactTitle, '"', ''), ''))),
    FirstName = LTRIM(RTRIM(ISNULL(REPLACE(FirstName, '"', ''), ''))),
    FullName = LTRIM(RTRIM(ISNULL(REPLACE(FullName, '"', ''), ''))),
    ContactJob = LTRIM(RTRIM(ISNULL(REPLACE(ContactJob, '"', ''), ''))),
    ContactPhoneNumber = LTRIM(RTRIM(ISNULL(REPLACE(ContactPhoneNumber, '"', ''), ''))),
    IbmContactNumber = LTRIM(RTRIM(ISNULL(REPLACE(IbmContactNumber, '"', ''), ''))),
    ProductVendor = LTRIM(RTRIM(ISNULL(REPLACE(ProductVendor, '"', ''), ''))),
    ProductName = LTRIM(RTRIM(ISNULL(REPLACE(ProductName, '"', ''), ''))),
    SoftwareVersion = LTRIM(RTRIM(ISNULL(REPLACE(SoftwareVersion, '"', ''), ''))),
    SoftwareOS = LTRIM(RTRIM(ISNULL(REPLACE(SoftwareOS, '"', ''), ''))),
    HardwareSeries = LTRIM(RTRIM(ISNULL(REPLACE(HardwareSeries, '"', ''), ''))),
    HardwareProductGroup = LTRIM(RTRIM(ISNULL(REPLACE(HardwareProductGroup, '"', ''), ''))),
    SoftwareProductID = LTRIM(RTRIM(ISNULL(REPLACE(SoftwareProductID, '"', ''), ''))),
    SoftwareComponentID = LTRIM(RTRIM(ISNULL(REPLACE(SoftwareComponentID, '"', ''), ''))),
    SoftwarePartNumber = LTRIM(RTRIM(ISNULL(REPLACE(SoftwarePartNumber, '"', ''), ''))),
    HardwareModel = LTRIM(RTRIM(ISNULL(REPLACE(HardwareModel, '"', ''), ''))),
    HardwareMachineType = LTRIM(RTRIM(ISNULL(REPLACE(HardwareMachineType, '"', ''), ''))),
    ContactEmail = LTRIM(RTRIM(ISNULL(REPLACE(ContactEmail, '"', ''), ''))),
    CompanyNameLong = LTRIM(RTRIM(ISNULL(REPLACE(CompanyNameLong, '"', ''), ''))),
    InterviewLanguage = LTRIM(RTRIM(ISNULL(REPLACE(InterviewLanguage, '"', ''), ''))),
    Salutation = LTRIM(RTRIM(ISNULL(REPLACE(Salutation, '"', ''), ''))),
    CompanyNameAbbreviation = LTRIM(RTRIM(ISNULL(REPLACE(CompanyNameAbbreviation, '"', ''), ''))),
    UniversalCountryCode = LTRIM(RTRIM(ISNULL(REPLACE(UniversalCountryCode, '"', ''), ''))),
    EmailFlag = LTRIM(RTRIM(ISNULL(REPLACE(EmailFlag, '"', ''), ''))),
    SurveyFlag = LTRIM(RTRIM(ISNULL(REPLACE(SurveyFlag, '"', ''), '')))
WHERE SampleSourceKey = @sampleSourceKey
Each of the columns is NVARCHAR(255). Don't be alarmed; the records are never
actually that big. The nature of the data is that certain records use
certain columns (not my fault).
Business Rule:
Each value has to be trimmed, with NULLs replaced by zero-length strings (ZLS)
and any existing double-quotes dropped.
My Complaint:
Since a function gets invoked once for every record, if there are 42
columns getting updated and each one requires 4 nested function calls, then
each record getting updated needs 168 function calls, and a
@sampleSourceKey in the WHERE clause could make it touch anywhere from
half a million records to over a million records.
My Worse Complaint:
They have just changed the business rules to also require me to remove any of
these characters from every value in the SampleSource block:
10, 13, 34, 39, 46, 145, 146, 147, 148
I *really* don't want to make this uglier than it already is, so I have
thought of three different ways to do it:
1. ltrim(rtrim(replace(replace(replace(...n...(isnull(column,
''))))))))) for each column in the set clause (I REALLY don't want to do
this)
2. Make a CleanGarbage(columnvalue) UDF that basically does #1 above; at
least it encapsulates the mess (but still has gobs of function call
overhead)
3. Use sp_OACreate to get an instance of the Scripting.RegExp COM-based
regular expression object, hold onto its handle, and pass the handle and
value to be cleaned to a UDF. In this example all the UDF calls that run
the regular expression would be using the same instance of the same object,
have cleaner code, be more maintainable because when the character list to
be replaced changes, I'd only have to modify the regex pattern, and wouldn't
have the vast number of calls as #1 & #2, but on the other hand, this is
pretty obscure technique and I'm afraid that other people that come after me
might choke on it.
What are the comments about my three ideas? Feel free to suggest something
I haven't thought of -- I'd really like to have this be cleaner, and faster,
if possible. Speed is actually not a requirement, as this is about 3
percent of the work of this batch, and the stuff runs for hours at a time
because of other reasons.
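Option 2 might be sketched like this (the function name comes from the post; the character codes are the ones listed above, and whether codes 145-148 actually match the data depends on the code page the files were loaded from, so treat the list as illustrative):

```sql
CREATE FUNCTION dbo.CleanGarbage (@value NVARCHAR(255))
RETURNS NVARCHAR(255)
AS
BEGIN
    -- NULL becomes a zero-length string, then each unwanted character is
    -- stripped: LF(10), CR(13), "(34), '(39), .(46) and 145-148.
    SET @value = ISNULL(@value, N'');
    SET @value = REPLACE(@value, NCHAR(10), N'');
    SET @value = REPLACE(@value, NCHAR(13), N'');
    SET @value = REPLACE(@value, NCHAR(34), N'');
    SET @value = REPLACE(@value, NCHAR(39), N'');
    SET @value = REPLACE(@value, NCHAR(46), N'');
    SET @value = REPLACE(@value, NCHAR(145), N'');
    SET @value = REPLACE(@value, NCHAR(146), N'');
    SET @value = REPLACE(@value, NCHAR(147), N'');
    SET @value = REPLACE(@value, NCHAR(148), N'');
    RETURN LTRIM(RTRIM(@value));
END
```

With this in place, each assignment in the big UPDATE shrinks to one call, e.g. `SET CompanyName = dbo.CleanGarbage(CompanyName), ...`, and a rule change touches only the function body.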
--
Peace & happy computing,
Mike Labosh, MCSD
"Escriba coda ergo sum." -- vbSenseiIs DTS an option in this situation?
Rick Sawtell
MCT, MCSD, MCDBA
|||> 1. ltrim(rtrim(replace(replace(replace(...n...(isnull(column,
> ''))))))))) for each column in the set clause (I REALLY don't want to do
> this)
Urgh, I never ever want to maintain that!
> 2. Make a CleanGarbage(columnvalue) UDF that basically does #1 above, but
> at least it encapsulates the mess (still has gobbles of function call
> overhead)
As text-based functions are very slow in 2000, that wouldn't make it better. I
am working on an example to compare the efficiency of UDFs and SQL Server 2005
.NET assemblies; just check out my sites from time to time, they will be up
in the next few weeks.
> 3. Use sp_OACreate to get an instance of the Scripting.RegExp COM-based
> regular expression object, hold onto its handle, and pass the handle and
Wait a minute, that wouldn't be nice at all. :-(
None of your suggestions really makes me happy. If you are doing such
"immense" inserts into the database, I would write some code in .NET with some
kind of tracing: use a server-side cursor, scroll down the pages, skip the
rows that don't work with your translation, and mark them with some kind of
(bit) flag so you can clean up any malicious data that was inserted into the
database and couldn't be transformed. This application could be run via
command prompt / WinApp / WebApp or service.
If you can't code .NET, we'll have to find something else.
HTH, Jens Suessmeyer.
http://www.sqlserver2005.de
> I *really* don't want to make this uglier than it already is, so I have
> thought of three different ways to do it:
> 1.
> ltrim(rtrim(replace(replace(replace(repl
ace(replace(...n...(isnull(column,
> ''))))))))) for each column in the set clause (I REALLY don't want to do
> this)
> 2. Make a CleanGarbage(columnvalue) UDF that basically does #1 above, but
> at least it encapsulates the mess (still has gobbles of function call
> overhead)
> 3. Use sp_OACreate to get an instance of the Scripting.RegExp COM-based
> regular expression object, hold onto its handle, and pass the handle and
> value to be cleaned to a UDF. In this example all the UDF calls that run
> the regular expression would be using the same instance of the same
> object, have cleaner code, be more maintainable because when the character
> list to be replaced changes, I'd only have to modify the regex pattern,
> and wouldn't have the vast number of calls as #1 & #2, but on the other
> hand, this is pretty obscure technique and I'm afraid that other people
> that come after me might choke on it.
> What are the comments about my three ideas? Feel free to suggest
> something I haven't thought of -- I'd really like to have this be cleaner,
> and faster, if possible. Speed is actually not a requirement, as this is
> about 3 percent of the work of this batch, and the stuff runs for hours at
> a time because of other reasons.
> --
> Peace & happy computing,
> Mike Labosh, MCSD
> "Escriba coda ergo sum." -- vbSensei
|||> Is DTS an option in this situation?
I personally am not that well versed in DTS. I would have to read up on
some of its different transform features.
Also in the end, the package would have to be invokable either from a stored
procedure, or .NET. Can that be done?
--
Peace & happy computing,
Mike Labosh, MCSD
"Escriba coda ergo sum." -- vbSensei
"Rick Sawtell" <r_sawtell@.hotmail.com> wrote in message
news:%23qP7bYNVFHA.4092@.TK2MSFTNGP12.phx.gbl...
> Rick Sawtell
> MCT, MCSD, MCDBA
|||I would rather code an assembly which can be used by calling it from a
stored procedure (via XP_CMDSHELL) or just at the time you want to (via
WinApp, etc.). With DTS the great benefit is doing things in parallel; there's
no use for that here because it will be updated as a whole.
HTH, Jens Suessmeyer.
http://www.sqlsever2005.de
--
"Mike Labosh" <mlabosh@.hotmail.com> schrieb im Newsbeitrag
news:uQcxufNVFHA.4092@.TK2MSFTNGP12.phx.gbl...
> I personally am not that well versed in DTS. I would have to read up on
> some of its different transform features.
> Also in the end, the package would have to be invokable either from a
> stored procedure, or .NET. Can that be done?
> --
> Peace & happy computing,
> Mike Labosh, MCSD
> "Escriba coda ergo sum." -- vbSensei
> "Rick Sawtell" <r_sawtell@.hotmail.com> wrote in message
> news:%23qP7bYNVFHA.4092@.TK2MSFTNGP12.phx.gbl...
|||I prefer to do all this in a pre-processing stage on the application side,
often via a .NET app. It has much better text processing/manipulation
capabilities.
Based on doing it in SQL 2K, I would think any of your possible solutions
would run pretty poorly... sp_OACreate especially... That seems like some
serious overkill. If you're stuck with doing this SQL Server-side, might as
well put it in a SP and/or UDF to at least keep some level of
maintainability... Can you imagine the mess in trying to update your #1
solution below when they decide they want to eliminate all ASCII character
code 9's from the data next week?
"Mike Labosh" <mlabosh@.hotmail.com> wrote in message
news:OwFpG9MVFHA.2664@.TK2MSFTNGP15.phx.gbl...
>I have the following SP that is the first step of cleaning bulk inserted
>files before matching records against existing ones in the database:
> UPDATE SampleSourceArchive
> [... SET clause snipped; it is the same statement quoted at the top of the thread ...]
> WHERE SampleSourceKey = @.sampleSourceKey
> Each of the columns is NVARCHAR(255) Don't be alarmed, the records are
> never actually that big. The nature of the data is that certain records
> use certain columns (not my fault).
> Business Rule:
> Each value has to get trimmed, replace nulls with ZLS's and drop any
> existing double-quotes.
> My Complaint:
> Since a function gets invoked once for every record, then if there are 42
> columns getting updated, and each one requires 4 nested function calls,
> that seems to me that each record getting updated needs 168 function
> calls, and a @.sampleSourceKey in the where clause could make it touch
> anywhere from a half million records to over a million records.
> My Worse Complaint:
> The have just changed the business rules to also require me to remove any
> of these characters from every value in the SampleSource block:
> 10, 13, 34, 39, 46, 145, 146, 147, 148
> I *really* don't want to make this uglier than it already is, so I have
> thought of three different ways to do it:
> 1.
> ltrim(rtrim(replace(replace(replace(repl
ace(replace(...n...(isnull(column,
> ''))))))))) for each column in the set clause (I REALLY don't want to do
> this)
> 2. Make a CleanGarbage(columnvalue) UDF that basically does #1 above, but
> at least it encapsulates the mess (still has gobbles of function call
> overhead)
> 3. Use sp_OACreate to get an instance of the Scripting.RegExp COM-based
> regular expression object, hold onto its handle, and pass the handle and
> value to be cleaned to a UDF. In this example all the UDF calls that run
> the regular expression would be using the same instance of the same
> object, have cleaner code, be more maintainable because when the character
> list to be replaced changes, I'd only have to modify the regex pattern,
> and wouldn't have the vast number of calls as #1 & #2, but on the other
> hand, this is pretty obscure technique and I'm afraid that other people
> that come after me might choke on it.
> What are the comments about my three ideas? Feel free to suggest
> something I haven't thought of -- I'd really like to have this be cleaner,
> and faster, if possible. Speed is actually not a requirement, as this is
> about 3 percent of the work of this batch, and the stuff runs for hours at
> a time because of other reasons.
> --
> Peace & happy computing,
> Mike Labosh, MCSD
> "Escriba coda ergo sum." -- vbSensei
|||Thanks All.
After a "team meeting", the boss has decided to go with my #1
ltrim(rtrim(replace(replace(replace...
Because, "It's the only thing you're going to get a solution on my the end
of today. Just load the SP into notepad and change the expression with a
find-replace."
I have very well documented my concerns. This is one of those places where
the solution is to do it fast and dumb, and if it runs like death, we'll
just throw a bigger server at it. LOL!
Sometimes there just aren't enough drugs.
Peace & happy computing,
Mike Labosh, MCSD
"Escriba coda ergo sum." -- vbSensei|||Reminds me of the relationshsip between Dilbert and his boss --> Just do it
quick and dirty, you will have enough time for rest of your life to maintain
the errors ;-)
http://www.dailydilbert.com
HTH, Jens Suessmeyer.
"Mike Labosh" <mlabosh@.hotmail.com> schrieb im Newsbeitrag
news:eV5wsEOVFHA.3944@.tk2msftngp13.phx.gbl...
> Thanks All.
> After a "team meeting", the boss has decided to go with my #1
> ltrim(rtrim(replace(replace(replace...
> Because, "It's the only thing you're going to get a solution on my the end
> of today. Just load the SP into notepad and change the expression with a
> find-replace."
> I have very well documented my concerns. This is one of those places
> where there solution is to do it fast and dumb, and if it runs like death,
> we'll just throw a bigger server at it. LOL!
> Sometimes there just aren't enough drugs.
> --
> Peace & happy computing,
> Mike Labosh, MCSD
> "Escriba coda ergo sum." -- vbSensei
|||The best managers tend to handle the business rules and leave the
implementation details to the developers. After all, you don't see the
President sitting in the middle of a battlefield calculating coordinates for
outbound artillery shells.
"Mike Labosh" <mlabosh@.hotmail.com> wrote in message
news:eV5wsEOVFHA.3944@.tk2msftngp13.phx.gbl...
> Thanks All.
> After a "team meeting", the boss has decided to go with my #1
> ltrim(rtrim(replace(replace(replace...
> Because, "It's the only thing you're going to get a solution on my the end
> of today. Just load the SP into notepad and change the expression with a
> find-replace."
> I have very well documented my concerns. This is one of those places
> where there solution is to do it fast and dumb, and if it runs like death,
> we'll just throw a bigger server at it. LOL!
> Sometimes there just aren't enough drugs.
> --
> Peace & happy computing,
> Mike Labosh, MCSD
> "Escriba coda ergo sum." -- vbSensei
>
Choosing database between MSDE and Access
I need to choose a database based on the following criteria (using a .NET app):
1) a light but fully functional database, preferably with support for stored procedures and constraints; fewer than 8000 transactions a day.
2) portable, or a database that can be exported/imported very easily
3) reliable and stable
4) least maintenance
I have two DBs in mind: Access and MSDE.
Does anyone have hands-on experience with the above two? Or any other, better suggestions?
Any advice is appreciated.
thanks,
bryan
|||Using MSDE will generally be more trouble-free. A file-based database on a Web server is not ideal.
Sunday, March 11, 2012
Choosing a Licensing Mode
Hello, all. We have a question as to the licensing mode of Reporting
Services for SQL Server 2K. We wish to install according to the following
scheme,
- Report Server: on our web server (ws1)
- Report Manager: on our web server (ws1)
- Report Server Database: on our SQL Server db (sql1)
- Report Designer: on our workstations
Reading what is on this page
http://www.microsoft.com/technet/prodtechnol/sql/2000/deploy/rsdepgd.mspx,
it says that "You must have a valid SQL Server license for each computer
that has a SQL Server 2000 component (such as Reporting Services) installed.
For example, the computer running Reporting Services requires a SQL Server
license. If the report server database is located remotely, a separate
license is required for the remote computer running SQL Server."
Does this mean that we must have a separate SQL Server license for our ws1
machine to be able to run Report Server and Report Manager than the license
we have on our sql1 machine? Thanks in advance :-)
|||The following is general guidance; I'm not an expert on licensing, so what I
say definitely does not 'go'. I recommend you speak with your TAM/Sales rep
to ensure you're correctly licensed based on your deployment, hardware, etc.
In general, you need a SQL license for each computer on which you deploy
server components. Thus, if you deploy the report server on computer A and your
SQL engine is on computer B, you need to have a SQL license for A and
another for B, totalling 2 processor licenses.
Then if you have multi-proc machines, it gets interesting. Speak with
your TAM/Sales rep.
-Lukasz
--
This posting is provided "AS IS" with no warranties, and confers no rights.
"dw" <cougarmana_NOSPAM@.uncw.edu> wrote in message
news:e9KNnVZbEHA.1644@.tk2msftngp13.phx.gbl...
> Hello, all. We have a question as to the licensing mode of Reporting
> Services for SQL Server 2K. We wish to install according to the following
> scheme,
> - Report Server: on our web server (ws1)
> - Report Manager: on our web server (ws1)
> - Report Server Database: on our SQL Server db (sql1)
> - Report Designer: on our workstations
> Reading what is on this page
> http://www.microsoft.com/technet/prodtechnol/sql/2000/deploy/rsdepgd.mspx,
> it says that "You must have a valid SQL Server license for each computer
> that has a SQL Server 2000 component (such as Reporting Services)
> installed.
> For example, the computer running Reporting Services requires a SQL Server
> license. If the report server database is located remotely, a separate
> license is required for the remote computer running SQL Server."
> Does this mean that we must have a separate SQL Server license for our ws1
> machine to be able to run Report Server and Report Manager than the
> license
> we have on our sql1 machine? Thanks in advance :-)
|||Thanks Lukasz. That's what I was afraid of.
"Lukasz Pawlowski [MSFT]" <lukaszp@.online.microsoft.com> wrote in message
news:esiORlcbEHA.2340@.TK2MSFTNGP10.phx.gbl...
> The following is general guidance, I'm not an expert on licensing so what
I
> say definitely does not 'go'. I recommend you speak with your TAM/Sales
rep
> to ensure you're correctly licensed based on your deployment, hardware,
etc.
> In general, you need a SQL licence for each computer on which you deploy
> server components. Thus, if you deploy report server on computer A and
your
> SQL engine is on computer B, you need to have a SQL licence for of A and
> another for B, totalling 2 processor licenses.
> Thyen if you have multi-proc machines, it gets interesting. Speak with
> your TAM/Sales rep.
> -Lukasz
> --
> This posting is provided "AS IS" with no warranties, and confers no
rights.
>
> "dw" <cougarmana_NOSPAM@.uncw.edu> wrote in message
> news:e9KNnVZbEHA.1644@.tk2msftngp13.phx.gbl...
> > Hello, all. We have a question as to the licensing mode of Reporting
> > Services for SQL Server 2K. We wish to install according to the
following
> > scheme,
> >
> > - Report Server: on our web server (ws1)
> > - Report Manager: on our web server (ws1)
> > - Report Server Database: on our SQL Server db (sql1)
> > - Report Designer: on our workstations
> >
> > Reading what is on this page
> >
http://www.microsoft.com/technet/prodtechnol/sql/2000/deploy/rsdepgd.mspx,
> > it says that "You must have a valid SQL Server license for each computer
> > that has a SQL Server 2000 component (such as Reporting Services)
> > installed.
> > For example, the computer running Reporting Services requires a SQL
Server
> > license. If the report server database is located remotely, a separate
> > license is required for the remote computer running SQL Server."
> >
> > Does this mean that we must have a separate SQL Server license for our
ws1
> > machine to be able to run Report Server and Report Manager than the
> > license
> > we have on our sql1 machine? Thanks in advance :-)
> >
> >
>
Thursday, March 8, 2012
Child/Parent relationship within table
Hi everyone, I have a categories table which has the following main
attributes:
CategoryName, CategoryID <- Identity, ParentCategoryID
What I need help doing is constructing a Procedure/ SQL query where I can
show the expanded relationships for each record in the table.
e.g. If I have three records in the table (Following the attributes
described above)
ParentCategory, 1, 0
ChildCategory, 2,1
SubChildCategory, 3,2
I want to be able to dynamically return the following information when I
execute the query (each row looks at the parent id and concatenates itself
to its parent's CategoryName).
ParentCategory
ParentCategory/ChildCategory
ParentCategory/ChildCategory/SubChildCategory
Thanks in advance
Marlk|||Everything you are doing is wrong. Let's get back to the basics of an
RDBMS. Rows are not records; fields are not columns; tables are not
files; there is no sequential access or ordering in an RDBMS, so
"first", "next" and "last" are totally meaningless.
Stop using IDENTITY and learn what a relational key is.
Get a copy of TREES & HIERARCHIES IN SQL. You are trying to write
(uughh!) procedural code to build a traversal that will create a path.
This is not the best way; google a "nested sets model" instead.|||There are many, many resources for describing ways to efficiently model
hierarchies and trees in SQL.
Google for the following topics: Nested Sets, Nested Intervals, Adjacency
List, Materialized Path
Here is a good starting point for finding information about this topic:
http://troels.arvin.dk/db/rdbms/links/#hierarchical
As a first step, you might consider creating a new table to manage the
relationship between categories - right now, you are modeling both the
category and the relationship between categories in the same table.
Normalizing the design can give you added flexibility depending on your
requirements.
What you have here is basically an Adjacency List model. This model has
excellent characteristics with regard to modifying the layout of the
hierarchy; you simply change the ParentCategoryID of a node to a different
value, and you instantly "move" that node and all referencing nodes to a
different location in the hierarchy. However, it does not work very well for
retrieving the structure, as you are seeing. SQL Server Books Online has a
section titled "Expanding Hierarchies" that describes an iterative process
of querying the hierarchy that involves using a temporary table as a stack,
but SQL is really optimized for set-based operations. You will probably find
something at the link above that better meets your needs.
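If you are on SQL Server 2005 or later, a recursive common table expression also gives you the path-building query directly, without the temp-table stack that the "Expanding Hierarchies" topic describes. Here is a sketch of the idea, illustrated against SQLite from Python because its WITH RECURSIVE syntax is close to SQL Server's WITH ... UNION ALL; the table and sample rows follow the original post, but treat this as an illustration rather than a tested SQL Server script.

```python
# Sketch: building ParentCategory/ChildCategory/... paths from an
# adjacency list with a recursive CTE. Illustrated with SQLite; on
# SQL Server 2005 the shape is WITH Paths (...) AS (... UNION ALL ...)
# with '+' for string concatenation instead of '||'.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Categories (
    CategoryName TEXT,
    CategoryID INTEGER PRIMARY KEY,
    ParentCategoryID INTEGER
);
INSERT INTO Categories VALUES ('ParentCategory', 1, 0);
INSERT INTO Categories VALUES ('ChildCategory', 2, 1);
INSERT INTO Categories VALUES ('SubChildCategory', 3, 2);
""")

rows = conn.execute("""
WITH RECURSIVE Paths(CategoryID, Path) AS (
    SELECT CategoryID, CategoryName
    FROM Categories
    WHERE ParentCategoryID = 0              -- anchor: root nodes
    UNION ALL
    SELECT c.CategoryID, p.Path || '/' || c.CategoryName
    FROM Categories c
    JOIN Paths p ON c.ParentCategoryID = p.CategoryID
)
SELECT Path FROM Paths ORDER BY CategoryID
""").fetchall()

for (path,) in rows:
    print(path)
```

On SQL Server, the equivalent query can be capped with OPTION (MAXRECURSION n) to guard against cycles in the ParentCategoryID chain.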
"Mark" <dont@.spam.me> wrote in message
news:%233O9T%23KuFHA.1560@.TK2MSFTNGP09.phx.gbl...
> Hi everyone, I have a categories table which has the following main
> attributes
> CategoryName, CategoryID <- Identity , ParentCategoryID
> What I need help doing is constructing a Procedure/ SQL query where I can
> show the expanded relationships for each record in the table.
> e.g. If I have three records in the table (Following the attributes
> described above)
> ParentCategory, 1, 0
> ChildCategory, 2,1
> SubChildCategory, 3,2
> I want to be able to dynamically return the following information when I
> execute the query (Each row looks at the parent id and concatenates itself
> to its parents CategoryName).
> ParentCategory
> ParentCategory/ChildCategory
> ParentCategory/ChildCategory/SubChildCategory
> Thanks in advance
> Marlk
>
>
>
>|||Whoa! Cool, thanks for the links
Cheers
Mark
"Jeremy Williams" <jeremydwill@.netscape.net> wrote in message
news:ekyuNZLuFHA.2072@.TK2MSFTNGP14.phx.gbl...
> There are many, many resources for describing ways to efficiently model
> hierarchies and trees in SQL.
> Google for the following topics: Nested Sets, Nested Intervals, Adjacency
> List, Materialized Path
> Here is a good starting point for finding information about this topic:
> http://troels.arvin.dk/db/rdbms/links/#hierarchical
> As a first step, you might consider creating a new table to manage the
> relationship between categories - right now, you are modeling both the
> category and the relationship between categories in the same table.
> Normalizing the design can give you added flexibility depending on your
> requirements.
> What you have here is basically an Adjacency List model. This model has
> excellent characteristics with regard to modifying the layout of the
> hierarchy; you simply change the ParentCategoryID of a node to a different
> value, and you instantly "move" that node and all referencing nodes to a
> different location in the hierarchy. However, it does not work very well
for
> retrieving the structure, as you are seeing. SQL Server Books Online has a
> section titled "Expanding Hierarchies" that describes an iterative process
> of querying the hierarchy that involves using a temporary table as a
stack,
> but SQL is really optimized for set-based operations. You will probably
find
> something at the link above that better meets your needs.
> "Mark" <dont@.spam.me> wrote in message
> news:%233O9T%23KuFHA.1560@.TK2MSFTNGP09.phx.gbl...
can
itself
>
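To make the nested sets suggestion above concrete: each node stores a (lft, rgt) interval, and ancestor/descendant queries become plain range comparisons, with no recursion at read time. Below is a minimal sketch via SQLite from Python, with the interval values assigned by hand for the thread's three-row example; this is illustrative only, since the model's real cost is maintaining lft/rgt when nodes are inserted or moved.

```python
# Sketch of the nested sets model: each category carries a (lft, rgt)
# interval, and a node's descendants fall out of simple comparisons.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Categories (
    CategoryName TEXT,
    lft INTEGER,
    rgt INTEGER
);
-- Hand-assigned intervals for the three-row example:
-- ParentCategory contains ChildCategory contains SubChildCategory.
INSERT INTO Categories VALUES ('ParentCategory',   1, 6);
INSERT INTO Categories VALUES ('ChildCategory',    2, 5);
INSERT INTO Categories VALUES ('SubChildCategory', 3, 4);
""")

# All descendants of ParentCategory: nodes whose interval nests inside it.
descendants = [r[0] for r in conn.execute("""
SELECT c.CategoryName
FROM Categories c, Categories p
WHERE p.CategoryName = 'ParentCategory'
  AND c.lft > p.lft AND c.rgt < p.rgt
ORDER BY c.lft
""")]
print(descendants)  # ['ChildCategory', 'SubChildCategory']
```

The trade-off against the adjacency list is exactly the one described above: reads are cheap set operations, but moving a subtree means renumbering intervals.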
checktable error
I receive the following after running checktable:
CHECKTABLE found 0 allocation errors and 2 consistency errors in table
'RequestWords' (object ID 910678342).
repair_allow_data_loss is the minimum repair level for the errors found by
DBCC CHECKTABLE (synergy.dbo.RequestWords ).
I have a backup and would like to know the best way to proceed to correct
the consistency errors...also where can I find the sp_repair_allow_data_loss?
Thanks|||> I have a backup and would like to know the best way to proceed to correct
> the consistency errors...also where can I find the
> sp_repair_allow_data_loss?
Restoring from your last known good backup is usually the best option rather
than allowing data loss. There may be other methods depending on the
specific types of corruption and affected object types.
REPAIR_ALLOW_DATA_LOSS is an option of the DBCC CHECKDB and DBCC CHECKTABLE
commands. See the Books online for details.
--
Hope this helps.
Dan Guzman
SQL Server MVP
"Gerry M" <GerryM@.discussions.microsoft.com> wrote in message
news:264DC93B-200F-4567-B853-8B0DCF91852C@.microsoft.com...
>I receive the following after running checktable:
> CHECKTABLE found 0 allocation errors and 2 consistency errors in table
> 'RequestWords' (object ID 910678342).
> repair_allow_data_loss is the minimum repair level for the errors found by
> DBCC CHECKTABLE (synergy.dbo.RequestWords ).
> I have a backup and would like to know the best way to proceed to correct
> the consistency errors...also where can I find the
> sp_repair_allow_data_loss?
> Thanks|||Not sure when my last known good is and it could take quite a while to try and
find it...is there a way to see what data would be lost first before running
REPAIR_ALLOW_DATA_LOSS ?
"Dan Guzman" wrote:
> > I have a backup and would like to know the best way to proceed to correct
> > the consistency errors...also where can I find the
> > sp_repair_allow_data_loss?
> Restoring from your last known good backup is usually the best option rather
> than allowing data loss. There may be other methods depending on the
> specific types of corruption and affected object types.
> REPAIR_ALLOW_DATA_LOSS is an option of the DBCC CHECKDB and DBCC CHECKTABLE
> commands. See the Books online for details.
> --
> Hope this helps.
> Dan Guzman
> SQL Server MVP
> "Gerry M" <GerryM@.discussions.microsoft.com> wrote in message
> news:264DC93B-200F-4567-B853-8B0DCF91852C@.microsoft.com...
> >I receive the following after running checktable:
> >
> > CHECKTABLE found 0 allocation errors and 2 consistency errors in table
> > 'RequestWords' (object ID 910678342).
> > repair_allow_data_loss is the minimum repair level for the errors found by
> > DBCC CHECKTABLE (synergy.dbo.RequestWords ).
> >
> > I have a backup and would like to know the best way to proceed to correct
> > the consistency errors...also where can I find the
> > sp_repair_allow_data_loss?
> >
> > Thanks
|||> Not sure when my last known good is and it could take quite a while to try
> and
> find it...is there a way to see what data would be lost first before
> running
> REPAIR_ALLOW_DATA_LOSS ?
The DBCC CHECKTABLE output should specify the problem pages and error
details. You can use DBCC PAGE (google is your friend) to examine the page
contents and get an idea of what data might be affected. I don't believe it's
possible to provide exact details of lost data beforehand; I would think
DBCC could recover the data without loss if that were possible.
You might try posting the DBCC error details in case we can provide an
alternate solution. You could also try running the DBCC CHECKTABLE WITH
REPAIR_ALLOW_DATA_LOSS on a database copy to see what DBCC did to correct
the problem and identify lost data.
--
Hope this helps.
Dan Guzman
SQL Server MVP
"Gerry M" <GerryM@.discussions.microsoft.com> wrote in message
news:60356673-18A9-41B8-929D-265E419590DA@.microsoft.com...
> Not sure when my last known good is and it could take quite awhile to try
> and
> find it...is there a way to see what data would be lost first before
> running
> REPAIR_ALLOW_DATA_LOSS ?
> "Dan Guzman" wrote:
>> > I have a backup and would like to know the best way to proceed to
>> > correct
>> > the consistency errors...also where can I find the
>> > sp_repair_allow_data_loss?
>> Restoring from your last known good backup is usually the best option
>> rather
>> than allowing data loss. There may be other methods depending on the
>> specific types of corruption and affected object types.
>> REPAIR_ALLOW_DATA_LOSS is an option of the DBCC CHECKDB and DBCC
>> CHECKTABLE
>> commands. See the Books online for details.
>> --
>> Hope this helps.
>> Dan Guzman
>> SQL Server MVP
>> "Gerry M" <GerryM@.discussions.microsoft.com> wrote in message
>> news:264DC93B-200F-4567-B853-8B0DCF91852C@.microsoft.com...
>> >I receive the following after running checktable:
>> >
>> > CHECKTABLE found 0 allocation errors and 2 consistency errors in table
>> > 'RequestWords' (object ID 910678342).
>> > repair_allow_data_loss is the minimum repair level for the errors found
>> > by
>> > DBCC CHECKTABLE (synergy.dbo.RequestWords ).
>> >
>> > I have a backup and would like to know the best way to proceed to
>> > correct
>> > the consistency errors...also where can I find the
>> > sp_repair_allow_data_loss?
>> >
|||Dan, I am going to paste the following detail. The table is a simple
2-column table used by a full-text indexing job; the first field is just
text and the second is a GUID... I went to record 128, but saw nothing
funny... appreciate all your help
Server: Msg 8928, Level 16, State 1, Line 1
Object ID 910678342, index ID 0: Page (1:369538) could not be processed. See
other errors for details.
Server: Msg 8944, Level 16, State 1, Line 1
Table error: Object ID 910678342, index ID 0, page (1:369538), row 125. Test
(ColumnOffsets <= (nextRec - pRec)) failed. Values are 2088 and 43.
DBCC results for 'RequestWords'.
There are 1313272 rows in 11072 pages for object 'RequestWords'.
CHECKTABLE found 0 allocation errors and 2 consistency errors in table
'RequestWords' (object ID 910678342).
repair_allow_data_loss is the minimum repair level for the errors found by
DBCC CHECKTABLE (synergy.dbo.RequestWords ).
"Dan Guzman" wrote:
> > Not sure when my last known good is and it could take quite awhile to try
> > and
> > find it...is there a way to see what data would be lost first before
> > running
> > REPAIR_ALLOW_DATA_LOSS ?
> The DBCC CHECKTABLE output should specify the problem pages and error
> details. You can use DBCC PAGE (google is your friend) to examine the page
> contents and get an idea of data might be affected. I don't believe it's
> possible to provide exact details of lost data beforehand; I would think
> DBCC could recover the data without loss if that were possible.
> You might try posting the DBCC error details in case we can provide an
> alternate solution. You could also try running the DBCC CHECKTABLE WITH
> REPAIR_ALLOW_DATA_LOSS on a database copy to see what DBCC did to correct
> the problem and identify lost data.
> --
> Hope this helps.
> Dan Guzman
> SQL Server MVP
> "Gerry M" <GerryM@.discussions.microsoft.com> wrote in message
> news:60356673-18A9-41B8-929D-265E419590DA@.microsoft.com...
> > Not sure when my last known good is and it could take quite awhile to try
> > and
> > find it...is there a way to see what data would be lost first before
> > running
> > REPAIR_ALLOW_DATA_LOSS ?
> >
> > "Dan Guzman" wrote:
> >
> >> > I have a backup and would like to know the best way to proceed to
> >> > correct
> >> > the consistency errors...also where can I find the
> >> > sp_repair_allow_data_loss?
> >>
> >> Restoring from your last known good backup is usually the best option
> >> rather
> >> than allowing data loss. There may be other methods depending on the
> >> specific types of corruption and affected object types.
> >>
> >> REPAIR_ALLOW_DATA_LOSS is an option of the DBCC CHECKDB and DBCC
> >> CHECKTABLE
> >> commands. See the Books online for details.
> >>
> >> --
> >> Hope this helps.
> >>
> >> Dan Guzman
> >> SQL Server MVP
> >>
> >> "Gerry M" <GerryM@.discussions.microsoft.com> wrote in message
> >> news:264DC93B-200F-4567-B853-8B0DCF91852C@.microsoft.com...
> >> >I receive the following after running checktable:
> >> >
> >> > CHECKTABLE found 0 allocation errors and 2 consistency errors in table
> >> > 'RequestWords' (object ID 910678342).
> >> > repair_allow_data_loss is the minimum repair level for the errors found
> >> > by
> >> > DBCC CHECKTABLE (synergy.dbo.RequestWords ).
> >> >
> >> > I have a backup and would like to know the best way to proceed to
> >> > correct
> >> > the consistency errors...also where can I find the
> >> > sp_repair_allow_data_loss?
> >> >
> >> > Thanks
> >>
|||The DBCC error indicates bad column offsets, which will prevent the problem
row from being parsed. I think DBCC will need to delete that row to remove
the error. You can probably fix the error with a normal delete command if
you can identify the key value of the problem row from the raw DBCC PAGE
output.
If you haven't already done so, you might try DBCC PAGE print option 3
(http://support.microsoft.com/kb/83065) to print the individual column
values (example below). I don't know how this will behave with bad column
offsets but I'm curious to find out, if you don't mind giving that a try.
The DBCC PAGE dump is probably the only data you'll have to salvage the
deleted row, unless you contact Microsoft PSS.
Hope this helps.
Dan Guzman
SQL Server MVP
"Gerry M" <GerryM@.discussions.microsoft.com> wrote in message
news:042EAEE7-D5C2-4847-9AF9-DFC92E3885FB@.microsoft.com...
> Dan, I am going to paste the following detail, the table is a simple 2
> column
> that is used by a full text indexing job, it seems, the first field is
> just
> text and the second is a guid...I went to record 128, but saw nothing
> funny.....appreciate all your help
> Server: Msg 8928, Level 16, State 1, Line 1
> Object ID 910678342, index ID 0: Page (1:369538) could not be processed.
> See
> other errors for details.
> Server: Msg 8944, Level 16, State 1, Line 1
> Table error: Object ID 910678342, index ID 0, page (1:369538), row 125.
> Test
> (ColumnOffsets <= (nextRec - pRec)) failed. Values are 2088 and 43.
> DBCC results for 'RequestWords'.
> There are 1313272 rows in 11072 pages for object 'RequestWords'.
> CHECKTABLE found 0 allocation errors and 2 consistency errors in table
> 'RequestWords' (object ID 910678342).
> repair_allow_data_loss is the minimum repair level for the errors found by
> DBCC CHECKTABLE (synergy.dbo.RequestWords ).
>
>
> "Dan Guzman" wrote:
>> > Not sure when my last known good is and it could take quite awhile to
>> > try
>> > and
>> > find it...is there a way to see what data would be lost first before
>> > running
>> > REPAIR_ALLOW_DATA_LOSS ?
>> The DBCC CHECKTABLE output should specify the problem pages and error
>> details. You can use DBCC PAGE (google is your friend) to examine the
>> page
>> contents and get an idea of data might be affected. I don't believe it's
>> possible to provide exact details of lost data beforehand; I would think
>> DBCC could recover the data without loss if that were possible.
>> You might try posting the DBCC error details in case we can provide an
>> alternate solution. You could also try running the DBCC CHECKTABLE WITH
>> REPAIR_ALLOW_DATA_LOSS on a database copy to see what DBCC did to correct
>> the problem and identify lost data.
>> --
>> Hope this helps.
>> Dan Guzman
>> SQL Server MVP
>> "Gerry M" <GerryM@.discussions.microsoft.com> wrote in message
>> news:60356673-18A9-41B8-929D-265E419590DA@.microsoft.com...
>> > Not sure when my last known good is and it could take quite awhile to
>> > try
>> > and
>> > find it...is there a way to see what data would be lost first before
>> > running
>> > REPAIR_ALLOW_DATA_LOSS ?
>> >
>> > "Dan Guzman" wrote:
>> >
>> >> > I have a backup and would like to know the best way to proceed to
>> >> > correct
>> >> > the consistency errors...also where can I find the
>> >> > sp_repair_allow_data_loss?
>> >>
>> >> Restoring from your last known good backup is usually the best option
>> >> rather
>> >> than allowing data loss. There may be other methods depending on the
>> >> specific types of corruption and affected object types.
>> >>
>> >> REPAIR_ALLOW_DATA_LOSS is an option of the DBCC CHECKDB and DBCC
>> >> CHECKTABLE
>> >> commands. See the Books online for details.
>> >>
>> >> --
>> >> Hope this helps.
>> >>
>> >> Dan Guzman
>> >> SQL Server MVP
>> >>
>> >> "Gerry M" <GerryM@.discussions.microsoft.com> wrote in message
>> >> news:264DC93B-200F-4567-B853-8B0DCF91852C@.microsoft.com...
>> >> >I receive the following after running checktable:
>> >> >
>> >> > CHECKTABLE found 0 allocation errors and 2 consistency errors in
>> >> > table
>> >> > 'RequestWords' (object ID 910678342).
>> >> > repair_allow_data_loss is the minimum repair level for the errors
>> >> > found
>> >> > by
>> >> > DBCC CHECKTABLE (synergy.dbo.RequestWords ).
>> >> >
>> >> > I have a backup and would like to know the best way to proceed to
>> >> > correct
>> >> > the consistency errors...also where can I find the
>> >> > sp_repair_allow_data_loss?
>> >> >
>> >> > Thanks
|||Dan, sorry it took so long to get back...
I was able to locate the corrupt record in the referenced table...it was a
null id that had a guid assigned to it from another table...very weird...the
delete statement would not work on it....had to finally run
repair_allow_data_loss and then repair_fast. I lost 132 records but they
were all bogus references to the corruption...so it turned out all good.
Thanks for your assistance.
"Dan Guzman" wrote:
> The DBCC error indicates bad column offsets, which will prevent the problem
> row from being parsed. I think DBCC will need to delete that row to remove
> the error. You can probably fix the error with a normal delete command if
> you can identify the key value of the problem row from the raw DBCC PAGE
> output.
> If you haven't already done so, you might try DBCC PAGE print option 3
> (http://support.microsoft.com/kb/83065) to print the individual column
> values (example below). I don't know how this will behave with bad column
> offsets but I'm curious to find out, if you don't mind giving that a try.
> The DBCC PAGE dump is probably the only data you'll have to salvage the
> deleted row, unless you contact Microsoft PSS.
>
> --
> Hope this helps.
> Dan Guzman
> SQL Server MVP
> "Gerry M" <GerryM@.discussions.microsoft.com> wrote in message
> news:042EAEE7-D5C2-4847-9AF9-DFC92E3885FB@.microsoft.com...
> > Dan, I am going to paste the following detail, the table is a simple 2
> > column
> > that is used by a full text indexing job, it seems, the first field is
> > just
> > text and the second is a guid...I went to record 128, but saw nothing
> > funny.....appreciate all your help
> >
> > Server: Msg 8928, Level 16, State 1, Line 1
> > Object ID 910678342, index ID 0: Page (1:369538) could not be processed.
> > See
> > other errors for details.
> > Server: Msg 8944, Level 16, State 1, Line 1
> > Table error: Object ID 910678342, index ID 0, page (1:369538), row 125.
> > Test
> > (ColumnOffsets <= (nextRec - pRec)) failed. Values are 2088 and 43.
> > DBCC results for 'RequestWords'.
> > There are 1313272 rows in 11072 pages for object 'RequestWords'.
> > CHECKTABLE found 0 allocation errors and 2 consistency errors in table
> > 'RequestWords' (object ID 910678342).
> > repair_allow_data_loss is the minimum repair level for the errors found by
> > DBCC CHECKTABLE (synergy.dbo.RequestWords ).
> >
> >
> >
> >
> > "Dan Guzman" wrote:
> >
> >> > Not sure when my last known good is and it could take quite awhile to
> >> > try
> >> > and
> >> > find it...is there a way to see what data would be lost first before
> >> > running
> >> > REPAIR_ALLOW_DATA_LOSS ?
> >>
> >> The DBCC CHECKTABLE output should specify the problem pages and error
> >> details. You can use DBCC PAGE (google is your friend) to examine the
> >> page
> >> contents and get an idea of data might be affected. I don't believe it's
> >> possible to provide exact details of lost data beforehand; I would think
> >> DBCC could recover the data without loss if that were possible.
> >>
> >> You might try posting the DBCC error details in case we can provide an
> >> alternate solution. You could also try running the DBCC CHECKTABLE WITH
> >> REPAIR_ALLOW_DATA_LOSS on a database copy to see what DBCC did to correct
> >> the problem and identify lost data.
> >>
> >> --
> >> Hope this helps.
> >>
> >> Dan Guzman
> >> SQL Server MVP
> >>
> >> "Gerry M" <GerryM@.discussions.microsoft.com> wrote in message
> >> news:60356673-18A9-41B8-929D-265E419590DA@.microsoft.com...
> >> > Not sure when my last known good is and it could take quite awhile to
> >> > try
> >> > and
> >> > find it...is there a way to see what data would be lost first before
> >> > running
> >> > REPAIR_ALLOW_DATA_LOSS ?
> >> >
> >> > "Dan Guzman" wrote:
> >> >
> >> >> > I have a backup and would like to know the best way to proceed to
> >> >> > correct
> >> >> > the consistency errors...also where can I find the
> >> >> > sp_repair_allow_data_loss?
> >> >>
> >> >> Restoring from your last known good backup is usually the best option
> >> >> rather
> >> >> than allowing data loss. There may be other methods depending on the
> >> >> specific types of corruption and affected object types.
> >> >>
> >> >> REPAIR_ALLOW_DATA_LOSS is an option of the DBCC CHECKDB and DBCC
> >> >> CHECKTABLE
> >> >> commands. See the Books online for details.
> >> >>
> >> >> --
> >> >> Hope this helps.
> >> >>
> >> >> Dan Guzman
> >> >> SQL Server MVP
> >> >>
> >> >> "Gerry M" <GerryM@.discussions.microsoft.com> wrote in message
> >> >> news:264DC93B-200F-4567-B853-8B0DCF91852C@.microsoft.com...
> >> >> >I receive the following after running checktable:
> >> >> >
> >> >> > CHECKTABLE found 0 allocation errors and 2 consistency errors in
> >> >> > table
> >> >> > 'RequestWords' (object ID 910678342).
> >> >> > repair_allow_data_loss is the minimum repair level for the errors
> >> >> > found
> >> >> > by
> >> >> > DBCC CHECKTABLE (synergy.dbo.RequestWords ).
> >> >> >
> >> >> > I have a backup and would like to know the best way to proceed to
> >> >> > correct
> >> >> > the consistency errors...also where can I find the
> >> >> > sp_repair_allow_data_loss?
> >> >> >
> >> >> > Thanks
> >> >>
> >>
|||I'm glad you were able to get your database fixed.
--
Dan Guzman
SQL Server MVP
"Gerry M" <GerryM@.discussions.microsoft.com> wrote in message
news:8525A1AF-4A88-47B3-AA8F-1144B9814DA8@.microsoft.com...
> Dan, sorry took so long to get back...
> I was able to locate the corrupt record in the referenced table...it was
> a
> null id that had a guid assigned to it from another table...very
> weird...the
> delete statement would not work on it....had to finally run
> repair_allow_data_loss and then repair_fast. I lost 132 records but they
> were all bogus references to the corruption...so it turned out all good.
> Thanks for your assistance.
> "Dan Guzman" wrote:
>> The DBCC error indicates bad column offsets, which will prevent the
>> problem
>> row from being parsed. I think DBCC will need to delete that row to
>> remove
>> the error. You can probably fix the error with a normal delete command
>> if
>> you can identify the key value of the problem row from the raw DBCC PAGE
>> output.
>> If you haven't already done so, you might try DBCC PAGE print option 3
>> (http://support.microsoft.com/kb/83065) to print the individual column
>> values (example below). I don't know how this will behave with bad
>> column
>> offsets but I'm curious to find out, if you don't mind giving that a try.
>> The DBCC PAGE dump is probably the only data you'll have to salvage the
>> deleted row, unless you contact Microsoft PSS.
>>
>> --
>> Hope this helps.
>> Dan Guzman
>> SQL Server MVP
>> "Gerry M" <GerryM@.discussions.microsoft.com> wrote in message
>> news:042EAEE7-D5C2-4847-9AF9-DFC92E3885FB@.microsoft.com...
>> > Dan, I am going to paste the following detail, the table is a simple 2
>> > column
>> > that is used by a full text indexing job, it seems, the first field is
>> > just
>> > text and the second is a guid...I went to record 128, but saw nothing
>> > funny.....appreciate all your help
>> >
>> > Server: Msg 8928, Level 16, State 1, Line 1
>> > Object ID 910678342, index ID 0: Page (1:369538) could not be
>> > processed.
>> > See
>> > other errors for details.
>> > Server: Msg 8944, Level 16, State 1, Line 1
>> > Table error: Object ID 910678342, index ID 0, page (1:369538), row 125.
>> > Test
>> > (ColumnOffsets <= (nextRec - pRec)) failed. Values are 2088 and 43.
>> > DBCC results for 'RequestWords'.
>> > There are 1313272 rows in 11072 pages for object 'RequestWords'.
>> > CHECKTABLE found 0 allocation errors and 2 consistency errors in table
>> > 'RequestWords' (object ID 910678342).
>> > repair_allow_data_loss is the minimum repair level for the errors found
>> > by
>> > DBCC CHECKTABLE (synergy.dbo.RequestWords ).
>> >
>> >
>> >
>> >
>> > "Dan Guzman" wrote:
>> >
>> >> > Not sure when my last known good is and it could take quite awhile
>> >> > to
>> >> > try
>> >> > and
>> >> > find it...is there a way to see what data would be lost first before
>> >> > running
>> >> > REPAIR_ALLOW_DATA_LOSS ?
>> >>
>> >> The DBCC CHECKTABLE output should specify the problem pages and error
>> >> details. You can use DBCC PAGE (google is your friend) to examine the
>> >> page
>> >> contents and get an idea of data might be affected. I don't believe
>> >> it's
>> >> possible to provide exact details of lost data beforehand; I would
>> >> think
>> >> DBCC could recover the data without loss if that were possible.
>> >>
>> >> You might try posting the DBCC error details in case we can provide an
>> >> alternate solution. You could also try running the DBCC CHECKTABLE
>> >> WITH
>> >> REPAIR_ALLOW_DATA_LOSS on a database copy to see what DBCC did to
>> >> correct
>> >> the problem and identify lost data.
>> >>
>> >> --
>> >> Hope this helps.
>> >>
>> >> Dan Guzman
>> >> SQL Server MVP
>> >>
>> >> "Gerry M" <GerryM@.discussions.microsoft.com> wrote in message
>> >> news:60356673-18A9-41B8-929D-265E419590DA@.microsoft.com...
>> >> > Not sure when my last known good is and it could take quite awhile
>> >> > to
>> >> > try
>> >> > and
>> >> > find it...is there a way to see what data would be lost first before
>> >> > running
>> >> > REPAIR_ALLOW_DATA_LOSS ?
>> >> >
>> >> > "Dan Guzman" wrote:
>> >> >
>> >> >> > I have a backup and would like to know the best way to proceed to
>> >> >> > correct
>> >> >> > the consistency errors...also where can I find the
>> >> >> > sp_repair_allow_data_loss?
>> >> >>
>> >> >> Restoring from your last known good backup is usually the best
>> >> >> option
>> >> >> rather
>> >> >> than allowing data loss. There may be other methods depending on
>> >> >> the
>> >> >> specific types of corruption and affected object types.
>> >> >>
>> >> >> REPAIR_ALLOW_DATA_LOSS is an option of the DBCC CHECKDB and DBCC
>> >> >> CHECKTABLE
>> >> >> commands. See the Books online for details.
>> >> >>
>> >> >> --
>> >> >> Hope this helps.
>> >> >>
>> >> >> Dan Guzman
>> >> >> SQL Server MVP
>> >> >>
>> >> >> "Gerry M" <GerryM@.discussions.microsoft.com> wrote in message
>> >> >> news:264DC93B-200F-4567-B853-8B0DCF91852C@.microsoft.com...
>> >> >> >I receive the following after running checktable:
>> >> >> >
>> >> >> > CHECKTABLE found 0 allocation errors and 2 consistency errors in
>> >> >> > table
>> >> >> > 'RequestWords' (object ID 910678342).
>> >> >> > repair_allow_data_loss is the minimum repair level for the errors
>> >> >> > found
>> >> >> > by
>> >> >> > DBCC CHECKTABLE (synergy.dbo.RequestWords ).
>> >> >> >
>> >> >> > I have a backup and would like to know the best way to proceed to
>> >> >> > correct
>> >> >> > the consistency errors...also where can I find the
>> >> >> > sp_repair_allow_data_loss?
>> >> >> >
>> >> >> > Thanks
>> >> >>
>> >>
Wednesday, March 7, 2012
CHECKSUM_AGG and BINARY_CHECKSUM performance problems
Gentlemen,
I am using the following query to get a list of grouped checksum data.
SELECT CAST(Field0_datetime AS INT),
CHECKSUM_AGG(BINARY_CHECKSUM(Field1_bigint, Field2_datetime,
Field3_datetime, Field4_bigint, Field5_bigint, CAST(Field6_float AS
Decimal(38,6)), Field7_datetime))
FROM Table1
WHERE Field0_datetime BETWEEN '2003-01-01' AND '2003-01-20'
GROUP BY CAST(Field0_datetime AS INT)
Please notice the filter used: from January 1 to January 20.
That query takes about 6 minutes to return the data. The result is 18
records.
However, when I execute the same query filtering BETWEEN '2003-01-01' AND
'2003-01-10', it takes only 1 second to return data.
When I execute the query filtering BETWEEN '2003-01-10' AND '2003-01-20',
the query takes another 1 second to return data.
So why 6 minutes to process them together?
The table has an index on Field0_datetime.
It contains about 1.5 million records total, using around 1.7 GB of
disk space, indexes included.
From 2003-01-01 to 2003-01-20, there are 11401 records selected. That
doesn't look like much.
The situation is repeatable; I mean, if I execute the queries again and
again, they take about the same amount of time to execute, so I don't
think this problem is related to caching or something like that.
I would appreciate any advice about what might be wrong with my situation.
Thanks a lot and kind regards,
Orly Junior
IT Professional|||By using the profiler, I found that while executing the first query (20-day
span), the system doesn't use the index. How is that possible?
A simpler version of the query that causes the same problem is:
select checksum_agg(binary_checksum([dc])) from [table1] where [dc] between
'2003-01-01' and '2003-01-20'
The profiler reports it will be using a clustered index scan, which is
unacceptable since the table has a lot of records.
Why the hell is it not using the [dc] index? If I tighten the criteria to
a 10-day span, it uses the index correctly.
Do you have any idea why is that happening?
Thanks in advance and best regards,
Orly Junior
IT Professional
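One way to confirm what the optimizer is choosing, and to test whether the nonclustered index would actually help, is to compare the estimated plans with and without an index hint. A minimal sketch, where IX_Table1_dc is a hypothetical name for the index on [dc]:

```sql
-- Show estimated plans without executing the queries.
SET SHOWPLAN_TEXT ON;
GO

-- Optimizer's own choice (reportedly a clustered index scan for the 20-day span):
SELECT CHECKSUM_AGG(BINARY_CHECKSUM([dc]))
FROM [table1]
WHERE [dc] BETWEEN '2003-01-01' AND '2003-01-20';
GO

-- Force the nonclustered index on [dc] to compare the estimated cost.
-- IX_Table1_dc is a hypothetical name; substitute the real index name.
SELECT CHECKSUM_AGG(BINARY_CHECKSUM([dc]))
FROM [table1] WITH (INDEX (IX_Table1_dc))
WHERE [dc] BETWEEN '2003-01-01' AND '2003-01-20';
GO

SET SHOWPLAN_TEXT OFF;
GO
```

If the hinted plan is in fact much cheaper at run time, the optimizer's row estimate is off, which points at stale or sampled statistics rather than a missing index.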
"Orly Junior" <nomail@nomail.com> wrote in message
news:42b0c9e6$0$32014$a729d347@news.telepac.pt...
> Gentlemen,
> I am using the following query to get a list of grouped checksum data.
> SELECT CAST(Field0_datetime AS INT),
> CHECKSUM_AGG(BINARY_CHECKSUM(Field1_bigint, Field2_datetime,
> Field3_datetime, Field4_bigint, Field5_bigint, CAST(Field6_float
> Decimal(38,6)), Field7_datetime))
> FROM Table1
> WHERE Field0_datetime BETWEEN '2003-01-01' AND '2003-01-20'
> GROUP BY CAST(Field0_datetime AS INT)
> Please notice the used filter: from January 1 to January 20.
> That query takes about 6 minutes do return the data. The result is 18
> records.
> However, when I execute the same query filtering BETWEEN '2003-01-01' and
> '2003-01-10', this time it takes only 1 second to return data.
> When I execute the query filtering BETWEEN '2003-01-10' and '2003-01-20'
> the query takes another 1 second to return data.
> So why 6 minutes to process them together??
> The table have an index by Field0_datetime.
> It contains about 1.5 millions records total, using around 1.7Gb of
> diskspace, indexes included.
> From 2003-01-01 and 2003-01-20, there are 11401 records selected. Don't
> look like that much.
> The situation is repeatable, I mean, if I execute the queries back and
> again, they takes the about the same ammount of time to execute, so I
> don't think this problem is related to cache or something like that.
> I would appreciate any advice about what might be wrong with my situation.
> Thanks a lot and kind regards,
> Orly Junior
> IT Professional|||Orly Junior (nomail@nomail.com) writes:
> By using the profiler, I found that while executing the first query (20-day
> span), the system doesn't use the index. How is that possible?
When you have a non-clustered index that could be used to resolve a query,
SQL Server cannot always use this index blindly. If the selection is
small, the index is fine. If the selection is large, the index spells
disaster. This is because every hit in the index requires an access to
the data pages. This can end up with more page reads than scanning the
table once.
Now, in your case, there are 11401 rows that match the WHERE clause.
The table is 1.7 GB, which is about 207,000 pages. Even if some of those
1.7 GB are indexes, the table scan is obviously more expensive.
But SQL Server does not build query plans from full knowledge, but from
statistics it has saved about the table. If these statistics are inaccurate
for some reason, the estimate may be incorrect. By default, SQL Server
only samples the data for its statistics.
You can try "UPDATE STATISTICS tbl WITH FULLSCAN" and see if this
has any effect. SQL Server will then look at all rows. However, it
saves the data in a histogram, so you may still lose accuracy. DBCC
SHOW_STATISTICS may give some information.
--
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp
CheckSAPwdPolicy Return='28001'
I need help for the following problem:
We use SQL Server 2005 Express. On a Microsoft Windows Server 2003, Enterprise Edition, Service Pack 1 (Build 3790) machine we get the following installation problem.
In the log-file SQLSetup0005_M067RT1107P1_SQL.log we get these lines:
...
Failed to validate sa password error 2704
<EndFunc Name='CheckSAPwdPolicy' Return='28001' GetLastError='0'>
Error Code: 0x80076d61 (28001)
Windows Error Text: Source File Name: sqlca\sqlcax.cpp
Compiler Timestamp: Fri Feb 9 22:35:05 2007
Function Name: SAPasswordPolicyCheck
Source Line Number: 2727
...
I tried it on another PC, also a Microsoft Windows Server 2003, Enterprise Edition, Service Pack 1 (Build 3790) installation.
There I got no problems!
What is the reason?
Thank you!
It seems that the first computer has a password policy enabled in Windows which is applied to the sa account (and all appropriate SQL accounts on the server). Either a Local Security Policy or a domain policy is applied to the server which sets the password complexity. If your password does not conform to the complexity rules, the creation of the user will fail. (Use the MMC snap-in for configuration.)
Jens K. Suessmeyer
http://www.sqlserver2005.de
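As a related note, once the instance is installed, SQL Server 2005 lets individual SQL logins opt out of the Windows password policy check. A sketch; the login name and password are placeholders:

```sql
-- Create a SQL login that is exempt from the Windows password policy.
-- 'app_login' and the password value are placeholders.
CREATE LOGIN app_login
WITH PASSWORD = 'P@ssw0rd-placeholder',
     CHECK_POLICY = OFF;
```

This does not help for the sa password during setup itself, where the machine's policy applies; for that, the password has to meet the complexity rules or the local policy has to be relaxed as described above.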
|||
Thank you very much for your help!
The problem is the local security policy!
CheckQueryProcessorAlive: sqlexecdirect failed
I have the following error in my sql error log.
2004-05-05 09:44:34.23 spid3 SQL Server is terminating
due to 'stop' request from Service Control Manager.
I have the following error in my cluster log just before
my entry into the sql error log. Could you let me know
what could be the cause? The server load was not high and
the system was not running any dbcc at that time.
000006b4.000008bc::2004/05/05-09:43:16.050 SQL Server <SQL
Server>: [sqsrvres] CheckQueryProcessorAlive:
sqlexecdirect failed
000006b4.000008bc::2004/05/05-09:43:16.050 SQL Server <SQL
Server>: [sqsrvres] printODBCError: sqlstate = HYT00;
native error = 0; message = [Microsoft][ODBC SQL Server
Driver]Timeout expired
000006b4.000008bc::2004/05/05-09:43:16.066 SQL Server <SQL
Server>: [sqsrvres] OnlineThread: QP is not online.
000006b4.000008bc::2004/05/05-09:43:16.066 SQL Server <SQL
Server>: [sqsrvres] CheckQueryProcessorAlive:
sqlexecdirect failed
000006b4.000008bc::2004/05/05-09:43:16.066 SQL Server <SQL
Server>: [sqsrvres] printODBCError: sqlstate = 08S01;
native error = 0; message = [Microsoft][ODBC SQL Server
Driver]Communication link failure
000006b4.000008bc::2004/05/05-09:43:16.066 SQL Server <SQL
Server>: [sqsrvres] CheckQueryProcessorAlive:
sqlexecdirect failed
000006b4.000008bc::2004/05/05-09:43:16.066 SQL Server <SQL
Server>: [sqsrvres] printODBCError: sqlstate = 08S01;
native error = 0; message = [Microsoft][ODBC SQL Server
Driver]Communication link failure
000006b4.000008bc::2004/05/05-09:43:16.066 SQL Server <SQL
Server>: [sqsrvres] CheckQueryProcessorAlive:
sqlexecdirect failed
000006b4.000008bc::2004/05/05-09:43:16.066 SQL Server <SQL
Server>: [sqsrvres] printODBCError: sqlstate = 08S01;
native error = 0; message = [Microsoft][ODBC SQL Server
Driver]Communication link failure
000006b4.000008bc::2004/05/05-09:43:16.066 SQL Server <SQL
Server>: [sqsrvres] CheckQueryProcessorAlive:
sqlexecdirect failed
000006b4.000008bc::2004/05/05-09:43:16.066 SQL Server <SQL
Server>: [sqsrvres] printODBCError: sqlstate = 08S01;
native error = 0; message = [Microsoft][ODBC SQL Server
Driver]Communication link failure
000006b4.000008bc::2004/05/05-09:43:16.066 SQL Server <SQL
Server>: [sqsrvres] CheckQueryProcessorAlive:
sqlexecdirect failed
000006b4.000008bc::2004/05/05-09:43:16.066 SQL Server <SQL
Server>: [sqsrvres] printODBCError: sqlstate = 08S01;
native error = 0; message = [Microsoft][ODBC SQL Server
Driver]Communication link failure
000006b4.000008bc::2004/05/05-09:43:37.300 SQL Server <SQL
Server>: [sqsrvres] ODBC sqldriverconnect failed
000006b4.000008bc::2004/05/05-09:43:37.300 SQL Server <SQL
Server>: [sqsrvres] checkODBCConnectError: sqlstate =
08001; native error = b; message = [Microsoft][ODBC SQL
Server Driver][DBNETLIB]General network error. Check your
network documentation.
000006b4.000008bc::2004/05/05-09:43:37.300 SQL Server <SQL
Server>: [sqsrvres] ODBC sqldriverconnect failed
000006b4.000008bc::2004/05/05-09:43:37.300 SQL Server <SQL
Server>: [sqsrvres] checkODBCConnectError: sqlstate =
01000; native error = 274c; message = [Microsoft][ODBC SQL
Server Driver][DBNETLIB]ConnectionOpen (PreLoginHandshake
()).
Operating System and Version: Windows 2000 Server SP4
Database: MSSQL2000 SP3
HARDWARE CONFIGURATION INFORMATION
Machine Type: HP Netserver LT6000R
Amount of RAM: 4 GB
Type of Network Card: HP NetServer 10/100TX PCI LAN
Adapter (Slot)
Disk Type & Size: 4 x 36GB
Controller(s): HP NetRAID-2M Controller
Integrated HP NetRaid
Tape Unit, make/model:
CD-ROM make/model: TEAC CD-224E
regards,
bharathThe cluster service was unable to connect to SQL Server to verify that it
was running. There are numerous reasons why this could happen. I then
shutdown SQL Server and, I assume, failed it over to the other node or
restarted on the same node.
Rand
This posting is provided "as is" with no warranties and confers no rights.
Saturday, February 25, 2012
checkpoint on tempdb
It appears that after applying SP4 (the box has 4GB of RAM, so no post-SP4 was applied) the following situation started occurring:
- There is a simple scripted trace running on the server that captures
STMTCompleted and BatchCompleted events;
- A scheduled task fires every minute and captures the results using
::fn_trace_gettable function;
- tempdb is in Simple recovery mode (doh, that's the only mode that this
database can be in), but without explicitly issuing CHECKPOINT or BACKUP LOG
TEMPDB WITH TRUNCATE_ONLY space used in the log device continues growing.
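The log growth described here can be watched while the trace runs; DBCC SQLPERF (LOGSPACE) is the standard way to see per-database log usage:

```sql
-- Report log file size and percent used for every database,
-- including tempdb, to track the growth over time.
DBCC SQLPERF (LOGSPACE);
```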
While investigating the issue I turned trace flag 3502 on, and also discovered
that even after explicitly issuing CHECKPOINT, no entry about the checkpoint
on tempdb is made.
As a workaround I added BACKUP LOG TEMPDB WITH TRUNCATE_ONLY to the job that
runs every minute. But what's interesting is that exactly the same scenario
with exactly the same trace works fine in build 818, and space used in the
TEMPDB log device gets periodically cleared without having to explicitly
issue any checkpoint-related commands.
Is it a bug in SP4? And why does trace flag 3502 not post the entry for
TEMPDB (dbid 2)?
Any pointers would be appreciated.|||Thanks for reporting this problem.
TF3502 does not post an entry in the errorlog for tempdb. This has been the
case since SQL 7.0.
There is another way to find out if there has been a checkpoint:
use tempdb
select * from ::fn_dblog(null, null)
This dumps the log since the last checkpoint.
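Since that raw dump can be huge, a narrower check is to look only for checkpoint records. A sketch; fn_dblog is undocumented, and the operation names below are the ones commonly observed, so treat them as an assumption and verify on your build:

```sql
USE tempdb;

-- Count checkpoint begin/end records in the active portion of the log.
-- LOP_BEGIN_CKPT / LOP_END_CKPT are the operation names commonly
-- reported by the undocumented ::fn_dblog.
SELECT [Operation], COUNT(*) AS record_count
FROM ::fn_dblog(NULL, NULL)
WHERE [Operation] IN ('LOP_BEGIN_CKPT', 'LOP_END_CKPT')
GROUP BY [Operation];
```

Zero rows here right after issuing an explicit CHECKPOINT would suggest that checkpoints on tempdb are not happening at all, rather than merely not being logged to the errorlog.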
Can you try that and let us know if there is indeed no checkpoint log
record? Or you could send me the data+log file of your tempdb in a zip file
and I will take a look. But that file may be huge.
This could be a bug. In the meantime I will ask the devs around here to see
if anybody knows.
Finally, please note that both TF3502 and fn_dblog are undocumented
commands. There might be issues with using them on production systems. You
could open a case with Microsoft Product Support if you are not comfortable
with diagnosing the problem on your own.
Wei Xiao [MSFT]
SQL Server Storage Engine Development
http://blogs.msdn.com/weix
This posting is provided "AS IS" with no warranties, and confers no rights.
"Robert Djabarov" <Robert Djabarov@discussions.microsoft.com> wrote in
message news:B1CA9B5B-34B2-46D2-927B-01815B92E912@microsoft.com...
> It appears that after applying SP4 (the box has 4GB of RAM, so no post-SP4
> was applied) the following situation started occuring:
> - There is a simple scripted trace running on the server that captures
> STMTCompleted and BatchCompleted events;
> - A scheduled tasks fires every minute and captures the results using
> ::fn_trace_gettable function;
> - tempdb is in Simple recovery mode (doh, that's the only mode that this
> database can be in), but without explicitly issuing CHECKPOINT or BACKUP
> LOG
> TEMPDB WITH TRUNCATE_ONLY space used in the log device continues growing.
> While investigating the issue I set 3502 trace flag on, and also
> discovered
> that even after explicitly issuing CHECKPOINT the entry about the
> checkpoint
> on tempdb is not made.
> As a workaround I added BACKUP LOG TEMPDB WITH TRUNCATE_ONLY to the job
> that
> runs every minute. But what's interesting is that exactly the same
> scenario
> with exactly the same trace works fine in 818 build, and space used in
> TEMPDB
> log device gets periodically cleared without having to explicitly issue
> any
> checkpoint-related commands.
> Is it a bug in SP4? And why 3502 trace flag does not post the entry for
> TEMPDB (dbid 2)?
> Any pointers would be appreciated.