Monday, January 28, 2008

Notification Service Deployment :: Timeout expired

    While working on one of the Notification Services deployments, I was getting the following error message.

        An attempt to run a Transact-SQL CREATE PROCEDURE statement failed.
        SqlCommandText: CREATE PROCEDURE dbo.<procedure name>

        Source: .Net SqlClient Data Provider
        Number: -2
        State: 0
        Class: 11
        Server: myServer
        Message: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.

        Line Number: 0

        Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.

    A timeout error for just creating a stored procedure? I did check the stored procedure; its T-SQL was about 200 lines long, which should not matter anyway. However, I went ahead and ran the CREATE PROCEDURE code in Query Analyzer. It got created fine, in a split second.

    I did some searching on the web for "Message: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding." Lots of forums were discussing increasing the timeouts (command, connection, everything), and some were counter-arguing that increasing timeout parameters is like "passing the buck to the next developer" :) which is also true. Anyway, in my case I didn't have a way to change any parameters, as NSControl.exe (the tool that runs all the T-SQL scripts) is provided by MS and I am not aware of any way to set them for it. So we just gave rebooting the server a shot, and it worked.

    I did further research and found that this is a known issue with NSControl.exe. Here is the link. Per this article, the "problem occurs if the application databases for the instance of Notification Services are large", but in my case the databases are very small, with around 50 subscriptions and 50 members in the list (of course, this was a TEST environment). Anyway, I have decided to apply the patch, with the hope that this issue will not occur again. If it occurs again, I will use my technique: "just reboot".

Saturday, January 26, 2008

Caution about Temporary Tables!

    If you run the following statement, it will create a temporary table with a primary key constraint named "Pk_Col":

        CREATE TABLE #Tmp (
            col_01 INT CONSTRAINT Pk_Col PRIMARY KEY
        )

    Now, if I open another connection using the same username and run the same statement, it should run successfully. However, it fails with the following error.

        Msg 2714, Level 16, State 4, Line 1
        There is already an object named 'Pk_Col' in the database.
        Msg 1750, Level 16, State 0, Line 1
        Could not create constraint. See previous errors.

    From the error message it is very clear that there is an object name conflict, not on #Tmp but on Pk_Col. Unlike #Tmp, Pk_Col is not treated as a temporary object.

    So to avoid such issues, and in the spirit of "good practice", don't give a name to the constraint. If you don't name a constraint, SQL Server automatically gives it an arbitrary name, like Pk_Col_A2340, which will never conflict even if you rerun the command again and again. So the T-SQL should look like:

        CREATE TABLE #Tmp (
            col_01 INT PRIMARY KEY
        )

    Note that I have not given a name, so the primary key was named PK__#Tmp______________1CF15040.

    Actually, this issue occurred a bit differently in my case. I was troubleshooting some T-SQL that was part of a scheduled job. While troubleshooting, I ran the statement and everything looked fine. So I went to Management Studio, opened the job, pasted in this modified T-SQL, and executed the job. The job failed! I looked at the job history and found that it was failing due to the same "object name conflict" error as above. At first I was surprised, because the two connections were separate and the primary key was on a temporary table, but then I suddenly realized the issue.

    So, the best option is to never give a name to constraints used on temporary tables (# tables). This behaviour is seen in both SQL 2000 and 2005.
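    To see the arbitrary name SQL Server generates for an unnamed constraint, you can look it up in tempdb, since that is where # tables live. A sketch for SQL 2005 (on SQL 2000 you would query tempdb..sysobjects instead):

```sql
CREATE TABLE #Tmp (
    col_01 INT PRIMARY KEY   -- no constraint name given
)

-- Temp tables and their constraints are created in tempdb,
-- so query tempdb's catalog for the generated key name:
SELECT name, type_desc
FROM tempdb.sys.objects
WHERE parent_object_id = OBJECT_ID('tempdb..#Tmp')

DROP TABLE #Tmp
```

    The generated name carries a long uniquifier suffix per session, which is exactly why it never collides across connections.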

Thursday, January 24, 2008

Not enough server storage is available to process this command


    In one of our deployment procedures we detach the original databases and attach them as bak_<Original Name>. While doing this we have to rename the files so that we can avoid file name conflicts while creating a new instance of the databases. In order to rename them we use \\<ServerName>\I$ as the access path, where I$ is the administrative share. But for some reason this share was not accessible, and an on-screen error message was shown as follows.

      "Not enough server storage is available to process this command"

    And in the System event logs following error message was logged.

      "The server's configuration parameter "irpstacksize" is too small for the server to use a local device. Please increase the value of this parameter".


    As the on-screen error message was "Not enough server storage is available to process this command", we tried rebooting the system once, as the article suggested there could be a possible memory issue, but that did not resolve it. We then contacted the NT Admin team to troubleshoot the issue. The update I got from that team was that they followed the steps given in the article (raising the IRPStackSize value), but they also don't know the root cause of this behaviour. After the change mentioned in the article was implemented, the $ share path was accessible as usual.

    Interim Solution Steps (this was creative me):

    While the NT Admin team was working on this, we decided to run our deployment procedure manually. We were not sure how to go about it, because our deployment procedure is made up of a lot of DOS and SQL scripts, and some of these scripts are created dynamically at run time and executed. We decided to run the steps one by one manually, but that was very tedious. Then we got an idea: introduce a PAUSE in one of the CMD files, just before the line where it was erroring out. When we triggered the process, it ran until that pause and waited for a key press. At that point we opened the dynamically created files and modified the commands in one of them to use I:\ instead of the I$ share path. Then we went back to the process window and hit Enter to continue. We were able to run the deploy process successfully. This was just an interim workaround for that day only.

Thursday, January 17, 2008

SQL 2008 - Backup Compression feature

    All these days we spent time and resources on identifying the best SQL backup utility with a compression mechanism. Finally we heard that MS-IT uses SQL LiteSpeed for its operations, and most of the people out there bought LiteSpeed; those who couldn't afford it settled for other software like Red Gate etc. We also put time into implementing it with custom code to fire the backup commands and to clean up the old backup files, as these tools are not tightly integrated with SQL Maintenance Plans.

    In SQL 2008, MS has introduced "Compressed Backups". Yes, all the energy we talked about in the above paragraph is wasted. Instead of saying wasted, just assume that you gained some knowledge of the workings of products like LiteSpeed and Red Gate. Note that creating compressed backups is available in "Enterprise Edition" only, but you can certainly restore compressed backups on other editions like Standard and Developer.

    Now, let's look at what this new feature in SQL 2008 actually does. OK, it's the same compression technique: basically it compresses the data in memory and dumps it into a file in that format, which reduces the disk space used, plus the time taken to complete the backup, because the disk I/O is less in this case. I don't see an encryption feature here, which is usually available with the third-party tools. The other point is that there are a couple of restrictions on having compressed backups:

    • Creating compressed backups is only available in SQL 2008 "Enterprise Edition"
    • Compressed and uncompressed backups cannot coexist in the same media set (i.e. SQL 2008 "Enterprise Edition" will understand the media set, however older versions of SQL Server will fail to read it)
    • NTBackup cannot share a tape with compressed SQL Server backups (I am NOT sure why; maybe because of the above reason)

    So how do we enable this feature on SQL 2008? This is done via the sp_configure system stored procedure. By default, when an instance is installed, this value is set to OFF (0), so in order to enable it you need to run:

      USE master
      EXEC sp_configure 'backup compression default', '1'
      RECONFIGURE WITH OVERRIDE

    Also, in case you don't want to enable compression at the server level but want it only for a specific backup command, here is the command you can use:

              BACKUP DATABASE AdventureWorks
              TO DISK='Z:\SQLServerBackups\AdvWorksData.bak'
              WITH COMPRESSION

    After reading all this I was curious: is there any change in the backupset system table in msdb? And yes, there is a new column in the backupset table named "compressed_backup_size". The value in this column tells you the byte count of the compressed file stored on disk. If this value is NULL, then that backup set is not compressed.
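    For example, here is a quick way to inspect the compression ratio of recent backups from that table (a sketch; backup_size and compressed_backup_size are both byte counts in msdb.dbo.backupset):

```sql
-- Compare the logical backup size with the compressed size on disk.
-- compressed_backup_size is NULL for uncompressed backup sets.
SELECT TOP 10
    database_name,
    backup_finish_date,
    backup_size,
    compressed_backup_size,
    CAST(backup_size * 1.0 / compressed_backup_size
         AS DECIMAL(10, 2)) AS compression_ratio
FROM msdb.dbo.backupset
WHERE compressed_backup_size IS NOT NULL
ORDER BY backup_finish_date DESC
```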

    Performance-wise, compression certainly demands extra CPU cycles. It's good that MS has provided two more counters to help us deal with performance related to compression: the Device Throughput Bytes/sec counter (in the SQLServer:Backup Device object) and the Backup/Restore Throughput/sec counter (in the SQLServer:Databases object). With these two, I think we should be able to deal with performance issues related to backup processes involving compression.

    Compression Ratio: This depends on a couple of factors, such as how much the data repeats; highly repetitive data can achieve a high level of compression. As you might know, binary data cannot be compressed much, and in encrypted data character repetition is very low, so in those cases you get lower compression ratios.

Tuesday, January 8, 2008

Capturing PSSDIAG/SQLDump, Right In Time

    Usually when you contact Microsoft PSS about a SQL Server issue, they will ask you to run PSSDIAG if it's a performance issue, or to capture memory dumps if it's a SQL Server hang/restart issue. If the issue is known to occur at a certain time, or you can reproduce it, that's fine: you can trigger PSSDIAG or SQLDump well in time. But usually that is not the case. The issues are totally random and unknown, and in that kind of situation, as a DBA, you face one problem: how do you run PSSDIAG or SQLDump right at the time the issue occurs?

    Understand the symptoms: If you know the symptoms, like heavy blocking, slowdown, or unresponsiveness, you can write a VBScript, or a combination of DOS and OSQL, to detect them. Let's take unresponsiveness as an example. In this case I would write a VBScript to ping the SQL Server; if the connection time goes beyond 3-4 seconds, I would trigger PSSDIAG or SQLDump, whatever the MS support engineer asked me to capture. This totally automates the process, and no human intervention is required.

    In one of my cases, there was an issue with SQL Server and it used to restart automatically. This behaviour was very random; we were not sure when the next restart would occur. After initial investigation, we found that the MS Clustering service was causing SQL Server to restart. The way the MS Clustering service works is: if a resource (in this case the SQL service) is not responding (failure of the IsAlive check), it will attempt to restart it. After the restart attempts exceed the configured threshold, the Clustering service will attempt a failover, which means the SQL service stops and starts on the other node, hence the restart events I was seeing. So what I did was whip up a simple VBScript to detect the SQL Server's state and trigger the memory dump. Along with this, I also changed the threshold timer and the number of attempts in cluadmin, so that the Clustering service would not fail over before my script did its job.

    Note: Please evaluate the pros/cons and risks before making any change to cluster configurations.

    So this is how I succeeded in capturing the SQLDump right in time and sending it to PSS for analysis. We found that SQL Server was running out of worker threads, due to which new application connections were failing. Even the Clustering service was unable to do its IsAlive checks, and it interpreted this as a service failure.
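    On SQL 2005 and later, the worker thread situation can also be checked from T-SQL, which is handy when building this kind of detection script. A sketch using the scheduler DMVs (the incident above predates these views, so treat this as the modern equivalent, not what we actually ran):

```sql
-- Configured maximum number of worker threads for the instance
SELECT max_workers_count
FROM sys.dm_os_sys_info

-- Workers currently allocated and tasks waiting per scheduler;
-- a growing work_queue_count suggests worker thread starvation
SELECT scheduler_id, current_workers_count, work_queue_count
FROM sys.dm_os_schedulers
WHERE status = 'VISIBLE ONLINE'
```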

    Similarly, as explained above, you can execute PSSDIAG or any other utility of choice if required.

Saturday, January 5, 2008

Who is in charge?

All the organs of the body were having a meeting, trying to decide who was the one in charge.

'I should be in charge,' said the brain, 'Because I run all the body's systems, so without me nothing would happen.'

'I should be in charge,' said the blood, 'Because I circulate oxygen all over, so without me you'd all waste away.'

'I should be in charge,' said the stomach, 'Because I process food and give all of you energy.'

'I should be in charge,' said the legs, 'because I carry the body wherever it needs to go.'

'I should be in charge,' said the eyes, 'Because I allow the body to see where it goes.'

'I should be in charge,' said the rectum, 'Because I'm responsible for waste removal.' All the other body parts laughed at the rectum and insulted him, so in a huff, he shut down tight.

Within a few days, the brain had a terrible headache, the stomach was bloated, the legs got wobbly, the eyes got watery, and the blood was toxic. They all decided that the rectum should be the boss. The moral of the story? Even though the others do all the work...

The asshole is usually in charge.

Wednesday, January 2, 2008

SQL Server 2008 - New Security Features

Transparent Data Encryption: What this actually means is that in earlier versions of SQL Server, anybody could restore or attach a database and use the data, assuming of course the unauthorized person got hold of the physical backup/data files. SQL 2008 enables encryption of an entire database (data files and log files) without the need for application changes; you can search the encrypted data using both range and fuzzy searches, and the data is secured from unauthorized users, all without any changes to existing applications (developers must be happy). So in this scenario, if an unauthorized person wants access to the data, he needs the master key, not just a copy of the physical backup/data file. Once encryption is turned on, the data is secured. This is not available in the current CTP; it should soon be available to explore. I am expecting some kind of performance degradation when accessing encrypted databases, as one extra layer of data transformation is involved, but I am not sure.
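For reference, here is roughly what enabling TDE looks like in the feature as it shipped (a sketch, not from the CTP this post refers to; MyDB, MyServerCert and the password are hypothetical names):

```sql
USE master
-- A database master key plus a server certificate protect the
-- database encryption key (DEK)
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'StrongPa$$w0rd!'
CREATE CERTIFICATE MyServerCert WITH SUBJECT = 'TDE certificate'

USE MyDB
-- The DEK is what actually encrypts the data and log files
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_128
    ENCRYPTION BY SERVER CERTIFICATE MyServerCert

ALTER DATABASE MyDB SET ENCRYPTION ON
```

Note that without a backup of the certificate, the database cannot be restored elsewhere, which is the whole point.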

Extensible Key Management: This feature was there in SQL Server 2005, but it was kind of a basic solution for encryption and key management. SQL Server 2008 even supports third-party key management and HSM (Hardware Security Module) products, and they say the usage of HSMs is much more secure. A Hardware Security Module (HSM) is a hardware-based security device that generates, stores and protects cryptographic keys. It provides the foundation for a high-level secure campus certification authority. Certification modules are also available in software, but a hardware device provides a higher level of security. (source:

Auditing: I am very happy to see this feature; I have wanted it since SQL 2005. You can create and manage audits via DDL, while simplifying compliance by providing more comprehensive data auditing. This enables organizations to answer "basic" questions such as "What data was retrieved?".
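As a sketch of that DDL (all names here are hypothetical): a server audit defines the target, and a database audit specification attaches actions to it:

```sql
USE master
-- The server audit defines where the audit records go
CREATE SERVER AUDIT MyAudit
    TO FILE (FILEPATH = 'Z:\Audits\')
ALTER SERVER AUDIT MyAudit WITH (STATE = ON)

USE MyDB
-- Record who reads this table, answering "What data was retrieved?"
CREATE DATABASE AUDIT SPECIFICATION MyAuditSpec
    FOR SERVER AUDIT MyAudit
    ADD (SELECT ON OBJECT::dbo.Customers BY dbo)
    WITH (STATE = ON)
```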

All these great features are available for use via your favourite T-SQL commands. Isn't it great?

Why use an object type identifier?

The following is a typical T-SQL statement found in most database object deployment scripts. Let me describe the script a little bit: it will find the object "[dbo].[fnHasMarketingOptions]", which must be of type FN (scalar function), IF (inlined table-valued function) or TF (table-valued function), and if found, it will drop that function.

    IF EXISTS (SELECT * FROM dbo.sysobjects WHERE id = OBJECT_ID(N'[dbo].[fnHasMarketingOptions]') and xtype in (N'FN', N'IF', N'TF'))

    DROP FUNCTION [dbo].[fnHasMarketingOptions]

I don't understand why the "and xtype in (N'FN', N'IF', N'TF')" part is used, because you can't create a TABLE and a PROCEDURE/FUNCTION with the same name in a database anyway. Below are the illustrative T-SQL statements:

    CREATE TABLE Test (col1 varchar(30))
    GO

    CREATE PROC test
    AS
        SELECT col1 FROM Test
    GO

Here is the error message….

    Msg 2714, Level 16, State 5, Procedure test, Line 4
    There is already an object named 'test' in the database.

If two objects of different types (TABLE/PROCEDURE/FUNCTION) cannot be created with the same name, then why use "and xtype in (N'FN', N'IF', N'TF')"?
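Incidentally, if a type-scoped existence check is really wanted, OBJECT_ID's second argument (available from SQL 2005) can express it more compactly. A sketch; it returns NULL when no object of that name and type exists:

```sql
-- N'FN' scopes the lookup to scalar functions; N'IF' and N'TF'
-- would be checked the same way if needed
IF OBJECT_ID(N'[dbo].[fnHasMarketingOptions]', N'FN') IS NOT NULL
    DROP FUNCTION [dbo].[fnHasMarketingOptions]
```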

About Me

By profession I am a Database Administrator (DBA) with a total of 13 years of experience in the field of Information Technology: 9 years as a SQL DBA and the last 3 years on IBM System i/iSeries and DB2 LUW 9. I have handled both developer and production support roles, and I like both. I love and live information technology, hence the name "Techonologyyogi". Apart from that, I am a small retail investor, with small investments in India and the United States in the form of equity holdings via common stocks. Don't ask me if I have made money; I have been losing money in stocks.