31 December 2013

Error: 3241 in SQL Server

There are two common reasons for this error:

First reason: version mismatch

You may be getting this error because of a mismatch of SQL Server versions between the backup and the restore target. In general, you cannot restore a backup created on a higher version to a lower version.

Second reason: corrupted backup

The backup file itself may be corrupted.

Take another backup and attempt to restore that. If the new backup restores successfully, the original file most likely has some corruption.
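Both causes can be checked from T-SQL before attempting the restore; the file path below is only an example:

```sql
-- Check which version produced the backup (SoftwareVersionMajor /
-- SoftwareVersionMinor columns) and whether the backup set is readable.
RESTORE HEADERONLY FROM DISK = 'C:\Backup\mydb.bak';
RESTORE VERIFYONLY FROM DISK = 'C:\Backup\mydb.bak';
```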

 


29 December 2013

Error 701 in SQL Server


 Error 701: "There is insufficient system memory." I had never come across this problem in SQL Server 2008 R2, but long ago I faced a similar problem on SQL Server 2005, which was resolved after installing a hotfix. To resolve the 701 issue on SQL Server 2008 R2 I followed the steps below and it worked fine:

1. Increase SQL Server's maximum memory through the SQL Server properties.

2. SQL Server may not be releasing the memory it holds when it is needed elsewhere. You can clear the caches manually:
DBCC FREEPROCCACHE      -- Clear the entire plan (procedure) cache
DBCC DROPCLEANBUFFERS   -- Clear clean pages from the data cache
DBCC FREESESSIONCACHE   -- Clear the distributed-query connection cache

3. If the above steps do not resolve the issue, check "min memory per query". The default value is 1024 KB; decreasing it is not recommended, but you can still try it.
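Step 1 can also be done in T-SQL instead of the GUI; the 4096 MB value below is only an example and should be sized for your server:

```sql
-- Raise the instance memory cap ('max server memory') via sp_configure.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 4096;  -- example value
RECONFIGURE;
```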

Error 7391 in SQL Server

Msg 7391, Level 16, State 2, Line 8

One of our application team members was trying to run a distributed transaction query through a linked server. Whenever he ran a SELECT query inside BEGIN DISTRIBUTED TRANSACTION it threw an error, but a plain SELECT statement worked fine. So there was no issue with the linked server itself; the issue was only with the distributed transaction.

Below is the error message he received while running the distributed transaction through the linked server.

Error Description:

OLE DB provider "SQLNCLI10" for linked server "Linkserver1" returned message "No transaction is active.".
Msg 7391, Level 16, State 2, Line 3
The operation could not be performed because OLE DB provider "SQLNCLI10" for linked server "SERVERNAME" was unable to begin a distributed transaction.

Workaround/Solution:

Step 1: Select Administrative Tools from the Start menu and click Component Services.
Step 2: The Component Services console opens, as shown below.



Step 3: Expand Component Services, then expand the "Distributed Transaction Coordinator" folder; you can now see Local DTC.
Step 4: Right-click Local DTC and select Properties; a new dialog opens, as shown below.



Step 5: In the Local DTC Properties window, click the Security tab, as shown below.



Step 6: Check Network DTC Access under Security Settings, as shown above.

Step 7: In the Transaction Manager Communication box, check Allow Inbound and Allow Outbound, and select the authentication method your environment requires.
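After the DTC changes, a minimal test can confirm the fix. This is only a sketch; "Linkserver1" matches the error message above, but the four-part object name is illustrative:

```sql
-- Most linked-server distributed transactions require XACT_ABORT ON.
SET XACT_ABORT ON;
BEGIN DISTRIBUTED TRANSACTION;
    -- Any remote read will do as a smoke test.
    SELECT TOP (1) name FROM Linkserver1.master.dbo.sysdatabases;
COMMIT TRANSACTION;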

FIX Transaction Log Full Error 9002

What to do about a full transaction log depends on what condition caused the log to fill the entire drive.
Here is a script for releasing transaction log space from the database.

STEP 1
Run the script below to find the logical name of your log file:
EXEC sp_helpdb 'XXX'
GO

STEP 2
USE XXX
GO
CHECKPOINT
GO
BACKUP LOG XXX WITH TRUNCATE_ONLY
GO
DBCC SHRINKFILE ('XXX_log', TRUNCATEONLY)
GO
DBCC SHRINKFILE ('XXX_log', 1)
GO

Replace XXX with your database name and XXX_log with the logical name of your database's log file. Note that BACKUP LOG ... WITH TRUNCATE_ONLY was removed in SQL Server 2008; on later versions, take a regular log backup instead.
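On SQL Server 2008 and later, where TRUNCATE_ONLY no longer exists, a rough equivalent is to switch the recovery model temporarily. This breaks the log backup chain, so take a full backup afterwards; XXX is the same placeholder as above:

```sql
-- Sketch for SQL Server 2008+: shrink the log without TRUNCATE_ONLY.
ALTER DATABASE XXX SET RECOVERY SIMPLE;
DBCC SHRINKFILE ('XXX_log', 1);
ALTER DATABASE XXX SET RECOVERY FULL;
-- Take a full database backup now to restart the log backup chain.
```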

Error: 9004, Severity: 21, State: 1 in SQL Server

An operation in SQL Server that needs to read or process the transaction log can fail with an error like the following if the transaction log is damaged.

Error: 9004, Severity: 21, State: 1.
An error occurred while processing the log for database 'mydb'.  If possible, restore from backup. If a backup is not available, it might be necessary to rebuild the log.

Microsoft suggests two options for this 9004 error; let me explain the steps in detail. Click here for BOL.

Option 1: 
First, create an empty database with the same name and physical file structure, then shut down the server, swap in the .mdf and .ldf files you want to attach in place of the empty database's files, and start the server. The database should come up in SUSPECT mode. You can then run ALTER DATABASE <database name> SET EMERGENCY to put it in emergency mode, and then run DBCC CHECKDB with REPAIR_ALLOW_DATA_LOSS. This rebuilds the log to make the database consistent, but it may have to delete some data to do so. This is the best option and the one most likely to get the maximum data back.

Option 2: 
You can attempt CREATE DATABASE FOR ATTACH_REBUILD_LOG to see if that brings the database back. If the database was cleanly shut down, this MIGHT succeed. There is also a chance the database will be inconsistent or corrupt if there were transactions that could not be rolled back, so you should run DBCC CHECKDB with REPAIR_ALLOW_DATA_LOSS afterwards to make the database consistent. If there were transactions in flight, no rollback will be possible, so REPAIR_ALLOW_DATA_LOSS will be required.
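The repair commands from Option 1 can be sketched as follows; 'mydb' is the example database name from the error message above, and REPAIR_ALLOW_DATA_LOSS requires single-user mode:

```sql
-- Put the damaged database into emergency mode and rebuild the log.
ALTER DATABASE mydb SET EMERGENCY;
ALTER DATABASE mydb SET SINGLE_USER;
DBCC CHECKDB ('mydb', REPAIR_ALLOW_DATA_LOSS);  -- may discard data
ALTER DATABASE mydb SET MULTI_USER;
```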

Error: 945 in SQL Server

Error 945: "Database cannot be opened due to inaccessible files or insufficient memory or disk space. See the SQL Server error log for details."

This error is very common for database administrators who handle many databases on a single instance. Most of the time it leads to the database being marked SUSPECT.

Fix / Solution:
Step 1: If a drive is dedicated to log files and it is full, free up space by shrinking the log files of other databases that occupy too much log space.
Step 2: ALTER DATABASE [Database name] SET ONLINE
Step 3: If possible, add more drive space by adding a new hard drive to the server.
Step 4: Increase the database log size, or set autogrow to Unrestricted.
Step 5: Verify that the account trying to access the database has sufficient permissions.
Step 6: Configure a maintenance job for log backups.
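For Step 1, one quick way to see which databases' logs are filling the drive is:

```sql
-- Reports log file size and percentage used for every database.
DBCC SQLPERF (LOGSPACE);
```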

Microsoft SQL Server, Error: 1222

Issue: "Lock request time out period exceeded. (Microsoft SQL Server, Error: 1222)".
In SQL Server 2000 we face this issue very frequently due to a bug. Please see the screenshot above, which we got while expanding the tables (+) button.

In SQL Server 2000, when a lock cannot be granted to a transaction on a resource because another transaction already owns a conflicting lock on that resource, the first transaction becomes blocked waiting on that resource. We get these errors when the transaction containing the statement is not rolled back or cancelled by SQL Server.
Workaround:
Step 1. Run sp_who2 and sp_lock to check for locks or blocks and see who is blocking.
Step 2. If you find any blocking sessions, kill them.
Step 3. If you do not find any blocks, put the database into single-user mode, then bring it back to multi-user mode after 10 seconds; it should then work fine.
Step 4. After the above steps, all users can access the table through EM or SSMS.
Step 5. Microsoft released a bug fix for this as well; check MSDN.

Here are the SQL queries for changing the database option. In SQL Server 2000, to put the database into single-user mode, use the script below:

USE master
GO
EXEC sp_dboption 'DatabaseName', 'single user', 'true'
GO

In SQL Server 2000, to bring the database back to multi-user mode, use the script below:

USE master
GO
EXEC sp_dboption 'DatabaseName', 'single user', 'false'
GO
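On SQL Server 2005 and later, sp_dboption is deprecated; the equivalent switch uses ALTER DATABASE ('DatabaseName' remains a placeholder):

```sql
-- Kick out other sessions and take the database to single-user mode.
ALTER DATABASE DatabaseName SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
-- ...wait a few seconds, then restore normal access...
ALTER DATABASE DatabaseName SET MULTI_USER;
```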

Error 18456 in SQL Server


A common security issue in SQL Server is "Login failed for user", error 18456.
To resolve error 18456, you can change the server authentication mode to "SQL Server and Windows Authentication mode".

This is error 18456, received while connecting to the server with the "sa" login and password. To resolve it by selecting "SQL Server and Windows Authentication mode", refer to the following steps:

Workaround:  Step 1. Login to the Microsoft SQL Server Management Studio with Windows Authentication.


Step 2. In SQL Server Management Studio Object Explorer, right-click on the server, and then click Properties.

Step 3. Under the Server Properties, select "Security" page.

Step 4. Select the Server authentication as "SQL Server and Windows Authentication mode" and click Ok.


Step 5. After changing the server authentication mode from Windows to SQL Server and Windows Authentication mode, the server needs to be restarted; only then will the change take effect.
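If mixed mode is enabled but the sa login still fails, the login itself may be disabled. A hedged sketch, run from a Windows-authenticated admin session (the password shown is a placeholder; use a strong one of your own):

```sql
-- Enable the sa login and set its password.
ALTER LOGIN sa ENABLE;
ALTER LOGIN sa WITH PASSWORD = 'Str0ng!Placeh0lder';  -- placeholder only
```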

Backup verify query, or how to verify a backup in SQL Server

Here is a list of commands to verify backup files in SQL Server.

1.  Execute the command below to verify that the backup set is complete and all volumes are readable.

RESTORE VERIFYONLY FROM DISK = 'C:\MSSQL\shiva.bak'

2.  Execute the command below to view the backup header information for all backup sets on a particular backup device.

RESTORE HEADERONLY FROM DISK = 'C:\MSSQL\shiva.bak'

3.  Execute the command below to list the database and log files contained in the backup set.

RESTORE FILELISTONLY FROM DISK = 'C:\MSSQL\shiva.bak'

Performance tuning in SQL Server

Sometimes we have to improve server or database performance so that our applications respond quickly. Here are some of the best tips for performance tuning.

Performance Tuning:
The process of modifying the application or database in order to make the entire system run faster.

1. The application server gives poor performance (application users' issue).
2. Open Performance Monitor and monitor application-specific counters.
3. Identify the application's process in SQL Server (SPID), plus blocking and deadlocks in the database server.
4. Database-level tuning.
5. Identify queries and stored procedures submitted by the application, based on the SPID (query tuning / stored procedure optimization).
6. Review the hardware resources allocated to SQL Server with respect to the user load on the server.
7. Prepare a baseline with measures and metrics.
8. Periodically monitor the measures and compare them with the baseline.

Identify Blocking & Deadlocks in the Database Server.
1. Blocking is a situation where multiple transactions are waiting for a resource that has been exclusively locked by another transaction for a long period of time.

2. A deadlock occurs when two processes, each holding a lock on one piece of data, attempt to acquire a lock on the other's piece. Each process would wait indefinitely for the other to release its lock unless one of the processes were terminated; SQL Server detects deadlocks and terminates one of the processes (the deadlock victim).

3. Blocking and deadlocks degrade database server performance as well as application performance.
4. A livelock is a situation where a request for an exclusive lock is repeatedly denied because a series of overlapping shared locks keeps interfering. A livelock also occurs when read transactions monopolize a table or page, forcing a write transaction to wait indefinitely.

Database Level Tuning:
1. Perform index maintenance on a regular basis.
2. Update statistics on a regular basis.
3. Partition huge tables; data partitioning is available from SQL Server 2005 onwards.

Index Level Tuning:
1. Table scans bring poor performance; add indexes where many table scans are happening.
2. Indexing: an index is an ordered list of values taken from one or more columns of a table and organized into a B-Tree (balanced tree) structure.
3. Indexes improve SELECT query performance.
4. Indexes degrade the performance of INSERT, DELETE, and UPDATE statements.
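For the index maintenance point above, a sketch of how to find fragmented indexes in the current database (SQL Server 2005+); the 30% threshold is a common rule of thumb, not a hard rule:

```sql
-- List indexes by fragmentation; heavy fragmentation suggests a REBUILD,
-- lighter fragmentation a REORGANIZE.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30
ORDER BY ips.avg_fragmentation_in_percent DESC;
```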

CPU utilization is 100% / high

We experience regular slowdowns on our Microsoft SQL database. After analyzing memory usage, we would like to continue the root-cause investigation by examining CPU bottlenecks.

Here are my general recommendations for recognizing CPU-related bottlenecks in SQL Server.
Recommendation:
There are many reasons for CPU-related performance problems on SQL Server. The main reason for CPU bottlenecks is insufficient hardware resources, but CPU utilization can usually be reduced through configuration changes and query tuning, so think before you rush out to buy faster or more processors.

You can use Performance Monitor to check the load on your CPU. Look at the Processor: % Processor Time counter: if it regularly exceeds 80% per CPU, you are most probably facing a CPU-related bottleneck.

Type 'perfmon' in the Run box and press Enter; the Performance window opens. Here you create a counter log and monitor the counters below.

There are many counters, but I usually prefer the ones below to find performance issues on my production box. Right-click "Counter Logs", select "New Log Settings", give it any name, and click OK.

After clicking OK, a new dialog opens with three tabs. On the General tab, click "Add Counters"; another dialog opens, like the one below.
Here you select the performance object first, then highlight the counter name and click Add. Do the same for the following counters.

    Memory – Available MBytes
    Paging File – % Usage
    Physical Disk – Avg. Disk sec/Read
    Physical Disk – Avg. Disk sec/Write
    Physical Disk – Disk Reads/sec
    Physical Disk – Disk Writes/sec
    Processor – % Processor Time
    SQLServer: Buffer Manager – Buffer cache hit ratio
    SQLServer: Buffer Manager – Page life expectancy
    SQLServer: General Statistics – User Connections
    SQLServer: Memory Manager – Memory Grants Pending
    SQLServer: SQL Statistics – Batch Requests/sec
    SQLServer: SQL Statistics – Compilations/sec
    SQLServer: SQL Statistics – Recompilations/sec
    System – Processor Queue Length

After adding the above counters, click Close to return to the previous window. Go to the "Log Files" tab, set the log type to "Text File (Tab Delimited)", click "Configure", and select the location where you want to save the output.

The third tab is for scheduling the collection to run at a particular time; if you face a CPU bottleneck (or anything else) at a specific time, you can automate the capture.

After running these counters for a day or two (or a few hours), stop them and format the output in an Excel sheet for easy reading, adding formulas for average, median, min, max, and standard deviation. You can then easily see the maximum and average value of each individual counter.


The table below gives rough guidelines, not exact recommendations, because every scenario and every server configuration is different. You must collect a baseline of data to compare these values against.


Object | Counter | You Want | Description
Memory | Available MBytes | > 100 MB | Unused physical memory (not the page file).
Paging File | % Usage | < 70% | Amount of the page file in use, which indicates the server is substituting disk space for memory.
Physical Disk | Avg. Disk sec/Read | < 8 ms | A key measure of disk latency, representing the average time, in milliseconds, of each read from disk, where > 20 is poor, < 20 is good/fair, < 12 is better, < 8 is best.
Physical Disk | Avg. Disk sec/Write | < 8 ms (non-cached), < 1 ms (cached) | A key measure of disk latency, representing the average time, in milliseconds, of each write to disk, where non-cached writes (> 20 poor, < 20 fair, < 12 better, < 8 best) differ significantly from cached writes (> 4 poor, < 4 fair, < 2 better, < 1 best). For OLTP databases, the lower this number the better, especially for disks holding the transaction log.
Physical Disk | Disk Reads/sec | |
Physical Disk | Disk Writes/sec | |
Processor | % Processor Time | < 80% | Percentage of elapsed time the processor spends executing non-idle threads.
SQLServer: Buffer Manager | Buffer cache hit ratio | > 90% | The percentage of pages that were found in memory; the higher the better, preferably around the 90% mark. This includes the availability of the procedure and data caches in the system.
SQLServer: Buffer Manager | Page life expectancy | > 300 | Tells, on average, how many seconds SQL Server expects a data page to stay in cache. The target on an OLTP system should be at least 300 (5 minutes). A value under 300 may indicate poor index design (leading to increased disk I/O and less effective use of memory) or simply a shortage of memory.
SQLServer: General Statistics | User Connections | See description | The number of users currently connected to SQL Server. This counter should roughly track "Batch Requests/sec"; they should generally rise and fall together. For example, blocking problems can be revealed by rising user connections, lock waits, and lock wait time coupled with declining batch requests/sec.
SQLServer: Memory Manager | Memory Grants Pending | See description | Total number of processes waiting for a workspace memory grant.
SQLServer: SQL Statistics | Compilations/sec | < 10% of Batch Requests/sec | Number of Transact-SQL compilations per second (including recompiles). The lower this value the better; high values often indicate excessive ad-hoc querying. If excessive ad-hoc querying is happening, try rewriting the queries as procedures or invoking them with sp_executesql. When rewriting is not possible, consider using a plan guide or setting the database to forced parameterization.
SQLServer: SQL Statistics | Recompilations/sec | < 10% of SQL Compilations/sec | Number of times per second that Transact-SQL objects attempted to execute but had to be recompiled before completion. This should be at or near zero, since recompiles can cause deadlocks and exclusive compile locks, and its value should stay in proportion to "Batch Requests/sec" and "SQL Compilations/sec".
System | Processor Queue Length | < 4 per CPU | Number of threads waiting for CPU cycles, where < 12 per CPU is good/fair, < 8 is better, < 4 is best.
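If the counters confirm a CPU bottleneck, one way to find the most CPU-hungry statements (SQL Server 2005+) is the plan-cache DMVs; this is a sketch, and the averages only cover plans still in cache:

```sql
-- Top statements by average CPU (worker) time per execution.
SELECT TOP (10)
       qs.total_worker_time / qs.execution_count AS avg_cpu_time_us,
       qs.execution_count,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                     WHEN -1 THEN DATALENGTH(st.text)
                     ELSE qs.statement_end_offset END
                   - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_cpu_time_us DESC;
```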

Error 601: Could not continue scan with NOLOCK due to data movement

Error: Msg 601: Could not continue scan with NOLOCK due to data movement.
Here are some of the causes for this error:

1. When scanning with the NOLOCK locking hint, or with the transaction isolation level set to READ UNCOMMITTED, it is possible for the page at the current position of the scan to be deleted. When this happens, SQL Server is not able to continue the scan.

2. BLOB fields in combination with the NOLOCK query hint. Because BLOB fields can store large amounts of data (> 8 KB), by default a pointer is used to store this data as a stream outside the table page. Because of this pointer construction, a BLOB field is read in streaming mode, and during this stream the error handling is not the same as for normal SQL fields. As a result, if a BLOB field is changed during the read, SQL Server closes the stream and raises the NOLOCK error.

3. We can experience this issue after changing some of the application (e.g., C#) code that connects to the database.

4. Index fragmentation causes excessive unnecessary operations, which apparently conflict with "dirty" NOLOCK reads.

5. NOLOCK means no lock is taken when you run the query with the NOLOCK hint, and it allows dirty reads because there is no consistency guarantee for the data being read. When you query tables with frequent data changes under NOLOCK, this error may come up. It is not a data error or a bug, just a phenomenon of reading data under the NOLOCK model.
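For illustration, a NOLOCK read and one safer alternative; the table and database names below are placeholders:

```sql
-- A NOLOCK scan like this can fail with error 601 if pages are deleted
-- or moved underneath it while it runs (dbo.Orders is a placeholder):
SELECT COUNT(*) FROM dbo.Orders WITH (NOLOCK);

-- A safer alternative is row versioning, which gives consistent reads
-- without blocking writers (enable it once per database):
ALTER DATABASE [YourDatabase] SET READ_COMMITTED_SNAPSHOT ON;
```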

Order of Query Execution in SQL Server

Sometimes we want to know the logical order in which a query is executed; I got the same question from a student while giving a seminar on SQL. It is very simple:


1.  FROM clause 
2.  WHERE clause 
3.  GROUP BY clause 
4.  HAVING clause 
5.  SELECT clause 
6.  ORDER BY clause
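The order above can be seen in a concrete query; this one is purely illustrative (the table and column names are made up):

```sql
-- Each clause is annotated with its logical evaluation order.
SELECT   dept, COUNT(*) AS cnt     -- 5. SELECT
FROM     dbo.Employees             -- 1. FROM
WHERE    salary > 50000            -- 2. WHERE
GROUP BY dept                      -- 3. GROUP BY
HAVING   COUNT(*) > 5              -- 4. HAVING
ORDER BY cnt DESC;                 -- 6. ORDER BY
```

Note that this is the logical order, which is why a column alias defined in SELECT can be referenced in ORDER BY but not in WHERE.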