22 December 2010
Install SQL from Script
Script 1:-
http://www.megaupload.com/?d=AN5UQOGN
Script 2:-
http://www.megaupload.com/?d=F3LQIVNZ
Thanks To :- http://www.databasejournal.com/features/mssql/article.php/3672926/Scripting-the-Installation-of-SQL-Server-2005.htm
16 December 2010
Microsoft SQL Server Denali - Service Packs
Microsoft SQL Server Denali description
A comprehensive data platform that is more secure, reliable, manageable and scalable. Microsoft SQL Server code-named "Denali" empowers organizations to be more agile in today's competitive market. Customers can efficiently deliver mission-critical solutions through a highly scalable and available platform. Industry-leading tools help developers quickly build innovative applications, while data integration and management tools help deliver credible data reliably to the right users, and new user experiences expand the reach of BI to enable meaningful insights. Thanks To :- http://www.softpedia.com/get/Internet/Servers/Database-Utils/Microsoft-SQL-Server.shtml
07 December 2010
Migration Checklist
Thanks to :- http://www.mssqltips.com/tip.asp?tip=1936
Pre-Migration Checklist
- Analyze the disk space of the target server for the new database; if the disk space is not enough, add more space on the target server
- Confirm the data and log file location for the target server
- Collect the information about the Database properties (Auto Stats, DB Owner, Recovery Model, Compatibility level, Trustworthy option etc)
- Collect the information of dependent applications, and make sure application services will be stopped during the database migration
- Collect the information of database logins, users and their permissions. (Optional)
- Check the database for the Orphan users if any
- Check the SQL Server for any dependent objects (SQL Agent Jobs and Linked Servers)
- Check, if the database is part of any maintenance plan
Below are various scripts you can run to collect data.
Script to Check the Disk and Database Size
-- Procedure to check disk space ...
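The code box above is cut off in the original post. A minimal sketch of the kind of check it describes, using sys.master_files and the undocumented but widely used xp_fixeddrives (the column units and layout are assumptions, not the author's original script):
-- Free space (MB) per logical drive on the server
EXEC master..xp_fixeddrives;
-- Data and log file sizes per database; size is stored in 8 KB pages
SELECT  DB_NAME(database_id)  AS database_name,
        type_desc             AS file_type,
        name                  AS logical_name,
        size * 8 / 1024       AS size_mb
FROM    sys.master_files
ORDER BY database_name, file_type;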
Script to Check Database Properties
select ...
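The property query itself is truncated above. A sketch of what such a query might look like against sys.databases, covering the properties listed in the pre-migration checklist (the database name is a placeholder):
SELECT  name,
        SUSER_SNAME(owner_sid)   AS db_owner,
        recovery_model_desc,
        compatibility_level,
        is_auto_create_stats_on,
        is_auto_update_stats_on,
        is_trustworthy_on
FROM    sys.databases
WHERE   name = N'YourDatabase';  -- placeholder database name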
Another Script to Check Database Properties
declare @dbdesc varchar(max) ...
Script to List Orphan Users
sp_change_users_login 'report'
Script to List Linked Servers
select * ...
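Only the first line of the linked server query survives above. A likely equivalent, sketched against sys.servers:
-- Linked servers defined on the instance (the local server entry has is_linked = 0)
SELECT  name, product, provider, data_source
FROM    sys.servers
WHERE   is_linked = 1;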
Script to List Database Dependent Jobs
select ...
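The job query is also truncated. A sketch that lists SQL Agent jobs with at least one step pointing at the database being migrated (the database name is a placeholder):
SELECT  j.name  AS job_name,
        s.step_name,
        s.database_name
FROM    msdb.dbo.sysjobs     AS j
JOIN    msdb.dbo.sysjobsteps AS s ON s.job_id = j.job_id
WHERE   s.database_name = N'YourDatabase';  -- placeholder database name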
Database Migration Checklist
These are the steps you would go through to make the change.
1. Stop the application services
2. Change the database to read-only mode (Optional)
-- Script to make the database readonly
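The read-only command is cut off in the original post; a minimal sketch, assuming a placeholder database name:
-- ROLLBACK IMMEDIATE disconnects active sessions before switching the database to read-only
ALTER DATABASE YourDatabase SET READ_ONLY WITH ROLLBACK IMMEDIATE;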
3. Take a full backup of the databases on the source server and copy the backup files to the target server
4. Restore the databases on the target server on the appropriate drives
5. Cross-check the database properties against the database property script output; change the database properties as per the pre-migration checklist
Script to Change DB Owner
This will change the database owner to "sa". This can be used to change to any owner you would like.
USE databaseName ...
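The rest of the owner-change script is truncated. A sketch of the usual approaches (the database name is a placeholder; the original may have used either form):
USE YourDatabase;
GO
-- Legacy procedure from the SQL Server 2000/2005 era
EXEC sp_changedbowner 'sa';
-- Equivalent syntax from SQL Server 2005 onwards:
-- ALTER AUTHORIZATION ON DATABASE::YourDatabase TO sa;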
Script to Turn on Trustworthy Option
If the trustworthy option was set on the source database, this will turn it on for the restored database.
ALTER DATABASE database_name SET TRUSTWORTHY ON
Script to Change the Database Compatibility Level
When you upgrade to a new version, the old compatibility level is retained. This script shows how to change the compatibility level to SQL Server 2005 compatibility (level 90).
ALTER DATABASE DatabaseName ...
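The statement above is truncated. A sketch of setting SQL Server 2005 compatibility (level 90); the database name is a placeholder and the syntax differs by version:
-- SQL Server 2005: system procedure
EXEC sp_dbcmptlevel 'YourDatabase', 90;
-- SQL Server 2008 and later:
-- ALTER DATABASE YourDatabase SET COMPATIBILITY_LEVEL = 90;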
6. Execute the output of the login transfer script on the target server to create the logins. You can get the code from this Microsoft article: http://support.microsoft.com/kb/246133.
7. Check for Orphan Users and Fix Orphan Users
Script to Check and Fix Orphan Users
-- Script to check the orphan user
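The orphan-user script is cut off; a sketch using sp_change_users_login, with placeholder user and login names:
-- Report orphaned users in the current database
EXEC sp_change_users_login 'Report';
-- Remap a user to a login of the same name
EXEC sp_change_users_login 'Auto_Fix', 'orphan_user';
-- Or map explicitly to an existing login
-- EXEC sp_change_users_login 'Update_One', 'orphan_user', 'matching_login';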
8. Execute DBCC UPDATEUSAGE on the restored database.
Run the DBCC UPDATEUSAGE command against the migrated database when upgrading to a newer version of SQL Server.
DBCC UPDATEUSAGE('database_name') WITH COUNT_ROWS
9. Rebuild Indexes (Optional). Execute this step as the requirements and available time window allow.
Take a look at this tip to rebuild all indexes.
This will rebuild or reorganize all indexes for a particular table.
Index Rebuild :- This process drops the existing Index and Recreates the index.
Index Reorganize :- This process physically reorganizes the leaf nodes of the index.
-- Script for Index Rebuild
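The rebuild script is truncated; a sketch for a single table (the table name is a placeholder):
-- REBUILD drops and recreates the indexes; REORGANIZE defragments the leaf level in place
ALTER INDEX ALL ON dbo.YourTable REBUILD;
-- Lighter-weight alternative:
-- ALTER INDEX ALL ON dbo.YourTable REORGANIZE;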
10. Update statistics
sp_updatestats
11. Recompile procedures
Take a look at this tip to recompile all objects.
This will recompile a particular stored procedure.
sp_recompile 'procedureName'
12. Start the application services, check the application functionality and check the Windows event logs.
13. Check the SQL Server Error Log for login failures and other errors
Take a look at this tip on how to read SQL Server error logs.
EXEC xp_readerrorlog 0, 1, N'Error', NULL
14. Once the application team confirms that the application is running fine, take the databases offline on the source server or make them read-only
-- Script to make the database readonly
Next Steps
- Test the process to determine how much time and disk space would be needed by using the backup and recovery process.
- Meet with your technical and business teams to find out how much time is available for the migration and plan the activity
- Design a rollback plan in case the application does not work correctly after the migration
- Add more migration cases in your checklist, for example check if the database requires any server level change (For example CLR, XP_Cmdshell etc)
- Some of these scripts give you the base command to update a portion of the data; enhance the process to hit each object in your database.
24 November 2010
Cannot set a credential for principal 'sa'. (SQL Server 2005/2008)
To resolve this issue, select the Map To Credential check box on the General tab of the Login Properties - sa dialog box before you change the "sa" login properties, and then click OK to close the dialog box.
20 October 2010
DUPLICATE INDEXES
with indexcols as
(
select object_id as id, index_id as indid, name,
(select case keyno when 0 then NULL else colid end as [data()]
from sys.sysindexkeys as k
where k.id = i.object_id
and k.indid = i.index_id
order by keyno, colid
for xml path('')) as cols,
(select case keyno when 0 then colid else NULL end as [data()]
from sys.sysindexkeys as k
where k.id = i.object_id
and k.indid = i.index_id
order by colid
for xml path('')) as inc
from sys.indexes as i
)
select
object_schema_name(c1.id) + '.' + object_name(c1.id) as 'table',
c1.name as 'index',
c2.name as 'exactduplicate'
from indexcols as c1
join indexcols as c2
on c1.id = c2.id
and c1.indid < c2.indid
and c1.cols = c2.cols
and c1.inc = c2.inc;
==================================================================
-----------------------------
Overlapping Indexes
------------------------------
with indexcols as
(
select object_id as id, index_id as indid, name,
(select case keyno when 0 then NULL else colid end as [data()]
from sys.sysindexkeys as k
where k.id = i.object_id
and k.indid = i.index_id
order by keyno, colid
for xml path('')) as cols
from sys.indexes as i
)
select
object_schema_name(c1.id) + '.' + object_name(c1.id) as 'table',
c1.name as 'index',
c2.name as 'partialduplicate'
from indexcols as c1
join indexcols as c2
on c1.id = c2.id
and c1.indid < c2.indid
and (c1.cols like c2.cols + '%'
or c2.cols like c1.cols + '%') ;
=====================================================
13 October 2010
SQL Server Profiler Stored Procedures
sp_trace_create
sp_trace_setfilter
sp_trace_generateevent
sp_trace_setstatus
sp_trace_setevent
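A minimal server-side trace sketch wiring these procedures together. The output path is an assumption and must be writable by the SQL Server service account; the event and column IDs used are the documented ones for SQL:BatchCompleted, TextData and Duration:
DECLARE @TraceID int, @maxfilesize bigint, @on bit, @duration bigint;
SET @maxfilesize = 5;       -- MB per rollover file
SET @on = 1;
SET @duration = 1000000;    -- 1 second, in microseconds
-- 2 = TRACE_FILE_ROLLOVER; do not append the .trc extension yourself
EXEC sp_trace_create @TraceID OUTPUT, 2, N'C:\Traces\sample', @maxfilesize, NULL;
-- Event 12 = SQL:BatchCompleted; column 1 = TextData, column 13 = Duration
EXEC sp_trace_setevent @TraceID, 12, 1, @on;
EXEC sp_trace_setevent @TraceID, 12, 13, @on;
-- Keep only batches of 1 second or more (0 = AND, 4 = greater than or equal)
EXEC sp_trace_setfilter @TraceID, 13, 0, 4, @duration;
-- 1 = start; 0 = stop; 2 = close and delete the trace definition
EXEC sp_trace_setstatus @TraceID, 1;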
12 October 2010
SQL Server with SPs
08 October 2010
Instant File Initialization Speeds SQL Server
In my first two examples, before instant file initialization was turned on, the database creation and the database restore took so long because SQL Server had to go to every page in the 50 GB database and zero each one out (before a database can be restored, its space must first be pre-allocated, much like creating a new database). It can take a lot of time for SQL Server to visit every 8 KB page in a file (especially in very large files) and physically zero it out. When instant file initialization is turned on, SQL Server doesn't have to zero out every 8 KB page that has been allocated. Instead, the operating system allocates the space to SQL Server in one fell swoop, which is a very quick process, potentially saving you a great deal of time.
Read more: http://www.bradmcgehee.com/2010/07/instant-file-initialization-speeds-sql-server
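A common way to check whether instant file initialization is actually in effect (this test is not from the linked article, and the test database name is a placeholder): trace flag 3004 logs file-zeroing activity and 3605 routes that output to the error log.
DBCC TRACEON(3004, 3605, -1);
CREATE DATABASE IFI_Test;                  -- throwaway test database
EXEC xp_readerrorlog 0, 1, N'Zeroing';     -- look for zeroing messages
-- If only the log file was zeroed, instant file initialization is on;
-- if the data file was zeroed as well, it is off. Clean up afterwards:
DROP DATABASE IFI_Test;
DBCC TRACEOFF(3004, 3605, -1);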
06 October 2010
Bidirectional Transactional Replication VS Peer-to-Peer Transactional Replication
Now if your users are banging away on your first publisher while you do this, it will be hard to achieve consistency between both nodes in your bi-directional replication topology, as these updates may or may not be in the backup. You can run validations after the fact, but it is hard to cobble together a consistent topology while you are live. It can be done, but it is hard. Plus, table locks are held when the publication is created, so you can run into a lot of locking.
With peer-to-peer in SQL 2005 you again have to quiesce the system, make your change, ensure it is everywhere, and then let your users back on. It is the same problem.
With merge replication and bi-directional transactional replication the model is a hub-and-spoke methodology: you have a central publisher and multiple subscribers all talking to this central publisher, which is the clearing house for changes. Peer-to-peer is a mesh topology, which means a<->b, b<->c, c<->a; b can drop off and a can replicate with c until b comes back on.
You can't do this using bi-directional transactional replication or merge replication.
More on
http://technet.microsoft.com/en-us/library/ms151196.aspx
======================================================
How to Configure Peer-to-Peer Transactional Replication (SQL Server Management Studio)
http://technet.microsoft.com/en-us/library/ms152536.aspx
01 October 2010
When using SQL Server Management Studio Express I get the error "SQL Server does not allow remote connections". How to resolve this error?
In SQL Server 2005, when trying to connect to a remote database with SQL Server Management Studio Express, I get this error: "An error has occurred while establishing a connection to the server. When connecting to SQL Server 2005, this failure may be caused by the fact that under the default settings SQL Server does not allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open connection to SQL Server)" How to resolve this error?
Answer:
In addition to the above error the user might get the following errors too:
* SQL Server does not allow remote connections
* SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified
* Server does not exist or access denied
The errors are self-explanatory for a database administrator, but for a beginner or developer these errors can be frustrating.
It should be noted that SQL Server 2005 Express is not automatically configured for remote access during installation. This is not a bug, merely the default configuration. The simple solution for this problem is to enable the network protocols for the SQL Server Express instance using the Surface Area Configuration (SAC) utility in the SQL Server 2005 programs group on your machine.
Surface area reduction is a security measure that involves stopping or disabling unused components. Surface area reduction helps to improve security by providing fewer avenues for potential attacks on a system.
For new installations of Microsoft SQL Server 2005, some features, services, and connections are disabled or stopped to reduce the SQL Server surface area. For upgraded installations, all features, services, and connections remain in their pre-upgrade state.
SQL Server Surface Area Configuration is available on the SQL Server Start menu:
* On the Start menu, point to All Programs, Microsoft SQL Server 2005, Configuration Tools, and then click SQL Server Surface Area Configuration.
* Choose SQL Server 2005 Network Configuration and select the installed SQLEXPRESS instance.
* On the right-hand pane select the required protocol, either TCP/IP or Named Pipes (or both), and right-click to set it to 'Enabled'. A quick way to confirm the change is shown below.
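After enabling the protocols and restarting the service, a quick sanity check (a general technique, not part of the original answer): the startup section of the error log lists the protocols and ports the instance is listening on.
EXEC xp_readerrorlog 0, 1, N'listening';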
In some special cases it may additionally be necessary to start the SQL Browser service; the benefit of having this service running is that users connecting remotely do not have to specify the port in the connection string. From a security standpoint, however, the best practice is not to run the SQL Browser service, as this reduces the attack surface by eliminating the need to listen on a UDP port for clients.
If your network is tightened with relevant firewall and security access then (in special cases) you might need to exempt SQL Server and SQL Server Browser from the firewall on the server machine. This is done by adding sqlservr.exe and sqlbrowser.exe as exceptions in the Windows Firewall or whatever other firewall software is used.
Lastly, you might need to reboot the machine for these network configuration changes to take effect, as simply restarting the SQL Server and SQL Browser services may be insufficient.
For more information on usage of Surface Area Configuration tool refer to updated SQL Server Books OnLine.
SQL Server Management Studio takes a long time to load. How can the load performance of this tool be improved?
Microsoft SQL Server Management Studio: Failed to open connection dialog
Solution:
This is a common question in forums and newsgroups when any of the SQL client tools gets corrupted. Reinstalling the client tools sometimes helps, but if a .DLL file has been corrupted or mistakenly deleted, the only way forward is to repair the tools.
You can either follow the GUI setup for the Management Studio tools or use the command-line operation.
Repairing SQL Server is a command-line operation. You can repair by inserting the installation media into the disk drive and typing the following:
start /wait \setup.exe /qb REINSTALL=SQL_Tools90 REINSTALLMODE=OMUS
23 September 2010
Surface Area Config - Missing - SQL Server 2008 R2
Improving SQL Server Performance - CHECK LIST
http://msdn.microsoft.com/en-us/library/ff647681.aspx
10 September 2010
Detect CPU pressure
Detect CPU pressure
===================
SELECT COUNT(*) AS workers_waiting_for_cpu, t2.Scheduler_id
FROM sys.dm_os_workers AS t1, sys.dm_os_schedulers AS t2
WHERE t1.state = 'RUNNABLE' AND
t1.scheduler_address = t2.scheduler_address AND
t2.scheduler_id < 255
GROUP BY t2.scheduler_id
Time spent by workers in RUNNABLE state:
============================================
SELECT SUM(signal_wait_time_ms)
FROM sys.dm_os_wait_stats
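A related check (the threshold mentioned here is a commonly quoted rule of thumb, not part of the original post): compare signal waits to total waits; a consistently high percentage, often cited as above roughly 25%, suggests the schedulers are CPU bound.
SELECT  SUM(signal_wait_time_ms) * 100.0 / SUM(wait_time_ms) AS signal_wait_pct
FROM    sys.dm_os_wait_stats;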
25 August 2010
DBA INTERVIEW QUESTIONS - Part2
SQL Server can be started from the command line using SQLSERVR.EXE. This EXE has some very important parameters with which a DBA should be familiar. -m is used for starting SQL Server in single-user mode and -f is used to start SQL Server in minimal configuration mode.
As a part of your job, what are the DBCC commands that you commonly use for database maintenance?
DBCC CHECKDB, DBCC CHECKTABLE, DBCC CHECKCATALOG, DBCC CHECKALLOC, DBCC SHOWCONTIG, DBCC SHRINKDATABASE, DBCC SHRINKFILE etc. But there are a whole load of DBCC commands which are very useful for DBAs.
Define candidate key, alternate key, composite key.
A candidate key is one that can identify each row of a table uniquely. Generally a candidate key becomes the primary key of the table. If the table has more than one candidate key, one of them will become the primary key, and the rest are called alternate keys. A key formed by combining two or more columns is called a composite key.
What is RAID and what are different types of RAID configurations?
RAID stands for Redundant Array of Inexpensive Disks and is used to provide fault tolerance to database servers. There are six RAID levels, 0 through 5, offering different levels of performance and fault tolerance.
What is blocking and how would you troubleshoot it?
Blocking happens when one connection from an application holds a lock and a second connection requires a conflicting lock type. This forces the second connection to wait, blocked on the first.
Can you have a nested transaction?
Yes, very much. Check out BEGIN TRAN, COMMIT, ROLLBACK, SAVE TRAN and @@TRANCOUNT
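A small illustrative sketch of how @@TRANCOUNT behaves with nested BEGIN TRAN calls: an inner COMMIT merely decrements the counter, only the outermost COMMIT makes the work durable, and a plain ROLLBACK undoes the whole nest.
BEGIN TRAN outer_tran;
    SELECT @@TRANCOUNT AS after_outer;          -- 1
    BEGIN TRAN inner_tran;
        SELECT @@TRANCOUNT AS after_inner;      -- 2
    COMMIT;                                     -- decrements to 1, nothing is committed yet
COMMIT;                                         -- 0, the transaction is now committed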
What are statistics, under what circumstances do they go out of date, and how do you update them?
Statistics determine the selectivity of the indexes. If an indexed column has unique values then the selectivity of that index is higher, as opposed to an index with non-unique values. The query optimizer uses these statistics to decide whether or not to use an index while executing a query.
Some situations under which you should update statistics:
1) If there is significant change in the key values in the index
2) If a large amount of data in an indexed column has been added, changed, or removed (that is, if the distribution of key values has changed), or the table has been truncated using the TRUNCATE TABLE statement and then repopulated
3) Database is upgraded from a previous version
Look up SQL Server books online for the following commands: UPDATE STATISTICS, STATS_DATE, DBCC SHOW_STATISTICS, CREATE STATISTICS, DROP STATISTICS, sp_autostats, sp_createstats, sp_updatestats
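A short sketch of the commands listed above (the table name and statistics id are placeholders):
-- Refresh statistics on one table with a full scan
UPDATE STATISTICS dbo.YourTable WITH FULLSCAN;
-- When were the statistics for a given stats_id last updated? (1 is just an example stats_id)
SELECT STATS_DATE(OBJECT_ID('dbo.YourTable'), 1) AS stats_last_updated;
-- sp_updatestats does the same across every table in the current database
EXEC sp_updatestats;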
DBA INTERVIEW QUESTIONS - Part1
# How do you implement one-to-one, one-to-many and many-to-many relationships while designing tables?
* One-to-One relationships can be implemented as a single table and rarely as two tables with primary and foreign key relationships. One-to-Many relationships are implemented by splitting the data into two tables with primary key and foreign key relationships. Many-to-Many relationships are implemented using a junction table with the keys from both tables forming the composite primary key of the junction table. It is a good idea to read up on a database design fundamentals textbook.
# What’s the difference between a primary key and a unique key?
* Both primary key and unique key enforce uniqueness of the column on which they are defined. But by default a primary key creates a clustered index on the column, whereas a unique key creates a nonclustered index by default. Another major difference is that a primary key doesn't allow NULLs, but a unique key allows one NULL only.
# What are user defined datatypes and when you should go for them?
* User defined datatypes let you extend the base SQL Server datatypes by providing a descriptive name, and format to the database. Take for example, in your database, there is a column called Flight_Num which appears in many tables. In all these tables it should be varchar(8). In this case you could create a user defined datatype called Flight_num_type of varchar(8) and use it across all your tables. See sp_addtype, sp_droptype in books online.
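A sketch of the Flight_num_type example from the answer; sp_addtype is the legacy procedure mentioned there, and CREATE TYPE is the newer equivalent:
EXEC sp_addtype 'Flight_num_type', 'varchar(8)', 'NOT NULL';
-- SQL Server 2005 and later:
-- CREATE TYPE Flight_num_type FROM varchar(8) NOT NULL;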
# What is bit datatype and what’s the information that can be stored inside a bit column?
* The bit datatype is used to store boolean information like 1 or 0 (true or false). Until SQL Server 6.5 the bit datatype could hold either a 1 or 0 and there was no support for NULL. But from SQL Server 7.0 onwards, the bit datatype can represent a third state, which is NULL.
# Define candidate key, alternate key, composite key.
* A candidate key is one that can identify each row of a table uniquely. Generally a candidate key becomes the primary key of the table. If the table has more than one candidate key, one of them will become the primary key, and the rest are called alternate keys. A key formed by combining two or more columns is called a composite key.
16 August 2010
Lock escalation
A lock escalation occurs when the number of locks held on rows and tables in the database equals the percentage of the lock list specified by the maxlocks database configuration parameter. Lock escalation might not affect the table that acquires the lock triggering the escalation. To reduce the number of locks to about half the number held when the lock escalation began, the database manager begins converting many small row locks to table locks for all active tables, beginning with any locks on large object (LOB) or long VARCHAR elements. An exclusive lock escalation is a lock escalation in which the table lock acquired is an exclusive lock.
For partitioned tables, lock escalation is to the data partition level. This allows the table to be accessible to other transactions even if the data partition is escalated to share, exclusive, or super exclusive, as other non-escalated data partitions are unaffected. The transaction may continue to row lock on other data partitions after escalation for a given data partition. For partitioned tables, the notification log messages for escalations will include the data partition escalated as well as the table name.
Triggering Lock Escalation:
A lock escalation is triggered when any of the following conditions is true
· The number of locks held (as distinct from acquired) by a statement on an index or a heap exceeds the threshold (currently set to approximately 5000). These locks include intent locks as well. Note that lock escalation will not trigger if
o The transaction acquires 2,500 locks each on two index/heap(s) in a single statement.
o The transaction acquires 2,500 locks on the non-clustered index and 2,500 locks on the corresponding base table in a single statement.
o The same heap/index is referenced more than one time in a statement; the locks on each instance of those are counted separately. So for example, in the case of a self-join on a table t1, if each instance has 3000 locks within the statement, it will not trigger lock escalation
· The memory taken by lock resources exceeds 40% of the non-AWE (32-bit) or regular (64-bit) enabled memory when the locks configuration option is set to 0, the default value. In this case, lock memory is allocated dynamically as needed.
· The memory taken by lock resources exceeds 40% of the configured lock memory (i.e. when a non-zero value is set for the locks configuration option). When the locks configuration option is used, lock memory is statically allocated when SQL Server starts.
When lock escalation is triggered, SQL Server attempts to escalate the lock to the table level, but the attempt may fail if there are conflicting locks. For example, if SH locks need to be escalated to the table level and there are concurrent X locks on one or more rows/pages of the target table, the lock escalation attempt will fail. However, SQL Server periodically, for every 1250 (approx) new locks acquired by the lock owner (e.g. the transaction), attempts to escalate the lock again. If the lock escalation succeeds, SQL Server releases the lower granularity locks, and the associated lock memory, on the index or the heap. A successful lock escalation can potentially lead to blocking of future concurrent access to the index or the heap by transactions in conflicting lock modes (because at the time of lock escalation there cannot be any conflicting access). So lock escalation is not always a good idea for all applications.
Disabling Lock Escalation:
SQL Server 2005 supports disabling lock escalation using two trace flags, as follows:
· TraceFlag-1211: It disables lock escalation at the current threshold (5000) on a per index/heap per statement basis. When this trace flag is in effect, locks are never escalated. It also instructs SQL Server to ignore the memory acquired by the lock manager up to a maximum of the statically allocated lock memory or 60% of the non-AWE (32-bit)/regular (64-bit) dynamically allocated memory; beyond that, an out-of-lock-memory error is generated. This can potentially be damaging, as a misbehaving application can exhaust SQL Server memory by acquiring a large number of locks. This, in the worst case, can stall the server or degrade its performance to an unacceptable level. For these reasons, caution must be exercised when using this trace flag.
· TraceFlag-1224: This trace flag is similar to trace flag 1211 with one key difference. It enables lock escalation when the lock manager acquires 40% of the statically allocated memory or 40% of the non-AWE (32-bit)/regular (64-bit) dynamically allocated memory. Additionally, if this memory cannot be allocated due to other components taking up more memory, lock escalation can be triggered earlier. SQL Server will generate an out-of-memory error when memory allocated to the lock manager exceeds the statically allocated memory or 60% of non-AWE (32-bit)/regular memory for dynamic allocation.
If both trace flags (1211 and 1224) are set at the same time, the trace flag 1211 takes precedence. You can use dbcc tracestatus (-1) command to find the status of all trace flags enabled in SQL Server.
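A short sketch of checking and setting these flags instance-wide; use with the caution described above:
DBCC TRACESTATUS(-1);        -- list every trace flag currently enabled on the instance
DBCC TRACEON(1224, -1);      -- disable count-based escalation; escalation still occurs under lock memory pressure
-- DBCC TRACEOFF(1224, -1);  -- revert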
Please also refer to the http://support.microsoft.com/kb/323630/en-us.
Limitations of Lock Escalation:
There are some limitations in the current lock escalation mechanism in SQL Server. We will consider removing one or more of these limitations in future.
· Trace flags can only be used to disable lock escalation at an instance level. More often than not, you want to disable lock escalation at an object level. You can get around this issue by starting a dummy transaction and locking a resource (e.g. a row) to prevent lock escalation.
· The lock escalation triggering is hard coded to approx 5000 locks which may be too many locks for a small table and too few for a large table.
· Locks are not escalated to individual table partitions, but to the table instead. So two users accessing distinct partitions of a table in conflicting modes may get blocked immediately after lock escalation.
Live Lock
A live lock occurs when a request for an exclusive lock is denied continuously because a series of overlapping shared locks keeps interfering; as the requests adapt to each other they keep changing status, which prevents the task from ever completing. In SQL Server a live lock occurs when read transactions on a table force a write transaction to wait indefinitely. This is different from a deadlock, where both processes wait on each other.
A human example of a live lock would be two people who meet face-to-face in a corridor and each moves aside to let the other pass, but they end up moving from side to side without making any progress because they always move the same way at the same time and never get past each other.
PERFMON TOOL
Database performance can be analyzed at three levels: Server Level (operating system, networking protocols), Database Level (SQL Server database engine), and Object Level (objects within the database).
Perfmon is mainly used to identify server-level activity; based on the results we can take appropriate action to improve performance, whether through hardware, memory, or system changes.
PERFMON : PERFMON is a built-in Windows tool that reports the workload of the resources running on the system. It can be used to collect data on Windows resources as well as SQL Server resources. With PERFMON we can analyze the following,
• Understand workload and its effect on your system's resources.
• Test configuration changes or other tuning efforts by monitoring the results.
Start by monitoring the activity of the following components in order:
• Memory
• Processors
• Disks
• Network
The following counters can be helpful for tracking this data:
1:
Component : Disk
Performance aspect being monitored : Usage
Counters to monitor :
Physical Disk\Disk Reads/sec, Physical Disk\Disk Writes/sec, LogicalDisk\% Free Space, Interpret the % Disk Time counter carefully. Because the _Total instance of this counter may not accurately reflect utilization on multiple-disk systems, it is important to use the % Idle Time counter as well. Note that these counters cannot display a value exceeding 100%.
2 :
Component : Disk
Performance aspect being monitored : Hindrances
Counters to Monitor : Physical Disk\Avg. Disk Queue Length (all instances)
3:
Component : Memory
Performance aspect being monitored : Usage
Counters to Monitor : Memory\Available Bytes, Memory\Cache Bytes
4:
Component : Memory
Performance aspect being monitored : Hindrances
Counters to Monitor : Memory\Pages/sec, Memory\Page Reads/sec, Memory\Transition Faults/sec, Memory\Pool Paged Bytes, Memory\Pool Nonpaged Bytes.
Although not specifically Memory object counters, the following are also useful for memory analysis: Paging File\% Usage object (all instances), Cache\Data Map Hits %, Server\Pool Paged Bytes and Server\Pool Nonpaged Bytes
5:
Component : Network
Performance aspect being monitored : Throughput
Counters to Monitor : Protocol transmission counters (varies with networking protocol); for TCP/IP: Network Interface\Bytes total/sec, Network Interface\ Packets/sec, Server\Bytes Total/sec, or Server\Bytes Transmitted/sec and Server\Bytes Received/sec
6:
Component : Processor
Performance aspect being monitored : Usage
Counters to Monitor : Processor\% Processor Time (all instances)
7:
Component : Processor
Performance aspect being monitored : Hindrances
Counters to Monitor : System\Processor Queue Length (all instances),
Processor\ Interrupts/sec, System\Context switches/sec
How to create and run a counter log:
1. Go to RUN and type PERFMON then Enter
2. Double-click Performance Logs and Alerts, and then double-click Counter Logs. Any existing logs will be listed in the details pane. A green icon indicates that a log is running; a red icon indicates that a log has been stopped.
3. Right-click a blank area of the details pane, and click New Log Settings.
4. In Name, type the name of the log, and then click OK.
5. On the General tab, click Add Objects and select the performance objects you want to add, or click Add Counters to select the individual counters you want to log.
It can be run for a certain time period. The data can be saved as .CSV or text files. Once you have the data you can build charts in Excel and determine what action to take to improve performance.
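SQL Server also exposes its own Perfmon counters through a DMV, which is handy when only a query window is available; a small sketch (SQL Server 2005 and later):
SELECT  object_name, counter_name, cntr_value
FROM    sys.dm_os_performance_counters
WHERE   counter_name = N'Page life expectancy'
        AND object_name LIKE N'%Buffer Manager%';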
04 August 2010
Blocked Process Report
GO
IF EXISTS (SELECT name FROM msdb.dbo.sysoperators WHERE name = N'TEST')
EXEC msdb.dbo.sp_delete_operator @name=N'TEST'
USE [msdb]
GO
EXEC msdb.dbo.sp_add_operator @name=N'TEST',
@enabled=1,
@weekday_pager_start_time=90000,
@weekday_pager_end_time=180000,
@saturday_pager_start_time=90000,
@saturday_pager_end_time=180000,
@sunday_pager_start_time=90000,
@sunday_pager_end_time=180000,
@pager_days=0,
@category_name=N'[Uncategorized]'
/* creating the table to capture the Event information */
USE Master
GO
IF EXISTS (SELECT * FROM dbo.sysobjects WHERE id = OBJECT_ID(N'[dbo].[BLOCKED_PROCESS_REPORT]') AND OBJECTPROPERTY(id, N'IsUserTable') = 1)
DROP TABLE [dbo].[BLOCKED_PROCESS_REPORT]
GO
CREATE TABLE [dbo].[BLOCKED_PROCESS_REPORT] (
[PostTime] [datetime] NOT NULL ,
[SQLInstance] varchar(20),
[Databaseid] int,
[computerName] Varchar(20),
[SessionLoginName] Varchar(30),
[SPID] int,
[TransactionID] int,
[EventSequence] int,
[objectID] int,
[IndexID] int,
[TextData] nvarchar(4000) ,
[duration] int,
[RecordID] [int] IDENTITY (1,1) NOT FOR REPLICATION NOT NULL,
[Flag] [int] NOT NULL CONSTRAINT [DF_BLOCKED_PROCESS_REPORT_Flag] DEFAULT ((0))
) ON [PRIMARY]
GO
CREATE INDEX [BLOCKED_PROCESS_REPORT_IDX01] ON [dbo].[BLOCKED_PROCESS_REPORT]([Posttime]) WITH FILLFACTOR = 100 ON [PRIMARY]
GO
/*Creating the Job that will enter values into the BLOCKED_PROCESS_REPORT table created above*/
/*Service account and sql operator option are optional*/
USE [msdb]
GO
IF EXISTS (SELECT job_id FROM msdb.dbo.sysjobs_view WHERE name = N'Capture BLOCKED_PROCESS_REPORT Event')
EXEC msdb.dbo.sp_delete_job @job_name = N'Capture BLOCKED_PROCESS_REPORT Event', @delete_unused_schedule=1
GO
--DECLARE @ServiceAccount varchar(128)
--SET @ServiceAccount = N'
--DECLARE @SQLOperator varchar(128)
--SET @SQLOperator = N'
BEGIN TRANSACTION
DECLARE @ReturnCode INT
SELECT @ReturnCode = 0
IF NOT EXISTS (SELECT name FROM msdb.dbo.syscategories WHERE name=N'[Uncategorized (Local)]' AND category_class=1)
BEGIN
EXEC @ReturnCode = msdb.dbo.sp_add_category @class=N'JOB', @type=N'LOCAL', @name=N'[Uncategorized (Local)]'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
END
DECLARE @jobId BINARY(16)
EXEC @ReturnCode = msdb.dbo.sp_add_job @job_name=N'Capture BLOCKED_PROCESS_REPORT Event',
@enabled=1,
@notify_level_eventlog=2,
@notify_level_email=3,
@notify_level_netsend=0,
@notify_level_page=0,
@delete_level=0,
@description=N'Job for responding to blocking events',
@category_name=N'[Uncategorized (Local)]',
--@owner_login_name=@ServiceAccount,
--@notify_email_operator_name=@SQLOperator,
@job_id = @jobId OUTPUT
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
/* Insert the event data into BLOCKED_PROCESS_REPORT */
EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'Insert data into LogEvents',
@step_id=1,
@cmdexec_success_code=0,
@on_success_action=1,
@on_success_step_id=0,
@on_fail_action=2,
@on_fail_step_id=0,
@retry_attempts=0,
@retry_interval=0,
@os_run_priority=0, @subsystem=N'TSQL',
@command=N'
INSERT INTO BLOCKED_PROCESS_REPORT (
[PostTime] ,
[SQLInstance] ,
[Databaseid] ,
[computerName],
[SessionLoginName],
[SPID] ,
[TransactionID] ,
[EventSequence] ,
[objectID] ,
[IndexID] ,
[TextData],
[duration]
)
VALUES (
GETDATE(),
N''$(ESCAPE_NONE(WMI(SQLInstance)))'',
N''$(ESCAPE_NONE(WMI(Databaseid)))'',
N''$(ESCAPE_NONE(WMI(ComputerName)))'',
N''$(ESCAPE_NONE(WMI(SessionLoginname)))'',
N''$(ESCAPE_NONE(WMI(SPID)))'',
N''$(ESCAPE_NONE(WMI(TransactionID)))'',
N''$(ESCAPE_NONE(WMI(EventSequence)))'',
N''$(ESCAPE_NONE(WMI(objectid)))'',
N''$(ESCAPE_NONE(WMI(indexid)))'',
N''$(ESCAPE_SQUOTE(WMI(Textdata)))'',
N''$(ESCAPE_NONE(WMI(Duration)))''
)',
@database_name=N'master',
@flags=0
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
COMMIT TRANSACTION
GOTO EndSave
QuitWithRollback:
IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
EndSave:
GO
/*Creating the alert and associating it with the Job to be fired */
USE [msdb]
GO
IF EXISTS (SELECT name FROM msdb.dbo.sysalerts WHERE name = N'Respond to BLOCKED_PROCESS_REPORT')
EXEC msdb.dbo.sp_delete_alert @name=N'Respond to BLOCKED_PROCESS_REPORT'
GO
DECLARE @server_namespace varchar(255)
IF ISNULL(CHARINDEX('\', @@SERVERNAME), 0) > 0
SET @server_namespace = N'\\.\root\Microsoft\SqlServer\ServerEvents\' + SUBSTRING(@@SERVERNAME, ISNULL(CHARINDEX('\', @@SERVERNAME), 0) + 1, LEN(@@SERVERNAME) - ISNULL(CHARINDEX('/', @@SERVERNAME), 0))
ELSE
SET @server_namespace = N'\\.\root\Microsoft\SqlServer\ServerEvents\MSSQLSERVER'
EXEC msdb.dbo.sp_add_alert @name=N'Respond to BLOCKED_PROCESS_REPORT',
@enabled=1,
@notification_message=N'Your Message',
@wmi_namespace=@server_namespace,
@wmi_query=N'SELECT * FROM BLOCKED_PROCESS_REPORT Where Duration > 30',
@job_name='Capture BLOCKED_PROCESS_REPORT Event' ;
EXEC msdb.dbo.sp_add_notification @alert_name=N'Respond to BLOCKED_PROCESS_REPORT', @operator_name=N'Test', @notification_method = 1
GO
select * from master..BLOCKED_PROCESS_REPORT
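Worth noting: the blocked process report event only fires when the server-level threshold is configured (value in seconds, 0 disables it). The value below is an example, not taken from the original post:
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'blocked process threshold', 20;   -- report blocking every 20 seconds
RECONFIGURE;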
Memory Consumption - Cached Pages Count
SELECT COUNT(*) AS cached_pages_count
,name ,index_id
FROM sys.dm_os_buffer_descriptors AS bd
INNER JOIN
(
SELECT object_name(object_id) AS name
,index_id ,allocation_unit_id
FROM sys.allocation_units AS au
INNER JOIN sys.partitions AS p
ON au.container_id = p.hobt_id
AND (au.type = 1 OR au.type = 3)
UNION ALL
SELECT object_name(object_id) AS name
,index_id, allocation_unit_id
FROM sys.allocation_units AS au
INNER JOIN sys.partitions AS p
ON au.container_id = p.hobt_id
AND au.type = 2
) AS obj
ON bd.allocation_unit_id = obj.allocation_unit_id
WHERE database_id = db_id()
GROUP BY name, index_id
ORDER BY cached_pages_count DESC
29 July 2010
To Search Table Name In List Of Databases
EXEC sp_MSforeachdb 'IF EXISTS(Select * from [?].sys.tables where name like
''SEARCHCRITERIA OR TABLENAME'') print ''[?]'''
Batch File To Fetch Details From Set Of Servers
@echo off
@cls
@del CheckReport.txt
@del RepOutput.txt
@cls
@@for /f %%a in (serverlist.txt) do (
@@sqlcmd -S %%a -E -s+ -i test.sql -o RepOutput.txt
@@type RepOutput.txt>>CheckReport.txt
@@echo --------------------------------------------------+----------------------------------------------------------------------------------------------------+-----------+-----------------------+-----------------------+---------------->>CheckReport.txt
)
The batch file uses two text files, Serverlist.txt and CheckReport.txt (plus RepOutput.txt as a per-server scratch file).
Serverlist.txt is the file where we list the servers to query.
CheckReport.txt is the consolidated output file.
Test.sql is the SQL query that SQLCMD runs against each server; the -E switch makes it use a trusted connection.
I am using Test.SQL to fetch the Backup information and Recovery Model
===========================================================
set nocount on
use msdb
Select Distinct convert(varchar(25),@@Servername) as Servername,
convert(varchar(30),e.database_name) as DBname,
convert(varchar(20),convert(sysname,DatabasePropertyEx
(e.database_name,'Recovery'))),
(Select convert(varchar(25),Max(backup_finish_date) , 100)
From backupset a
Where a.database_name=e.database_name
and a.server_name = @@servername
and type='D'
Group by a.database_name) Database_Bk,
(Select convert(varchar(25),Max(backup_finish_date) , 100)
From backupset b
Where b.database_name=e.database_name
and b.server_name = @@servername
and type='L' Group by b.database_name) Log_Bk,
(Select convert(varchar(25),Max(backup_finish_date) , 100)
From backupset c
Where c.database_name=e.database_name
and c.server_name = @@servername
and type='I' Group by c.database_name) Diff_Bk,
(Select convert(varchar(25),Max(backup_finish_date) , 100)
From backupset d
Where d.database_name=e.database_name
and d.server_name = @@servername
and type='F' Group by d.database_name) File_Bk
From backupset e
Where e.database_name Not in ('tempdb','pubs','northwind','model')
and e.server_name = @@Servername
and e.database_name in (Select Distinct name from master..sysdatabases)
-- never backed up
Union all
select Distinct convert(varchar(25),@@Servername) as Servername,
convert(varchar(30),name) as DBname,
convert(varchar(20),convert(sysname,DatabasePropertyEx(name,'Recovery'))),
NULL, NULL , NULL, NULL
from master..sysdatabases as record
where name not in (select distinct database_name from msdb..backupset)
and name not in ('tempdb','pubs','northwind','model')
order by 1,2
28 July 2010
List Previously Executed Queries
SELECT
qs.execution_count,
DatabaseName = DB_NAME(qp.dbid),
ObjectName = OBJECT_NAME(qp.objectid,qp.dbid),
StatementDefinition = SUBSTRING ( st.text,( qs.statement_start_offset / 2) + 1,(
(CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(st.text)
ELSE qs.statement_end_offset
END - qs.statement_start_offset ) / 2) + 1 ),
query_plan,
st.text, total_elapsed_time
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan (qs.plan_handle) qp
WHERE st.encrypted = 0
ORDER BY qs.execution_count DESC
Lock Escalation
•When locking rows or index key ranges, the Database Engine places an intent lock on the pages that contain the rows or keys.
•When locking pages, the Database Engine places an intent lock on the higher level objects that contain the pages. In addition to intent lock on the object, intent page locks are requested on the following objects:
◦Leaf-level pages of nonclustered indexes
◦Data pages of clustered indexes
◦Heap data pages
Reducing Lock Escalation
•Using an isolation level that does not generate shared locks for read operations, such as READ COMMITTED with the READ_COMMITTED_SNAPSHOT database option enabled, or SNAPSHOT isolation
•Using trace flags 1211 and 1224 to disable all or some lock escalations
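From SQL Server 2008 onwards escalation can also be controlled per table rather than through instance-wide trace flags; a sketch with a placeholder table name (this option is not part of the original list):
ALTER TABLE dbo.YourTable SET (LOCK_ESCALATION = DISABLE);
-- Other options: TABLE (the default) and AUTO (escalate to the partition level on partitioned tables)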