
17 January 2011

Top 10 SQL Server 2008 Features for the Database Administrator (DBA)

Author: Mike Weiner

Contributors: Burzin Patel, Sanjay Mishra

Reviewers: Lubor Kollar, Kevin Cox, Bill Emmert, Greg Husemeier, Paul Burpo, Joseph Sack, Denny Lee, Lindsey Allen, Mark Souza

Microsoft SQL Server 2008 provides a number of enhancements and new functionality, building on previous versions. Administration, database maintenance, manageability, availability, security, and performance, among others, all fall within the roles and responsibilities of the database administrator. This article presents the top ten new features of SQL Server 2008 (listed in alphabetical order) that can help DBAs fulfill their responsibilities. In addition to a brief description of each feature, we include how the feature can help and some important usage considerations.

1

Activity Monitor

When troubleshooting a performance issue or monitoring a server in real time, it is common for the DBA to execute a number of scripts or check a number of sources to collect general information about what processes are executing and where the problem may be. SQL Server 2008 Activity Monitor consolidates this information by graphically detailing running and recently executed processes. The display gives the DBA a high-level view and the ability to drill down on processes and view wait statistics to help understand and resolve problems.

To open Activity Monitor, right-click the registered server name in Object Explorer and then click Activity Monitor, or use the standard toolbar icon in SQL Server Management Studio. Activity Monitor provides the DBA with an overview section producing output similar to Windows Task Manager, and drilldown components to look at specific processes, resource waits, data file I/O, and recent expensive queries, as shown in Figure 1.

Figure 1: Display of SQL Server 2008 Activity Monitor view from Management Studio


NOTE: There is a refresh interval setting, accessed by right-clicking in Activity Monitor. Setting this value to a low threshold (under 10 seconds) on a high-volume production system can impact overall system performance.

DBAs can also use Activity Monitor to perform the following tasks:

  • Pause and resume Activity Monitor with a simple right-click. This can help the DBA to “save” a particular point-in-time for further investigation without it being refreshed or overwritten. However, be careful, because if you manually refresh, expand, or collapse a section, the data will be refreshed.
  • Right-click a line item to display the full query text or graphical execution plan via Recent Expensive Queries.
  • Execute a Profiler trace or kill a process from the Processes view. Profiler events include RPC:Completed, SQL:BatchStarting, SQL:BatchCompleted, Audit Login, and Audit Logout.

Activity Monitor also provides the ability to monitor activity on any SQL Server 2005 instance, local or remote, registered in SQL Server Management Studio.

2

[SQL Server] Audit

Having the ability to monitor and log events, such as who is accessing objects, what changes occurred, and what time changes occurred, can help the DBA to meet compliance standards for regulatory or organizational security requirements. Gaining insight into the events occurring within their environment can also help the DBA in creating a risk mitigation plan to keep the environment secure.

Within SQL Server 2008 (Enterprise and Developer editions only), SQL Server Audit provides automation that allows the DBA and others to enable, store, and view audits on various server and database components. The feature allows for auditing at a granularity of the server and/or database level.

There are server-level audit action groups, such as:

  • FAILED_LOGIN_GROUP, which tracks failed logins.
  • BACKUP_RESTORE_GROUP, which shows when a database was backed up or restored.
  • DATABASE_CHANGE_GROUP, which audits when a database is created, altered, or dropped.

Database-level audit action groups include:

  • DATABASE_OBJECT_CHANGE_GROUP, which is raised whenever a CREATE, ALTER, or DROP statement is executed on database objects.
  • DATABASE_OBJECT_PERMISSION_CHANGE_GROUP, which is raised when GRANT, REVOKE, or DENY is utilized for database objects.

There are also audit actions, such as SELECT, DELETE, or EXECUTE. For more information, including a full list of the audit groups and actions, see SQL Server Audit Action Groups and Actions.

Audit results can be sent to a file or to an event log (the Windows Application or Security log) for viewing. Audit information is created by using Extended Events, another new SQL Server 2008 feature.

By using SQL Server 2008 audits, the DBA can now answer questions that were previously very difficult to retroactively determine, such as “Who dropped this index?”, “When was the stored procedure modified?”, “What changed which might not be allowing this user to access this table?”, or even “Who ran SELECT or UPDATE statements against the [dbo.Payroll] table?”
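
As an illustration, the following is a minimal Transact-SQL sketch of auditing SELECT and UPDATE statements against the dbo.Payroll table mentioned above; the audit names, file path, and database name are hypothetical:

USE master;
GO
-- Create a server audit that writes to a file (path is illustrative).
CREATE SERVER AUDIT PayrollAudit
    TO FILE (FILEPATH = N'C:\SQLAudit\');
ALTER SERVER AUDIT PayrollAudit WITH (STATE = ON);
GO
USE AdventureWorks;  -- hypothetical database
GO
-- Capture SELECT and UPDATE statements against dbo.Payroll by any user.
CREATE DATABASE AUDIT SPECIFICATION PayrollAccessSpec
FOR SERVER AUDIT PayrollAudit
    ADD (SELECT, UPDATE ON OBJECT::dbo.Payroll BY public)
WITH (STATE = ON);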

For more information about using SQL Server Audit and some examples of implementation, see the SQL Server 2008 Compliance Guide.

3

Backup Compression

This feature has long been a popular request of DBAs for SQL Server. The wait is finally over, and just in time! Many factors, including increased data retention periods and the need to physically store more data, have contributed to the recent explosion in database size. Backing up a large database can require a significant time window to be allotted to backup operations and a large amount of disk space for the backup file(s).

With SQL Server 2008 backup compression, the backup file is compressed as it is written out, thereby requiring less storage, less disk I/O, and less time. In lab tests conducted with real customer data, we observed in many cases a reduction in backup file size of between 70% and 85%. Testing also revealed around a 45% reduction in backup and restore time. It is important to note that the additional processing results in higher processor utilization. To help segregate the CPU-intensive backup and minimize its effect on other processes, one might consider utilizing another feature mentioned in this article, Resource Governor.

The compression is achieved by specifying the WITH COMPRESSION clause in the BACKUP command (for more information, see SQL Server Books Online) or by selecting it in the Options page in the Back Up Database dialog box. To prevent having to modify all existing backup scripts, there is also a global setting to enable compressing all backups taken on a server instance by default. (This setting is accessed by using the Database Settings page of the Server Properties dialog box or by running sp_configure with backup compression default set to 1.) While the compression option on the backup command needs to be explicitly specified, the restore command automatically detects that a backup is compressed and decompresses it during the restore operation.
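
For example, a single backup can be compressed explicitly, or compression can be made the instance-wide default; the database name and path below are illustrative:

-- Compress one backup explicitly.
BACKUP DATABASE AdventureWorks
TO DISK = N'E:\Backups\AdventureWorks.bak'
WITH COMPRESSION;

-- Or make compression the default for all backups on this instance.
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;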

Backup compression is a very useful feature that can help the DBA save space and time. For more information about tuning backup compression, see the technical note on Tuning the Performance of Backup Compression in SQL Server 2008. NOTE: Creating compressed backups is only supported in SQL Server 2008 Enterprise and Developer editions; however, every SQL Server 2008 edition allows for a compressed backup to be restored.

4

Central Management Servers

DBAs are frequently responsible for managing not one but many SQL Server instances in their environment. Having the ability to centralize the management and administration of a number of SQL Server instances from a single source can allow the DBA to save significant time and effort. The Central Management Servers implementation, which is accessed via the Registered Servers component in SQL Server Management Studio, allows the DBA to perform a number of administrative tasks on SQL Servers within the environment, from a single management console.

Central Management Servers allow the DBA to register a group of servers and apply functionality to the servers, as a group, such as:

  • Multiserver query execution: A script can now be executed from one source, across multiple SQL Servers, with the results returned to that source, without the need to log in to each server separately. This can be extremely helpful in cases where data from tables on two or more SQL Servers needs to be viewed or compared without the execution of a distributed query. Also, as long as the syntax is supported in earlier server versions, a query executed from the Query Editor in SQL Server 2008 can run against SQL Server 2005 and SQL Server 2000 instances as well. For more information, see the SQL Server Manageability Team Blog, specifically Multiple Server Query Execution in SQL Server 2008.
  • Import and evaluate policies across servers: As part of Policy-Based Management (another new SQL Server 2008 feature discussed in this article), SQL Server 2008 provides the ability to import policy files into particular Central Management Server groups and allows policies to be evaluated across all of the servers registered in the group.
  • Control services and bring up SQL Server Configuration Manager: Central Management Servers help provide a central place where DBAs can view service status and even change status for the services, assuming they have the appropriate permissions.
  • Import and export the registered servers: Servers within Central Management Servers can be exported and imported for use between DBAs or different SQL Server Management Studio instance installations. This is an alternative to DBAs importing or exporting into their own local groupings within SQL Server Management Studio.

Be aware that permissions are enforced via Windows authentication, so a user might have different rights and permissions depending on the server registered within the Central Management Server group. For more information, see Administering Multiple Servers Using Central Management Servers and a Kimberly Tripp blog: SQL Server 2008 Central Management Servers-have you seen these?

5

Data Collector and Management Data Warehouse

Performance tuning and troubleshooting are time-consuming tasks that can require in-depth SQL Server skills and an understanding of database internals. Windows System Monitor (Perfmon), SQL Server Profiler, and dynamic management views (DMVs) helped with some of this, but they were often intrusive or laborious to use, and the dispersed data collection methods were cumbersome to summarize and interpret.

To provide actionable performance insight, SQL Server 2008 delivers a fully extensible performance data collection and warehouse tool, known as the data collector. The tool includes several out-of-the-box data collection agents, a centralized data repository for storing performance data called the management data warehouse, and several precanned reports to present the captured data. The data collector is a scalable tool that can collect and assimilate data from multiple sources, such as dynamic management views, Perfmon, and Transact-SQL queries, using a fully customizable data collection frequency. The data collector can be extended to collect data for any measurable attribute of an application.

Another helpful feature of the management data warehouse is that it can be installed on any SQL Server and then collect data from one or more SQL Server instances within the environment. This can help minimize the performance impact on production systems and improve scalability in terms of monitoring and collecting data from a number of servers. In lab testing we observed around a 4% reduction in throughput when running the agents and the management data warehouse on a server running at capacity (via an OLTP workload). The impact can vary based on the collection interval (the test ran an extended workload with 15-minute pulls into the warehouse), and it can be exacerbated during intervals of data collection. Finally, some capacity planning should be done, because the DCExec.exe process takes up some memory and processor resources, and writes to the management data warehouse will increase the I/O workload and the space needed where the data and log files are located.
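
For reference, the installed collection sets can be inspected and started from msdb, as in this minimal sketch (the collection set ID varies by installation):

USE msdb;
GO
-- List the installed collection sets and whether each is running.
SELECT collection_set_id, name, is_running
FROM dbo.syscollector_collection_sets;

-- Start a collection set by its ID (the ID value here is illustrative).
EXEC dbo.sp_syscollector_start_collection_set @collection_set_id = 1;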

The diagram (Figure 2) below depicts a typical data collector report.

Figure 2: Display of SQL Server 2008 Data Collector Report


This report shows SQL Server processing over the period of time data was collected. Events such as waits, CPU, I/O, memory usage, and expensive query statistics are collected and displayed. A DBA can also drill down into the reports to focus on a particular query or operation to further investigate, detect, and resolve performance problems. This data collection, storage, and reporting can allow the DBA to establish proactive monitoring of the SQL Server(s) in the environment and go back over time to understand and assess changes to performance over the time period monitored. The data collector and management data warehouse feature is supported in all editions (except SQL Server Express) of SQL Server 2008.

6

Data Compression

The ability to easily manage a database can greatly enhance the opportunity for DBAs to accomplish their regular task lists. As table, index, and file sizes grow and very large databases (VLDBs) become commonplace, the management of data and unwieldy file sizes has become a growing pain point. Also, with more data being queried, the need for large amounts of memory or the necessity to do physical I/O can place a larger burden on DBAs and their organizations. Many times this results in DBAs and organizations securing servers with more memory and/or I/O bandwidth or having to pay a performance penalty.

Data compression, introduced in SQL Server 2008, provides a resolution to help address these problems. Using this feature, a DBA can selectively compress any table, table partition, or index, resulting in a smaller on-disk footprint, smaller memory working-set size, and reduced I/O. The act of compression and decompression will impact CPU; however, this impact is in many cases offset by the gains in I/O savings. Configurations that are bottlenecked on I/O can also see an increase in performance due to compression.

In some lab tests, enabling data compression resulted in a 50-80% saving in disk space. The space savings did vary significantly with minimal savings on data that did not contain many repeating values or where the values required all the bytes allocated by the specified data type. There were also workloads that did not show any gains in performance. However, on data that contained a lot of numeric data and many repeating values, we saw significant space savings and observed performance increases from a few percentage points up to 40-60% on some sample query workloads.

SQL Server 2008 supports two types of compression: row compression, which compresses the individual columns of a table, and page compression, which compresses data pages using row, prefix, and dictionary compression. The amount of compression achieved is highly dependent on the data types and data contained in the database. In general we have observed that using row compression results in lower overhead on application throughput but saves less space. Page compression, on the other hand, has a higher impact on application throughput and processor utilization, but it results in much larger space savings. Page compression is a superset of row compression, implying that an object or partition of an object that is compressed using page compression also has row compression applied to it. Also, SQL Server 2008 does support the vardecimal storage format of SQL Server 2005 SP2. However, because this storage format is a subset of row compression, it is a deprecated feature and will be removed from future product versions.

Both row and page compression can be applied to a table or index in online mode, that is, without any interruption to application availability. However, a single partition of a partitioned table cannot be compressed or uncompressed online. In our testing we found that using a hybrid approach, where only the largest few tables were compressed, resulted in the best balance: significant disk space savings with a minimal negative impact on performance. Because there are disk space requirements, similar to those for creating or rebuilding an index, care should be taken in implementing compression as well. We also found that compressing the smallest objects first, from the list of objects you want to compress, minimized the need for additional disk space during the compression process.

Data compression can be implemented via Transact-SQL or the Data Compression Wizard. To determine how compressing an object will affect its size, you can use the sp_estimate_data_compression_savings system stored procedure or the Data Compression Wizard to calculate the estimated space savings. Data compression is only supported in SQL Server 2008 Enterprise and Developer editions. It is implemented entirely within the database and does not require any application modification.
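
For example, a DBA might first estimate the savings for a table and then compress it; the object names are illustrative:

-- Estimate the space savings of page compression for a table.
EXEC sp_estimate_data_compression_savings
    @schema_name = 'dbo',
    @object_name = 'Products',
    @index_id = NULL,
    @partition_number = NULL,
    @data_compression = 'PAGE';

-- Apply page compression by rebuilding the table.
ALTER TABLE dbo.Products REBUILD WITH (DATA_COMPRESSION = PAGE);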

For more information about using compression, see Creating Compressed Tables and Indexes.

7

Policy-Based Management

In a number of business scenarios, there is a need to maintain certain configurations or enforce policies either within a specific SQL Server, or many times across a group of SQL Servers. A DBA or organization may require a particular naming convention to be implemented on all user tables or stored procedures that are created, or a required configuration change to be defined across a number of servers in the same manner.

Policy-Based Management (PBM) provides DBAs with a wide variety of options in managing their environment. Policies can be created and checked for compliance. If a target (such as a SQL Server database engine, a database, a table, or an index) is out of compliance, the administrator can automatically reconfigure it to be in compliance. There are also a number of evaluation modes (of which many are automated) that can help the DBA check for policy compliance, log and notify when a policy violation occurs, and even roll back the change to keep in compliance with the policy. For more information about evaluation modes and how they are mapped to facets (a PBM term also discussed in the blog), see the SQL Server Policy-Based Management blog.

The policies can be exported and imported as .xml files for evaluation and implementation across multiple server instances. Also, in SQL Server Management Studio and the Registered Servers view, policies can be evaluated across multiple servers if they are registered under a local server group or a Central Management Server group.

Not all of the functionality of Policy-Based Management can be implemented on earlier versions of SQL Server. However, the policy reporting feature can be utilized on SQL Server 2005 and SQL Server 2000. For more information about administering servers by using Policy-Based Management, see Administering Servers by Using Policy-Based Management in SQL Server Books Online. For more information about the technology itself, including examples, see the SQL Server 2008 Compliance Guide.

8

Predictable Performance and Concurrency

A significant problem many DBAs face is trying to support SQL Servers with ever-changing workloads while achieving some level of predictable performance (or minimizing variance in plans and performance). Unexpected query performance, plan changes, and/or general performance issues can come about due to a number of factors, including increased application load running against SQL Server or version upgrades of the database itself. Getting predictable performance from queries or operations run against SQL Server can greatly enhance the DBA's ability to meet and maintain availability, performance, and/or business continuity goals (OLAs or SLAs).

SQL Server 2008 provides a few feature changes that can help deliver more predictable performance: enhancements to the SQL Server 2005 plan guides (including plan freezing) and a new option to control lock escalation at the table level. Both of these enhancements can provide a more predictable and structured interaction between the application and the database.

First, plan guides:

SQL Server 2005 enabled greater query performance stability and predictability by providing a new feature called plan guides, which allowed specifying hints for queries that could not be modified directly in the application. For more information, see the Forcing Query Plans white paper. While a very powerful feature, the USE PLAN query hint supported only SELECT DML operations, and plan guides were often cumbersome to use because of their sensitivity to query text formatting.

SQL Server 2008 builds on the plan guides mechanism in two ways: It expands the support for the USE PLAN query hint to cover all DML statements (INSERT, UPDATE, DELETE, MERGE), and it introduces a new plan freezing feature that can be used to directly create a plan guide for (freeze) any query plan that exists in the SQL Server plan cache, as in the following example.

-- @plan_handle and @offset identify the cached plan and statement to freeze;
-- they are typically obtained from sys.dm_exec_query_stats.
EXEC sp_create_plan_guide_from_handle
    @name = N'MyQueryPlan',
    @plan_handle = @plan_handle,
    @statement_start_offset = @offset;

A plan guide created by either means has a database scope and is stored in the sys.plan_guides table. Plan guides are only used to influence the query plan selection process of the optimizer and do not eliminate the need for the query to be compiled. A new function, sys.fn_validate_plan_guide, has also been introduced to validate existing SQL Server 2005 plan guides and ensure their compatibility with SQL Server 2008. Plan freezing is available in the SQL Server 2008 Standard, Enterprise, and Developer editions.
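
For example, all plan guides in a database can be checked in one pass with the new function; valid guides return no rows from this query:

-- Return a row only for plan guides that fail validation.
SELECT pg.plan_guide_id, pg.name, v.message
FROM sys.plan_guides AS pg
CROSS APPLY sys.fn_validate_plan_guide(pg.plan_guide_id) AS v;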

Next, lock escalation:

Lock escalation has often caused blocking and sometimes even deadlocking problems, which the DBA is forced to troubleshoot and resolve. Previous versions of SQL Server permitted controlling lock escalation (trace flags 1211 and 1224), but only at an instance-level granularity. While this helped some applications work around the problem, it caused severe issues for others. Another problem with the SQL Server 2005 lock escalation algorithm was that locks on partitioned tables were escalated directly to the table level, rather than to the partition level.

SQL Server 2008 offers a solution for both of these problems. A new option has been introduced to control lock escalation at the table level. By using an ALTER TABLE command option, locks can be specified not to escalate, or to escalate to the partition level for partitioned tables. Both of these enhancements help improve scalability and performance without negative side effects on other objects in the instance. Lock escalation is specified at the database-object level and does not require any application change. It is supported in all editions of SQL Server 2008.
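
A minimal sketch of both settings follows; the table name is illustrative:

-- Allow escalation only to the partition level on a partitioned table.
ALTER TABLE dbo.Orders SET (LOCK_ESCALATION = AUTO);

-- Or prevent lock escalation on this table entirely.
ALTER TABLE dbo.Orders SET (LOCK_ESCALATION = DISABLE);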

9

Resource Governor

Maintaining a consistent level of service by preventing runaway queries and guaranteeing resources for mission-critical workloads has been a challenge. In the past there was no way of guaranteeing a certain amount of resources to a set of queries and prioritizing the access. All queries had equal access to all the available resources.

SQL Server 2008 introduces a new feature called Resource Governor, which helps address this issue by enabling users to differentiate workloads and allocate resources as they are requested. Resource Governor limits can easily be reconfigured in real time with minimal impact on the workloads that are executing. The allocation of the workload to a resource pool is configurable at the connection level, and the process is completely transparent to the application.

The diagram below depicts the resource allocation process. In this scenario three workload groups (Admin Workload, OLTP Workload, and Report Workload) are configured, and the OLTP Workload group is assigned a high priority. In parallel, two resource pools (Admin Pool and Application Pool) are configured with specific memory and processor (CPU) limits as shown. As a final step, the Admin Workload is assigned to the Admin Pool, and the OLTP and Report Workloads are assigned to the Application Pool.

Figure 3: Resource Governor workload groups and resource pools
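
The following is a hedged Transact-SQL sketch of this configuration; the resource limits, login name, and application name used by the classifier are illustrative, since the exact values depend on the environment:

USE master;
GO
-- Resource pools with illustrative CPU and memory caps.
CREATE RESOURCE POOL AdminPool
    WITH (MAX_CPU_PERCENT = 20, MAX_MEMORY_PERCENT = 20);
CREATE RESOURCE POOL ApplicationPool
    WITH (MAX_CPU_PERCENT = 80, MAX_MEMORY_PERCENT = 80);

-- Workload groups mapped to the pools; the OLTP group gets high importance.
CREATE WORKLOAD GROUP AdminWorkload USING AdminPool;
CREATE WORKLOAD GROUP OLTPWorkload WITH (IMPORTANCE = HIGH) USING ApplicationPool;
CREATE WORKLOAD GROUP ReportWorkload USING ApplicationPool;
GO

-- Classifier function: routes each new session to a workload group.
CREATE FUNCTION dbo.fnRGClassifier() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @grp sysname;
    SET @grp = N'OLTPWorkload';               -- default group
    IF SUSER_SNAME() = N'AdminLogin'          -- hypothetical admin login
        SET @grp = N'AdminWorkload';
    ELSE IF APP_NAME() LIKE N'%Report%'       -- hypothetical reporting app
        SET @grp = N'ReportWorkload';
    RETURN @grp;
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnRGClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;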

Below are some other points you need to consider when using Resource Governor.

  • Resource Governor relies on login credentials, host name, or application name as a ‘resource pool identifier’, so using a single login for an application, depending on the number of clients per server, might make creating pools more difficult.
  • Database-level object grouping, in which the resource governing is done based on the database objects being referenced, is not supported.
  • Resource Governor only allows resource management within a single SQL Server instance. For managing multiple SQL Server instances or processes within a server from a single source, Windows System Resource Manager should be considered.
  • Only processor and memory resources can be configured. I/O resources cannot be controlled.
  • Dynamically switching workloads between resource pools once a connection is made is not possible.
  • Resource Governor is only supported in SQL Server 2008 Enterprise and Developer editions and can only be used for the SQL Server database engine; SQL Server Analysis Services (SSAS), SQL Server Integration Services (SSIS), and SQL Server Reporting Services (SSRS) cannot be controlled.

10

Transparent Data Encryption (TDE)

Security is one of the top concerns of many organizations. There are many different layers to securing one of the most important assets of an organization: its data. In most cases, organizations do well at securing their active data via the use of physical security, firewalls, and tightly controlled access policies. However, when the physical medium, such as the backup tape or disk on which the data resides, is compromised, the above security measures are of no use, because a rogue user can simply restore the database and get full access to the data.

SQL Server 2008 offers a solution to this problem by way of transparent data encryption (TDE). TDE performs real-time I/O encryption and decryption of the data and log files by using a database encryption key (DEK). The DEK is a symmetric key secured by using a certificate stored in the master database of the server, or an asymmetric key protected by an Extensible Key Management (EKM) module.

TDE is designed to protect data ‘at rest’, which means the data stored in the .mdf, .ndf, and .ldf files cannot be viewed using a hex editor or other means. However, data that is not at rest, such as the results of a SELECT statement in SQL Server Management Studio, will continue to be visible to users who have rights to view the table. Also, because TDE is implemented at the database level, the database can leverage indexes and keys for query optimization. TDE should not be confused with column-level encryption, which is a separate feature that allows encryption of data even when it is not at rest.

Encrypting a database is a one-time process that can be initiated via a Transact-SQL command or SQL Server Management Studio, and it is executed as a background thread. You can monitor the encryption or decryption status using the sys.dm_database_encryption_keys dynamic management view. In a lab test we conducted, we were able to encrypt a 100 GB database using the AES_128 encryption algorithm in about an hour. While the overhead of using TDE is largely dictated by the application workload, in some of the testing conducted that overhead was measured to be less than 5%. One potential performance impact to be aware of: if any database within the instance has TDE applied, the tempdb system database is also encrypted. Finally, of note when combining features (a Transact-SQL sketch of enabling TDE follows this list):

  • When backup compression is used to compress an encrypted database, the size of the compressed backup is larger than if the database were not encrypted, because encrypted data does not compress well.
  • Encrypting the database does not affect data compression (row or page).
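
As referenced above, here is a minimal sketch of enabling TDE; the database name, password, and certificate name are illustrative:

USE master;
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password here>';
CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate';
GO
USE AdventureWorks;  -- hypothetical database
GO
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_128
    ENCRYPTION BY SERVER CERTIFICATE TDECert;
GO
ALTER DATABASE AdventureWorks SET ENCRYPTION ON;
GO
-- Monitor progress; encryption_state = 3 means encryption is complete.
SELECT DB_NAME(database_id) AS database_name, encryption_state
FROM sys.dm_database_encryption_keys;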

TDE enables organizations to meet the demands of regulatory compliance and overall concern for data privacy. TDE is only supported in the SQL Server 2008 Enterprise and Developer editions and can be enabled without changing existing applications. For more information, see Database Encryption in SQL Server 2008 Enterprise Edition or the SQL Server 2008 Compliance Guide discussion on Using Transparent Data Encryption.

In conclusion, SQL Server 2008 offers features, enhancements, and functionality to help improve the database administrator experience. While a Top 10 list was provided above, there are many more features included within SQL Server 2008 that help improve the experience for DBAs and other users alike. For a Top 10 feature set for other SQL Server focus areas, see the other SQL Server 2008 Top 10 articles on this site. For a full list of features and detailed descriptions, see SQL Server Books Online and the SQL Server 2008 Overview Web site.







Courtesy: SQLCAT

14 January 2011

Changing Roles Between Primary and Secondary Servers

Performing the Initial Role Change

The first time you want to fail over to the secondary database and make it your new primary database, there is a series of steps you must take. After you have followed these initial steps, you will be able to swap roles between the primary database and the secondary database easily.

  1. Manually fail over from the primary database to a secondary database. Be sure to back up the active transaction log on your primary server with NORECOVERY (a sketch of this command appears after these steps). For more information, see Failing Over to a Log Shipping Secondary.

  2. Disable the log shipping backup job on the original primary server, and the copy and restore jobs on the original secondary server.

  3. On your secondary database (the database you want to be the new primary), configure log shipping using SQL Server Management Studio. For more information, see How to: Enable Log Shipping (SQL Server Management Studio). Include the following steps:

    • Use the same share for creating backups that you created for the original primary server.

    • When adding the secondary database, in the Secondary Database Settings dialog box, enter the name of the original primary database in the Secondary database box.

    • In the Secondary Database Settings dialog box, select No, the secondary database is initialized.
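
The tail-log backup in step 1 might look like the following sketch; the database name and backup path are illustrative:

-- On the original primary server: back up the tail of the log,
-- leaving the database in the restoring state.
BACKUP LOG AdventureWorks
TO DISK = N'\\BackupShare\Logs\AdventureWorks_tail.trn'
WITH NORECOVERY;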

Swapping Roles

After you have completed the steps above for the initial role change, you can change roles between the primary database and the secondary database by following the steps in this section. To perform a role change, follow these general steps:

  1. Bring the secondary database online, backing up the transaction log on the primary server with NORECOVERY.

  2. Disable the log shipping backup job on the original primary server, and the copy and restore jobs on the original secondary server.

  3. Enable the log shipping backup job on the secondary server (the new primary server), and the copy and restore jobs on the primary server (the new secondary server).

IMPORTANT: When you change a secondary database to the primary database, to provide a consistent experience to users and applications, you might have to re-create some or all of the metadata for the database, such as logins and jobs, on the new primary server instance. For more information, see Managing Metadata When Making a Database Available on Another Server Instance.

Failing Over to a Log Shipping Secondary

Failing over to a log shipping secondary is useful if the primary server instance fails or requires maintenance.

Preparing for a Controlled Failover

Typically, the primary and secondary databases are unsynchronized, because the primary database continues to be updated after its latest backup job. Also, in some cases, recent transaction log backups have not been copied to the secondary server instances, or some copied log backups might still not have been applied to the secondary database. We recommend that you begin by synchronizing all of the secondary databases with the primary database, if possible.

For information about log shipping jobs, see Log Shipping Overview.

Failing Over

To fail over to a secondary database:

  1. Copy any uncopied backup files from the backup share to the copy destination folder of each secondary server.

  2. Apply any unapplied transaction log backups in sequence to each secondary database. For more information, see How to: Apply a Transaction Log Backup (Transact-SQL).

  3. If the primary database is accessible, back up the active transaction log and apply the log backup to the secondary databases.

    If the original primary server instance is not damaged, back up the tail of the transaction log of the primary database using WITH NORECOVERY. This leaves the database in the restoring state and therefore unavailable to users. Eventually you will be able to roll this database forward by applying transaction log backups from the replacement primary database.

    For more information, see Working with Transaction Log Backups.

  4. After the secondary servers are synchronized, you can fail over to whichever one you prefer by recovering its secondary database and redirecting clients to that server instance (a Transact-SQL sketch follows these steps). Recovering puts the database into a consistent state and brings it online. For more information, see How to: Recover a Database from a Backup Without Restoring Data (Transact-SQL).

    NOTE: When you make a secondary database available, you should ensure that its metadata is consistent with the metadata of the original primary database. For more information, see Managing Metadata When Making a Database Available on Another Server Instance.

  5. After you have recovered a secondary database, you can reconfigure it to act as a primary database for other secondary databases. For more information, see Changing Roles Between Primary and Secondary Servers.

    If no other secondary database is available, see How to: Enable Log Shipping (SQL Server Management Studio) or How to: Enable Log Shipping (Transact-SQL).
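
As an illustrative Transact-SQL sketch of steps 2 and 4 (the database name and file path are hypothetical):

-- Step 2: apply any remaining log backups, keeping the database restoring.
RESTORE LOG AdventureWorks
FROM DISK = N'C:\LogShipping\Copy\AdventureWorks_20110114.trn'
WITH NORECOVERY;

-- Step 4: recover the secondary database to bring it online as the new primary.
RESTORE DATABASE AdventureWorks WITH RECOVERY;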

DBA WORLD: Log Shipping Tables and Stored Procedures

Primary Server Tables

  • log_shipping_monitor_alert: Stores the alert job ID. This table is only used on the primary server if a remote monitor server has not been configured.
  • log_shipping_monitor_error_detail: Stores error detail for log shipping jobs associated with this primary server.
  • log_shipping_monitor_history_detail: Stores history detail for log shipping jobs associated with this primary server.
  • log_shipping_monitor_primary: Stores one monitor record for this primary database.
  • log_shipping_primary_databases: Contains configuration information for primary databases on a given server. Stores one row per primary database.
  • log_shipping_primary_secondaries: Maps primary databases to secondary databases.

Primary Server Stored Procedures

  • sp_add_log_shipping_primary_database: Sets up the primary database for a log shipping configuration, including the backup job, local monitor record, and remote monitor record.
  • sp_add_log_shipping_primary_secondary: Adds a secondary database name to an existing primary database.
  • sp_change_log_shipping_primary_database: Changes primary database settings, including the local and remote monitor records.
  • sp_cleanup_log_shipping_history: Cleans up history locally and on the monitor based on the retention period.
  • sp_delete_log_shipping_primary_database: Removes log shipping of the primary database, including the backup job as well as local and remote history.
  • sp_delete_log_shipping_primary_secondary: Removes a secondary database name from a primary database.
  • sp_help_log_shipping_primary_database: Retrieves primary database settings and displays the values from the log_shipping_primary_databases and log_shipping_monitor_primary tables.
  • sp_help_log_shipping_primary_secondary: Retrieves secondary database names for a primary database.
  • sp_refresh_log_shipping_monitor: Refreshes the monitor with the latest information for the specified log shipping agent.

Secondary Server Tables

  • log_shipping_monitor_alert: Stores the alert job ID. This table is only used on the secondary server if a remote monitor server has not been configured.
  • log_shipping_monitor_error_detail: Stores error detail for log shipping jobs associated with this secondary server.
  • log_shipping_monitor_history_detail: Stores history detail for log shipping jobs associated with this secondary server.
  • log_shipping_monitor_secondary: Stores one monitor record per secondary database associated with this secondary server.
  • log_shipping_secondary: Contains configuration information for the secondary databases on a given server. Stores one row per secondary ID.
  • log_shipping_secondary_databases: Stores configuration information for a given secondary database. Stores one row per secondary database.

NOTE: Secondary databases on the same secondary server for a given primary database share the settings in the log_shipping_secondary table. If a shared setting is altered for one secondary database, the setting is altered for all of them.

Secondary Server Stored Procedures

  • sp_add_log_shipping_secondary_database: Sets up a secondary database for log shipping.
  • sp_add_log_shipping_secondary_primary: Sets up the primary information, adds local and remote monitor links, and creates copy and restore jobs on the secondary server for the specified primary database.
  • sp_change_log_shipping_secondary_database: Changes secondary database settings, including the local and remote monitor records.
  • sp_change_log_shipping_secondary_primary: Changes secondary database settings such as the source and destination directories and the file retention period.
  • sp_cleanup_log_shipping_history: Cleans up history locally and on the monitor based on the retention period.
  • sp_delete_log_shipping_secondary_database: Removes a secondary database and its local and remote history.
  • sp_delete_log_shipping_secondary_primary: Removes the information about the specified primary server from the secondary server.
  • sp_help_log_shipping_secondary_database: Retrieves secondary database settings from the log_shipping_secondary, log_shipping_secondary_databases, and log_shipping_monitor_secondary tables.
  • sp_help_log_shipping_secondary_primary: Retrieves the settings for a given primary database on the secondary server.
  • sp_refresh_log_shipping_monitor: Refreshes the monitor with the latest information for the specified log shipping agent.

Monitor Server Tables

  • log_shipping_monitor_alert: Stores the alert job ID.
  • log_shipping_monitor_error_detail: Stores error detail for log shipping jobs.
  • log_shipping_monitor_history_detail: Stores history detail for log shipping jobs.
  • log_shipping_monitor_primary: Stores one monitor record per primary database associated with this monitor server.
  • log_shipping_monitor_secondary: Stores one monitor record per secondary database associated with this monitor server.

Monitor Server Stored Procedures

  • sp_add_log_shipping_alert_job: Creates a log shipping alert job if one has not already been created.
  • sp_delete_log_shipping_alert_job: Removes a log shipping alert job if there are no associated primary databases.
  • sp_help_log_shipping_alert_job: Returns the job ID of the alert job.
  • sp_help_log_shipping_monitor_primary: Returns monitor records for the specified primary database from the log_shipping_monitor_primary table.
  • sp_help_log_shipping_monitor_secondary: Returns monitor records for the specified secondary database from the log_shipping_monitor_secondary table.



12 January 2011

2 Jobs - Replication

Distribution clean up

If a subscription is not synchronized within the maximum distribution retention period (default of 72 hours) and there are changes in the distribution database that have not been delivered to the Subscriber, the subscription will be marked deactivated by the Distribution clean up job that runs on the Distributor. The subscription must be reinitialized.

Expired subscription clean up

If a subscription is not synchronized within the publication retention period (default of 336 hours), the subscription will expire and be dropped by the Expired subscription clean up job that runs on the Publisher. The subscription must be recreated and synchronized.

If a push subscription expires, it is completely removed, but pull subscriptions are not. You must clean up pull subscriptions at the Subscriber.
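
If the defaults need adjusting, both retention periods can be changed with the standard replication stored procedures. A minimal sketch, assuming a transactional publication (the publication name and values are illustrative):

-- At the Distributor: raise the maximum distribution retention to 96 hours.
EXEC sp_changedistributiondb
    @database = N'distribution',
    @property = N'max_distretention',
    @value = N'96';

-- At the Publisher, in the publication database: change publication retention.
EXEC sp_changepublication
    @publication = N'MyPublication',   -- hypothetical publication name
    @property = N'retention',
    @value = N'336';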

05 January 2011

Rolling Back TRUNCATE TABLE

TSQL Tuesday #11: Rolling Back TRUNCATE TABLE


“It ain't what people don't know that hurts them it's what they know that ain't so”
-- commonly attributed to Mark Twain

SQL Server has been around for a long time now, and people who learn one version well sometimes have a monumental task when a new version comes out: they need to figure out what changed. The "What's New" documentation usually gives a good idea of the major new features, but sometimes when a more minor feature just changes a bit, it may get overlooked in all the hoopla of learning about the BIG new stuff.

One change that occurred quite a long time ago, and that many people still haven't hooked into, is the fact that a TRUNCATE TABLE operation can be rolled back! Even in my advanced classes, with some of the best DBAs out there, when I start talking about TRUNCATE TABLE, there are frequently one or two people who think it is unlogged and thus can't be rolled back.

Let me state now… THIS IS NOT TRUE.

And in fact, it is very easy to prove that it is not true… but if you don't suspect that it is not true, why would you even test it?

TRUNCATE TABLE is logged, but SQL Server doesn't log every single row as it TRUNCATEs the table, so you might think that this command falls into the category of commands we call 'minimally logged'. But although indeed minimal information is logged for this operation, it is not technically in that category. The official 'minimally logged' operations also cause a bit to be set in a big bit map, so that when you are running in BULK_LOGGED recovery model and make a log backup, SQL Server can capture all the data involved in 'minimally logged' operations into the transaction log. This allows the data to be recovered when restoring from backup.

But for TRUNCATE TABLE, SQL Server only logs the fact that the operation happened, along with information about the pages and extents that were deallocated. So that's minimal information. And by recording which pages and extents were deallocated, there's enough information to roll back, by just REallocating those pages. A log backup only needs the information that the TRUNCATE TABLE occurred. And to restore the TRUNCATE TABLE, the operation is just reapplied. The data involved is not needed during RESTORE (like it would be for a true 'minimally logged' operation like a BULK INSERT).

So what if the transaction in which the TRUNCATE TABLE occurs stays open for a long time? How does SQL Server know not to reuse the pages that belonged to the table? It turns out the pages and/or extents involved are locked with an eXclusive lock, and just like all X locks, they are held until the end of the transaction. And as long as the pages or extents are locked, they can't be deallocated, and certainly cannot be reused.

So here's some code to see for yourself. I'll make a copy of a table from the AdventureWorks database:

USE AdventureWorks
GO

IF EXISTS (SELECT * FROM sys.tables
WHERE name = 'Products'
AND schema_id = 1)
DROP TABLE dbo.Products;
GO

SELECT * INTO dbo.Products
FROM Production.Product;
GO

Now we'll look at the count of rows, and the pages that belong to the table.

SELECT COUNT(*) FROM dbo.Products;
GO
DBCC IND('AdventureWorks', 'Products', -1);
GO

We get a count of 504 rows and the following pages:

File Page
----- -------
1 23027
1 23029
1 23030
1 26992
1 26993
1 33352
1 33353
1 33354
1 33355
1 33356
1 33357
1 42486
1 42487
1 42488
1 42489

Next, start a transaction and truncate the table. Verify that DBCC IND shows no pages belonging to the table and the count is 0. Then, look at the locks.

BEGIN TRAN
TRUNCATE TABLE dbo.Products;
SELECT COUNT(*) FROM dbo.Products;

DBCC IND('AdventureWorks', 'Products', -1);
DBCC EXTENTINFO('AdventureWorks', 'Products', -1);

SELECT resource_type, resource_description,
request_mode FROM sys.dm_tran_locks
WHERE resource_type IN ('EXTENT', 'PAGE')
AND resource_database_id = DB_ID('AdventureWorks');

You should see no rows from DBCC IND, and 0 rows from count(*). But the locks info should return something like:

resource_type resource_description request_mode
------------- -------------------- ------------
EXTENT 1:33352 X
PAGE 1:42486 X
EXTENT 1:42488 X
PAGE 1:42487 X
PAGE 1:42488 X
PAGE 1:42489 X
PAGE 1:23027 X
PAGE 1:23030 X
PAGE 1:23029 X
PAGE 1:26992 X
PAGE 1:26993 X

So the extent and page locks include all the pages that we saw in the DBCC IND output. (Remember, an extent is 8 contiguous pages, starting at the one that describes the extent.)

Only after you ROLLBACK the transaction will the locks be released, and you should see all the rows and pages back in the table again.

ROLLBACK TRAN;
GO
SELECT COUNT(*) FROM dbo.Products;
DBCC IND('AdventureWorks', 'Products', -1);
GO

So now you know!

~Kalen

Posted Tuesday, October 12, 2010 2:02 PM by Kalen Delaney