
01 February 2011

Reposting Cluster - 2008 - Part2

Installing SQL Server 2008 on a Windows Server 2008 Cluster Part 2

By: Edwin Sarmiento -- 3/2/2009


Problem
In a previous tip on the SQL Server 2008 Installation Process, we saw how different the SQL Server 2008 installation is from previous versions. Now we have another challenge to face: installing SQL Server 2008 on a Windows Server 2008 cluster. Windows Server 2008 differs from its predecessors in many ways, and one of them is the clustering feature. How do I go about building a clustered SQL Server 2008 running on Windows Server 2008?

Solution

To continue this series on Installing SQL Server 2008 on a Windows Server 2008 Cluster, we will look at building our Windows Server 2008 cluster in preparation for SQL Server 2008. In Part 1, we completed the installation of the Application Server role on both of the servers that we will be using in our cluster. This tip walks you through installing the Failover Clustering feature, validating the servers that will be part of the cluster, and creating the cluster.

Adding the Failover Cluster Feature

Windows Server 2008 calls these "features": software programs that can support or augment the functionality of server roles. Since we've already installed the Application Server role on our servers, let's define this feature from that perspective: failover clustering augments the application server role by making it highly available. Unlike in Windows Server 2003, it is disabled by default, so we need to add it on both of the servers that will form part of our cluster.

To add the Failover Clustering feature:

  1. Open the Server Manager console and select Features.
  2. Click the Add Features link. This will run the Add Features Wizard.

  3. In the Select Features dialog box, select the Failover Clustering checkbox and click Next.

  4. In the Confirm Installation Selections dialog box, click Install to confirm the selection and proceed to do the installation of the Failover Clustering feature.

  5. In the Installation Results dialog box, click Close. This completes the installation of the Failover Clustering feature on the first node.

That's how simple and easy it is to add the Failover Clustering feature in Windows Server 2008. You will have to do this on both nodes to complete the process. Once the Failover Clustering feature is installed on both nodes, we can proceed to validate whether the servers are ready for clustering.

Running the Windows Server 2008 Validate Cluster Configuration

Unlike previous versions of Windows, where Microsoft maintained a hardware compatibility list (HCL) from which we had to find and select components tested to support clustering, this wizard acts as the "seal" that tells you whether or not the hardware you are using is supported. In fact, Microsoft has partnered with hardware vendors to create the Failover Cluster Configuration Program, making the acquisition of hardware for Windows Server 2008 failover clustering very easy and simple. Basically, your hardware will be supported for clustering if it meets two requirements: the server has a "Certified for Windows Server 2008" logo, and it passes this wizard.

One word of caution: do not skip past any error message that this wizard generates in the final report. Doing so simply means that your configuration going forward will be unsupported. You only need to run this wizard on one of the nodes.

To run the Validate Cluster Configuration Wizard:

  1. Open the Failover Cluster Management console.
  2. Under the Management section, click the Validate a Configuration link. This will run the Validate a Configuration Wizard.

  3. In the Select Servers or a Cluster dialog box, enter the hostnames of the nodes that you want to add as members of your cluster and click Next.

  4. In the Testing Options dialog box, click Next to run all the necessary tests to validate whether or not the nodes are ready for clustering. The first time you run the wizard, you must run all the tests for validation. For succeeding runs, especially when adding hardware such as disk subsystems or network cards to your cluster nodes, you can selectively choose which tests to run, as long as you have initially validated your hardware by running all tests.

  5. In the Confirmation dialog box, click Next. This will run all the necessary validation tests.

  6. In the Summary dialog box, verify that all the tests in the report completed successfully.

If you have reached this part of the process, the wizard will tell you whether or not you can proceed to the next step of creating your cluster. As I've mentioned earlier, do not attempt to go any further if this report returned any error messages. I have seen installations where the shared disk displays an error in the validation report, prompting me to reconfigure the shared disk. This could mean removing the disk subsystem from both nodes, then creating new disks and presenting them to the nodes as described in Part 1 of this series. When in doubt, it is best to work with your storage engineers or system administrators, as different vendors may implement their disk subsystems differently.

I've also seen issues pertaining to IPv6. This is a fairly common issue that can easily be resolved. The error message in the cluster validation report looks similar to the one displayed below:

Verifying that there are no duplicate IP addresses between any pair
of nodes. Found duplicate IP address fe80::100:7f:fffe%13 on node
node1.domain.local adapter Local Area Connection* X and node
node2.domain.local adapter Local Area Connection* X.

This blog post outlines the steps for resolving this issue. In a few cases, however, I needed to disable the Teredo Tunneling Pseudo-Interface adapter in Device Manager before the Validate Cluster Configuration wizard generated a successful summary report. The bottom line is simply to make sure that the report returns a successful validation before creating the cluster.

Creating the Windows Server 2008 Cluster

You've finally reached this step in the entire process. This means you are now ready to create your Windows Server 2008 cluster. It's as easy as running the Create Cluster Wizard on either of the nodes. Make sure that you have your virtual hostname and IP address ready before proceeding.

To run the Create a Cluster Wizard:

  1. Open the Failover Cluster Management console.
  2. Under the Management section, click the Create a Cluster link. This will run the Create Cluster Wizard.

  3. In the Select Servers dialog box, enter the hostnames of the nodes that you want to add as members of your cluster and click Next.

  4. In the Access Point for Administering the Cluster dialog box, enter the virtual hostname and IP address that you will use to administer the cluster. Click Next.

  5. In the Confirmation dialog box, click Next. This will configure Failover Clustering on both nodes of the cluster and add DNS and Active Directory entries for the cluster hostname.

  6. In the Summary dialog box, verify that all the steps in the report completed successfully.

Congratulations! You now have a working Windows Server 2008 cluster. Notice how easy it was to do all of this, with fewer steps and less configuration than its predecessors required. You can now verify that your cluster is working: a simple test is to run a continuous PING against the virtual hostname or IP address you assigned to your cluster, then reboot one of the nodes and watch how the PING test responds. At this point, you are ready to install SQL Server 2008.

OPTIONAL: Configuring your cluster quorum

This section is sometimes necessary, especially when Windows Server 2008 selects a different disk subsystem for the quorum than the one you originally intended. Notice that the Create a Cluster wizard offered no option to select which disk subsystem to use as the quorum disk (now called the "witness" disk). By default, the Create a Cluster wizard uses the first available disk as the witness disk. I have seen cases where the planned witness disk was sized at 1 GB while the other shared disks were 200 GB each; the wizard then selected one of the 200 GB disks as the witness disk, requiring the witness to be moved back to the originally allocated disk. To validate, check the Storage node under the Failover Cluster Management console.

To configure the quorum in a failover cluster:

  1. Open the Failover Cluster Management console.
  2. Select the name of the cluster you have just created. Right-click the cluster, select More Actions, and click Configure Cluster Quorum Settings... This will open the Configure Cluster Quorum Wizard.

  3. In the Select Quorum Configuration dialog box, select the Node and Disk Majority (recommended for your current number of nodes) option. The options presented depend on how your cluster is configured; this selection is appropriate for a 2-node cluster.

  4. In the Configure Storage Witness dialog box, validate and select the disk that you want your cluster to use as the quorum/witness disk. Click Next.

  5. In the Confirmation dialog box, verify that the disk configuration for the quorum/witness disk is correct. Click Next.

  6. In the Summary dialog box, verify that all the configuration steps completed successfully.

For more details on Configuring the Quorum in a Failover Cluster in Windows Server 2008, check out this Microsoft TechNet article.

Next Steps

  • Download and install an Evaluation copy of Windows Server 2008 for this tip
  • Review Part 1, Part3 and Part4 of this series on MSSQLTips
  • Start working on building your test environment in preparation for building a SQL Server 2008 cluster on Windows Server 2008

Reposting Cluster - 2008 - Part1

Thanks to MSSQL TIPS

Installing SQL Server 2008 on a Windows Server 2008 Cluster Part 1

By: Edwin Sarmiento -- 2/13/2009


Problem
In a previous tip on the SQL Server 2008 Installation Process, we saw how different the SQL Server 2008 installation is from previous versions. Now we have another challenge to face: installing SQL Server 2008 on a Windows Server 2008 cluster. Windows Server 2008 differs from its predecessors in many ways, and one of them is the clustering feature. How do I go about building a clustered SQL Server 2008 running on Windows Server 2008?

Solution
There have been a lot of changes in clustering between Windows Server 2003 and Windows Server 2008. It took quite a lot of effort to build a cluster in Windows Server 2003, from making sure that the server hardware for all nodes was cluster-compatible to creating the resource groups. Microsoft has redefined clustering with Windows Server 2008, making it simpler and easier to implement. Now that both SQL Server 2008 and Windows Server 2008 have been out in the market for quite some time, we need to be ready to set up and deploy a clustered environment running both. Installing SQL Server on a stand-alone server or a member server in the domain is pretty straightforward; dealing with clustering is a totally different story. The goal of this series of tips is to help DBAs who may be tasked with installing SQL Server on a Windows Server 2008 cluster.

Prepare the cluster nodes

I will be working with a 2-node cluster throughout this series; you can extend it by adding nodes later on. You can perform these steps on physical hardware or in a virtual environment. I opted to do this in a virtual environment running VMware.

To start, download and install a copy of the evaluation version of Windows Server 2008 Enterprise Edition. This is pretty straightforward and does not even require a product key or activation. The evaluation period runs for 60 days and can be extended up to 240 days, so you have more than enough time to play around with it. Just make sure that you select at least the Enterprise Edition during the installation process and have at least 12 GB of disk space for your local disks; this ensures enough space for both Windows Server 2008 and the SQL Server 2008 binaries.

A key thing to note here is that you should already have a domain to which these servers can be joined, and that both servers have at least two network cards: one for the public network and the other for the heartbeat. Although you can run a cluster with a single network card, it isn't recommended at all. I'll lay out the details of the network configuration as we go along.

After the installation, my recommendation is to immediately install .NET Framework 3.5 with Service Pack 1 and Windows Installer 4.5 (the one for Windows Server 2008 x86 is named Windows6.0-KB942288-v2-x86.msu). These two are prerequisites for SQL Server 2008 and will speed up the installation process later on.

Carve out your shared disks

We had a lot of challenges in Windows Server 2003 when it came to the shared disks used by our clusters. For one, the 2 TB limit imposed by the master boot record (MBR) has been overcome by GUID Partition Table (GPT) support in Windows Server 2008, which allows partitions of up to 16 exabytes. Another change concerns directly attached SCSI storage: it is no longer supported for failover clustering in Windows Server 2008. The only supported options are Serial Attached SCSI (SAS), Fibre Channel, and iSCSI.

For this example, we will be using iSCSI storage, with an iSCSI software initiator connecting to a software-based target. I am using StarWind's iSCSI SAN to emulate a disk image that my cluster will use as shared disks. In preparation for running SQL Server 2008 on this cluster, I recommend creating at least four disks: one for the quorum disk, one for MSDTC, one for the SQL Server system databases, and one for the user databases. Your quorum and MSDTC disks can be as small as 1 GB, although Microsoft TechNet specifies a 512 MB minimum for the quorum disk. If you decide to use iSCSI as your shared storage in a production environment, use a dedicated network to isolate the iSCSI traffic from all other network traffic. This also means having a dedicated network card on your cluster nodes for access to the iSCSI storage.

Present your shared disks to the cluster nodes

Windows Server 2008 comes with the iSCSI Initiator software that enables a Windows host to connect to an external iSCSI storage array through its network adapters. This differs from previous versions of Microsoft Windows, where you had to download and install this software before connecting to iSCSI storage. To launch the tool, go to Administrative Tools and select iSCSI Initiator.

To connect to the iSCSI target:

  1. In the iSCSI Initiator Properties page, click on the Discovery tab.

  2. Under the Target Portals section, click on the Add Portal button.
  3. In the Add Target Portal dialog, enter the DNS name or IP address of your iSCSI target and click OK. If you are hosting the target on another Windows host as an image file, make sure that Windows Firewall on that host is configured to allow inbound traffic on port 3260. Otherwise, the default configuration should be fine.

  4. Back in the iSCSI Initiator Properties page, click on the Targets tab. You should see a list of the iSCSI targets that we defined earlier.

  5. Select one of the targets and click on the Log on button.
  6. In the Log On to Target dialog, select the Automatically restore this connection when the computer starts checkbox. Click OK.

  7. Once you are done, you should see the status of the target change to Connected. Repeat this process for all the target disks we initially created, on both of the servers that will become nodes of your cluster.

Once the targets have been defined using the iSCSI Initiator tool, you can bring the disks online, initialize them, and create new volumes using the Server Manager console. I won't go into much detail on this process, as it is similar to how we used to do it in Windows Server 2003, except for the new management console. After the disks have been initialized and the volumes created, log in to the other server and verify that you can see the disks there as well. You can rescan the disks if they have not yet appeared.

Adding Windows Server 2008 Application Server Role

Since we will be installing SQL Server 2008 later on, we have to add the Application Server role on both of the nodes. A server role is a program that allows Windows Server 2008 to perform a specific function for multiple clients within a network. To add the Application Server role:

  1. Open the Server Manager console and select Roles.
  2. Click the Add Roles link. This will run the Add Roles Wizard.

  3. In the Select Server Roles dialog box, select the Application Server checkbox. This will prompt you to add the features required for the Application Server role. Click Next.

  4. In the Application Server dialog box, click Next.

  5. In the Select Role Services dialog box, select the Incoming Remote Transactions and Outgoing Remote Transactions checkboxes. These options will be used by MSDTC. Click Next.

  6. In the Confirm Installation Selections dialog box, click Install. This will go through the process of installing the Application Server role.

  7. In the Installation Results dialog box, click Close. This completes the installation of the Application Server role on the first node. You will have to repeat this process on the other server.

We have now finished preparing the cluster nodes. In the next tip in this series, we will go through the process of installing the Failover Clustering feature, validating the nodes that will become part of the cluster, and creating the cluster itself. And that is just on the Windows side: only once we have a working Windows Server 2008 cluster can we proceed to install SQL Server 2008.

Next Steps

  • Download and install an Evaluation copy of Windows Server 2008 for this tip
  • Start working on building your test environment in preparation for building a SQL Server 2008 cluster on Windows Server 2008
  • Read Part2, Part3 and Part4

31 January 2011

SQL SERVER NUMA

NUMA: Non-Uniform Memory Access
============================

Each group of processors has its own memory and possibly its own I/O channels. However, each CPU can access memory associated with the other groups in a coherent way. Each group is called a NUMA node.

The number of CPUs within a NUMA node depends on the hardware vendor. It is faster to access local memory than the memory associated with other NUMA nodes. This is the reason for the name, non-uniform memory access architecture.

On NUMA hardware, some regions of memory are on physically different buses from other regions. Because NUMA uses local and foreign memory, it will take longer to access some regions of memory than others. Local memory and foreign memory are typically used in reference to a currently running thread. Local memory is the memory that is on the same node as the CPU currently running the thread. Any memory that does not belong to the node on which the thread is currently running is foreign. Foreign memory is also known as remote memory. The ratio of the cost to access foreign memory over that for local memory is called the NUMA ratio. If the NUMA ratio is 1, it is symmetric multiprocessing (SMP). The greater the ratio, the more it costs to access the memory of other nodes. Windows applications that are not NUMA aware (including SQL Server 2000 SP3 and earlier) sometimes perform poorly on NUMA hardware.

The main benefit of NUMA is scalability. The NUMA architecture was designed to surpass the scalability limits of the SMP architecture. With SMP, all memory access is posted to the same shared memory bus. This works fine for a relatively small number of CPUs, but not when you have dozens, even hundreds, of CPUs competing for access to the shared memory bus. NUMA alleviates these bottlenecks by limiting the number of CPUs on any one memory bus and connecting the various nodes by means of a high speed interconnection.

[Figure: NUMA node with 4 processors]
Hardware-NUMA vs. Soft-NUMA

NUMA can match memory with CPUs through specialized hardware (hardware NUMA) or by configuring SQL Server memory (soft-NUMA). During startup, SQL Server configures itself based on the underlying operating system and hardware configuration, or on the soft-NUMA setting. For both hardware and soft-NUMA, when SQL Server starts in a NUMA configuration, the SQL Server log records a multinode configuration message for each node, along with the CPU mask.

Hardware NUMA

Computers with hardware NUMA have more than one system bus, each serving a small set of processors. Each group of processors has its own memory and possibly its own I/O channels, but each CPU can access memory associated with other groups in a coherent way. Each group is called a NUMA node. The number of CPUs within a NUMA node depends on the hardware vendor. Your hardware manufacturer can tell you if your computer supports hardware NUMA.

If you have hardware NUMA, it may be configured to use interleaved memory instead of NUMA. In that case, Windows and therefore SQL Server will not recognize it as NUMA. Run the following query to find the number of memory nodes available to SQL Server:

SELECT DISTINCT memory_node_id
FROM sys.dm_os_memory_clerks

If SQL Server returns only a single memory node (node 0), either you do not have hardware NUMA, or the hardware is configured as interleaved (non-NUMA). If you think your hardware NUMA is configured incorrectly, contact your hardware vendor to enable NUMA. SQL Server ignores NUMA configuration when the hardware NUMA has four or fewer CPUs and at least one node has only one CPU.
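To see how SQL Server has grouped schedulers into nodes (whether from hardware NUMA or soft-NUMA), you can also query the sys.dm_os_nodes DMV. A minimal sketch; the exact column set varies slightly across versions:

-- List the CPU nodes SQL Server has configured, with their
-- associated memory node and CPU affinity bitmask.
-- The dedicated admin connection node is reported separately
-- (its node_state_desc contains 'DAC') and can be ignored here.
SELECT node_id,
       node_state_desc,
       memory_node_id,
       cpu_affinity_mask,
       online_scheduler_count
FROM sys.dm_os_nodes;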

Soft-NUMA

SQL Server allows you to group CPUs into nodes referred to as soft-NUMA. You usually configure soft-NUMA when you have many CPUs and do not have hardware NUMA, but you can also use soft-NUMA to subdivide hardware NUMA nodes into smaller groups. Only the SQL Server scheduler and SQL Server Network Interface (SNI) are soft-NUMA aware. Memory nodes are created based on hardware NUMA and therefore not impacted by soft-NUMA. So, for example, if you have an SMP computer with eight CPUs and you create four soft-NUMA nodes with two CPUs each, you will only have one memory node serving all four NUMA nodes. Soft-NUMA does not provide memory to CPU affinity.

The benefits of soft-NUMA include reducing I/O and lazy writer bottlenecks on computers with many CPUs and no hardware NUMA. There is a single I/O thread and a single lazy writer thread for each NUMA node. Depending on the usage of the database, these single threads may be a significant performance bottleneck. Configuring four soft-NUMA nodes provides four I/O threads and four lazy writer threads, which could increase performance.

You cannot create a soft-NUMA node that includes CPUs from different hardware NUMA nodes. For example, if your hardware has eight CPUs (0..7) and you have two hardware NUMA nodes (0-3 and 4-7), you can create soft-NUMA nodes by combining CPU(0,1) and CPU(2,3). You cannot create a soft-NUMA node using CPU(1,5), but you can use CPU affinity to affinitize an instance of SQL Server to CPUs from different hardware NUMA nodes. So in the previous example, if SQL Server uses CPUs 0-3, you will have one I/O thread and one lazy writer thread. If, in the previous example, SQL Server uses CPUs 1, 2, 5, and 6, it will access two NUMA nodes and have two I/O threads and two lazy writer threads.

HOW TO SET NUMA ON SQL SERVER

Consider the following example. A computer with eight CPUs does not have hardware NUMA. Three soft-NUMA nodes are configured. Database Engine instance A is configured to use CPUs 1 through 4. A second instance of the Database Engine, instance B, is installed and configured to use CPUs 5 through 8. The example can be represented visually as:

CPUs         1    2    3    4    5    6    7    8
Soft-NUMA    <---N0---><---N1---><-------N2------->
SQL Server   <-----Instance A----><--Instance B--->

Instance A, which experiences significant I/O, now has two I/O threads and two lazy writer threads, while instance B, which performs processor-intensive operations, has only one I/O thread and one lazy writer thread. Differing amounts of memory can be assigned to the instances, but unlike hardware NUMA, they both receive memory from the same operating system memory block and there is no memory-to-processor affinity.

Note

The soft-NUMA registry keys are not copied when you upgrade an instance of SQL Server.

Set the CPU affinity mask

  1. Run the following statement on instance A to configure it to use CPUs 1, 2, 3, and 4 by setting the CPU affinity mask:

    ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU=1 TO 4
  2. Run the following statement on instance B to configure it to use CPUs 5, 6, 7, and 8 by setting the CPU affinity mask:

    ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU=5 TO 8
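To confirm that the affinity settings took effect, you can check which CPUs the online schedulers are bound to, as in this minimal sketch (sys.dm_os_schedulers is a standard DMV; run it on each instance):

-- Show each online user scheduler, the CPU it is bound to,
-- and the node it belongs to.
SELECT scheduler_id,
       cpu_id,
       parent_node_id,
       status,
       is_online
FROM sys.dm_os_schedulers
WHERE status = 'VISIBLE ONLINE';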

Map soft-NUMA nodes to CPUs

  • Using the Registry Editor program (regedit.exe), add the following registry keys to map soft-NUMA node 0 to CPUs 1 and 2, soft-NUMA node 1 to CPUs 3 and 4, and soft-NUMA node 2 to CPUs 5, 6, 7, and 8.

    SQL Server 2005 (keys under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\90\NodeConfiguration):

      Node0: DWORD value CPUMask = 0x03
      Node1: DWORD value CPUMask = 0x0c
      Node2: DWORD value CPUMask = 0xf0

    SQL Server 2008 (keys under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\100\NodeConfiguration):

      Node0: DWORD value CPUMask = 0x03
      Node1: DWORD value CPUMask = 0x0c
      Node2: DWORD value CPUMask = 0xf0

    SQL Server 2008 R2 (keys under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\100\NodeConfiguration):

      Node0: DWORD value CPUMask = 0x03, DWORD value Group = 0
      Node1: DWORD value CPUMask = 0x0c, DWORD value Group = 0
      Node2: DWORD value CPUMask = 0xf0, DWORD value Group = 0

    Each CPUMask value is a hexadecimal bitmask over the CPUs: 0x03 selects the first two CPUs, 0x0c the next two, and 0xf0 the last four.

27 January 2011

Types of Restore - 2008R2

Thanks to :- http://msdn.microsoft.com/en-us/library/ms191253.aspx

Restore and Recovery Overview (SQL Server)

SQL Server 2008 R2

SQL Server supports restoring data on the following levels:

  • The database (a complete database restore)

    The whole database is restored and recovered, and the database is offline for the duration of the restore and recovery operations.

  • The data file (a file restore)

    A data file or a set of files is restored and recovered. During a file restore, the filegroups that contain the files are automatically offline for the duration of the restore. Any attempt to access an offline filegroup causes an error.

    Note

    You cannot back up or restore individual tables.

  • The data page (a page restore)

    Under the full recovery model or bulk-logged recovery model, you can restore individual data pages. Page restores can be performed on any database, regardless of the number of filegroups.

Note

In SQL Server 2008, you can restore a database backup that was created by using SQL Server 2000, SQL Server 2005, or SQL Server 2008. However, backups of master, model and msdb that were created by using SQL Server 2000 or SQL Server 2005 cannot be restored by SQL Server 2008. Also, SQL Server 2008 backups cannot be restored by any earlier version of SQL Server.

SQL Server backup and restore work across all supported operating systems, whether they are 64-bit or 32-bit systems. For information about the supported operating systems, see Hardware and Software Requirements for Installing SQL Server 2008 R2.

Restore Scenarios

A restore scenario in SQL Server is the process of restoring data from one or more backups and then recovering the database. The supported restore scenarios depend on the recovery model of the database and the edition of SQL Server.

The following outlines the restore scenarios that are supported under each recovery model.

Complete database restore

    Under the simple recovery model: This is the basic restore strategy. A complete database restore might involve simply restoring and recovering a full database backup. Alternatively, it might involve restoring a full database backup followed by restoring and recovering a differential backup. For more information, see Performing a Complete Database Restore (Simple Recovery Model).

    Under the full/bulk-logged recovery models: This is the basic restore strategy. A complete database restore involves restoring a full database backup and, optionally, a differential backup (if any), followed by restoring all subsequent log backups in sequence. The complete database restore is finished by restoring and recovering the last log backup (RESTORE WITH RECOVERY). For more information, see Performing a Complete Database Restore (Full Recovery Model).

File restore *

    Under the simple recovery model: Restores one or more damaged read-only files, without restoring the entire database. File restore is available only if the database has at least one read-only filegroup.

    Under the full/bulk-logged recovery models: Restores one or more files, without restoring the entire database. File restore can be performed while the database is offline or, for some editions of SQL Server 2005 and later versions, while the database remains online. During a file restore, the filegroups that contain the files that are being restored are always offline.

Page restore

    Under the simple recovery model: Not applicable.

    Under the full/bulk-logged recovery models: Restores one or more damaged pages. Page restore can be performed while the database is offline or, for some editions of SQL Server 2005 and later versions, while the database remains online. During a page restore, the pages that are being restored are always offline. An unbroken chain of log backups must be available, up to the current log file, and they must all be applied to bring the page up to date with the current log file. For more information, see Performing Page Restores.

Piecemeal restore *

    Under the simple recovery model: Restore and recover the database in stages at the filegroup level, starting with the primary and all read/write secondary filegroups.

    Under the full/bulk-logged recovery models: Restore and recover the database in stages at the filegroup level, starting with the primary filegroup.

* Online restore is supported only in SQL Server 2005 Enterprise Edition and later versions.

Regardless of how data is restored, before a database can be recovered, the SQL Server Database Engine guarantees that the whole database is logically consistent. For example, if you restore a file, you cannot recover it and bring it online until it has been rolled far enough forward to be consistent with the database.
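To make the complete database restore scenario concrete, here is a minimal sketch under the full recovery model; the database name and backup file paths are hypothetical:

-- Restore the full backup, leaving the database unrecovered so
-- that log backups can still be applied.
RESTORE DATABASE Sales
FROM DISK = N'D:\Backups\Sales_full.bak'
WITH NORECOVERY;

-- Apply each subsequent log backup in sequence.
RESTORE LOG Sales
FROM DISK = N'D:\Backups\Sales_log1.trn'
WITH NORECOVERY;

-- Restore the last log backup and recover the database.
RESTORE LOG Sales
FROM DISK = N'D:\Backups\Sales_log2.trn'
WITH RECOVERY;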

Advantages of a File or Page Restore

Restoring and recovering files or pages, instead of the whole database, provides the following advantages:

  • Restoring less data reduces the time required to copy and recover it.

  • On SQL Server 2005 Enterprise Edition and later versions, restoring files or pages might allow other data in the database to remain online during the restore operation.

Recovery Models and Supported Restore Operations

The restore operations that are available for a database depend on its recovery model. The following table summarizes whether and to what extent each of the recovery models supports a given restore scenario.

Data recovery

    Full recovery model: Complete recovery (if the log is available).
    Bulk-logged recovery model: Some data-loss exposure.
    Simple recovery model: Any data since the last full or differential backup is lost.

Point-in-time restore

    Full recovery model: Any time covered by the log backups.
    Bulk-logged recovery model: Disallowed if the log backup contains any bulk-logged changes.
    Simple recovery model: Not supported.

File restore *

    Full recovery model: Full support.
    Bulk-logged recovery model: Sometimes.**
    Simple recovery model: Available only for read-only secondary files.

Page restore *

    Full recovery model: Full support.
    Bulk-logged recovery model: Sometimes.**
    Simple recovery model: None.

Piecemeal (filegroup-level) restore *

    Full recovery model: Full support.
    Bulk-logged recovery model: Sometimes.**
    Simple recovery model: Available only for read-only secondary files.

* Available only in the SQL Server 2005 Enterprise Edition and later versions.

** For the required conditions, see Restore Restrictions Under the Simple Recovery Model.
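As an illustration of the page restore scenario, the sketch below repairs a single damaged page under the full recovery model. The database name, page ID, and backup paths are hypothetical; in practice you would take the file:page ID from msdb.dbo.suspect_pages or from error 824 messages in the error log:

-- Restore only the damaged page (file 1, page 57) from the full backup.
RESTORE DATABASE Sales
PAGE = '1:57'
FROM DISK = N'D:\Backups\Sales_full.bak'
WITH NORECOVERY;

-- Apply the unbroken chain of log backups.
RESTORE LOG Sales
FROM DISK = N'D:\Backups\Sales_log1.trn'
WITH NORECOVERY;

-- Back up the tail of the log, apply it, and recover, bringing the
-- restored page up to date with the current log.
BACKUP LOG Sales TO DISK = N'D:\Backups\Sales_tail.trn';
RESTORE LOG Sales
FROM DISK = N'D:\Backups\Sales_tail.trn'
WITH RECOVERY;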

25 January 2011

Reporting Services Catalog Error.

Thanks to :- http://www.sqlcoffee.com/Troubleshooting028.htm


Applies to: Microsoft SQL Server 2008 Release Candidate 0 (RC0).

Error Message.

SQL Server 2008 RC0 Setup shows the following error messages after running a set of rules to determine whether the installation process will be blocked:

  • Rule "Reporting Services Catalog Database File Existence" failed.
  • Rule "Reporting Services Catalog Temporary Database File Existence" failed.
Cause.

The Setup program found the Reporting Services Catalog database file and the Reporting Services Catalog temporary database file in the following path:

C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA



Solution.

Remove the following files from the "C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA" directory:

  • ReportServer.mdf
  • ReportServer_log.LDF
  • ReportServerTempDB.mdf
  • ReportServerTempDB_log.LDF
After deleting the files, click the "Re-run" button on the Installation Rules setup page, and click "Next" after passing all the installation rules.
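If you want to confirm first that these files are orphaned leftovers rather than part of an attached database on a still-running instance, a minimal check is the sketch below; the LIKE pattern is an assumption about the file names:

-- If this returns rows, the ReportServer files belong to attached
-- databases; drop or detach those databases rather than deleting
-- the files from disk.
SELECT DB_NAME(database_id) AS database_name,
       physical_name
FROM sys.master_files
WHERE physical_name LIKE N'%ReportServer%';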


Log Shipping Jobs

Log shipping involves four operations, each handled by a dedicated SQL Server Agent job: the backup job, copy job, restore job, and alert job.

The user controls how frequently log backups are taken, how frequently they are copied to each secondary server, and how frequently they are applied to the secondary database. To reduce the work required to bring a secondary server online, for example after the production system fails, you can copy and restore each transaction log backup soon after it is created. Alternatively, perhaps on a second secondary server, you can delay applying transaction log backups to the secondary database. This delay provides an interval during which you can notice and respond to a failure on the primary, such as accidental deletion of critical data.

Backup Job

A backup job is created on the primary server instance for each primary database. It performs the backup operation, logs history to the local server and the monitor server, and deletes old backup files and history information. By default, this job will run every 15 minutes, but the interval is customizable.

When log shipping is enabled, the SQL Server Agent job category "Log Shipping Backup" is created on the primary server instance.

SQL Server 2008 Enterprise and later versions support backup compression. When creating a log shipping configuration, you can control the backup compression behavior of log backups. For more information, see Backup Compression (SQL Server).
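Outside of the log shipping configuration dialogs, a compressed log backup is simply the WITH COMPRESSION option on the backup statement. A minimal sketch approximating what the backup job does; the database name and share path are hypothetical:

-- Compressed transaction log backup, as taken by the log shipping
-- backup job when backup compression is enabled (Enterprise Edition).
BACKUP LOG Sales
TO DISK = N'\\BackupShare\LogShipping\Sales_log.trn'
WITH COMPRESSION, INIT;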

Copy Job

A copy job is created on each secondary server instance in a log shipping configuration. This job copies the backup files from the primary server to a configurable destination on the secondary server and logs history on the secondary server and the monitor server. The copy job schedule, which is customizable, should approximate the backup schedule.

When log shipping is enabled, the SQL Server Agent job category "Log Shipping Copy" is created on the secondary server instance.

Restore Job

A restore job is created on the secondary server instance for each log shipping configuration. This job restores the copied backup files to the secondary databases. It logs history on the local server and the monitor server, and deletes old files and old history information. The SQL Server Agent job category "Log Shipping Restore" is created on the secondary server instance when log shipping is enabled.

On a given secondary server instance, the restore job can be scheduled as frequently as the copy job, or the restore job can be delayed. Scheduling these jobs with the same frequency keeps the secondary database as closely aligned with the primary database as possible, creating a warm standby database.

In contrast, delaying restore jobs, perhaps by several hours, can be useful in the event of a serious user error, such as a dropped table or inappropriately deleted table row. If the time of the error is known, you can move that secondary database forward to a time soon before the error. Then you can export the lost data and import it back into the primary database.
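A sketch of that recovery path on the delayed secondary, assuming the error happened at 14:30 and the secondary has not yet restored past that point; the database name, paths, and times are hypothetical:

-- Apply log backups from before the error in full ...
RESTORE LOG Sales
FROM DISK = N'\\BackupShare\LogShipping\Sales_log_1400.trn'
WITH NORECOVERY;

-- ... then stop the backup that spans the error just before it
-- happened, and recover the database. Note that recovering removes
-- this secondary from the log shipping restore sequence.
RESTORE LOG Sales
FROM DISK = N'\\BackupShare\LogShipping\Sales_log_1500.trn'
WITH STOPAT = N'2011-01-31T14:29:00', RECOVERY;

The lost data can then be exported from the recovered secondary and imported back into the primary database.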

Alert Job

If a monitor server is used, an alert job is created on the monitor server instance. This alert job is shared by the primary and secondary databases of all log shipping configurations using this monitor server instance. Any change to the alert job (such as rescheduling, disabling, or enabling the job) affects all databases using that monitor server. This job raises alerts (for which you must specify alert numbers) for primary and secondary databases when backup and restore operations have not completed successfully within specified thresholds. You must configure these alerts to have an operator receive notification of the log shipping failure. The SQL Server Agent job category "Log Shipping Alert" is created on the monitor server instance when log shipping is enabled.

If a monitor server is not used, alert jobs are created locally on the primary server instance and each secondary server instance. The alert job on the primary server instance raises errors when backup operations have not completed successfully within a specified threshold. The alert job on the secondary server instance raises errors when local copy and restore operations have not completed successfully within a specified threshold.
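When an alert does fire, the detail behind it is recorded in msdb on the monitor server (or, with no monitor, on the primary and secondary instances). A minimal sketch for reviewing recent failures; the exact column set of this table may vary slightly between versions:

-- Most recent log shipping errors recorded by the monitor tables.
SELECT TOP (20)
       log_time,
       database_name,
       message
FROM msdb.dbo.log_shipping_monitor_error_detail
ORDER BY log_time DESC;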