
SQL Server – Log Shipping to the Cloud Made Easy


Log Shipping has always been one of my favourite SQL Server features. Compared to the alternative DR solutions, it’s simple to set up, it’s in Standard Edition, and it just works! Testament to this is the fact that it has barely changed since it was first introduced back in SQL Server 2000. For these reasons, I often find myself recommending it to our Remote DBA customers.

Customer case study

That said, I recently implemented Log Shipping for a customer who had some special requirements, as summarised below.

  • They wanted to Log Ship to a target instance on a Cloud VM (Azure in this case, but it could have been AWS).
  • They wanted all backups to remain on-premises and for their existing third-party backup solution, which currently backs up the Transaction Log (T-Log), to remain in place and unchanged.

For these reasons, a manual Log Shipping solution was required. Fortunately, this was made easy by utilising some simple scripts and open-source software in a three-step solution. This is what the final solution looked like, and I’ve broken down the steps below:

[Image: diagram of the three-step Log Shipping solution]

Step 1 - Copy the existing T-Log files to the storage container

One of the key requirements was that the existing third-party backup solution remained unchanged, so simply changing the T-Log backups to go to the storage container was not an option. Instead, I needed an extra step to copy them to the cloud from their existing location, which required some fairly simple PowerShell. This example copies the files to an Azure Storage Container; however, it can easily be adapted to utilise an S3 Bucket instead. 

				
# Define the Azure storage account details
$storageAccountName = "[STORAGE ACCOUNT NAME]" 
$containerName = "[CONTAINER NAME]" 
$localDirectory = "D:\BACKUPS\LOGS" #Change this to the location of your T-Log backup files

# Define the Shared Access Signature (SAS) token 
$sasToken = "YOUR SAS TOKEN" # Generate this in the Azure Container 

# Define the AzCopy path 
$azCopyPath = "C:\AzCopy\azcopy.exe" 

# Define the retry count and delay in seconds 
$retryCount = 3 
$retryDelaySeconds = 5 

# Function to sync local directory with Azure storage account using AzCopy 

function Sync-WithAzureStorage { 
    param ( 
       [string]$localDirectory, 
       [string]$storageAccountName, 
       [string]$containerName, 
       [string]$sasToken, 
       [string]$azCopyPath 
    ) 

    # Retry loop
    $retry = 0 
    while ($retry -lt $retryCount) { 
        # Construct the SAS URL for the destination container (optionally add a folder name to the path)
        $destinationSasUrl = "https://$storageAccountName.blob.core.windows.net/$containerName/?$sasToken"

        # Run AzCopy sync command 
        & $azCopyPath sync "$localDirectory" "$destinationSasUrl" --recursive --delete-destination=true --exclude-pattern="EXCLUDE-*" 

        # AzCopy is a native executable, so a failure sets $LASTEXITCODE rather than throwing an exception
        if ($LASTEXITCODE -eq 0) {
            Write-Output "Sync completed successfully."
            break
        }

        $retry++
        Write-Warning "AzCopy exited with code $LASTEXITCODE."
        Write-Warning "Retrying in $retryDelaySeconds seconds..."
        Start-Sleep -Seconds $retryDelaySeconds
    } 
} 

Sync-WithAzureStorage -localDirectory $localDirectory -storageAccountName $storageAccountName -containerName $containerName -sasToken $sasToken -azCopyPath $azCopyPath  
				
			

This script uses AzCopy to copy the T-Log files from a local folder to the storage container. By utilising the delete-destination parameter, I can also delete files from Azure as they get removed on-premises.
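To keep the copies flowing, the script needs to run on a schedule. Since there's already a SQL Server on the box, a SQL Agent job with a PowerShell step is a natural fit. The sketch below is illustrative rather than part of the original solution — the script path, job name, and five-minute interval are all assumptions you'd adjust to taste:

```sql
-- Create a SQL Agent job that runs the sync script every 5 minutes
-- (job name, script path, and interval are hypothetical examples)
EXEC msdb.dbo.sp_add_job @job_name = N'Sync T-Log backups to Azure';

EXEC msdb.dbo.sp_add_jobstep
    @job_name = N'Sync T-Log backups to Azure',
    @step_name = N'Run sync script',
    @subsystem = N'PowerShell',
    @command = N'& "C:\Scripts\Sync-TLogs.ps1"';

EXEC msdb.dbo.sp_add_jobschedule
    @job_name = N'Sync T-Log backups to Azure',
    @name = N'Every 5 minutes',
    @freq_type = 4,            -- daily
    @freq_interval = 1,
    @freq_subday_type = 4,     -- units of minutes
    @freq_subday_interval = 5;

EXEC msdb.dbo.sp_add_jobserver @job_name = N'Sync T-Log backups to Azure';
```

A Windows Scheduled Task works just as well if you'd rather not depend on SQL Agent.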

Step 2 – Restore the T-Logs to the target database

To restore the T-Logs to the target databases, I utilised David Wiseman’s excellent Log Shipping tool, which I converted to a Windows Service. I configured this to connect to the storage container (again, this could be S3, if required) and restore the logs to the target databases. I didn’t need to worry about tracking LSN numbers or any other nonsense – the tool handles it all!

Step 3 - Monitor

All that remained was to set up some simple monitoring to alert our team at Koderly in the event of an error. The simplest way of achieving this was to write a Stored Procedure that checks the last restored date/time on the target databases.

				
CREATE PROCEDURE [dbo].[spReportLogShippingLatency]
(
    @ErrorOnLatencyMinutes SMALLINT = NULL,
    @Databases VARCHAR(500) = NULL,
    @Debug BIT = 0
)
AS
BEGIN

    SET NOCOUNT ON;

    DECLARE @ErrorString VARCHAR(1000) = '';

    SET @ErrorOnLatencyMinutes = ISNULL(@ErrorOnLatencyMinutes, 0);
    SET @Databases = ISNULL(@Databases, 'USER_DATABASES');

    -- Handle if we're looking for all user databases
    IF @Databases = 'USER_DATABASES'
    BEGIN

        -- We're including all user databases so build up a comma-separated string of all database names
        SET @Databases = '';

        SELECT @Databases = @Databases + [D].[name] + ','
        FROM [sys].[databases] AS [D]
        WHERE [D].[name] NOT IN ('master', 'model', 'tempdb', 'msdb') -- exclude system dbs
        AND [D].[is_distributor] = 0;

        -- Remove the trailing comma
        SET @Databases = LEFT(@Databases, LEN(@Databases) - 1);
    END;

    IF @Debug = 1 SELECT @Databases AS [Databases];

    -- Populate a temp table with all requested databases that are in a 'restoring' state
    SELECT [D].[name] AS [DatabaseName],
           [D].[state],
           [D].[state_desc],
           [D].[is_in_standby],
           [t].[backup_finish_date],
           [t].[restore_date],
           DATEDIFF(mi, [t].[restore_date], GETDATE()) AS [MinsSinceLastRestore],
           DATEDIFF(mi, [t].[backup_finish_date], GETDATE()) AS [TotalTimeBehindMins],
           [t].[physical_device_name] AS [LastLog]
    INTO [#tmp]
    FROM [sys].[databases] AS [D]
        OUTER APPLY
    (
        SELECT TOP (1)
               [rsh].[destination_database_name],
               [bs].[backup_finish_date],
               [rsh].[restore_date],
               [bmf].[physical_device_name],
               [bs].[is_force_offline]
        FROM [msdb].[dbo].[restorehistory] AS [rsh]
            INNER JOIN [msdb].[dbo].[backupset] AS [bs]
                ON [rsh].[backup_set_id] = [bs].[backup_set_id]
            INNER JOIN [msdb].[dbo].[restorefile] AS [rf]
                ON [rsh].[restore_history_id] = [rf].[restore_history_id]
            INNER JOIN [msdb].[dbo].[backupmediafamily] AS [bmf]
                ON [bmf].[media_set_id] = [bs].[media_set_id]
        WHERE [rsh].[restore_type] = 'L'
              AND [D].[name] = [rsh].[destination_database_name]
        ORDER BY [rsh].[restore_history_id] DESC
    ) AS [t]
    WHERE (
              [D].[state] = 1 -- restoring
              OR [D].[is_in_standby] = 1
          )
    AND [D].[name] IN (SELECT LTRIM(RTRIM([value])) FROM STRING_SPLIT(@Databases, ','))
    ORDER BY [t].[backup_finish_date];

    -- Main results set
    SELECT [T].[DatabaseName],
           [T].[state],
           [T].[state_desc],
           [T].[is_in_standby],
           [T].[backup_finish_date],
           [T].[restore_date],
           [T].[MinsSinceLastRestore],
           [T].[TotalTimeBehindMins],
           [T].[LastLog]
    FROM [#tmp] AS [T]
    ORDER BY [T].[DatabaseName];

    -- If any of our requested databases were not found to be in the correct state (e.g. 'restoring') or didn't exist, throw an error
    SELECT @ErrorString = @ErrorString + [SD].[value] + CHAR(13) + CHAR(10)
    FROM STRING_SPLIT(@Databases, ',') AS [SD]
    WHERE NOT EXISTS
        (
            SELECT [T].[DatabaseName]
            FROM [#tmp] AS [T]
            WHERE [T].[DatabaseName] = LTRIM(RTRIM([SD].[value]))
        );

    IF @Debug = 1 SELECT @ErrorString AS [DatabasesInErrorState];

    IF @ErrorString <> ''
    BEGIN
        SET @ErrorString = 'The following databases were either not found or not in a restoring state as expected. There may be a problem with Log Shipping: ' + @ErrorString;
        RAISERROR(@ErrorString, 16, 1);
        RETURN;
    END;

    -- Check if our latency threshold has been exceeded
    IF @ErrorOnLatencyMinutes > 0
    BEGIN
        IF EXISTS
        (
            SELECT *
            FROM [#tmp] AS [T]
            WHERE [T].[TotalTimeBehindMins] > @ErrorOnLatencyMinutes
        )
        BEGIN

            SELECT @ErrorString
                = @ErrorString + [T].[DatabaseName] + ' (' + CAST([T].[MinsSinceLastRestore] AS VARCHAR(20)) + ' mins)'
                  + CHAR(13) + CHAR(10)
            FROM [#tmp] AS [T]
            WHERE [T].[TotalTimeBehindMins] > @ErrorOnLatencyMinutes;

            SET @ErrorString = 'Log Shipping latency threshold exceeded for database(s): ' + @ErrorString;

            RAISERROR(@ErrorString, 16, 1);
        END;
    END;

    DROP TABLE [#tmp];

END;
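Calling the procedure is then straightforward — schedule it from a SQL Agent job and let the job's failure notification do the alerting. A quick illustrative example (the database names and the 30-minute threshold are made up for demonstration):

```sql
-- Raise an error if either database is more than 30 minutes behind
EXEC [dbo].[spReportLogShippingLatency]
    @ErrorOnLatencyMinutes = 30,
    @Databases = 'SalesDB,FinanceDB'; -- or omit to default to 'USER_DATABASES'
```

Because the procedure raises an error (severity 16) when a database is missing, not restoring, or over the latency threshold, any job or monitoring framework that reacts to failed executions can surface the alert.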

				
			

There we have it! A complete Log Shipping solution to the cloud that didn’t require any changes to the existing backup strategy.

I hope you found this useful!