Moving SQL Server Databases to New Storage with Minimal Downtime

The Challenge

We are in the process of migrating our disk from one SAN to another. We are in the enviable position of having both SANs available to our Microsoft SQL Server 2008 R2 SP1 cluster at the same time. Our SQL Server instance is not virtualized.

I began working on a process to migrate all our databases with as little downtime as possible. I imposed the “as little downtime as possible” requirement on myself to see if it was possible to migrate this data with no downtime whatsoever.  I time-boxed my efforts to about 3 days of intermittent work (when I could steal the time). Technically, our customers would be fine with a little bit of downtime outside of business hours, but I wanted to challenge myself and see what I could come up with.

Spoiler Alert

I’ll tell you at the outset that although the approach I designed works, I didn’t end up using it and fell back to the Standard Method. So why bother with this blog post? I believe the technique has merit, and I haven’t seen an approach like this documented anywhere else. It could prove useful for a DBA tasked with moving data to new storage with as little downtime as possible. If you can live with the side effects described below, and you have the time to invest in understanding and bulletproofing my approach, it could work for you.

Standard Method

The best advice I could find for moving databases to new storage boils down to these steps (sketched in T-SQL after the list):

  • With the database up and running, use ALTER DATABASE with MODIFY FILE to change where SQL Server looks for the data and log files. It will inform you that it will look in the new locations the next time the database is restarted.
  • Take the database offline.
  • Physically move the data and log files.
  • Bring the database back online.
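
Here’s a minimal sketch of those steps, assuming a hypothetical database named MyDB with one data file and one log file; the paths are placeholders for your environment.

USE master;
 
-- Point SQL Server at the new file locations.
-- This takes effect the next time the database comes online.
ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB, FILENAME = 'P:\SQLData\MyDB.mdf');
ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_log, FILENAME = 'O:\SQLLogs\MyDB_log.ldf');
 
-- Downtime starts here.
ALTER DATABASE MyDB SET OFFLINE WITH ROLLBACK IMMEDIATE;
 
-- Physically move MyDB.mdf and MyDB_log.ldf to the new paths (outside SQL Server),
-- then bring the database back online. Downtime ends here.
ALTER DATABASE MyDB SET ONLINE;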

The “take the database offline” step violated my “as little downtime as possible” self-imposed rule.

I saw many recommendations to use sp_detach_db and sp_attach_db to move databases. This is fine advice if you are moving databases to a new server. But if you can present the new LUNs to the existing server, detach and attach is unnecessary.

Theory

Disk Block Replication

There are products on the market that can make a perfect, block-for-block copy from one LUN to another and keep the two perfectly in sync. Using such a product, you could conceivably live-migrate your data and log files to another LUN. Once replication is up and running, you could alter where SQL Server looks for its data and log files, take the database offline and immediately bring it back online, and be done with it. It may be possible to do the same thing with SAN-to-SAN replication. These possibilities only occurred to me as I was writing this blog post, so I didn’t explore them.

Instead, I got to thinking about SQL Server files and file groups. I’ve never done anything with file groups, so I read up.

Moving the Data by Re-indexing to a New File Group

With SQL Server, you can have multiple file groups per database, multiple files per file group, and place each file group on your LUN of choice. While rebuilding an index, you can specify a target file group, essentially migrating the index.

A clustered index in SQL Server is really just “the data”, so if every table has a clustered index and you rebuild every index in the database, clustered indexes included, onto a file group on a separate LUN, then you have essentially moved all the data to the other LUN with zero downtime. This assumes you’re using SQL Server Enterprise Edition (we are), which supports ONLINE mode for index rebuilding. If you don’t have Enterprise Edition, each index is going to have to be offline during the process, and when you get to your clustered index, which is “the data”, you’re going to be in trouble.
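
For example, a single statement like the following (with hypothetical table, index and filegroup names) rebuilds a clustered index online onto the new file group, dragging the table’s data along with it:

-- DROP_EXISTING rebuilds the existing index in one operation; ON [SECONDARY]
-- relocates it; ONLINE keeps the table readable and writable during the
-- rebuild (Enterprise Edition only).
CREATE UNIQUE CLUSTERED INDEX PK_Orders
ON dbo.Orders (OrderID)
WITH (DROP_EXISTING = ON, ONLINE = ON)
ON [SECONDARY];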

There are certain indexes that cannot be rebuilt ONLINE, such as those that contain ntext, varchar(max) or nvarchar(max) columns. Those must be rebuilt OFFLINE, making the data temporarily inaccessible. This could also cause trouble.

In my scripts below, I used a TRY…CATCH method to always try an ONLINE index rebuild first, falling back to an OFFLINE index rebuild if the ONLINE failed.

Once all the indexes are moved, you’d be left with the original “primary” data file, which contains very little data beyond some system tables. You could shrink that file to almost nothing.
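
Something like this, again with a hypothetical database name, shrinks the nearly empty primary data file down to a 50MB target:

USE MyDB;
-- Shrink the primary data file (logical name MyDB here) to roughly 50 MB.
DBCC SHRINKFILE (MyDB, 50);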

“Moving” the Transaction Log File

At this point, if the theory holds, you’ve moved all the data. What about the transaction log file?

  • Create a new log file of equal size to your existing log file on the new LUN.
  • Assuming you’re using either the FULL or BULK LOGGED recovery model, take a transaction log backup.
  • Shrink the old log file with DBCC SHRINKFILE, specifying EMPTYFILE (see the sketch after this list).
  • You’ve effectively moved log operations to the new log file on the new disk.
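
A minimal sketch of that sequence, assuming the same hypothetical MyDB whose original log file has the logical name MyDB_log:

USE master;
 
-- 1. Add a second log file on the new LUN, sized like the original.
ALTER DATABASE MyDB
ADD LOG FILE (NAME = MyDB_log2, FILENAME = 'O:\SQLLogs\MyDB_log2.ldf', SIZE = 4096MB);
 
-- 2. Back up the log so the active portion can move off the old file.
BACKUP LOG MyDB TO DISK = 'B:\Backups\MyDB_log.trn';
 
-- 3. Empty the old log file. If part of it is still active, repeat steps 2 and 3.
USE MyDB;
DBCC SHRINKFILE (MyDB_log, EMPTYFILE);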

What’s Nice About This Approach?

Using this approach, you’re guaranteed a complete defragmentation of all your indexes, packing them neatly onto the new LUN. You might also apply ROW or PAGE compression while you do the migration, and save some space on the other side. Compression is a separate, wonderful topic, as are defragmentation and reorganization of indexes.

Ah, But There’s a Catch

If you’ve followed me to this point, you’re left with the original, albeit tiny, primary data and log files stuck on the original LUN. Deleting the primary data or log file is not possible, and there’s no way to promote the new data and log files to primaries. You have to move the original data and log files over.

To move them, you have to use the aforementioned Standard Method, which involves taking the database offline, moving the files, and bringing the database back online. But the files are now tiny and will transfer in a couple of seconds. We’ve minimized downtime significantly.

After you’ve moved those tiny files, you’re left with each database having two file groups, two data files and two log files.  My OCD doesn’t brook this offense, and thus requires that I “put everything back the way it was”: one file group, one data file and one log file per database.

So as a final step, you could rebuild the indexes again, moving them back into the original data file, and then drop the second data file and file group. I’d likewise go back to using the original log file, now that it’s been moved over.

The Code

I began cobbling together scripts to perform the following operations:

  • Create new file groups and files on the new LUN for each database.
  • Create the new transaction log file for each database.
  • Rebuild all indexes targeting a specified file group.
  • Shrink the old data and log files.
  • Take each database offline, move the tiny remaining files, bring database back online.

Executing the Same Code Once for Each Database

Most experienced SQL developers and DBAs know about the built-in (if undocumented) sp_MSforeachdb sproc. Aaron Bertrand created a nice rewrite of that sproc called sp_foreachdb that I use in these scripts.

I caught one error in Aaron’s code, where he did not account for the ‘distribution’ system database. SQL Server creates a ‘distribution’ database when transactional or merge replication is configured. We use transactional replication, so I needed to adjust the script, and I notified Aaron in the comment section of his blog.

Creating the New Data File Group and File

This script creates a new file group and file where you specify. It makes the new file 10% bigger than the old, which may or may not be necessary. If you want to add compression where there previously was none, you might need significantly less space.

IF NOT EXISTS (SELECT * FROM INFORMATION_SCHEMA.ROUTINES WHERE ROUTINE_NAME =
	'CreateNewFilegroupAndFile' AND ROUTINE_TYPE='PROCEDURE')
  EXEC ('CREATE PROCEDURE dbo.CreateNewFilegroupAndFile AS SELECT ''stub version, to be replaced''')
GO
 
ALTER PROC dbo.CreateNewFilegroupAndFile
	@dbname nvarchar(100),
	@newdatafolder nvarchar(1000) = N'P:\MSSQL10_50.GREEN\MSSQL\DATA\',
	@newFilegroupName nvarchar(15) = N'SECONDARY',
	@reportOnly bit = 0
AS
 
SET NOCOUNT ON
 
IF @dbname <> DB_NAME()
BEGIN
  PRINT 'The stored procedure must exist in the same database as the one you ' +
  'are trying to affect. Otherwise the sproc operates in the context of the ' +
  'database in which the sproc resides.'
  RETURN 1
END
 
IF RIGHT(@newdatafolder, 1) <> '\'
  SET @newdatafolder = @newdatafolder + '\'
 
declare @SQL nvarchar(max)
declare @newlogicalname nvarchar(100)
declare @newphysicalname nvarchar(100)
declare @spaceneeded nvarchar(25);
set @newlogicalname = @dbname + '2'
set @newphysicalname = @dbname + '.ndf'
 
-- If the specified file group already exists, don't create new filegroup or file.
IF (select 1 from sys.filegroups where name = @newFilegroupName) IS NOT NULL
BEGIN
	PRINT 'Filegroup ' + @newFilegroupName + ' already exists.'
END
ELSE
BEGIN
	-- Calculate the space we need in the new file.
	-- Take the existing space used and add 10%. Not sure if this is needed.
	-- I'm adding 10% because I have the space, and want to avoid autogrowth.
	-- It's "just in case", although it's probably not needed.
 
	SELECT @spaceneeded = 
		(select top 1 convert(nvarchar(25), 
			CEILING((FILEPROPERTY(name, 'SpaceUsed')/128.0)*1.1)) + 'MB'
		FROM sys.database_files
		where type_desc = 'ROWS'
		order by file_id)
 
	PRINT 'Space needed ' + @spaceneeded
 
	-- Add the new file group and file.
	-- Filegrowth will be irrelevant as once we have migrated the data, 
	-- we will be moving it all back and deleting the secondary file and file group.
	select @SQL = '
	use ' + @dbname + ';
	ALTER DATABASE ' + @dbname + '
	ADD FILEGROUP ' + @newFilegroupName + ';
 
	ALTER DATABASE ' + @dbname + ' 
	ADD FILE 
	(
		NAME = ' + @newlogicalname + ',
		FILENAME = ''' + @newdatafolder + @newphysicalname + ''',
		SIZE = ' + @spaceneeded + ',
		FILEGROWTH = 256MB
	)
	TO FILEGROUP ' + @newFilegroupName + ';'
 
	PRINT @SQL
	IF @reportOnly = 0
	BEGIN
		exec (@SQL)
	END
END
 
SET NOCOUNT OFF
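
A hypothetical invocation, run from inside the database being migrated (the sproc insists on that), with @reportOnly = 1 to preview the generated statements first:

USE MyDB;
EXEC dbo.CreateNewFilegroupAndFile
	@dbname = N'MyDB',
	@newdatafolder = N'P:\MSSQL10_50.GREEN\MSSQL\DATA\',
	@newFilegroupName = N'SECONDARY',
	@reportOnly = 1;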

Create the New Log File

This script creates a second log file where specified and of the same size as the original.

As I mentioned earlier, once the new log file is in place:

  • Take a log backup
  • DBCC SHRINKFILE on original log file with EMPTYFILE
  • You might need to do this twice (log backup and shrink).

IF NOT EXISTS (SELECT * FROM INFORMATION_SCHEMA.ROUTINES 
 WHERE ROUTINE_NAME = 'CreateSecondLogFile' AND ROUTINE_TYPE='PROCEDURE')
  EXEC ('CREATE PROCEDURE dbo.CreateSecondLogFile AS SELECT ''stub version, to be replaced''')
GO
 
ALTER PROC dbo.CreateSecondLogFile
	@dbname nvarchar(100),
	@newlogfolder nvarchar(1000) = N'O:\MSSQL10_50.GREEN\MSSQL\DATA\',
	@reportOnly bit = 0
AS
 
SET NOCOUNT ON
 
IF @dbname <> DB_NAME()
BEGIN
  PRINT 'The stored procedure must exist in the same database as the one you are trying to affect. ' + 
  'Otherwise the sproc operates in the context of the database in which the sproc resides.'
  RETURN 1
END
 
IF RIGHT(@newlogfolder, 1) <> '\'
  SET @newlogfolder = @newlogfolder + '\'
 
declare @SQL nvarchar(max)
declare @newlogicalname nvarchar(100)
declare @newphysicalname nvarchar(100)
declare @spaceneeded nvarchar(25);
 
set @newlogicalname = @dbname + '_log2'
set @newphysicalname = @dbname + '_log2.ldf'
 
SELECT @spaceneeded = (select top 1 convert(nvarchar(25), CEILING(size/128.0)) + 'MB'
	FROM sys.database_files
	where type_desc = 'LOG'
	order by file_id)
 
SELECT @SQL = 'ALTER DATABASE ' + @dbname + '
	ADD LOG FILE
	(NAME = ''' + @newlogicalname + ''',
	FILENAME = ''' + @newlogfolder + @newphysicalname + ''',
	SIZE= ' + @spaceneeded + ',
	MAXSIZE = UNLIMITED, 
	FILEGROWTH = 256MB)'
 
PRINT @SQL
IF @reportOnly = 0
BEGIN
	exec (@SQL)
END
 
SET NOCOUNT OFF
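
Again, a hypothetical invocation from inside the target database:

USE MyDB;
EXEC dbo.CreateSecondLogFile
	@dbname = N'MyDB',
	@newlogfolder = N'O:\MSSQL10_50.GREEN\MSSQL\DATA\',
	@reportOnly = 0;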

Scripts to Move the Data

I found a sproc to move indexes to another file group. The original code ignored clustered indexes, and I intend to move clustered indexes, so I changed it.

Some things you might want to change/adjust to taste:

  • ONLINE vs. OFFLINE index rebuilding
  • Use of PAGE compression
  • MAXDOP settings
  • Sorting in tempdb

I then wrapped MoveIndexToFileGroup in my own MoveDataToNewFilegroup sproc. My script iterates over every index in the database, rebuilding each one and thus moving it to the specified file group.

-- From Ramoji: http://blogs.msdn.com/b/ramoji/archive/2008/09/26/how-to-move-existing-indexes-from-one-filegroup-to-another.aspx
-- Modified by Larry Silverman to include clustered indexes, use ONLINE indexing.
 
use master;
 
IF NOT EXISTS (SELECT * FROM INFORMATION_SCHEMA.ROUTINES WHERE ROUTINE_NAME = 'MoveIndexToFileGroup')
	EXEC ('CREATE PROC dbo.MoveIndexToFileGroup AS SELECT ''stub version, to be replaced''')
GO
ALTER PROC [dbo].[MoveIndexToFileGroup] (        
 @DBName sysname,   
 @SchemaName sysname = 'dbo',
 @ObjectNameList Varchar(Max),
 @IndexName sysname = null,
 @FileGroupName varchar(100)  
)        
WITH RECOMPILE   
AS        
 
BEGIN  
 
SET NOCOUNT ON  
 
DECLARE @IndexSQL NVarchar(Max)  
DECLARE @IndexKeySQL NVarchar(Max)  
DECLARE @IncludeColSQL NVarchar(Max)  
DECLARE @FinalSQL NVarchar(Max)  
 
DECLARE @CurLoopCount Int  
DECLARE @MaxLoopCount Int  
DECLARE @StartPos Int  
DECLARE @EndPos Int  
 
DECLARE @ObjectName sysname  
DECLARE @IndName sysname  
DECLARE @IsUnique Varchar(10)  
DECLARE @Type Varchar(25)  
DECLARE @IsPadded Varchar(5)  
DECLARE @IgnoreDupKey Varchar(5) 
DECLARE @AllowRowLocks Varchar(5)  
DECLARE @AllowPageLocks Varchar(5) 
DECLARE @FillFactor Int  
DECLARE @ExistingFGName Varchar(Max) 
DECLARE @FilterDef NVarchar(Max)
 
DECLARE @ErrorMessage NVARCHAR(4000)  
DECLARE @SQL nvarchar(4000)  
DECLARE @RetVal Bit  
 
DECLARE @ObjectList Table(Id Int Identity(1,1),ObjectName sysname)  
 
DECLARE @WholeIndexData Table  
(  
ObjectName sysname  
,IndexName sysname  
,Is_Unique Bit  
,Type_Desc Varchar(25)  
,Is_Padded Bit  
,Ignore_Dup_Key Bit  
,Allow_Row_Locks Bit  
,Allow_Page_Locks Bit  
,Fill_Factor Int  
,Is_Descending_Key Bit  
,ColumnName sysname  
,Is_Included_Column Bit  
,FileGroupName Varchar(Max)
,Has_Filter Bit
,Filter_Definition NVarchar(Max)  
)  
 
DECLARE @DistinctIndexData Table  
(  
Id Int IDENTITY(1,1)  
,ObjectName sysname  
,IndexName sysname  
,Is_Unique Bit  
,Type_Desc Varchar(25)  
,Is_Padded Bit  
,Ignore_Dup_Key Bit  
,Allow_Row_Locks Bit  
,Allow_Page_Locks Bit  
,Fill_Factor Int  
,FileGroupName Varchar(Max) 
,Has_Filter Bit
,Filter_Definition NVarchar(Max)  
)  
 
-------------Validate arguments----------------------   
 
IF(@DBName IS NULL)  
BEGIN  
 SELECT @ErrorMessage = 'Database Name must be supplied.'   
 GOTO ABEND  
END  
 
IF(@ObjectNameList IS NULL)  
BEGIN  
 SELECT @ErrorMessage = 'Table or View Name(s) must be supplied.'   
 GOTO ABEND  
END  
 
IF(@FileGroupName IS NULL)  
BEGIN  
 SELECT @ErrorMessage = 'FileGroup Name must be supplied.'   
 GOTO ABEND  
END  
 
--Check for the existence of the Database  
IF NOT EXISTS(SELECT Name FROM sys.databases where Name = @DBName) 
BEGIN 
 SET @ErrorMessage = 'The specified Database does not exist' 
 GOTO ABEND
END
 
--Check for the existence of the Schema  
IF(upper(@SchemaName) <> 'DBO')  
BEGIN  
 SET @SQL = 'SELECT @RetVal = COUNT(*) FROM ' + QUOTENAME(@DBName) + '.sys.schemas WHERE name = ''' + @SchemaName + ''''  
 BEGIN TRY  
  EXEC sp_executesql @SQL,N'@RetVal Bit OUTPUT',@RetVal OUTPUT  
 END TRY  
 BEGIN CATCH  
  SELECT @ErrorMessage = ERROR_MESSAGE()   
  GOTO ABEND  
 END CATCH  
 IF(@RetVal = 0)  
 BEGIN  
  SELECT @ErrorMessage = 'No Schema with the name ' + @SchemaName + ' exists in the Database ' + @DBName   
  GOTO ABEND  
 END   
END  
 
--Check for the existence of the FileGroup  
SET @SQL = 'SELECT @RetVal=COUNT(*) FROM ' + QUOTENAME(@DBName) + '.sys.filegroups WHERE name = ''' + @FileGroupName + ''''  
BEGIN TRY  
 EXEC sp_executesql @SQL,N'@RetVal Bit OUTPUT',@RetVal OUTPUT  
END TRY  
BEGIN CATCH  
 SELECT @ErrorMessage = ERROR_MESSAGE()   
 GOTO ABEND  
END CATCH  
 
IF(@RetVal = 0)  
BEGIN  
 SELECT @ErrorMessage = 'No FileGroup with the name ' + @FileGroupName + ' exists in the Database ' + @DBName   
 GOTO ABEND  
END  
 
----------Get the objects from the concatenated list----------------------------------------------------  
 
SET @StartPos = 0  
SET @EndPos = 0  
 
WHILE(@EndPos >= 0)  
BEGIN  
 
 SELECT @EndPos = CHARINDEX(',',@ObjectNameList,@StartPos)  
 IF(@EndPos = 0) --Means, separator is not found  
 BEGIN  
  INSERT INTO @ObjectList  
  SELECT SUBSTRING(@ObjectNameList,@StartPos,(LEN(@ObjectNameList) - @StartPos)+1)   
 
  BREAK  
 END  
 
 INSERT INTO @ObjectList  
 SELECT SUBSTRING(@ObjectNameList,@StartPos,(@EndPos - @StartPos))  
 
 SET @StartPos = @EndPos + 1  
 
END  
-------------Check the validity of each Object and move its indexes----------------------  
 
SET @StartPos = 1  
SELECT @EndPos = COUNT(*) FROM @ObjectList  
 
WHILE(@StartPos <= @EndPos)  
BEGIN  
 SELECT @ObjectName = ObjectName FROM @ObjectList WHERE Id = @StartPos  
 
 --Verify that the object exists in the target database and schema  
 SET @SQL = 'SELECT @RetVal = COUNT(*) FROM ' + QUOTENAME(@DBName) + '.sys.objects o '  
  + 'JOIN ' + QUOTENAME(@DBName) + '.sys.schemas s ON o.schema_id = s.schema_id '  
  + 'WHERE o.name = ''' + @ObjectName + ''' AND s.name = ''' + @SchemaName + ''''  
 BEGIN TRY  
  EXEC sp_executesql @SQL,N'@RetVal Bit OUTPUT',@RetVal OUTPUT  
 END TRY  
 BEGIN CATCH  
  SELECT @ErrorMessage = ERROR_MESSAGE()   
  GOTO ABEND  
 END CATCH  
 IF(@RetVal = 0)  
 BEGIN  
  SELECT @ErrorMessage = 'No Object with the name ' + @ObjectName + ' exists in the Database ' + @DBName   
  GOTO ABEND  
 END  
 
 --Gather the metadata of every index (clustered and nonclustered) on this object  
 DELETE FROM @WholeIndexData  
 DELETE FROM @DistinctIndexData  
 
 SET @SQL = 'SELECT o.name, i.name, i.is_unique, i.type_desc, i.is_padded, i.ignore_dup_key, '  
  + 'i.allow_row_locks, i.allow_page_locks, i.fill_factor, ic.is_descending_key, c.name, '  
  + 'ic.is_included_column, f.name, i.has_filter, i.filter_definition '  
  + 'FROM ' + QUOTENAME(@DBName) + '.sys.indexes i '  
  + 'JOIN ' + QUOTENAME(@DBName) + '.sys.objects o ON i.object_id = o.object_id '  
  + 'JOIN ' + QUOTENAME(@DBName) + '.sys.schemas s ON o.schema_id = s.schema_id '  
  + 'JOIN ' + QUOTENAME(@DBName) + '.sys.index_columns ic ON i.object_id = ic.object_id AND i.index_id = ic.index_id '  
  + 'JOIN ' + QUOTENAME(@DBName) + '.sys.columns c ON ic.object_id = c.object_id AND ic.column_id = c.column_id '  
  + 'JOIN ' + QUOTENAME(@DBName) + '.sys.filegroups f ON i.data_space_id = f.data_space_id '  
  + 'WHERE o.name = ''' + @ObjectName + ''' AND s.name = ''' + @SchemaName + ''' AND i.type IN (1,2)'  
 
 INSERT INTO @WholeIndexData  
 EXEC sp_executesql @SQL  
 
 INSERT INTO @DistinctIndexData  
 SELECT DISTINCT ObjectName, IndexName, Is_Unique, Type_Desc, Is_Padded, Ignore_Dup_Key,  
  Allow_Row_Locks, Allow_Page_Locks, Fill_Factor, FileGroupName, Has_Filter, Filter_Definition  
 FROM @WholeIndexData  
 WHERE (@IndexName IS NULL OR IndexName = @IndexName)  
 
 SELECT @CurLoopCount = MIN(Id), @MaxLoopCount = MAX(Id) FROM @DistinctIndexData  
 
 -------------Rebuild each index, moving it to the specified filegroup----------------------  
 WHILE(@CurLoopCount <= @MaxLoopCount)  
 BEGIN  
  SELECT @IndName = IndexName  
   ,@IsUnique = CASE Is_Unique WHEN 1 THEN 'UNIQUE' ELSE '' END  
   ,@Type = Type_Desc  
   ,@IsPadded = CASE Is_Padded WHEN 1 THEN 'ON,' ELSE 'OFF,' END  
   ,@IgnoreDupKey = CASE Ignore_Dup_Key WHEN 1 THEN 'ON,' ELSE 'OFF,' END  
   ,@AllowRowLocks = CASE Allow_Row_Locks WHEN 1 THEN 'ON,' ELSE 'OFF,' END  
   ,@AllowPageLocks = CASE Allow_Page_Locks WHEN 1 THEN 'ON,' ELSE 'OFF,' END  
   ,@FillFactor = CASE Fill_Factor WHEN 0 THEN 100 ELSE Fill_Factor END  
   ,@ExistingFGName = FileGroupName  
   ,@FilterDef = CASE Has_Filter WHEN 1 THEN 'WHERE ' + Filter_Definition ELSE '' END  
  FROM @DistinctIndexData  
  WHERE Id = @CurLoopCount  
 
  --Build the key column list, preserving each column's sort order  
  SET @IndexKeySQL = NULL  
  SELECT @IndexKeySQL = COALESCE(@IndexKeySQL + ',','') + QUOTENAME(ColumnName)  
   + CASE Is_Descending_Key WHEN 1 THEN ' DESC' ELSE ' ASC' END  
  FROM @WholeIndexData  
  WHERE IndexName = @IndName AND Is_Included_Column = 0  
 
  --Build the INCLUDE column list  
  SET @IncludeColSQL = ''  
  SELECT @IncludeColSQL = CASE WHEN @IncludeColSQL = '' THEN '' ELSE @IncludeColSQL + ',' END + QUOTENAME(ColumnName)  
  FROM @WholeIndexData  
  WHERE IndexName = @IndName AND Is_Included_Column = 1  
 
  --Compose the CREATE INDEX ... WITH (DROP_EXISTING = ON) statement  
  SELECT @FinalSQL = 'USE ' + QUOTENAME(@DBName) + '; '  
   + 'CREATE ' + @IsUnique + ' ' + @Type + ' INDEX ' + QUOTENAME(@IndName)  
   + ' ON ' + QUOTENAME(@SchemaName) + '.' + QUOTENAME(@ObjectName)  
   + '(' + @IndexKeySQL + ') '  
   + CASE WHEN LEN(@IncludeColSQL) > 0 THEN  'INCLUDE(' + @IncludeColSQL + ') ' ELSE '' END
  + @FilterDef  
  + ' WITH ('   
  + 'PAD_INDEX = ' + @IsPadded   
  + 'IGNORE_DUP_KEY = ' + @IgnoreDupKey  
  + 'ALLOW_ROW_LOCKS  = ' + @AllowRowLocks   
  + 'ALLOW_PAGE_LOCKS  = ' + @AllowPageLocks   
  + 'SORT_IN_TEMPDB = OFF,'   
  + 'DROP_EXISTING = ON,'   
  + 'ONLINE = {ONLINE},'
  + 'DATA_COMPRESSION = PAGE,'  
  + 'FILLFACTOR = ' + CAST(@FillFactor AS Varchar(3))  
  + ') ON ' + QUOTENAME(@FileGroupName)  
 
  PRINT @FinalSQL  
 
  -------------Execute the Create Index statement to move to the specified filegroup----------------------  
  DECLARE @sqlTry nvarchar(max)
  BEGIN TRY  
   select @sqlTry = REPLACE(@FinalSQL,'{ONLINE}','ON')
   EXEC sp_executesql @sqlTry
  END TRY  
  BEGIN CATCH  
   BEGIN TRY
    PRINT 'Online failed, trying offline.'
    select @sqlTry = REPLACE(@FinalSQL,'{ONLINE}','OFF')
    EXEC sp_executesql @sqlTry
   END TRY
   BEGIN CATCH
    PRINT 'Offline failed too.'
    SELECT @ErrorMessage = ERROR_MESSAGE()   
    GOTO ABEND  
   END CATCH   
  END CATCH   
  PRINT 'Index ' +  @IndName + ' on Object ' + @ObjectName + ' is moved successfully.'   
 
  SET @CurLoopCount = @CurLoopCount + 1  
 
 END  
 
 SET @StartPos = @StartPos + 1  
END  
 SELECT 'The procedure completed successfully.'  
 RETURN  
 
ABEND:  
 RAISERROR(@ErrorMessage, 16, 1)  
FINAL:  
 RETURN    
END
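
A hypothetical invocation, moving every index on two tables onto the SECONDARY filegroup:

EXEC master.dbo.MoveIndexToFileGroup
	@DBName = N'MyDB',
	@SchemaName = N'dbo',
	@ObjectNameList = 'Orders,OrderLines',
	@FileGroupName = 'SECONDARY';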

And here’s my wrapper:

IF NOT EXISTS (SELECT * FROM INFORMATION_SCHEMA.ROUTINES WHERE ROUTINE_NAME = 'MoveDataToNewFilegroup' AND ROUTINE_TYPE='PROCEDURE')
  EXEC ('CREATE PROCEDURE dbo.MoveDataToNewFilegroup AS SELECT ''stub version, to be replaced''')
GO
 
ALTER PROC dbo.MoveDataToNewFilegroup
	@dbToMove nvarchar(100),
	@moveDataToFilegroup nvarchar(15) = N'SECONDARY',
	@reportOnly bit = 0
 
AS
 
SET NOCOUNT ON
 
DECLARE @listOfObjectsToMove VARCHAR(MAX)
declare @currentSchema nvarchar(50)
declare @i int
 
-- This could probably be automated.
-- Here I am creating a temp table that contains the names of all the known
-- schemas in our databases.  We're going to iterate over this list of schemas.
IF OBJECT_ID('tempdb..#schemas') IS NOT NULL DROP TABLE #schemas
create table #schemas (id int primary key not null,name nvarchar(50))
insert into #schemas values (1,'dbo')
insert into #schemas values (2,'events')
insert into #schemas values (3,'L10N')
insert into #schemas values (4,'reports')
 
-- Loop over all the known schemas in our databases.
set @i = 0
while @i < (select count(*) from #schemas)
begin
	set @i = @i + 1
	set @listOfObjectsToMove = null
 
	select @currentSchema = name from #schemas where id = @i	
	print 'Current Schema: ' + @currentSchema
 
	-- Get a CSV list of all the tables in this schema whose indexes we'll move to the new filegroup.
	SELECT @listOfObjectsToMove = COALESCE(@listOfObjectsToMove+',' ,'') + t.name
	from sys.tables t
	join sys.schemas s on t.schema_id = s.schema_id
	where s.name = @currentSchema
 
	-- THIS MOVES THE DATA!!!
	PRINT 'List of objects to move: ' + @listOfObjectsToMove
 
	IF @reportOnly = 0 AND @listOfObjectsToMove IS NOT NULL
	BEGIN
		exec master.dbo.MoveIndexToFileGroup @DBName=@dbToMove, @SchemaName = @currentSchema, @objectnamelist=@listOfObjectsToMove, @FileGroupName = @moveDataToFilegroup
	END
end
 
SET NOCOUNT OFF
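
A hypothetical run, assuming the wrapper was created in the database being migrated (it reads that database’s sys.tables):

USE MyDB;
-- Preview which tables would move, without touching anything.
EXEC dbo.MoveDataToNewFilegroup @dbToMove = N'MyDB', @reportOnly = 1;
-- Move everything to the SECONDARY filegroup for real.
EXEC dbo.MoveDataToNewFilegroup @dbToMove = N'MyDB', @moveDataToFilegroup = N'SECONDARY', @reportOnly = 0;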

List Objects by File Group

While experimenting with my scripts, I needed to validate which objects had been successfully moved to the new file group. Like most useful SQL on the web, this code comes courtesy of the prolific Pinal Dave.

/* Get details of objects on different filegroups, finding user-created tables */
SELECT o.[name] AS [table_name], o.[type], i.[name] AS [index_name], i.[index_id], f.[name] AS [filegroup_name]
FROM sys.indexes i
INNER JOIN sys.filegroups f
ON i.data_space_id = f.data_space_id
INNER JOIN sys.all_objects o
ON i.[object_id] = o.[object_id]
WHERE o.type = 'U' -- User Created Tables
ORDER BY f.[name]

Lessons Learned

The theory is sound.  It works.  But there are side effects.

Log Bloat

Completely re-indexing every index in the database will inflate your transaction log file – tremendously. If you don’t have enough disk space, this could be a serious problem.  I saw a 6GB database produce a 30GB log file after re-indexing all the indexes.  I did not expect that.

If you are using transaction log shipping (perhaps for off-site disaster recovery purposes, as we are), you could be in for an unexpected bandwidth hit pushing those bloated transaction logs to their destinations. That could cost you real money in bandwidth overages.

This was the deal-breaker for continuing with my approach. I decided that the explosion of the transaction log file during re-indexing was a greater evil than the brief downtime we would take to move our database files the “old fashioned way”.

If you aren’t concerned with log shipping, and you have the disk to spare, my approach might very well be viable for you.

What I Finally Went With

Offline, Move and Online

I investigated the best way to move the database files from LUN to LUN. I looked at copy, robocopy and xcopy. I didn’t bother with anything more exotic.

I settled on using xcopy with the /J switch to enable unbuffered copying. The xcopy help file says that /J, “Copies using unbuffered I/O. Recommended for very large files.”

In order to execute the operations as quickly as possible, I decided I would do the actual file copy commands within the SQL Server engine using xp_cmdshell. That required that I temporarily enable xp_cmdshell. It’s not something you want to leave enabled all the time, as people can get themselves into trouble with it.
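
Enabling it is a two-step sp_configure change; the same statements with 0 turn it back off when you’re done:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;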

I have a lot of databases to move, so I used SQL to generate the SQL commands I needed.

This script doesn’t do any work itself; it just generates SQL that you can copy, paste and execute at your leisure.

SET NOCOUNT ON
 
-- Note that in the script, we are moving data files from M: to P:
-- and log files from L: to O:
-- Change the script to suit your environment.
 
IF OBJECT_ID('tempdb..#commands') IS NOT NULL DROP TABLE #commands
create table #commands (command nvarchar(1000))
 
exec sp_foreachdb 
@print_command_only = 0,
@system_only = 0,
@command = '
use ?;
IF ''?'' <> ''[tempdb]''
BEGIN
	insert into #commands
	select ''ALTER DATABASE '' + DB_NAME() + '' MODIFY FILE ( NAME = '' + name + '', FILENAME = '''''' + 
		CASE 
			WHEN physical_name like ''M:\%'' THEN REPLACE(physical_name, ''M:'', ''P:'') + '''''');''
			WHEN physical_name like ''L:\%'' THEN REPLACE(physical_name, ''L:'', ''O:'') + '''''');''
		END
	FROM sys.database_files;
 
	insert into #commands
	select ''ALTER DATABASE ['' + DB_NAME() + ''] set offline with rollback immediate;'';
 
	insert into #commands
	select ''exec xp_cmdshell ''''echo f | xcopy "'' + physical_name + ''" "'' +
		CASE 
			WHEN physical_name like ''M:\%'' THEN REPLACE(physical_name, ''M:'', ''P:'')
			WHEN physical_name like ''L:\%'' THEN REPLACE(physical_name, ''L:'', ''O:'')
		END
	 + ''" /V /H /O /J''''''
	FROM sys.database_files;
 
	insert into #commands
	select ''ALTER DATABASE ['' + DB_NAME() + ''] set online;'';
 
	insert into #commands
	select ''''
 
END
'
select * from #commands
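
For a hypothetical database MyDB with its data file on M: and its log file on L:, the generated commands come out looking roughly like this:

ALTER DATABASE MyDB MODIFY FILE ( NAME = MyDB, FILENAME = 'P:\SQLData\MyDB.mdf');
ALTER DATABASE MyDB MODIFY FILE ( NAME = MyDB_log, FILENAME = 'O:\SQLLogs\MyDB_log.ldf');
ALTER DATABASE [MyDB] set offline with rollback immediate;
exec xp_cmdshell 'echo f | xcopy "M:\SQLData\MyDB.mdf" "P:\SQLData\MyDB.mdf" /V /H /O /J'
exec xp_cmdshell 'echo f | xcopy "L:\SQLLogs\MyDB_log.ldf" "O:\SQLLogs\MyDB_log.ldf" /V /H /O /J'
ALTER DATABASE [MyDB] set online;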

I did some copy tests from SAN to SAN and found I was getting about 140 MB/s using this technique. At that rate, a 70GB database copies in about 8.5 minutes (roughly 71,680 MB ÷ 140 MB/s ≈ 512 seconds). The bottleneck is the paltry read speed of the old SAN, which is partly the reason for undertaking this migration project in the first place.

I was able to move many of our smaller customer databases in mere seconds.

In the final calculus, 8.5 minutes of downtime for this one customer’s database trumped the risks, complexities and costs of implementing my alternative approach.

As of this writing, I have used this technique to move all our customer databases except the very largest one, which I’ll move during an agreed-upon maintenance window.

Final Cleanup – Moving System Databases

To migrate completely off the old SAN, we need to move the system databases as well. Microsoft has this procedure well documented, so I won’t reproduce any code here. Suffice it to say, it requires briefly taking the entire instance down. I have no amazing theories as to how to mitigate that reality.

Conclusion

I learned quite a bit working through my theory, so I certainly don’t feel like I wasted my time. I hope someone else who has more stringent uptime requirements or less understanding customers might benefit from my work.