Summary: To shrink all data and log files for a specific database, use the DBCC SHRINKDATABASE command. To shrink one data or log file at a time for a specific database, use DBCC SHRINKFILE. If a database file cannot be shrunk, check whether (1) there is sufficient free space, (2) there is a clustered index on the table, and (3) there are highly fragmented extents on the partitions.
1. DBCC ShrinkDatabase
1) What it's for - Shrinks the size of the data and log files in the specified database.
2) Syntax
DBCC SHRINKDATABASE
( database_name | database_id | 0
    [ , target_percent ]
    [ , { NOTRUNCATE | TRUNCATEONLY } ]
)
[ WITH NO_INFOMSGS ]
a) NOTRUNCATE
Compacts the data in data files by moving allocated pages from the end of a file to unallocated pages at the front of the file. target_percent is optional.
The free space at the end of the file is not returned to the operating system, and the physical size of the file does not change. Therefore, when NOTRUNCATE is specified, the database appears not to shrink.
NOTRUNCATE is applicable only to data files; the log file is not affected.
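A minimal sketch of the NOTRUNCATE behavior, using the SchoolDB sample database that appears in the examples in this article:

```sql
-- Compact allocated pages toward the front of the data files without
-- releasing space to the operating system. The physical file sizes on
-- disk do not change, so the database appears not to shrink.
DBCC SHRINKDATABASE(SchoolDB, NOTRUNCATE);
```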
b) TRUNCATEONLY
Releases all free space at the end of the file to the operating system but does not perform any page movement inside the file. The data file is shrunk only to the last allocated extent. target_percent is ignored if specified with TRUNCATEONLY.
TRUNCATEONLY affects the log file. To truncate only the data file, use DBCC SHRINKFILE.
3) Examples
· Shrinking a database, leaving 10% free space after the shrink
--To see how much free space there is before the action
sp_spaceused
--Shrink the database, leaving 10% free space
DBCC SHRINKDATABASE(SchoolDB, 10)
--To see how much free space there is after the action
sp_spaceused
· Truncating a database to the last allocated extent – releasing all free space at the end of the files to the operating system
DBCC SHRINKDATABASE(SchoolDB, TRUNCATEONLY)
2. DBCC ShrinkFile
1) What it's for - Shrinks the size of the specified data or log file for the current database, or empties a file by moving the data from the specified file to other files in the same filegroup, allowing the file to be removed from the database. You can shrink a file to a size that is less than the size specified when it was created. This resets the minimum file size to the new value.
2) Syntax
DBCC SHRINKFILE
(
    { file_name | file_id }
    { [ , EMPTYFILE ]
    | [ [ , target_size ] [ , { NOTRUNCATE | TRUNCATEONLY } ] ]
    }
)
[ WITH NO_INFOMSGS ]
· file_name
Is the logical name of the file to be shrunk.
· file_id
Is the identification (ID) number of the file to be shrunk. To obtain a file ID, use the FILE_IDEX system function or query the sys.database_files catalog view in the current database.
· target_size
Is the size for the file in megabytes, expressed as an integer. If not specified, DBCC SHRINKFILE reduces the size to the default file size. The default size is the size specified when the file was created.
Note: You can reduce the default size of an empty file by using DBCC SHRINKFILE target_size. For example, if you create a 5-MB file and then shrink the file to 3 MB while the file is still empty, the default file size is set to 3 MB. This applies only to empty files that have never contained data.
If target_size is specified, DBCC SHRINKFILE tries to shrink the file to the specified size. Used pages in the part of the file to be freed are relocated to available free space in the part of the file retained. For example, for a 10-MB data file, a DBCC SHRINKFILE operation with a target_size of 8 causes all used pages in the last 2 MB of the file to be reallocated into any unallocated pages in the first 8 MB of the file. DBCC SHRINKFILE does not shrink a file past the size needed to store the data in the file. For example, if 7 MB of a 10-MB data file is used, a DBCC SHRINKFILE statement with a target_size of 6 shrinks the file to only 7 MB, not 6 MB.
For log files, target_size is the amount of free space in the log after the shrink operation.
· EMPTYFILE
Migrates all data from the specified file to other files in the same filegroup. Because the Database Engine no longer allows data to be placed in the empty file, the file can be removed by using the ALTER DATABASE statement.
3) Shrinking a Log File
For log files, the Database Engine uses target_size to calculate the target size for the whole log; therefore, target_size is the amount of free space in the log after the shrink operation. Target size for the whole log is then translated to a target size for each log file. DBCC SHRINKFILE tries to shrink each physical log file to its target size immediately. However, if part of the logical log resides in the virtual logs beyond the target size, the Database Engine frees as much space as possible, and then issues an informational message. The message describes what actions are required to move the logical log out of the virtual logs at the end of the file. After the actions are performed, DBCC SHRINKFILE can be used to free the remaining space.
Because a log file can only be shrunk to a virtual log file boundary, shrinking a log file to a size smaller than the size of a virtual log file might not be possible, even if it is not being used. The size of the virtual log file is chosen dynamically by the Database Engine when log files are created or extended.
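One way to inspect the virtual log file layout discussed above is the DBCC LOGINFO command (undocumented but widely used; the database name below is a placeholder):

```sql
-- List the virtual log files (VLFs) of the current database.
-- Each result row is one VLF; a Status of 2 means the VLF is active
-- and cannot be removed by a shrink operation.
USE UserDB;  -- assumed database name
GO
DBCC LOGINFO;
GO
```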
4) Examples
A. Shrinking a data file named DataFile1 in the UserDB user database to 7 MB
DBCC SHRINKFILE (DataFile1, 7);
GO
B. Shrinking a log file to a specified target size
/*
The following example shrinks the log file in the AdventureWorks2012 database to 1 MB.
To allow the DBCC SHRINKFILE command to shrink the file, the file is first truncated
by setting the database recovery model to SIMPLE.
*/
USE AdventureWorks2012;
GO
-- Truncate the log by changing the database recovery model to SIMPLE.
ALTER DATABASE AdventureWorks2012
SET RECOVERY SIMPLE;
GO
-- Shrink the truncated log file to 1 MB.
DBCC SHRINKFILE (AdventureWorks2012_Log, 1);
GO
-- Reset the database recovery model.
ALTER DATABASE AdventureWorks2012
SET RECOVERY FULL;
GO
C. Truncating a data file
/*
The following example truncates the primary data file in the AdventureWorks2012 database.
The sys.database_files catalog view is queried to obtain the file_id of the data file.
*/
USE AdventureWorks2012;
GO
SELECT file_id, name
FROM sys.database_files;
GO
DBCC SHRINKFILE (1, TRUNCATEONLY);
D. Emptying a file – Note: only secondary data (.ndf) files can be emptied.
/*
The following example demonstrates the procedure for emptying a file so that it can be
removed from the database. For the purposes of this example, a data file is first created
and it is assumed that the file contains data.
*/
USE AdventureWorks2012;
GO
-- Create a data file and assume it contains data.
ALTER DATABASE AdventureWorks2012
ADD FILE (
NAME = Test1data,
FILENAME = 'C:\t1data.ndf',
SIZE = 5MB
);
GO
-- Empty the data file.
DBCC SHRINKFILE (Test1data, EMPTYFILE);
GO
-- Remove the data file from the database.
ALTER DATABASE AdventureWorks2012
REMOVE FILE Test1data;
GO
5) Best Practices for DBCC ShrinkFile
Consider the following information when you plan to shrink a file:
· A shrink operation is most effective after an operation that creates lots of unused space, such as a truncate table or a drop table operation.
· Most databases require some free space to be available for regular day-to-day operations. If you shrink a database repeatedly and notice that the database size grows again, this indicates that the space that was shrunk is required for regular operations. In these cases, repeatedly shrinking the database is a wasted operation.
· A shrink operation does not preserve the fragmentation state of indexes in the database, and generally increases fragmentation to a degree. This is another reason not to repeatedly shrink the database.
· Shrink multiple files in the same database sequentially instead of concurrently. Contention on system tables can cause delays due to blocking.
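The last point can be sketched as follows; the database and file names, and the target sizes, are hypothetical placeholders:

```sql
-- Hypothetical example: shrink two files of the same database one at a
-- time in a single session, rather than in concurrent sessions, to avoid
-- blocking on system tables.
USE UserDB;  -- assumed database name
GO
DBCC SHRINKFILE (UserDB_Data1, 100);  -- first file, target 100 MB
GO
DBCC SHRINKFILE (UserDB_Data2, 100);  -- second file, only after the first completes
GO
```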
6) Troubleshooting for DBCC ShrinkFile
· The file does not shrink due to insufficient free space
If the shrink operation runs without error, but the file does not appear to have changed in size, verify that the file has adequate free space to remove by performing one of the following operations:
Run the following query.
SELECT name ,
    size / 128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS INT) / 128.0 AS FreeSpaceInMB
FROM sys.database_files;
Run the DBCC SQLPERF command to return the space used in the transaction log.
If insufficient free space is available, the shrink operation cannot reduce the file size any further.
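The DBCC SQLPERF call mentioned above looks like this:

```sql
-- Report log size and percent of log space used for every database
-- on the instance; a low percent-used with a large log size suggests
-- there is space a shrink could reclaim.
DBCC SQLPERF(LOGSPACE);
```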
Typically it is the log file that appears not to shrink. This is usually the result of a log file that has not been truncated. You can truncate the log by setting the database recovery model to SIMPLE, or by backing up the log, and then running the DBCC SHRINKFILE operation again.
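For a database that must remain in the FULL recovery model, the backup-then-shrink approach can be sketched as follows; the database name, logical log file name, and backup path are hypothetical:

```sql
-- Hypothetical example: free log space without switching to SIMPLE recovery.
USE UserDB;  -- assumed database name
GO
-- Back up the log so that inactive virtual log files can be truncated.
BACKUP LOG UserDB TO DISK = 'D:\Backup\UserDB_log.bak';  -- assumed path
GO
-- Then retry the shrink, targeting 1 MB.
DBCC SHRINKFILE (UserDB_Log, 1);  -- assumed logical log file name
GO
```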
· The file does not shrink due to fragmented pages in the extents
Remember that DBCC ShrinkFile works at the extent level. Extents that contain data, no matter on how many pages, cannot be shrunk. If you check the free space at the page level, the result will reflect the empty pages, but DBCC ShrinkFile does not shrink the file in these cases, as demonstrated below.
-- DBCC ShrinkFile
CREATE DATABASE test_shrink
GO
USE test_shrink
GO
--create a table: one row occupies a page
IF EXISTS ( SELECT *
            FROM sys.objects
            WHERE name = 'show_extent'
                AND TYPE = 'U' )
    DROP TABLE show_extent
GO
CREATE TABLE show_extent ( a INT, b NVARCHAR(3900) )
GO
--insert 8000 rows of data into the table
DECLARE @i INT
SET @i = 1
WHILE @i <= 1000
BEGIN
    INSERT INTO show_extent VALUES ( 1, REPLICATE(N'a', 3900) )
    INSERT INTO show_extent VALUES ( 2, REPLICATE(N'b', 3900) )
    INSERT INTO show_extent VALUES ( 3, REPLICATE(N'c', 3900) )
    INSERT INTO show_extent VALUES ( 4, REPLICATE(N'd', 3900) )
    INSERT INTO show_extent VALUES ( 5, REPLICATE(N'e', 3900) )
    INSERT INTO show_extent VALUES ( 6, REPLICATE(N'f', 3900) )
    INSERT INTO show_extent VALUES ( 7, REPLICATE(N'g', 3900) )
    INSERT INTO show_extent VALUES ( 8, REPLICATE(N'h', 3900) )
    SET @i = @i + 1
END
--The 8000 rows use 64072 KB = 64000 KB data + 8 KB index + 64 KB unused.
sp_spaceused show_extent
/*
name         rows  reserved  data      index_size  unused
show_extent  8000  64072 KB  64000 KB  8 KB        64 KB
*/
DBCC showcontig('show_extent')
GO
/****This heap table uses 1002 extents and 8000 pages.
DBCC SHOWCONTIG scanning 'show_extent' table...
Table: 'show_extent' (245575913); index ID: 0, database ID: 18
TABLE level scan performed.
- Pages Scanned................................: 8000
- Extents Scanned..............................: 1002
- Extent Switches..............................: 1001
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 99.80% [1000:1002]
- Extent Scan Fragmentation ...................: 0.00%
- Avg. Bytes Free per Page.....................: 279.0
- Avg. Page Density (full).....................: 96.55%
*/
SELECT * FROM sys.database_files
/*****
Data file size = 8344 pages * 8 KB/page = 66752 KB
file_id  type_desc  physical_name                    size (number of pages)
1        ROWS       D:\xxx\DATA\test_shrink.mdf      8344
2        LOG        D:\xxx\DATA\test_shrink_log.ldf  15736
*/
--Now, let's delete all of the data except the rows with a=5
--7000 rows deleted; 1000 rows left, one row per page in each extent
DELETE show_extent
WHERE a <> 5
GO
--The space used by the remaining 1000 rows:
sp_spaceused show_extent
/****Surprisingly the 1000 rows still use 64008 KB: 32992 KB data + 8 KB index + 31008 KB unused
name         rows  reserved  data      index_size  unused
show_extent  1000  64008 KB  32992 KB  8 KB        31008 KB
*/
DBCC showcontig('show_extent')
GO
/*******All 1001 extents are still used; none have been released.
DBCC SHOWCONTIG scanning 'show_extent' table...
Table: 'show_extent' (245575913); index ID: 0, database ID: 18
TABLE level scan performed.
- Pages Scanned................................: 4124
- Extents Scanned..............................: 1001
- Extent Switches..............................: 1000
- Avg. Pages per Extent........................: 4.1
- Scan Density [Best Count:Actual Count].......: 51.55% [516:1001]
- Extent Scan Fragmentation ...................: 0.00%
- Avg. Bytes Free per Page.....................: 6199.0
- Avg. Page Density (full).....................: 23.41%
*/
SELECT * FROM sys.database_files
/**** The same as before the delete command
file_id  type_desc  physical_name                    size (number of pages)
1        ROWS       D:\xxx\DATA\test_shrink.mdf      8344
2        LOG        D:\xxx\DATA\test_shrink_log.ldf  15736
*/
--But we know the data is actually only about 1000 pages * 8 KB/page = 8 MB, so let's try to shrink the file to 10 MB
DBCC SHRINKFILE (1, 10)
SELECT * FROM sys.database_files
/****Almost the same. So DBCC ShrinkFile does not release the space in this case.
file_id  type_desc  physical_name                    size (number of pages)
1        ROWS       D:\xxx\DATA\test_shrink.mdf      8296
2        LOG        D:\xxx\DATA\test_shrink_log.ldf  15736
*/
sp_spaceused show_extent
/***
Again sp_spaceused shows these 1000 rows use about 64 MB of space, although the data is actually 8000 KB (8 MB).
name         rows  reserved  data     index_size  unused
show_extent  1000  63952 KB  8000 KB  8 KB        55944 KB
*/
--Solution: rebuild the clustered index if the table has one. Otherwise, create one as below
CREATE CLUSTERED INDEX show_I
ON show_extent (a)
GO
DBCC showcontig('show_extent')
GO
/******
Now the number of extents used has dropped to 125
DBCC SHOWCONTIG scanning 'show_extent' table...
Table: 'show_extent' (293576084); index ID: 1, database ID: 18
TABLE level scan performed.
- Pages Scanned................................: 1000
- Extents Scanned..............................: 125
- Extent Switches..............................: 124
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 100.00% [125:125]
- Logical Scan Fragmentation ..................: 0.00%
- Extent Scan Fragmentation ...................: 0.00%
- Avg. Bytes Free per Page.....................: 273.0
- Avg. Page Density (full).....................: 96.63%
*/
--Now shrinkfile works
DBCC SHRINKFILE (1, 10)
SELECT * FROM sys.database_files
/**** Now the data file has been reduced to 1304 pages * 8 KB/page = 10432 KB = 10.4 MB
file_id  type_desc  physical_name                    size (number of pages)
1        ROWS       D:\xxx\DATA\test_shrink.mdf      1304
2        LOG        D:\xxx\DATA\test_shrink_log.ldf  15736
*/
sp_spaceused
/*****Actually, the minimum file size should be 10336 KB: 8848 KB data + 1160 KB index + 328 KB unused.
reserved  data     index_size  unused
10336 KB  8848 KB  1160 KB     328 KB
*/
· DBCC SHRINKFILE and SHRINKDATABASE commands may not work because of sparsely populated text, ntext, or image columns
See below from http://support.microsoft.com/kb/324432/en-us
/*
However, DBCC SHRINKFILE and SHRINKDATABASE commands may not work because of sparsely populated text, ntext, or image columns.
SQL Server uses separate pages to store this LOB data, so the clustered index approach used for non-LOB data does not help. We need to find the objects containing the fragmented data and rebuild them.
One way to find the fragmented objects is to use DBCC EXTENTINFO, which provides information about extent allocations within a database.
The command can be used to identify how extents have been allocated and whether the extents being used are mixed or uniform. For every extent allocation, there will be one row in the results. Since extents are comprised of eight pages, there can be as many as eight allocations for an extent when there are single-page allocations, such as when mixed extents are used. When uniform extents are used, there will be only one extent allocation and one row returned for the extent. DBCC EXTENTINFO can be extremely useful for investigating how pages are allocated to tables and indexes, and for issues related to fragmentation.
Run the following code after deleting the 7000 rows. Do not use the clustered index approach to shrink the file for this demo.
Please note this demo presents a solution to identify the highly fragmented extents/partitions on the previous table. If the query is used for identifying LOBs, you can specify index_id = 255, etc.
*/
IF EXISTS ( SELECT *
            FROM sys.objects
            WHERE name = 'extentinfo'
                AND TYPE = 'U' )
    DROP TABLE extentinfo
GO
CREATE TABLE extentinfo
    (
      [file_id] SMALLINT ,
      page_id INT ,
      pg_alloc INT ,
      ext_size INT ,
      obj_id INT ,
      index_id INT ,
      partition_number INT ,
      partition_id BIGINT ,
      iam_chain_type VARCHAR(50) ,
      pfs_bytes VARBINARY(10)
    )
GO
IF EXISTS ( SELECT *
            FROM sys.objects
            WHERE name = 'import_extentinfo'
                AND TYPE = 'P' )
    DROP PROC import_extentinfo
GO
CREATE PROCEDURE import_extentinfo
AS
    DBCC extentinfo('test_shrink')
GO
INSERT extentinfo
EXEC import_extentinfo
GO
/*
If the '% of possible_extents / actual_extents' column is less than 75, this database has sparse extents and can be reduced in size.
In our case, it is a heap table. If it is BLOB data, index_id should be 255.
*/
SELECT [file_id] ,
    obj_id ,
    index_id ,
    partition_id ,
    ext_size ,
    'actual_extent_count' = COUNT(*) ,
    'actual_page_count' = SUM(pg_alloc) ,
    'possible_extent_count' = CEILING(SUM(pg_alloc) * 1.0 / ext_size) ,
    '% of possible_extents / actual_extents' = ( CEILING(SUM(pg_alloc) * 1.00 / ext_size) * 100.00 ) / COUNT(*)
FROM extentinfo
GROUP BY [file_id] ,
    obj_id ,
    index_id ,
    partition_id ,
    ext_size
HAVING COUNT(*) - CEILING(SUM(pg_alloc) * 1.0 / ext_size) > 0
ORDER BY partition_id ,
    obj_id ,
    index_id ,
    [file_id]
/***
OBJECT_ID = 389576426 ==> the show_extent table
file_id  obj_id     index_id  partition_id       ext_size  actual_extent_count
1        389576426  0         72057594039631872  8         998
actual_page_count  possible_extent_count  % of possible_extents / actual_extents
4115               515                    51.603206412
*/
So, how to fix the problem after the fragmented object is identified?
If the affected tables do not have to be continuously online while you correct this situation, you can use any one of the following workarounds to resolve this problem:
o Use a SELECT INTO statement to transfer the whole table to a new table in the same database. Drop the original table, and then run a DBCC SHRINKFILE statement. Rename the new table to the original table name.
o Use a SELECT INTO statement to transfer the whole table to a new table in a different database. Truncate the original table, and then run a DBCC SHRINKFILE statement. Transfer the data back to the original table.
o Use the bulk copy program to copy the table out in native mode. Script out the table, and then drop the existing table. Run a DBCC SHRINKFILE statement. Create a new table, and then use the bulk copy program to copy the data back to the table.
o Use Data Transformation Services (DTS) to transfer the whole database to a new database. SQL Server 7.0 Data Transformation Services may not transfer text and image data that is longer than 64 KB correctly. This problem does not apply to the SQL Server 2000 version of Data Transformation Services. For additional information, click the article number below to view the article in the Microsoft Knowledge Base:
257425 FIX: DTS Object Transfer Does Not Transfer BLOB Data Greater Than 64 KB
You can look for text and image columns that are greater than 64 KB with the following query:
SELECT MAX(DATALENGTH(TextImageColumn))
FROM TableName
If the text and image data is greater than 64 KB, the transfer truncates the data and does not raise any error except in the DTS log files. If you have text or image data greater than 64 KB, use the information in the third bullet item for the tables that have text or image data. Next, use DTS for the rest of the items in the database.
If the database must be online, or the size of the database is too large to move the data into an external file or another database, you can use the DBCC SHRINKFILE command with the EMPTYFILE option. To use this workaround, follow these steps:
§ If you are running Microsoft SQL Server 7.0 Service Pack 3, turn on trace flags 1180 and 1197.
§ If you are running Microsoft SQL Server 7.0 Service Pack 3 build 7.00.987 or later, including Service Pack 4, turn on trace flag 1180.
§ If you have spread the database across several files for either drive space issues or performance reasons, create 1 or more additional files in the database by using the ALTER DATABASE command, with a total size comparable to the results of the following query.
SELECT 'size in MB' = CEILING(SUM(ei.pg_alloc) / 128.000) ,
    sfg.groupname
FROM extentinfo ei
    INNER JOIN sysfiles sf ON ei.[file_id] = sf.fileid
        AND ei.[file_id] != 1
    INNER JOIN sysfilegroups sfg ON sf.groupid = sfg.groupid
GROUP BY sfg.groupname
NOTE: The preceding query uses the table that is created by the query in the "More Information" section of this article. Additionally, the file size totals are broken down by filegroup.
§ Run a DBCC SHRINKFILE query, with the EMPTYFILE option as the second parameter, on each file that has sparsely populated blob uniform extents.
§ You can track the progress of the DBCC SHRINKFILE command by creating one or more tables with the same schema as the extentinfo table from the "More Information" section of this article. Then run an "INSERT .. EXEC import_extentinfo" query to populate the data to the new tables. Use the following query to analyze the results:
SELECT [file_id] ,
    'page count' = SUM(pg_alloc) ,
    'min page' = MIN(page_id) ,
    'max page' = MAX(page_id)
FROM extentinfo
GROUP BY [file_id]
Additionally, you can use the following query to keep track of when the DBCC EXTENTINFO command was run, if the tables start with the same pattern of letters and are created near the time of the DBCC EXTENTINFO command:
SELECT crdate ,
    [name]
FROM sysobjects
WHERE type = 'U'
    AND [name] LIKE 'extent%'
ORDER BY crdate
§ When the DBCC SHRINKFILE query completes, run an ALTER DATABASE query to remove the old files from the database.
§ You can possibly remove additional space by running a DBCC SHRINKFILE query with the TRUNCATEONLY option on the new files that were created in step 3.
§ If you are running Microsoft SQL Server 7.0, turn off the trace flags you turned on in step 1 or 2.