(1) Memory-Optimized Tables
In-Memory OLTP is a memory-optimized database engine integrated into the SQL Server engine and optimized for OLTP workloads. To use In-Memory OLTP, you define a heavily accessed table as memory optimized. Memory-optimized tables are fully transactional, durable, and are accessed using Transact-SQL in the same way as disk-based tables. A query can reference both memory-optimized tables and disk-based tables, and a transaction can update data in both. The actual performance gain depends on many factors, but 5-to-20 times performance improvements are common.
In-Memory OLTP achieves significant performance and scalability gains by using:
· Algorithms that are optimized for accessing memory-resident data.
· Optimistic concurrency control (that is, row versioning) that eliminates logical locks.
· Lock-free objects that eliminate all physical locks and latches. Threads that perform transactional work do not use locks or latches for concurrency control.
· Natively compiled stored procedures, which have significantly better performance than interpreted stored procedures when accessing a memory-optimized table.
For more information, see In-Memory OLTP (In-Memory Optimization).
USE master
GO
-- Create a new database.
CREATE DATABASE imoltp
GO
-- Prepare the database for In-Memory OLTP by
-- adding a memory-optimized filegroup to the database.
ALTER DATABASE imoltp ADD FILEGROUP imoltp_file_group
CONTAINS MEMORY_OPTIMIZED_DATA;
-- Add a file (to hold the memory-optimized data) to the new filegroup.
ALTER DATABASE imoltp ADD FILE (name='imoltp_file', filename='D:\in-memory-oltp-demo\imoltp_file')
    TO FILEGROUP imoltp_file_group;
GO
/*
Run the following code to create the disk-based table, two (2) memory-optimized tables,
and the natively-compiled stored procedure that will be used to demonstrate the
different data access methods:
*/
USE imoltp
GO
-- If the tables or stored procedure already exist, drop them to start clean.
IF EXISTS (SELECT NAME FROM sys.objects WHERE NAME = 'DiskBasedTable')
DROP TABLE [dbo].[DiskBasedTable]
GO
IF EXISTS (SELECT NAME FROM sys.objects WHERE NAME = 'InMemTable')
DROP TABLE [dbo].[InMemTable]
GO
IF EXISTS (SELECT NAME FROM sys.objects WHERE NAME = 'InMemTable2')
DROP TABLE [dbo].[InMemTable2]
GO
IF EXISTS (SELECT NAME FROM sys.objects WHERE NAME = 'usp_InsertData')
DROP PROCEDURE [dbo].[usp_InsertData]
GO
-- Create a traditional disk-based table.
CREATE TABLE [dbo].[DiskBasedTable] (
c1 INT NOT NULL PRIMARY KEY,
c2 NCHAR(48) NOT NULL
)
GO
-- Create a memory-optimized table.
CREATE TABLE [dbo].[InMemTable] (
c1 INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT=1000000),
c2 NCHAR(48) NOT NULL
) WITH (MEMORY_OPTIMIZED=ON, DURABILITY = SCHEMA_AND_DATA);
GO
-- Create a 2nd memory-optimized table.
CREATE TABLE [dbo].[InMemTable2] (
c1 INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT=1000000),
c2 NCHAR(48) NOT NULL
) WITH (MEMORY_OPTIMIZED=ON, DURABILITY = SCHEMA_AND_DATA);
GO
-- Create a natively-compiled stored procedure.
-- Natively compiled stored procedures are Transact-SQL stored procedures compiled to native code that access memory-optimized tables.
-- Natively compiled stored procedures allow for efficient execution of queries and business logic in the stored procedure.
CREATE PROCEDURE [dbo].[usp_InsertData]
@rowcount INT,
@c NCHAR(48)
WITH NATIVE_COMPILATION,
SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC
WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
DECLARE @i INT = 1;
WHILE @i <= @rowcount
BEGIN
    INSERT INTO [dbo].[InMemTable2](c1,c2) VALUES (@i, @c);
SET @i += 1;
END
END
GO
/*
The setup is complete and we are ready to execute the queries that will display the response times
comparing the performance between the data access methods.
To complete the example, run the following code multiple times. Ignore the results from the first run,
which is negatively affected by initial memory allocation.
--1st time: 1184, 932, 428
--2nd time: 1593, 1325, 245
--3rd: 1578, 1028, 174
--4th: 1347, 846, 237
*/
SET STATISTICS TIME OFF;
SET NOCOUNT ON;
-- Delete data from all tables to reset the example.
DELETE FROM [dbo].[DiskBasedTable]
    WHERE [c1]>0
GO
DELETE FROM [dbo].[inMemTable]
WHERE [c1]>0
GO
DELETE FROM [dbo].[InMemTable2]
WHERE [c1]>0
GO
-- Declare parameters for the test queries.
DECLARE @i INT = 1;
DECLARE @rowcount INT = 100000;
DECLARE @c NCHAR(48) = N'12345678901234567890123456789012345678';
DECLARE @timems INT;
DECLARE @starttime datetime2 = sysdatetime();
-- Disk-based table queried with interpreted Transact-SQL.
BEGIN TRAN
WHILE @i <= @rowcount
BEGIN
    INSERT INTO [dbo].[DiskBasedTable](c1,c2) VALUES (@i, @c);
SET @i += 1;
END
COMMIT
SET @timems = datediff(ms, @starttime, sysdatetime());
SELECT CAST(@timems AS VARCHAR(10)) + ' ms (disk-based table with interpreted Transact-SQL).';
-- Memory-optimized table queried with interpreted Transact-SQL.
SET @i = 1;
SET @starttime = sysdatetime();
BEGIN TRAN
WHILE @i <= @rowcount
BEGIN
INSERT INTO [dbo].[InMemTable](c1,c2) VALUES (@i, @c);
SET @i += 1;
END
COMMIT
SET @timems = datediff(ms, @starttime, sysdatetime());
SELECT CAST(@timems AS VARCHAR(10)) + ' ms (memory-optimized table with interpreted Transact-SQL).';
-- Memory-optimized table queried with a natively-compiled stored procedure.
SET @starttime = sysdatetime();
EXEC usp_InsertData @rowcount, @c;
SET @timems = datediff(ms, @starttime, sysdatetime());
SELECT CAST(@timems AS VARCHAR(10)) + ' ms (memory-optimized table with natively-compiled stored procedure).';
(1) SQL Server Data Files in Windows Azure
SQL Server Data Files in Windows Azure enables native support for SQL Server database files stored as Windows Azure Blobs. This feature allows you to create a database in SQL Server running on-premises or in a virtual machine in Windows Azure with a dedicated storage location for your data in Windows Azure Blob Storage. This enhancement especially simplifies moving databases between machines by using detach and attach operations. In addition, it provides an alternative storage location for your database backup files by allowing you to restore from or to Windows Azure Storage. It therefore enables several hybrid solutions, providing benefits for data virtualization, data movement, security and availability, and easy, low-cost maintenance for high availability and elastic scaling.
The following diagram demonstrates that this enhancement enables
you to store SQL Server database files as Windows Azure blobs in Windows Azure
Storage regardless of where your server resides.
When using this new enhancement, you are required to do the following:
· You must create a policy on a container and also generate a shared access signature (SAS) key.
· For each container used by a data or a log file, you must create a SQL Server credential whose name matches the container path.
· You must store the information regarding the Windows Azure Storage container, its associated policy name, and the SAS key in the SQL Server credential store.
-- Create a credential
CREATE CREDENTIAL [https://testdb.blob.core.windows.net/data]
WITH IDENTITY='SHARED ACCESS SIGNATURE',
SECRET = 'your SAS key'
-- Create a database with data and log files in the Windows Azure container.
CREATE DATABASE testdb
ON
( NAME = testdb_dat,
FILENAME =
'https://testdb.blob.core.windows.net/data/TestData.mdf' )
LOG ON
( NAME = testdb_log,
FILENAME = 'https://testdb.blob.core.windows.net/data/TestLog.ldf')
(2) Host a SQL Server Database in a Windows Azure Virtual Machine
Use the Deploy a SQL Server Database to a Windows Azure VM wizard to deploy a database from an instance of the Database Engine to SQL Server in a Windows Azure Virtual Machine (VM). The wizard uses a full database backup operation, so it always copies the complete database schema and the data from a SQL Server user database. The wizard also does all of the Azure VM configuration for you, so no pre-configuration of the VM is required.
You cannot use the wizard for differential
backups because the wizard will not overwrite an existing database that has the
same database name. To replace an existing database on the VM, you must first
drop the existing database or change the database name. If there is a naming
conflict between the database name for an in-flight
deploy operation and an existing database on the VM, the wizard will suggest an
appended database name for the in-flight database to enable you to complete the
operation.
To launch the wizard, use the following steps:
1. Use SQL Server Management Studio to connect to the instance of SQL Server with the database you want to deploy.
2. In Object Explorer, expand the instance name, then expand the Databases node.
3. Right-click the database you want to deploy, select Tasks, and then select Deploy Database to Windows Azure VM…
(3) Backup and Restore Enhancements
SQL Server 2014 contains the following enhancements
for SQL Server Backup and Restore:
1) SQL Server Backup to URL – using SSMS now
SQL Server Backup to URL was introduced in SQL Server 2012 SP1 CU2 and was supported only through Transact-SQL, PowerShell, and SMO. In SQL Server 2014 you can use SQL Server Management Studio to back up to or restore from the Windows Azure Blob storage service. The new option is available both for the Backup task and for maintenance plans. For more information, see Using Backup Task in SQL Server Management Studio, SQL Server Backup to URL Using Maintenance Plan Wizard, and Restoring from Windows Azure storage Using SQL Server Management Studio.
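As a rough Transact-SQL sketch (the storage account, container, credential, and database names below are placeholders), a SQL Server 2014 backup to a URL looks like this:
-- Create a credential that holds the storage account name and access key (placeholder values).
CREATE CREDENTIAL myBackupCredential
WITH IDENTITY = 'mystorageaccount',
     SECRET = '<storage account access key>';
GO
-- Back up directly to a Windows Azure blob.
BACKUP DATABASE AdventureWorks2014
TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/AdventureWorks2014.bak'
WITH CREDENTIAL = 'myBackupCredential', COMPRESSION, STATS = 10;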
========================== Extended Reading ===============================
Using Backup Task in SQL Server Management Studio
The Backup task in SQL Server Management Studio has been enhanced to include URL as one of the destination options, along with other supporting objects required to back up to Windows Azure storage, such as the SQL credential.
The following steps describe the changes made to the Back Up Database task to allow for backing up to Windows Azure storage:
1. Start SQL Server Management Studio and connect to the SQL Server instance. Right-click the database you want to back up, point to Tasks, and select Back Up... This opens the Back Up Database dialog box.
2. On the General page, the URL option is used to create a backup to Windows Azure storage. When you select this option, you see other options enabled on this page:
a. File Name: Name of the backup file.
b. SQL Credential: You can either specify an existing SQL Server credential, or create a new one by clicking Create next to the SQL Credential box.
c. Azure storage container: The name of the Windows Azure storage container to store the backup files.
d. URL prefix: This is built automatically using the information specified in the fields described in the previous steps. If you edit this value manually, make sure it matches the other information you provided previously. For example, if you modify the storage URL, make sure the SQL Credential is set to authenticate to the same storage account.
========================================================================================
2) SQL Server Managed Backup to Windows Azure
Built on SQL Server Backup to URL, SQL Server Managed Backup to Windows Azure is a service that SQL Server provides to manage and schedule database and log backups. In this release, only backup to Windows Azure storage is supported. SQL Server Managed Backup to Windows Azure can be configured at both the database and the instance level, allowing for granular control at the database level and automation at the instance level. It can be configured on SQL Server instances running on-premises and on SQL Server instances running in Windows Azure virtual machines, and it is recommended for SQL Server instances running in Windows Azure virtual machines. For more information, see SQL Server Managed Backup to Windows Azure.
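As a hedged sketch (the database name, retention period, and credential name are placeholders), enabling Managed Backup for a single database uses the smart_admin.sp_set_db_backup procedure in msdb:
USE msdb;
GO
-- Enable managed backup for one database (placeholder names and values).
EXEC smart_admin.sp_set_db_backup
     @database_name = 'TestDB',
     @retention_days = 30,
     @credential_name = 'myBackupCredential',
     @encryption_algorithm = 'NO_ENCRYPTION',
     @enable_backup = 1;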
3) Encryption for Backups – Note: TDE is for the database; this is for backups
You can now choose to encrypt the backup file during a backup operation. Several encryption algorithms are supported, including AES 128, AES 192, AES 256, and Triple DES. You must use either a certificate or an asymmetric key to perform encryption during backup. For more information, see Backup Encryption.
Starting in SQL Server 2014, SQL Server has the ability to encrypt the data while creating a backup. By specifying the encryption algorithm and the encryptor (a certificate or asymmetric key) when creating a backup, you can create an encrypted backup file. All storage destinations, on-premises and Windows Azure storage, are supported. In addition, encryption options can be configured for SQL Server Managed Backup to Windows Azure operations, a new feature introduced in SQL Server 2014.
To encrypt during backup, you must specify an encryption algorithm and an encryptor to secure the encryption key. The following are the supported encryption options:
· Encryption Algorithm: The supported encryption algorithms are AES 128, AES 192, AES 256, and Triple DES.
· Encryptor: A certificate or asymmetric key.
Restoring the encrypted backup: SQL Server restore does not require any
encryption parameters to be specified during restores. It does require that the
certificate or the asymmetric key used to encrypt the backup file be available
on the instance that you are restoring to. The user account performing the
restore must have VIEW DEFINITION permissions on the certificate or
key. If you are restoring the encrypted backup to a different instance, you
must make sure that the certificate is available on that instance.
If you are restoring a backup from a TDE
encrypted database, the TDE certificate should be available on the instance you
are restoring to.
========================== Extended Reading ==============================
Using SQL Server Management Studio
You can encrypt a backup when creating the backup of a database in any of the following dialog boxes:
1. Back Up Database (Backup Options page): On the Backup Options page, you can select Encryption, and specify the encryption algorithm and the certificate or asymmetric key to use for the encryption.
2. Maintenance Plan Wizard: When you select a backup task, on the Options tab of the Define Backup Task page, you can select Backup Encryption, and specify the encryption algorithm and the certificate or key to use for the encryption.
Following is a sample Transact-SQL statement to encrypt the backup file:
BACKUP DATABASE [MYTestDB]
TO DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\Backup\MyTestDB.bak'
WITH
COMPRESSION,
ENCRYPTION
(
ALGORITHM = AES_256,
SERVER CERTIFICATE = BackupEncryptCert
),
STATS = 10
GO
========================================================================================
(4) New Design for Cardinality Estimation
The cardinality estimation logic, called the cardinality estimator,
is re-designed in SQL Server 2014 to improve the quality of query plans, and
therefore to improve query performance. The new cardinality estimator
incorporates assumptions and algorithms that work well on modern OLTP and data
warehousing workloads. It is based on in-depth cardinality estimation research
on modern workloads, and our learnings over the past 15 years of improving the
SQL Server cardinality estimator. Feedback from customers shows that while most
queries will benefit from the change or remain unchanged, a small number might
show regressions compared to the previous cardinality estimator. For
performance tuning and testing recommendations, see Cardinality Estimation (SQL Server).
Example A. New cardinality estimates use an average cardinality for recently added ascending data
This example demonstrates how the new cardinality estimator can improve cardinality estimates for ascending data that exceeds the maximum value captured by the most recent statistics update.
SELECT item, category, amount FROM dbo.Sales AS s WHERE Date = '2013-12-19';
In this example, new rows are added to the Sales table each day, the query asks for sales that occurred on 12/19/2013, and statistics were last updated on 12/18/2013. The previous cardinality estimator assumes the 12/19/2013 values do not exist, since that date exceeds the maximum date in the statistics and statistics have not been updated to include the 12/19/2013 values. This situation, known as the ascending key problem, occurs if you load data during the day and then run queries against the data before statistics are updated.
This behavior has changed. Now, even if statistics have not been updated for the most recent ascending data added since the last statistics update, the new cardinality estimator assumes the values exist and uses the average cardinality for each value in the column as the cardinality estimate.
The example is provided as conceptual information. No action is required on your part to change the way you design databases and queries.
(5) Delayed Durability
SQL Server 2014 introduces the ability to reduce latency by
designating some or all transactions as delayed durable. A delayed durable transaction
returns control to the client before the transaction log record is written to
disk. Durability can be controlled at the database level, COMMIT level,
or ATOMIC block level.
Control Transaction Durability
SQL Server transaction commits can be either fully durable, the SQL Server default, or delayed durable (also known as lazy commit).
Fully durable transaction commits are synchronous and report a commit as successful and return control to the client only after the log records for the transaction are written to disk. Delayed durable transaction commits are asynchronous and report a commit as successful before the log records for the transaction are written to disk. Writing the transaction log entries to disk is required for a transaction to be durable. Delayed durable transactions become durable when the transaction log entries are flushed to disk.
========================== Extended Reading ================================
Full vs. Delayed Transaction Durability
Both full and delayed transaction durability have their advantages and disadvantages. An application can have a mix of fully durable and delayed durable transactions. You should carefully consider your business needs and how each fits into those needs.
Fully durable transactions write the transaction log to disk before returning control to the client. You should use fully durable transactions whenever:
· Your system cannot tolerate any data loss. See the section When can I lose data? for information on when you can lose some of your data.
· The bottleneck is not due to transaction log write latency.
Delayed transaction durability reduces the latency due to log I/O by keeping the transaction log records in memory and writing to the transaction log in batches, thus requiring fewer I/O operations. Delayed transaction durability potentially reduces log I/O contention, thus reducing waits in the system.
Full Transaction Durability Guarantees
· Once the transaction commit succeeds, the changes made by the transaction are visible to the other transactions in the system. See the topic Transaction Isolation Levels for more information.
· Durability is guaranteed on commit. The corresponding log records are persisted to disk before the transaction commit succeeds and returns control to the client.
Delayed Transaction Durability
Delayed transaction durability is accomplished using asynchronous log writes to disk. Transaction log records are kept in a buffer and written to disk when the buffer fills or a buffer flushing event takes place. Delayed transaction durability reduces both latency and contention within the system because:
· The transaction commit processing does not wait for log I/O to finish before returning control to the client.
· Concurrent transactions are less likely to contend for log I/O; instead, the log buffer can be flushed to disk in larger chunks, reducing contention and increasing throughput.
When to use delayed transaction durability
Some of the cases in which you could benefit from using delayed transaction durability are:
You can tolerate some data loss.
If you can tolerate some data loss, for example, where individual records are not critical as long as you have most of the data, then delayed durability may be worth considering. If you cannot tolerate any data loss, do not use delayed transaction durability.
You are experiencing a bottleneck on transaction log writes.
If your performance issues are due to latency in transaction log writes, your application will likely benefit from using delayed transaction durability.
Your workloads have a high contention rate.
If your system has workloads with a high contention level, much time is lost waiting for locks to be released. Delayed transaction durability reduces commit time and thus releases locks faster, which results in higher throughput.
Delayed Transaction Durability Guarantees
· Once the transaction commit succeeds, the changes made by the transaction are visible to the other transactions in the system.
· Transaction durability is guaranteed only following a flush of the in-memory transaction log to disk. The in-memory transaction log is flushed to disk when:
o A fully durable transaction in the same database makes a change in the database and successfully commits.
o The user executes the system stored procedure sp_flush_log successfully.
o The in-memory transaction log buffer fills up and is automatically flushed to disk.
If a fully durable transaction or sp_flush_log successfully commits, all previously committed delayed durable transactions have been made durable.
The log may also be flushed to disk periodically; however, SQL Server does not provide any durability guarantees other than those for fully durable transactions and sp_flush_log.
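For example, a maintenance job can harden all previously committed delayed durable transactions on demand (a minimal sketch):
-- Force the in-memory transaction log buffer of the current database to disk,
-- making every delayed durable transaction committed so far durable.
EXEC sys.sp_flush_log;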
How to control transaction durability
1) Database level control
You, the DBA, can control whether users can use delayed transaction durability on a database with the following statement. You must set the delayed durability setting with ALTER DATABASE.
ALTER DATABASE … SET DELAYED_DURABILITY = { DISABLED | ALLOWED | FORCED }
DISABLED
[default] With this setting, all transactions that commit on the database are fully durable, regardless of the commit level setting (DELAYED_DURABILITY = { OFF | ON }). There is no need for stored procedure change and recompilation. This allows you to ensure that no data is ever put at risk by delayed durability.
ALLOWED
With this setting, each transaction's durability is determined at the transaction level – DELAYED_DURABILITY = { OFF | ON }. See Atomic block level control – Natively Compiled Stored Procedures and COMMIT level control – Transact-SQL for more information.
FORCED
With this setting, every transaction that commits on the database is delayed durable. Whether the transaction specifies fully durable (DELAYED_DURABILITY = OFF) or makes no specification, the transaction is delayed durable. This setting is useful when delayed transaction durability is useful for a database and you do not want to change any application code.
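A minimal sketch of the database-level setting (the database name is a placeholder):
-- Let individual transactions opt in to delayed durability.
ALTER DATABASE SalesDB SET DELAYED_DURABILITY = ALLOWED;
-- Or force every commit in the database to be delayed durable:
-- ALTER DATABASE SalesDB SET DELAYED_DURABILITY = FORCED;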
2) Atomic block level control – Natively Compiled Stored Procedures
The following code goes inside the atomic block.
Transact-SQL
DELAYED_DURABILITY = { OFF | ON }
OFF
[default] The transaction is fully durable, unless the database option DELAYED_DURABILITY = FORCED is in effect, in which case the commit is asynchronous and thus delayed durable. See Database level control for more information.
ON
The transaction is delayed durable, unless the database option DELAYED_DURABILITY = DISABLED is in effect, in which case the commit is synchronous and thus fully durable. See Database level control for more information.
Example Code:
Transact-SQL
CREATE PROCEDURE <procedureName> …
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS BEGIN ATOMIC WITH
(
DELAYED_DURABILITY = ON,
TRANSACTION ISOLATION LEVEL = SNAPSHOT,
LANGUAGE = N'English'
…
)
END
3) COMMIT level control – Transact-SQL
The COMMIT syntax is extended so you can force delayed transaction durability. If DELAYED_DURABILITY is DISABLED or FORCED at the database level (see above), this COMMIT option is ignored.
Transact-SQL
COMMIT [ { TRAN | TRANSACTION } [ transaction_name | @tran_name_variable ] ] [ WITH ( DELAYED_DURABILITY = { OFF | ON } ) ]
OFF
[default] The transaction COMMIT is fully durable, unless the database option DELAYED_DURABILITY = FORCED is in effect, in which case the COMMIT is asynchronous and thus delayed durable. See Database level control for more information.
ON
The transaction COMMIT is delayed durable, unless the database option DELAYED_DURABILITY = DISABLED is in effect, in which case the COMMIT is synchronous and thus fully durable. See Database level control for more information.
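A minimal sketch of a COMMIT-level opt-in (the table and column names are hypothetical):
BEGIN TRANSACTION;
    -- Low-value logging data that can tolerate loss of the most recent rows.
    INSERT INTO dbo.EventLog (EventText) VALUES (N'page viewed');
-- Return to the client before the log record is hardened to disk.
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);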
========================================================================================
(6) AlwaysOn Enhancements
SQL Server 2014 contains the following enhancements
for AlwaysOn Failover Cluster Instances and AlwaysOn Availability Groups:
· The Add Azure Replica Wizard simplifies creating hybrid solutions for AlwaysOn availability groups. For more information, see Use the Add Azure Replica Wizard (SQL Server).
· The maximum number of secondary replicas is increased from 4 to 8.
· When disconnected from the primary replica or during cluster quorum loss, readable secondary replicas now remain available for read workloads.
· Failover cluster instances (FCIs) can now use Cluster Shared Volumes (CSVs) as cluster shared disks. For more information, see AlwaysOn Failover Cluster Instances (SQL Server).
· A new system function, sys.fn_hadr_is_primary_replica, and a new DMV, sys.dm_io_cluster_valid_path_names, are available (see the sketch after this list).
· The following DMVs were enhanced and now return FCI information: sys.dm_hadr_cluster, sys.dm_hadr_cluster_members, and sys.dm_hadr_cluster_networks.
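A hedged sketch of how the new function and DMV might be queried (the availability database name is a placeholder):
-- Returns 1 when the current instance hosts the primary replica of the database.
SELECT sys.fn_hadr_is_primary_replica('AgDatabase') AS is_primary_replica;
-- Lists Cluster Shared Volume paths that are valid locations for FCI database files.
SELECT * FROM sys.dm_io_cluster_valid_path_names;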
(7) Partition Switching and Indexing
The individual partitions of partitioned tables can now be rebuilt. For more information, see ALTER INDEX (Transact-SQL).
The PARTITION = partition_number clause is not new in SQL Server 2014; it has been available since SQL Server 2008. What is new is the low_priority_lock_wait clause of the ONLINE option. See the next item.
Partial Syntax:
-- SQL Server Syntax (All options except filegroup and filestream apply to Azure SQL Database Code-named Sterling Preview.)
ALTER INDEX { index_name | ALL }
ON <object>
{ REBUILD
[ PARTITION = ALL ]
[ WITH ( <rebuild_index_option> [ ,...n ] ) ]
| [ PARTITION = partition_number
[ WITH ( <single_partition_rebuild_index_option> ) [ ,...n ] ]
]
| DISABLE
| REORGANIZE
[ PARTITION = partition_number ]
[ WITH ( LOB_COMPACTION = { ON | OFF } ) ]
| SET ( <set_index_option> [ ,...n ] )
}
[ ; ]
<rebuild_index_option > ::=
{
PAD_INDEX = { ON | OFF }
| FILLFACTOR = fillfactor
| SORT_IN_TEMPDB = { ON | OFF }
| IGNORE_DUP_KEY = { ON | OFF }
| STATISTICS_NORECOMPUTE = { ON | OFF }
| STATISTICS_INCREMENTAL = { ON | OFF }
| ONLINE = { ON [ ( <low_priority_lock_wait> ) ] | OFF }
| ALLOW_ROW_LOCKS = { ON | OFF }
| ALLOW_PAGE_LOCKS = { ON | OFF }
| MAXDOP = max_degree_of_parallelism
| DATA_COMPRESSION = { NONE | ROW | PAGE | COLUMNSTORE | COLUMNSTORE_ARCHIVE }
[ ON PARTITIONS ( { <partition_number_expression> | <range> }
[ , ...n ] ) ]
}
(8) Managing the Lock Priority of Online Operations
The ONLINE = ON option now contains a WAIT_AT_LOW_PRIORITY option which permits you to specify how long the rebuild process should wait for the necessary locks. The WAIT_AT_LOW_PRIORITY option also allows you to configure the termination of blocking processes related to that rebuild statement. For more information, see ALTER TABLE (Transact-SQL) and ALTER INDEX (Transact-SQL). Troubleshooting information about new types of lock states is available in sys.dm_tran_locks (Transact-SQL) and sys.dm_os_wait_stats (Transact-SQL).
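A hedged sketch of the new option (the index, table, and partition number are placeholders): rebuild one partition online, wait at low priority for up to 5 minutes, and then terminate the blocking sessions if the locks still cannot be acquired.
ALTER INDEX IX_Sales_OrderDateKey ON dbo.Sales
REBUILD PARTITION = 3
WITH (ONLINE = ON (WAIT_AT_LOW_PRIORITY (MAX_DURATION = 5 MINUTES, ABORT_AFTER_WAIT = BLOCKERS)));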
(9) Columnstore Indexes
These new features are available for columnstore indexes:
· Clustered columnstore indexes
Use a clustered columnstore index to improve data compression and query performance for data warehousing workloads that primarily perform bulk loads and read-only queries. Since the clustered columnstore index is updateable, the workload can perform many insert, update, and delete operations.
We view the clustered columnstore index as the standard for storing large data warehousing fact tables, and expect it will be used in most data warehousing scenarios.
For more information, see Columnstore Indexes Described and Using Clustered Columnstore Indexes.
In SQL Server 2012, a columnstore index cannot be clustered; only nonclustered columnstore indexes are available. SQL Server 2014 supports both clustered and nonclustered columnstore indexes. Both use the same in-memory columnstore technology, but they do have differences in purpose and in the features they support.
In SQL Server 2014+, a clustered columnstore index:
· Is available in Enterprise, Developer, and Evaluation editions.
· Is updateable.
· Is the primary storage method for the entire table.
· Has no key columns. All columns are included columns.
· Is the only index on the table. It cannot be combined with any other indexes.
· Can be configured to use columnstore or columnstore archival compression.
· Does not physically store columns in a sorted order. Instead, it stores data to improve compression and performance.
In SQL Server 2012+, a nonclustered columnstore index:
· Can index a subset of columns in the clustered index or heap. For example, it can index the frequently used columns.
· Requires extra storage to store a copy of the columns in the index.
· Is updated by rebuilding the index or switching partitions in and out. It is not updateable by using DML operations such as insert, update, and delete.
· Can be combined with other indexes on the table.
· Can be configured to use columnstore or columnstore archival compression.
· Does not physically store columns in a sorted order. Instead, it stores data to improve compression and performance. Pre-sorting the data before creating the columnstore index is not required, but can improve columnstore compression.
/******** Example 1 - Convert a heap to a clustered columnstore index
This example creates a table as a heap and then converts it to a clustered columnstore index named cci_Simple.
This changes the storage for the entire table from rowstore to columnstore.
*/
CREATE TABLE SimpleTable
(
    ProductKey [int] NOT NULL,
    OrderDateKey [int] NOT NULL,
    DueDateKey [int] NOT NULL,
    ShipDateKey [int] NOT NULL
);
GO
CREATE CLUSTERED COLUMNSTORE INDEX cci_Simple ON SimpleTable
GO
/********** Example 2 - Convert a clustered index to a clustered columnstore index with the same name.
This example creates a table with a clustered index, and then demonstrates the syntax of converting the clustered index to a clustered columnstore index.
This changes the storage for the entire table from rowstore to columnstore.
*/
CREATE TABLE SimpleTable
(
    ProductKey [int] NOT NULL,
    OrderDateKey [int] NOT NULL,
    DueDateKey [int] NOT NULL,
    ShipDateKey [int] NOT NULL
);
GO
CREATE CLUSTERED INDEX cl_simple ON SimpleTable (ProductKey);
GO
CREATE CLUSTERED COLUMNSTORE INDEX cl_simple ON SimpleTable WITH (DROP_EXISTING = ON)
GO
/********** Example 3 - Convert a table that has nonclustered indexes to a clustered columnstore index.
The following example creates a simple rowstore table, stored as a heap, with two nonclustered indexes, and then demonstrates the syntax of converting the rowstore table to a columnstore table.
The nonclustered indexes must be dropped before you create the clustered columnstore index. They cannot be dropped with the DROP_EXISTING option.
*/
--Create the table for use with this example.
CREATE TABLE SimpleTable (
ProductKey [int] NOT NULL,
OrderDateKey [int] NOT NULL,
DueDateKey [int] NOT NULL,
ShipDateKey [int] NOT NULL);
GO
--Create two nonclustered indexes for use with this example.
CREATE INDEX nc1_simple ON SimpleTable (OrderDateKey);
CREATE INDEX nc2_simple ON SimpleTable (DueDateKey);
GO
--Before creating the clustered columnstore index,
--all nonclustered indexes must be dropped.
DROP INDEX SimpleTable.nc1_simple;
DROP INDEX SimpleTable.nc2_simple;
CREATE CLUSTERED COLUMNSTORE INDEX cci_simple ON SimpleTable;
GO
/******** Example 4 - Convert a large fact table from rowstore to columnstore
This example explains how to convert a large fact table from a rowstore table to a columnstore table.
First, create a small table to use in this example.
*/
--Create a rowstore table with a clustered index and a non-clustered index.
CREATE TABLE MyFactTable (
    ProductKey [int] NOT NULL,
    OrderDateKey [int] NOT NULL,
    DueDateKey [int] NOT NULL,
    ShipDateKey [int] NOT NULL )
WITH (
    CLUSTERED INDEX ( ProductKey )
);
--Add a non-clustered index.
CREATE INDEX my_index ON MyFactTable ( ProductKey, OrderDateKey );
--Drop all non-clustered indexes
DROP INDEX my_index ON MyFactTable;
--Drop the clustered index.
/*
Do this only if you want to specify a new name for the index when it is converted to a clustered columnstore index. If you do not drop the clustered index, the new clustered columnstore index will have the same name.
The name of the index might be easier to remember if you use your own name. All rowstore clustered indexes created this way use the default name, which is 'ClusteredIndex_<GUID>'.
*/
--Process for dropping a clustered index.
--First, look up the name of the clustered rowstore index.
SELECT i.name
FROM sys.indexes i
JOIN sys.tables t
    ON i.object_id = t.object_id
WHERE i.type_desc = 'CLUSTERED' AND t.name = 'MyFactTable';
--Drop the clustered rowstore index.
DROP INDEX ClusteredIndex_d473567f7ea04d7aafcac5364c241e09
ON MyFactTable;
--Convert the rowstore table to a columnstore table with a clustered columnstore index.
--Option 1: Convert to columnstore and name the new clustered columnstore index MyCCI.
CREATE CLUSTERED COLUMNSTORE INDEX MyCCI ON MyFactTable;
--Option 2: Convert to columnstore and use the rowstore clustered
--index name for the columnstore clustered index name.
--First, look up the name of the clustered rowstore index.
SELECT i.name
FROM sys.indexes i
JOIN sys.tables t
    ON i.object_id = t.object_id
WHERE i.type_desc = 'CLUSTERED' AND t.name = 'MyFactTable';
--Second, create the clustered columnstore index and
--replace ClusteredIndex_d473567f7ea04d7aafcac5364c241e09
--with the name of your clustered index.
CREATE CLUSTERED COLUMNSTORE INDEX ClusteredIndex_d473567f7ea04d7aafcac5364c241e09
ON MyFactTable
WITH ( DROP_EXISTING = ON );
/******** Example 5 - Defragment a clustered columnstore index
Rebuild a specific partition.
To rebuild a partition of a large clustered columnstore index, use ALTER INDEX REBUILD. You can also use ALTER INDEX with the REBUILD option to rebuild all partitions, or to rebuild a non-partitioned clustered columnstore index.
*/
ALTER INDEX cci_fact3
ON fact3
REBUILD PARTITION = 12;
--Rebuild the entire clustered columnstore index
--There are two ways to rebuild the full clustered columnstore index. You can use CREATE CLUSTERED COLUMNSTORE INDEX, or ALTER INDEX and the REBUILD option. Both methods achieve the same results.
--Determine the clustered columnstore index name of MyFactTable.
SELECT i.object_id, i.name, t.object_id, t.name
FROM sys.indexes i
JOIN sys.tables t
    ON i.object_id = t.object_id
WHERE i.type_desc = 'CLUSTERED COLUMNSTORE' AND t.name = 'MyFactTable';
--Rebuild the entire index by using CREATE CLUSTERED COLUMNSTORE INDEX.
CREATE CLUSTERED COLUMNSTORE INDEX my_CCI
ON MyFactTable
WITH ( DROP_EXISTING = ON );
--Rebuild the entire index by using ALTER INDEX and the REBUILD option.
ALTER INDEX my_CCI
ON MyFactTable
REBUILD PARTITION = ALL;
--Reorganize the clustered columnstore index
--Reorganizing a clustered columnstore index forces all CLOSED rowgroups in the deltastore to be compressed with columnstore compression and moved into the columnstore. This is not required, but can improve performance after a large load in which CLOSED rowgroups are not moving into the columnstore fast enough. For more information, see Columnstore Indexes Described.
ALTER INDEX cci_index ON table1 REORGANIZE;
/******** Example 6 - Convert columnstore tables to rowstore tables
Convert a columnstore table to a rowstore table with a clustered index.
To convert a columnstore table to a rowstore table with a clustered index, use the CREATE INDEX statement with the DROP_EXISTING option.
*/
CREATE CLUSTERED INDEX ci_MyTable
ON MyFactTable (ProductKey)
WITH ( DROP_EXISTING = ON );
--Example 7: Convert a columnstore table to a rowstore heap
--To convert a columnstore table to a rowstore heap, simply drop the clustered columnstore index.
DROP INDEX MyCCI
ON MyFactTable;
· SHOWPLAN
SHOWPLAN displays information about columnstore indexes. The EstimatedExecutionMode and ActualExecutionMode properties have two possible values: Batch (for columnstore indexes) or Row. The Storage property has two possible values: RowStore and ColumnStore.
· Archival data compression
ALTER INDEX … REBUILD has a new COLUMNSTORE_ARCHIVE data compression option that further compresses the specified partitions of a columnstore index. Use this for archival, or for other situations that require a smaller data storage size and can afford more time for storage and retrieval. For more information, see ALTER INDEX (Transact-SQL).
(10) Buffer Pool Extension
The Buffer Pool Extension provides seamless integration of solid-state drives (SSD) as a nonvolatile random access memory (NvRAM) extension to the Database Engine buffer pool to significantly improve I/O throughput.
· Enable and configure the buffer pool extension:
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
(FILENAME = 'F:\SSDCACHE\Example.BPE', SIZE = 50 GB);
· Modify the buffer pool extension configuration
--The following example modifies the size of a buffer pool extension file. The buffer pool extension option must be disabled before any of the parameters are modified.
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION OFF;
GO
EXEC sp_configure 'max server memory (MB)', 12000;
GO
RECONFIGURE;
GO
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
(FILENAME = 'F:\SSDCACHE\Example.BPE', SIZE = 60 GB);
GO
· View the buffer pool extension configuration
-- Return buffer pool extension configuration information.
SELECT path, file_id, state, state_description, current_size_in_kb
FROM sys.dm_os_buffer_pool_extension_configuration;
--Return the number of cached pages in the buffer pool extension file.
SELECT COUNT(*) AS cached_pages_count
FROM sys.dm_os_buffer_descriptors
WHERE is_in_bpool_extension <> 0;
(11) Incremental Statistics
CREATE STATISTICS and related statistics statements now permit per-partition statistics to be created by using the INCREMENTAL option. Related statements allow or report incremental statistics. Affected syntax includes UPDATE STATISTICS, sp_createstats, CREATE INDEX, ALTER INDEX, ALTER DATABASE SET options, DATABASEPROPERTYEX, sys.databases, and sys.stats. For more information, see CREATE STATISTICS (Transact-SQL).
CREATE STATISTICS statistics_name
ON { table_or_indexed_view_name } ( column [ ,...n ] )
[ WHERE <filter_predicate> ]
[ WITH
[ [ FULLSCAN
| SAMPLE number { PERCENT | ROWS }
| STATS_STREAM = stats_stream ] ]
[ [ , ] NORECOMPUTE ]
[ [ , ] INCREMENTAL = { ON | OFF } ]
] ;
INCREMENTAL = { ON | OFF } – a bad name! How about PARTITIONED?
When ON, the statistics created are per-partition statistics. When OFF, the statistics are combined for all partitions. The default is OFF. A sample statement follows the list below.
If per-partition statistics are not supported, an error is generated. Incremental statistics are not supported for the following statistics types:
· Statistics created with indexes that are not partition-aligned with the base table.
· Statistics created on AlwaysOn readable secondary databases.
· Statistics created on read-only databases.
· Statistics created on filtered indexes.
· Statistics created on views.
· Statistics created on internal tables.
· Statistics created with spatial indexes or XML indexes.
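A minimal sketch (the table, column, statistics name, and partition numbers are hypothetical, and dbo.Sales is assumed to be partitioned):
-- Create per-partition statistics on a partitioned table.
CREATE STATISTICS Stat_Sales_OrderDateKey
ON dbo.Sales (OrderDateKey)
WITH FULLSCAN, INCREMENTAL = ON;
-- Later, refresh only the partitions whose data changed.
UPDATE STATISTICS dbo.Sales (Stat_Sales_OrderDateKey)
WITH RESAMPLE ON PARTITIONS (3, 4);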
(12) Resource Governor Enhancements for Physical IO Control
The Resource Governor enables you to specify limits on the amount of CPU, physical IO, and memory that incoming application requests can use within a resource pool. In SQL Server 2014, you can use the new MIN_IOPS_PER_VOLUME and MAX_IOPS_PER_VOLUME settings to control the physical IOs issued for user threads for a given resource pool. For more information, see Resource Governor Resource Pool and CREATE RESOURCE POOL (Transact-SQL).
CREATE RESOURCE POOL pool_name
[ WITH
(
[ MIN_CPU_PERCENT = value ]
[ [ , ] MAX_CPU_PERCENT = value ]
[ [ , ] CAP_CPU_PERCENT = value ]
[ [ , ] AFFINITY {SCHEDULER =
AUTO | ( <scheduler_range_spec> )
| NUMANODE = ( <NUMA_node_range_spec> )} ]
[ [ , ] MIN_MEMORY_PERCENT = value ]
[ [ , ] MAX_MEMORY_PERCENT = value ]
[ [ , ] MIN_IOPS_PER_VOLUME = value ]
[ [ , ] MAX_IOPS_PER_VOLUME = value ]
)
]
[;]
MIN_IOPS_PER_VOLUME = value
Specifies the minimum I/O operations per second (IOPS) per disk volume to reserve for the resource pool. The allowed range for value is from 0 through 2^31-1 (2,147,483,647). Specify 0 to indicate no minimum threshold for the pool. The default is 0.
MAX_IOPS_PER_VOLUME = value
Specifies the maximum I/O operations per second (IOPS) per disk volume to allow for the resource pool. The allowed range for value is from 0 through 2^31-1 (2,147,483,647). Specify 0 to set an unlimited threshold for the pool. The default is 0.
If the MAX_IOPS_PER_VOLUME for a pool is set to 0, the pool is not governed at all and can take all the IOPS in the system even if other pools have MIN_IOPS_PER_VOLUME set. In this case, we recommend that you set the MAX_IOPS_PER_VOLUME value for this pool to a high number (for example, the maximum value 2^31-1) if you want this pool to be governed for IO.
Note: MIN_IOPS_PER_VOLUME and MAX_IOPS_PER_VOLUME specify the minimum and maximum reads or writes per second. These reads or writes can be of any size and do not indicate minimum or maximum throughput.
--These values govern the physical I/O read and write operations that are available for the resource pool.
CREATE RESOURCE POOL PoolAdmin
WITH (
    MIN_IOPS_PER_VOLUME = 20,
    MAX_IOPS_PER_VOLUME = 100
);
The MAX_OUTSTANDING_IO_PER_VOLUME setting of ALTER RESOURCE GOVERNOR sets the maximum outstanding I/O operations per disk volume. You can use this setting to tune IO resource governance to the IO characteristics of a disk volume, and it can be used to limit the number of IOs issued at the SQL Server instance boundary. For more information, see ALTER RESOURCE GOVERNOR (Transact-SQL).
ALTER RESOURCE GOVERNOR
{ DISABLE | RECONFIGURE }
|
WITH ( CLASSIFIER_FUNCTION = { schema_name.function_name | NULL } )
|
RESET STATISTICS
|
WITH ( MAX_OUTSTANDING_IO_PER_VOLUME = value )
[ ; ]
MAX_OUTSTANDING_IO_PER_VOLUME = value
Sets the maximum queued I/O operations per disk volume. These I/O operations can be reads or writes of any size. The maximum value for MAX_OUTSTANDING_IO_PER_VOLUME is 100; it is not a percent. This setting is designed to tune IO resource governance to the IO characteristics of a disk volume. We recommend that you experiment with different values and consider using a calibration tool such as IOMeter or SQLIO to identify the maximum value for your storage subsystem. This setting provides a system-level safety check that allows SQL Server to meet the minimum IOPS for resource pools even if other pools have MAX_IOPS_PER_VOLUME set to unlimited.
ALTER RESOURCE GOVERNOR
WITH (MAX_OUTSTANDING_IO_PER_VOLUME = 20);
(13) Online Index Operation Event Class
The progress report for the online index operation event class
now has two new data columns: PartitionId and PartitionNumber.
For more information, see Progress Report: Online Index Operation Event Class.
Note: The functionality of online index build tracking has been in SQL Server since 2005.
SQL Server allows tracking the progress of online index operations using Profiler or Extended Events. The event is called "Progress Report: Online Index Operation" in Profiler and "progress_report_online_index_operation" in Extended Events.
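A hedged sketch of an Extended Events session that captures this event (the session and file names are placeholders):
CREATE EVENT SESSION TrackOnlineIndexOps ON SERVER
ADD EVENT sqlserver.progress_report_online_index_operation
ADD TARGET package0.event_file (SET filename = N'TrackOnlineIndexOps');
GO
ALTER EVENT SESSION TrackOnlineIndexOps ON SERVER STATE = START;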
(14) Database Compatibility Level
The 90 compatibility level is not valid in SQL Server 2014. For more information, see ALTER DATABASE Compatibility Level (Transact-SQL)
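As a minimal sketch (the database name is a placeholder), a database carried forward from an earlier version can be moved to the SQL Server 2014 level:
ALTER DATABASE SalesDB SET COMPATIBILITY_LEVEL = 120;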
(15) Inline specification of CLUSTERED and NONCLUSTERED – syntax sugar
Inline specification of CLUSTERED and NONCLUSTERED indexes is now allowed for disk-based tables. Creating a table with inline indexes is equivalent to issuing a CREATE TABLE followed by corresponding CREATE INDEX statements. Included columns and filter conditions are not supported with inline indexes.
-- These two create a one-column table with a NONCLUSTERED index called ix_1 on column c1.
CREATE TABLE t1( c1 int, INDEX ix_1 NONCLUSTERED (c1))
CREATE TABLE t2( c1 int INDEX ix_1 NONCLUSTERED (c1))
-- This one creates an index on c2 that is NONCLUSTERED, as you can see: there is no comma after c2 and no (column) at the end of the NONCLUSTERED specification.
CREATE TABLE t3( c1 int, c2 int INDEX ix_1 NONCLUSTERED)
CREATE TABLE t4( c1 int, c2 int, INDEX ix_1 NONCLUSTERED (c1,c2))
(16) SELECT … INTO
The SELECT … INTO statement is improved and can now operate
in parallel. The database compatibility level must be at least 110.
See a demo on http://blogs.technet.com/b/italian_premier_center_for_sql_server/archive/2013/07/24/sneak-peek-on-sql-2014.aspx
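A minimal sketch (the table names are placeholders), assuming the database is already at compatibility level 110 or higher:
-- The insert into the new table can now use a parallel plan.
SELECT s.* INTO dbo.Sales_Staging FROM dbo.Sales AS s;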
(17) System View Enhancements
1) sys.xml_indexes
sys.xml_indexes (Transact-SQL) has 3 new columns: xml_index_type, xml_index_type_description, and path_id.
2) sys.dm_exec_query_profiles
sys.dm_exec_query_profiles (Transact-SQL) monitors real-time query progress while a query is in execution.
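A hedged sketch of polling it from a second session (the session_id value is a placeholder, and the monitored query must be running with, for example, SET STATISTICS PROFILE ON):
SELECT session_id, node_id, physical_operator_name, row_count, estimate_row_count
FROM sys.dm_exec_query_profiles
WHERE session_id = 53
ORDER BY node_id;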
3) sys.column_store_row_groups
sys.column_store_row_groups (Transact-SQL) provides clustered columnstore
index information on a per-segment basis to help the administrator make system
management decisions.
4) sys.databases
sys.databases (Transact-SQL) has 3 new columns: is_auto_create_stats_incremental_on, is_query_store_on, and resource_pool_id.
5) System View Enhancements for In-Memory OLTP
For
information about system view enhancements to support In-Memory OLTP, see System
Views, Stored Procedures, DMVs and Wait Types for In-Memory OLTP.
(18) Security Enhancements
1) CONNECT ANY DATABASE Permission
A new server-level permission. Grant CONNECT ANY DATABASE to a login that must connect to all databases that currently exist and to any new databases that might be created in the future. It does not grant any permission in any database beyond connect. Combine with SELECT ALL USER SECURABLES or VIEW SERVER STATE to allow an auditing process to view all data or all database states on the instance of SQL Server.
See a demo on http://blog.dbandbi.com/tag/connect-any-database/
2) IMPERSONATE ANY LOGIN Permission
A new server-level permission. When granted, it allows a middle-tier process to impersonate the account of clients connecting to it, as it connects to databases. When denied, a highly privileged login can be blocked from impersonating other logins. For example, a login with CONTROL SERVER permission can be blocked from impersonating other logins.
3) SELECT ALL USER SECURABLES Permission
A new server-level permission. When granted, a login such as an auditor can view data in all databases that the user can connect to.
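A minimal sketch of granting the three new permissions (the login names are placeholders):
USE master;
GO
-- Let an auditing login connect to every current and future database...
GRANT CONNECT ANY DATABASE TO AuditLogin;
-- ...and read data in every database it can connect to.
GRANT SELECT ALL USER SECURABLES TO AuditLogin;
-- Block a high-privileged middle-tier login from impersonating other logins.
DENY IMPERSONATE ANY LOGIN TO MidTierLogin;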
(19) Deployment Enhancements
Deploy a SQL Server Database to a Windows Azure Virtual Machine enables deployment of a SQL Server database to a Windows Azure
VM.
================================================================================
Using the Deploy Database to Windows Azure VM Wizard
To launch the wizard, use the following steps:
1. Use SQL Server Management Studio to connect to the instance of SQL Server with the database you want to deploy.
2. In Object Explorer, expand the instance name, then expand the Databases node.
3. Right-click the database you want to deploy, select Tasks, and then select Deploy Database to Windows Azure VM…
================================================================================