(1) Columnstore Indexes
[CTP 2.0] This release offers several new improvements for columnstore indexes, including updateable nonclustered columnstore indexes, columnstore indexes on memory-optimized tables, and many more new features for operational analytics.
See these topics:
· Columnstore Indexes Versioned Feature Summary – includes what's new
· Columnstore Indexes Data Loading
· Columnstore Indexes for Operational Analytics
· Columnstore Indexes for Data Warehousing
· Columnstore Indexes Maintenance Tasks
The following table lists the key features for columnstore indexes and the products in which they are available.
| Columnstore Index Feature | SQL Server 2012 (V11) | SQL Server 2014 (V12) | SQL Server 2016 Community Technology Preview 2 (CTP2) (V13) | Azure SQL Database V12 Premium Edition |
|---|---|---|---|---|
| Batch execution for multi-threaded queries | yes | yes | yes | yes |
| Batch execution for single-threaded queries | | | yes | yes |
| Archival compression option | | yes | yes | yes |
| Snapshot isolation and read-committed snapshot isolation | | | yes | yes |
| Specify columnstore index when creating a table | | | yes | yes |
| AlwaysOn supports columnstore indexes | yes | yes | yes | yes |
| AlwaysOn readable secondary supports read-only columnstore indexes | yes | yes | yes | yes |
| AlwaysOn readable secondary supports updateable columnstore indexes | | | yes | |
| Read-only nonclustered columnstore index on heap or btree | yes | yes | yes¹ | yes¹ |
| Updateable nonclustered columnstore index on heap or btree | | | yes | |
| Additional btree indexes allowed on a heap or btree that has a columnstore index | yes | yes | yes | |
| Updateable clustered columnstore index | | yes | yes | yes |
| Btree index on a clustered columnstore index | | | yes | |
| Columnstore index on a memory-optimized table | | | yes | |
| Nonclustered columnstore index definition supports using a filtered condition | | | yes | |
¹ To create a readable nonclustered columnstore index, store the index on a read-only filegroup.
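One of the CTP2 additions listed in the table above is the ability to specify a columnstore index when creating a table. Below is a minimal sketch, with a hypothetical table name, using the inline INDEX syntax:
-- Hypothetical fact table with the clustered columnstore index defined inline
-- at CREATE TABLE time, rather than in a separate CREATE INDEX statement.
CREATE TABLE dbo.SalesFact
(
    SaleID int NOT NULL,
    ProductID int NOT NULL,
    SaleAmount money NOT NULL,
    INDEX cci_SalesFact CLUSTERED COLUMNSTORE
);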
(2) In-Memory OLTP
[CTP 2.0] In SQL Server 2016 Community Technology Preview 2 (CTP2), improvements to In-Memory OLTP enable scaling to larger databases and higher throughput in order to support bigger workloads. In addition, a number of limitations concerning tables and stored procedures have been removed to make it easier to migrate your applications to, and leverage the benefits of, In-Memory OLTP.
The following table outlines these new features.
| Features | More information |
|---|---|
| Support for performing ALTER operations on memory-optimized tables and natively compiled stored procedures | |
| Use of MARS (Multiple Active Result Sets) connections to access memory-optimized tables and natively compiled stored procedures. This allows you to leverage In-Memory OLTP in existing applications that rely on MARS | |
| Support for natively compiled, scalar user-defined functions | |
| Complete support for collations | |
| Storage improvements | |
| Enhancements to transaction performance analysis reports | Determining if a Table or Stored Procedure Should Be Ported to In-Memory OLTP |
| Support for nesting of natively compiled stored procedures, and other increases in the Transact-SQL surface area | |
| Support for subqueries in natively compiled stored procedures | |
| PowerShell cmdlet for evaluating the migration fitness of multiple objects in a SQL Server database | |
For more information, see In-Memory OLTP (In-Memory Optimization).
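As an illustration of one of the items in the table, natively compiled scalar user-defined functions use the same NATIVE_COMPILATION and ATOMIC block syntax as natively compiled stored procedures. A minimal sketch, with a hypothetical function name:
-- Hypothetical natively compiled scalar UDF; the ATOMIC block options
-- (snapshot isolation and a language) are required for native compilation.
CREATE FUNCTION dbo.ufn_AddInts (@a int, @b int)
RETURNS int
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'English')
    RETURN @a + @b;
END;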
(3) Live Query Statistics
[CTP 2.0] Management Studio provides the ability to view the live execution plan of an active query. This live query plan provides real-time insight into the query execution process as control flows from one query plan operator to another. For more information, see Live Query Statistics.
(4) Query Store
[CTP 2.0] Query Store is a new feature that provides DBAs with insight into query plan choice and performance. It simplifies performance troubleshooting by enabling you to quickly find performance differences caused by changes in query plans. The feature automatically captures a history of queries, plans, and runtime statistics, and retains these for your review. It separates data by time windows, allowing you to see database usage patterns and understand when query plan changes happened on the server. The Query Store presents information by using a Management Studio dialog box, and lets you force the query to one of the selected query plans. For more information, see Monitoring Performance By Using the Query Store.
==========================Extended Reading===================================
The full name, per Microsoft, is Query Data Store, which makes the meaning easier to understand. Query Data Store in SQL Server 2016 is intended as a service that monitors and optimizes query plans with a full history of query execution. It can help diagnose performance problems that are related to plan changes.
What it does:
1. Stores the history of query plans
2. Captures performance data for each query plan
3. Identifies the most expensive queries and earmarks queries that have degraded or gotten slower over time
4. Gives you the ability to force old plans if a new plan causes performance issues (almost like a System Restore for Windows computers)
5. Easily fixes plan regressions
6. Analyzes workload patterns
7. De-risks SQL Server upgrades, as well as restarts and failovers
Microsoft has yet to communicate whether Query Data Store will be available in the Standard edition of SQL Server 2016, but it will definitely be available in the Enterprise edition.
· By Using the Query Store Page in Management Studio
1. In Object Explorer, right-click a database, and then click Properties.
2. In the Database Properties dialog box, select the Query Store page.
3. In the Enable box, select True.
· By Using Transact-SQL Statements
1. Use the ALTER DATABASE statement to enable the query store. For example:
ALTER DATABASE AdventureWorks2012 SET QUERY_STORE = ON;
Using the Regressed Queries Feature
After enabling the query store, refresh the database portion of the Object Explorer pane to add the Query Store section.
Selecting Regressed Queries opens the Regressed Queries pane in Management Studio. The Regressed Queries pane shows you the queries and plans in the query store. Drop-down boxes at the top let you select queries based on various criteria. Select a plan to see the graphical query plan. Buttons are available to view the source query, force and unforce a query plan, and refresh the display.
To force a plan, select a query and plan, and then click Force Plan. You can only force plans that were saved by the query store feature and are still retained in the query plan cache.
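The same forcing operation is also exposed through system stored procedures. A minimal sketch, assuming hypothetical query_id and plan_id values read from the Regressed Queries pane or the Query Store catalog views:
-- Force the plan with plan_id 79 for the query with query_id 73 (hypothetical IDs).
EXEC sp_query_store_force_plan @query_id = 73, @plan_id = 79;
-- Remove the forced plan later, if needed.
EXEC sp_query_store_unforce_plan @query_id = 73, @plan_id = 79;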
(5) Temporal Tables
[CTP 2.0] A temporal table is a new type of table that provides correct information about stored facts at any point in time. Each temporal table actually consists of two tables, one for the current data and one for the historical data. The system ensures that when the data changes in the table with the current data, the previous values are stored in the historical table. Querying constructs are provided to hide this complexity from users. For more information, see Temporal Tables.
SQL Server 2016 Community Technology Preview 2 (CTP2) introduces support for temporal tables as a database feature that provides built-in support for information about the data stored in a table at any point in time, rather than only the data that is correct at the current moment in time. Temporal is a database feature that was introduced in ANSI SQL 2011 and is now supported in SQL Server 2016 Community Technology Preview 2 (CTP2).
==========================Extended Reading===================================
A temporal table is a table for which a PERIOD definition exists and which contains system columns with a datatype of datetime2 into which the period of validity is recorded by the system, and which has an associated history table into which the system records all prior versions of each record with their period of validity. With a temporal table, the value of each record at any point in time can be determined, rather than just the current value of each record. A temporal table is also referred to as a system-versioned table.
Real data sources are dynamic, and more often than not business decisions rely on insights that analysts can get from data evolution. Use cases for temporal tables include:
· Understanding business trends over time
· Tracking data changes over time
· Auditing all changes to data
· Maintaining a slowly changing dimension for decision support applications
· Recovering from accidental data changes and application errors
System-versioning for a table is implemented as a pair of tables, a current table and a history table. Within each of these tables, two additional datetime2 columns are used to define the period of validity for each record – a system start time (SysStartTime) column and a system end time (SysEndTime) column. The current table contains the current value for each record. The history table contains each previous value for each record, if any, and the start time and end time for the period for which it was valid.
/*
Creating a temporal table

The following examples show how to create a temporal table linked to a new
history table, and how to create a temporal table linked to an existing
history table. Note that the temporal table must have a primary key defined
to be enabled for system versioning.

This example creates a new temporal table linked to a new history table.
*/
CREATE TABLE Department
(
    DepartmentNumber char(10) NOT NULL PRIMARY KEY CLUSTERED,
    DepartmentName varchar(50) NOT NULL,
    ManagerID int NULL,
    ParentDepartmentNumber char(10) NULL,
    SysStartTime datetime2 GENERATED ALWAYS AS ROW START HIDDEN NOT NULL,
    SysEndTime datetime2 GENERATED ALWAYS AS ROW END HIDDEN NOT NULL,
    PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime)
)
WITH (SYSTEM_VERSIONING = ON);
--This example creates a new temporal table linked to an existing history table.
--Existing table
CREATE TABLE Department_History
(
    DepartmentNumber char(10) NOT NULL,
    DepartmentName varchar(50) NOT NULL,
    ManagerID int NULL,
    ParentDepartmentNumber char(10) NULL,
    SysStartTime datetime2 NOT NULL,
    SysEndTime datetime2 NOT NULL
);
--Temporal table
CREATE TABLE Department
(
    DepartmentNumber char(10) NOT NULL PRIMARY KEY CLUSTERED,
    DepartmentName varchar(50) NOT NULL,
    ManagerID int NULL,
    ParentDepartmentNumber char(10) NULL,
    SysStartTime datetime2 GENERATED ALWAYS AS ROW START HIDDEN NOT NULL,
    SysEndTime datetime2 GENERATED ALWAYS AS ROW END HIDDEN NOT NULL,
    PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.Department_History, DATA_CONSISTENCY_CHECK = ON));
Converting a Non-Temporal Table to a Temporal Table
--The following example shows how to add system versioning to an existing table
--and create a future history table. It assumes that there is an existing table
--called InsurancePolicy with a primary key defined. It populates the newly
--created period columns using default values for the start and end times,
--because these values cannot be null, and it uses the HIDDEN clause to ensure
--no impact on existing applications that interact with the current table.

--Alter non-temporal table to define periods for system versioning
ALTER TABLE InsurancePolicy
ADD PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime),
SysStartTime datetime2 GENERATED ALWAYS AS ROW START HIDDEN NOT NULL DEFAULT GETUTCDATE(),
SysEndTime datetime2 GENERATED ALWAYS AS ROW END HIDDEN NOT NULL DEFAULT CONVERT(datetime2, '9999-12-31 23:59:59.9999999');
--Enable system versioning
ALTER TABLE InsurancePolicy
SET (SYSTEM_VERSIONING = ON);
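Once a table is system-versioned, the querying constructs mentioned earlier hide the split between the current and history tables. A minimal sketch against the Department table from the examples above; the date is illustrative only:
-- Return the Department rows as they were valid at the given point in time.
SELECT DepartmentNumber, DepartmentName, ManagerID
FROM Department
FOR SYSTEM_TIME AS OF '2015-09-01 00:00:00';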
=========================================================================================
(6) Backup to Microsoft Azure - using block blobs instead of page blobs
[CTP 2.0] SQL Server backup to URL using the Microsoft Azure Blob storage service now supports using block blobs instead of page blobs. Block blobs have a size limitation of 200 GB per blob, whereas page blobs have a size limitation of 1 TB per blob. With block blobs, you can stripe your backup set across multiple blobs, resulting in a maximum backup size of 12.8 TB. Backup to page blobs does not support striping. In addition, block blobs have superior performance and a lower cost. As a result, striped backups for large databases can experience significantly decreased backup and restore times from Microsoft Azure.
There are two steps required to stripe your backup set to block blobs. First, you must obtain a Shared Access Signature (SAS) token for a container in your Azure Storage account. This SAS token is used to create a SQL Server credential, and it is used again when the Azure container is specified as the backup device. One way to obtain this SAS token is to use Azure PowerShell commands. Then you use the same Transact-SQL or PowerShell backup commands, without specifying the WITH CREDENTIAL clause that was formerly required when backing up to URL. For examples, see Code Examples.
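A minimal sketch of the two steps, with placeholder storage account, container, and SAS token values. The credential name must match the container URL, and the BACKUP statement itself no longer specifies WITH CREDENTIAL:
-- Step 1: create a credential from the SAS token (placeholder values).
CREATE CREDENTIAL [https://mystorage.blob.core.windows.net/mycontainer]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = '<SAS token>';
-- Step 2: stripe the backup set across multiple block blobs in that container.
BACKUP DATABASE AdventureWorks2012
TO URL = 'https://mystorage.blob.core.windows.net/mycontainer/aw_stripe1.bak',
   URL = 'https://mystorage.blob.core.windows.net/mycontainer/aw_stripe2.bak';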
Note: For SQL Server 2016 Community Technology Preview 2 (CTP2), the new backup to URL functionality does not yet have corresponding UI support in SQL Server Management Studio.
[CTP 2.0] SQL Server backup to URL now supports using Azure snapshots to backup databases in which all database files are stored using the Microsoft Azure Blob storage service. For more information, see Snapshot Backups for Database Files in Azure.
(7) Managed Backup
[CTP 2.0] In SQL Server 2016 Community Technology Preview 2 (CTP2), SQL Server Managed Backup to Microsoft Azure uses the new block blob storage for backup files. There are also several changes and enhancements to Managed Backup.
· Support for both automated and custom scheduling of backups.
· Support for backups of system databases.
· Support for databases that use the simple recovery model.
For more information, see SQL Server Managed Backup to Microsoft Azure.
Note: For SQL Server 2016 Community Technology Preview 2 (CTP2), these new managed backup features do not yet have corresponding UI support in SQL Server Management Studio.
(8) Trace flag 4199 behaviors are enabled
[CTP 2.0] In general, you do not need to use trace flag 4199 in SQL Server 2016 Community Technology Preview 2 (CTP2), because most of the query optimizer behaviors controlled by this trace flag are enabled unconditionally under the latest compatibility level (130).
==========================Extended Reading===================================
Versions of Microsoft SQL Server later than SQL Server 2000 Service Pack 3 (SP3) delivered most hotfixes to the query optimizer in an off-by-default state, in order to prevent existing production customers from seeing unexpected plan changes that could cause performance regressions. At first, each hotfix was delivered under a separate trace flag. Later, this practice was changed so that most flags were combined under a single trace flag (4199). This new practice was initiated across multiple versions, starting with the following updates:
· SQL Server 2005 Service Pack 3 (SP3) Cumulative Update
· SQL Server 2008 Service Pack 1 (SP1) Cumulative Update package
· SQL Server 2008 R2 (RTM)
Trace flag 4199 was used to collect hotfixes that were intended to become on-by-default in a future release, whereas other trace flags were used for situations in which a fix was not intended to become on-by-default in its current form. Starting with SQL Server 2016 RTM, the database COMPATIBILITY_LEVEL setting will be used to enable trace flag 4199-related hotfixes on-by-default. This article describes the mechanics and policy of how plan-affecting hotfixes will be delivered for SQL Server 2016 and later versions.
More information
In SQL Server 2016, trace flag 4199 hotfixes that were made to previous releases of SQL Server become enabled under database COMPATIBILITY_LEVEL 130 without trace flag 4199 enabled. Trace flag 4199 will be used to release any future SQL Server 2016 hotfixes for databases that use the 130 compatibility level. Because trace flag 4199 is recommended only for customers who are seeing specific performance issues, customers are advised to remove trace flag 4199 after they migrate their databases to the latest compatibility level, because trace flag 4199 will be reused for future fixes that may not apply to your application and could cause unexpected plan performance changes on a production system. This means that different trace flag 4199 hotfixes are enabled for each compatibility level that is supported in a given product release.
Note: By default, databases that are created in SQL Server 2016 use compatibility level 130 and have the new optimizer logic already enabled.
The major advantage of this model is that it reduces risk for production systems during the upgrade process. This approach separates the installation of a new major version of SQL Server from the automatic enabling of all new query processor changes. Because major version upgrades change the file format and are not reversible, it is a good idea to use the COMPATIBILITY_LEVEL setting, because it enables a customer to quickly downgrade if an unexpected plan performance issue is found during an upgrade. If a customer finds an unexpected plan change that blocks an application upgrade, the customer can ease the situation by applying an appropriate plan hint, by using the Query Store to force the prior plan, or by contacting Microsoft Customer Support for a workaround or hotfix. When all issues are mitigated, the upgrade can continue. Customers should integrate this capability into their upgrade planning for SQL Server 2016.
The following table explains the model for how trace flag 4199 will work starting with SQL Server 2016.

| Setting | SQL compatibility level | Trace flag 4199 | Optimizer hotfixes before SQL Server 2016 RTM | Optimizer hotfixes after SQL Server 2016 RTM |
|---|---|---|---|---|
| 1 | 120 | Off | Disabled | Disabled |
| 2 | 120 | On | Enabled | Disabled |
| 3 | 130 | Off | Enabled by compatibility level | Disabled |
| 4 | 130 | On | Enabled by compatibility level | Enabled by compatibility level |
Note: Setting no. 3 is recommended for customers who are newly upgrading to SQL Server 2016.
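A minimal sketch of setting no. 3 for a hypothetical database: move to compatibility level 130 and leave trace flag 4199 off.
-- Move the database to the latest compatibility level.
ALTER DATABASE [MyDatabase] SET COMPATIBILITY_LEVEL = 130;
-- Verify whether trace flag 4199 is currently enabled on the instance.
DBCC TRACESTATUS (4199);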
For major releases after SQL Server 2016, Microsoft plans to continue using this servicing model for optimizer hotfixes. By default, for each release, any trace flag 4199 hotfixes from the previous release will be enabled in the next compatibility level. This means that the recommended state for customers, after they migrate to the latest compatibility level, is to have trace flag 4199 disabled. Later hotfixes would use trace flag 4199 to enable fixes for customers who have to enable those specific hotfixes in an application. Customers are advised to disable trace flag 4199 after an application is upgraded to the latest compatibility level, to avoid having future optimizer changes enabled on an application unexpectedly.
Note: While many optimizer hotfixes are enabled under trace flag 4199, some use other trace flags. Trace flag 4199 historically covered trace flags that are widely applicable and likely to become enabled by default in a future release. Alternative trace flags are used in hotfixes where the conditions are very specific to only a few customers, where the fix may cause performance regressions in other kinds of applications, or where the internal functionality may see changes before it is ready to become enabled for everyone. Microsoft will continue to use other trace flags as needed to service the product.
Note: This article focuses on the model for releasing trace flag 4199 hotfixes on the latest compatibility level of the latest product. (At publication time, this is SQL Server 2016.) Optimizer hotfixes may be released on older in-market versions of SQL Server or on lower compatibility levels (120 or prior) of SQL Server 2016. Microsoft will evaluate each case and determine whether to use trace flag 4199 or a different trace flag. Because non-hotfix changes are also enabled when a move is made to a later compatibility level, there is no specific promise that no plan changes will occur during an upgrade (with or without trace flag 4199). Customers should always test changes to the compatibility level carefully for production applications, and should use mitigation technologies, such as the Query Store, if there is a plan-choice related performance issue.
=======================================================================================
(9) Multiple TempDB Database Files
[CTP 2.0] Setup adds multiple tempdb data files during the installation of a new instance. Below are the pertinent details to consider:
1. By default, setup adds as many tempdb files as the CPU count or 8, whichever is lower.
2. You can configure the number of tempdb database files by using the new command-line parameter /SQLTEMPDBFILECOUNT. It can be used for unattended and interactive installations. For example:
setup.exe /Q /ACTION="INSTALL" /IACCEPTSQLSERVERLICENSETERMS /FEATURES="SqlEngine" /INSTANCENAME="SQL15" .. /SQLTEMPDBDIR="D:\tempdb" /SQLTEMPDBFILECOUNT="4"
SQLTEMPDBFILECOUNT only accepts an integer value. If the parameter is not given, or the value is <= 0, setup uses the default value, which is the number of (logical) cores on the machine or 8, whichever is lower. If the value is greater than the maximum allowed (the CPU count), the installation fails with an error.
3. You can configure the number of tempdb database files by using the new UI input control on the Database Engine Configuration section.
4. The primary tempdb database file will still be tempdb.mdf. The additional tempdb files are named tempdb_mssql_#.ndf, where # represents a unique number for each additional tempdb database file created during setup. The purpose of this naming convention is to make the files unique. Uninstalling an instance of SQL Server deletes the files that follow the tempdb_mssql_#.ndf naming convention. Do not use the tempdb_mssql_*.ndf naming convention for user database files. For instance, on a machine with 4 logical cores, you would see tempdb.mdf, tempdb_mssql_2.ndf, tempdb_mssql_3.ndf, tempdb_mssql_4.ndf, and one log file, templog.ldf.
5. The RebuildDatabase scenario deletes the system databases and installs them again in a clean state. Because the tempdb file count setting does not persist, the number of tempdb files is not known during setup, so the RebuildDatabase scenario does not know how many tempdb files to re-add. You can provide the number of tempdb files again with the SQLTEMPDBFILECOUNT parameter. If the parameter is not provided, RebuildDatabase adds a default number of tempdb files: as many as the CPU count or 8, whichever is lower.
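After setup completes, a quick way to confirm the tempdb files that were created (and their tempdb_mssql_#.ndf names) is to query the catalog:
-- List every tempdb file and its on-disk location.
SELECT name, physical_name
FROM sys.master_files
WHERE database_id = DB_ID('tempdb');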
(10) Format Query Results as JSON with FOR JSON
[CTP 2.0] Format query results as JSON by adding the FOR JSON clause to a SELECT statement. Use the FOR JSON clause, for example, to delegate the formatting of JSON output from your client applications to SQL Server.
For more info, see Format Query Results as JSON with FOR JSON (SQL Server).
Here is the syntax of the FOR JSON clause.
<<SELECT QUERY>>
FOR JSON AUTO|PATH
    [, ROOT[('<RootName>')]]
    [, INCLUDE_NULL_VALUES]
For more info about the syntax and usage of the FOR JSON clause, see the following topics in this section.
PATH mode example
Here's an example that uses PATH mode with the FOR JSON clause. In PATH mode, you can use the dot syntax – for example, 'Item.Price' – to format nested output. This example also uses the ROOT option to specify a named root element.
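A minimal sketch that reuses the emp table from the AUTO mode example below; the alias dot syntax nests the output, and ROOT names the wrapper element:
-- Nest name and surname under an "emp" object, wrapped in an "employees" root.
SELECT name AS 'emp.name', surname AS 'emp.surname'
FROM emp
FOR JSON PATH, ROOT('employees');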
AUTO mode example
Here's a sample query that uses AUTO mode with the FOR JSON clause. In AUTO mode, the structure of the SELECT statement determines the format of the JSON output. By default, null values are not included in the output. You can use the INCLUDE_NULL_VALUES option to change this behavior.
| Query | Result |
|---|---|
| SELECT name, surname FROM emp FOR JSON AUTO | [ { "name": "John" }, { "name": "Jane", "surname": "Doe" } ] |
(11) Always Encrypted
[CTP 2.0] With Always Encrypted, SQL Server can perform operations on encrypted data, and, best of all, the encryption key resides with the application inside the customer's trusted environment, not on the server. Always Encrypted secures customer data so that DBAs do not have access to plain-text data. Encryption and decryption of data happens transparently at the driver level, minimizing the changes that have to be made to existing applications. For more information, see Always Encrypted (Database Engine).
Always Encrypted is a feature designed to protect sensitive data, such as credit card numbers or national identification numbers (for example, U.S. social security numbers), stored in SQL Server databases. Always Encrypted allows clients to encrypt sensitive data inside client applications and never reveal the encryption keys to SQL Server. As a result, Always Encrypted provides a separation between those who own the data (and can view it) and those who manage the data (but should have no access). By ensuring that on-premises (i.e., on-site) database administrators, cloud database operators, or other high-privileged but unauthorized users cannot access the encrypted data, Always Encrypted enables customers to confidently store sensitive data outside of their direct control. This allows organizations to encrypt data at rest and in use for storage in Azure, to enable delegation of on-premises database administration to third parties, or to reduce security clearance requirements for their own DBA staff.
Always Encrypted makes encryption transparent to applications. An Always Encrypted-enabled driver installed on the client computer achieves this by automatically encrypting and decrypting sensitive data in the SQL Server client application. The driver encrypts the data in sensitive columns before passing the data to SQL Server, and automatically rewrites queries so that the semantics to the application are preserved. Similarly, the driver transparently decrypts data, stored in encrypted database columns, contained in query results.
Always Encrypted is available in SQL Server 2016 Community Technology Preview 2 (CTP2). For a Channel 9 presentation that includes Always Encrypted, see Overview and Roadmap for Microsoft SQL Server Security.
==========================Extended Reading===================================
SSMS - Getting Started with Always Encrypted
To quickly start using Always Encrypted, complete the following steps:
1. Create a new database.
2. Provision Always Encrypted keys with Management Studio.
a. In Management Studio, using Object Explorer, expand your database, expand Security, and then expand Always Encrypted Keys.
b. Right-click Column Encryption Keys, and then click New Column Encryption Key.
c. Enter a name for the new column encryption key and define its corresponding column master key by selecting an existing certificate. To create a new certificate in the Windows Certificate Store, click New in the Column Master Key Definition box.
d. Click Create, and then click OK.
3. Create a table with the required columns, designating which columns will be encrypted and how they are encrypted. Encryption is specified using column definition options in the CREATE TABLE (Transact-SQL) or ALTER TABLE (Transact-SQL) statements.
Note: If you have an existing database with plaintext data, some of which you want to encrypt using Always Encrypted, you can script the schema of the original database, edit the script to designate columns to be encrypted, and then execute the script against the new database to create the new version of the schema that uses Always Encrypted to protect data in selected columns.
4. Optionally, migrate any existing data from the original database using the SQL Server Import and Export Wizard, started on the computer containing the column master key (the certificate) defined earlier in step 2. When connecting to the target database using Always Encrypted, select .Net Framework Data Provider for SqlServer in the Data Source box, and make sure the Column Encryption Setting keyword is set to Enabled, in addition to setting other required keywords (e.g. Data Source, Initial Catalog, and authentication-related keywords). This ensures that data inserted into the columns that are configured as encrypted in your table gets transparently encrypted during the import process.
5. Connect with a client that uses the .Net Framework Data Provider for SqlServer.
a. .NET Framework 4.6 must be installed on the machine hosting your client application. .NET Framework 4.6 is available with SQL Server 2016 Community Technology Preview 2 (CTP2) and is installed with SQL Server Management Studio. If your client application runs on a machine that does not contain SQL Server 2016 Community Technology Preview 2 (CTP2) Management Studio, you need to install .NET Framework 4.6 (for details, see .NET Framework 4.6).
b. Make sure the column master key (the certificate), defined in step 2, is deployed on the client machine.
c. Modify the connection string of your client application, adding the Column Encryption Setting=enabled parameter.
For more information about the client application, see Always Encrypted (client development).
T-SQL - Getting Started with Always Encrypted
The following Transact-SQL creates a column master key, a column encryption key, and a table with encrypted columns.
CREATE COLUMN MASTER KEY DEFINITION MyCMK
WITH (
    KEY_STORE_PROVIDER_NAME = 'MSSQL_CERTIFICATE_STORE',
    KEY_PATH = 'Current User/Personal/f2260f28d909d21c642a3d8e0b45a830e79a1420'
);
---------------------------------------------
CREATE COLUMN ENCRYPTION KEY MyCEK
WITH VALUES
(
    COLUMN MASTER KEY DEFINITION = MyCMK,
    ALGORITHM = 'RSA_OAEP',
    ENCRYPTED_VALUE =
0x01700000016C006F00630061006C006D0061006300680069006E0065002F006D0079002F003200660061006600640038003100320031003400340034006500620031006100320065003000360039003300340038006100350064003400300032003300380065006600620063006300610031006300284FC4316518CF3328A6D9304F65DD2CE387B79D95D077B4156E9ED8683FC0E09FA848275C685373228762B02DF2522AFF6D661782607B4A2275F2F922A5324B392C9D498E4ECFC61B79F0553EE8FB2E5A8635C4DBC0224D5A7F1B136C182DCDE32A00451F1A7AC6B4492067FD0FAC7D3D6F4AB7FC0E86614455DBB2AB37013E0A5B8B5089B180CA36D8B06CDB15E95A7D06E25AACB645D42C85B0B7EA2962BD3080B9A7CDB805C6279FE7DD6941E7EA4C2139E0D4101D8D7891076E70D433A214E82D9030CF1F40C503103075DEEB3D64537D15D244F503C2750CF940B71967F51095BFA51A85D2F764C78704CAB6F015EA87753355367C5C9F66E465C0C66BADEDFDF76FB7E5C21A0D89A2FCCA8595471F8918B1387E055FA0B816E74201CD5C50129D29C015895CD073925B6EA87CAF4A4FAF018C06A3856F5DFB724F42807543F777D82B809232B465D983E6F19DFB572BEA7B61C50154605452A891190FB5A0C4E464862CF5EFAD5E7D91F7D65AA1A78F688E69A1EB098AB42E95C674E234173CD7E0925541AD5AE7CED9A3D12FDFE6EB8EA4F8AAD2629D4F5A18BA3DDCC9CF7F352A892D4BEBDC4A1303F9C683DACD51A237E34B045EBE579A381E26B40DCFBF49EFFA6F65D17F37C6DBA54AA99A65D5573D4EB5BA038E024910A4D36B79A1D4E3C70349DADFF08FD8B4DEE77FDB57F01CB276ED5E676F1EC973154F86
);
---------------------------------------------
CREATE TABLE Customers (
    CustName nvarchar(60)
        ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = MyCEK,
        ENCRYPTION_TYPE = RANDOMIZED,
        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'),
    SSN varchar(11)
        COLLATE Latin1_General_BIN2
        ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = MyCEK,
        ENCRYPTION_TYPE = DETERMINISTIC,
        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'),
    Age int NULL
);
GO
(12) Stretch Database
[CTP 2.0] Stretch Database is a new feature in SQL Server 2016 Community Technology Preview 2 (CTP2) that leverages resources in Microsoft Azure to store and query archival data. Stretch Database automatically archives eligible rows from Stretch-enabled tables and uses computational resources in Azure to offload queries over the archived rows. For more info, see the Stretch Database overview below.
==========================Extended Reading===================================
Stretch Database
Stretch Database lets you archive your historical data transparently and securely. In SQL Server 2016 Community Technology Preview 2 (CTP2), Stretch Database stores your historical data in the Microsoft Azure cloud. After you enable Stretch Database, it silently migrates your historical data to an Azure SQL Database.
You don't have to change existing queries and client apps. You continue to have seamless access to both local and remote data. Your local queries and database operations against current data typically run faster, and you typically enjoy reduced cost and complexity.
If you can make the following statements, Stretch Database may help to meet your requirements and solve your problems:
· I have to keep transactional data for a long time.
· Sometimes I have to query the historical data.
· I have apps, including older apps, that I don't want to update.
· I want to find a way to save money on storage.
· The size of my tables is getting out of control.
· My users say that they want access to historical data, but they only rarely use it.
· I have to keep buying and adding more storage.
· I can't back up or restore such large tables within the SLA.
What kind of databases and tables are candidates for Stretch Database?
Stretch Database targets transactional databases with large amounts of historical data, typically stored in a small number of tables. These tables may contain more than a billion rows.
In SQL Server 2016 Community Technology Preview 2 (CTP2), Stretch Database migrates entire tables. This assumes that you already move historical data into a table that's separate from current data.
Use Stretch Database Advisor, a feature of SQL Server 2016 Upgrade Advisor, to identify databases and tables for Stretch Database. For more info, see Identify databases and tables for Stretch Database.
What happens when you enable Stretch Database?
After you enable Stretch Database for a local server instance, a database, and at least one table, it silently begins to migrate your historical data to an Azure SQL Database. You can pause data migration to troubleshoot problems on the local server or to maximize the available network bandwidth.
Migration. Stretch Database ensures that no data is lost if a failure occurs during migration. It also has retry logic to handle connection issues that may occur during migration. A dynamic management view provides the status of migration.
Querying. You don't have to change existing queries and client apps. You continue to have seamless access to both local and remote data, even during data migration. There is a small amount of latency for remote queries, but you only encounter this latency when you query the historical data that's archived remotely.
For an overview of the steps to enable, use, and manage Stretch Database, see Walkthrough: Set up remote data archiving with Stretch Database.
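Before the walkthrough's database- and table-level steps, Stretch Database must be enabled at the instance level. A minimal sketch of that first step, using the documented sp_configure option:
-- Enable the instance-level Stretch Database setting.
EXEC sp_configure 'remote data archive', 1;
RECONFIGURE;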
[CTP 2.2] You can now add security policies to a table that has been configured for Stretch Database. You can also configure a table that uses Row-Level Security for Stretch Database.
[CTP 2.2] Stretch Database Advisor is now available as a component of SQL Server 2016 Upgrade Advisor Preview 1. You can download Upgrade Advisor Preview 1 here, or you can install it by using the Web Platform Installer. Stretch Database Advisor helps you to adopt Stretch Database by analyzing existing database tables, based on adjustable table size thresholds, to identify candidates for Stretch Database. For more info, see Identify databases and tables for Stretch Database.
(13) Polybase Support
PolyBase allows you to use T-SQL statements to access data stored in Hadoop or Azure Blob Storage and query it in an ad-hoc fashion. It also lets you query semi-structured data and join the results with relational data sets stored in SQL Server. PolyBase is optimized for data warehousing workloads and intended for analytical query scenarios.
==========================Extended Reading===================================
1. Challenges and solutions
PolyBase addresses one of the main customer pain points in data warehousing: accessing distributed datasets. With the increasing volumes of unstructured or semi-structured data sets, users are storing data sets in more cost-effective distributed and scalable systems, such as Hadoop and cloud environments (for example, Azure storage). DBAs or system administrators are not required to delete data anymore and do not need to spend significant time on capacity planning. However, with the increasing adoption of such new systems, it becomes harder to build applications over them. Data is distributed and located in heterogeneous systems, making it both difficult and time-consuming to access. New skill sets are required to perform simple operations and to combine data sets or applications to get new insights. PolyBase addresses this challenge as follows:
1. PolyBase allows you to use T-SQL statements to access and query, in an ad-hoc fashion, data stored in Hadoop or Azure Blob Storage. For this purpose, it introduces three main T-SQL constructs, fully integrated in SQL Server as first-class SQL objects: external file formats, external data sources, and external tables (see the sketch after this list).
2. PolyBase allows you to use T-SQL to store data originating in Hadoop or Azure Blob storage as regular tables. Users and applications can leverage SQL Server's mature features, such as column-store technology, for example, for frequent BI reports. There is no need for a separate ETL or import tool.
3. PolyBase integrates seamlessly with all BI tools. Use it with Microsoft's BI products, such as SQL Server Reporting Services, SQL Server Analysis Services, PowerPivot, PowerQuery, and PowerView. Or use third-party BI tools, such as Tableau, MicroStrategy, or Cognos.
4. PolyBase does not require any additional software installed on the user's Hadoop or Azure environment. It is fully transparent for the end user – no knowledge about Hadoop or Azure is required to use PolyBase successfully.
5. PolyBase leverages a feature of Hadoop: it pushes the computation to where the data resides instead of pulling all data into SQL Server for processing. Based on statistics and corresponding costs, SQL Server decides when to generate map jobs on the fly, to be executed within Hadoop. This is also transparent to the actual end user or application.
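Here is the sketch referenced in point 1, showing the three first-class T-SQL objects; the Hadoop address, file layout, and table definition are all hypothetical.
-- Hypothetical external data source pointing at a Hadoop cluster.
CREATE EXTERNAL DATA SOURCE MyHadoopCluster
WITH (TYPE = HADOOP, LOCATION = 'hdfs://10.10.10.10:8020');
-- Hypothetical file format describing pipe-delimited text files.
CREATE EXTERNAL FILE FORMAT PipeDelimitedText
WITH (FORMAT_TYPE = DELIMITEDTEXT, FORMAT_OPTIONS (FIELD_TERMINATOR = '|'));
-- Hypothetical external table over the files in the /sensordata/ directory.
CREATE EXTERNAL TABLE dbo.SensorReadings
(
    SensorID int,
    ReadingTime datetime2,
    ReadingValue float
)
WITH (LOCATION = '/sensordata/', DATA_SOURCE = MyHadoopCluster, FILE_FORMAT = PipeDelimitedText);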
2. Applications and scenarios for PolyBase

| Application | Scenario | Benefits |
|---|---|---|
| Ad-hoc query scenario (combining relational and semi-structured data) | Semi-structured data sets stored in Hadoop or Azure Storage; relational data set stored in SQL Server | Use of mature T-SQL to combine data sets via various JOIN expressions; improved query performance through push-down computation for Hadoop data sources; no sophisticated knowledge about Hadoop internals needed |
| Frequent BI reports | Focus on a subset of data stored in Hadoop or Azure storage; no frequent changes on this subset of data; series of reports needed on this subset of data | Parallelized import from Hadoop or Azure storage into SQL Server; ability to create a column-store table on the fly via T-SQL to leverage SQL Server's world-class column-store technology; minimized time-to-insights; no maintenance of a separate ETL tool and code necessary |
| Ad-hoc query scenario (Hadoop data only) | Ad-hoc query against data stored in Hadoop or Azure Blob Storage | Use of mature T-SQL to run queries and combine data stored in different Hadoop systems or Azure storage containers; no sophisticated knowledge about Hadoop internals needed |
3. CTP2 notes and looking forward
In this version (CTP2), PolyBase supports Hadoop offerings from Cloudera and Hortonworks, as well as Azure Blob Storage. PolyBase will increase support for a variety of data formats in future SQL Server versions.
With CTP3, a user will be able to export relational data into Hadoop or Azure Blob storage for archival purposes.
To improve query performance, in CTP3, users will be able to create a cluster of SQL Server instances working together to process large data sets from external data sources, such as Hadoop or Azure Blob Storage, in a scale-out fashion.
(14) Transact-SQL Enhancements
[CTP 2.0] Numerous enhancements support the features described in the other sections of this topic. The following additional enhancements are available.
The TRUNCATE TABLE statement now permits the truncation of specified partitions. For more information, see TRUNCATE TABLE (Transact-SQL).
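A minimal sketch against a hypothetical partitioned table: truncate partition 2 and partitions 4 through 6, leaving the rest of the table intact.
TRUNCATE TABLE dbo.PartitionedFact
WITH (PARTITIONS (2, 4 TO 6));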
ALTER TABLE (Transact-SQL) now allows many alter column actions to be performed while the table remains available.
The full-text index DMV sys.dm_fts_index_keywords_position_by_document (Transact-SQL) returns the location of keywords in documents. This DMV was also added in SQL Server 2012 SP2 and SQL Server 2014 SP1.
A new query hint, NO_PERFORMANCE_SPOOL, can prevent a spool operator from being added to query plans. This can improve performance when many concurrent queries are running with spool operations. For more information, see Query Hints (Transact-SQL).
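A minimal sketch with hypothetical tables, showing where the hint is placed:
-- Ask the optimizer not to introduce a spool operator into this plan.
SELECT o.OrderID, d.ProductID
FROM dbo.Orders AS o
JOIN dbo.OrderDetails AS d ON o.OrderID = d.OrderID
OPTION (NO_PERFORMANCE_SPOOL);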
(15) System View Enhancements
[CTP 2.0] Two new views support row level security. For more information, see sys.security_predicates (Transact-SQL) and sys.security_policies (Transact-SQL).
Seven new views support the Query Store feature. For more information, see Query Store Catalog Views (Transact-SQL).
24 new columns are added to sys.dm_exec_query_stats (Transact-SQL) to provide information about memory grants.
Two new query hints (MIN_GRANT_PERCENT and MAX_GRANT_PERCENT) are added to specify memory grants. See Query Hints (Transact-SQL).
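A minimal sketch with a hypothetical table, capping the memory grant for a query between 10 and 25 percent of the configured maximum:
SELECT ProductID, COUNT(*) AS SaleCount
FROM dbo.BigSalesTable
GROUP BY ProductID
OPTION (MIN_GRANT_PERCENT = 10, MAX_GRANT_PERCENT = 25);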
[CTP 2.2] sys.dm_exec_session_wait_stats (Transact-SQL) provides a per-session report similar to the server-wide sys.dm_os_wait_stats (Transact-SQL).
(16) Security Enhancements
1) Row-Level Security
[CTP 2.0] Row-level security introduces predicate-based access control. It features a flexible, centralized, predicate-based evaluation that can take into consideration metadata (such as labels) or any other criteria the administrator determines as appropriate. The predicate is used as a criterion to determine whether or not the user has the appropriate access to the data, based on user attributes. Label-based access control can be implemented by using predicate-based access control. For more information, see Row-Level Security below.
Implement RLS by using the CREATE SECURITY POLICY Transact-SQL statement, and predicates created as inline table-valued functions.
This short example creates three users, creates and populates a table with 6 rows, then creates an inline table-valued function and a security policy for the table. The example shows how SELECT statements are filtered for the various users.
Create three user accounts that will demonstrate different access capabilities.
CREATE USER Manager WITHOUT LOGIN;
CREATE USER Sales1 WITHOUT LOGIN;
CREATE USER Sales2 WITHOUT LOGIN;
Create a simple table to hold data.
CREATE TABLE Sales
(
OrderID int,
SalesRep sysname,
Product varchar(10),
Qty int
);
Populate the table with 6 rows of data, showing 3 orders for each sales representative.
INSERT Sales VALUES
(1, 'Sales1', 'Valve', 5),
(2, 'Sales1', 'Wheel', 2),
(3, 'Sales1', 'Valve', 4),
(4, 'Sales2', 'Bracket', 2),
(5, 'Sales2', 'Wheel', 5),
(6, 'Sales2', 'Seat', 5);
-- View the 6 rows in the table
SELECT * FROM Sales;
Grant read access on the table to each of the users.
GRANT SELECT ON Sales TO Manager;
GRANT SELECT ON Sales TO Sales1;
GRANT SELECT ON Sales TO Sales2;
Create a new schema, and an inline table-valued function. The function returns 1 when a row in the SalesRep column is the same as the user executing the query (@SalesRep = USER_NAME()), or if the user executing the query is the Manager user (USER_NAME() = 'Manager').
CREATE SCHEMA Security;
GO
CREATE FUNCTION Security.fn_securitypredicate(@SalesRep AS sysname)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS fn_securitypredicate_result
WHERE @SalesRep = USER_NAME() OR USER_NAME() = 'Manager';
Create a security policy, adding the function as a filter predicate. The state must be set to ON to enable the policy.
CREATE SECURITY POLICY SalesFilter
ADD FILTER PREDICATE Security.fn_securitypredicate(SalesRep)
ON dbo.Sales
WITH (STATE = ON);
Now test the filtering predicate by selecting from the Sales table as each user.
EXECUTE AS USER = 'Sales1';
SELECT * FROM Sales;
REVERT;
EXECUTE AS USER = 'Sales2';
SELECT * FROM Sales;
REVERT;
EXECUTE AS USER = 'Manager';
SELECT * FROM Sales;
REVERT;
The Manager should see all 6 rows. The Sales1 and Sales2 users should see only their own sales.
Alter the security policy to disable it.
ALTER SECURITY POLICY SalesFilter
WITH (STATE = OFF);
Now the Sales1 and Sales2 users can see all 6 rows.
2) Dynamic Data Masking
[CTP 2.0] Dynamic data masking limits sensitive data exposure by masking it to non-privileged users. Dynamic data masking helps prevent unauthorized access to sensitive data by enabling customers to designate how much of the sensitive data to reveal, with minimal impact on the application layer. It's a policy-based security feature that hides the sensitive data in the result set of a query over designated database fields, while the data in the database is not changed. For more information, see Dynamic Data Masking.
--The following example creates a table with three different types of dynamic data masks.
--The example populates the table and selects from it to show the result.
CREATE TABLE Membership
(MemberID int IDENTITY PRIMARY KEY,
FirstName varchar(100) MASKED WITH (FUNCTION = 'partial(1,"XXXXXXX",0)') NULL,
LastName varchar(100) NOT NULL,
Phone# varchar(12) MASKED WITH (FUNCTION = 'default()') NULL,
Email varchar(100) MASKED WITH (FUNCTION = 'email()') NULL);
INSERT Membership (FirstName, LastName, Phone#, Email) VALUES
('Roberto', 'Tamburello', '555.123.4567', 'RTamburello@contoso.com'),
('Janice', 'Galvin', '555.123.4568', 'JGalvin@contoso.com.co'),
('Zheng', 'Mu', '555.123.4569', 'ZMu@contoso.net');
/*
MemberID FirstName LastName Phone# Email
1 Roberto Tamburello 555.123.4567 RTamburello@contoso.com
2 Janice Galvin 555.123.4568 JGalvin@contoso.com.co
3 Zheng Mu 555.123.4569 ZMu@contoso.net
*/
SELECT * FROM Membership;
--A new user is created and granted SELECT permission on the table.
--Queries executed as TestUser view masked data.
CREATE USER TestUser WITHOUT LOGIN;
GRANT SELECT ON Membership TO TestUser;
/*
MemberID FirstName LastName Phone# Email
1 RXXXXXXX Tamburello xxxx RXXX@XXXX.com
2 JXXXXXXX Galvin xxxx JXXX@XXXX.com
3 ZXXXXXXX Mu xxxx ZXXX@XXXX.com
*/
EXECUTE AS USER = 'TestUser';
SELECT * FROM Membership;
REVERT;
3) New Permissions
[CTP 2.0] The ALTER ANY SECURITY POLICY permission is available as part of the implementation of row-level security.
The ALTER ANY MASK and UNMASK permissions are available as part of the implementation of dynamic data masking.
The ALTER ANY COLUMN ENCRYPTION KEY, VIEW ANY COLUMN ENCRYPTION KEY, ALTER ANY COLUMN MASTER KEY DEFINITION, and VIEW ANY COLUMN MASTER KEY DEFINITION permissions are available as part of the implementation of the Always Encrypted feature.
The ALTER ANY EXTERNAL DATA SOURCE and ALTER ANY EXTERNAL FILE FORMAT permissions are visible in SQL Server 2016 Community Technology Preview 2 (CTP2), but only apply to the Analytics Platform System (SQL Data Warehouse).
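A minimal sketch granting two of the new permissions to hypothetical database principals:
-- Allow a security administrator to manage row-level security policies.
GRANT ALTER ANY SECURITY POLICY TO SecurityPolicyManager;
-- Allow a privileged user to see unmasked data despite dynamic data masking.
GRANT UNMASK TO ReportingUser;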
4) Transparent Data Encryption
[CTP 2.0] Transparent Data Encryption has been enhanced with support for Intel AES-NI hardware acceleration of encryption. This will reduce the CPU overhead of turning on Transparent Data Encryption.
(17) High Availability Enhancements
[CTP 2.0] Load-balancing of read-intent connection requests is now supported across a set of read-only replicas. The previous behavior always directed connections to the first available read-only replica in the routing list. For more information, see Configure load-balancing across read-only replicas.
The number of replicas that support automatic failover has been increased from two to three.
Group Managed Service Accounts are now supported for AlwaysOn Failover Clusters. For more information, see Group Managed Service Accounts. For Windows Server 2012 R2, an update is required to avoid temporary downtime after a password change. To obtain the update, see gMSA-based services can't log on after a password change in a Windows Server 2012 R2 domain.
AlwaysOn Availability Groups supports distributed transactions and the DTC on Windows Server 2016. For more information, see SQL Server 2016 Support for DTC and AlwaysOn Availability Groups.
You can now configure AlwaysOn Availability Groups to fail over when a database goes offline. This change requires setting the DB_FAILOVER option to ON in the CREATE AVAILABILITY GROUP (Transact-SQL) or ALTER AVAILABILITY GROUP (Transact-SQL) statements.
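A minimal sketch for a hypothetical existing availability group:
-- Fail over the availability group when one of its databases goes offline.
ALTER AVAILABILITY GROUP [MyAvailabilityGroup] SET (DB_FAILOVER = ON);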
(18) Tools Enhancements
1) Management Studio
[CTP 2.0] SQL Server Management Studio supports the Active Directory Authentication Library (ADAL), which is under development for connecting to Microsoft Azure. This replaces the certificate-based authentication used in SQL Server 2014 Management Studio.
SQL Server Management Studio installation requires installing .NET 4.6 as a prerequisite. .NET 4.6 is automatically installed by setup when SQL Server Management Studio is selected as an installation choice.
2) Upgrade Advisor
[CTP 2.2] SQL Server 2016 Upgrade Advisor Preview 1 is a standalone tool that enables users of prior versions to run a set of upgrade rules against their SQL Server database to pinpoint breaking changes, behavior changes, and deprecated features, as well as providing help with the adoption of new features such as Stretch Database.
You can download Upgrade Advisor Preview 1 here, or you can install it by using the Web Platform Installer.