Programmability enhancements
in the Database Engine in SQL Server 2012 include FileTables, statistical
semantic search, property-scoped full-text search and customizable proximity
search, ad-hoc query paging, circular arc segment support for spatial types,
support for sequence objects, default support for 15,000 partitions, and
numerous improvements and additions to Transact-SQL.
(1) Native XML Web Services (SOAP/HTTP endpoints) is removed
Beginning in SQL Server 2012,
you can no longer use CREATE ENDPOINT or ALTER ENDPOINT to add or modify
SOAP/HTTP endpoints.
SQL Server 2005 introduced the
concept of endpoints. Every connection entry point into SQL Server is
abstracted as an endpoint. You can see all the endpoints on your SQL Server
instance by using the sys.endpoints metadata view. In addition to those you’d
expect (for TCP/IP, Named Pipes, Shared Memory) there are also endpoints
defined for the dedicated admin connection, Service Broker, Database Mirroring…and
HTTP. The endpoint concept is still with us, but HTTP endpoints, the
endpoints that allow you to expose SQL Server procedures and functions directly
as HTTP web services, are removed in SQL Server 2012, in
favor of Windows Communication Foundation (WCF) or
ASP.NET. In practice, you write and host WCF services in .NET that talk
directly to the SQL database, and then consume those services from the
client applications.
(2) FileTables
The FileTable feature builds
on top of the SQL Server FILESTREAM technology to bring support for the Windows
file namespace and compatibility with Windows applications to the file data
stored in SQL Server. This lets an application integrate its storage and data
management components, and provides integrated SQL Server services (including
full-text search and semantic search) over unstructured data and metadata,
along with easy policy management and administration.
In other words, you can now store files and documents in special tables in SQL
Server, but access them from Windows applications as if they were stored
in the file system, without making any changes to the Windows applications.
========================= Extended Reading ==============================
· Enable the Prerequisites for FileTable - Describes how to enable the prerequisites for creating and using FileTables.
o At the instance level:
EXEC sp_configure filestream_access_level, 2
RECONFIGURE
o At the database level:
USE master
GO
EXEC sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
EXEC sp_configure 'xp_cmdshell', 1;
GO
RECONFIGURE;
GO
EXEC xp_cmdshell 'IF NOT EXIST D:\DemoFileTable MKDIR D:\DemoFileTable';
GO
IF EXISTS (SELECT 1 FROM sys.databases WHERE name = 'DemoFileTable')
BEGIN
    ALTER DATABASE DemoFileTable SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    DROP DATABASE DemoFileTable;
END;
CREATE DATABASE DemoFileTable WITH FILESTREAM
(
    NON_TRANSACTED_ACCESS = FULL,
    DIRECTORY_NAME = N'DemoFileTable'
);
GO
/* Add a filegroup that can be used for FILESTREAM */
ALTER DATABASE DemoFileTable
ADD FILEGROUP DemoFileTable_FG CONTAINS FILESTREAM;
GO
/* Add the folder that needs to be used for the FILESTREAM filegroup. */
ALTER DATABASE DemoFileTable
ADD FILE
(
    NAME = 'DemoFileTable_File',
    FILENAME = 'D:\DemoFileTable\DemoFileTable_File'
)
TO FILEGROUP DemoFileTable_FG;
GO
USE DemoFileTable;
GO
/* Create a FileTable */
CREATE TABLE DemoFileTable AS FILETABLE
WITH
(
    FILETABLE_DIRECTORY = 'DemoFileTableFiles',
    FILETABLE_COLLATE_FILENAME = database_default
);
GO
USE DemoFileTable;
GO
SELECT * FROM DemoFileTable;
GO
USE master
GO
EXEC sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
EXEC sp_configure 'xp_cmdshell', 0;
GO
RECONFIGURE;
GO
· Create, Alter, and Drop FileTables - Describes how to create a new FileTable, or alter or drop an existing FileTable.
· Work with Directories and Paths in FileTables - Describes the directory structure in which the files are stored in FileTables.
· Access FileTables with Transact-SQL - Describes how Transact-SQL data manipulation language (DML) commands work with FileTables.
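Once a FileTable exists, you can discover the Windows share path it is exposed under directly from Transact-SQL. A minimal sketch, assuming the DemoFileTable database and table created above:

```sql
-- Root UNC path of the FILESTREAM share for the current database.
SELECT FileTableRootPath() AS DatabaseRootPath;

-- Root UNC path of one specific FileTable.
SELECT FileTableRootPath(N'dbo.DemoFileTable') AS TableRootPath;

-- Full UNC path of each file currently stored in the FileTable.
SELECT name,
       file_stream.GetFileNamespacePath(1, 0) AS UncPath
FROM dbo.DemoFileTable;
```

Copying a file into the returned UNC path from Windows Explorer makes it appear as a row in the table, and deleting the row removes the file from the share.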
(3) Statistical Semantic Search
Statistical Semantic Search
provides deep insight into unstructured documents stored in SQL Server
databases by extracting statistically relevant key phrases and then, based on
these phrases, identifying similar documents. These results are made available
as structured data through three Transact-SQL rowset functions:
· SEMANTICKEYPHRASETABLE
SEMANTICKEYPHRASETABLE
(
    table,
    { column | (column_list) | * }
    [ , source_key ]
)
· SEMANTICSIMILARITYTABLE
SEMANTICSIMILARITYTABLE
(
    table,
    { column | (column_list) | * },
    source_key
)
· SEMANTICSIMILARITYDETAILSTABLE
SEMANTICSIMILARITYDETAILSTABLE
(
    table,
    source_column,
    source_key,
    matched_column,
    matched_key
)
Semantic search builds upon
the existing full-text search feature in SQL Server but enables new scenarios
that extend beyond syntactical keyword searches.
While full-text search lets you query the words in a
document, semantic search lets you query the meaning of the
document. New scenarios include automatic tag extraction, related content
discovery, and hierarchical navigation across similar content. For example, you
can query the document similarity index to identify resumes that match a job
description. Or, you can query the index of key phrases to build the taxonomy
for an organization, or for a corpus of documents.
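A sketch of the key-phrase scenario, assuming a full-text index with the STATISTICAL_SEMANTICS option already exists on the Resume column of HumanResources.JobCandidate (as in the indexing examples in this section):

```sql
-- Top 10 statistically relevant key phrases across all indexed resumes.
SELECT TOP (10)
       KP.document_key,
       KP.keyphrase,
       KP.score
FROM SEMANTICKEYPHRASETABLE(HumanResources.JobCandidate, Resume) AS KP
ORDER BY KP.score DESC;
```

SEMANTICSIMILARITYTABLE can be queried the same way, passing the key of one document to rank the others by similarity.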
Before you can use Semantic
Search, you have to install, attach, and register an additional database. For
more information, see Install
and Configure Semantic Search.
SELECT SERVERPROPERTY('IsFullTextInstalled')
GO
-- Check whether the semantic database is already installed and registered
SELECT * FROM sys.fulltext_semantic_language_statistics_database
GO
o Run the MSI file
o Attach the DB:
CREATE DATABASE semanticsdb
ON ( FILENAME = 'D:\Program Files\Microsoft Semantic Language Database\semanticsdb.mdf' )
LOG ON ( FILENAME = 'D:\Program Files\Microsoft Semantic Language Database\semanticsdb_log.ldf' )
FOR ATTACH
GO
o Register the semantic DB:
EXEC sp_fulltext_semantic_register_language_statistics_db @dbname = N'semanticsdb'
GO
·
Create a Semantic Index
Statistical Semantic Search uses
the indexes that are created by Full-Text Search, and creates additional
indexes. As a result of this dependency on full-text search, you create a new
semantic index when you define a new full-text index, or when you alter an
existing full-text index. You can create a new semantic index by using Transact-SQL
statements, or by using the Full-Text Indexing Wizard and other dialog boxes in
SQL Server Management Studio.
--Example 1: Create a unique index, full-text index, and semantic index
CREATE FULLTEXT CATALOG ft AS DEFAULT
GO
CREATE UNIQUE INDEX ui_ukJobCand
ON HumanResources.JobCandidate(JobCandidateID)
GO
CREATE FULLTEXT INDEX ON HumanResources.JobCandidate
(
    Resume LANGUAGE 1033 STATISTICAL_SEMANTICS
)
KEY INDEX ui_ukJobCand
WITH STOPLIST = SYSTEM
GO
--Example 2: Add semantic indexing to a column that already has full-text indexing
ALTER FULLTEXT INDEX ON Production.Document
ALTER COLUMN Document
ADD STATISTICAL_SEMANTICS
WITH NO POPULATION
GO
(4) Full-Text Search
1) Property Search
Beginning in SQL Server 2012,
you can configure a full-text index to support property-scoped searching on
properties, such as Author and Title, which are emitted by IFilters. This
form of searching is known as property searching. Whether property searching is
possible on a given type of document depends on whether the corresponding filter (IFilter) extracts search properties
during full-text indexing. Among IFilters that extract a number of document
properties are the IFilters for Microsoft Office 2007 document file types, such
as .docx, .xlsx, and .pptx. For more information, see Search
Document Properties with Search Property Lists.
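As a sketch of how property searching is wired up: you register the properties you care about in a search property list, attach the list to a full-text index, and then scope CONTAINS to a property. The GUID below is the standard Office Summary Information property set, with PROPERTY_INT_ID 2 identifying Title; treat the list name and the Production.Document column as illustrative.

```sql
-- Create a search property list and register the Title property.
CREATE SEARCH PROPERTY LIST DocumentPropertyList;
GO
ALTER SEARCH PROPERTY LIST DocumentPropertyList
ADD 'Title'
WITH ( PROPERTY_SET_GUID = 'F29F85E0-4FF9-1068-AB91-08002B27B3D9',
       PROPERTY_INT_ID = 2,
       PROPERTY_DESCRIPTION = 'Title of the document.' );
GO
-- Associate the list with an existing full-text index
-- (a full repopulation is needed before properties are searchable).
ALTER FULLTEXT INDEX ON Production.Document
SET SEARCH PROPERTY LIST DocumentPropertyList;
GO
-- Property-scoped query: match only within the Title property.
SELECT DocumentNode, Title
FROM Production.Document
WHERE CONTAINS(PROPERTY(Document, 'Title'), 'maintenance');
```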
2) Customizable NEAR
Beginning in SQL Server 2012,
you can customize a proximity search by using the new custom NEAR option
of the CONTAINS predicate or CONTAINSTABLE function. Custom NEAR enables you
to optionally specify the maximum number of non-search terms that separate
the first and last search terms in a match. Custom NEAR also enables you
to optionally specify that words and phrases are matched only if they occur in
the order in which you specify them. For more information, see Search
for Words Close to Another Word with NEAR.
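A sketch of the custom NEAR syntax, using the JobCandidate table from the earlier indexing example (the search terms are arbitrary):

```sql
-- Match rows where 'database' and 'administration' occur with at most 10
-- non-search terms between them, in the given order (TRUE).
SELECT JobCandidateID
FROM HumanResources.JobCandidate
WHERE CONTAINS(Resume, 'NEAR((database, administration), 10, TRUE)');
```

Omitting the third argument (or passing FALSE) matches the terms in either order.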
3) New Word Breakers and Stemmers
Stopwords - SQL Server has a mechanism that discards commonly occurring
strings that do not help the search. These discarded strings are called
stopwords.
Stoplist - Stopwords are managed in databases using stoplists. A stoplist is a
list of stopwords that, when associated with a full-text index, is applied to
full-text queries on that index.
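A minimal sketch of managing a stoplist (the stoplist name and custom stopword are arbitrary):

```sql
-- Create a stoplist seeded from the system stoplist.
CREATE FULLTEXT STOPLIST MyStoplist FROM SYSTEM STOPLIST;
GO
-- Add a custom stopword for English.
ALTER FULLTEXT STOPLIST MyStoplist ADD 'whatever' LANGUAGE 'English';
GO
-- A stoplist is then attached when creating or altering a full-text index:
-- CREATE FULLTEXT INDEX ON ... WITH STOPLIST = MyStoplist;
```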
Word breakers and stemmers perform linguistic analysis on all full-text
indexed data. Linguistic analysis involves finding word boundaries
(word-breaking) and conjugating verbs (stemming). Word breakers and stemmers
are language specific, and the rules for linguistic analysis differ for
different languages. For a given language:
· The word breaker identifies individual words by determining where word
boundaries exist based on the lexical rules of the language. Each word (also
known as a token) is inserted into the full-text index using a compressed
representation to reduce its size.
· The stemmer generates inflectional forms of a particular word based on the
rules of that language (for example, "running", "ran", and "runner" are
various forms of the word "run").
All the word breakers and
stemmers used by Full-Text Search and Semantic Search, with the exception of
the Korean language, are updated in this release. For consistency between the
contents of indexes and the results of queries, we recommend that you
repopulate existing full-text indexes after upgrading.
1. The
third-party word breakers for English that were included with previous releases
of SQL Server have been replaced with Microsoft components. If you have to
retain the previous behavior, see Change
the Word Breaker Used for US English and UK English.
2. The
third-party word breakers for Danish, Polish, and Turkish that were included
with previous releases of SQL Server have been replaced with Microsoft
components. The new components are enabled by default.
3. There
are new word breakers for Czech and Greek. Previous releases of SQL Server
Full-Text Search did not include support for these two languages.
4. The
behavior of the new word breakers has changed. For more information, see Behavior
Changes to Full-Text Search. If you have to retain the previous
behavior, see Revert
the Word Breakers Used by Search to the Previous Version.
5. This release
installs the latest Microsoft word breakers and stemmers, but does not install
the latest Microsoft filters. To download the latest filters, see Microsoft Office 2010
Filter Packs.
(5) New and Enhanced Spatial Features
The new spatial features in
SQL Server 2012 represent a significant
milestone in the evolution of spatial data support in SQL Server. The
support for full globe spatial objects and
for circular arcs on the ellipsoid are industry firsts for relational database
systems. The geography data type has achieved parity with the geometry data type in the functionality and the
variety of methods that it supports. Overall performance, from spatial indexes
to methods, has significantly improved. These and other improvements to spatial
data support represent a significant step forward in the spatial capabilities
of SQL Server.
For a detailed description and
examples of these new spatial features, download the white paper, New Spatial Features in
SQL 2012.
1) Enhancements to spatial data types
1. New circular arcs and related methods
· New subtypes. There are three new subtypes of circular arcs:
o CircularString
o CompoundCurve
o CurvePolygon
· New methods. All existing methods work on these circular objects. The following new methods are also introduced:
o BufferWithCurves() uses circular arcs to construct a buffered object with a greatly reduced number of points compared to STBuffer().
o STNumCurves() and STCurveN() are used for iteration through the list of the circular arc edges.
o STCurveToLine() and CurveToLineWithTolerance() are used for approximating circular arcs with line segments within default and user-specified tolerance.
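A quick sketch of the new subtypes and methods in action:

```sql
-- A half circle from (0,0) through (1,1) to (2,0), as a CircularString.
DECLARE @arc geometry =
    geometry::STGeomFromText('CIRCULARSTRING(0 0, 1 1, 2 0)', 0);
SELECT @arc.STNumCurves() AS NumCurves,                  -- one circular arc edge
       @arc.STCurveToLine().STNumPoints() AS ApproxPoints; -- points in the
                                                           -- line-segment approximation
```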
New and updated methods and aggregates for geometry and geography
· New methods.
o IsValidDetailed() returns a message that can help to identify why a spatial object is not valid.
o HasZ returns 1 (true) if a spatial object contains at least one Z value.
o HasM returns 1 (true) if a spatial object contains at least one M value.
o AsBinaryZM() adds support for Z and M values to the OGC WKB format.
o ShortestLineTo() returns a LineString that represents the shortest distance between two objects.
o STLength() has been updated and now works on both valid and invalid LineStrings.
o MinDbCompatibilityLevel() is a new method used for backward compatibility. It indicates whether spatial objects can be recognized by SQL Server 2008 and SQL Server 2008 R2.
· New aggregates. These aggregates are available only in Transact-SQL, and not in the client-side programming library.
o UnionAggregate
o EnvelopeAggregate
o CollectionAggregate
o ConvexHullAggregate
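For instance, UnionAggregate can union a whole column of shapes in one statement. The table and column names below are hypothetical:

```sql
-- Union all county shapes for one state into a single geometry
-- (dbo.Counties, Shape, and StateName are placeholder names).
SELECT geometry::UnionAggregate(Shape) AS StateShape
FROM dbo.Counties
WHERE StateName = 'Washington';
```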
2. Improved precision.
All constructions and
relations are now done with 48 bits of precision, compared to 27 bits used in
SQL Server 2008 and SQL Server 2008 R2. This can reduce the errors caused by
the rounding of floating-point coordinates.
Enhancements to the geography type
· Full globe. SQL Server now supports spatial objects that are larger than a logical hemisphere. Geography features were restricted to slightly less than a logical hemisphere in SQL Server 2008. In SQL Server 2012, they can now be as big as the entire globe. A new type of object, called FULLGLOBE, can be constructed or received as a result of an operation.
· New methods.
o For invalid objects. The geography type now allows invalid objects to be inserted into a table. STIsValid() and MakeValid() allow invalid geography objects to be detected and corrected in a similar fashion to the geometry type.
o For ring orientation. Geography polygons can now be accepted without regard to ring orientation, although a polygon constructed with the wrong orientation may not represent the area you intended. ReorientObject() can be used to reorient polygon rings for cases where they are constructed with the wrong orientation.
o Geometry methods added for the geography type. The STWithin(), STContains(), STOverlaps(), and STConvexHull() methods were previously available only for the geometry type, but have now been added for the geography type. With the exception of STConvexHull(), these new methods are supported by spatial indexes.
· New SRID. A new spatial reference id (SRID), 104001, has been added to the list of supported spatial reference systems. This new SRID is an Earth unit sphere (a sphere of radius 1) and can be used with the geography type to perform optimized numerical computations when more precise ellipsoidal mathematics are not required.
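A sketch of the unit-sphere SRID: on a sphere of radius 1, STDistance() returns the central angle between two points in radians.

```sql
-- Distance between a point on the equator and the north pole on the unit sphere.
DECLARE @p1 geography = geography::Point(0, 0, 104001);   -- lat 0, long 0
DECLARE @p2 geography = geography::Point(90, 0, 104001);  -- north pole
SELECT @p2.STDistance(@p1) AS CentralAngle;  -- approximately PI/2
```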
2) Spatial performance improvements
1. Spatial index improvements
· New auto grid. A new auto grid spatial index is available for both spatial types (geometry_auto_grid and geography_auto_grid). The new auto grid uses a different strategy to pick the right tradeoff between performance and efficiency. For more information, see CREATE SPATIAL INDEX (Transact-SQL).
· New spatial index hint, SPATIAL_WINDOW_MAX_CELLS. This new spatial hint is critical for fine-tuning query performance using a spatial index. Dense spatial data often requires a higher SPATIAL_WINDOW_MAX_CELLS, whereas sparse spatial data often demands a lower SPATIAL_WINDOW_MAX_CELLS for optimum performance. This hint does not guarantee that a spatial index will be used in the query plan. However, if it is used, this hint will override the default WINDOW_MAX_CELLS parameter.
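A sketch of applying the hint; the table, column, and index names are placeholders:

```sql
-- Force a specific spatial index and raise the window cell budget for a
-- dense dataset (dbo.Cities, Location, SIdx_Cities_Location are placeholders).
DECLARE @searchArea geography = geography::STGeomFromText(
    'POLYGON((-122.4 47.5, -122.2 47.5, -122.2 47.7, -122.4 47.7, -122.4 47.5))',
    4326);
SELECT C.CityName
FROM dbo.Cities AS C
    WITH (INDEX(SIdx_Cities_Location), SPATIAL_WINDOW_MAX_CELLS = 1024)
WHERE C.Location.STIntersects(@searchArea) = 1;
```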
2. Additional performance improvements
· An optimized nearest neighbor query plan is available when certain syntax is used.
· Several other methods have been optimized for common scenarios.
· Spatial aggregates have better performance as a result of improvements that affect all CLR UDT aggregates.
3) Other spatial improvements
1. Spatial helper stored procedures
Two new helper stored
procedures are available. These procedures can be used to evaluate the
distribution of spatial data within a table over a given spatial column.
2. Support for persisted computed columns
UDTs and spatial types can now
be persisted in computed columns.
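A sketch of a persisted computed spatial column (the table is hypothetical):

```sql
-- Materialize a geography point from raw coordinates so it can be
-- indexed and queried without recomputation.
CREATE TABLE dbo.Stores
(
    StoreID   int IDENTITY PRIMARY KEY,
    Latitude  float NOT NULL,
    Longitude float NOT NULL,
    Location AS geography::Point(Latitude, Longitude, 4326) PERSISTED
);
```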
4) Changes in the client-side spatial programming library
· Geometry and geography builders (SqlGeometryBuilder and SqlGeographyBuilder) now support circular arc constructions.
· A new static method, Deserialize(SqlBytes), has been added to both types (SqlGeometry and SqlGeography). This method simplifies deserialization.
(6) Metadata Discovery
1) Use sp_describe_first_result_set, sp_describe_undeclared_parameters, sys.dm_exec_describe_first_result_set, and sys.dm_exec_describe_first_result_set_for_object to replace the SET FMTONLY option for determining the format of a response without actually running the query.
=========================== Extended Reading =======================
Before SQL Server 2012, SET FMTONLY was used to return only metadata to the
client. It could be used to test the format of the response without actually
running the query.
sp_describe_first_result_set or sys.dm_exec_describe_first_result_set - Returns the metadata for the
first possible result set of the Transact-SQL batch.
sp_describe_first_result_set @tsql = N'SELECT object_id, name, type_desc FROM sys.indexes'
SELECT * FROM sys.dm_exec_describe_first_result_set(N'SELECT object_id, name, type_desc FROM sys.indexes', NULL, 0);
sp_describe_undeclared_parameters - Returns a result set that contains metadata about undeclared parameters in a Transact-SQL batch.
sp_describe_undeclared_parameters @tsql = N'SELECT object_id, name, type_desc FROM sys.indexes WHERE object_id = @id OR name = @name'
sys.dm_exec_describe_first_result_set_for_object has the same result set definition as sys.dm_exec_describe_first_result_set (Transact-SQL) and is similar to sp_describe_first_result_set.
CREATE PROC TestProc2
AS
SELECT object_id, name FROM sys.objects;
SELECT name, schema_id, create_date FROM sys.objects;
GO
SELECT * FROM sys.dm_exec_describe_first_result_set_for_object(OBJECT_ID('TestProc2'), 0);
SELECT * FROM sys.dm_exec_describe_first_result_set_for_object(OBJECT_ID('TestProc2'), 1);
GO
DROP PROC TestProc2
=================================================================================
2) EXECUTE Statement
The EXECUTE statement can now
specify the metadata returned from the statement by using the WITH RESULT SETS
argument. For more information, see EXECUTE
(Transact-SQL).
USE AdventureWorks2012;
GO
--Currently the proc has 7 columns
SELECT * FROM sys.dm_exec_describe_first_result_set_for_object(OBJECT_ID('dbo.uspGetEmployeeManagers'), 0);
--Example 1: Using EXECUTE to redefine a single result set
EXEC uspGetEmployeeManagers 16
WITH RESULT SETS
(
    ([Reporting Level] int NOT NULL,
     [ID of Employee] int NOT NULL,
     [Employee First Name] nvarchar(50) NOT NULL,
     [Employee Last Name] nvarchar(50) NOT NULL,
     [Employee ID of Manager] nvarchar(max) NOT NULL,
     [Manager First Name] nvarchar(50) NOT NULL,
     [Manager Last Name] nvarchar(50) NOT NULL)
);
--Example 2: Using EXECUTE to redefine two result sets
--Create the procedure
CREATE PROC Production.ProductList @ProdName nvarchar(50)
AS
-- First result set
SELECT ProductID, Name, ListPrice
FROM Production.Product
WHERE Name LIKE @ProdName;
-- Second result set
SELECT Name, COUNT(S.ProductID) AS NumberOfOrders
FROM Production.Product AS P
JOIN Sales.SalesOrderDetail AS S
    ON P.ProductID = S.ProductID
WHERE Name LIKE @ProdName
GROUP BY Name;
GO
-- Execute the procedure
EXEC Production.ProductList '%tire%'
WITH RESULT SETS
(
    (ProductID int,      -- first result set definition starts here
     Name Name,
     ListPrice money),
    -- comma separates result set definitions
    (Name Name,          -- second result set definition starts here
     NumberOfOrders int)
);
(7) UTF-16 Support
1) UTF-16 Supplementary Characters (SC) Collations
A new family of supplementary
characters (SC) collations can be used with the data types nchar, nvarchar and sql_variant. For
example: Latin1_General_100_CI_AS_SC or, if using a Japanese collation,
Japanese_Bushu_Kakusu_100_CI_AS_SC. These collations encode Unicode characters
in the UTF-16 format. Characters with codepoint values larger than 0xFFFF
require two consecutive 16-bit words. These characters are called supplementary
characters, and the two consecutive 16-bit words are called surrogate pairs. SC
collations can improve searching and sorting by functions that use the Unicode
types nchar and nvarchar. For more information, see Collation
and Unicode Support.
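A sketch showing the difference an SC collation makes. The string below is a single supplementary character (U+20BB7), built from its surrogate pair:

```sql
DECLARE @s nvarchar(10) = NCHAR(0xD842) + NCHAR(0xDFB7);  -- surrogate pair for U+20BB7
SELECT DATALENGTH(@s) AS Bytes,                                -- 4 bytes either way
       LEN(@s COLLATE Latin1_General_100_CI_AS)    AS LenNonSC, -- 2: counts UTF-16 code units
       LEN(@s COLLATE Latin1_General_100_CI_AS_SC) AS LenSC;    -- 1: counts the character once
```

Under an SC collation, string functions such as LEN, SUBSTRING, and CHARINDEX treat a surrogate pair as one character rather than two.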
2) UTF-16 Supplementary Characters (SC) Collation Option for XML
SQL Server 2012 adds a new
collation option – "SC" or "supplementary characters" –
that identifies whether a collation is UTF-16-aware. For more information, see Collation
and Unicode Support. SQL Server 2012 also adds support for this collation option in
the SQL Types XML schema and in other locations where SQL Server exposes or
consumes this information in an XML context. The affected locations are the following:
· SQL Types XML schema. The schema version is now 1.2. The schema is backward-compatible and the target namespace has not changed. The schema now exposes the supplementaryCharacters global attribute.
· XMLSCHEMA directive with FOR XML. The new global attribute is exposed in the inline schemas and instance annotations generated by the XMLSCHEMA directive, alongside similar attributes such as localeId and sqlCompareOptions. This directive is supported with FOR XML in RAW and AUTO modes, but not in EXPLICIT or PATH modes.
· sys.sys XML schema collection. The new global attribute is prepopulated in the built-in sys.sys XML schema collection and is made available implicitly in all other XML schema collections that import the SQL Types XML schema.
· Catalog views. The new global attribute is now listed in the following catalog views:
o sys.xml_schema_components
o sys.xml_schema_attributes
o sys.xml_schema_component_placements
· Upgraded XML schema collections. After upgrade from a previous version of SQL Server, the new global attribute is exposed in all XML schema collections that import the SQL Types XML schema.
· XML column sets. The new global attribute is added to XML column set values that represent sql_variant strings that use the new UTF-16 collations. It can also be applied during inserts and updates to set string values of type sql_variant in sparse columns to use the UTF-16-aware collation.
(8) Ad-hoc Query Paging Implementation
You can specify a range of
rows returned by a SELECT statement based on row offset and row count values
that you provide. This is useful when you want to control the number of rows
sent to a client application for a given query. For more information, see ORDER
BY Clause (Transact-SQL).
================================= Extended Reading ==============================
ORDER BY order_by_expression
    [ COLLATE collation_name ]
    [ ASC | DESC ]
    [ ,...n ]
[ <offset_fetch> ]

<offset_fetch> ::=
{
    OFFSET { integer_constant | offset_row_count_expression } { ROW | ROWS }
    [
        FETCH { FIRST | NEXT } { integer_constant | fetch_row_count_expression } { ROW | ROWS } ONLY
    ]
}
USE AdventureWorks2012;
GO
-- Skip the first 5 rows from the sorted result set and return all remaining rows.
SELECT DepartmentID, Name, GroupName
FROM HumanResources.Department
ORDER BY DepartmentID
OFFSET 5 ROWS;
-- Skip 0 rows and return only the first 10 rows from the sorted result set.
SELECT DepartmentID, Name, GroupName
FROM HumanResources.Department
ORDER BY DepartmentID
OFFSET 0 ROWS
FETCH NEXT 10 ROWS ONLY;
-- Specifying variables for OFFSET and FETCH values
DECLARE @StartingRowNumber tinyint = 1
      , @FetchRows tinyint = 8;
SELECT DepartmentID, Name, GroupName
FROM HumanResources.Department
ORDER BY DepartmentID ASC
OFFSET @StartingRowNumber ROWS
FETCH NEXT @FetchRows ROWS ONLY;
===========================================================================
(9) Sequence Objects
A sequence object is a
user-defined schema-bound object that generates a sequence of numeric values
according to the specification with which the sequence was created. It operates
similar to an identity column, but sequence
numbers are not restricted to use in a single table. For more
information, see Sequence
Numbers.
-- Using a sequence number in a single table
--Create the Test schema
CREATE SCHEMA Test;
GO
-- Create a table
CREATE TABLE Test.Orders
    (OrderID int PRIMARY KEY,
     Name varchar(20) NOT NULL,
     Qty int NOT NULL);
GO
-- Create a sequence
CREATE SEQUENCE Test.CountBy1
    START WITH 1
    INCREMENT BY 1;
GO
-- Insert three records
INSERT Test.Orders (OrderID, Name, Qty)
    VALUES (NEXT VALUE FOR Test.CountBy1, 'Tire', 2);
INSERT Test.Orders (OrderID, Name, Qty)
    VALUES (NEXT VALUE FOR Test.CountBy1, 'Seat', 1);
INSERT Test.Orders (OrderID, Name, Qty)
    VALUES (NEXT VALUE FOR Test.CountBy1, 'Brake', 1);
GO
-- View the table
/*
OrderID  Name   Qty
1        Tire   2
2        Seat   1
3        Brake  1
*/
SELECT * FROM Test.Orders;
GO
--Using a sequence number in multiple tables
CREATE SCHEMA Audit;
GO
CREATE SEQUENCE Audit.EventCounter
    AS INT
    START WITH 1
    INCREMENT BY 1;
GO
CREATE TABLE Audit.ProcessEvents
(
    EventID INT PRIMARY KEY CLUSTERED
        DEFAULT ( NEXT VALUE FOR Audit.EventCounter ),
    EventTime DATETIME NOT NULL
        DEFAULT ( GETDATE() ),
    EventCode NVARCHAR(5) NOT NULL,
    Description NVARCHAR(300) NULL
);
GO
CREATE TABLE Audit.ErrorEvents
(
    EventID INT PRIMARY KEY CLUSTERED
        DEFAULT ( NEXT VALUE FOR Audit.EventCounter ),
    EventTime DATETIME NOT NULL
        DEFAULT ( GETDATE() ),
    EquipmentID INT NULL,
    ErrorNumber INT NOT NULL,
    EventDesc NVARCHAR(256) NULL
);
GO
CREATE TABLE Audit.StartStopEvents
(
    EventID INT PRIMARY KEY CLUSTERED
        DEFAULT ( NEXT VALUE FOR Audit.EventCounter ),
    EventTime DATETIME NOT NULL
        DEFAULT ( GETDATE() ),
    EquipmentID INT NOT NULL,
    StartOrStop BIT NOT NULL
);
GO
INSERT Audit.StartStopEvents ( EquipmentID, StartOrStop )
    VALUES ( 248, 0 );
INSERT Audit.StartStopEvents ( EquipmentID, StartOrStop )
    VALUES ( 72, 0 );
INSERT Audit.ProcessEvents ( EventCode, Description )
    VALUES ( 2735, 'Clean room temperature 18 degrees C.' );
INSERT Audit.ProcessEvents ( EventCode, Description )
    VALUES ( 18, 'Spin rate threshold exceeded.' );
INSERT Audit.ErrorEvents ( EquipmentID, ErrorNumber, EventDesc )
    VALUES ( 248, 82, 'Feeder jam' );
INSERT Audit.StartStopEvents ( EquipmentID, StartOrStop )
    VALUES ( 248, 1 );
INSERT Audit.ProcessEvents ( EventCode, Description )
    VALUES ( 1841, 'Central feed in bypass mode.' );
-- The following statement combines all events, though not all fields.
/*
EventID  EventTime                Description
1        2015-09-01 20:35:20.563  Start
2        2015-09-01 20:35:20.570  Start
3        2015-09-01 20:35:20.570  Clean room temperature 18 degrees C.
4        2015-09-01 20:35:20.570  Spin rate threshold exceeded.
5        2015-09-01 20:35:20.570  Feeder jam
6        2015-09-01 20:35:20.570  Stop
7        2015-09-01 20:35:20.570  Central feed in bypass mode.
*/
SELECT EventID, EventTime, Description
FROM Audit.ProcessEvents
UNION
SELECT EventID, EventTime, EventDesc
FROM Audit.ErrorEvents
UNION
SELECT EventID, EventTime,
    CASE StartOrStop
        WHEN 0 THEN 'Start'
        ELSE 'Stop'
    END
FROM Audit.StartStopEvents
ORDER BY EventID;
GO
--Generating repeating sequence numbers in a result set
/*
SurveyGroup  Name
1            #A9A6D133
2            PK__#A9A6D13__C9F49270719BC720
3            #B147F2FB
4            PK__#B147F2F__063DA64E0AF90C28
5            #B3303B6D
1            #B4245FA6
2            PK__#B4245FA__FBFF9D0072D9EE11
3            #B60CA818
4            #B9DD38FC
5            PK__#B9DD38F__727E83EBC12B3319
...............
*/
CREATE SEQUENCE CountBy5 AS tinyint
    START WITH 1
    INCREMENT BY 1
    MINVALUE 1
    MAXVALUE 5
    CYCLE;
GO
SELECT NEXT VALUE FOR CountBy5 AS SurveyGroup, Name
FROM sys.objects;
GO
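Applications that need a batch of numbers at once can reserve a whole range in a single call with sp_sequence_get_range. A sketch using the Test.CountBy1 sequence created above:

```sql
-- Reserve 100 numbers from Test.CountBy1 in a single round trip.
DECLARE @first sql_variant;
EXEC sys.sp_sequence_get_range
    @sequence_name     = N'Test.CountBy1',
    @range_size        = 100,
    @range_first_value = @first OUTPUT;
SELECT @first AS FirstReservedValue;
```

The application can then hand out the reserved values locally without touching the server again.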
(10) THROW statement
The THROW statement can be
used to raise an exception and transfer execution to a CATCH block of a
TRY…CATCH construct. For more information, see THROW
(Transact-SQL).
THROW [ { error_number | @local_variable },
        { message | @local_variable },
        { state | @local_variable } ]
[ ; ]
================================= Extended Reading ==============================
Remarks
· The statement before the THROW statement must be followed by the semicolon (;) statement terminator.
· If a TRY…CATCH construct is not available, the session is ended. The line number and procedure where the exception is raised are set. The severity is set to 16.
· If the THROW statement is specified without parameters, it must appear inside a CATCH block. This causes the caught exception to be raised. Any error that occurs in a THROW statement causes the statement batch to be ended.
· RAISERROR vs THROW:
§ The error_number parameter does not have to be defined in sys.messages.
§ The message parameter does not accept printf-style formatting.
§ There is no severity parameter. The exception severity is always set to 16.
§ With RAISERROR, developers had to use the ERROR_xxxx() system functions to get the error details to pass through the RAISERROR() statement:
- ERROR_NUMBER()
- ERROR_MESSAGE()
- ERROR_SEVERITY()
- ERROR_STATE()
§ With THROW, the benefit is that it is not mandatory to pass any parameters to raise an exception. Inside a CATCH block, a bare THROW; statement picks up the caught error's details and re-raises them.
§ A parameterless THROW statement reports the exact line where the exception occurred, whereas RAISERROR() reports the line where the RAISERROR statement was executed, not the line of the actual exception. If you parameterize the THROW statement, however, it behaves like RAISERROR() in this respect: because you must supply the parameters yourself, there is no way to report the line where the error actually occurred.
Error-handling in SQL Server 2012
· SQL Server 2012 uses both RAISERROR and the THROW command to generate errors.
· You can query the @@ERROR system function to determine whether an error has occurred and what the error number is.
· You can use the SET XACT_ABORT ON command to force a failure of a transaction and abort a batch when an error occurs.
o SET XACT_ABORT {ON | OFF} has been available since SQL Server 2000, at least.
o When SET XACT_ABORT is ON, if a Transact-SQL statement raises a run-time error, the entire transaction is terminated and rolled back.
o When SET XACT_ABORT is OFF, in some cases only the Transact-SQL statement that raised the error is rolled back and the transaction continues processing. Depending upon the severity of the error, the entire transaction may be rolled back even when SET XACT_ABORT is OFF. OFF is the default setting.
· Unstructured error handling does not provide a single place in your code to handle errors.
· The TRY/CATCH block provides each batch of T-SQL code with a CATCH block in which to handle errors.
· The THROW command can be used to re-throw errors.
· There is a complete set of error functions to capture information about errors.
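A sketch of the XACT_ABORT behavior described above (the table is hypothetical):

```sql
SET XACT_ABORT ON;
GO
CREATE TABLE dbo.XactDemo (ID int PRIMARY KEY);
GO
BEGIN TRANSACTION;
INSERT dbo.XactDemo (ID) VALUES (1);
INSERT dbo.XactDemo (ID) VALUES (1);  -- PK violation: batch aborts, transaction rolls back
COMMIT TRANSACTION;                   -- never reached
GO
SELECT COUNT(*) AS RowsRemaining FROM dbo.XactDemo;  -- the first insert was rolled back too
```

With SET XACT_ABORT OFF, the same script would keep the first row, because only the failing statement is rolled back.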
USE tempdb;
GO
CREATE TABLE dbo.TestRethrow
    (ID INT PRIMARY KEY);
BEGIN TRY
    INSERT dbo.TestRethrow(ID) VALUES(1);
    -- Force error 2627, Violation of PRIMARY KEY constraint, to be raised.
    INSERT dbo.TestRethrow(ID) VALUES(1);
END TRY
BEGIN CATCH
    PRINT 'In catch block.';
    THROW;
END CATCH;
--Example 2: Using FORMATMESSAGE with THROW
EXEC sys.sp_addmessage
    @msgnum = 60000,
    @severity = 16,
    @msgtext = N'This is a test message with one numeric parameter (%d), one string parameter (%s), and another string parameter (%s).',
    @lang = 'us_english';
GO
DECLARE @msg NVARCHAR(2048) = FORMATMESSAGE(60000, 500, N'First string', N'second string');
THROW 60000, @msg, 1;

Msg 60000, Level 16, State 1, Line 4
This is a test message with one numeric parameter (500), one string parameter (First string), and another string parameter (second string).
================================================================================
(11) 14 New Functions and 1 Changed Function
SQL Server 2012 introduces 14 new built-in functions. These functions ease the path of migration for information workers by emulating functionality found in the expression languages of many desktop applications. However, these functions will also be useful to experienced users of SQL Server.
The new functions are:
Conversion functions
1) PARSE (Transact-SQL)
PARSE ( string_value AS data_type [ USING culture ] )
Note:
· Use PARSE only for converting from string to date/time and number types. For general type conversions, continue to use CAST or CONVERT. Keep in mind that there is a certain performance overhead in parsing the string value.
· PARSE relies on the presence of the .NET Framework Common Language Runtime (CLR). This function will not be remoted, since it depends on the presence of the CLR; remoting a function that requires the CLR would cause an error on the remote server.
--PARSE into datetime2
SELECT PARSE('Monday, 13 December 2010' AS datetime2 USING 'en-US') AS Result;
--PARSE with currency symbol
--Returns 345.98
SELECT PARSE('€345,98' AS money USING 'de-DE') AS Result;
2) TRY_PARSE (Transact-SQL) - Returns the result of an expression, translated to the requested data type, or null if the cast fails in SQL Server 2012. Use TRY_PARSE only for converting from string to date/time and number types.
--Returns NULL
SELECT TRY_PARSE('€345,98' AS money USING 'zh-CN') AS Result;
3) TRY_CONVERT (Transact-SQL) - Returns a value cast to the specified data type if the cast succeeds; otherwise, returns null.
SELECT
    CASE WHEN TRY_CONVERT(float, 'test') IS NULL
         THEN 'Cast failed'
         ELSE 'Cast succeeded'
    END AS Result;
GO
Date and time functions
4) DATEFROMPARTS (Transact-SQL)
DATEFROMPARTS (year, month, day)
--2010-12-31
SELECT DATEFROMPARTS ( 2010, 12, 31 ) AS Result;
5) DATETIME2FROMPARTS (Transact-SQL)
DATETIME2FROMPARTS ( year, month, day, hour, minute, seconds, fractions, precision )
--2011-08-15 14:23:44.500
SELECT DATETIME2FROMPARTS ( 2011, 8, 15, 14, 23, 44, 500, 3 );
6) DATETIMEFROMPARTS (Transact-SQL)
DATETIMEFROMPARTS ( year, month, day, hour, minute, seconds, milliseconds )
--2010-12-31 23:59:59.000
SELECT DATETIMEFROMPARTS ( 2010, 12, 31, 23, 59, 59, 0 ) AS Result;
7) DATETIMEOFFSETFROMPARTS (Transact-SQL)
DATETIMEOFFSETFROMPARTS ( year, month, day, hour, minute, seconds, fractions, hour_offset, minute_offset, precision )
--2011-08-15 14:30:00.500 +12:30
SELECT DATETIMEOFFSETFROMPARTS ( 2011, 8, 15, 14, 30, 00, 500, 12, 30, 3 );
8) EOMONTH (Transact-SQL)
EOMONTH ( start_date [, month_to_add ] )
DECLARE @date DATETIME = GETDATE();
SELECT EOMONTH ( @date ) AS 'This Month';
SELECT EOMONTH ( @date, 1 ) AS 'Next Month';
SELECT EOMONTH ( @date, -1 ) AS 'Last Month';
9) SMALLDATETIMEFROMPARTS (Transact-SQL)
SMALLDATETIMEFROMPARTS ( year, month, day, hour, minute )
--2010-12-31 23:59:00
SELECT SMALLDATETIMEFROMPARTS ( 2010, 12, 31, 23, 59 ) AS Result
10) TIMEFROMPARTS (Transact-SQL)
TIMEFROMPARTS ( hour, minute, seconds, fractions, precision )
--14:23:44.50
SELECT TIMEFROMPARTS ( 14, 23, 44, 50, 2 );
Logical functions
11) CHOOSE (Transact-SQL) - Returns the item at the specified index (1-based) from a list of values.
CHOOSE ( index, val_1, val_2 [, val_n ] )
--Developer
SELECT CHOOSE ( 3, 'Manager', 'Director', 'Developer', 'Tester' ) AS Result;
12) IIF (Transact-SQL) - Returns one of two values, depending on whether the Boolean expression evaluates to true or false.
IIF ( boolean_expression, true_value, false_value )
--TRUE
DECLARE @a int = 45, @b int = 40;
SELECT IIF ( @a > @b, 'TRUE', 'FALSE' ) AS Result;
String functions
13) CONCAT (Transact-SQL) - Returns a string that is the result of concatenating two or more string values.
CONCAT ( string_value1, string_value2 [, string_valueN ] )
--Happy Birthday 11/25
SELECT CONCAT ( 'Happy ', 'Birthday ', 11, '/', '25' ) AS Result;
CREATE TABLE #temp (
    emp_name nvarchar(200) NOT NULL,
    emp_middlename nvarchar(200) NULL,
    emp_lastname nvarchar(200) NOT NULL
);
INSERT INTO #temp VALUES( 'Name', NULL, 'Lastname' );
--Returns NameLastname
SELECT CONCAT( emp_name, emp_middlename, emp_lastname ) AS Result
FROM #temp;
14) FORMAT (Transact-SQL) - Returns a value formatted with the specified format and optional culture in SQL Server 2012. Use the FORMAT function for locale-aware formatting of date/time and number values as strings. For general data type conversions, use CAST or CONVERT.
FORMAT ( value, format [, culture ] )
value
Expression of a supported data type to format. For a list of valid types, see the table in the following Remarks section.
format
nvarchar format pattern. The format argument must contain a valid .NET Framework format string, either as a standard format string (for example, "C" or "D"), or as a pattern of custom characters for dates and numeric values (for example, "MMMM DD, yyyy (dddd)"). Composite formatting is not supported. For a full explanation of these formatting patterns, please consult the .NET Framework documentation on string formatting in general, custom date and time formats, and custom number formats. A good starting point is the topic "Formatting Types."
culture
Optional nvarchar argument specifying a culture. If the culture argument is not provided, the language of the current session is used. This language is set either implicitly, or explicitly by using the SET LANGUAGE statement. culture accepts any culture supported by the .NET Framework as an argument; it is not limited to the languages explicitly supported by SQL Server. If the culture argument is not valid, FORMAT raises an error.
DECLARE @d DATETIME = GETDATE();
SELECT FORMAT( @d, 'dd/MM/yyyy', 'en-US' ) AS 'DateTime Result'
      ,FORMAT( 123456789, '###-##-####' ) AS 'Custom Number Result';
In addition to the 14 new functions, one existing function has been changed. The existing LOG (Transact-SQL) function now has an optional second base parameter.
15) LOG (Transact-SQL)
LOG ( float_expression [, base ] )
--2.30
SELECT LOG (10);
--1.43
SELECT LOG (10,5);
(12) SQL Server Express LocalDB
SQL Server Express LocalDB is a new lightweight edition of Express that has all of its programmability features, yet runs in user mode and has a fast, zero-configuration installation and a short list of prerequisites. The LocalDB edition of SQL Server is targeted at program developers. LocalDB installation copies a minimal set of files necessary to start the SQL Server Database Engine. Once LocalDB is installed, developers initiate a connection by using a special connection string. When connecting, the necessary infrastructure is automatically created and started, enabling the application to use the database without complex or time-consuming configuration tasks. Developer tools can provide developers with a SQL Server Database Engine that lets them write and test Transact-SQL code without having to manage a full server instance of SQL Server. An instance of SQL Server Express LocalDB can be managed by using the SqlLocalDB.exe utility.
SQL Server Express LocalDB should be used in place of the SQL Server Express user instance feature, which is deprecated. For more information, see SQL Server 2012 Express LocalDB.
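As a sketch of the workflow (the instance name is illustrative; `v11.0` is the automatic instance name LocalDB uses in SQL Server 2012):

```
REM Create, start, and inspect a LocalDB instance with the SqlLocalDB.exe utility
SqlLocalDB.exe create "MyInstance"
SqlLocalDB.exe start  "MyInstance"
SqlLocalDB.exe info   "MyInstance"

REM Applications then connect with a connection string such as:
REM   Server=(localdb)\MyInstance;Integrated Security=true;
REM or, for the automatic instance:
REM   Server=(localdb)\v11.0;Integrated Security=true;
```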
(13) New and Enhanced Query Optimizer Hints
The syntax for the FORCESEEK
table hint has been modified. You can now specify an index and index columns to
further control the access method on the index. The existing FORCESEEK syntax
remains unmodified and works as before. No changes to applications are
necessary if you do not plan to use the new functionality.
The FORCESCAN table hint has
been added. It complements the FORCESEEK hint by specifying that the query
optimizer use only an index scan operation as the access path to the table or
view referenced in the query. The FORCESCAN hint can be useful for queries in
which the optimizer underestimates the number of affected rows and chooses a
seek operation rather than a scan operation. FORCESCAN can be specified with or
without an INDEX hint. For more information, see Table
Hints (Transact-SQL).
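A hedged sketch of both hints against an AdventureWorks-style table (the index name is an assumption for illustration):

```sql
-- New FORCESEEK syntax: name the index and the seek column(s)
SELECT ProductID, OrderQty
FROM Sales.SalesOrderDetail WITH (FORCESEEK (IX_SalesOrderDetail_ProductID (ProductID)))
WHERE ProductID = 870;

-- FORCESCAN: insist on a scan when the optimizer would otherwise pick a seek
SELECT ProductID, OrderQty
FROM Sales.SalesOrderDetail WITH (FORCESCAN)
WHERE ProductID > 700;
```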
(14) Extended Event Enhancements
The following new Extended Events are available.
page_allocated:
· Fields: worker_address, number_pages, page_size, page_location, allocator_type, page_allocator_type, pool_id
page_freed:
· Fields: worker_address, number_pages, page_size, page_location, allocator_type, page_allocator_type, pool_id
allocation_failure:
· Fields: worker_address, failure_type, allocation_failure_type, resource_size, pool_id, factor
The following Extended Events have been modified.
resource_monitor_ring_buffer_record:
· Fields removed: single_pages_kb, multiple_pages_kb
· Fields added: target_kb, pages_kb
memory_node_oom_ring_buffer_recorded:
· Fields removed: single_pages_kb, multiple_pages_kb
· Fields added: target_kb, pages_kb
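As an illustrative sketch, a session capturing one of the new events might look like this (the session name is made up, and the event/target package names are assumptions, not confirmed by the text above):

```sql
-- Track page allocations with an Extended Events session (illustrative)
CREATE EVENT SESSION [TrackPageAllocations] ON SERVER
ADD EVENT sqlos.page_allocated
ADD TARGET package0.ring_buffer;
GO
ALTER EVENT SESSION [TrackPageAllocations] ON SERVER STATE = START;
```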
(15) OVER Clause Support Enhanced to support window functions
The OVER clause has been extended to support window functions. Window functions perform a calculation across a set of rows that are in some relationship to the current row. For example, you can use the ROWS or RANGE clause over a set of rows to calculate a moving average or cumulative total.
In addition, ordering rows within a partition is now supported in the aggregate functions that allow the OVER clause to be specified.
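For instance, a three-row moving average can be sketched as follows (the table and column names are illustrative assumptions):

```sql
-- Moving average over the current row and the two preceding rows, per product
SELECT ProductID,
       OrderDate,
       Amount,
       AVG(Amount) OVER (PARTITION BY ProductID
                         ORDER BY OrderDate
                         ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) AS MovingAvg
FROM dbo.ProductSales;
```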
================================= Extended Reading ==============================
What is it for?
Determines the partitioning and ordering of a rowset before the associated window function is applied. That is, the OVER clause defines a window, or user-specified set of rows, within a query result set. A window function then computes a value for each row in the window. You can use the OVER clause with functions to compute aggregated values such as moving averages, cumulative aggregates, running totals, or top-N-per-group results.
Syntax:
OVER (
    [ <PARTITION BY clause> ]
    [ <ORDER BY clause> ]
    [ <ROW or RANGE clause> ]
)

<PARTITION BY clause> ::=
    PARTITION BY value_expression , ... [ n ]

<ORDER BY clause> ::=
    ORDER BY order_by_expression
        [ COLLATE collation_name ]
        [ ASC | DESC ]
        [ ,...n ]

<ROW or RANGE clause> ::=
    { ROWS | RANGE } <window frame extent>

<window frame extent> ::=
    { <window frame preceding>
    | <window frame between> }

<window frame between> ::=
    BETWEEN <window frame bound> AND <window frame bound>

<window frame bound> ::=
    { <window frame preceding>
    | <window frame following> }

<window frame preceding> ::=
    { UNBOUNDED PRECEDING
    | <unsigned_value_specification> PRECEDING
    | CURRENT ROW }

<window frame following> ::=
    { UNBOUNDED FOLLOWING
    | <unsigned_value_specification> FOLLOWING
    | CURRENT ROW }

<unsigned_value_specification> ::=
    { <unsigned integer literal> }
Applies to:
1) Ranking functions
--Using the OVER clause with the ROW_NUMBER function
USE AdventureWorks2012;
GO
SELECT ROW_NUMBER() OVER(PARTITION BY PostalCode ORDER BY SalesYTD DESC) AS "Row Number",
       p.LastName, s.SalesYTD, a.PostalCode
FROM Sales.SalesPerson AS s
    INNER JOIN Person.Person AS p
        ON s.BusinessEntityID = p.BusinessEntityID
    INNER JOIN Person.Address AS a
        ON a.AddressID = p.BusinessEntityID
WHERE TerritoryID IS NOT NULL
    AND SalesYTD <> 0
ORDER BY PostalCode;
GO
2) Aggregate functions
--Using the OVER clause with aggregate functions
USE AdventureWorks2012;
GO
SELECT SalesOrderID, ProductID, OrderQty
      ,SUM(OrderQty)   OVER(PARTITION BY SalesOrderID) AS Total
      ,AVG(OrderQty)   OVER(PARTITION BY SalesOrderID) AS "Avg"
      ,COUNT(OrderQty) OVER(PARTITION BY SalesOrderID) AS "Count"
      ,MIN(OrderQty)   OVER(PARTITION BY SalesOrderID) AS "Min"
      ,MAX(OrderQty)   OVER(PARTITION BY SalesOrderID) AS "Max"
FROM Sales.SalesOrderDetail
WHERE SalesOrderID IN (43659, 43664);
GO
--Specifying the ROWS clause
SELECT BusinessEntityID, TerritoryID
      ,CONVERT(varchar(20), SalesYTD, 1) AS SalesYTD
      ,DATEPART(yy, ModifiedDate) AS SalesYear
      ,CONVERT(varchar(20), SUM(SalesYTD) OVER (PARTITION BY TerritoryID
                                                ORDER BY DATEPART(yy, ModifiedDate)
                                                ROWS BETWEEN CURRENT ROW AND 1 FOLLOWING), 1) AS CumulativeTotal
FROM Sales.SalesPerson
WHERE TerritoryID IS NULL OR TerritoryID < 5;
3) Analytic functions
--Using FIRST_VALUE over partitions
USE AdventureWorks2012;
GO
SELECT JobTitle, LastName, VacationHours,
       FIRST_VALUE(LastName) OVER (PARTITION BY JobTitle
                                   ORDER BY VacationHours ASC
                                   ROWS UNBOUNDED PRECEDING) AS FewestVacationHours
FROM HumanResources.Employee AS e
    INNER JOIN Person.Person AS p
        ON e.BusinessEntityID = p.BusinessEntityID
ORDER BY JobTitle;
4) NEXT VALUE FOR function
--Using a sequence with a ranking window function
USE AdventureWorks2012;
GO
CREATE SCHEMA Test;
GO
CREATE SEQUENCE Test.CountBy1
    START WITH 1
    INCREMENT BY 1;
GO
/*
ListNumber FirstName LastName
1          Syed      Abbas
2          Catherine Abel
3          Kim       Abercrombie
4          Kim       Abercrombie
5          Kim       Abercrombie
…
*/
SELECT NEXT VALUE FOR Test.CountBy1 OVER (ORDER BY LastName) AS ListNumber,
       FirstName, LastName
FROM Person.Person;
GO
==============================================================================
(16) Analytic Functions
The following analytic functions have been added.
1) FIRST_VALUE (Transact-SQL) - Returns the first value in an ordered set of values in SQL Server 2012.
FIRST_VALUE ( scalar_expression )
    OVER ( [ partition_by_clause ] order_by_clause [ rows_range_clause ] )
--Using FIRST_VALUE over partitions
USE AdventureWorks2012;
GO
SELECT JobTitle, LastName, VacationHours,
       FIRST_VALUE(LastName) OVER (PARTITION BY JobTitle
                                   ORDER BY VacationHours ASC
                                   ROWS UNBOUNDED PRECEDING) AS FewestVacationHours
FROM HumanResources.Employee AS e
    INNER JOIN Person.Person AS p
        ON e.BusinessEntityID = p.BusinessEntityID
ORDER BY JobTitle;
2) LAST_VALUE (Transact-SQL) - Returns the last value in an ordered set of values in SQL Server 2012.
LAST_VALUE ( scalar_expression )
    OVER ( [ partition_by_clause ] order_by_clause [ rows_range_clause ] )
USE AdventureWorks2012;
GO
SELECT Department, LastName, Rate, HireDate,
       LAST_VALUE(HireDate) OVER (PARTITION BY Department
                                  ORDER BY Rate) AS LastValue
FROM HumanResources.vEmployeeDepartmentHistory AS edh
    INNER JOIN HumanResources.EmployeePayHistory AS eph
        ON eph.BusinessEntityID = edh.BusinessEntityID
    INNER JOIN HumanResources.Employee AS e
        ON e.BusinessEntityID = edh.BusinessEntityID
WHERE Department IN (N'Information Services', N'Document Control');
3) LEAD (Transact-SQL) - Accesses data from a subsequent row in the same result set without the use of a self-join in SQL Server 2012. LEAD provides access to a row at a given physical offset that follows the current row. Use this analytic function in a SELECT statement to compare values in the current row with values in a following row.
LEAD ( scalar_expression [, offset ] , [ default ] )
    OVER ( [ partition_by_clause ] order_by_clause )
USE AdventureWorks2012;
GO
/*
TerritoryName BusinessEntityID SalesYTD NextRepSales
Canada 282 2604540.7172 1453719.4653
Canada 278 1453719.4653 0.00
Northwest 284 1576562.1966 1573012.9383
Northwest 283 1573012.9383 1352577.1325
Northwest 280 1352577.1325 0.00
*/
SELECT TerritoryName, BusinessEntityID, SalesYTD,
       LEAD(SalesYTD, 1, 0) OVER ( PARTITION BY TerritoryName
                                   ORDER BY SalesYTD DESC ) AS NextRepSales
FROM Sales.vSalesPerson
WHERE TerritoryName IN ( N'Northwest', N'Canada' )
ORDER BY TerritoryName;
4) LAG (Transact-SQL) - Accesses data from a previous row in the same result set without the use of a self-join in SQL Server 2012. LAG provides access to a row at a given physical offset before the current row. Use this analytic function in a SELECT statement to compare values in the current row with values in a previous row.
LAG ( scalar_expression [, offset ] , [ default ] )
    OVER ( [ partition_by_clause ] order_by_clause )
USE AdventureWorks2012;
GO
/*
TerritoryName BusinessEntityID SalesYTD     PrevRepSales
Canada        282              2604540.7172 0.00
Canada        278              1453719.4653 2604540.7172
Northwest     284              1576562.1966 0.00
Northwest     283              1573012.9383 1576562.1966
Northwest     280              1352577.1325 1573012.9383
*/
SELECT TerritoryName, BusinessEntityID, SalesYTD,
       LAG(SalesYTD, 1, 0) OVER ( PARTITION BY TerritoryName
                                  ORDER BY SalesYTD DESC ) AS PrevRepSales
FROM Sales.vSalesPerson
WHERE TerritoryName IN ( N'Northwest', N'Canada' )
ORDER BY TerritoryName;
5) PERCENT_RANK (Transact-SQL) – Calculates the relative rank of a row within a group of rows in SQL Server 2012. Use PERCENT_RANK to evaluate the relative standing of a value within a query result set or partition. PERCENT_RANK is similar to the CUME_DIST function.
PERCENT_RANK( )
OVER ( [ partition_by_clause ] order_by_clause )
USE AdventureWorks2012;
GO
--PERCENT_RANK() = (RANK() - 1) / (Total Rows - 1)
/*
Department           LastName      Rate    CumeDist PctRank
Document Control     Arifin        17.7885 1        1
Document Control     Kharatishvili 16.8269 0.8      0.5
Document Control     Norred        16.8269 0.8      0.5
Document Control     Berge         10.25   0.4      0
Document Control     Chai          10.25   0.4      0
Information Services Trenary       50.4808 1        1
Information Services Conroy        39.6635 0.9      0.889
Information Services Ajenstat      38.4615 0.8      0.667
Information Services Wilson        38.4615 0.8      0.667
Information Services Sharma        32.4519 0.6      0.444
Information Services Connelly      32.4519 0.6      0.444
Information Services Bueno         27.4038 0.4      0
Information Services Berg          27.4038 0.4      0
Information Services Meyyappan     27.4038 0.4      0
Information Services Bacon         27.4038 0.4      0
*/
SELECT Department, LastName, Rate,
       CUME_DIST()    OVER ( PARTITION BY Department ORDER BY Rate ) AS CumeDist,
       PERCENT_RANK() OVER ( PARTITION BY Department ORDER BY Rate ) AS PctRank
FROM HumanResources.vEmployeeDepartmentHistory AS edh
    INNER JOIN HumanResources.EmployeePayHistory AS e
        ON e.BusinessEntityID = edh.BusinessEntityID
WHERE Department IN ( N'Information Services', N'Document Control' )
ORDER BY Department, Rate DESC;
6) CUME_DIST (Transact-SQL) – see above
7) PERCENTILE_CONT (Transact-SQL) - Calculates a percentile based on a continuous distribution of the column value in SQL Server 2012. The result is interpolated and might not be equal to any of the specific values in the column.
PERCENTILE_CONT ( numeric_literal )
    WITHIN GROUP ( ORDER BY order_by_expression [ ASC | DESC ] )
    OVER ( [ <partition_by_clause> ] )
/*
The following example uses PERCENTILE_CONT and PERCENTILE_DISC to find the median employee salary in each department. Note that these functions may not return the same value. This is because PERCENTILE_CONT interpolates the appropriate value, whether or not it exists in the data set, while PERCENTILE_DISC always returns an actual value from the set.
*/
USE AdventureWorks2012;
/*
DepartmentName   MedianCont MedianDisc
Document Control 16.8269    16.8269
Engineering      34.375     32.6923
Executive        54.32695   48.5577
*/
SELECT DISTINCT Name AS DepartmentName
      ,PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY ph.Rate)
           OVER (PARTITION BY Name) AS MedianCont
      ,PERCENTILE_DISC(0.5) WITHIN GROUP (ORDER BY ph.Rate)
           OVER (PARTITION BY Name) AS MedianDisc
FROM HumanResources.Department AS d
    INNER JOIN HumanResources.EmployeeDepartmentHistory AS dh
        ON dh.DepartmentID = d.DepartmentID
    INNER JOIN HumanResources.EmployeePayHistory AS ph
        ON ph.BusinessEntityID = dh.BusinessEntityID
WHERE dh.EndDate IS NULL;
8) PERCENTILE_DISC (Transact-SQL) – see above for an example
Computes a specific percentile for sorted values in an entire rowset or within distinct partitions of a rowset in SQL Server 2012. For a given percentile value P, PERCENTILE_DISC sorts the values of the expression in the ORDER BY clause and returns the value with the smallest CUME_DIST value (with respect to the same sort specification) that is greater than or equal to P. For example, PERCENTILE_DISC (0.5) will compute the 50th percentile (that is, the median) of an expression. PERCENTILE_DISC calculates the percentile based on a discrete distribution of the column values; the result is equal to a specific value in the column.
PERCENTILE_DISC ( numeric_literal )
    WITHIN GROUP ( ORDER BY order_by_expression [ ASC | DESC ] )
    OVER ( [ <partition_by_clause> ] )
(17) XQuery Functions Are Surrogate-Aware
The W3C recommendation for XQuery
functions and operators requires them to count a surrogate pair that represents
a high-range Unicode character as a single glyph in UTF-16 encoding. However,
in versions of SQL Server prior to SQL Server 2012, string functions did not
recognize surrogate pairs as a single character. Some string operations – such
as string length calculations and substring extractions – returned incorrect
results. SQL Server 2012 now fully supports UTF-16 and the correct handling of
surrogate pairs. For more information, see the section "XQuery Functions
Are Surrogate-Aware" in the topic Breaking
Changes to Database Engine Features in SQL Server 2012.
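A sketch of the behavioral change (the XML value is illustrative; 𠀋 stands in for any supplementary character stored as a UTF-16 surrogate pair):

```sql
-- string-length now counts a surrogate pair as one character
DECLARE @x xml = N'<word>a𠀋b</word>';
SELECT @x.value('string-length(string((/word)[1]))', 'int') AS CharCount;
-- SQL Server 2012 should return 3; earlier versions counted the surrogate pair as two.
```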