1. SQL Server 2008
(1) Compressed Storage of Tables and Indexes
SQL Server 2008 supports both row and page compression for tables and indexes. Data compression can be configured for the following database objects:
· A whole table that is stored as a heap.
· A whole table that is stored as a clustered index.
· A whole nonclustered index.
· A whole indexed view.
· For partitioned tables and indexes, the compression option can be configured for each partition; the partitions of an object do not have to share the same compression setting.
The compression setting of a table is not automatically applied to its nonclustered indexes; each index must be set individually. Compression is not available for system tables. Tables and indexes can be compressed when they are created by using the CREATE TABLE and CREATE INDEX statements. To change the compression state of a table, index, or partition, use the ALTER TABLE or ALTER INDEX statements.
CREATE TABLE T1
(c1 int, c2 nvarchar(50))
WITH (DATA_COMPRESSION = ROW);  -- or DATA_COMPRESSION = PAGE
GO
-- To create a partitioned table that has compressed partitions:
CREATE TABLE PartitionTable1
(col1 int, col2 varchar(max))
ON myRangePS1 (col1)
WITH
(
    DATA_COMPRESSION = ROW ON PARTITIONS (1),
    DATA_COMPRESSION = PAGE ON PARTITIONS (2 TO 4)
);
GO
/*
The following example changes the compression of the partitioned table created above. The REBUILD PARTITION = 1 syntax causes only partition number 1 to be rebuilt.
*/
ALTER TABLE PartitionTable1
REBUILD PARTITION = 1 WITH (DATA_COMPRESSION = NONE);
GO
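Before changing a compression setting, you can estimate the space savings with the sp_estimate_data_compression_savings system procedure introduced in SQL Server 2008. A minimal sketch, using the tables created above; the schema and object names are illustrative:

```sql
-- Estimate the savings from PAGE compression.
-- NULL for @index_id / @partition_number means all indexes / all partitions.
EXEC sp_estimate_data_compression_savings
    @schema_name      = 'dbo',
    @object_name      = 'PartitionTable1',
    @index_id         = NULL,
    @partition_number = NULL,
    @data_compression = 'PAGE';
```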
(2) FILESTREAM Storage
FILESTREAM storage enables SQL Server applications to store unstructured data, such as documents and images, on the file system. This lets client applications use the rich streaming APIs and performance of the file system while maintaining transactional consistency between the unstructured data and the corresponding structured data. For more information, see Designing and Implementing FILESTREAM Storage.
You should consider using FILESTREAM when:
· Objects that are being stored are, on average, larger than 1 MB.
· Fast read access is important.
· You are developing applications that use a middle tier for application logic.
For smaller objects, storing varbinary(max) BLOBs in the database often provides better streaming performance.
· Enable FILESTREAM on the instance of the SQL Server Database Engine.
· Create a FILESTREAM-enabled database.
CREATE DATABASE Archive
ON PRIMARY
    ( NAME = Arch1, FILENAME = 'c:\data\archdat1.mdf'),
FILEGROUP FileStreamGroup1 CONTAINS FILESTREAM
    ( NAME = Arch3, FILENAME = 'c:\data\filestream1')
LOG ON
    ( NAME = Archlog1, FILENAME = 'c:\data\archlog1.ldf')
GO
· Create a table to hold FILESTREAM data.
When the database has a FILESTREAM filegroup, you can create or modify tables to store FILESTREAM data. To specify that a column contains FILESTREAM data, create a varbinary(max) column and add the FILESTREAM attribute.
The following code example shows how to create a table that is named Records. The Id column is a ROWGUIDCOL column and is required to use FILESTREAM data with Win32 APIs. The SerialNumber column is a unique integer. The Chart column is a FILESTREAM column and is used to store the chart in the file system.
CREATE TABLE Archive.dbo.Records
(
    [Id] [uniqueidentifier] ROWGUIDCOL NOT NULL UNIQUE,
    [SerialNumber] INTEGER UNIQUE,
    [Chart] VARBINARY(MAX) FILESTREAM NULL
)
GO
· Work with FILESTREAM data in T-SQL.
INSERT INTO Archive.dbo.Records
    VALUES (newid(), 2, CAST('' as varbinary(max)));
GO
UPDATE Archive.dbo.Records
SET [Chart] = CAST('Xray 1' as varbinary(max))
WHERE [SerialNumber] = 2;
DELETE Archive.dbo.Records
WHERE SerialNumber = 1;
GO
(3) New Collations
SQL Server 2008 introduces new collations that are in full alignment with the collations that Windows Server 2008 provides. These 80 new collations are denoted by *_100 version references. They provide users with the most up-to-date and linguistically accurate cultural sorting conventions. For more information, see Collation and Unicode Support and Working with Collations.
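As a quick illustration, the version-100 collations can be listed with fn_helpcollations() and applied like any other collation; the table and column below are illustrative:

```sql
-- List the new version-100 collations.
SELECT name, description
FROM fn_helpcollations()
WHERE name LIKE '%[_]100[_]%';

-- Apply one at the column level.
CREATE TABLE dbo.Names
(
    Name nvarchar(50) COLLATE Latin1_General_100_CI_AS
);
```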
(4) Partition Switching on Partitioned Tables and Indexes (It is in SQL Server 2005 already! What’s new in SQL Server 2008?)
Partitioning data enables you to manage and access subsets of your data quickly and efficiently while maintaining the integrity of the entire data collection. You can use the Transact-SQL ALTER TABLE...SWITCH statement to quickly and efficiently transfer subsets of your data in the following ways:
· Assigning a table as a partition to an already existing partitioned table.
· Switching a partition from one partitioned table to another.
· Reassigning a partition to form a single table.
When a partition is transferred, the data is not physically moved; only the metadata about the location of the data changes. Before you can switch partitions, several general requirements must be met:
· Both tables must exist before the SWITCH operation. The table from which the partition is being moved (the source table) and the table that is receiving the partition (the target table) must exist in the database before you perform the switch operation.
· The receiving partition must exist and it must be empty. Whether you are adding a table as a partition to an already existing partitioned table, or moving a partition from one partitioned table to another, the partition that receives the new partition must exist and it must be empty.
· The receiving nonpartitioned table must exist and it must be empty. If you are reassigning a partition to form one nonpartitioned table, the table that receives the new partition must exist and it must be an empty nonpartitioned table.
· Partitions must be on the same column. If you are switching a partition from one partitioned table to another, both tables must be partitioned on the same column.
· Source and target tables must share the same filegroup. The source and the target table of the ALTER TABLE...SWITCH statement must reside in the same filegroup, and their large-value columns must be stored in the same filegroup. Any corresponding indexes, index partitions, or indexed view partitions must also reside in the same filegroup. However, the filegroup can be different from that of the corresponding tables or other corresponding indexes.
For information about partition switching concepts, and to see sample code that implements partition switching, see Transferring Data Efficiently by Using Partition Switching and Partition Switching When Indexed Views Are Defined.
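The three ways of transferring data listed above can be sketched as follows. The table and partition names are illustrative, and the staging tables must match the partitioned table's schema, indexes, constraints, and filegroup:

```sql
-- 1. Assign a table as a partition of an existing partitioned table.
ALTER TABLE dbo.StagingTable
    SWITCH TO dbo.PartitionedTable PARTITION 1;

-- 2. Switch a partition from one partitioned table to another.
ALTER TABLE dbo.PartitionedTable
    SWITCH PARTITION 1 TO dbo.PartitionedArchive PARTITION 1;

-- 3. Reassign a partition to form a single, nonpartitioned table.
ALTER TABLE dbo.PartitionedTable
    SWITCH PARTITION 1 TO dbo.StagingTable;
```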
(5) Sparse Columns and Column Sets
Sparse columns are ordinary columns that have an optimized storage format for null values. Consider using sparse columns when at least 20 percent to 40 percent of the values in a column will be NULL. For more information, see Using Sparse Columns.
CREATE TABLE DocumentStore
(
    DocID INT PRIMARY KEY,
    Title VARCHAR(200) NOT NULL,
    ProductionSpecification VARCHAR(20) SPARSE NULL,
    ProductionLocation SMALLINT SPARSE NULL,
    MarketingSurveyGroup VARCHAR(20) SPARSE NULL
);
GO
Tables that use sparse columns can designate a column set to return all sparse columns in the table. A column set is an untyped XML representation that combines all the sparse columns of a table into a structured output. For more information, see Using Column Sets.
CREATE TABLE DocumentStoreWithColumnSet
(
    DocID INT PRIMARY KEY,
    Title VARCHAR(200) NOT NULL,
    ProductionSpecification VARCHAR(20) SPARSE NULL,
    ProductionLocation SMALLINT SPARSE NULL,
    MarketingSurveyGroup VARCHAR(20) SPARSE NULL,
    MarketingProgramID INT SPARSE NULL,
    SpecialPurposeColumns XML COLUMN_SET FOR ALL_SPARSE_COLUMNS
);
GO
INSERT DocumentStoreWithColumnSet (DocID, Title, ProductionSpecification, ProductionLocation)
VALUES (1, 'Tire Spec 1', 'AXZZ217', 27)
GO
INSERT DocumentStoreWithColumnSet (DocID, Title, MarketingSurveyGroup)
VALUES (2, 'Survey 2142', 'Men 25 - 35')
GO
INSERT DocumentStoreWithColumnSet (DocID, Title, SpecialPurposeColumns)
VALUES (3, 'Tire Spec 2', '<ProductionSpecification>AXW9R411</ProductionSpecification><ProductionLocation>38</ProductionLocation>')
GO
SELECT * FROM DocumentStoreWithColumnSet;
Sparse columns and column sets are defined by using the CREATE TABLE or ALTER TABLE statements. Sparse columns can be used with column sets and filtered indexes:
· Column sets
INSERT, UPDATE, and DELETE statements can reference the sparse columns by name. However, you can also view and work with all the sparse columns of a table that are combined into a single XML column. This column is called a column set. For more information about column sets, see Using Column Sets.
· Filtered indexes
Because sparse columns have many null-valued rows, they are especially appropriate for filtered indexes. A filtered index on a sparse column can index only the rows that have populated values. This creates a smaller and more efficient index. For more information, see Filtered Index Design Guidelines.
When to Use Filtered Indexes
Filtered indexes are useful when columns contain well-defined subsets of data that queries reference in SELECT statements. Examples are:
· Sparse columns that contain only a few non-NULL values.
· Heterogeneous columns that contain categories of data.
· Columns that contain ranges of values such as dollar amounts, time, and dates.
· Table partitions that are defined by simple comparison logic for column values.
USE AdventureWorks;
GO
IF EXISTS (SELECT name FROM sys.indexes
    WHERE name = N'FIBillOfMaterialsWithEndDate'
    AND object_id = OBJECT_ID (N'Production.BillOfMaterials'))
DROP INDEX FIBillOfMaterialsWithEndDate
    ON Production.BillOfMaterials
GO
CREATE NONCLUSTERED INDEX FIBillOfMaterialsWithEndDate
    ON Production.BillOfMaterials (ComponentID, StartDate)
    WHERE EndDate IS NOT NULL;
GO
Sparse columns and filtered indexes enable applications, such as Windows SharePoint Services, to efficiently store and access a large number of user-defined properties by using SQL Server 2008.
(6) Spatial Data Storage, Methods, and Indexing
Spatial data represents information about the physical location and shape of geometric objects. These objects can be point locations or more complex objects such as countries, roads, or lakes. There are two types of spatial data: geometry and geography. For more information, see Working with Spatial Data (Database Engine), geometry Data Type Method Reference, and geography Data Type Method Reference.
Spatial indexes improve the efficiency of certain set-oriented operations on spatial objects (spatial data). A spatial index reduces the number of objects on which relatively costly spatial operations need to be applied. For more information, see Working with Spatial Indexes (Database Engine).
/*
The following example creates a table named SpatialTable that contains a geometry type column, geometry_col. The example then creates a spatial index, SIndx_SpatialTable_geometry_col1, on geometry_col. The example uses the default tessellation scheme and specifies the bounding box.
*/
CREATE TABLE SpatialTable(id int primary key, geometry_col geometry);
CREATE SPATIAL INDEX SIndx_SpatialTable_geometry_col1
    ON SpatialTable(geometry_col)
    WITH ( BOUNDING_BOX = ( 0, 0, 500, 200 ) );
/*
The following example creates a table named SpatialTable2 that contains a geography type column, object. The example then creates a spatial index, SIndx_SpatialTable_geography_col1, on that column. The example uses the default parameter values of the GEOGRAPHY_GRID tessellation scheme.
*/
CREATE TABLE SpatialTable2(id int primary key, object GEOGRAPHY);
CREATE SPATIAL INDEX SIndx_SpatialTable_geography_col1
    ON SpatialTable2(object);
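Beyond storage and indexing, the spatial types expose OGC methods that can be called directly in T-SQL. A small sketch using two points:

```sql
DECLARE @p1 geometry = geometry::STGeomFromText('POINT(0 0)', 0);
DECLARE @p2 geometry = geometry::STGeomFromText('POINT(3 4)', 0);
-- Euclidean distance between the two points (a 3-4-5 triangle, so 5).
SELECT @p1.STDistance(@p2) AS Distance;
```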
(7) Wide Tables
A wide table is a table that has defined a column set. Wide tables use sparse columns to increase the total number of columns that a table can have to 30,000 (1,024 is the limit for a non-wide table). The number of indexes and statistics is also increased, to 1,000 and 30,000 respectively. The maximum size of a wide table row is 8,019 bytes; therefore, most of the data in any particular row should be NULL. To create or change a table into a wide table, you add a column set to the table definition. The maximum number of nonsparse columns plus computed columns in a wide table remains 1,024.
By using wide tables, you can create flexible schemas within an application. You can add or drop columns whenever you want. Keep in mind that using wide tables has unique performance considerations, such as increased run-time and compile-time memory requirements. For more information, see Performance Considerations for Wide Tables.
(8) Date and Time Data Types
SQL Server 2008 introduces four new date and time data types. These types enable applications to have separate types for date or time, an increased year range, increased fractional-second precision, and time-zone offset support. For more information, see Using Date and Time Data.
· date
· time
· datetime2
· datetimeoffset
SQL Server 2005 had only two date and time types: datetime and smalldatetime.
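The four new types in a short sketch; the literals are illustrative:

```sql
DECLARE @d   date           = '2008-07-01';
DECLARE @t   time(7)        = '10:30:00.1234567';          -- 100 ns precision
DECLARE @dt2 datetime2(7)   = '2008-07-01 10:30:00.1234567';
DECLARE @dto datetimeoffset = '2008-07-01 10:30:00 +08:00'; -- time-zone offset
SELECT @d AS d, @t AS t, @dt2 AS dt2, @dto AS dto;
```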
(9) Hierarchyid Data Type
SQL Server 2008 introduces a new system-provided data type to encapsulate hierarchical relationships. Use hierarchyid as a data type to create tables with a hierarchical structure or to reference the hierarchical structure of data in another location. Use hierarchical methods to query and perform work with hierarchical data by using Transact-SQL.
New in SQL Server 2008, the hierarchyid type makes it easier to store and query hierarchical data. hierarchyid is optimized for representing trees, which are the most common type of hierarchical data. Examples where the hierarchyid type makes it easier to store and query hierarchical data include the following:
· An organizational structure
· A file system
· A set of tasks in a project
· A taxonomy of language terms
· A graph of links between Web pages
CREATE TABLE HumanResources.NewOrg
(
    OrgNode hierarchyid,
    EmployeeID int,
    LoginID nvarchar(50),
    ManagerID int,
    Title nvarchar(100),
    HireDate datetime,
    CONSTRAINT PK_NewOrg_OrgNode PRIMARY KEY CLUSTERED (OrgNode)
)
GO
Indexing
There are two strategies for indexing hierarchical data:
· Depth-first - In a depth-first index, all nodes in the subtree of a node are co-located. Depth-first indexes are therefore efficient for answering queries about subtrees, such as "Find all files in this folder and its subfolders".
· Breadth-first - In a breadth-first index, the rows at each level of the hierarchy are co-located, which is efficient for queries such as "Find all employees who report directly to this manager".
CREATE TABLE Organization
(
    EmployeeID hierarchyid,
    OrgLevel as EmployeeID.GetLevel(),
    EmployeeName nvarchar(50) NOT NULL
);
GO
CREATE CLUSTERED INDEX Org_Breadth_First
    ON Organization(OrgLevel, EmployeeID);
GO
CREATE UNIQUE INDEX Org_Depth_First
    ON Organization(EmployeeID);
GO
Methods:
GetDescendant (Database Engine)
IsDescendantOf (Database Engine)
GetReparentedValue (Database Engine)
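The methods above can be exercised with a small sketch; GetDescendant(NULL, NULL) generates the first child of a node:

```sql
DECLARE @root  hierarchyid = hierarchyid::GetRoot();
DECLARE @child hierarchyid = @root.GetDescendant(NULL, NULL);
SELECT @child.ToString()            AS ChildPath,    -- '/1/'
       @child.IsDescendantOf(@root) AS IsUnderRoot;  -- 1
```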
Alternatives
Two alternatives to hierarchyid for representing hierarchical data are:
· Parent/Child
· XML
The related SqlHierarchyId CLR data type is available for client applications. For more information, see Using hierarchyid Data Types (Database Engine).
(10) User-Defined Table Type
The Database Engine introduces a new user-defined table type that supports representing table structures for use as parameters in stored procedures and functions, or in a batch or the body of a stored procedure or function. You can create unique constraints and primary keys on user-defined table types. For more information, see User-Defined Table Types.
USE AdventureWorks;
GO
/* Create a user-defined table type */
CREATE TYPE LocationTableType AS TABLE
(
    LocationName VARCHAR(50),
    CostRate INT
);
GO
(11) User-Defined Types (UDT)
The maximum size of a UDT has been increased to 2,147,483,647 bytes (2 GB); in SQL Server 2005 the limit was 8 KB.
(12) Full-Text Search
In SQL Server 2008, the Full-Text Engine is integrated as a database service into the relational database as part of the server query and storage engine infrastructure. The new full-text search architecture achieves the following goals:
· Integrated storage and management - Full-text search is now integrated directly with the inherent storage and management features of SQL Server, and the Microsoft Full-Text Engine for SQL Server (MSFTESQL) service of SQL Server 2005 no longer exists.
o Full-text indexes are stored inside the database filegroups, rather than in the file system. Administrative operations on a database, such as creating a backup, automatically affect its full-text indexes.
o A full-text catalog is now a virtual object that does not belong to any filegroup; it is a logical concept that refers to a group of full-text indexes. Therefore, many catalog-management features have been deprecated, and deprecation has created breaking changes for some features.
· Integrated query processing - The new full-text search query processor is part of the Database Engine and is fully integrated with the SQL Server query processor. This means that the query optimizer recognizes full-text query predicates and automatically executes them as efficiently as possible.
· Enhanced administration and troubleshooting - Integrated full-text search provides tools to help you analyze search structures such as the full-text index, the output of a given word breaker, stopword configuration, and so forth. For more information, see Troubleshooting Full-Text Search.
· Stopwords and stoplists have replaced noise words and noise-word files. A stoplist is a database object that facilitates manageability tasks for stopwords and improves the integrity between different server instances and environments. For more information, see Stopwords and Stoplists.
· SQL Server 2008 includes new word breakers for many of the languages that exist in SQL Server 2005. Only the word breakers for English, Korean, Thai, and Chinese (all forms) remain the same. For other languages, if a full-text catalog was imported when a SQL Server 2005 database was upgraded to SQL Server 2008, one or more languages used by the full-text indexes in the full-text catalog might now be associated with new word breakers that might behave slightly differently from the imported word breakers. For more information about how to guarantee a total match between queries and the full-text index content, see Full-Text Search Upgrade.
· A new FDHOST Launcher (MSSQLFDLauncher) service has been added. For more information, see Getting Started with Full-Text Search.
· Full-text indexing works with a FILESTREAM column in the same way that it does with a varbinary(max) column. The FILESTREAM table must have a column that contains the file name extension for each FILESTREAM BLOB. For more information, see Querying varbinary(max) and xml Columns (Full-Text Search), Full-Text Search Filters, Full-Text Indexing and Querying Process, and sys.fulltext_document_types (Transact-SQL).
The full-text engine indexes the contents of the FILESTREAM BLOBs. Indexing files such as images might not be useful. When a FILESTREAM BLOB is updated, it is reindexed.
For more information, see Behavior Changes to Full-Text Search in SQL Server 2008, Deprecated Full-Text Search Features in SQL Server 2008, Breaking Changes to Full-Text Search in SQL Server 2008, and Full-Text Search Architecture.
The basic steps to configure table columns in a database for full-text search:
· Setting Up a Full-Text Catalog and Index
o Create a full-text catalog to store full-text indexes.
Each full-text index must belong to a full-text catalog. You can create a separate full-text catalog for each full-text index, or you can associate multiple full-text indexes with a given catalog. A full-text catalog is a virtual object and does not belong to any filegroup. The catalog is a logical concept that refers to a group of full-text indexes.
--Step 1: create a full-text catalog
USE AdventureWorks;
GO
CREATE FULLTEXT CATALOG AdvWksDocFTCat;
o Create a full-text index on the table or indexed view.
A full-text index is a special type of token-based functional index that is built and maintained by the Full-Text Engine. To create full-text search on a table or view, it must have a unique, single-column, non-nullable index. The Full-Text Engine requires this unique index to map each row in the table to a unique, compressible key. A full-text index can include char, varchar, nchar, nvarchar, text, ntext, image, xml, varbinary, and varbinary(max) columns.
· Choosing Options for a Full-Text Index
o Choosing the column language
o Choosing a filegroup for the full-text index
o Assigning the full-text index to a full-text catalog
o Associating a stoplist with the full-text index
o Updating a full-text index
-- Step 2: Before you can create a full-text index on the Document table,
-- ensure that the table has a unique, single-column, non-nullable index.
-- The following CREATE INDEX statement creates a unique index, ui_ukDoc,
-- on the DocumentID column of the Document table:
CREATE UNIQUE INDEX ui_ukDoc ON Production.Document(DocumentID);
-- Step 3: After you have a unique key, you can create a full-text index
-- on the Document table by using the following statement.
CREATE FULLTEXT INDEX ON Production.Document
(
    Document                   --Full-text index column name
    TYPE COLUMN FileExtension  --Name of column that contains file type information
    LANGUAGE 2057              --2057 is the LCID for British English
)
KEY INDEX ui_ukDoc             --Unique index
ON AdvWksDocFTCat              --Full-text catalog
WITH CHANGE_TRACKING AUTO;     --Population type
GO
· Query with Full-Text Search
o CONTAINS and FREETEXT
--The following example finds all products with a price of $80.99 that contain the word "Mountain".
USE AdventureWorks;
GO
SELECT Name, ListPrice
FROM Production.Product
WHERE ListPrice = 80.99
    AND CONTAINS(Name, 'Mountain');
GO
-- The following example searches for all documents containing words related to vital, safety, components.
USE AdventureWorks;
GO
SELECT Title
FROM Production.Document
WHERE FREETEXT (Document, 'vital safety components');
GO
o CONTAINSTABLE and FREETEXTTABLE functions - referenced like a regular table name in the FROM clause of a SELECT statement. They return a table of zero, one, or more rows that match the full-text query. The returned table contains only rows of the base table that match the selection criteria specified in the full-text search condition of the function. Queries using one of these functions return a relevance-ranking value (RANK) and full-text key (KEY) for each row, as follows:
§ KEY column
The KEY column returns unique values of the returned rows. The KEY column can be used to specify selection criteria.
§ RANK column
The RANK column returns a rank value for each row that indicates how well the row matched the selection criteria. The higher the rank value of the text or document in a row, the more relevant the row is for the given full-text query. Note that different rows can be ranked identically. You can limit the number of matches to be returned by specifying the optional top_n_by_rank parameter.
/*
Using CONTAINSTABLE - The following example returns the description and category name of all food categories for which the Description column contains the words "sweet and savory" near either the word "sauces" or the word "candies." All rows with a category name "Seafood" are disregarded. Only rows with a rank value of 2 or higher are returned.
*/
USE Northwind;
GO
SELECT FT_TBL.Description,
       FT_TBL.CategoryName,
       KEY_TBL.RANK
FROM Categories AS FT_TBL
    INNER JOIN CONTAINSTABLE (Categories, Description,
        '("sweet and savory" NEAR sauces) OR
         ("sweet and savory" NEAR candies)') AS KEY_TBL
    ON FT_TBL.CategoryID = KEY_TBL.[KEY]
WHERE KEY_TBL.RANK > 2
    AND FT_TBL.CategoryName <> 'Seafood'
ORDER BY KEY_TBL.RANK DESC;
GO
/*
Using FREETEXTTABLE - The following example extends a FREETEXTTABLE query to return the highest ranked rows first and to add the ranking of each row to the select list. To specify the query, you must know that CategoryID is the unique key column for the Categories table.
*/
USE Northwind;
GO
SELECT KEY_TBL.RANK, FT_TBL.Description
FROM Categories AS FT_TBL
    INNER JOIN FREETEXTTABLE(Categories, Description,
        'How can I make my own beers and ales?') AS KEY_TBL
    ON FT_TBL.CategoryID = KEY_TBL.[KEY]
ORDER BY KEY_TBL.RANK DESC;
GO
(13) Compatibility Level
ALTER DATABASE SET COMPATIBILITY_LEVEL replaces sp_dbcmptlevel for setting the database compatibility level. For more information, see ALTER DATABASE Compatibility Level (Transact-SQL).
ALTER DATABASE database_name
SET COMPATIBILITY_LEVEL = { 80 | 90 | 100 }
(14) Compound Operators
Operators that perform an operation and set a variable to the result, for example SET @x += 2, are available. For more information, see Compound Operators (Transact-SQL).
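A minimal sketch (the same pattern works for -=, *=, /=, %=, and the bitwise operators &=, ^=, |=):

```sql
DECLARE @x int = 8;
SET @x += 2;   -- equivalent to SET @x = @x + 2
SELECT @x;     -- returns 10
```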
(15) CONVERT Function
The CONVERT function is enhanced to allow conversions between binary and character hexadecimal values. For more information, see CAST and CONVERT (Transact-SQL).
CONVERT ( data_type [ ( length ) ] , expression [ , style ] )
expression - Is any valid expression.
data_type - Is the target data type. This includes xml, bigint, and sql_variant. Alias data types cannot be used.
length - Is an optional integer that specifies the length of the target data type. The default value is 30.
style - Is an integer expression that specifies how the CONVERT function is to translate expression. If style is NULL, NULL is returned. The range is determined by data_type. For more information, see the Remarks section.
Note: When expression is binary(n), varbinary(n), char(n), or varchar(n), style can be one of the values shown in the following table. Style values that are not listed in the table return an error.
0 (default) - Translates ASCII characters to binary bytes or binary bytes to ASCII characters. Each character or byte is converted 1:1. If the data_type is a binary type, the characters 0x are added to the left of the result.
1, 2 - If the data_type is a binary type, the expression must be a character expression. The expression must be composed of an even number of hexadecimal digits (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F, a, b, c, d, e, f). If the style is set to 1, the characters 0x must be the first two characters in the expression. If the expression contains an odd number of characters or if any of the characters are invalid, an error is raised.
If the length of the converted expression is greater than the length of the data_type, the result will be right-truncated. Fixed-length data_types that are larger than the converted result will have zeros added to the right of the result.
If the data_type is a character type, the expression must be a binary expression. Each binary byte is converted into two hexadecimal characters. If the length of the converted expression is greater than the data_type length, it will be right-truncated. If the data_type is a fixed-size character type and the length of the converted result is less than the length of the data_type, spaces are added to the right of the converted expression to maintain an even number of hexadecimal digits.
The characters 0x will be added to the left of the converted result for style 1.
Note: Because Unicode data always uses an even number of bytes, use caution when you convert binary or varbinary to or from Unicode supported data types. For example, the following conversion does not return a hexadecimal value of 41; it returns 4100: SELECT CAST(CAST(0x41 AS nvarchar) AS varbinary).
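The new styles 1 and 2 can be sketched as follows (0x4E616D65 is the bytes of the ASCII string 'Name'):

```sql
-- Style 1: the character expression must start with '0x'.
SELECT CONVERT(varbinary(4), '0x4E616D65', 1);  -- 0x4E616D65
-- Style 2: no '0x' prefix.
SELECT CONVERT(varbinary(4), '4E616D65', 2);    -- 0x4E616D65
-- Binary to a character hex string (style 2, no '0x' prefix).
SELECT CONVERT(char(8), 0x4E616D65, 2);         -- '4E616D65'
```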
(16) Date and Time Functionality - support for the ISO week-date system
SQL Server 2008 includes support for the ISO week-date system. For more information, see DATEPART (Transact-SQL).
ISO 8601 includes the ISO week-date system, a numbering system for weeks. Each week is associated with the year in which its Thursday occurs. For example, week 1 of 2004 (2004W01) ran from Monday, 29 December 2003 to Sunday, 4 January 2004. The highest week number in a year might be 52 or 53. This style of numbering is typically used in European countries, but is rare elsewhere.
The week containing the first Thursday of the year is week 1, and weeks start on Mondays.
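The new iso_week datepart exposes this numbering; using the 2004W01 example from the text:

```sql
-- Both dates fall in ISO week 1 of 2004 (Mon 29 Dec 2003 - Sun 4 Jan 2004).
SELECT DATEPART(ISO_WEEK, '2003-12-29') AS w1,   -- 1
       DATEPART(ISO_WEEK, '2004-01-04') AS w2;   -- 1
```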
(17) GROUPING SETS
The GROUPING SETS, ROLLUP, and CUBE operators are added to the GROUP BY clause. There is a new function, GROUPING_ID(), that returns more grouping-level information than the existing GROUPING() function. The non-ISO-compliant WITH ROLLUP, WITH CUBE, and ALL syntax is deprecated. For more information, see Using GROUP BY with ROLLUP, CUBE, and GROUPING SETS.
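A sketch of the new syntax, using a hypothetical table dbo.Inventory(Color, Size, Qty): one pass produces subtotals by Color, subtotals by Size, and a grand total, with GROUPING_ID identifying each grouping level.

```sql
SELECT Color, Size, SUM(Qty) AS TotalQty,
       GROUPING_ID(Color, Size) AS GrpID
FROM dbo.Inventory
GROUP BY GROUPING SETS ( (Color), (Size), () );
```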
(18) MERGE Statement
This new Transact-SQL statement performs INSERT, UPDATE, or DELETE operations on a target table based on the results of a join with a source table. The syntax allows you to join a data source with a target table or view, and then perform multiple actions based on the results of that join. For more information, see MERGE (Transact-SQL).
[ WITH <common_table_expression> [,...n] ]
MERGE
    [ TOP ( expression ) [ PERCENT ] ]
    [ INTO ] <target_table> [ WITH ( <merge_hint> ) ] [ [ AS ] table_alias ]
    USING <table_source>
    ON <merge_search_condition>
    [ WHEN MATCHED [ AND <clause_search_condition> ]
        THEN <merge_matched> ] [...n ]
    [ WHEN NOT MATCHED [ BY TARGET ] [ AND <clause_search_condition> ]
        THEN <merge_not_matched> ]
    [ WHEN NOT MATCHED BY SOURCE [ AND <clause_search_condition> ]
        THEN <merge_matched> ] [...n ]
    [ <output_clause> ]
    [ OPTION ( <query_hint> [ ,...n ] ) ]
;
USE AdventureWorks;
GO
IF OBJECT_ID(N'Production.usp_UpdateInventory', N'P') IS NOT NULL
    DROP PROCEDURE Production.usp_UpdateInventory;
GO
CREATE PROCEDURE Production.usp_UpdateInventory
    @OrderDate DATETIME
AS
MERGE Production.ProductInventory AS target
USING (
    SELECT ProductID, SUM(OrderQty)
    FROM Sales.SalesOrderDetail AS sod
    JOIN Sales.SalesOrderHeader AS soh
        ON sod.SalesOrderID = soh.SalesOrderID
        AND soh.OrderDate = @OrderDate
    GROUP BY ProductID
) AS source ( ProductID, OrderQty )
ON ( target.ProductID = source.ProductID )
WHEN MATCHED AND target.Quantity - source.OrderQty <= 0 THEN
    DELETE
WHEN MATCHED THEN
    UPDATE SET target.Quantity = target.Quantity - source.OrderQty,
               target.ModifiedDate = GETDATE()
OUTPUT $action,
    Inserted.ProductID, Inserted.Quantity, Inserted.ModifiedDate,
    Deleted.ProductID, Deleted.Quantity, Deleted.ModifiedDate;
GO
EXECUTE Production.usp_UpdateInventory '20030501'
(19) SQL Dependency Reporting
SQL Server 2008 introduces a new catalog view and system functions to provide consistent and reliable SQL dependency reporting. You can use sys.sql_expression_dependencies, sys.dm_sql_referencing_entities, and sys.dm_sql_referenced_entities to report on cross-server, cross-database, and database SQL dependencies for both schema-bound and non-schema-bound objects. For more information, see Reporting SQL Dependencies.
In short, to view SQL dependencies, SQL Server 2008 provides:
· the sys.sql_expression_dependencies catalog view
· the sys.dm_sql_referenced_entities and sys.dm_sql_referencing_entities DMFs
· View Dependencies in SQL Server Management Studio
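The two DMFs can be sketched as follows (the object names are illustrative AdventureWorks objects):

```sql
-- Objects that reference Production.Product.
SELECT referencing_schema_name, referencing_entity_name
FROM sys.dm_sql_referencing_entities('Production.Product', 'OBJECT');

-- Entities referenced by a stored procedure.
SELECT referenced_schema_name, referenced_entity_name
FROM sys.dm_sql_referenced_entities('Production.usp_UpdateInventory', 'OBJECT');
```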
(20) Table-Valued Parameters
The Database Engine introduces a new parameter type that can reference user-defined table types. Table-valued parameters can send multiple rows of data to a SQL Server statement or routine (such as a stored procedure or function) without creating a temporary table. For more information, see Table-Valued Parameters (Database Engine).
==========================Extended Reading=========================
Table-valued parameters are like parameter arrays in OLE DB and ODBC, but offer more flexibility and closer integration with Transact-SQL. Table-valued parameters also have the benefit of being able to participate in set-based operations.
You can create and execute Transact-SQL routines with table-valued parameters, and call them from Transact-SQL code, managed clients, and native clients.
· Creating and Using Table-Valued Parameters in Transact-SQL
1. Create a table type and define the table structure.
/*
The following example uses Transact-SQL and shows you how to create a table-valued parameter type, declare a variable to reference it, fill the parameter list, and then pass the values to a stored procedure.
*/
USE AdventureWorks;
GO
/* Create a table type. */
CREATE TYPE LocationTableType AS TABLE
(
    LocationName VARCHAR(50),
    CostRate INT
);
GO
2. Declare a routine that has a parameter of the table type.
/* Create a procedure to receive data for the table-valued parameter. */
CREATE PROCEDURE usp_InsertProductionLocation
    @TVP LocationTableType READONLY
AS
    SET NOCOUNT ON
    INSERT INTO [AdventureWorks].[Production].[Location]
        ([Name], [CostRate], [Availability], [ModifiedDate])
    SELECT *, 0, GETDATE()
    FROM @TVP;
GO
3. Declare a variable of the table type, and reference the table type.
/* Declare a variable that references the type. */
DECLARE @LocationTVP AS LocationTableType;
4. Fill the table variable by using an INSERT statement.
/* Add data to the table variable. */
INSERT INTO @LocationTVP (LocationName, CostRate)
    SELECT [Name], 0.00
    FROM [AdventureWorks].[Person].[StateProvince];
5. After the table variable is created and filled, you can pass the variable to a routine.
/* Pass the table variable data to a stored procedure. */
EXEC usp_InsertProductionLocation @LocationTVP;
GO
· Benefits
Table-valued parameters offer more flexibility and, in some cases, better performance than temporary tables or other ways to pass a list of parameters. Table-valued parameters offer the following benefits:
o Do not acquire locks for the initial population of data from a client.
o Provide a simple programming model.
o Enable you to include complex business logic in a single routine.
o Reduce round trips to the server.
o Can have a table structure of different cardinality.
o Are strongly typed.
o Enable the client to specify sort order and unique keys.
· Restrictions
o SQL Server does not maintain statistics on columns of table-valued parameters.
o Table-valued parameters must be passed as input READONLY parameters to Transact-SQL routines. You cannot perform DML operations such as UPDATE, DELETE, or INSERT on a table-valued parameter in the body of a routine.
o You cannot use a table-valued parameter as the target of a SELECT INTO or INSERT EXEC statement. A table-valued parameter can be in the FROM clause of SELECT INTO or in the INSERT EXEC string or stored procedure.
(21) Transact-SQL Row Constructors
Transact-SQL is enhanced to allow multiple value inserts within a single INSERT statement. For more information, see INSERT (Transact-SQL).
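A minimal sketch with an illustrative table; a single INSERT can supply up to 1,000 row value expressions:

```sql
CREATE TABLE dbo.Demo (id int, name varchar(20));
INSERT INTO dbo.Demo (id, name)
VALUES (1, 'alpha'),
       (2, 'beta'),
       (3, 'gamma');
```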