1. SQL Server 2000
I. New features
1) User-defined functions
SQL
Server has always provided the ability to store and execute SQL code routines
via stored procedures. In addition, SQL Server has always supplied a number of
built-in functions. Functions can be used almost anywhere an expression can be
specified in a query. This was one of the shortcomings of stored
procedures—they couldn't be used inline in queries in select lists, where
clauses, and so on. Perhaps you want to write a routine to calculate the last
business day of the month. With a stored procedure, you have to execute the
procedure, passing in the current month as a parameter and returning the value
into an output variable, and then use the variable in your queries. If only you
could write your own function and use it directly in a query, just like a
system function. In SQL Server 2000, you can.
SQL Server 2000 introduces the long-awaited
support for user-defined functions. User-defined functions can take zero or
more input parameters and return a single value—either a scalar value like the
system-defined functions, or a table result. Table-valued functions can be used
anywhere table or view expressions can be used in queries, and they can perform
more complex logic than is allowed in a view.
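As a rough sketch (the month-end calculation and the Orders table here are illustrative assumptions, not from the original text), a scalar function might look like the following and then be called inline in a query:
//***************************************************************************
-- Scalar user-defined function returning the last calendar day of the month
CREATE FUNCTION dbo.fn_LastDayOfMonth (@AnyDate datetime)
RETURNS datetime
AS
BEGIN
    -- First day of the following month, minus one day
    RETURN DATEADD(day, -1, DATEADD(month, DATEDIFF(month, 0, @AnyDate) + 1, 0));
END
GO
-- The function can then be used inline, just like a built-in function
SELECT OrderID, dbo.fn_LastDayOfMonth(OrderDate) AS MonthEnd
FROM dbo.Orders;
***************************************************************************//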
2) Indexed views
Views are often used to simplify complex
queries, and they can contain joins and aggregate functions. However, in the
past, queries against views were resolved to queries against the underlying
base tables, and any aggregates were recalculated each time you ran a query
against the view. In SQL Server 2000 Enterprise or Developer Edition, you can
define indexes on views to improve query performance against the view. When
creating an index on a view, the result set of the view is stored and indexed
in the database. Existing applications can take advantage of the performance
improvements without needing to be modified.
Indexed views can improve performance for the
following types of queries:
· Joins and aggregations that process many rows
· Join and aggregation operations that are performed frequently within many queries
· Decision support queries that rely on summarized, aggregated data that is infrequently updated
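A minimal sketch of creating an indexed view (the SalesDetail table and its columns are assumed for illustration): the view must be created WITH SCHEMABINDING, and the first index must be a unique clustered index, which materializes and stores the view's result set.
//***************************************************************************
CREATE VIEW dbo.vw_SalesSummary
WITH SCHEMABINDING
AS
SELECT ProductID,
       COUNT_BIG(*)   AS OrderCount,   -- COUNT_BIG(*) is required for aggregate views
       SUM(LineTotal) AS TotalSales
FROM dbo.SalesDetail
GROUP BY ProductID;
GO
-- The unique clustered index stores and indexes the view's result set
CREATE UNIQUE CLUSTERED INDEX IX_vw_SalesSummary
    ON dbo.vw_SalesSummary (ProductID);
***************************************************************************//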
3) Distributed partitioned views
SQL Server 7.0 provided the ability to create partitioned
views using the UNION ALL statement in a view definition. It was limited,
however, in that all the tables had to reside within the same SQL Server where
the view was defined.
SQL Server 2000 expands the ability to create
partitioned views by allowing you to horizontally partition tables across
multiple SQL Servers. The feature helps you scale out one database server to
multiple database servers, while making the data appear as if it comes from a
single table on a single SQL Server. In addition, partitioned views are now
able to be updated.
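A sketch of a distributed partitioned view, assuming hypothetical member tables partitioned by region (each constrained by a CHECK on the partitioning column) and Server2/Server3 configured as linked servers:
//***************************************************************************
CREATE VIEW dbo.Customers_All
AS
SELECT * FROM dbo.Customers_East                      -- local member table
UNION ALL
SELECT * FROM Server2.SalesDB.dbo.Customers_Central   -- remote member table
UNION ALL
SELECT * FROM Server3.SalesDB.dbo.Customers_West;     -- remote member table
***************************************************************************//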
4) INSTEAD OF triggers (new in 2000) and AFTER triggers (more than one now allowed)
In versions of SQL Server prior to 7.0, a table
could not have more than one trigger defined for INSERT, UPDATE, and DELETE.
These triggers only fired after the data modification took place. SQL
Server 7.0 introduced the ability to define multiple AFTER triggers for the
same operation on a table. SQL Server 2000 extends this capability by providing
the ability to define which AFTER trigger fires first and which fires last.
(Any other AFTER triggers besides the first and last will fire in an
undetermined order.)
SQL Server 2000 also introduces the ability to
define INSTEAD OF triggers. INSTEAD OF triggers can be specified on both tables
and views. (AFTER triggers can still only be specified on tables.) If an
INSTEAD OF trigger is defined on a table or view, the trigger will be executed
in place of the data modification action for which it is defined. The data
modification is not executed unless the SQL code to perform it is included in
the trigger definition.
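Two brief sketches, using hypothetical trigger, table, and view names: sp_settriggerorder sets the firing order of AFTER triggers, and an INSTEAD OF trigger on a view replaces the data modification it is defined for.
//***************************************************************************
-- Make trg_Audit the first AFTER trigger to fire for UPDATEs on its table
EXEC sp_settriggerorder @triggername = 'trg_Audit',
                        @order = 'First',
                        @stmttype = 'UPDATE';
GO
-- The INSERT against the view happens only because the trigger re-issues it
CREATE TRIGGER trg_InsertCustomer ON dbo.vw_Customers
INSTEAD OF INSERT
AS
BEGIN
    INSERT INTO dbo.Customers (CustomerID, CustomerName)
    SELECT CustomerID, CustomerName FROM inserted;
END
***************************************************************************//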
5) New data types – bigint, sql_variant, and the table data type (as a local variable only)
SQL Server 2000 introduces three new datatypes.
Two of these can be used as datatypes for local variables, stored procedure
parameters and return values, user-defined function parameters and return
values, or table columns:
· bigint—An 8-byte integer that can store values from –2^63 (–9223372036854775808) through 2^63–1 (9223372036854775807).
· sql_variant—A variable-sized column that can store values of various SQL Server–supported data types, with the exception of text, ntext, timestamp, and sql_variant.
· The third new datatype, the table datatype, can be used only as a local variable datatype within functions, stored procedures, and SQL batches. The table datatype cannot be passed as a parameter to functions or stored procedures, nor can it be used as a column datatype. A variable defined with the table datatype can be used to store a result set for later processing. A table variable can be used in queries anywhere a table can be specified.
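A short sketch of a table variable holding an intermediate result set (the Orders table is assumed for illustration):
//***************************************************************************
DECLARE @BigValue bigint,
        @Anything sql_variant;

DECLARE @TopOrders table (OrderID int, OrderTotal money);

INSERT INTO @TopOrders (OrderID, OrderTotal)
SELECT TOP 10 OrderID, OrderTotal
FROM dbo.Orders
ORDER BY OrderTotal DESC;

-- The table variable can then be queried like a regular table
SELECT * FROM @TopOrders;
***************************************************************************//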
6) Text in Row Data
In previous versions of SQL Server, text and
image data was always stored on a separate page chain from where the actual
data row resided. The data row contained only a pointer to the text or image
page chain, regardless of the size of the text or image data. SQL Server 2000
provides a new text in row table option that allows small text and image data
values to be placed directly in the data row, instead of requiring a separate
data page. This can reduce the amount of space required to store small text and
image data values, as well as reduce the amount of I/O required to retrieve
rows containing small text and image data values.
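The option is set per table with sp_tableoption; a minimal sketch (the Articles table is a hypothetical example):
//***************************************************************************
-- Store text/image values of up to 1000 bytes directly in the data row
EXEC sp_tableoption 'dbo.Articles', 'text in row', '1000';
***************************************************************************//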
7) Cascading RI constraints
In previous versions of SQL Server, referential
integrity (RI) constraints were restrictive only. If an insert, update, or
delete operation violated referential integrity, it was aborted with an error
message. SQL Server 2000 provides the ability to specify the action to take
when a column referenced by a foreign key constraint is updated or deleted. You
can still abort the update or delete if related foreign key records exist by
specifying the NO ACTION option, or you can specify the new CASCADE option,
which will cascade the update or delete operation to the related foreign key
records.
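A brief sketch with hypothetical Orders and OrderDetails tables:
//***************************************************************************
CREATE TABLE dbo.OrderDetails (
    OrderDetailID int PRIMARY KEY,
    OrderID       int NOT NULL
        REFERENCES dbo.Orders (OrderID)
        ON DELETE CASCADE     -- deleting an order removes its detail rows
        ON UPDATE CASCADE,    -- changing an order key cascades to the details
    Quantity      int
);
***************************************************************************//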
8) Multiple SQL Server instances
Previous versions of SQL Server supported the
running of only a single instance of SQL Server at a time on a computer.
Running multiple instances or multiple versions of SQL Server required
switching back and forth between the different instances, requiring changes in
the Windows registry. (The SQL Server Switch provided with 7.0 to switch
between 7.0 and 6.5 performed the registry changes for you.)
SQL Server 2000 provides support for running
multiple instances of SQL Server on the same system. This allows you to
simultaneously run one instance of SQL Server 6.5 or 7.0 along with one or more
instances of SQL Server 2000. Each SQL Server instance runs independently of
the others and has its own set of system and user databases, security
configuration, and so on. Applications can connect to the different instances
in the same way they connect to different SQL Servers on different machines.
This feature provides the ability to run an
older version of SQL Server alongside SQL Server 2000, as well as the ability
to run separate environments (for example, a development and test environment)
on the same computer.
9) XML support
Extensible Markup Language has become a standard
in Web-related programming to describe the contents of a set of data and how
the data should be output or displayed on a Web page. XML, like HTML, is
derived from the Standard Generalized Markup Language (SGML). When linking a
Web application to SQL Server, a translation needs to take place from the
result set returned from SQL Server to a format that can be understood and
displayed by a Web application. Previously, this translation needed to be done
in a client application.
SQL Server 2000 provides native support for XML.
This new feature provides the ability to do the following:
· Return query result sets directly in XML format.
· Retrieve data from an XML document as if it were a SQL Server table.
· Access SQL Server through a URL using HTTP. Through Internet Information Services (IIS), you can define a virtual root that gives you HTTP access to the data and XML functionality of SQL Server 2000.
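Two small sketches of the first two capabilities, using hypothetical table and document contents:
//***************************************************************************
-- Return a result set directly as XML
SELECT CustomerID, CompanyName
FROM dbo.Customers
FOR XML AUTO;

-- Read an XML document as if it were a table
DECLARE @hDoc int, @xmlText varchar(1000);
SET @xmlText = '<ROOT><Customer CustomerID="1" CompanyName="Contoso"/></ROOT>';
EXEC sp_xml_preparedocument @hDoc OUTPUT, @xmlText;
SELECT CustomerID, CompanyName
FROM OPENXML(@hDoc, '/ROOT/Customer')
     WITH (CustomerID int, CompanyName varchar(40));
EXEC sp_xml_removedocument @hDoc;
***************************************************************************//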
The latest version of SQLXML, version 3.0,
extends the built-in XML capabilities of SQL Server 2000 with technology to
create XML Web services from SQL Server stored procedures or server-side XML
templates. SQLXML 3.0 also includes extensions to the .NET Framework that
provide SQLXML programmability to the languages supported by Microsoft Visual
Studio .NET, including C# and Microsoft Visual Basic .NET.
To help you make the most of the XML
capabilities of SQL Server 2000, Microsoft provides, as a free download from
its Web site, the Microsoft SQL Server 2000 Web Services Toolkit, which
consists of tools, code, samples, and whitepapers for building XML Web services
and Web applications with SQL Server 2000.
10) Log shipping
The Enterprise Edition of SQL Server 2000 now
supports log shipping, which you can use to copy and load transaction log
backups from one database to one or more databases on a constant basis. This
allows you to have a primary read/write database with one or more read-only
copies of the database that are kept synchronized by restoring the logs from
the primary database. The destination database can be used as a warm standby
for the primary database, for which you can switch users over in the event of a
primary database failure. Additionally, log shipping provides a way to offload
read-only query processing from the primary database to the destination
database.
This capability was available in previous
versions of SQL Server, but it required the DBA to manually set up the process
and schedule the jobs to copy and restore the log backups. SQL Server 2000
officially supports log shipping and has made it easier to set up via the
Database Maintenance Plan Wizard. This greatly simplifies the process by
automatically generating the jobs and configuring the databases to support log
shipping.
11) Notification Services
A new component is now available for SQL Server
2000 that makes it easy to build applications that forward messages to end
users. This feature is called SQL Server Notification Services. SQL Server
Notification Services is a platform for the development and deployment of
notification applications. Notification applications send messages to users
based upon subscriptions that they set up in advance. Depending on how the
subscriptions are configured, messages can be sent to the subscriber
immediately or on a predetermined schedule. The messages sent can be
personalized to reflect the preferences of the subscriber.
The Notification Services platform provides a
reliable, high-performance server that is built on the .NET Framework and SQL
Server 2000 and runs on the Microsoft Windows Server family of operating
systems. Notification Services was designed for scalability and efficiency: It
can support applications with millions of subscribers and large volumes of
data. As a platform, it is extensible and provides interoperability with a
variety of existing applications.
SQL Server serves as the matching engine for
notification applications, as well as the storage location for the subscription
information. The subscriber and delivery information is stored in a central
Notification Services database, and individual subscription information is
stored in application-specific databases.
12) Microsoft SQL Server 2000 Driver for JDBC
Microsoft recently released its JDBC driver for
SQL Server 2000 as a free download for all SQL Server 2000 customers. The
Microsoft SQL Server 2000 Driver for JDBC is a Type 4 JDBC driver that provides
highly scalable and reliable connectivity for the enterprise Java environment.
The current release of the SQL Server 2000 Driver for JDBC supports the JDBC
2.0 specification.
The SQL Server 2000 Driver for JDBC provides
JDBC access to SQL Server 2000 from any Java-enabled applet, application, or
application server. The JDBC driver provides enterprise features like support
for XA transactions, server-side cursors, SQL_Variant datatypes, updateable
resultsets, and more.
The SQL Server 2000 Driver for JDBC supports the
Java Developer's Kit versions 1.1.8, 1.2, and 1.3 and is supported on the
following operating systems:
· Microsoft Windows® XP
· Microsoft Windows 2000 with Service Pack 2 or higher
· AIX
· HP-UX
· Solaris
· Linux
II. Enhancements
1) Index Enhancements
SQL Server 2000 provides enhancements for
defining indexes as well as enhancements to the way indexes are built.
Indexes can now be defined on computed columns,
and you can specify when creating an index whether it should be built in
ascending or descending index key order. SQL Server 2000 also provides an
option to use tempdb for performing the sorting operations when creating an
index. This can be specified with the WITH SORT_IN_TEMPDB option. When WITH
SORT_IN_TEMPDB is specified, SQL Server performs the intermediate sorting
required to build this index in tempdb, rather than in the current database. If
tempdb is on a separate disk from the destination file group in the current
database, building the index will take less time.
In addition to the new creation options, SQL
Server now supports the use of multiple processors to perform parallel scanning
and sorting when creating an index to help speed up index creation.
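A sketch of the new creation options (the Orders table and its columns are assumed for illustration):
//***************************************************************************
-- Descending key order, with the intermediate sort performed in tempdb
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
    ON dbo.Orders (OrderDate DESC)
    WITH SORT_IN_TEMPDB;

-- An index on a (deterministic) computed column
ALTER TABLE dbo.Orders ADD OrderYear AS YEAR(OrderDate);
CREATE INDEX IX_Orders_OrderYear ON dbo.Orders (OrderYear);
***************************************************************************//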
2) Collation Enhancements
In previous versions of SQL Server, all
databases within a SQL Server had to use the same code page and sort order that
SQL Server was configured to use. (This is typically established during SQL
Server installation and is not always easy to change.) If you had to restore a
database from a server using a different sort order or collation, a normal
backup and restore wouldn't work and you would have to bring it over using the
Database Migration utility.
SQL Server 2000 now uses collations instead of
code pages and sort orders and supports multiple collations within a single SQL
Server. You now can specify collations at the database level or at the column
level within a table. SQL Server 2000 still supports most collations that were
supported in earlier versions of SQL Server, as well as provides support for
collations based on Windows collations.
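For example (the database, table, and collation choices here are placeholders):
//***************************************************************************
-- Collation specified at the database level...
CREATE DATABASE SalesDB
COLLATE Latin1_General_CI_AS;
GO
-- ...or at the column level within a table
CREATE TABLE dbo.Customers (
    CustomerID   int,
    CustomerName nvarchar(50) COLLATE French_CI_AS
);
***************************************************************************//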
3) DBCC Enhancements
DBCC checks can now be run without taking shared table
locks while scanning tables, thereby enabling them to be run concurrently with
update activity on tables. Additionally, DBCC now takes advantage of multiple
processors, thus enabling near-linear gain in performance in relation to the
number of CPUs (provided that I/O is not a bottleneck).
4) Full Text Search Enhancements
SQL Server 2000 provides enhancements to the
Full Text Search capabilities by including change tracking, which maintains a
log of all changes to full-text indexed data. SQL Server 2000 also includes
image filtering, which allows you to index and query documents stored in image
columns.
With change tracking, you can update the
full-text index by flushing the change log manually, on a schedule, or as
changes occur, using the background update index option.
Image filtering allows you to specify the
filename extension that a document would have had if it were stored as a file
in the file system. The Full Text Search services can then load the appropriate
document filter and extract textual information for indexing from the image
data.
5) Clustering Enhancements
In SQL Server 2000, Microsoft has made it much
easier to install, configure, and maintain a Microsoft SQL Server 2000 failover
cluster. In addition, SQL Server 2000 now provides the ability to failover and
failback to or from any node in a SQL Server 2000 cluster, add or remove a node
from the cluster using the SQL Server 2000 Setup utility, and reinstall or
rebuild a cluster instance on any node without affecting the other cluster node
instances.
6) Backup and Restore Enhancements
In SQL Server 2000, passwords can be defined for
backup sets and media sets to prevent unauthorized users from restoring
sensitive SQL Server backups. SQL Server 2000 also has improved the speed of
differential database backups such that they now should complete in a time
proportional to the amount of data changed since the last full backup.
SQL Server 2000 also introduces a new model for
specifying backup and restore options for your database. Previous database
options such as “truncate log on checkpoint” and “select into/bulk copy” have
been replaced by three recovery models: Full Recovery, Bulk Logged Recovery,
and Simple Recovery. These new models help clarify when you are balancing
increased or decreased exposure to losing work against the performance and log
space requirements of the different plans.
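The recovery model is set per database with ALTER DATABASE; a minimal sketch against a hypothetical SalesDB:
//***************************************************************************
ALTER DATABASE SalesDB SET RECOVERY FULL;        -- full logging, point-in-time restore
ALTER DATABASE SalesDB SET RECOVERY BULK_LOGGED; -- minimal logging for bulk operations
ALTER DATABASE SalesDB SET RECOVERY SIMPLE;      -- log space reclaimed automatically
***************************************************************************//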
SQL Server 2000 also provides support for
recovering transaction logs to specific points of work using named log marks in
the transaction log, as well as the ability to perform partial database
restores.
7) Up to 64GB Memory Support
The Enterprise Editions of SQL Server 2000 can use the Microsoft Windows 2000
Address Windowing Extensions (AWE) API to support up to 8GB of memory on
Windows 2000 Advanced Server and 64GB of memory on Windows 2000 Datacenter Server.
8) SQL Server 2000 64-Bit Version
At the time of this writing, Microsoft was in
the process of beginning beta testing for a 64-bit version of SQL Server 2000,
code-named “Liberty.” Built to take advantage of hardware enhancements of the
64-bit Itanium platform from Intel, Liberty offers higher levels of
single-system scalability for memory-intensive data applications. Currently,
the planned hardware offerings support up to 64GB of physical linear memory for
the 64-bit version of SQL Server 2000, with up to 4 terabytes of physical
linear memory planned for future hardware.
The 64-bit version of SQL Server 2000 includes a
64-bit database server, a 64-bit server agent, and a 64-bit analysis server for
OLAP and data mining. The 64-bit version of SQL Server 2000 will require 64-bit
hardware running the 64-bit version of Microsoft Windows .NET Server Beta3 or
the current release of Windows Advanced Server, Limited Edition running on
Intel Itanium processors.
The 64-bit Windows Advanced Server, Limited
Edition platform provides the ability to install up to 16 instances of SQL
Server 2000 on a single machine, and supports larger numbers of users and
applications. This should result in a lower cost of ownership as businesses
will require fewer servers to support the same number of users.
The 64-bit components of SQL Server 2000 are
code compatible with the 32-bit versions of SQL Server 2000, providing
compatibility for organizations that need to maintain some of their 32-bit SQL
Server deployments, while introducing new 64-bit technologies for larger, more
demanding database applications. All existing 32-bit client applications,
including database management and administration tools such as Enterprise
Manager, Query Analyzer, and so on, can be used to remotely manage 64-bit SQL
Server 2000 installations. In addition, the database storage structures are
identical between the 32- and 64-bit versions of SQL Server, so databases can
be exchanged between the 32- and 64-bit environments.
9) Analysis Services Enhancements
What was formerly known as SQL Server OLAP
Services is now called SQL Server Analysis Services. Analysis Services provides
new and improved features that enhance the capabilities of the previous OLAP
Services provided in SQL Server 7.0. A major new feature is the Data Mining
component, which can be used to discover information in OLAP cubes and
relational databases. In addition, enhancements have been made to the Cube,
Dimension, and Hierarchy types to improve and extend the scalability,
functionality, and analysis capabilities of cubes. Security enhancements
include the ability to assign permissions on cube cells and dimension members
to roles.
10) Data Transformation Services Enhancements
(1) New Custom Tasks
New DTS custom tasks, available through DTS
Designer or the DTS object model, allow you to create DTS packages that perform
tasks or set variables based on the properties of the run-time environment. Use
these tasks to:
· Import data from, and send data and completed packages to, Internet and File Transfer Protocol (FTP) sites.
· Run packages asynchronously.
· Build packages that send messages to each other.
· Build packages that execute other packages.
· Join multiple package executions as part of a transaction.
DTS package logs save information for each
package execution, allowing you to maintain a complete execution history. You
can also view execution information for individual processes within a task.
You can generate exception files for transformation
tasks. When you log to exception files, you can save source and destination
error rows to a file through the DTS OLE DB text file provider and re-process
the error rows.
DTS packages now can be saved to a Microsoft®
Visual Basic® file. This allows a package created by the DTS Import/Export
Wizard or DTS Designer to be incorporated into Visual Basic programs or to be
used as prototypes by Visual Basic developers who need to reference the
components of the DTS object model.
A new multiphase data pump allows advanced users
to customize the operation of the data pump at various stages of its operation.
You can now use global variables as input and output parameters for queries.
You can now use parameterized source queries in
a DTS transformation task and an Execute SQL task. In addition, DTS includes an
option for saving the results of a parameterized query to a global variable,
allowing you to perform functions such as saving disconnected Microsoft
ActiveX® Data Objects (ADO) recordsets in DTS.
You now can use the Execute Package task to
dynamically assign the values of global variables from a parent package to a
child package. Use global variables to pass information from one package to
another when each package performs different work items. For example, use one
package to download data on a nightly basis, summarize the data, assign summary
data values to global variables, and pass the values to another package that
further processes the data.
11) Microsoft SQL Server 2000 Meta Data Services – replaces Microsoft Repository in SQL Server 7.0 (and was itself subsequently replaced by XML-based formats in SQL Server 2005)
Microsoft® SQL Server™ 2000 Meta Data Services
extends and renames the former repository component known as Microsoft
Repository. Meta Data Services extends repository technology by introducing a
new browser for viewing data in a repository database, new Extensible Markup Language (XML) interchange
support, and new repository engine features.
To find out why SQL Server 2000 Meta Data
Services was deprecated in SQL Server 2005, read
12) English Query Enhancements – discontinued in SQL Server 2005, replaced by Full-Text Search in SQL Server 2005+ and Semantic Search in SQL Server 2012+
English Query introduces new features such as:
· Greater integration with Microsoft® Visual Studio®, Analysis Services, and Full-Text Search.
· A graphical user interface for English Query authoring.
· The SQL Project Wizard.
· An XML-based language for persisting English Query model information.
2. SQL Server 2005
I. New features
1) SSMS
2) SQL Server Configuration Manager
3) CLR/.NET Framework Integration
4) DMVs and DMFs
5) System Catalog Views
6) SQL Management Objects (SMO, objects designed for programmatic management of Microsoft SQL Server) – Replaces SQL Distributed Management Objects (SQL-DMO)
7) DAC
8) SQLCMD
9) Database Mail
10) Online index and restore
11) Native encryption – column encryption
12) Database Mirroring
13) Database Snapshots
14) Service Broker
15) SSIS – replaces DTS
16) Table and index partitioning
17) Snapshot isolation
18) BI – BIDS introduced
19) Query notification (built on Service Broker)
With the availability of the Service Broker, SQL
Server 2005 also introduces notification support for SQL Server queries. Query
Notification is useful for applications that cache database query results in
their own private cache area, such as database-driven websites. Rather than
having the application repeatedly poll the database to determine whether the
data has changed and the cache needs to be refreshed, commands that are sent to
the server through any of the client APIs, such as ADO.NET, OLE DB, open
database connectivity (ODBC), Microsoft ActiveX Data Objects (ADO), or Simple
Object Access Protocol (SOAP), may include a tag that requires a notification.
For each query included in the request, SQL Server creates a notification
subscription. When the data changes, a notification is delivered through a SQL
Service Broker queue to notify the application that data has changed, at which
point the application can refresh its data cache.
20) Multiple active result sets
21) New SQL Server data types: xml, varchar(max), and varbinary(max)
II. Enhanced features
1) Database Engine Enhancements
(1) Instant file initialization
New or expanded database files are made
available much faster now because the initialization of the file with binary
zeros is deferred until data is written to the files.
(2) Partial availability
In the event of database file corruption, the
database can still be brought online if the primary filegroup is available.
(3) Database file movement
You can now use the ALTER DATABASE command to change a database file's location
in the system catalog; the physical file itself must still be moved manually.
This feature was available in SQL Server 2000, but it only worked on tempdb.
(4) Large rows storage
SQL Server 2005 now allows for the storage of
rows that are greater than 8060 bytes. The 8060-byte limitation that existed
with SQL Server 2000 has been relaxed by allowing the storage of certain data
types (such as varchar and nvarchar) on a row overflow data page.
(5) Stored computed columns
Computed columns that were calculated on-the-fly
in prior versions can now be stored in the table structure. You accomplish this
by specifying the PERSISTED keyword as part of the computed column definition.
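For example (the table and column names are hypothetical):
//***************************************************************************
-- The computed value is stored in the row instead of being calculated at query time
ALTER TABLE dbo.OrderDetails
    ADD LineTotal AS (Quantity * UnitPrice) PERSISTED;
***************************************************************************//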
2) Index Enhancements
(1) Included columns
Non-key columns can now be added to an index for
improved performance. The performance gains are achieved with covering indexes
that allow the Query Optimizer to locate all the column values referenced in
the query.
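A sketch of a covering index with included columns (the Orders table and its columns are assumed):
//***************************************************************************
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID)
    INCLUDE (OrderDate, OrderTotal);   -- non-key columns carried in the leaf level
***************************************************************************//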
(2) ALTER INDEX
As with other database objects, such as tables
and databases, you can now modify indexes by using the ALTER statement. Index
operations that were previously performed with DBCC commands or system stored
procedures can now be accomplished with the ALTER INDEX command.
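For example, against the hypothetical table and index shown above:
//***************************************************************************
-- Rebuild an index (previously DBCC DBREINDEX)
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REBUILD;
-- Reorganize all indexes on a table (previously DBCC INDEXDEFRAG)
ALTER INDEX ALL ON dbo.Orders REORGANIZE;
***************************************************************************//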
(3) Parallel index operations
Scan and sort activities associated with index
operations can now be done in parallel.
3) T-SQL Enhancements
(1) Ranking functions
SQL Server 2005 introduces four new ranking
functions: ROW_NUMBER, RANK, DENSE_RANK, and NTILE. These new functions allow
you to efficiently analyze data and provide ranking values to result rows of a
query.
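A sketch over a hypothetical Orders table:
//***************************************************************************
SELECT OrderID,
       OrderTotal,
       ROW_NUMBER() OVER (ORDER BY OrderTotal DESC) AS RowNum,
       RANK()       OVER (ORDER BY OrderTotal DESC) AS Rnk,
       DENSE_RANK() OVER (ORDER BY OrderTotal DESC) AS DenseRnk,
       NTILE(4)     OVER (ORDER BY OrderTotal DESC) AS Quartile
FROM dbo.Orders;
***************************************************************************//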
(2) Common table expressions
A common table expression (CTE) is a temporary
named result set that can be referred to within a query, similarly to a
temporary table. CTEs can be thought of as an improved version of derived
tables that more closely resemble a non-persistent type of view. You can also
use CTEs to develop recursive queries that you can use to expand a hierarchy.
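A sketch of a recursive CTE expanding a hypothetical employee/manager hierarchy:
//***************************************************************************
WITH EmployeeTree AS (
    SELECT EmployeeID, ManagerID, 0 AS Level
    FROM dbo.Employees
    WHERE ManagerID IS NULL                               -- anchor member
    UNION ALL
    SELECT e.EmployeeID, e.ManagerID, t.Level + 1
    FROM dbo.Employees AS e
    JOIN EmployeeTree AS t ON e.ManagerID = t.EmployeeID  -- recursive member
)
SELECT EmployeeID, ManagerID, Level
FROM EmployeeTree;
***************************************************************************//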
(3) PIVOT/UNPIVOT operator
The PIVOT operator allows you to generate
crosstab reports for open-schema and other scenarios in which you rotate rows
into columns, possibly calculating aggregations along the way and presenting
the data in a useful form. The UNPIVOT operator allows you to normalize
pre-pivoted data.
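A sketch that rotates yearly order totals into columns (the table and year values are assumed):
//***************************************************************************
SELECT CustomerID, [2003], [2004], [2005]
FROM (SELECT CustomerID, YEAR(OrderDate) AS OrderYear, OrderTotal
      FROM dbo.Orders) AS src
PIVOT (SUM(OrderTotal) FOR OrderYear IN ([2003], [2004], [2005])) AS p;
***************************************************************************//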
(4) APPLY operator
The APPLY relational operator allows you to
invoke a specified table-valued function once per each row of an outer table
expression.
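A sketch, assuming a hypothetical table-valued function dbo.fn_TopOrdersForCustomer that returns a customer's top N orders:
//***************************************************************************
SELECT c.CustomerID, o.OrderID, o.OrderTotal
FROM dbo.Customers AS c
CROSS APPLY dbo.fn_TopOrdersForCustomer(c.CustomerID, 3) AS o;
***************************************************************************//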
(5) TOP – enhanced with % and variables
In SQL Server 2005, the TOP operator has been
enhanced, and it now allows you to specify a numeric expression to return the
number or percentage of rows to be affected by your query; you can optionally
use variables and subqueries. You can also now use the TOP option in DELETE,
UPDATE, and INSERT queries.
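For example (the table names are placeholders):
//***************************************************************************
DECLARE @n int;
SET @n = 10;
-- TOP now accepts an expression or a variable...
SELECT TOP (@n) * FROM dbo.Orders ORDER BY OrderDate DESC;
-- ...and can be used in DML statements
DELETE TOP (1000) FROM dbo.StagingRows;
***************************************************************************//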
(6) DML with results – the OUTPUT clause
SQL Server 2005 introduces a new OUTPUT clause
that allows you to return data from a modification statement (INSERT, UPDATE,
or DELETE) to the processing application or into a table or table variable.
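A sketch capturing the deleted rows into a table variable (table and columns assumed):
//***************************************************************************
DECLARE @Deleted table (OrderID int, OrderTotal money);

DELETE FROM dbo.Orders
OUTPUT deleted.OrderID, deleted.OrderTotal INTO @Deleted
WHERE OrderDate < '20000101';

SELECT * FROM @Deleted;   -- the rows removed by the DELETE
***************************************************************************//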
(7) Exception handling for transactions with TRY…CATCH
Earlier versions of SQL Server required you to
include error-handling code after every statement that you suspected could
potentially generate an error. SQL Server 2005 addresses this by introducing a
simple but powerful exception-handling mechanism in the form of a TRY...CATCH
T-SQL construct.
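A minimal sketch of the construct around a hypothetical transfer transaction:
//***************************************************************************
BEGIN TRY
    BEGIN TRANSACTION;
    UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountID = 1;
    UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE AccountID = 2;
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    SELECT ERROR_NUMBER() AS ErrorNumber, ERROR_MESSAGE() AS ErrorMessage;
END CATCH;
***************************************************************************//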
4) Security Enhancements
(1) SQL login password policies
SQL Server logins can now be governed by a more
rigid password policy. This is implemented with new CHECK_POLICY and
CHECK_EXPIRATION options that can be selected for a SQL Server login. These options
facilitate stronger passwords and cause passwords to expire. The password
policy is enforced only on Windows 2003 Server and above.
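For example (the login name and password are placeholders):
//***************************************************************************
CREATE LOGIN ReportUser
WITH PASSWORD = 'Str0ng!Passw0rd',
     CHECK_POLICY = ON,        -- enforce the Windows password policy
     CHECK_EXPIRATION = ON;    -- passwords expire per the Windows policy
***************************************************************************//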
(2) User/schema separation
In prior versions of SQL Server, the fully
qualified name for every object was directly tied to the object owner. With SQL
Server 2005, schema names are used in the object namespace instead. This
user/schema separation provides more flexibility in the object model and allows
for object owners to be changed without affecting the code that references the
objects.
(3) Module execution context – Execute As
The EXECUTE AS option can be used to set the
execution context for SQL statements. This allows a user to impersonate another
user and is particularly useful for testing permissions.
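A sketch of both forms, with hypothetical user and object names:
//***************************************************************************
-- Impersonate another user for a few statements, then revert
EXECUTE AS USER = 'TestUser';
SELECT * FROM dbo.Orders;    -- fails if TestUser lacks SELECT permission
REVERT;
GO
-- Or set the execution context of a module
CREATE PROCEDURE dbo.usp_GetOrders
WITH EXECUTE AS OWNER
AS
    SELECT * FROM dbo.Orders;
***************************************************************************//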
(4) Permission granularity – more granular control
The security model in SQL Server 2005 provides a
much more granular level of control than earlier versions of SQL Server. This
granular control provides some new types of security and allows you to apply
security to a new set of database objects.
(5) Data Definition Language (DDL) triggers
Database administrators can now write server- or
database-level triggers that fire on events such as creating or dropping a
table, adding a server login, or altering a database. These triggers provide an
invaluable auditing mechanism to automatically capture these events as they
occur.
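A sketch of a database-level DDL trigger that blocks table drops:
//***************************************************************************
CREATE TRIGGER trg_NoDropTable
ON DATABASE
FOR DROP_TABLE
AS
BEGIN
    PRINT 'Tables cannot be dropped in this database.';
    ROLLBACK;    -- undo the DROP TABLE statement
END;
***************************************************************************//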
5) Backup and Restore Enhancements
(1) Copy-only backups
These backups can be made without disrupting the
sequencing of other backups (for example, differential or log backups). Because
copy-only backups do not affect the restoration chain, they are useful in
situations such as when you simply want to get a copy of the database for
testing purposes.
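For example (the path and database name are placeholders):
//***************************************************************************
BACKUP DATABASE SalesDB
TO DISK = 'D:\Backups\SalesDB_copy.bak'
WITH COPY_ONLY;    -- does not disturb the differential base or log chain
***************************************************************************//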
(2) Mirrored backups
SQL Server 2005 adds the capability to create
additional copies of database backups via mirrored backups. Mirrored backups
provide redundancy so that you can overcome the failure of a single backup
device or medium by utilizing the mirrored copy of the backup.
(3) Partial backups
A partial backup contains all the data in the
primary filegroup, any filegroup that is not read-only, and any filegroup that
has been explicitly identified for backup. The elimination of read-only
filegroups from partial backups saves space, saves time, and reduces the server
overhead that is required while performing the backup.
(4) Online restore
Online restore allows a filegroup, file, or a
specific page within a file to be restored while the rest of the database
remains online and available.
6) Recovery Enhancements
SQL Server 2005 reduces the amount of time it
takes for a database to become available with a new and faster recovery option.
In SQL Server 2005, users can access a recovering database after the
transaction log has been rolled forward. Earlier versions of SQL Server
required users to wait until incomplete transactions had rolled back, even if
the users did not need to access the affected parts of the database.
7) SQL Server Agent Enhancements
(1) Job Activity Monitor
A new auto-refreshing tool named Job Activity
Monitor has been added to help monitor the execution of scheduled jobs. You can
adjust the refresh rate of the screen and specify filtering criteria in order
to isolate a job or set of jobs.
(2) Shared job schedules
A job schedule can now be shared among jobs that
have the same job owner.
(3) Enhanced SQL Agent security
Several new roles have been added that provide
enhanced security management for the SQL Server Agent. In addition, a separate
proxy account can now be defined for each type of subsystem that the SQL Server
Agent can interact with.
(4) Performance improvements
New thread pooling and reduced job execution delays
have improved the performance of SQL Server Agent.
8) Replication Enhancements
(1) Replication security enhancements
The replication security model has changed,
allowing more control over the accounts under which replication agents run.
(2) Simplification of the user interface
Replication wizards and dialog boxes have been
redesigned for SQL Server 2005 to simplify the setup of a replication topology.
There are 40% fewer wizard dialogs, and scripting is now integrated into the
wizards, providing improved capabilities to script replication setup during or
after wizard execution.
(3) Replication of schema changes
A much broader range of schema changes can be
replicated without the use of special stored procedures. DDL statements are
issued at the publisher and are automatically propagated to all subscribers.
(4) Peer-to-peer transactional replication
The new peer-to-peer model allows replication
between identical participants in the topology (a master/master or symmetric
publisher concept).
(5) Initialization of a transactional subscription from a backup
Setting up replication between databases that
initially contain large volumes of data can be time-consuming and require
large amounts of storage. SQL Server 2005 provides a new publication option
that allows any backup taken after the creation of a transactional publication
to be restored at the subscriber, rather than using a snapshot to initialize
the subscription.
(6) Heterogeneous replication
Enhancements have been made for publishing data
from an Oracle database with transactional and snapshot replication. In
addition, there is improved support for many non-SQL Server subscribers.
(7) Replication mobility
Merge replication provides the ability to
replicate data over HTTPS, with the web synchronization option, which is useful
for synchronizing data from mobile users over the Internet or synchronizing
data between Microsoft SQL Server databases across a corporate firewall.
(8) Replication scalability and performance enhancements
Scalability and performance enhancements include
significant improvements to the performance of filtered merge publications and
the ability of the Distribution Agent in transaction replication to apply
batches of changes, in parallel, to a subscriber.
9) Failover Clustering Enhancements
(1) More SQL Server Services supported
In SQL Server 2005, support for failover
clustering has been extended to SQL Server Analysis Services, Notification
Services, and SQL Server replication.
(2) More nodes are supported (limited to four nodes in SQL 2000)
With Windows 2003 Enterprise Edition, you can
now create up to eight nodes in a single cluster.
For the limit of 4 on SQL Server 2000, see the
link (http://technet.microsoft.com/en-us/library/cc917693.aspx) for details.
Excerpted here:
… A node is a server within the cluster. Windows NT Server 4.0, Enterprise Edition,
Windows 2000 Advanced Server, and Windows 2003 Advanced Server support two-node
clustering; Windows 2000 Datacenter Server supports up to four-node clustering; and
Windows 2003 supports up to eight-node clustering. However, you are limited to four
nodes if SQL Server 2000 clustering is to be used. …
(3) SQL Server 2005 Standard Edition supports two-node failover clusters too
SQL Server 2005 also provides the ability to set
up a two-node failover cluster, using SQL Server 2005 Standard Edition. In
previous releases, clustering was available only for the Enterprise Edition.
(4) Simpler Microsoft Cluster Service (MSCS) setup
Installing MSCS has become very easy (with
Windows 2003 and above). MSCS is a prerequisite for SQL Server Clustering.
(5) Cleaner SQL Server Clustering installation wizard
The much-improved wizard detects and handles
most prerequisites and provides for a single point of installation for multiple
SQL Server node configurations.
(6) Increased instances per cluster
Up to 50 SQL Server instances per cluster are
now supported with SQL Server 2005 Enterprise Edition and up to 16 SQL Server
instances per cluster for SQL Server Standard Edition.
10) Notification Services Enhancements – discontinued in SQL Server 2008, replaced by data-driven subscriptions in SSRS or by Complex Event Processing introduced in SQL Server 2008 R2
Notification Services was provided as a feature
for SQL Server 2000 before SQL Server 2005 was released. SQL Server 2005
provides a number of enhancements to Notification Services, including the
following:
(1) Integration into SSMS
Notification
Services is now integrated into SSMS Object Explorer. Using Object Explorer,
you can perform most nscontrol command prompt utility tasks, and you can start
and stop instances of Notification Services.
//***************************************************************************
… NSControl is a command prompt utility for administering Notification
Services. It provides commands for deploying, configuring, monitoring,
and controlling Notification Services instances and applications …
***************************************************************************//
(2) Support for subscriber-defined conditions
In SQL Server 2005, Notification Services
supports a new type of action, called condition actions, which allow subscribers
to define their own query clauses over a predefined data set. Using condition
actions allows subscribers to fully define their own subscriptions over the
data set.
(3) Database independence
SQL Server Notification Services supports using
existing databases for instance and application data.
(4) New management API
SQL Server Notification Services has a new
management API, Microsoft.SqlServer.Management.Nmo, that can be used to develop
Notification Services instances and applications and to manage those instances
and applications.
11) Full-Text Search Enhancements
(1) Integrated backup and restoration for full-text catalogs
In SQL Server 2005, full-text catalogs can be
backed up and restored along with, or separate from, database data.
(2) Full-text catalogs included in database attach and detach
operations
SQL Server 2005 preserves full-text catalogs
when administrators perform database detach and attach operations.
(3) Full-text indexing performance improvements
SQL Server 2005 Full-Text Search includes a
major upgrade of the Microsoft Search service to version 3.0, which provides
massively improved full-text index population performance and provides one
instance of the Microsoft Search service per instance of SQL Server.
12) Web Services Enhancements
The main enhancement to Web services in SQL
Server 2005 is that you can use HTTP to access SQL Server directly, without
using a middle-tier listener such as Microsoft Internet Information Services
(IIS). SQL Server 2005 exposes a Web service interface to allow the execution
of SQL statements and invocation of functions and procedures directly. Query
results are returned in XML format and can take advantage of the Visual Studio
Web services infrastructure.
13) Analysis Services Enhancements
(1) Analysis Services is now fully integrated with the SSMS.
(2) Many of the same wizards and management aspects of Business
Intelligence Development Studio are also available in SSMS.
(3) SQL Server 2005 allows up to 50 separate instances of
Analysis Services on one machine with Enterprise Edition and up to 16 separate
instances with the Developer and Standard Editions.
(4) Analysis Services is now a cluster-aware application, and
failover clustering is completely supported.
(5) SQL Server 2005 supports the XML for Analysis Services 1.1
specification and Analysis Services Scripting Language (ASSL) for XML-based
administration.
(6) Proactive caching has been enabled at the partition level,
to push data that has changed into cache for immediate access in Analysis
Services. This is a big architectural change that directly addresses
high-performance query execution of data within online analytical processing
(OLAP) cubes that change frequently.
(7) The Unified Dimensional Model (UDM) paradigm provides a
powerful metadata abstraction layer to use for all Analysis Services reference
needs. It leverages concepts such as dimensions, measures, hierarchies, and so
on and provides these simplified reference points to all interfaces and
environments.
(8) Perspectives are now available to simplify and control the
end user’s view into complex cubes.
(9) Several new data mining algorithms have appeared, such as the
Naïve Bayes, Association, Sequence Clustering, Time Series/Linear Regression,
and Neural Network algorithms. Analysis Services also includes more robust usage
of, and integration with, SSIS for complex data transformations and filtering of
data mining data.
14) Reporting Services Enhancements
The first version of SQL Server Reporting
Services shipped in January 2004 for use with SQL Server 2000. However,
Reporting Services is now an integrated component of SQL Server 2005, and the
new version included in SQL Server 2005 contains a great deal of new features
geared toward ease of use, performance, and the improvement of a rich
development platform. Reporting Services is a SQL Server service, similar to
the relational database engine or Analysis Services. It allows you to design
reports, deploy them on a server, and make them available to users in a secured
environment in a variety of online and offline formats.
3. SQL Server 2008
I. New Features
1) New storage features
(1) FILESTREAM
(2) Sparse column and column sets (an untyped XML representation
that combines all the sparse columns of a table into a structured output.)
(3) Row-level and page-level compression
2) New data types
(1) Date and time
(2) Datetime2
(3) DatetimeOffset
(4) Hierarchyid
(5) Spatial: Geometry and Geography
(6) User defined table type
3) New T-SQL constructs
(1) Compound operators
(2) Grouping Sets
(3) Merge
(4) Row Constructors
(5) Table-valued parameters
(6) Insert over DML - A Filterable Alternative to OUTPUT…INTO
//**************************************************************************************************
INSERT OVER DML refers
to a special syntax in which you wrap an INSERT
INTO…SELECT statement around any
data manipulation language (DML) statement (INSERT, UPDATE, DELETE, or MERGE)
that has an OUTPUT clause, rather than using OUTPUT…INTO on the
DML statement itself. The subtle but crucial difference here is that OUTPUT…INTO
captures the changes, while the INSERT OVER DML syntax consumes the
changes captured by OUTPUT. By treating the OUTPUT changes as the
source for a standard INSERT INTO…SELECT statement, you can apply any WHERE
clause to the INSERT INTO…SELECT statement that you want.
Beyond that, there
isn’t anything more you can do with INSERT OVER DML than what you can with OUTPUT…INTO.
But to casually overlook the significance of filtering change data is to miss
the point of INSERT OVER DML entirely. By being able to filter change data with
a WHERE clause, you can better control which data changes captured are
shipped to the destination table and which aren’t. (from http://mscerts.programming4.us/sql_server/sql%20server%202012%20%20%20t-sql%20enhancements%20-%20the%20insert%20over%20dml%20syntax%20(part%201)%20-%20a%20filterable%20alternative%20to%20output%E2%80%A6into.aspx)
**************************************************************************************************//
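A sketch of the syntax, using hypothetical listing, staging, and archive tables; the outer WHERE clause filters the change rows produced by the MERGE statement's OUTPUT clause before they are inserted into the archive:
//***************************************************************************
-- Archive only the rows whose price actually changed
INSERT INTO dbo.PriceChangeArchive (ListingId, OldPrice, NewPrice)
SELECT ListingId, OldPrice, NewPrice
FROM (
    MERGE dbo.BoatListing AS tgt
    USING dbo.BoatListingStaging AS src ON tgt.Id = src.Id
    WHEN MATCHED THEN
        UPDATE SET tgt.AskingPrice = src.AskingPrice
    WHEN NOT MATCHED THEN
        INSERT (Id, BoatId, AskingPrice, LastUpdateUser)
        VALUES (src.Id, src.BoatId, src.AskingPrice, src.LastUpdateUser)
    OUTPUT $action, src.Id, deleted.AskingPrice, inserted.AskingPrice
) AS changes (MergeAction, ListingId, OldPrice, NewPrice)
WHERE MergeAction = 'UPDATE'
  AND OldPrice <> NewPrice;
***************************************************************************//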
(7) New Date and Time Functions
4) New performance features
(1) Filtered indexes and statistics
(2) FORCESEEK query hint - for more information see
http://technet.microsoft.com/en-us/library/bb510478(v=sql.105).aspx
(3) Hash values for finding similar queries in the plan cache
(4) Plan Guide Successful and Plan Guide Unsuccessful event classes
(5) Guided/Misguided Plan Executions/sec Performance Monitor counters
(6) LOCK ESCALATION option for ALTER TABLE
(7) Hot-add CPUs
5) New security features
(1) Transparent data encryption,
(2) Extensible Key Management,
(3) SQL Server Audit - key components: SQL Server Audit using
Extended Events, Server Audit Specification, Database Audit Specification,
Target (a file, Windows Security Log, or Windows Application Log)
6) New database administration features
(1) Backup compression - WITH COMPRESSION
(2) Change Data Capture - Providing
information about DML changes on a table and a database. By using change data
capture, you eliminate expensive techniques such as user triggers, timestamp
columns, and join queries.
//******************************************************************************************************************
-- Step 1: Create the CDC_DEMO database and two demo tables
CREATE DATABASE CDC_DEMO;
GO
USE CDC_DEMO;
GO
CREATE TABLE dbo.Boat (
    Id int,
    Name varchar(50),
    BoatType varchar(20),
    BoatLength tinyint,
    BoatBeam varchar(20));
CREATE TABLE dbo.BoatListing (
    Id int,
    BoatId int,
    AskingPrice money,
    LastUpdateUser varchar(25));

-- Step 2: Enable CDC on the database
EXEC sys.sp_cdc_enable_db;
GO

-- Step 3: Turn on Change Data Capture for a table
USE CDC_DEMO;
GO
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name = N'Boat',
    @role_name = NULL,              -- role gating not enabled
    @filegroup_name = N'PRIMARY';   -- consider writing to a separate filegroup
GO
-- To capture only some of the columns, use the additional parameter
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name = N'BoatListing',
    @role_name = NULL,              -- role gating not enabled
    @filegroup_name = N'PRIMARY',   -- consider writing audit data to a separate filegroup
    @captured_column_list = N'Id,AskingPrice,LastUpdateUser';
GO

-- Step 4: Add some initial data to the table
USE CDC_DEMO;
GO
SET NOCOUNT ON;
INSERT INTO dbo.Boat VALUES (1, 'Wind Witch', 'Sailboat', 23, '8 feet and 2 inches');
INSERT INTO dbo.Boat VALUES (2, 'Wind Witch II', 'Sailboat', 25, '8 feet and 6 inches');

-- Step 5: Display Change Data Capture information for the Boat table
USE CDC_DEMO;
GO
DECLARE @from_lsn binary(10), @to_lsn binary(10);
SET @from_lsn = sys.fn_cdc_get_min_lsn('dbo_Boat');
SET @to_lsn = sys.fn_cdc_get_max_lsn();
SELECT
    CT.__$start_lsn,
    CT.__$operation,
    CASE CT.__$operation
        WHEN 1 THEN 'Delete'
        WHEN 2 THEN 'Insert'
        WHEN 3 THEN 'Update - Pre'
        WHEN 4 THEN 'Update - Post'
    END AS Operation,
    CT.*,
    LSN.tran_begin_time,
    LSN.tran_end_time,
    LSN.tran_id
FROM cdc.fn_cdc_get_all_changes_dbo_Boat(@from_lsn, @to_lsn, N'all update old') AS CT
    INNER JOIN cdc.lsn_time_mapping AS LSN
        ON CT.__$start_lsn = LSN.start_lsn;
GO
******************************************************************************************************************//
(3) Change Tracking - enables applications to
obtain only changes that have been made to the user tables, along with the
information about those changes.
//******************************************************************************************************************
-- Basic steps: (a) enable change tracking for the database and the table
-- Enabling Change Tracking at the database level
ALTER DATABASE LearningCT
SET CHANGE_TRACKING = ON
(CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON)
-- AUTO_CLEANUP     -> switches the automatic tracking-table cleanup process ON or OFF
-- CHANGE_RETENTION -> specifies how long the tracked information is maintained

-- Enabling Change Tracking at the table level
ALTER TABLE Employee
ENABLE CHANGE_TRACKING
WITH (TRACK_COLUMNS_UPDATED = ON)
-- TRACK_COLUMNS_UPDATED -> also records which columns had their values changed

-- (b) view the change tracking configuration using system views
SELECT * FROM sys.change_tracking_databases
SELECT * FROM sys.change_tracking_tables
SELECT * FROM sys.internal_tables
WHERE parent_object_id = OBJECT_ID('Employee')
******************************************************************************************************************//
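To consume the tracked changes, an application typically asks for everything since the version it last synchronized. A minimal sketch against the Employee table above, which is assumed to have a primary key column EmployeeID (a requirement for change tracking):
//***************************************************************************
-- All changes to dbo.Employee since version 0
SELECT CT.SYS_CHANGE_VERSION,
       CT.SYS_CHANGE_OPERATION,   -- I = insert, U = update, D = delete
       CT.EmployeeID
FROM CHANGETABLE(CHANGES dbo.Employee, 0) AS CT;

-- Version number to remember for the next synchronization
SELECT CHANGE_TRACKING_CURRENT_VERSION();
***************************************************************************//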
(4) Data Collector
//******************************************************************************************************************
The data collector is a component of SQL Server 2008 that collects different sets of data. Data collection either
runs constantly or on a user-defined schedule. The data collector stores the collected data in a relational
database known as the management data warehouse.
The data collector is a core component of the data collection platform for SQL Server 2008 and the tools that are
provided by SQL Server. The data collector provides one central point for data collection across your database
servers and applications. This collection point can obtain data from a variety of sources and is not limited to
performance data, unlike SQL Trace.
The data collector enables you to adjust the scope of data collection to suit your test and production
environments. The data collector also uses a data warehouse, a relational database that enables you to manage the
data that you collect by setting different retention periods for your data.
The data collector supports dynamic tuning for data collection and is extensible through its API.
- Create logins and map them to data collector roles.
- Configure the management data warehouse.
- In addition, you may want to create proxy accounts to access the management data warehouse server.
- Out of the box, SQL 2008 comes with 3 pre-canned data collection sets (disk usage, query statistics, and server
  activity). However, it is possible to create custom data collection sets that are fully integrated with the SQL
  Server management data warehouse.
******************************************************************************************************************//
(5) Policy-Based Management - Four key components
//******************************************************************************************************************
- Target - an entity that is managed by
Policy-Based management; e.g. a database, a table, an index, etc.
- Facet - a predefined set of properties that can
be managed
- Condition - a property expression that evaluates
to True or False; i.e. the state of a Facet
- Policy - a condition to be checked and/or
enforced
******************************************************************************************************************//
(6) SQL Server Extended Events - provides a generic tracing and
troubleshooting framework that allows deeper and more granular control of
tracing than was possible with earlier methods such as DBCC, SQL Trace, and
Profiler. Key components: 4 packages, 254/259 events for 2008/2008 R2, actions
for raised events, targets for consuming the events, and predicates for
dynamically filtering events.
(7) Resource Governor - Three key components: resource pools (a collection of
system resources such as memory or CPU), workload groups (a user-specified
category of requests), and classification (or classifier user-defined functions -
a set of user-written rules that enable Resource Governor to classify session
requests into the workload groups).
//******************************************************************************************************************
-- Resource pool to be used by the OLTP application
CREATE RESOURCE POOL OLTPPool
WITH
(
    MIN_CPU_PERCENT=50, MAX_CPU_PERCENT=100,
    MIN_MEMORY_PERCENT=50, MAX_MEMORY_PERCENT=100
)
GO
-- Resource pool to be used by the report application
CREATE RESOURCE POOL ReportPool
WITH
(
    MIN_CPU_PERCENT=50, MAX_CPU_PERCENT=100,
    MIN_MEMORY_PERCENT=50, MAX_MEMORY_PERCENT=100
)
GO
-- Workload group to be used by the OLTP application
CREATE WORKLOAD GROUP OLTPGroup
USING OLTPPool;
GO
-- Workload group to be used by the report application
CREATE WORKLOAD GROUP ReportGroup
USING ReportPool;
GO
USE master;
GO
/* Create the classifier UDF to route incoming requests to the different workload
   groups, then enable Resource Governor with the ALTER RESOURCE GOVERNOR RECONFIGURE
   statement. The assumption here is that the OLTP application uses the "OLTPUser"
   login whereas the reporting application uses the "ReportUser" login. */
CREATE FUNCTION dbo.ResourceClassifier()
RETURNS SYSNAME
WITH SCHEMABINDING
AS
BEGIN
    -- Declare the variable to hold the value returned as sysname.
    DECLARE @WorkloadGroup AS SYSNAME
    -- If the user login is 'OLTPUser', map the connection to the
    -- OLTPGroup workload group.
    IF (SUSER_NAME() = 'OLTPUser')
        SET @WorkloadGroup = 'OLTPGroup'
    -- If the user login is 'ReportUser', map the connection to
    -- the ReportGroup workload group.
    ELSE IF (SUSER_NAME() = 'ReportUser')
        SET @WorkloadGroup = 'ReportGroup'
    ELSE
        SET @WorkloadGroup = 'default'
    RETURN @WorkloadGroup
END
GO
-- Register the classifier user-defined function and update the
-- in-memory configuration.
ALTER RESOURCE GOVERNOR
WITH (CLASSIFIER_FUNCTION=dbo.ResourceClassifier);
GO
-- Enabling Resource Governor (by default, when you install SQL Server,
-- Resource Governor is disabled).
-- This loads the stored configuration metadata into memory.
ALTER RESOURCE GOVERNOR RECONFIGURE
GO
-- Disabling Resource Governor
ALTER RESOURCE GOVERNOR DISABLE
GO
-- Reset statistics on all workload groups and resource pools
ALTER RESOURCE GOVERNOR RESET STATISTICS
GO
******************************************************************************************************************//
7) New SQL Server management features
(1) Transact-SQL Debugger,
(2) IntelliSense,
(3) Error list window,
(4) Multiserver queries - query multiple servers from one window
after creating a Registered SQL Server Group with the desired servers.
(5) PowerShell integration
8) Service Broker
(1) New Conversation Priorities
(2) New Diagnostic Utility - ssbdiagnose
(3) New Service Broker Elements in Object Explorer
(4) New System Monitor Object and Counters
(5) New Service Broker Tutorials
II. Enhancements
1) Installation Enhancements
(1) Slipstream Installation
Starting with SQL Server 2008 Service Pack 1,
you can now perform slipstream installations of SQL Server 2008. Slipstream is
the integration of the original installation media files with a service pack
and/or a cumulative update so that they can be installed in a single step.
(2) Selective uninstallation
SQL Server 2008 also provides the capability to
selectively uninstall cumulative updates and/or service packs via the Programs
and Features control panel.
2) SQL Server Management Studio
(1) Customize the columns displayed by the Object Explorer
Details window.
(2) Display the properties of a selected item from Object
Explorer Details at the bottom of the window.
(3) View a color-coded status bar at the bottom of the window
for Transact-SQL and MDX code editors which provide information about the
editor connection. The status bar changes color when a code editor opens
multiple connections.
(4) Customize the tab name for items in the title bar of the
code editor windows.
(5) Configure the number of rows returned when you open tables
via the Object Browser.
(6) Create and manage plan guides via the Programmability folder
in Object Browser.
3) Dynamic Management Views
(1) SQL Server 2008 adds new dynamic management views that return
memory-related information about SQL Server, including:
· sys.dm_os_memory_nodes—Returns memory allocation information for memory allocated through the SQL Server Memory Manager.
· sys.dm_os_nodes—Returns information about SQLOS nodes, internal structures of SQLOS that abstract hardware processor locality.
· sys.dm_os_process_memory—Returns complete information about memory allocated to the SQL Server process space.
· sys.dm_os_sys_memory—Describes the memory state of the operating system.
(2) In addition, the cpu_ticks_in_ms column in the
sys.dm_os_sys_info dynamic management view has been discontinued, and two new
columns, sqlserver_start_time_ms_ticks and sqlserver_start_time, have been
added.
4) Database Mirroring
SQL Server 2008 provides a number of
enhancements to database mirroring, mostly related to performance of database
mirroring, including:
(1) Compression of the transaction log records being streamed to
the mirror database,
(2) Asynchronous write-ahead on the incoming log stream and
read-ahead during the undo phase, and improved use of the log send buffers.
5) SQLCLR Enhancements
(1) Extending the 8KB size limit for CLR user-defined types and
CLR user-defined aggregates,
(2) Supporting the definition of ordered table-valued functions,
(3) Providing support for multiple input parameters for
user-defined aggregates, and including the option to define static methods as
user-defined functions.
6)
Replication Enhancements
SQL Server 2008 introduces a number of usability
and manageability enhancements for peer-to-peer replication and the Replication
Monitor.
(1) Peer-to-peer replication includes the following significant
usability and manageability improvements:
·
A new option, enabled by default, that
allows the Distribution Agent
to detect conflicts during synchronization and to stop applying changes at the
affected row.
·
The capability to add nodes to a
replication topology without quiescing the topology.
·
The capability to configure a topology visually
in the Configure Peer-to-Peer
Topology Wizard. The Topology Viewer enables you to perform common
configuration tasks, such as adding new nodes, deleting nodes, and adding new
connections between existing nodes.
(2) The Replication Monitor includes the following usability
improvements:
·
Most of the Replication Monitor grids
allow you to specify which columns to view, sort by multiple columns, and filter
rows in the grid based on column values.
·
The Common Jobs tab for the Publisher node
has been renamed to Agents.
·
The single Warnings and Agents tab for the
publication node has been
split into separate Warnings and Agents tabs to emphasize the difference
between administering performance warnings and monitoring replication agents.
7)
Service Broker Enhancements
(1) Support for conversation priorities.
This support allows administrators and
developers to specify that messages for important Service Broker conversations
are sent and received before messages from less important conversations to
ensure that low-priority work does not block higher-priority work.
(2) The new ssbdiagnose command-prompt utility to analyze and
diagnose Service Broker configurations and conversations.
(3) A new performance object and counters that report how often
Service Broker dialogs request transmission objects and how often inactive
transmission objects are written to work tables in tempdb.
(4) Support for Service Broker in SQL Server Management Studio
via new Service Broker Elements in Object Explorer.
8)
SSAS:
(1) A new Aggregation Designer makes it easier to browse and
modify aggregation designs.
(2) Aggregation design and usage-based optimization wizards have
been simplified and enhanced.
(3) New AMO warning messages alert users when they depart from
design best practices or make logical errors in database design.
(4) Simplified and enhanced cube and dimension wizards help you
create better cubes and dimensions in fewer steps.
(5) A new Attribute Relationship designer makes it easier to
browse and modify attribute relationships. A new Key Columns dialog box makes
editing key columns easier. Key columns can now be edited in the Properties
panel.
(6) An updated Dimension Structure tab helps make modifying
attributes and hierarchies easier.
(7) A new storage structure is available and performance has
been enhanced in all backup and restore scenarios.
(8) When creating a mining structure, you can now divide the
data in the mining structure into training and testing sets.
(9) You can now attach filters to a mining model and apply the
filter during both training and testing. Applying a filter to the model lets
you control the data used to train the model and lets you more easily assess
the performance of the model on subsets of the data.
(10) Cross-validation is now available in the Mining Accuracy
Chart view of the Data Mining Designer.
(11) SQL Server 2008 supports the creation, management, and use
of data mining models from Microsoft Excel when you use the SQL Server 2008
Data Mining add-ins for Office 2007.
(12) You are able to add aliases to columns in a mining model to
make it easier to understand column content and reference the column in DMX
statements.
9) SSIS
(1) Improvements in the parallel execution of data flow pipeline
paths on multiprocessor systems, going beyond the SQL Server 2005 limit of two
engines.
(2) A new script environment. Visual Studio for Applications
(VSA) has been replaced with Visual Studio Tools for Applications (VSTA). VSTA
allows the development of scripts written in Visual C# and Visual Basic,
providing support for the Script Task and Script Component. VSTA also supports
debugging and adding managed assemblies to a script at design time.
(3) Increased performance and improved caching options for
Lookup Transformations.
(4) Data profiling to improve data quality by identifying
potential data quality problems.
(5) Support for the new Change Data Capture feature in SQL
Server 2008, providing an effective method for performing incremental data
loads from source tables to data marts and data warehouses.
4. SQL Server 2008 R2
I. New
Features
1) Database Engine - Connecting to the Database Engine Using
Extended Protection
SQL Server supports service binding and channel
binding to help reduce authentication relay attacks on SQL Server
instances.
2)
Multi-Server Administration and Data-Tier Application:
(1) SQL Server Utility/ Utility Explorer - a central repository
for performance data and management policies that tailor the operation of
instances of the Database Engine that have been enrolled in the utility,
(2) DAC- a single unit for developing, deploying, and managing
the database objects used by an application
3)
Master Data Services
//******************************************************************************************************************
(1) Purpose: to discover and define non-transactional lists of
data, with the goal of compiling maintainable master lists
(2) Key components: (a) Master Data Services Configuration
Manager, a tool you use to create and configure Master Data Services databases
and Web applications, (b) Master Data Manager, a Web application you use to
perform administrative tasks (like creating a model or business rule), and that
users access to update data, and (c) Master Data Services Web service,
which developers can use to extend or develop custom solutions for Master Data
Services.
(3) Key terms: model, entity, attributes, members, attribute
group, business rules, derived hierarchies.
(4) How is it different from DQS? MDS and DQS are independent
but also collaborative. Both play important roles in enterprise information management
(EIM) scenarios.
· DQS emphasizes the cleansing, correction, and standardization of
data through the collection of business knowledge in a DQS knowledge base, and
the application of that knowledge in interactive and automated cleansing
scenarios.
· MDS emphasizes the ongoing "care and feeding" of
master data, providing a set of tools that allow business users to keep records
up to date, while allowing IT departments to secure and manage the master data
models, while integrating with external systems that utilize the master data.
· DQS uses Domain Rules and MDS employs Business Rules
· Domain rules in DQS and business rules in MDS are both used to
maintain the integrity of data, but they are used in different circumstances,
as part of different operations, and from within different tools. Domain rules
are used to identify data problems as part of a cleansing project, while
business rules are used to prevent bad data from being introduced into MDS. The
two types of rules can apply to the same data at different places in the
information management lifecycle, so the same rule can end up being defined in
both places.
***********************************************************************************************//
4) SSAS
– No Changes on Multidimensional Modeling and Data Mining
(1) PowerPivot for Excel
PowerPivot for Excel- (a) An add-in to Excel
2010 to create relationships in large amounts of data from different sources,
and then use that data as the basis for PivotTables and other data
visualization objects that support data analysis in Excel. (b) Using a local
Analysis Services VertiPaq engine that compresses and loads data
DAX - Data Analysis Expressions used by
PowerPivot for Excel for creating sophisticated calculations, using time
intelligence, and performing lookups
(2) PowerPivot for SharePoint
Extends SharePoint 2010 and Excel Services to
add server-side processing, collaboration, and document management support for
the PowerPivot workbooks that you publish to SharePoint. You still use
PowerPivot for Excel to create PowerPivot workbooks.
SQL Server 2008 R2 also introduces Analysis
Services integration with SharePoint products to support large scale data
analytics for Excel workbooks that you publish to a SharePoint site. In this
release, Setup includes a new setup role installation option so that you can
deploy Analysis Services as a preconfigured PowerPivot for SharePoint server in
a SharePoint farm.
5) SSIS
– No Change
6) SSRS
(1) Report Builder 3.0,
(2) Report parts,
(3) Shared datasets - A shared dataset is published on a
report server and can be used by multiple reports. A shared dataset must be
based on a shared data source. A shared dataset can be cached and scheduled by
creating a cache refresh plan.
(4) Sparklines and data bars,
(5) Indicators,
(6) Calculating aggregates of aggregates - You can create
expressions that calculate an aggregate of an aggregate. For example, in a cell
in the row group header that is associated with a group based on year, you can
calculate the average monthly sales for year by using the
expression =Avg(Sum(Fields!Sales.Value,"Month"),"Year").
By using this feature for charts and gauges that are nested in a
table, you can align horizontal and vertical axes for charts and scales for
gauges. You do this by calculating the maximum and minimum of aggregated values
so the nested items use the same ranges.
(7) Maps,
(8) Lookup functions - Expressions in data regions can now
include references to Lookup
Function (Report Builder 3.0 and SSRS), LookupSet
Function (Report Builder 3.0 and SSRS), and Multilookup
Function (Report Builder 3.0 and SSRS). These lookup functions can retrieve
values from a dataset that has name/value pairs and that is not bound to the
data region.
In the following
expression, Lookup compares the value of ProductID to ID in each row
of the dataset called "Product" and, when a match is found, returns
the value of the Name field for that row.
=Lookup(Fields!ProductID.Value, Fields!ID.Value, Fields!Name.Value,
"Product")
II.
Enhancement
1) SQL
Server 2008 R2 Installation
(1) Installing SQL Server Using SysPrep.
A stand-alone installation of SQL Server
includes two steps: prepare and complete. In SQL Server 2008, these two steps
ran as one integrated step during installation.
Starting with SQL Server 2008 R2, in addition to
the integrated installation, the two steps are available as separate
installation actions, and can be used in preparing an image of SQL Server.
(2) Side-by-side instances of SQL Server 2008 R2 and SQL Server
2008.
SQL Server 2008 R2 and SQL Server 2008 share the
same major version number and therefore share some SQL Server components.
(3) A new command line parameter /ACCEPTSQLSERVERLICENSETERMS is
now required for unattended installations of SQL Server.
(4) Starting in SQL Server 2008 R2, the Reporting Services
component no longer supports Itanium-based servers running Windows Server 2003
or Windows Server 2003 R2.
Reporting Services continues to support other
64-bit operating systems, including Windows Server 2008 for Itanium-Based
Systems and Windows Server 2008 R2 for Itanium-Based Systems. To upgrade to SQL
Server 2008 R2 from a SQL Server 2008 installation with Reporting Services on
an Itanium-based system edition of Windows Server 2003 or Windows Server 2003
R2, you must first upgrade the operating system.
(5) New role-based installation parameter for installing SQL
Server in a predetermined configuration.
SQL Server Express is installed using the AllFeatures_WithDefaults
role that minimizes the required user input. The AllFeatures_WithDefaults
role installs all available features, adds the current user to the SQL Server sysadmin
fixed server role, and uses default values for the service accounts whenever
possible. The AllFeatures_WithDefaults role can be provided as a command
line parameter when installing the full product.
(6) The option to enable the feature usage reporting during SQL
Server Setup is now available on the Accept License Terms page.
In the previous versions of SQL Server, the
feature usage reporting option was available through the Error and Usage
Reporting page which now only includes the option to enable error
reporting.
(7) Two new premium editions to meet the needs of large-scale
datacenters and data warehouses: SQL Server 2008 R2 Datacenter. SQL Server 2008
R2 Parallel Data Warehouse.
SQL Server Datacenter delivers a high-performing
data platform that provides the highest levels of scalability for large
application workload, virtualization, and consolidation.
Unlike the other editions of SQL Server which
are software-only products, the Parallel Data Warehouse edition is a
combination hardware and software offering. You can’t just buy SQL Server 2008
R2 Parallel Data Warehouse edition from Microsoft and install it on one of your
own servers as you can with the other editions of SQL Server. Instead, you
purchase SQL Server 2008 R2 Parallel Data Warehouse from one of Microsoft’s
partners, which are HP, Dell, IBM, and Bull. Each of the different partners
provides a variety of different sizes of system configurations.
(8) The largest database supported by SQL Server Express has
been increased from 4 GB to 10 GB.
(9) The maximum number of CPUs supported by SQL Server Enterprise has
increased to 8.
2)
Database Engine
(1) Unicode Compression
SQL Server 2008 R2 introduces Unicode
compression. Data that is stored in nvarchar(n) and nchar(n) columns can be
compressed by using an implementation of the Standard Compression Scheme for
Unicode (SCSU) algorithm.
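Unicode compression requires no new syntax; it is applied automatically to eligible nchar(n) and nvarchar(n) columns when row or page compression is enabled on the object. A minimal sketch, assuming a hypothetical dbo.Customers table:
//******************************************************************************************************************
--Unicode (SCSU) compression is applied automatically to eligible nchar/nvarchar(n) columns
--when ROW or PAGE compression is enabled on the object.
ALTER TABLE dbo.Customers REBUILD WITH (DATA_COMPRESSION = ROW);
*****************************************************************************************************************//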
3)
Service Broker - turn on and off poison-message handling on a queue
SQL Server 2008 R2 introduces the option to turn
on and off poison-message handling on a queue. A queue that has poison-message
handling turned off will not be disabled after consecutive transaction rollbacks. With
this feature, a custom poison-message handling strategy can be defined by an
application if you want. The Transact-SQL CREATE QUEUE and ALTER QUEUE statements have been updated, and the [IsPoisonMessageHandlingEnabled]
property has been added to
the [Microsoft.SqlServer.Management.Smo.Broker.ServiceQueue] API.
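A minimal sketch of the queue option; the queue names are hypothetical:
//******************************************************************************************************************
--Turn poison-message handling off for an existing queue
ALTER QUEUE dbo.TargetQueue WITH POISON_MESSAGE_HANDLING (STATUS = OFF);
--Or specify it when creating a queue
CREATE QUEUE dbo.AuditQueue WITH POISON_MESSAGE_HANDLING (STATUS = ON);
*****************************************************************************************************************//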
4) SSIS
(1) ADO NET Destination Editor (Connection Manager Page)
5. SQL Server 2012
I. New Features
1) New
storage features
(1) Columnstore indexes,
(2) FileTable storage.
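A minimal sketch of a SQL Server 2012 nonclustered columnstore index, assuming a hypothetical fact table; note that in 2012 the table becomes read-only while the columnstore index exists:
//******************************************************************************************************************
CREATE NONCLUSTERED COLUMNSTORE INDEX IX_FactSales_ColumnStore
ON dbo.FactSales (OrderDateKey, ProductKey, SalesAmount);
*****************************************************************************************************************//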
2) New
Transact-SQL (T-SQL) constructs
(1) Sequence objects,
(2) THROW statement - Raises an exception and transfers
execution to a CATCH block of a TRY…CATCH construct
//******************************************************************************************************************
THROW [ { error_number | @local_variable },
        { message | @local_variable },
        { state | @local_variable } ]
[ ; ]
What are the benefits of THROW over RAISERROR?
- Correct line
number of the error (even when you re-throw the exception with the THROW
keyword, the original error number and line number are preserved, unlike the
RAISERROR command where they are overwritten) – not the line number where
RAISERROR executed!
- Easy to use - no
need for extra code as in RAISERROR
- Complete
termination - The severity level raised by THROW is always 16. But the
more important feature is that when a THROW statement is executed in a CATCH
block, the batch is terminated and the statements following it are not executed.
- Independence of
sys.messages - This feature makes it possible to re-throw custom message
numbers without the need to use sp_addmessage
to add
the number.
*****************************************************************************************************************//
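A minimal sketch of both uses of THROW (the custom error number is arbitrary; user-defined error numbers must be 50000 or higher):
//******************************************************************************************************************
BEGIN TRY
    SELECT 1 / 0;   --raises a divide-by-zero error
END TRY
BEGIN CATCH
    PRINT 'Logging the error...';
    THROW;          --re-throws the original error, preserving its number, state, and line number
END CATCH;

--Raising a custom error without registering it in sys.messages
THROW 50001, 'The order could not be processed.', 1;
*****************************************************************************************************************//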
(3) New conversion functions - PARSE, TRY_PARSE, and TRY_CONVERT
//******************************************************************************************************************
Parse - Returns the result of an
expression, translated to the requested data type
PARSE ( string_value AS data_type [ USING culture ] )
-- return 2010-12-13 00:00:00.0000000
SELECT PARSE('Monday, 13 December 2010' AS datetime2 USING 'en-US') AS Result;
-- return 345.98
SELECT PARSE('€345,98' AS money USING 'de-DE') AS Result;
*****************************************************************************************************************//
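TRY_PARSE and TRY_CONVERT behave like PARSE and CONVERT but return NULL instead of raising an error when the conversion fails; a minimal sketch with arbitrary values:
//******************************************************************************************************************
SELECT TRY_PARSE('13/42/2010' AS datetime2 USING 'en-US') AS BadDate;  --returns NULL
SELECT TRY_CONVERT(int, 'abc') AS BadInt;                              --returns NULL
SELECT TRY_CONVERT(int, '123') AS GoodInt;                             --returns 123
*****************************************************************************************************************//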
(4) New logical functions - CHOOSE and IIF,
//******************************************************************************************************************
Returns the item at the specified index
from a list of values in SQL Server.
SELECT CHOOSE ( 3, 'Manager', 'Director',
'Developer', 'Tester' ) AS Result; --return Developer
*****************************************************************************************************************//
//******************************************************************************************************************
Returns one of two values, depending on whether the Boolean expression
evaluates to true or false in SQL Server.
IIF ( boolean_expression,
true_value, false_value )
*****************************************************************************************************************//
(5) New string functions
- CONCAT and FORMAT,
//******************************************************************************************************************
CONCAT ( string_value1,
string_value2 [, string_valueN ] )
*****************************************************************************************************************//
//******************************************************************************************************************
FORMAT - for numeric and date/time values only
FORMAT ( value, format [, culture ] )
*****************************************************************************************************************//
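A minimal sketch of both functions; the sample values are arbitrary:
//******************************************************************************************************************
--CONCAT implicitly converts its arguments and treats NULL as an empty string
SELECT CONCAT('SQL', NULL, ' Server ', 2012) AS Result;    --returns 'SQL Server 2012'
--FORMAT accepts a .NET format string and an optional culture
SELECT FORMAT(GETDATE(), 'yyyy-MM-dd') AS IsoDate;
SELECT FORMAT(1234567.89, 'N2', 'de-DE') AS GermanNumber;  --returns '1.234.567,89'
*****************************************************************************************************************//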
(6) New date and time functions
·
DATEFROMPARTS ,
·
DATETIME2FROMPARTS ,
·
DATETIMEFROMPARTS ,
·
DATETIMEOFFSETFROMPARTS ,
·
SMALLDATETIMEFROMPARTS ,
·
TIMEFROMPARTS
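Each of these functions builds a value of the corresponding type from individual integer parts; a minimal sketch:
//******************************************************************************************************************
SELECT DATEFROMPARTS(2012, 12, 31) AS d;                      --2012-12-31
SELECT DATETIMEFROMPARTS(2012, 12, 31, 23, 59, 59, 0) AS dt;  --2012-12-31 23:59:59.000
SELECT TIMEFROMPARTS(23, 59, 59, 0, 0) AS t;                  --23:59:59
*****************************************************************************************************************//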
(7) New ROWS and RANGE Clauses
Provides support for window
framing, which can be used to restrict the results to a subset of rows by
specifying the start and end points within the partition of the OVER clause.
//******************************************************************************************************************
--2008 R2 and below
OVER (
    [ <PARTITION BY clause> ]
    [ <ORDER BY clause> ]
)
--2012 and later
OVER (
    [ <PARTITION BY clause> ]
    [ <ORDER BY clause> ]
    [ <ROWS or RANGE clause> ]
)
*****************************************************************************************************************//
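A minimal sketch of a ROWS frame used for a running total, assuming a hypothetical dbo.Orders table with OrderDate and Amount columns:
//******************************************************************************************************************
SELECT OrderDate,
       Amount,
       SUM(Amount) OVER (ORDER BY OrderDate
                         ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS RunningTotal
FROM dbo.Orders;
*****************************************************************************************************************//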
(8) New window offset
functions – LAG and LEAD functions (=previous and next)
LAG and LEAD functions used in conjunction with
OVER clause let you return a value from a specific row that’s a certain offset
from the current row.
//******************************************************************************************************************
LAG (scalar_expression [, offset] [, default])
    OVER ( [ partition_by_clause ] order_by_clause )
USE AdventureWorks2012;
GO
SELECT BusinessEntityID, YEAR(QuotaDate) AS SalesYear, SalesQuota AS CurrentQuota,
LAG(SalesQuota, 1,0) OVER (ORDER BY YEAR(QuotaDate)) AS PreviousQuota
FROM Sales.SalesPersonQuotaHistory
WHERE BusinessEntityID = 275 and YEAR(QuotaDate) IN ('2005','2006');
*****************************************************************************************************************//
(9) New window rank
distribution functions — PERCENT_RANK, PERCENTILE_CONT, PERCENTILE_DISC, and
CUME_DIST.
(10) New ad hoc query paging - Provides a mechanism to implement
paging using the relatively simple syntax provided by the ANSI standard ORDER BY ...
OFFSET / FETCH clause.
//******************************************************************************************************************
OFFSET / FETCH: using them with ORDER BY
The OFFSET-FETCH clause provides you
with an option to fetch only a window or page of results from the result set. OFFSET-FETCH
can be used only with the ORDER BY clause.
[ORDER BY { order_by_expression [ ASC | DESC ] } [
,...n][<offset_fetch>] ]
<offset_fetch> ::= {OFFSET { integer_constant |
offset_row_count_expression } { ROW | ROWS }
[FETCH { FIRST | NEXT } {integer_constant | fetch_row_count_expression }
{ ROW | ROWS } ONLY]}
--Skip the first 10 rows from the sorted result set and return the remaining rows.
SELECT [First Name] + ' ' + [Last Name] FROM Employees ORDER BY [First Name] OFFSET 10 ROWS;
--Skip the first 10 rows from the sorted result set and return the next 5 rows.
SELECT [First Name] + ' ' + [Last Name] FROM Employees ORDER BY [First Name] OFFSET 10 ROWS FETCH NEXT 5 ROWS ONLY;
*****************************************************************************************************************//
3) New
scalability and performance features
(1) Indirect checkpoints,
(2) FORCESCAN table hint,
(3) Number of table partitions increased to 15,000.
4) New
security features
(1) Database Audit (???, this is not new; introduced in
2008, also built on XEvents),
(2) User-defined server roles (see the sketch after this list),
(3) Contained databases.
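A minimal sketch of a user-defined server role; the role and login names are hypothetical:
//******************************************************************************************************************
CREATE SERVER ROLE ops_monitoring AUTHORIZATION sa;
GRANT VIEW SERVER STATE TO ops_monitoring;
ALTER SERVER ROLE ops_monitoring ADD MEMBER [DOMAIN\OpsUser];
*****************************************************************************************************************//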
5) New
availability features - AlwaysOn
SQL Server 2012 introduces a number of high-availability enhancements known
as AlwaysOn, which include AlwaysOn Availability Groups and AlwaysOn Failover
Cluster Instances.
6)
Statistical Semantic Search
Statistical Semantic Search builds upon the
existing full-text search feature in SQL Server by querying the contextual
meaning of terms within a document repository.
//******************************************************************************************************************
Semantic search builds upon the existing
full-text search feature in SQL Server, but enables new scenarios that extend
beyond keyword searches. While full-text search lets you query the words in a document, semantic search lets you query the meaning of the document. Solutions that are now possible include
automatic tag extraction, related content discovery, and hierarchical
navigation across similar content. For example, you can query the index of key
phrases to build the taxonomy for an organization, or for a corpus of
documents. Or, you can query the document similarity index to identify resumes
that match a job description.
Statistical Semantic Search provides
deep insight into unstructured documents stored in SQL Server databases by
extracting and indexing statistically relevant key phrases. Then it also uses these key phrases to identify and index documents that are
similar or related.
Before
you can index documents with Semantic Search, you have to store the documents in a SQL Server database.
The FileTable feature in SQL Server 2012 makes
unstructured files and documents first-class citizens of the relational
database. As
a result, database developers can manipulate documents together with structured
data in Transact-SQL set-based operations.
You
query these semantic indexes by using three Transact-SQL rowset functions to
retrieve the results as structured data: semantickeyphrasetable,
semanticsimilaritytable, semanticsimilaritydetailstable
--Find the Key Phrases in a Document
--The following query gets the key phrases that were identified in the sample document.
--It presents the results in descending order by the score that ranks the statistical significance of each key phrase.
--This query calls the semantickeyphrasetable (Transact-SQL) function.
--Declarations added for completeness; the data types are assumed and should match the Documents table.
DECLARE @Title nvarchar(1000), @DocID int
SET @Title = 'Sample Document.docx'
SELECT @DocID = DocumentID
FROM Documents
WHERE DocumentTitle = @Title
SELECT @Title AS Title, keyphrase, score
FROM SEMANTICKEYPHRASETABLE(Documents,
*, @DocID)
ORDER BY score DESC
--Find Similar or Related Documents
--The following query gets the documents that were identified as similar or related to the sample document.
--It presents the results in descending order by the score that ranks the similarity of the 2 documents.
--This query calls the semanticsimilaritytable (Transact-SQL) function.
SET @Title = 'Sample Document.docx'
SELECT @DocID = DocumentID
FROM Documents
WHERE DocumentTitle = @Title
SELECT @Title AS SourceTitle,
DocumentTitle AS MatchedTitle,
DocumentID, score
FROM SEMANTICSIMILARITYTABLE(Documents,
*, @DocID)
INNER JOIN Documents ON DocumentID =
matched_document_key
ORDER BY score DESC
--Find the Key Phrases That Make Documents Similar or Related
--The following query gets the key phrases that make the 2 sample documents similar or related to one another.
--It presents the results in descending order by the score that ranks the weight of each key phrase.
--This query calls the semanticsimilaritydetailstable (Transact-SQL) function.
--Declarations added for completeness; the data types are assumed and should match the Documents table.
DECLARE @SourceTitle nvarchar(1000), @MatchedTitle nvarchar(1000), @SourceDocID int, @MatchedDocID int
SET @SourceTitle = 'first.docx'
SET @MatchedTitle = 'second.docx'
SELECT @SourceDocID = DocumentID FROM Documents WHERE DocumentTitle = @SourceTitle
SELECT @MatchedDocID = DocumentID FROM Documents WHERE DocumentTitle = @MatchedTitle
SELECT @SourceTitle AS SourceTitle,
@MatchedTitle AS MatchedTitle, keyphrase, score
FROM semanticsimilaritydetailstable(Documents,
DocumentContent,
@SourceDocID, DocumentContent, @MatchedDocID)
ORDER BY score DESC
*****************************************************************************************************************//
7) Data
Quality Services
This new feature allows you to build a knowledge
base of data rules and use those to perform a variety of critical data quality
tasks, including correction, enrichment, standardization, and de-duplication of
your data.
II. Enhancement
1) SQL
Server Management Studio Enhancements
(1) Alternative keyboard shortcut schemes
SSMS supports two keyboard shortcut schemes. The
default is based on the Microsoft Visual Studio 2010 keyboard shortcuts. The
other scheme matches the keyboard shortcuts from SQL Server 2008 R2. The SQL
Server 2000 shortcuts are no longer available in SQL Server 2012.
(2) Query Editor T-SQL debugger
The debugger was enhanced in a number of
different ways, including improved breakpoint processing, expanded Watch window
capabilities, and the ability to debug against earlier versions of SQL Server, including
SQL Server 2005 (SP2) or later.
(3) Query Editor snippets
T-SQL code snippets are a new type of template
that was added in SQL Server 2012. These templates are another great starting
point when writing new T-SQL statements in the Database Engine Query Editor.
(4) Query Editor IntelliSense
IntelliSense has been optimized to produce
better string matches. It has also been enhanced to support the newly added
snippets.
(5) Extended Events GUI
SSMS now provides a graphical interface to
create, manage, and monitor Extended Events sessions for monitoring SQL Server.
(6) Database Recovery advisor
The Database Recovery tool is launched from the
Restore Database window and displays a visual
timeline that outlines the database backups that are available for
restore, allowing a user to restore to a point in time by simply clicking the
visual timeline; the necessary backups will be selected for recovery.
2)
Resource Governor Enhancements
SQL Server 2012 provides enhancements to the
Resource Governor that enable you to more effectively govern performance,
especially for machines with a large number of CPU cores. The enhancements
include the following:
(1) Support for up to 64 resource pools, an increase from the
previous maximum of 20, to better enable partitioning larger machines between
more workloads
(2) A new CAP_CPU_PERCENT option for resource pools to provide a
means of setting a hard-cap limit on CPU resource usage, which can provide
greater predictability
(3) A new AFFINITY option for resource pools providing a
mechanism to affinitize resource pools to one or more schedulers and NUMA
nodes, which can provide greater isolation of CPU resources
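A minimal sketch of the new pool options; the pool name, percentages, and scheduler range are hypothetical:
//******************************************************************************************************************
CREATE RESOURCE POOL ReportingPool
WITH (
    MAX_CPU_PERCENT = 50,
    CAP_CPU_PERCENT = 30,          --hard cap on CPU usage (new in SQL Server 2012)
    AFFINITY SCHEDULER = (0 TO 3)  --bind the pool to schedulers 0 through 3 (new in SQL Server 2012)
);
ALTER RESOURCE GOVERNOR RECONFIGURE;
*****************************************************************************************************************//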
3)
Spatial Data Enhancements
(1) SQL Server 2012 adds four new spatial type classes:
FULLGLOBE, CIRCULARSTRING, COMPOUNDCURVE, and CURVEPOLYGON.
(2) SQL Server 2012 also includes features and enhancements that
improve performance for spatial data and operations, including new spatial
indexes, a spatial index hint, compression, improved index build time, and a
new query plan for spatial queries.
(3) SQL Server 2012 provides further performance enhancements at
the spatial library level for Distance() and ShortestLineTo() methods along
with performance improvements at the SQL level for methods which are based on
them, including STDistance() and ShortestLineTo().
4)
Service Broker Enhancements
(1) The ability to send messages to multiple services.
(2) The ability to reconfigure poison message handling and to
view this setting for any service.
(3) Built-in tracking for how long a message has been in a
queue. (A new column message_enqueue_time now exists on every queue.)
(4) Built-in support for AlwaysOn Availability.
5)
Full-Text Search Enhancements
SQL Server 2012 full-text search enhancements
include improvements in performance and scale and new functionality, including
the previously discussed Semantic Search capability.
Full-text search in SQL Server 2012 now scales
to more than 100 million documents. Significant improvement can also be
realized in the traditional full-text query performance, which is reported to
be approximately 7 to 10 times faster than in the previous versions of SQL
Server. Additional performance improvements are a result of improved internal
implementation, improved query plans, and preventing queries from blocking
index updates.
The new functionality associated with full-text
search includes the following:
(1) Property Search
In SQL Server 2012, users can now conduct
searches on document properties.
(2) Customizable Near
The Custom Proximity Operator, or Customizable
Near, makes it possible for users to specify how close a search term must
appear to others before it is considered a successful match.
(3) New Wordbreaks
6)
Master Data Services Enhancements
(1) The ability to use Excel to manage master data via the
Master Data Services Add-in for Excel.
(2) The ability to match data before loading, to confirm that
you are not adding duplicate records, using SQL Server Data Quality Services
(DQS) matching to compare two sources of data.
(3) The ability to load all members and attribute values for an
entity at one time.
(4) A new higher-performance MDSModelDeploy command-line tool is
now available to create and deploy packages with data, in addition to a Model
Deployment Wizard in the web application that is used to deploy model
structures only.
(5) A new Model Package Editor that enables you to deploy
selected parts of a model rather than the entire model.
(6) Creating a recursive hierarchy with three or more levels is
now supported in MDS.
(7) SharePoint workflow integration.
7) SSAS
SQL Server 2012 provides enhancements to almost
every component of SSAS, along with the addition of major scale-out and
in-memory capabilities. The following are some of the top new features and
enhancements in SSAS:
(1) A new architecture for SQL Server 2012 SP1 PowerPivot that
supports a PowerPivot server outside of a SharePoint 2013 farm that leverages
Excel Services for querying, loading, refreshing, and saving data.
(2) PowerPivot (in Microsoft Excel 2013) supports deeper
integration with data exploration workflows.
(3) Tabular models enhancements, including optimized storage for
measures and key performance indicators (KPIs), extended data categorizations,
extended characters, hierarchy annotation, and improved support when importing
from Data Market (external data) data feeds.
(4) Tabular Model Designer Diagram View, which displays tables,
with relationships between them, in a graphical format.
(5) xVelocity In-memory Analytics Engine for tabular models.
(6) New trace events in multidimensional databases to help
troubleshoot lock-related query or processing problems.
(7) Unified business intelligence (BI) semantic modeling schema.
(8) Increased language support, which now includes MDX, DMX,
DAX, XML/A, and ASSL.
//******************************************************************************************************************
The Data Analysis Expressions (DAX)
language is a new formula language that you can use in
PowerPivot workbooks. DAX is not a subset of MDX, but a new formula
language that is considered an extension of the formula language in Excel. The
DAX statements operate against an in-memory relational data store, comprised of
tables and relationships in the PowerPivot workbook. You use DAX to create
custom measures and calculated columns.
*****************************************************************************************************************//
//******************************************************************************************************************
Data Mining Extensions (DMX) is a
language that you can use to create and work with data mining models in
Microsoft SQL Server Analysis Services. You can use DMX to create the
structure of new data mining models, to train these models, and to browse,
manage, and predict against them. DMX is composed of data definition language (DDL)
statements, data manipulation language (DML) statements, and functions and
operators.
*****************************************************************************************************************//
//******************************************************************************************************************
XMLA is an XML-based protocol
used to handle communication between clients and an SSAS database. It
is SOAP based and is designed to be a standard access point for any
multidimensional source. XMLA is the driving scripting language behind many of
the tasks within SSAS. Some of the scripting tasks that can be accomplished by
XMLA include:
- Creating and changing objects
- Processing objects
- Handling connections
- Backup and restores
- Designing aggregations
- Merging partitions
XMLA contains two basic methods, Discover
and Execute. The Discover method retrieves lists of information in XML format,
while the Execute method sends commands to an SSAS instance for execution.
In summary, XMLA is the XML based protocol used to communicate with an Analysis
Server. It can be used to script discover and execute methods including Create
and Alter, Backup and Restore, and Processing cube objects. These scripts are
run in Management Studio and can be automated via PowerShell or the SQL Agent.
*****************************************************************************************************************//
//******************************************************************************************************************
Microsoft SQL Server Analysis
Services client applications, including SQL Server Management Studio and
Business Intelligence Development Studio, communicate with Analysis Services by
using SOAP messages. Analysis Services Scripting Language (ASSL), which is the
XML dialect used for these messages, is made up of two parts:
- A Data Definition Language (DDL), or object
definition language, which defines and
describes an instance of Analysis Services, as well as the
databases and database objects that the instance contains. This object
definition language is the subject of this reference section.
- A command language that sends action commands, such as Create, Alter,
or Process, to an instance of Analysis Services. This command language is
discussed in the XML
for Analysis Reference (XMLA).
*****************************************************************************************************************//
8) SSIS
(1) A new deployment model for deploying projects to the
Integration Services server.
//******************************************************************************************************************
Project
Deployment Model
The deployment package contains everything
(packages and parameters) needed for deployment in a single file with an .ispac extension,
which streamlines the deployment process. The model also includes
Project/Package Parameters, Environments, Environment variables and Environment
references.
Project
Parameters and Package Parameters
If
you are using the project deployment model, you can create project parameters
or package parameters. These parameters allow you to set the properties of
package components at package execution time and change the execution behavior.
The
basic difference between project parameters and package parameters is the
scope. A project parameter can be used in any package of the project whereas
the package level parameter is specific to the package where it has been
defined. The best part of these parameters is that you can mark any of them as
sensitive and it will be stored in an encrypted form in the catalog.
Environments
and Environment variables
An
environment (development, test or production) is a container for environment
variables which are used to apply different groups of values to the properties
of package components by means of environment reference during runtime.
An
environment reference is the mapping between an environment variable and a
property of a package component, used to pass a value at runtime. A project can have multiple
environment references, but a single instance of package execution can only use
a single environment reference. This means that when you are executing your
project/package you need to specify a single environment to use for that
execution instance.
When
defining a variable you can mark it sensitive and hence it will be stored in an
encrypted form and NULL will be returned if you query it using T-SQL.
Integration
Service Catalog
You
can create one Integration Services catalog per SQL Server instance. It stores
application data (deployed projects including packages, parameters and
environments) in a SQL Server database and uses SQL Server encryption to
encrypt sensitive data. When you create a catalog you need to provide a
password which will be used to create a database master key for encryption and
therefore it's recommended that you back up this database master key after
creating the catalog.
The
catalog uses SQLCLR (the .NET Common Language Runtime (CLR) hosted within SQL
Server), so you need to enable CLR on the SQL Server instance before creating a
catalog. This catalog also stores
multiple versions of the deployed SSIS projects, and if required you can revert
to any of the deployed versions. The catalog also stores details about the
operations performed on the catalog, such as project deployment with versions and
package execution, which you can monitor on the server. There is one
default job provided for cleanup of operation data, which can be controlled by
setting catalog properties.
Basic
Steps:
1. Create an Integration
Services Catalog
2. Create a SSIS project
with Project Deployment Model
3. Deploy the project to
Integration Services Catalog
4. Create Environments,
Environment variables
5. Set up environment
reference in the deployed project
6. Execute deployed
project/package using the environment for example either for TEST or PROD
7. Analyze the operations
performed on the Integration Services Catalog
8. Validate the deployed
project or package
9. Redeploy the project to
Integration Services Catalog
10. Analyze deployed project versions and restore to the
desired one
*****************************************************************************************************************//
(2) The ability to use
server environments to specify runtime values for packages contained in a
project.
(3) New views, stored procedures, and stored functions to help
in troubleshooting performance and data problems.
(4) The Flat File connection manager now supports parsing of
flat files with embedded qualifiers. Improved remapping of columns by the SSIS
Designer when a new data source is connected. (Columns are remapped based on
their name and data type rather than by lineage ID.)
(5) The Integration Services Merge and Merge Join
transformations are more robust and reliable. Integration Services now includes
the DQS Cleansing transformation that enables you to more easily and accurately
improve the quality of data.
(6) The ability to use SQL Server 2012’s Change Data Capture
technology from within Integration Services.
9) SSRS
Enhancements
SQL Server 2012 includes only a small number of
significant updates to the core SSRS platform, which is not surprising
considering the SSRS overhaul that came with SQL Server 2008 R2. The
enhancements provided in SQL Server 2012 include the following:
(1) An updated rendering extension for Microsoft Excel
2007-2010, supported both by Report Builder and the SSRS web rendering controls
(2) An updated rendering extension for Microsoft Word 2007-2010
(3) The ability to view web-based reports on touch-screen Apple
iOS 6 devices and on Microsoft Surface-based tablet devices
6. SQL Server 2014
I. New Features
1)
Encryption for Backups – New in 2014 (it’s encryption, not compression!)
//******************************************************************************************************************
Backup Encryption in SQL Server 2014
Starting in SQL Server 2014, SQL Server has the ability
to encrypt the data while creating a backup. By specifying the encryption
algorithm and the encryptor (a Certificate or Asymmetric Key) when creating a
backup, you can create an encrypted backup file. All storage destinations
(on-premises and Windows Azure storage) are supported. In addition, encryption
options can be configured for SQL Server Managed Backup to Windows Azure
operations, a new feature introduced in SQL Server 2014.
To encrypt during backup, you must specify an encryption
algorithm, and an encryptor to secure the encryption key. The following are the
supported encryption options:
- Encryption
Algorithm: The supported encryption algorithms are: AES 128, AES 192, AES
256, and Triple DES
- Encryptor:
A certificate or asymmetric Key
Caution
It
is very important to back up the certificate or asymmetric key, and
preferably to a different location than the backup file it was used to
encrypt. Without the certificate or asymmetric key, you cannot restore the
backup, rendering the backup file unusable.
Restoring the encrypted backup: SQL Server restore does
not require any encryption parameters to be specified during restores. It does
require that the certificate or the asymmetric key used to encrypt the backup
file be available on the instance that you are restoring to. The user account
performing the restore must have VIEW DEFINITION permissions on the certificate
or key. If you are restoring the encrypted backup to a different instance, you
must make sure that the certificate is available on that instance.
If you are restoring a backup from a TDE encrypted
database, the TDE certificate should be available on the instance you are
restoring to.
- Encrypting
the database backups helps secure the data: SQL Server provides the option
to encrypt the backup data while creating a backup.
- Encryption
can also be used for databases that are encrypted using TDE.
- Encryption
is supported for backups done by SQL Server Managed Backup to Windows
Azure, which provides additional security for off-site backups.
- This
feature supports multiple encryption algorithms up to AES 256 bit. This
gives you the option to select an algorithm that aligns with your
requirements.
- You can
integrate encryption keys with Extended Key Management (EKM) providers.
The following are prerequisites for encrypting a backup:
- Create a
Database Master Key for the master database: The database master key is a
symmetric key that is used to protect the private keys of certificates and
asymmetric keys that are present in the database.
- Create a
certificate or asymmetric Key to use for backup encryption.
The following are restrictions that apply to the encryption
options:
- If you are
using asymmetric key to encrypt the backup data, only asymmetric keys
residing in the EKM provider are supported.
- SQL Server
Express and SQL Server Web do not support encryption during backup.
However restoring from an encrypted backup to an instance of SQL Server
Express or SQL Server Web is supported.
- Previous
versions of SQL Server cannot read encrypted backups.
- Appending
to an existing backup set option is not supported for encrypted backups.
To encrypt a backup or to restore from an encrypted
backup, you need VIEW DEFINITION permission on the certificate or
asymmetric key that is used to encrypt the database backup.
The sections below provide a brief introduction to the
steps to encrypting the data during backup. For a complete walkthrough of the
different steps of encrypting your backup using Transact-SQL, see Create an
Encrypted Backup.
Using SQL Server Management Studio
You can encrypt a backup when creating the backup of a
database in any of the following dialog boxes:
- Back Up
Database (Backup Options Page) On the Backup Options page, you can
select Encryption, and specify the encryption algorithm and the certificate
or asymmetric key to use for the encryption.
- Using
Maintenance Plan Wizard When you select a backup task, on the Options
tab of the Define Backup Task page, you can select Backup Encryption,
and specify the encryption algorithm and the certificate or key to use for
the encryption.
Using Transact SQL
Following is a sample Transact-SQL statement to encrypt
the backup file:
BACKUP DATABASE [MYTestDB]
TO DISK = N'C:\Program
Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\Backup\MyTestDB.bak'
WITH
COMPRESSION,
ENCRYPTION
(
ALGORITHM = AES_256,
SERVER CERTIFICATE = BackupEncryptCert
),
STATS = 10
GO
Using PowerShell
This example creates the encryption options and uses it
as a parameter value in Backup-SqlDatabase cmdlet to create an encrypted
backup.
C:\PS>$encryptionOption = New-SqlBackupEncryptionOption
-Algorithm Aes256 -EncryptorType ServerCertificate -EncryptorName
"BackupCert"
C:\PS>Backup-SqlDatabase -ServerInstance . -Database
"MyTestDB" -BackupFile "MyTestDB.bak" -CompressionOption On
-EncryptionOption $encryptionOption
Create a backup of the encryption certificate and keys to
a location other than your local machine where the instance is installed. To
account for disaster recovery scenarios, consider storing a backup of the
certificate or key to an off-site location. You cannot restore an encrypted
backup without the certificate used to encrypt the backup.
To restore an encrypted backup, the original certificate
used when the backup was taken with the matching thumbprint should be available
on the instance you are restoring to. Therefore, the certificate should not be
renewed on expiry or changed in any way. Renewal can result in updating the
certificate triggering the change of the thumbprint, therefore making the
certificate invalid for the backup file. The account performing the restore
should have VIEW DEFINITION permissions on the certificate or the asymmetric
key used to encrypt during backup.
Availability Group database backups are typically
performed on the preferred backup replica. If restoring a backup on a replica
other than where the backup was taken from, ensure that the original
certificate used for backup is available on the replica you are restoring to.
If the database is TDE enabled, choose different
certificates or asymmetric keys for encrypting the database and the backup to
increase security.
*****************************************************************************************************************//
2)
Memory-optimized tables – New in 2014; earlier versions supported only disk-based tables
//******************************************************************************************************************
Use it when you have a plentiful amount
of memory and numerous multi-core processors.
All data is stored in memory and hence
unlike disk based tables, pages don't need to be brought into the buffer pool
or cache. For data persistence of memory optimized tables, the In-memory OLTP
engine creates a set of checkpoint files on a filestream
filegroup that keeps track of changes to the data in an append only mode and
uses this during recovery and the restore process.
In order to create a memory optimized
table, you need to create a database with a filestream filegroup (along with
the use of CONTAINS MEMORY_OPTIMIZED_DATA as shown below) or you can alter an
existing database to add a filestream
filegroup.
USE master
GO
CREATE DATABASE MemoryOptimizedTableDemoDB
ON
PRIMARY(
NAME = [MemoryOptimizedTableDemoDB_data],
FILENAME = 'c:\data\MemoryOptimizedTableDemoDB_data.mdf', SIZE = 1024MB
),
FILEGROUP [MemoryOptimizedTableDemoDB_MOTdata] CONTAINS MEMORY_OPTIMIZED_DATA
(
NAME = [MemoryOptimizedTableDemoDB_folder1],
FILENAME = 'c:\data\MemoryOptimizedTableDemoDB_folder1'
),
(
NAME = [MemoryOptimizedTableDemoDB_folder2],
FILENAME = 'c:\data\MemoryOptimizedTableDemoDB_folder2'
)
LOG ON (
NAME = [MemoryOptimizedTableDemoDB_log],
FILENAME = 'C:\log\MemoryOptimizedTableDemoDB_log.ldf', SIZE = 500MB
);
GO
CREATE TABLE [Customer](
[CustomerID] INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT =1000000),
[Name] NVARCHAR(250) COLLATE Latin1_General_100_BIN2 NOT NULL INDEX [IName] HASH WITH (BUCKET_COUNT = 1000000),
[CustomerSince] DATETIME NULL
)
WITH
(MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
SELECT
OBJECT_ID('Customer')
SELECT name, description FROM sys.dm_os_loaded_modules
where description = 'XTP Native DLL'
*****************************************************************************************************************//
3)
Delayed Durability (New in 2014, returning control to the client before the
transaction log record is written to disk) vs. Fully Durable (returning control
to the client after the transaction log record is written to disk)
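A minimal sketch of both the database-level setting and the per-transaction opt-in; the database, table, and column names are hypothetical:
//******************************************************************************************************************
ALTER DATABASE SalesDB SET DELAYED_DURABILITY = ALLOWED;    --DISABLED | ALLOWED | FORCED

BEGIN TRANSACTION;
    UPDATE dbo.Orders SET Status = 'Shipped' WHERE OrderID = 1;
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);          --control returns before the log flush
*****************************************************************************************************************//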
II. Enhancements
1)
Database Engine Feature Enhancements
(1) SQL Server Data Files in Windows Azure - enables native
database files stored as Windows Azure Blobs.
(2) Host a SQL Server Database in a Windows Azure Virtual
Machine – Deploying a SQL Server Database to a Windows Azure Virtual Machine.
When to use?
2)
Backup and Restore Enhancements
(1) SQL Server Backup to URL – SSMS GUI now from/to Windows
Azure Blob storage service (URL), TSQL+PowerShell+SMO in 2012 SP1 CU2
(2) SQL
Server Managed Backup to Windows Azure
·
For managing and
automating SQL Server backups to the Windows Azure Blob storage service;
·
At the db level or
instance level;
·
Use TSQL for the setup
of the service
3)
AlwaysOn Enhancements
(1) Migration to Windows Azure is made simpler with the Add
Azure Replica Wizard.
(2) The maximum number of secondary replicas is increased from 4
to 8.
(3) When disconnected from the primary replica or during cluster
quorum loss, readable secondary replicas now remain available for read
workloads.
(4) Failover cluster instances (FCIs) can now use Cluster Shared
Volumes (CSVs) as cluster shared disks in Windows Server 2012 and above.
(5) A new system function, sys.fn_hadr_is_primary_replica, and a
new DMV, sys.dm_io_cluster_valid_path_names, are
available.
(6) The following DMVs were enhanced and now return FCI
information: sys.dm_hadr_cluster, sys.dm_hadr_cluster_members, and sys.dm_hadr_cluster_networks.
4)
Partition Switching and Indexing – enhanced with additional partition switching
and index rebuild options
5)
Managing the lock priority of online operations – the WAIT_AT_LOW_PRIORITY
option lets you specify how long the rebuild process should wait for the
necessary locks
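A minimal sketch, assuming a hypothetical index and table:
//******************************************************************************************************************
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders
REBUILD WITH (
    ONLINE = ON (
        WAIT_AT_LOW_PRIORITY (MAX_DURATION = 5 MINUTES, ABORT_AFTER_WAIT = SELF)
    )
);
*****************************************************************************************************************//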
6)
Columnstore indexes
(1) Updateable clustered columnstore indexes – clustered
columnstore index now in 2014, updatable
(2) SHOWPLAN displays information about columnstore indexes.
(3) Archival data compression - ALTER INDEX … REBUILD has a new
COLUMNSTORE_ARCHIVE data compression option that further compresses the
specified partitions of a columnstore index.
(4) Buffer pool extension - lets you use solid-state drives (SSD) as a
nonvolatile extension of the buffer pool, which has traditionally been backed only by DRAM (Dynamic RAM)
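A minimal sketch of items (1), (3), and (4) above; the table, index, and file names are hypothetical:
//******************************************************************************************************************
--Updatable clustered columnstore index (new in 2014)
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales ON dbo.FactSales;

--Archival compression for an infrequently accessed partition (assumes dbo.FactSales is partitioned)
ALTER INDEX CCI_FactSales ON dbo.FactSales
REBUILD PARTITION = 1 WITH (DATA_COMPRESSION = COLUMNSTORE_ARCHIVE);

--Buffer pool extension on an SSD volume
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON (FILENAME = 'E:\SSDCACHE\BufferPool.BPE', SIZE = 16 GB);
*****************************************************************************************************************//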
7) New
Design for Cardinality Estimation
(1) Query plans enhancement using the New Design for Cardinality
Estimation
(2) Performance Testing and Tuning Recommendations
(3) New XEvents
(4) Examples of the new cardinality estimates
8)
Incremental Statistics – statistics can now be created and updated per partition in 2014
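A minimal sketch, assuming a hypothetical partitioned table:
//******************************************************************************************************************
CREATE STATISTICS Stat_FactSales_OrderDateKey
ON dbo.FactSales (OrderDateKey)
WITH FULLSCAN, INCREMENTAL = ON;   --statistics are created and maintained per partition
*****************************************************************************************************************//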
9)
Resource Governor enhancements for physical IO control
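A minimal sketch of the new physical IO settings; the pool name and limits are hypothetical:
//******************************************************************************************************************
ALTER RESOURCE POOL ReportingPool
WITH (MIN_IOPS_PER_VOLUME = 50, MAX_IOPS_PER_VOLUME = 500);
ALTER RESOURCE GOVERNOR RECONFIGURE;
*****************************************************************************************************************//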
10)
Transact-SQL Enhancements
(1) Inline specification of CLUSTERED and NONCLUSTERED –create
table +create index
(2) SELECT … INTO – can run in parallel now
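A minimal sketch of the inline index syntax from item (1); the table and column names are hypothetical:
//******************************************************************************************************************
CREATE TABLE dbo.Orders
(
    OrderID    int IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED,
    CustomerID int NOT NULL,
    OrderDate  date NOT NULL,
    INDEX IX_Orders_CustomerID NONCLUSTERED (CustomerID)   --inline index definition (new in 2014)
);
*****************************************************************************************************************//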
11)
System Table Enhancements
(1) sys.xml_indexes - This catalog view (started in 2008) has 3
new columns: xml_index_type, xml_index_type_description, and path_id.
(2) sys.dm_exec_query_profiles – New in SQL 2014, for monitoring
real time query progress while the query is in execution.
(3) sys.column_store_row_groups – New in SQL 2014, providing
clustered columnstore index information on a per-segment basis to help the
administrator make system management decisions.
12)
Security Enhancements – three new server level permissions: CONNECT ANY
DATABASE, IMPERSONATE ANY LOGIN, SELECT ALL USER SECURABLES
(1) CONNECT ANY DATABASE Permission
(2) IMPERSONATE ANY LOGIN Permission
(3) SELECT ALL USER SECURABLES Permission
13)
Deployment Enhancements- to Win Azure VM now
14)
Online index operation event class – two new columns added: PartitionId and
PartitionNumber
15)
Replications - None
16)
SSAS - PowerView
//*****************************************************************************************************************
Features recently added: Power View for
Multidimensional Models (Power View for Tabular Model started in SQL2012 SP1
CU4)
Power
View used to be limited to tabular data. However, with SQL Server 2014, Power
View can now be used with multidimensional models (OLAP cubes) and can create a
variety of data visualizations including tables, matrices, bubble charts, and
geographical maps. Power View multidimensional models also support queries
using Data Analysis Expressions (DAX).
*****************************************************************************************************************//
17)
SSIS - None
18)
SSRS - None
(Sources: Microsoft SQL Server 2000/2005/2008 R2/2012
Unleashed, BOL, and others)
*******************************************************************************************
*The
following is from http://windowsitpro.com/sql-server-2014/top-ten-new-features-sql-server-2014*
*******************************************************************************************
Top Ten: New Features in SQL Server 2014
1.
In-Memory
OLTP Engine
SQL Server 2014 enables memory
optimization of selected tables and stored procedures. The In-Memory OLTP
engine is designed for high concurrency and uses a new optimistic concurrency
control mechanism to eliminate locking delays. Microsoft states that customers
can expect performance to be up to 20 times better than with SQL Server 2012
when using this new feature. For more information, check out “Rev Up
Application Performance with the In-Memory OLTP Engine.”
2.
AlwaysOn
Enhancements
Microsoft
has enhanced AlwaysOn integration by expanding the maximum number of secondary
replicas from four to eight. Readable secondary replicas are now also available
for read workloads, even when the primary replica is unavailable. In addition,
SQL Server 2014 provides the new Add Azure Replica Wizard, which helps you
create asynchronous secondary replicas in Windows Azure.
3.
Buffer
Pool Extension
SQL
Server 2014 provides a new solid state disk (SSD) integration capability that
lets you use SSDs to expand the SQL Server 2014 Buffer Pool as nonvolatile RAM
(NvRAM). With the new Buffer Pool Extensions feature, you can use SSD drives to
expand the buffer pool in systems that have maxed out their memory. Buffer Pool
Extensions can provide performance gains for read-heavy OLTP workloads.
4.
Updateable
Columnstore Indexes
When
Microsoft introduced the columnstore index in SQL Server 2012, it provided
improved performance for data warehousing queries. For some queries, the
columnstore indexes provided a tenfold performance improvement. However, to
utilize the columnstore index, the underlying table had to be read-only. SQL
Server 2014 eliminates this restriction with the new updateable Columnstore
Index. The SQL Server 2014 Columnstore Index must use all the columns in the
table and can’t be combined with other indexes.
5.
Storage
I/O control
The
Resource Governor lets you limit the amount of CPU and memory that a given
workload can consume. SQL Server 2014 extends the reach of the Resource
Governor to manage storage I/O usage as well. The SQL Server 2014 Resource
Governor can limit the physical I/Os issued for user threads in a given
resource pool.
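A minimal sketch of capping physical I/O for a workload (the pool name and limits are illustrative):
//******************************************************************************
CREATE RESOURCE POOL ReportingIOPool
WITH (MIN_IOPS_PER_VOLUME = 0, MAX_IOPS_PER_VOLUME = 100);
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
******************************************************************************//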
6.
Power
View for Multidimensional Models
Power
View used to be limited to tabular data. However, with SQL Server 2014, Power
View can now be used with multidimensional models (OLAP cubes) and can create a
variety of data visualizations including tables, matrices, bubble charts, and
geographical maps. Power View multidimensional models also support queries
using Data Analysis Expressions (DAX).
7.
Power
BI for Office 365 Integration
Power
BI for Office 365 is a cloud-based business intelligence (BI) solution that
provides data navigation and visualization capabilities. Power BI for Office
365 includes Power Query (formerly code-named Data Explorer), Power Map
(formerly code-named GeoFlow), Power Pivot, and Power View. You can learn more about
Power BI at Microsoft’s Power BI for Office 365 site.
8.
SQL
Server Data Tools for Business Intelligence
The
new SQL Server Data Tools for BI (SSDT-BI) is used to create SQL Server
Analysis Services (SSAS) models, SSRS reports, and SSIS packages. The new
SSDT-BI supports SSAS and SSRS for SQL Server 2014 and earlier, but SSIS
projects are limited to SQL Server 2014. In the pre-release version of SQL
Server 2014, SQL Server Setup doesn’t install SSDT-BI. Instead, you must
download SSDT-BI separately from the Microsoft Download Center.
9.
Backup
Encryption
One
welcome addition to SQL Server 2014 is the ability to encrypt database backups
for at-rest data protection. SQL Server 2014 supports several encryption
algorithms, including Advanced Encryption Standard (AES) 128, AES 192, AES 256,
and Triple DES. You must use a certificate or an asymmetric key to perform
encryption for SQL Server 2014 backups.
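A minimal sketch of an encrypted backup; the database name, path, and certificate name (which must already exist in master) are made up:
//******************************************************************************
BACKUP DATABASE SalesDB
TO DISK = N'D:\Backups\SalesDB.bak'
WITH COMPRESSION,
     ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupCert);
******************************************************************************//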
10.
SQL
Server Managed Backup to Windows Azure
SQL
Server 2014’s native backup supports Windows Azure integration. Although I’m
not entirely convinced that I would want to depend on an Internet connection to
restore my backups, on-premises SQL Server 2014 and Windows Azure virtual
machine (VM) instances support backing up to Windows Azure storage. The Windows
Azure backup integration is also fully built into SQL Server Management Studio
(SSMS).
1. SQL Server 2000
I. New
features
1)
User-defined functions
SQL
Server has always provided the ability to store and execute SQL code routines
via stored procedures. In addition, SQL Server has always supplied a number of
built-in functions. Functions can be used almost anywhere an expression can be
specified in a query. This was one of the shortcomings of stored
procedures—they couldn't be used inline in queries in select lists, where
clauses, and so on. Perhaps you want to write a routine to calculate the last
business day of the month. With a stored procedure, you have to execute the
procedure, passing in the current month as a parameter and returning the value
into an output variable, and then use the variable in your queries. If only you
could write your own function that you could use directly in the query just
like a system function, in SQL Server 2000, you can.
SQL Server 2000 introduces the long-awaited
support for user-defined functions. User-defined functions can take zero or
more input parameters and return a single value—either a scalar value like the
system-defined functions, or a table result. Table-valued functions can be used
anywhere table or view expressions can be used in queries, and they can perform
more complex logic than is allowed in a view.
2)
Indexed views
Views are often used to simplify complex
queries, and they can contain joins and aggregate functions. However, in the
past, queries against views were resolved to queries against the underlying
base tables, and any aggregates were recalculated each time you ran a query
against the view. In SQL Server 2000 Enterprise or Developer Edition, you can
define indexes on views to improve query performance against the view. When
creating an index on a view, the result set of the view is stored and indexed
in the database. Existing applications can take advantage of the performance
improvements without needing to be modified.
Indexed views can improve performance for the
following types of queries:
·
Joins
and aggregations that process many rows
·
Join
and aggregation operations that are performed frequently within many queries
·
Decision
support queries that rely on summarized, aggregated data that is infrequently
updated
3)
Distributed partitioned views
SQL Server 7.0 provided the ability to create partitioned
views using the UNION ALL statement in a view definition. It was limited,
however, in that all the tables had to reside within the same SQL Server where
the view was defined.
SQL Server 2000 expands the ability to create
partitioned views by allowing you to horizontally partition tables across
multiple SQL Servers. The feature helps you scale out one database server to
multiple database servers, while making the data appear as if it comes from a
single table on a single SQL Server. In addition, partitioned views are now
able to be updated.
4)
INSTEAD OF (new in 2000) and AFTER (>1 now) triggers
In versions of SQL Server prior to 7.0, a table
could not have more than one trigger defined for INSERT, UPDATE, and DELETE.
These triggers only fired after the data modification took place. SQL
Server 7.0 introduced the ability to define multiple AFTER triggers for the
same operation on a table. SQL Server 2000 extends this capability by providing
the ability to define which AFTER trigger fires first and which fires last.
(Any other AFTER triggers besides the first and last will fire in an
undetermined order.)
SQL Server 2000 also introduces the ability to
define INSTEAD OF triggers. INSTEAD OF triggers can be specified on both tables
and views. (AFTER triggers can still only be specified on tables.) If an
INSTEAD OF trigger is defined on a table or view, the trigger will be executed
in place of the data modification action for which it is defined. The data
modification is not executed unless the SQL code to perform it is included in
the trigger definition.
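A minimal sketch of an INSTEAD OF trigger on a view (view, table, and column names are made up):
//******************************************************************************
CREATE VIEW dbo.ActiveCustomers
AS SELECT CustomerID, Name, Active FROM dbo.Customers WHERE Active = 1;
GO
-- The trigger runs in place of the INSERT issued against the view
CREATE TRIGGER trg_ActiveCustomers_Insert
ON dbo.ActiveCustomers
INSTEAD OF INSERT
AS
BEGIN
    -- Perform the real modification against the base table
    INSERT INTO dbo.Customers (CustomerID, Name, Active)
    SELECT CustomerID, Name, 1 FROM inserted;
END;
GO
******************************************************************************//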
5) New
data types – Bigint, sql_variant, and table data type (as a local variable
only)
SQL Server 2000 introduces three new datatypes.
Two of these can be used as datatypes for local variables, stored procedure
parameters and return values, user-defined function parameters and return
values, or table columns:
·
bigint—
An 8-byte integer that can store values from –2^63 (–9,223,372,036,854,775,808)
through 2^63–1 (9,223,372,036,854,775,807).
·
sql_variant—
A variable-sized column that can store values of various SQL Server–supported
data types, with the exception of text, ntext, timestamp, and sql_variant.
·
The
third new datatype, the table datatype, can be used only as a local
variable datatype within functions, stored procedures, and SQL batches. The
table datatype cannot be passed as a parameter to functions or stored
procedures, nor can it be used as a column datatype. A variable defined with
the table datatype can be used to store a result set for later processing. A
table variable can be used in queries anywhere a table can be specified.
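A minimal sketch of the three new datatypes in use (variable and column names are made up):
//******************************************************************************
DECLARE @BigValue bigint, @AnyValue sql_variant;
SET @BigValue = 9223372036854775807;
SET @AnyValue = GETDATE();   -- sql_variant can hold values of most base types

DECLARE @Orders table (OrderID int, OrderTotal money);
INSERT INTO @Orders (OrderID, OrderTotal) VALUES (1, 100.00);
SELECT OrderID, OrderTotal FROM @Orders;
******************************************************************************//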
6) Text
in Row Data
In previous versions of SQL Server, text and
image data was always stored on a separate page chain from where the actual
data row resided. The data row contained only a pointer to the text or image
page chain, regardless of the size of the text or image data. SQL Server 2000
provides a new text in row table option that allows small text and image data
values to be placed directly in the data row, instead of requiring a separate
data page. This can reduce the amount of space required to store small text and
image data values, as well as reduce the amount of I/O required to retrieve
rows containing small text and image data values.
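A minimal sketch of turning the option on and off for a table (the table name is made up):
//******************************************************************************
-- Allow text/image values up to 256 bytes to be stored in the data row itself
EXEC sp_tableoption 'dbo.Documents', 'text in row', '256';

-- Turn the option back off
EXEC sp_tableoption 'dbo.Documents', 'text in row', 'OFF';
******************************************************************************//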
7)
Cascading RI constraints
In previous versions of SQL Server, referential
integrity (RI) constraints were restrictive only. If an insert, update, or
delete operation violated referential integrity, it was aborted with an error
message. SQL Server 2000 provides the ability to specify the action to take
when a column referenced by a foreign key constraint is updated or deleted. You
can still abort the update or delete if related foreign key records exist by
specifying the NO ACTION option, or you can specify the new CASCADE option,
which will cascade the update or delete operation to the related foreign key
records.
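A minimal sketch of a cascading foreign key constraint (table and column names are made up):
//******************************************************************************
CREATE TABLE dbo.[Order] (
    OrderID int NOT NULL PRIMARY KEY
);
CREATE TABLE dbo.OrderDetail (
    OrderID   int NOT NULL
        REFERENCES dbo.[Order](OrderID)
        ON DELETE CASCADE      -- deleting an order removes its detail rows
        ON UPDATE CASCADE,     -- key changes propagate to the detail rows
    ProductID int NOT NULL,
    Quantity  int NOT NULL
);
******************************************************************************//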
8)
Multiple SQL Server instances
Previous versions of SQL Server supported the
running of only a single instance of SQL Server at a time on a computer.
Running multiple instances or multiple versions of SQL Server required
switching back and forth between the different instances, requiring changes in
the Windows registry. (The SQL Server Switch provided with 7.0 to switch
between 7.0 and 6.5 performed the registry changes for you.)
SQL Server 2000 provides support for running
multiple instances of SQL Server on the same system. This allows you to
simultaneously run one instance of SQL Server 6.5 or 7.0 along with one or more
instances of SQL Server 2000. Each SQL Server instance runs independently of
the others and has its own set of system and user databases, security
configuration, and so on. Applications can connect to the different instances
in the same way they connect to different SQL Servers on different machines.
This feature provides the ability to run an
older version of SQL Server alongside SQL Server 2000, as well as the ability
to run separate environments (for example, a development and test environment)
on the same computer.
9) XML
support
Extensible Markup Language has become a standard
in Web-related programming to describe the contents of a set of data and how
the data should be output or displayed on a Web page. XML, like HTML, is
derived from the Standard Generalized Markup Language (SGML). When linking a
Web application to SQL Server, a translation needs to take place from the
result set returned from SQL Server to a format that can be understood and
displayed by a Web application. Previously, this translation needed to be done
in a client application.
SQL Server 2000 provides native support for XML.
This new feature provides the ability to do the following:
·
Return
query result sets directly in XML format.
·
Retrieve
data from an XML document as if it were a SQL Server table.
·
Access
SQL Server through a URL using HTTP. Through Internet Information Services
(IIS), you can define a virtual root that gives you HTTP access to the data and
XML functionality of SQL Server 2000.
The latest version of SQLXML, version 3.0,
extends the built-in XML capabilities of SQL Server 2000 with technology to
create XML Web services from SQL Server stored procedures or server-side XML
templates. SQLXML 3.0 also includes extensions to the .NET Framework that
provide SQLXML programmability to the languages supported by Microsoft Visual
Studio .NET, including C# and Microsoft Visual Basic .NET.
To help you make the most of the XML
capabilities of SQL Server 2000, Microsoft provides, as a free download from
its Web site, the Microsoft SQL Server 2000 Web Services Toolkit, which
consists of tools, code, samples, and whitepapers for building XML Web services
and Web applications with SQL Server 2000.
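A minimal sketch of returning a result set as XML (Customers is a made-up table name); FOR XML RAW and FOR XML EXPLICIT offer row-level and fully customized shapes, and OPENXML goes the other way, exposing an XML document as a rowset:
//******************************************************************************
SELECT CustomerID, CompanyName
FROM dbo.Customers
FOR XML AUTO;
******************************************************************************//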
10) Log
shipping
The Enterprise Edition of SQL Server 2000 now
supports log shipping, which you can use to copy and load transaction log
backups from one database to one or more databases on a constant basis. This
allows you to have a primary read/write database with one or more read-only
copies of the database that are kept synchronized by restoring the logs from
the primary database. The destination database can be used as a warm standby
for the primary database, for which you can switch users over in the event of a
primary database failure. Additionally, log shipping provides a way to offload
read-only query processing from the primary database to the destination
database.
This capability was available in previous
versions of SQL Server, but it required the DBA to manually set up the process
and schedule the jobs to copy and restore the log backups. SQL Server 2000
officially supports log shipping and has made it easier to set up via the
Database Maintenance Plan Wizard. This greatly simplifies the process by
automatically generating the jobs and configuring the databases to support log
shipping.
11)
Notification Services
A new component is now available for SQL Server
2000 that makes it easy to build applications that forward messages to end
users. This feature is called SQL Server Notification Services. SQL Server
Notification Services is a platform for the development and deployment of
notification applications. Notification applications send messages to users
based upon subscriptions that they set up in advance. Depending on how the
subscriptions are configured, messages can be sent to the subscriber
immediately or on a predetermined schedule. The messages sent can be
personalized to reflect the preferences of the subscriber.
The Notification Services platform provides a
reliable, high-performance server that is built on the .NET Framework and SQL
Server 2000 and runs on the Microsoft Windows Server family of operating
systems. Notification Services was designed for scalability and efficiency: It
can support applications with millions of subscribers and large volumes of
data. As a platform, it is extensible and provides interoperability with a
variety of existing applications.
SQL Server serves as the matching engine for
notification applications, as well as the storage location for the subscription
information. The subscriber and delivery information is stored in a central
Notification Services database, and individual subscription information is
stored in application-specific databases.
12)
Microsoft SQL Server 2000 Driver for JDBC
Microsoft recently released its JDBC driver for
SQL Server 2000 as a free download for all SQL Server 2000 customers. The
Microsoft SQL Server 2000 Driver for JDBC is a Type 4 JDBC driver that provides
highly scalable and reliable connectivity for the enterprise Java environment.
The current release of the SQL Server 2000 Driver for JDBC supports the JDBC
2.0 specification.
The SQL Server 2000 Driver for JDBC provides
JDBC access to SQL Server 2000 from any Java-enabled applet, application, or
application server. The JDBC driver provides enterprise features like support
for XA transactions, server-side cursors, SQL_Variant datatypes, updateable
resultsets, and more.
The SQL Server 2000 Driver for JDBC supports the
Java Developer's Kit versions 1.1.8, 1.2, and 1.3 and is supported on the
following operating systems:
·
Microsoft
Windows® XP
·
Microsoft
Windows 2000 with Service Pack 2 or higher
·
AIX
·
HP-UX
·
Solaris
·
Linux
II.
Enhancements
1)
Index Enhancements
SQL Server 2000 provides enhancements for
defining indexes as well as enhancements to the way indexes are built.
Indexes can now be defined on computed columns,
and you can specify when creating an index whether it should be built in
ascending or descending index key order. SQL Server 2000 also provides an
option to use tempdb for performing the sorting operations when creating an
index. This can be specified with the WITH SORT_IN_TEMPDB option. When WITH
SORT_IN_TEMPDB is specified, SQL Server performs the intermediate sorting
required to build this index in tempdb, rather than in the current database. If
tempdb is on a separate disk from the destination file group in the current
database, building the index will take less time.
In addition to the new creation options, SQL
Server now supports the use of multiple processors to perform parallel scanning
and sorting when creating an index to help speed up index creation.
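A minimal sketch combining the new options: an index on a computed column, a descending key, and sorting in tempdb (table and column names are made up):
//******************************************************************************
CREATE TABLE dbo.Sales (
    SalesID   int   NOT NULL,
    Quantity  int   NOT NULL,
    UnitPrice money NOT NULL,
    LineTotal AS (Quantity * UnitPrice)   -- deterministic, so it can be indexed
);
CREATE INDEX IX_Sales_LineTotal
    ON dbo.Sales (LineTotal DESC)
    WITH SORT_IN_TEMPDB;
******************************************************************************//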
2)
Collation Enhancements
In previous versions of SQL Server, all
databases within a SQL Server had to use the same code page and sort order that
SQL Server was configured to use. (This is typically established during SQL
Server installation and is not always easy to change.) If you had to restore a
database from a server using a different sort order or collation, a normal
backup and restore wouldn't work and you would have to bring it over using the
Database Migration utility.
SQL Server 2000 now uses collations instead of
code pages and sort orders and supports multiple collations within a single SQL
Server. You now can specify collations at the database level or at the column
level within a table. SQL Server 2000 still supports most collations that were
supported in earlier versions of SQL Server, as well as provides support for
collations based on Windows collations.
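A minimal sketch of specifying collations at the database and column levels (database, table, and collation choices are illustrative):
//******************************************************************************
-- Database-level collation
CREATE DATABASE FrenchSales COLLATE French_CI_AS;

-- Column-level collation inside a table
CREATE TABLE dbo.Contact (
    ContactID int NOT NULL,
    LastName  nvarchar(50) COLLATE Latin1_General_CS_AS NOT NULL
);
******************************************************************************//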
3) DBCC
Enhancements
DBCC checks can now be run without taking shared table
locks while scanning tables, enabling them to run concurrently with
update activity on tables. Additionally, DBCC now takes advantage of multiple
processors, thus enabling near-linear gain in performance in relation to the
number of CPUs (provided that I/O is not a bottleneck).
4) Full
Text Search Enhancements
SQL Server 2000 provides enhancements to the
Full Text Search capabilities by including change tracking, which maintains a
log of all changes to full-text indexed data. SQL Server 2000 also includes
image filtering, which allows you to index and query documents stored in image
columns.
With change tracking, you can update the
full-text index by flushing the change log manually, on a schedule, or as
changes occur, using the background update index option.
Image filtering allows you to specify the
filename extension that a document would have had if it were stored as a file
in the file system. The Full Text Search services can then load the appropriate
document filter and extract textual information for indexing from the image
data.
5)
Clustering Enhancements
In SQL Server 2000, Microsoft has made it much
easier to install, configure, and maintain a Microsoft SQL Server 2000 failover
cluster. In addition, SQL Server 2000 now provides the ability to failover and
failback to or from any node in a SQL Server 2000 cluster, add or remove a node
from the cluster using the SQL Server 2000 Setup utility, and reinstall or
rebuild a cluster instance on any node without affecting the other cluster node
instances.
6)
Backup and Restore Enhancements
In SQL Server 2000, passwords can be defined for
backup sets and media sets to prevent unauthorized users from restoring
sensitive SQL Server backups. SQL Server 2000 also has improved the speed of
differential database backups such that they now should complete in a time
proportional to the amount of data changed since the last full backup.
SQL Server 2000 also introduces a new model for
specifying backup and restore options for your database. Previous database
options such as “truncate log on checkpoint” and “select into/bulk copy” have
been replaced by three recovery models: Full Recovery, Bulk Logged Recovery,
and Simple Recovery. These new models help clarify when you are balancing
increased or decreased exposure to losing work against the performance and log
space requirements of the different plans.
SQL Server 2000 also provides support for
recovering transaction logs to specific points of work using named log marks in
the transaction log, as well as the ability to perform partial database
restores.
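A minimal sketch of choosing a recovery model per database (SalesDB is a made-up name); point-in-time recovery to a named mark is then done with RESTORE LOG ... WITH STOPATMARK:
//******************************************************************************
ALTER DATABASE SalesDB SET RECOVERY FULL;        -- full point-in-time recovery
ALTER DATABASE SalesDB SET RECOVERY BULK_LOGGED; -- minimally logged bulk operations
ALTER DATABASE SalesDB SET RECOVERY SIMPLE;      -- no log backups; restore to last backup only
******************************************************************************//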
7) Up
to 64GB Memory Support
The Enterprise Editions of SQL Server 2000 can
use the Microsoft Windows 2000 Address Windowing Extensions (AWE) API to support
up to 8GB of memory on Windows 2000 Advanced Server and 64GB of memory
on a Windows 2000 Datacenter server.
8) SQL
Server 2000 64-Bit Version
At the time of this writing, Microsoft was in
the process of beginning beta testing for a 64-bit version of SQL Server 2000,
code-named “Liberty.” Built to take advantage of hardware enhancements of the
64-bit Itanium platform from Intel, Liberty offers higher levels of
single-system scalability for memory-intensive data applications. Currently,
the planned hardware offerings support up to 64GB of physical linear memory for
the 64-bit version of SQL Server 2000, with up to 4 terabytes of physical
linear memory planned for future hardware.
The 64-bit version of SQL Server 2000 includes a
64-bit database server, a 64-bit server agent, and a 64-bit analysis server for
OLAP and data mining. The 64-bit version of SQL Server 2000 will require 64-bit
hardware running the 64-bit version of Microsoft Windows .NET Server Beta3 or
the current release of Windows Advanced Server, Limited Edition running on
Intel Itanium processors.
The 64-bit Windows Advanced Server, Limited
Edition platform provides the ability to install up to 16 instances of SQL
Server 2000 on a single machine, and supports larger numbers of users and
applications. This should result in a lower cost of ownership as businesses
will require fewer servers to support the same number of users.
The 64-bit components of SQL Server 2000 are
code compatible with the 32-bit versions of SQL Server 2000, providing
compatibility for organizations that need to maintain some of their 32-bit SQL
Server deployments, while introducing new 64-bit technologies for larger, more
demanding database applications. All existing 32-bit client applications,
including database management and administration tools such as Enterprise
Manager, Query Analyzer, and so on, can be used to remotely manage 64-bit SQL
Server 2000 installations. In addition, the database storage structures are
identical between the 32- and 64-bit versions of SQL Server, so databases can
be exchanged between the 32- and 64-bit environments.
9)
Analysis Services Enhancements
What was formerly known as SQL Server OLAP
Services is now called SQL Server Analysis Services. Analysis Services provides
new and improved features that enhance the capabilities of the previous OLAP
Services provided in SQL Server 7.0. A major new feature is the Data Mining
component, which can be used to discover information in OLAP cubes and
relational databases. In addition, enhancements have been made to the Cube,
Dimension, and Hierarchy types to improve and extend the scalability,
functionality, and analysis capabilities of cubes. Security enhancements
include the ability to assign permissions on cube cells and dimension members
to roles.
10)
Data Transformation Services Enhancements
(1) New
Custom Tasks
New DTS custom tasks, available through DTS
Designer or the DTS object model, allow you to create DTS packages that perform
tasks or set variables based on the properties of the run-time environment. Use
these tasks to:
·
Import
data from, and send data and completed packages to, Internet and File Transfer
Protocol (FTP) sites.
·
Run
packages asynchronously.
·
Build
packages that send messages to each other.
·
Build
packages that execute other packages.
·
Join
multiple package executions as part of a transaction.
DTS package logs save information for each
package execution, allowing you to maintain a complete execution history. You
can also view execution information for individual processes within a task.
You can generate exception files for transformation
tasks. When you log to exception files, you can save source and destination
error rows to a file through the DTS OLE DB text file provider and re-process
the error rows.
DTS packages now can be saved to a Microsoft®
Visual Basic® file. This allows a package created by the DTS Import/Export
Wizard or DTS Designer to be incorporated into Visual Basic programs or to be
used as prototypes by Visual Basic developers who need to reference the
components of the DTS object model.
A new multiphase data pump allows advanced users
to customize the operation of the data pump at various stages of its operation.
You can now use global variables as input and output parameters for queries.
You can now use parameterized source queries in
a DTS transformation task and an Execute SQL task. In addition, DTS includes an
option for saving the results of a parameterized query to a global variable,
allowing you to perform functions such as saving disconnected Microsoft
ActiveX® Data Objects (ADO) recordsets in DTS.
You now can use the Execute Package task to
dynamically assign the values of global variables from a parent package to a
child package. Use global variables to pass information from one package to
another when each package performs different work items. For example, use one
package to download data on a nightly basis, summarize the data, assign summary
data values to global variables, and pass the values to another package that
further processes the data.
11)
Microsoft SQL Server 2000 Meta Data Services replaces Microsoft Repository in
SQL Server 7.0, which was subsequently replaced by XML in SQL Server 2005
Microsoft® SQL Server™ 2000 Meta Data Services
extends and renames the former repository component known as Microsoft
Repository. Meta Data Services extends repository technology by introducing a
new browser for viewing data in a repository database, new Extensible Markup Language (XML) interchange
support, and new repository engine features.
To find out why SQL Server 2000 Meta Data
Services was deprecated in SQL Server 2005, read
12)
English Query Enhancements – discontinued in SQL Server 2005; replaced by
Full-Text Search in SQL Server 2005+ and Semantic Search in SQL Server 2012+
English Query introduces new features such as:
·
Greater integration with
Microsoft® Visual Studio®, Analysis Services, and Full-Text Search.
·
A graphical user
interface for English Query authoring.
·
The SQL Project Wizard.
·
An XML-based language
for persisting English Query model information.
2. SQL Server 2005
I. New
features
1) SSMS
2) SQL
Server Configuration Manager
3)
CLR/.NET Framework Integration
4) DMVs
and DMFs
5)
System Catalog Views
6) SQL
Server Management Objects (SMO, objects designed for programmatic management of
Microsoft SQL Server) – replaces SQL Distributed Management Objects (SQL-DMO)
7) DAC
8)
SQLCMD
9)
Database Mail
10)
Online index and restore
11)
Native encryption – column encryption
12)
Database Mirroring
13)
Database Snapshots
14)
Service Broker
15)
SSIS – replaces DTS
16)
Table and index partition
17)
Snapshot isolation
18) BI
– BIDS introduced
19)
Query notification – Notification Services
With the availability of the Service Broker, SQL
Server 2005 also introduces notification support for SQL Server queries. Query
Notification is useful for applications that cache database query results in
their own private cache area, such as database-driven websites. Rather than
having the application repeatedly poll the database to determine whether the
data has changed and the cache needs to be refreshed, commands that are sent to
the server through any of the client APIs, such as ADO.NET, OLE DB, open
database connectivity (ODBC), Microsoft ActiveX Data Objects (ADO), or Simple
Object Access Protocol (SOAP), may include a tag that requires a notification.
For each query included in the request, SQL Server creates a notification
subscription. When the data changes, a notification is delivered through a SQL
Service Broker queue to notify the application that data has changed, at which
point the application can refresh its data cache.
20)
Multiple active result sets
21) New
SQL Server data types: xml, varchar(max), and varbinary(max)
II.
Enhanced features
1)
Database Engine Enhancement
(1) Instant file initialization
New or expanded database files are made
available much faster now because the initialization of the file with binary
zeros is deferred until data is written to the files.
(2) Partial availability
In the event of database file corruption, the
database can still be brought online if the primary filegroup is available.
(3) Database file movement
You can now use the ALTER DATABASE command to
move a database file. The physical file must be moved manually. This feature
was available in SQL Server 2000, but it only worked on tempdb.
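A minimal sketch of the typical sequence (database and logical/physical file names are made up):
//******************************************************************************
-- Point the catalog at the file's new location
ALTER DATABASE SalesDB
    MODIFY FILE (NAME = SalesDB_Data, FILENAME = N'E:\Data\SalesDB_Data.mdf');
ALTER DATABASE SalesDB SET OFFLINE;
-- (move the physical .mdf file manually, then bring the database back)
ALTER DATABASE SalesDB SET ONLINE;
******************************************************************************//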
(4) Large rows storage
SQL Server 2005 now allows for the storage of
rows that are greater than 8060 bytes. The 8060-byte limitation that existed
with SQL Server 2000 has been relaxed by allowing the storage of certain data
types (such as varchar and nvarchar) on a row overflow data page.
(5) Stored computed columns
Computed columns that were calculated on-the-fly
in prior versions can now be stored in the table structure. You accomplish this
by specifying the PERSISTED keyword as part of the computed column definition.
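A minimal sketch of a persisted computed column (table and column names are made up):
//******************************************************************************
CREATE TABLE dbo.OrderDetail (
    Quantity  int   NOT NULL,
    UnitPrice money NOT NULL,
    LineTotal AS (Quantity * UnitPrice) PERSISTED  -- stored in the row, not recomputed per read
);
******************************************************************************//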
2)
Index Enhancements
(1) Included columns
Non-key columns can now be added to an index for
improved performance. The performance gains are achieved with covering indexes
that allow the Query Optimizer to locate all the column values referenced in
the query.
(2) ALTER INDEX
As with other database objects, such as tables
and databases, you can now modify indexes by using the ALTER statement. Index
operations that were previously performed with DBCC commands or system stored
procedures can now be accomplished with the ALTER INDEX command.
(3) Parallel index operations
Scan and sort activities associated with index
operations can now be done in parallel.
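A minimal sketch of the index enhancements above: a covering index with included columns plus maintenance via ALTER INDEX (table, index, and column names are made up):
//******************************************************************************
-- Covering index: key on OrderDate, non-key columns carried in the leaf level
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
    ON dbo.Orders (OrderDate)
    INCLUDE (CustomerID, TotalDue);

-- Maintain indexes with ALTER INDEX instead of DBCC DBREINDEX / INDEXDEFRAG
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REBUILD;
ALTER INDEX ALL ON dbo.Orders REORGANIZE;
******************************************************************************//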
3)
T-SQL Enhancements
(1) Ranking functions
SQL Server 2005 introduces four new ranking
functions: ROW_NUMBER, RANK, DENSE_RANK, and NTILE. These new functions allow
you to efficiently analyze data and provide ranking values to result rows of a
query.
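A minimal sketch of the ranking functions over a single ordering (AdventureWorks-style table and column names are used for illustration):
//******************************************************************************
SELECT SalesPersonID,
       SalesYTD,
       ROW_NUMBER() OVER (ORDER BY SalesYTD DESC) AS RowNum,
       RANK()       OVER (ORDER BY SalesYTD DESC) AS Rnk,
       NTILE(4)     OVER (ORDER BY SalesYTD DESC) AS Quartile
FROM Sales.SalesPerson;
******************************************************************************//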
(2) Common table expressions
A common table expression (CTE) is a temporary
named result set that can be referred to within a query, similarly to a
temporary table. CTEs can be thought of as an improved version of derived
tables that more closely resemble a non-persistent type of view. You can also
use CTEs to develop recursive queries that you can use to expand a hierarchy.
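A minimal sketch of a recursive CTE that expands a reporting hierarchy (table and column names are made up):
//******************************************************************************
WITH EmployeeTree AS (
    SELECT EmployeeID, ManagerID, 0 AS Level
    FROM dbo.Employee
    WHERE ManagerID IS NULL                                    -- anchor member
    UNION ALL
    SELECT e.EmployeeID, e.ManagerID, t.Level + 1
    FROM dbo.Employee AS e
    JOIN EmployeeTree AS t ON e.ManagerID = t.EmployeeID       -- recursive member
)
SELECT EmployeeID, ManagerID, Level
FROM EmployeeTree;
******************************************************************************//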
(3) PIVOT/UNPIVOT operator
The PIVOT operator allows you to generate
crosstab reports for open-schema and other scenarios in which you rotate rows
into columns, possibly calculating aggregations along the way and presenting
the data in a useful form. The UNPIVOT operator allows you to normalize
pre-pivoted data.
(4) APPLY operator
The APPLY relational operator allows you to
invoke a specified table-valued function once per each row of an outer table
expression.
(5) TOP – enhanced with % and variables
In SQL Server 2005, the TOP operator has been
enhanced, and it now allows you to specify a numeric expression to return the
number or percentage of rows to be affected by your query; you can optionally
use variables and subqueries. You can also now use the TOP option in DELETE,
UPDATE, and INSERT queries.
(6) DML with results – the OUTPUT clause
SQL Server 2005 introduces a new OUTPUT clause
that allows you to return data from a modification statement (INSERT, UPDATE,
or DELETE) to the processing application or into a table or table variable.
(7) Exception handling for transactions with TRY…CATCH
Earlier versions of SQL Server required you to
include error-handling code after every statement that you suspected could
potentially generate an error. SQL Server 2005 addresses this by introducing a
simple but powerful exception-handling mechanism in the form of a TRY...CATCH
T-SQL construct.
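A minimal sketch of TRY...CATCH around a transaction (table and column names are made up):
//******************************************************************************
BEGIN TRY
    BEGIN TRANSACTION;
    UPDATE dbo.Account SET Balance = Balance - 100 WHERE AccountID = 1;
    UPDATE dbo.Account SET Balance = Balance + 100 WHERE AccountID = 2;
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    SELECT ERROR_NUMBER() AS ErrorNumber, ERROR_MESSAGE() AS ErrorMessage;
END CATCH;
******************************************************************************//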
4)
Security Enhancements
(1) SQL login password policies
SQL Server logins can now be governed by a more
rigid password policy. This is implemented with new CHECK_POLICY and CHECK_
EXPIRATION options that can be selected for a SQL Server login. These options
facilitate stronger passwords and cause passwords to expire. The password
policy is enforced only on Windows Server 2003 and later.
(2) User/schema separation
In prior versions of SQL Server, the fully
qualified name for every object was directly tied to the object owner. With SQL
Server 2005, schema names are used in the object namespace instead. This
user/schema separation provides more flexibility in the object model and allows
for object owners to be changed without affecting the code that references the
objects.
(3) Module execution context – Execute As
The EXECUTE AS option can be used to set the
execution context for SQL statements. This allows a user to impersonate another
user and is particularly useful for testing permissions.
(4) Permission granularity – more granular control
The security model in SQL Server 2005 provides a
much more granular level of control than earlier versions of SQL Server. This
granular control provides some new types of security and allows you to apply
security to a new set of database objects.
(5) Data Definition Language (DDL) triggers
Database administrators can now write server- or
database-level triggers that fire on events such as creating or dropping a
table, adding a server login, or altering a database. These triggers provide an
invaluable auditing mechanism to automatically capture these events as they
occur.
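A minimal sketch of a database-level DDL trigger that blocks table drops (the trigger name and message are made up):
//******************************************************************************
CREATE TRIGGER trg_NoDropTable
ON DATABASE
FOR DROP_TABLE
AS
BEGIN
    PRINT 'DROP TABLE is not allowed; disable trigger trg_NoDropTable first.';
    ROLLBACK;
END;
******************************************************************************//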
5)
Backup and Restore Enhancements
(1) Copy-only backups
These backups can be made without disrupting the
sequencing of other backups (for example, differential or log backups). Because
copy-only backups do not affect the restoration chain, they are useful in
situations such as when you simply want to get a copy of the database for
testing purposes.
(2) Mirrored backups
SQL Server 2005 adds the capability to create
additional copies of database backups via mirrored backups. Mirrored backups
provide redundancy so that you can overcome the failure of a single backup
device or medium by utilizing the mirrored copy of the backup.
(3) Partial backups
A partial backup contains all the data in the
primary filegroup, any filegroup that is not read-only, and any filegroup that
has been explicitly identified for backup. The elimination of read-only
filegroups from partial backups saves space, saves time, and reduces the server
overhead that is required while performing the backup.
(4) Online restore
Online restore allows a filegroup, file, or a
specific page within a file to be restored while the rest of the database
remains online and available.
6)
Recovery Enhancements
SQL Server 2005 reduces the amount of time it
takes for a database to become available with a new and faster recovery option.
In SQL Server 2005, users can access a recovering database after the
transaction log has been rolled forward. Earlier versions of SQL Server
required users to wait until incomplete transactions had rolled back, even if
the users did not need to access the affected parts of the database.
7) SQL
Server Agent Enhancements
(1) Job Activity Monitor
A new auto-refreshing tool named Job Activity
Monitor has been added to help monitor the execution of scheduled jobs. You can
adjust the refresh rate of the screen and specify filtering criteria in order
to isolate a job or set of jobs.
(2) Shared job schedules
A job schedule can now be shared among jobs that
have the same job owner.
(3) Enhanced SQL Agent security
Several new roles have been added that provide
enhanced security management for the SQL Server Agent. In addition, a separate
proxy account can now be defined for each type of subsystem that the SQL Server
Agent can interact with.
(4) Performance improvements
New thread pooling and reduced job execution delays
have improved the performance of SQL Server Agent.
8)
Replication Enhancements
(1) Replication security enhancements
The replication security model has changed,
allowing more control over the accounts under which replication agents run.
(2) Simplification of the user interface
Replication wizards and dialog boxes have been
redesigned for SQL Server 2005 to simplify the setup of a replication topology.
There are 40% fewer wizard dialogs, and scripting is now integrated into the
wizards, providing improved capabilities to script replication setup during or
after wizard execution.
(3) Replication of schema changes
A much broader range of schema changes can be
replicated without the use of special stored procedures. DDL statements are
issued at the publisher and are automatically propagated to all subscribers.
(4) Peer-to-peer transactional replication
The new peer-to-peer model allows replication
between identical participants in the topology (a master/master or symmetric
publisher concept).
(5) Initialization of a transactional subscription from a backup
Setting up replication between databases that
initially contain large volumes of data can be time-consuming and require
large amounts of storage. SQL Server 2005 provides a new publication option
that allows any backup taken after the creation of a transactional publication
to be restored at the subscriber, rather than using a snapshot to initialize
the subscription.
(6) Heterogeneous replication
Enhancements have been made for publishing data
from an Oracle database with transactional and snapshot replication. In
addition, there is improved support for many non-SQL Server subscribers.
(7) Replication mobility
Merge replication provides the ability to
replicate data over HTTPS, with the web synchronization option, which is useful
for synchronizing data from mobile users over the Internet or synchronizing
data between Microsoft SQL Server databases across a corporate firewall.
(8) Replication scalability and performance enhancements
Scalability and performance enhancements include
significant improvements to the performance of filtered merge publications and
the ability of the Distribution Agent in transaction replication to apply
batches of changes, in parallel, to a subscriber.
9)
Failover Clustering Enhancements
(1) More SQL Server Services supported
In SQL Server 2005, support for failover
clustering has been extended to SQL Server Analysis Services, Notification
Services, and SQL Server replication.
(2) More nodes are supported (limited to four nodes in SQL 2000)
With Windows 2003 Enterprise Edition, you can
now create up to eight nodes in a single cluster.
For the limit of 4 on SQL Server 2000, see the
link (http://technet.microsoft.com/en-us/library/cc917693.aspx) for details.
Excerpted here:
… A node is a server within the cluster. Windows NT Server 4.0 Enterprise
Edition and Windows 2000 Advanced Server support two-node clustering, Windows
2000 Datacenter Server supports up to four-node clustering, and Windows Server
2003 supports up to eight-node clustering; however, you are limited to four
nodes if SQL Server 2000 clustering is to be used. …
(3) SQL Server 2005 Standard Edition supports two-node failover
cluster too
SQL Server 2005 also provides the ability to set
up a two-node failover cluster, using SQL Server 2005 Standard Edition. In
previous releases, clustering was available only for the Enterprise Edition.
(4) Simpler Microsoft Cluster Service (MSCS) setup
Installing MSCS has become very easy (with
Windows 2003 and above). MSCS is a prerequisite for SQL Server Clustering.
(5) Cleaner SQL Server Clustering installation wizard
The much-improved wizard detects and handles
most prerequisites and provides for a single point of installation for multiple
SQL Server node configurations.
(6) Increased instances per cluster
Up to 50 SQL Server instances per cluster are
now supported with SQL Server 2005 Enterprise Edition and up to 16 SQL Server
instances per cluster for SQL Server Standard Edition.
10)
Notification Services Enhancements – removed in SQL Server 2008; replaced by
data-driven subscriptions in SSRS or by Complex Event Processing introduced in
SQL Server 2008 R2
Notification Services was provided as a feature
for SQL Server 2000 before SQL Server 2005 was released. SQL Server 2005
provides a number of enhancements to Notification Services, including the
following:
(1) Integration into SSMS
Notification
Services is now integrated into SSMS Object Explorer. Using Object Explorer,
you can perform most nscontrol command prompt utility tasks, and you can start
and stop instances of Notification Services.
//***************************************************************************
*… NSControl is a command prompt utility for administering Notification
* Services. It provides commands for deploying, configuring, monitoring,
* and controlling Notification Services instances and applications…
***************************************************************************//
(2) Support for subscriber-defined conditions
In SQL Server 2005, Notification Services
supports a new type of actions, called condition actions, which allow subscribers
to define their own query clauses over a predefined data set. Using condition
actions allows subscribers to fully define their own subscriptions over the
data set.
(3) Database independence
SQL Server Notification Services supports using
existing databases for instance and application data.
(4) New management API
SQL Server Notification Services has a new
management API, Microsoft.SqlServer.Management.Nmo, that can be used to develop
Notification Services instances and applications and to manage those instances
and applications.
11)
Full-Text Search Enhancements
(1) Integrated backup and restoration for full-text catalogs
In SQL Server 2005, full-text catalogs can be
backed up and restored along with, or separately from, database data.
(2) Full-text catalogs included in database attach and detach
operations
SQL Server 2005 preserves full-text catalogs
when administrators perform database detach and attach operations.
(3) Full-text indexing performance improvements
SQL Server 2005 Full-Text Search includes a
major upgrade of the Microsoft Search service to version 3.0, which provides
massively improved full-text index population performance and provides one
instance of the Microsoft Search service per instance of SQL Server.
12) Web
Services Enhancements
The main enhancement to Web services in SQL
Server 2005 is that you can use HTTP to access SQL Server directly, without
using a middle-tier listener such as Microsoft Internet Information Services
(IIS). SQL Server 2005 exposes a Web service interface to allow the execution
of SQL statements and invocation of functions and procedures directly. Query
results are returned in XML format and can take advantage of the Visual Studio
Web services infrastructure.
13)
Analysis Services Enhancements
(1) Analysis Services is now fully integrated with the SSMS.
(2) Many of the same wizards and management aspects of Business
Intelligence Development Studio are also available in SSMS.
(3) SQL Server 2005 allows up to 50 separate instances of
Analysis Services on one machine with Enterprise Edition and up to 16 separate
instances with the Developer and Standard Editions.
(4) Analysis Services is now a cluster-aware application, and
failover clustering is completely supported.
(5) SQL Server 2005 supports the XML for Analysis Services 1.1
specification and Analysis Services Scripting Language (ASSL) for XML-based
administration.
(6) Proactive caching has been enabled at the partition level,
to push data that has changed into cache for immediate access in Analysis
Services. This is a big architectural change that directly addresses
high-performance query execution of data within online analytical processing
(OLAP) cubes that change frequently.
(7) The Unified Dimensional Model (UDM) paradigm provides a
powerful metadata abstraction layer to use for all Analysis Services reference
needs. It leverages concepts such as dimensions, measures, hierarchies, and so
on and provides these simplified reference points to all interfaces and
environments.
(8) Perspectives are now available to simplify and control the
end user’s view into complex cubes.
(9) Several new data mining algorithms have appeared, such as
Naïve Bayes, Association, Sequence Clustering, Time Series/Linear Regression,
and Neural Network algorithms. Analysis Services includes more robust usage
and integration with SSIS for complex data transformations and filtering of
data mining.
14)
Reporting Services Enhancements
The first version of SQL Server Reporting
Services shipped in January 2004 for use with SQL Server 2000. However,
Reporting Services is now an integrated component of SQL Server 2005, and the
new version included in SQL Server 2005 contains a great deal of new features
geared toward ease of use, performance, and the improvement of a rich
development platform. Reporting Services is a SQL Server service, similar to
the relational database engine or Analysis Services. It allows you to design
reports, deploy them on a server, and make them available to users in a secured
environment in a variety of online and offline formats.
3. SQL Server 2008
I. New
Features
1) New
storage features
(1) FILESTREAM
(2) Sparse column and column sets (an untyped XML representation
that combines all the sparse columns of a table into a structured output.)
(3) Row-level and page-level compression
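A minimal sketch of sparse columns with a column set, plus page compression on a separate table (all names are made up; note that compression cannot be combined with sparse columns):
//******************************************************************************
CREATE TABLE dbo.Product (
    ProductID         int NOT NULL PRIMARY KEY,
    Color             varchar(20)  SPARSE NULL,   -- NULLs take no storage
    Weight            decimal(8,2) SPARSE NULL,
    SpecialAttributes XML COLUMN_SET FOR ALL_SPARSE_COLUMNS
);

-- Page-level compression on an existing (non-sparse) table
ALTER TABLE dbo.OrderHistory REBUILD WITH (DATA_COMPRESSION = PAGE);
******************************************************************************//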
2) New
data types
(1) Date and time
(2) Datetime2
(3) DatetimeOffset
(4) Hierarchyid
(5) Spatial: Geometry and Geography
(6) User defined table type
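A minimal sketch of the new date and time types from the list above (variable names and values are illustrative):
//******************************************************************************
DECLARE @d   date              = '2014-04-01';
DECLARE @t   time(3)           = '23:59:59.123';
DECLARE @dt2 datetime2(7)      = SYSDATETIME();
DECLARE @dto datetimeoffset(7) = SYSDATETIMEOFFSET();
SELECT @d AS [date], @t AS [time], @dt2 AS [datetime2], @dto AS [datetimeoffset];
******************************************************************************//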
3) New
T-SQL constructs
(1) Compound operators
(2) Grouping Sets
(3) Merge
(4) Row Constructors
(5) Table-valued parameters
(6) Insert over DML - A Filterable Alternative to OUTPUT…INTO
//**************************************************************************************************
INSERT OVER DML refers
to a special syntax in which you wrap an INSERT
INTO…SELECT statement around any
data manipulation language (DML) statement (INSERT, UPDATE, DELETE, or MERGE)
that has an OUTPUT clause, rather than using OUTPUT…INTO on the
DML statement itself. The subtle but crucial difference here is that OUTPUT…INTO
captures the changes, while the INSERT OVER DML syntax consumes the
changes captured by OUTPUT. By treating the OUTPUT changes as the
source for a standard INSERT INTO…SELECT statement, you can apply any WHERE
clause to the INSERT INTO…SELECT statement that you want.
Beyond that, there
isn’t anything more you can do with INSERT OVER DML than what you can with OUTPUT…INTO.
But to casually overlook the significance of filtering change data is to miss
the point of INSERT OVER DML entirely. By being able to filter change data with
a WHERE clause, you can better control which data changes captured are
shipped to the destination table and which aren’t. (from http://mscerts.programming4.us/sql_server/sql%20server%202012%20%20%20t-sql%20enhancements%20-%20the%20insert%20over%20dml%20syntax%20(part%201)%20-%20a%20filterable%20alternative%20to%20output%E2%80%A6into.aspx)
**************************************************************************************************//
(7) New Date and Time Functions
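A minimal sketch of a few of the constructs listed above, namely compound operators, row constructors, and MERGE (table and column names are made up):
//******************************************************************************
-- Compound assignment operator
DECLARE @total int = 10;
SET @total += 5;

-- Row constructors: multiple rows in a single VALUES clause
INSERT INTO dbo.Color (ColorID, ColorName)
VALUES (1, 'Red'), (2, 'Green'), (3, 'Blue');

-- MERGE: insert or update in one statement
MERGE dbo.ProductStock AS target
USING dbo.StockDelta   AS source ON target.ProductID = source.ProductID
WHEN MATCHED THEN
    UPDATE SET target.Quantity = target.Quantity + source.Quantity
WHEN NOT MATCHED BY TARGET THEN
    INSERT (ProductID, Quantity) VALUES (source.ProductID, source.Quantity);
******************************************************************************//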
4) New
performance features
(1) Filtered indexes and statistics,
(2) FORCESEEK query hint - for more information see
http://technet.microsoft.com/en-us/library/bb510478(v=sql.105).aspx
(3) Hash values for finding similar queries in the plan cache,
(4) Plan Guide Successful and Plan Guide Unsuccessful event
classes,
(5) Guided/Misguided Plan Executions/sec Performance Monitor
counters,
(6) LOCK ESCALATION option for ALTER TABLE,
(7) Hot-add CPUs
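A minimal sketch of a filtered index and filtered statistics (table, index, and column names are made up):
//******************************************************************************
-- Index only the rows that the queries actually touch
CREATE NONCLUSTERED INDEX IX_Orders_Open
    ON dbo.Orders (OrderDate)
    WHERE Status = 'Open';

-- Filtered statistics follow the same idea
CREATE STATISTICS st_Orders_Open ON dbo.Orders (CustomerID) WHERE Status = 'Open';
******************************************************************************//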
5) New
security features
(1) Transparent data encryption,
(2) Extensible Key Management,
(3) SQL Server Audit - key components: SQL Server Audit using
Extended Events, Server Audit Specification, Database Audit Specification,
Target (a file, Windows Security Log, or Windows Application Log)
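A minimal sketch of enabling transparent data encryption on one database (database, certificate name, and password are made up; the certificate and its private key should be backed up before relying on this):
//******************************************************************************
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password here>';
CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate';
GO
USE SalesDB;
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TDECert;
GO
ALTER DATABASE SalesDB SET ENCRYPTION ON;
******************************************************************************//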
6) New
database administration features
(1) Backup compression - WITH COMPRESSION
(2) Change Data Capture - Providing
information about DML changes on a table and a database. By using change data
capture, you eliminate expensive techniques such as user triggers, timestamp
columns, and join queries.
//******************************************************************************************************************
-- Step 1: Create the CDC_DEMO database and two sample tables
CREATE DATABASE CDC_DEMO;
GO
USE CDC_DEMO;
GO
CREATE TABLE dbo.Boat (
    Id         int,
    Name       varchar(50),
    BoatType   varchar(20),
    BoatLength tinyint,
    BoatBeam   varchar(20));
CREATE TABLE dbo.BoatListing (
    Id             int,
    BoatId         int,
    AskingPrice    money,
    LastUpdateUser varchar(25));
-- Step 2: Enable CDC on the database
EXEC sys.sp_cdc_enable_db;
GO
-- Step 3: Turn on Change Data Capture for a table
USE CDC_DEMO;
GO
EXEC sys.sp_cdc_enable_table
    @source_schema  = N'dbo',
    @source_name    = N'Boat',
    @role_name      = NULL,       -- role gating not enabled
    @filegroup_name = N'PRIMARY'; -- consider writing to a separate filegroup
GO
-- To capture only some columns instead of the entire table, use an additional parameter
EXEC sys.sp_cdc_enable_table
    @source_schema  = N'dbo',
    @source_name    = N'BoatListing',
    @role_name      = NULL,       -- role gating not enabled
    @filegroup_name = N'PRIMARY', -- consider writing audit data to a separate filegroup
    @captured_column_list = N'Id,AskingPrice,LastUpdateUser';
GO
-- Step 4: Add some initial data to the table
USE CDC_DEMO;
GO
SET NOCOUNT ON;
INSERT INTO dbo.Boat VALUES (1, 'Wind Witch',    'Sailboat', 23, '8 feet and 2 inches');
INSERT INTO dbo.Boat VALUES (2, 'Wind Witch II', 'Sailboat', 25, '8 feet and 6 inches');
-- Step 5: Display Change Data Capture information for the Boat table
USE CDC_DEMO;
GO
DECLARE @from_lsn binary(10), @to_lsn binary(10)
SET @from_lsn = sys.fn_cdc_get_min_lsn('dbo_Boat')
SET @to_lsn   = sys.fn_cdc_get_max_lsn()
SELECT
    CT.__$start_lsn,
    CT.__$operation,
    CASE CT.__$operation
        WHEN 1 THEN 'Delete'
        WHEN 2 THEN 'Insert'
        WHEN 3 THEN 'Update - Pre'
        WHEN 4 THEN 'Update - Post'
    END AS Operation,
    CT.*,
    LSN.tran_begin_time,
    LSN.tran_end_time,
    LSN.tran_id
FROM
    cdc.fn_cdc_get_all_changes_dbo_Boat(@from_lsn, @to_lsn, N'all update old') AS CT
    INNER JOIN cdc.lsn_time_mapping AS LSN ON CT.__$start_lsn = LSN.start_lsn
GO
******************************************************************************************************************//
(3) Change Tracking - enables applications to
obtain only changes that have been made to the user tables, along with the
information about those changes.
//******************************************************************************************************************
Basic Steps: (a) enable change tracking for the database and
table
--Enabling Change Tracking at Database Level
ALTER DATABASE LearningCT
SET CHANGE_TRACKING = ON
(CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON)
--AUTO_CLEANUP -> with this option you can switch the automatic
--tracking-table cleanup process ON or OFF
--CHANGE_RETENTION -> with this option, you can specify the time frame
--for which tracked information will be maintained
--Enabling Change Tracking at Table Level
ALTER TABLE Employee
ENABLE CHANGE_TRACKING
WITH (TRACK_COLUMNS_UPDATED = ON)
--TRACK_COLUMNS_UPDATED -> with this option, information about which
--columns were changed is also tracked
(b) view the change tracking information
using system tables
SELECT * FROM sys.change_tracking_databases
SELECT * FROM sys.change_tracking_tables
SELECT * FROM sys.internal_tables
WHERE parent_object_id = OBJECT_ID('Employee')
******************************************************************************************************************//
(4) Data Collector
//******************************************************************************************************************
The data collector is a component of SQL
Server 2008 that collects different sets of data. Data collection either
runs constantly or on a user-defined schedule. The data collector stores the
collected data in a relational database known as the management data warehouse.
The
data collector is a core component of the data collection platform for SQL
Server 2008 and the tools that are provided by SQL Server. The data collector
provides one central point for data collection across your database servers and
applications. This collection point can obtain data from a variety of sources and is not limited to performance data,
unlike SQL Trace.
The
data collector enables you to adjust the scope of data collection to suit your
test and production environments. The data collector also uses a data
warehouse, a relational database that enables you to manage the data that you
collect by setting different retention periods for your data.
The
data collector supports dynamic tuning for data collection and is extensible
through its API.
- Create
logins and map them to data collector roles.
- Configure
the management data warehouse.
- In
addition, you may want to create proxy accounts to access the management
data warehouse server.
- Out of the box, SQL 2008 comes with 3 pre-canned data collection scripts (disk usage, query statistics, and server activities). However, it is possible to create custom made data collection scripts that are fully integrated with SQL Server Data Management Warehouse.
******************************************************************************************************************//
(5) Policy-Based Management - Four key components
//******************************************************************************************************************
- Target - an entity that is managed by
Policy-Based management; e.g. a database, a table, an index, etc.
- Facet - a predefined set of properties that can
be managed
- Condition - a property expression that evaluates
to True or False; i.e. the state of a Facet
- Policy - a condition to be checked and/or
enforced
******************************************************************************************************************//
(6) SQL Server Extended Events - provide a generic tracing and
troubleshooting framework which allows deeper and more granular level control
of tracing which was not possible using earlier methods like DBCC, SQL Trace,
Profiler, etc. Key components: 4 packages, 254/259 events (SQL Server 2008/2008 R2), actions
for raised events, targets for consuming the events, and predicates for dynamically
filtering events
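A minimal sketch of an event session that writes long-running statements to a file target (session name, threshold, and file paths are made up; the duration predicate is in microseconds):
//******************************************************************************
CREATE EVENT SESSION LongQueries ON SERVER
ADD EVENT sqlserver.sql_statement_completed (
    ACTION (sqlserver.sql_text, sqlserver.session_id)
    WHERE duration > 1000000
)
ADD TARGET package0.asynchronous_file_target (
    SET filename     = N'C:\Temp\LongQueries.xel',
        metadatafile = N'C:\Temp\LongQueries.xem'
);
GO
ALTER EVENT SESSION LongQueries ON SERVER STATE = START;
******************************************************************************//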
(7) Resource Governor - Three key components: resource
pools (a collection of system resources such as
memory or CPU), workload groups (a
user-specified category of requests) and classification (or
classifier user-defined functions- a
set of user-written rules that enable Resource Governor to classify session
requests into the workload groups).
//******************************************************************************************************************
--Resource pool to be used by OLTP Application
CREATE RESOURCE POOL OLTPPool
WITH
(
MIN_CPU_PERCENT=50, MAX_CPU_PERCENT=100,
MIN_MEMORY_PERCENT=50, MAX_MEMORY_PERCENT=100
)
GO
--Resource pool to be used by Report Application
CREATE RESOURCE POOL ReportPool
WITH
(
MIN_CPU_PERCENT=50, MAX_CPU_PERCENT=100,
MIN_MEMORY_PERCENT=50, MAX_MEMORY_PERCENT=100
)
GO
--Workload Group to be used by OLTP Application
CREATE WORKLOAD GROUP OLTPGroup
USING OLTPPool ;
GO
--Workload Group to be used by Report Application
CREATE WORKLOAD GROUP ReportGroup
USING ReportPool ;
GO
USE master;
GO
/*Create the classifier UDF to route incoming requests to the different
workload groups, then enable Resource Governor with the
ALTER RESOURCE GOVERNOR RECONFIGURE statement.
The assumption here is that the OLTP application uses the "OLTPUser" login
whereas the reporting application uses the "ReportUser" login.*/
CREATE FUNCTION dbo.ResourceClassifier()
RETURNS SYSNAME
WITH SCHEMABINDING
AS
BEGIN
    --Declare the variable to hold the value returned in sysname.
    DECLARE @WorkloadGroup AS SYSNAME
    --If the user login is 'OLTPUser', map the connection to the
    --OLTPGroup workload group.
    IF (SUSER_NAME() = 'OLTPUser')
        SET @WorkloadGroup = 'OLTPGroup'
    --If the user login is 'ReportUser', map the connection to
    --the ReportGroup workload group.
    ELSE IF (SUSER_NAME() = 'ReportUser')
        SET @WorkloadGroup = 'ReportGroup'
    ELSE
        SET @WorkloadGroup = 'default'
    RETURN @WorkloadGroup
END
GO
--Register the classifier user-defined function and update
--the in-memory configuration.
ALTER RESOURCE GOVERNOR
WITH (CLASSIFIER_FUNCTION=dbo.ResourceClassifier);
GO
--Enabling Resource Governor (by default, when you install
--SQL Server, Resource Governor is disabled).
--It loads the stored configuration metadata into memory.
ALTER RESOURCE GOVERNOR RECONFIGURE
GO
--Disabling Resource Governor
ALTER RESOURCE GOVERNOR DISABLE
GO
--It resets statistics on all workload groups and resource pools.
ALTER RESOURCE GOVERNOR RESET STATISTICS
GO
******************************************************************************************************************//
7) New
SQL Server management features
(1) Transact-SQL Debugger,
(2) IntelliSense,
(3) Error list window,
(4) Multiserver queries - query multiple servers from one window
after creating a Registered SQL Server Group with the desired servers.
(5) PowerShell integration
8)
Service Broker
(1) New Conversation Priorities
(2) New Diagnostic Utility - ssbdiagnose
(3) New Service Broker Elements in Object Explorer
(4) New System Monitor Object and Counters
(5) New Service Broker Tutorials
II.
Enhancement
1)
Installation Enhancements
(1) Slipstream Installation
Starting with SQL Server 2008 Service Pack 1,
you can now perform slipstream installations of SQL Server 2008. Slipstream is
the integration of the original installation media files with a service pack
and/or a cumulative update so that they can be installed in a single step.
(2) Selective uninstallation
SQL Server 2008 also provides the capability to
selectively uninstall cumulative updates and/or service packs via the Programs
and Features control panel.
2) SQL
Server Management Studio
(1) Customize the columns displayed by the Object Explorer
Details window.
(2) Display the properties of a selected item from Object
Explorer Details at the bottom of the window.
(3) View a color-coded status bar at the bottom of the window
for Transact-SQL and MDX code editors which provide information about the
editor connection. The status bar changes color when a code editor opens
multiple connections.
(4) Customize the tab name for items in the title bar of the
code editor windows.
(5) Configure the number of rows returned when you open tables
via the Object Browser.
(6) Create and manage plan guides via the Programmability folder
in Object Browser.
3)
Dynamic Management Views
(1) SQL Server 2008 adds several new dynamic management views to
return memory-related information about SQL Server:
·
sys.dm_os_memory_nodes—Returns memory
allocation information for memory allocated through SQL Server Memory Manager.
·
sys.dm_os_nodes—Returns information about
SQL OS memory nodes, internal
structures of SQL OS that abstract hardware processor locality.
·
sys.dm_os_process_memory—Returns
complete information about memory allocated to SQL Server process space.
·
sys.dm_os_sys_memory—Returns memory
information about the operating system.
(2) In addition, the cpu_ticks_in_ms column in the
sys.dm_os_sys_info dynamic management view has been discontinued, and two new
columns, sqlserver_start_time_ms_ticks and sqlserver_start_time, have been
added.
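//******************************************************************************************************************
An illustrative query against one of the new memory DMVs (no parameters needed;
the two columns shown are documented members of the view):
SELECT physical_memory_in_use_kb,
       memory_utilization_percentage
FROM sys.dm_os_process_memory;
*****************************************************************************************************************//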
4)
Database Mirroring
SQL Server 2008 provides a number of
enhancements to database mirroring, mostly related to performance of database
mirroring, including:
(1) Compression of the transaction log records being streamed to
the mirror database,
(2) Asynchronous write-ahead on the incoming log stream and
read-ahead during the undo phase, and improved use of the log send buffers.
5)
SQLCLR Enhancements
(1) Extending the 8KB size limit for CLR user-defined types and
CLR user-defined aggregates,
(2) Supporting the definition of ordered table-valued functions,
(3) Providing support for multiple input parameters for
user-defined aggregates, and including the option to define static methods as
user-defined functions.
6)
Replication Enhancements
SQL Server 2008 introduces a number of usability
and manageability enhancements for peer-to-peer replication and the Replication
Monitor.
(1) Peer-to-peer replication includes the following significant
usability and manageability improvements:
·
A new option, enabled by default that
allows the Distribution Agent
to detect conflicts during synchronization and to stop applying changes at the
affected row.
·
The capability to add nodes to a
replication topology without quiescing the topology.
·
The capability to configure a topology visually
in the Configure Peer-to-Peer
Topology Wizard. The Topology Viewer enables you to perform common
configuration tasks, such as adding new nodes, deleting nodes, and adding new
connections between existing nodes.
(2) The Replication Monitor includes the following usability
improvements:
·
Most of the Replication Monitor grids
allow you to specify which columns to view, sort by multiple columns, and filter
rows in the grid based on column values.
·
The Common Jobs tab for the Publisher node
has been renamed to Agents.
·
The single Warnings and Agents tab for the
publication node has been
split into separate Warnings and Agents tabs to emphasize the difference
between administering performance warnings and monitoring replication agents.
7)
Service Broker Enhancements
(1) Support for conversation priorities.
This support allows administrators and
developers to specify that messages for important Service Broker conversations
are sent and received before messages from less important conversations to
ensure that low-priority work does not block higher-priority work.
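//******************************************************************************************************************
A minimal sketch of a conversation priority (the priority name is illustrative;
ANY matches any contract or service):
CREATE BROKER PRIORITY HighPriorityOrders
FOR CONVERSATION
SET (CONTRACT_NAME = ANY,
     LOCAL_SERVICE_NAME = ANY,
     REMOTE_SERVICE_NAME = ANY,
     PRIORITY_LEVEL = 8); --1 (lowest) to 10 (highest); the default is 5
*****************************************************************************************************************//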
(2) The new ssbdiagnose command-prompt utility to analyze and
diagnose Service Broker configurations and conversations.
(3) A new performance object and counters that report how often
Service Broker dialogs request transmission objects and how often inactive
transmission objects are written to work tables in tempdb.
(4) Support for Service Broker in SQL Server Management Studio
via new Service Broker Elements in Object Explorer.
8)
SSAS:
(1) A new Aggregation Designer makes it easier to browse and
modify aggregation designs.
(2) Aggregation design and usage-based optimization wizards have
been simplified and enhanced.
(3) New AMO warning messages alert users when they depart from
design best practices or make logical errors in database design.
(4) Simplified and enhanced cube and dimension wizards help you
create better cubes and dimensions in fewer steps.
(5) A new Attribute Relationship designer makes it easier to
browse and modify attribute relationships. A new Key Columns dialog box makes
editing key columns easier, and key columns can now be edited in the Properties
panel.
(6) An updated Dimension Structure tab helps make modifying
attributes and hierarchies easier.
(7) A new storage structure is available and performance has
been enhanced in all backup and restore scenarios.
(8) When creating a mining structure, you can now divide the
data in the mining structure into training and testing sets.
(9) You can now attach filters to a mining model and apply the
filter during both training and testing. Applying a filter to the model lets
you control the data used to train the model and lets you more easily assess
the performance of the model on subsets of the data.
(10) Cross-validation is now available in the Mining Accuracy
Chart view of the Data Mining Designer.
(11) SQL Server 2008 supports the creation, management, and use
of data mining models from Microsoft Excel when you use the SQL Server 2008
Data Mining add-ins for Office 2007.
(12) You are able to add aliases to columns in a mining model to
make it easier to understand column content and reference the column in DMX
statements.
9) SSIS
(1) Improvements in the parallel execution of data flow pipeline
paths on multiprocessor systems, going beyond the SQL Server 2005 limit of two
engines.
(2) A new script environment. Visual Studio for Applications
(VSA) has been replaced with Visual Studio Tools for Applications (VSTA). VSTA
allows the development of scripts written in Visual C# and Visual Basic,
providing support for the Script Task and Script Component. VSTA also supports
debugging and adding managed assemblies to a script at design time.
(3) Increased performance and improved caching options for
Lookup Transformations.
(4) Data profiling to improve data quality by identifying
potential data quality problems.
(5) Support for the new Change Data Capture feature in SQL
Server 2008, providing an effective method for performing incremental data
loads from source tables to data marts and data warehouses.
4. SQL Server 2008R2
I. New
Features
1) Database Engine - Connecting to the Database Engine Using
Extended Protection
SQL Server supports service binding and channel
binding to help reduce authentication relay attacks on SQL Server
instances.
2)
Multi-Server Administration and Data-Tier Application:
(1) SQL Server Utility/ Utility Explorer - a central repository
for performance data and management policies that tailor the operation of
instances of the Database Engine that have been enrolled in the utility,
(2) DAC- a single unit for developing, deploying, and managing
the database objects used by an application
3)
Master Data Services
//******************************************************************************************************************
(1) Purpose: to discover and define non-transactional lists of
data, with the goal of compiling maintainable master lists
(2) Key components: (a) Master Data Services Configuration
Manager, a tool you use to create and configure Master Data Services databases
and Web applications,(b) Master Data Manager, a Web application you use to
perform administrative tasks (like creating a model or business rule), and that
users access to update data, and (c) Master Data Services Web service,
which developers can use to extend or develop custom solutions for Master Data
Services.
(3) Key terms: model, entity, attributes, members, attribute
group, business rules, derived hierarchies.
(4) How is it different from DQS? MDS and DQS are independent
but also collaborative. Both play important roles in enterprise information management
(EIM) scenarios.
· DQS emphasizes the cleansing, correction, and standardization of
data through the collection of business knowledge in a DQS knowledge base, and
the application of that knowledge in interactive and automated cleansing
scenarios.
· MDS emphasizes the ongoing "care and feeding" of
master data, providing a set of tools that allow business users to keep records
up to date, while allowing IT departments to secure and manage the master data
models and integrate with external systems that utilize the master data.
· DQS uses Domain Rules and MDS employs Business Rules
· Domain rules in DQS and business rules in MDS are both used to
maintain the integrity of data, but they are used in different circumstances,
as part of different operations, and from within different tools. Domain rules
are used to identify data problems as part of a cleansing project, while
business rules are used to prevent bad data from being introduced into MDS. The
two types of rules can apply to the same data at different places in the
information management lifecycle, so the same rule can end up being defined in
both places.
***********************************************************************************************//
4) SSAS
– No Changes on Multidimensional Modeling and Data Mining
(1) PowerPivot for Excel
PowerPivot for Excel- (a) An add-in to Excel
2010 to create relationships in large amounts of data from different sources,
and then use that data as the basis for PivotTables and other data
visualization objects that support data analysis in Excel. (b) Using a local
Analysis Services VertiPaq engine that compresses and loads data
DAX - Data Analysis Expressions used by
PowerPivot for Excel for creating sophisticated calculations, using time
intelligence, and performing lookups
(2) PowerPivot for SharePoint
Extends SharePoint 2010 and Excel Services to
add server-side processing, collaboration, and document management support for
the PowerPivot workbooks that you publish to SharePoint. You still use
PowerPivot for Excel to create PowerPivot workbooks.
SQL Server 2008 R2 also introduces Analysis
Services integration with SharePoint products to support large scale data
analytics for Excel workbooks that you publish to a SharePoint site. In this
release, Setup includes a new setup role installation option so that you can
deploy Analysis Services as a preconfigured PowerPivot for SharePoint server in
a SharePoint farm.
5) SSIS
– No Change
6) SSRS
(1) Report Builder 3.0,
(2) Report parts,
(3) Shared datasets - A shared dataset is published on a
report server and can be used by multiple reports. A shared dataset must be
based on a shared data source. A shared dataset can be cached and scheduled by
creating a cache refresh plan.
(4) Sparklines and data bars,
(5) Indicators,
(6) Calculating aggregates of aggregates - You can create
expressions that calculate an aggregate of an aggregate. For example, in a cell
in the row group header that is associated with a group based on year, you can
calculate the average monthly sales for year by using the
expression =Avg(Sum(Fields!Sales.Value,"Month"),"Year").
By using this feature for charts and gauges that are nested in a
table, you can align horizontal and vertical axes for charts and scales for
gauges. You do this by calculating the maximum and minimum of aggregated values
so the nested items use the same ranges.
(7) Maps,
(8) Lookup functions - Expressions in data regions can now
include references to Lookup
Function (Report Builder 3.0 and SSRS), LookupSet
Function (Report Builder 3.0 and SSRS), and Multilookup
Function (Report Builder 3.0 and SSRS). These lookup functions can retrieve
values from a dataset that has name/value pairs and that is not bound to the
data region.
In the following
expression, Lookup compares the value of ProductID to ID in each row
of the dataset called "Product" and, when a match is found, returns
the value of the Name field for that row.
=Lookup(Fields!ProductID.Value, Fields!ID.Value, Fields!Name.Value,
"Product")
II.
Enhancement
1) SQL
Server 2008 R2 Installation
(1) Installing SQL Server Using SysPrep.
A stand-alone installation of SQL Server
includes two steps: prepare and complete. In SQL Server 2008, these two steps
ran as one integrated step during installation.
Starting with SQL Server 2008 R2, in addition to
the integrated installation, the two steps are available as separate
installation actions, and can be used in preparing an image of SQL Server.
(2) Side-by-side instances of SQL Server 2008 R2 and SQL Server
2008.
SQL Server 2008 R2 and SQL Server 2008 share the
same major version number and therefore share some SQL Server components.
(3) A new command line parameter /ACCEPTSQLSERVERLICENSETERMS is
now required for unattended installations of SQL Server.
(4) Starting in SQL Server 2008 R2, the Reporting Services
component no longer supports Itanium-based servers running Windows Server 2003
or Windows Server 2003 R2.
Reporting Services continues to support other
64-bit operating systems, including Windows Server 2008 for Itanium-Based
Systems and Windows Server 2008 R2 for Itanium-Based Systems. To upgrade to SQL
Server 2008 R2 from a SQL Server 2008 installation with Reporting Services on
an Itanium-based system edition of Windows Server 2003 or Windows Server 2003
R2, you must first upgrade the operating system.
(5) New role-based installation parameter for installing SQL
Server in a predetermined configuration.
SQL Server Express is installed using the AllFeatures_WithDefaults
role that minimizes the required user input. The AllFeatures_WithDefaults
role installs all available features, adds the current user to the SQL Server sysadmin
fixed server role, and uses default values for the service accounts whenever
possible. The AllFeatures_WithDefaults role can be provided as a command
line parameter when installing the full product.
(6) The option to enable the feature usage reporting during SQL
Server Setup is now available on the Accept License Terms page.
In the previous versions of SQL Server, the
feature usage reporting option was available through the Error and Usage
Reporting page which now only includes the option to enable error
reporting.
(7) Two new premium editions to meet the needs of large-scale
datacenters and data warehouses: SQL Server 2008 R2 Datacenter. SQL Server 2008
R2 Parallel Data Warehouse.
SQL Server Datacenter delivers a high-performing
data platform that provides the highest levels of scalability for large
application workload, virtualization, and consolidation.
Unlike the other editions of SQL Server which
are software-only products, the Parallel Data Warehouse edition is a
combination hardware and software offering. You can’t just buy SQL Server 2008
R2 Parallel Data Warehouse edition from Microsoft and install it on one of your
own servers as you can with the other editions of SQL Server. Instead, you
purchase SQL Server 2008 R2 Parallel Data Warehouse from one of Microsoft’s
partners, which are HP, Dell, IBM, and Bull. Each of the different partners
provides a variety of different sizes of system configurations.
(8) The largest database supported by SQL Server Express has
been increased from 4GB to 10 GB
(9) Maximum number of CPU support for SQL Server Enterprise has
increased to a maximum of 8 CPUs.
2)
Database Engine
(1) Unicode Compression
SQL Server 2008 R2 introduces Unicode
compression. Data that is stored in nvarchar(n) and nchar(n) columns can be
compressed by using an implementation of the Standard Compression Scheme for
Unicode (SCSU) algorithm.
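//******************************************************************************************************************
Unicode compression is applied automatically to nchar(n)/nvarchar(n) columns when
row or page compression is enabled; a minimal sketch (the table name is illustrative):
ALTER TABLE dbo.Customers REBUILD WITH (DATA_COMPRESSION = ROW);
*****************************************************************************************************************//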
3)
Service Broker - turn on and off poison-message handling on a queue
SQL Server 2008 R2 introduces the option to turn
poison-message handling on and off on a queue. A queue that has poison-message
handling turned off will not be disabled after consecutive transaction rollbacks. With
this feature, an application can define a custom poison-message handling strategy
if desired. The Transact-SQL CREATE QUEUE and ALTER QUEUE statements have been
updated, and the IsPoisonMessageHandlingEnabled property has been added to
the Microsoft.SqlServer.Management.Smo.Broker.ServiceQueue API.
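//******************************************************************************************************************
A minimal sketch of turning poison-message handling off (the queue name is illustrative):
ALTER QUEUE dbo.TargetQueue
WITH POISON_MESSAGE_HANDLING (STATUS = OFF);
*****************************************************************************************************************//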
4) SSIS
(1) ADO NET Destination Editor (Connection Manager Page)
5. SQL Server 2012
I. New Features
1) New
storage features
(1) Columnstore indexes,
(2) FileTable storage.
2) New
Transact-SQL (T-SQL) constructs
(1) Sequence objects,
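//******************************************************************************************************************
A minimal sketch of a sequence object (the sequence name is illustrative):
CREATE SEQUENCE dbo.OrderNumber
    AS INT
    START WITH 1
    INCREMENT BY 1;
GO
SELECT NEXT VALUE FOR dbo.OrderNumber AS NextOrderNumber;
*****************************************************************************************************************//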
(2) THROW statement - Raises an exception and transfers
execution to a CATCH block of a TRY…CATCH construct
//******************************************************************************************************************
THROW [ { error_number | @local_variable },
        { message | @local_variable },
        { state | @local_variable } ]
[ ; ]
What are the benefits of THROW over RAISERROR?
- Correct line number of the error - even when you re-throw the exception with
  the THROW keyword, the original error number and line number are preserved,
  unlike with RAISERROR, where they are overwritten with the line number where
  RAISERROR executed.
- Easy to use - no need for extra code as in RAISERROR.
- Complete termination - the severity level raised by THROW is always 16, and,
  more importantly, when a THROW statement in a CATCH block is executed, the
  batch is terminated, so the statements that follow it are not run.
- Independence of sys.messages - this makes it possible to re-throw custom
  message numbers without the need to use sp_addmessage to add the number.
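A minimal sketch of re-throwing an error (the divide-by-zero simply forces an error;
no table is required):
BEGIN TRY
    SELECT 1/0;   -- raises error 8134 (divide by zero)
END TRY
BEGIN CATCH
    PRINT 'Log the error here, then re-throw it...';
    THROW;        -- re-throws the original error, preserving its number and line
END CATCH;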
*****************************************************************************************************************//
(3) New conversion functions - PARSE, TRY_PARSE, and TRY_CONVERT
//******************************************************************************************************************
Parse - Returns the result of an
expression, translated to the requested data type
PARSE ( string_value AS data_type [ USING culture ] )
-- return 2010-12-13 00:00:00.0000000
SELECT PARSE('Monday, 13 December 2010' AS datetime2 USING 'en-US') AS Result;
-- return 345.98
SELECT PARSE('€345,98' AS money USING 'de-DE') AS Result;
*****************************************************************************************************************//
(4) New logical functions - CHOOSE and IIF,
//******************************************************************************************************************
Returns the item at the specified index
from a list of values in SQL Server.
SELECT CHOOSE ( 3, 'Manager', 'Director',
'Developer', 'Tester' ) AS Result; --return Developer
*****************************************************************************************************************//
//******************************************************************************************************************
Returns one of two values, depending on whether the Boolean expression
evaluates to true or false in SQL Server.
IIF ( boolean_expression,
true_value, false_value )
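-- Simple usage example (no table required):
SELECT IIF(50 > 30, 'TRUE', 'FALSE') AS Result; -- returns TRUE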
*****************************************************************************************************************//
(5) New string functions
- CONCAT and FORMAT,
//******************************************************************************************************************
CONCAT ( string_value1,
string_value2 [, string_valueN ] )
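-- Simple usage example (no table required; non-string arguments are implicitly converted):
SELECT CONCAT('SQL', ' ', 'Server', ' ', 2012) AS Result; -- returns SQL Server 2012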
*****************************************************************************************************************//
//******************************************************************************************************************
FORMAT - for numeric and date/time values only
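-- Simple usage examples (FORMAT uses .NET format strings; no table required):
SELECT FORMAT(GETDATE(), 'yyyy-MM-dd') AS ISODate;
SELECT FORMAT(1234567.89, 'N2', 'en-US') AS Amount; -- returns 1,234,567.89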
*****************************************************************************************************************//
(6) New date and time functions
·
DATEFROMPARTS ,
·
DATETIME2FROMPARTS ,
·
DATETIMEFROMPARTS ,
·
DATETIMEOFFSETFROMPARTS ,
·
SMALLDATETIMEFROMPARTS ,
·
TIMEFROMPARTS
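//******************************************************************************************************************
Simple usage examples of two of the functions listed above (no table required):
SELECT DATEFROMPARTS(2012, 3, 1) AS SomeDate;                      -- returns 2012-03-01
SELECT DATETIMEFROMPARTS(2012, 3, 1, 8, 30, 0, 0) AS SomeDateTime;
*****************************************************************************************************************//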
(7) New ROWS and RANGE Clauses
Provides support for window
framing, which can be used to restrict the results to a subset of rows by
specifying the start and end points within the partition of the OVER clause.
//******************************************************************************************************************
--2008R2 and below
OVER (
[
<PARTITION BY clause> ]
[
<ORDER BY clause> ]
)
--2012 and upper
OVER (
[
<PARTITION BY clause> ]
[
<ORDER BY clause> ]
[
<ROW or RANGE clause> ]
)
*****************************************************************************************************************//
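//******************************************************************************************************************
A running-total sketch using the new ROWS framing (assumes the AdventureWorks2012
sample database, as in the LAG example below):
SELECT CustomerID, OrderDate, TotalDue,
       SUM(TotalDue) OVER (PARTITION BY CustomerID
                           ORDER BY OrderDate
                           ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS RunningTotal
FROM Sales.SalesOrderHeader;
*****************************************************************************************************************//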
(8) New window offset
functions – LAG and LEAD functions (=previous and next)
LAG and LEAD functions used in conjunction with
OVER clause let you return a value from a specific row that’s a certain offset
from the current row.
//******************************************************************************************************************
LAG (scalar_expression
[,offset] [,default])
OVER ( [
partition_by_clause ] order_by_clause )
USE AdventureWorks2012;
GO
SELECT BusinessEntityID, YEAR(QuotaDate) AS SalesYear, SalesQuota AS CurrentQuota,
LAG(SalesQuota, 1,0) OVER (ORDER BY YEAR(QuotaDate)) AS PreviousQuota
FROM Sales.SalesPersonQuotaHistory
WHERE BusinessEntityID = 275 and YEAR(QuotaDate) IN ('2005','2006');
*****************************************************************************************************************//
(9) New window rank
distribution functions — PERCENT_RANK, PERCENTILE_CONT, PERCENTILE_DISC, and
CUME_DIST.
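//******************************************************************************************************************
An illustrative query (assumes the AdventureWorks2012 sample database used
elsewhere in this section):
SELECT BusinessEntityID, SalesQuota,
       PERCENT_RANK() OVER (ORDER BY SalesQuota) AS PctRank,
       CUME_DIST()    OVER (ORDER BY SalesQuota) AS CumeDist
FROM Sales.SalesPersonQuotaHistory
WHERE SalesQuota IS NOT NULL;
*****************************************************************************************************************//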
(10) New ad hoc query paging - Provides a mechanism to implement
paging using the relatively simple syntax provided by the ANSI standard ORDER BY ...
OFFSET / FETCH clause.
//******************************************************************************************************************
OFFSET / FETCH: using them with ORDER BY
The OFFSET-FETCH clause provides you with an option to fetch only a window or page of results from the result set. OFFSET-FETCH can be used only with the ORDER BY clause.
[ORDER BY { order_by_expression [ ASC | DESC ] } [
,...n][<offset_fetch>] ]
<offset_fetch> ::= {OFFSET { integer_constant |
offset_row_count_expression } { ROW | ROWS }
[FETCH { FIRST | NEXT } {integer_constant | fetch_row_count_expression }
{ ROW | ROWS } ONLY]}
--Skip the first 10 rows of the sorted result set and return the remaining rows.
SELECT [First Name] + ' ' + [Last Name] FROM Employees ORDER BY [First Name] OFFSET 10 ROWS;
--Skip the first 10 rows of the sorted result set and return the next 5 rows.
SELECT [First Name] + ' ' + [Last Name] FROM Employees ORDER BY [First Name] OFFSET 10 ROWS FETCH NEXT 5 ROWS ONLY;
*****************************************************************************************************************//
3) New
scalability and performance features
(1) Indirect checkpoints,
(2) FORCESCAN table hint,
(3) Number of table partitions increased to 15,000.
4) New
security features
(1) Database Audit (note: not strictly new; SQL Server Audit was introduced in
2008 and is also built on Extended Events),
(2) User-defined server roles,
(3) Contained databases.
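//******************************************************************************************************************
A minimal sketch of a user-defined server role (the role and login names are illustrative):
CREATE SERVER ROLE JuniorDBA;
GRANT VIEW SERVER STATE TO JuniorDBA;              --server-level permission granted to the role
ALTER SERVER ROLE JuniorDBA ADD MEMBER SomeLogin;  --SomeLogin is a placeholder login
*****************************************************************************************************************//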
5) New
availability features - AlwaysOn
A number of high-availability enhancements known
as AlwaysOn, which include AlwaysOn Availability Groups and AlwaysOn Failover
Cluster Instances.
6)
Statistical Semantic Search
Statistical Semantic Search builds upon the
existing full-text search feature in SQL Server by querying the contextual
meaning of terms within a document repository.
//******************************************************************************************************************
Semantic search builds upon the existing
full-text search feature in SQL Server, but enables new scenarios that extend
beyond keyword searches. While full-text search lets you query the words in a document, semantic search lets you query the meaning of the document. Solutions that are now possible include
automatic tag extraction, related content discovery, and hierarchical
navigation across similar content. For example, you can query the index of key
phrases to build the taxonomy for an organization, or for a corpus of
documents. Or, you can query the document similarity index to identify resumes
that match a job description.
Statistical Semantic Search provides
deep insight into unstructured documents stored in SQL Server databases by
extracting and indexing statistically relevant key phrases. Then it also uses these key phrases to identify and index documents that are
similar or related.
Before
you can index documents with Semantic Search, you have to store the documents in a SQL Server database.
The FileTable feature in SQL Server 2012 makes
unstructured files and documents first-class citizens of the relational
database. As
a result, database developers can manipulate documents together with structured
data in Transact-SQL set-based operations.
You
query these semantic indexes by using three Transact-SQL rowset functions to
retrieve the results as structured data: semantickeyphrasetable,
semanticsimilaritytable, semanticsimilaritydetailstable
--Find
the Key Phrases in a Document
--The
following query gets the key phrases that were identified in the sample
document.
--It
presents the results in descending order by the score that ranks the
statistical significance of each key phrase.
--
This query calls the semantickeyphrasetable (Transact-SQL) function.
DECLARE @Title NVARCHAR(300), @DocID INT -- assumed variable types for this example
SET @Title = 'Sample Document.docx'
SELECT @DocID = DocumentID
FROM Documents
WHERE DocumentTitle = @Title
SELECT @Title AS Title, keyphrase, score
FROM SEMANTICKEYPHRASETABLE(Documents,
*, @DocID)
ORDER BY score DESC
--Find
Similar or Related Documents
--The
following query gets the documents that were identified as similar or related
to the sample document.
--It
presents the results in descending order by the score that ranks the similarity
of the 2 documents.
--This
query calls the semanticsimilaritytable (Transact-SQL) function.
DECLARE @Title NVARCHAR(300), @DocID INT -- assumed variable types for this example
SET @Title = 'Sample Document.docx'
SELECT @DocID = DocumentID
FROM Documents
WHERE DocumentTitle = @Title
SELECT @Title AS SourceTitle,
DocumentTitle AS MatchedTitle,
DocumentID, score
FROM SEMANTICSIMILARITYTABLE(Documents,
*, @DocID)
INNER JOIN Documents ON DocumentID =
matched_document_key
ORDER BY score DESC
--Find
the Key Phrases That Make Documents Similar or Related
--The
following query gets the key phrases that make the 2 sample documents similar
or related to one another.
--It
presents the results in descending order by the score that ranks the weight of
each key phrase.
--This
query calls the semanticsimilaritydetailstable (Transact-SQL) function.
DECLARE @SourceTitle NVARCHAR(300), @MatchedTitle NVARCHAR(300) -- assumed types
DECLARE @SourceDocID INT, @MatchedDocID INT                     -- assumed types
SET @SourceTitle = 'first.docx'
SET @MatchedTitle = 'second.docx'
SELECT @SourceDocID = DocumentID FROM Documents WHERE DocumentTitle = @SourceTitle
SELECT @MatchedDocID = DocumentID FROM Documents WHERE DocumentTitle = @MatchedTitle
SELECT @SourceTitle AS SourceTitle,
@MatchedTitle AS MatchedTitle, keyphrase, score
FROM semanticsimilaritydetailstable(Documents,
DocumentContent,
@SourceDocID, DocumentContent, @MatchedDocID)
ORDER BY score DESC
*****************************************************************************************************************//
7) Data
Quality Services
This new feature allows you to build a knowledge
base of data rules and use those to perform a variety of critical data quality
tasks, including correction, enrichment, standardization, and de-duplication of
your data.
II. Enhancement
1) SQL
Server Management Studio Enhancements
(1) Alternative keyboard shortcut schemes
SSMS support two keyboard shortcut schemes. The
default is based on the Microsoft Visual Studio 2010 keyboard shortcuts. The
other scheme matches the keyboard shortcuts from SQL Server 2008 R2. The SQL
Server 2000 shortcuts are no longer available in SQL Server 2012.
(2) Query Editor T-SQL debugger
The debugger was enhanced in a number of
different ways, including improved breakpoint processing, expanded Watch window
capabilities, and the ability to debug prior instances of SQL Server, including
SQL Server 2005 (SP2) or later.
(3) Query Editor snippets
T-SQL code snippets are a new type of template
that was added in SQL Server 2012. These templates are another great starting
point when writing new T-SQL statements in the Database Engine Query Editor.
(4) Query Editor IntelliSense
IntelliSense has been optimized to produce
better string matches. It has also been enhanced to support the newly added
snippets.
(5) Extended Events GUI
SSMS now provides a graphical interface to
create, manage, and monitor Extended Events sessions for monitoring SQL Server.
(6) Database Recovery advisor
The Database Recovery tool is launched from the
Restore Database window and displays a visual
timeline that outlines the database backups that are available for
restore, allowing a user to restore to a point in time by simply clicking the
visual timeline; the necessary backups will be selected for recovery.
2)
Resource Governor Enhancements
SQL Server 2012 provides enhancements to the
Resource Governor that enable you to more effectively govern performance,
especially for machines with a large number of CPU cores. The enhancements
include the following:
(1) Support for up to 64 resource pools, an increase from the
previous maximum of 20, to better enable partitioning larger machines between
more workloads
(2) A new CAP_CPU_PERCENT option for resource pools to provide a
means of setting a hard-cap limit on CPU resource usage, which can provide
greater predictability
(3) A new AFFINITY option for resource pools providing a
mechanism to affinitize resource pools to one or more schedulers and NUMA
nodes, which can provide greater isolation of CPU resources
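//******************************************************************************************************************
A minimal sketch of the new pool options (the pool name, cap, and scheduler range
are illustrative):
CREATE RESOURCE POOL ReportPool2012
WITH
(
    MAX_CPU_PERCENT = 30,           --soft limit under contention
    CAP_CPU_PERCENT = 40,           --hard cap on CPU usage
    AFFINITY SCHEDULER = (0 TO 3)   --bind the pool to schedulers 0 through 3
);
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
*****************************************************************************************************************//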
3)
Spatial Data Enhancements
(1) SQL Server 2012 adds four new spatial type classes:
FULLGLOBE, CIRCULARSTRING, COMPOUNDCURVE, and CURVEPOLYGON .
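//******************************************************************************************************************
A minimal sketch of one of the new types (the coordinates are arbitrary):
DECLARE @g geometry = geometry::STGeomFromText('CIRCULARSTRING(0 0, 2 2, 4 0)', 0);
SELECT @g.STLength() AS ArcLength;
*****************************************************************************************************************//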
(2) SQL Server 2012 also includes features and enhancements that
improve performance for spatial data and operations, including new spatial
indexes, a spatial index hint, compression, improved index build time, and a
new query plan for spatial queries.
(3) SQL Server 2012 provides further performance enhancements at
the spatial library level for Distance() and ShortestLineTo() methods along
with performance improvements at the SQL level for methods which are based on
them, including STDistance() and ShortestLineTo() .
4)
Service Broker Enhancements
(1) The ability to send messages to multiple services.
(2) The ability to reconfigure poison message handling and to
view this setting for any service.
(3) Built-in tracking for how long a message has been in a
queue. (A new column message_enqueue_time now exists on every queue.)
(4) Built-in support for AlwaysOn Availability.
5)
Full-Text Search Enhancements
SQL Server 2012 full-text search enhancements
include improvements in performance and scale and new functionality, including
the previously discussed Semantic Search capability.
Full-text search in SQL Server 2012 now scales
to more than 100 million documents. Significant improvement can also be
realized in the traditional full-text query performance, which is reported to
be approximately 7 to 10 times faster than in the previous versions of SQL
Server. Additional performance improvements are a result of improved internal
implementation, improved query plans, and preventing queries from blocking
index updates.
The new functionality associated with full-text
search includes the following:
(1) Property Search
In SQL Server 2012, users can now conduct
searches on document properties.
(2) Customizable Near
The Custom Proximity Operator, or Customizable
Near, makes it possible for users to specify how close a search term must
appear to others before it is considered a successful match.
(3) New Wordbreaks
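//******************************************************************************************************************
A Customizable Near sketch (assumes a full-text index on the column; the table,
column, terms, and distance are illustrative):
SELECT ProductDescriptionID
FROM Production.ProductDescription
WHERE CONTAINS(Description, 'NEAR((mountain, bike), 5)');
*****************************************************************************************************************//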
6)
Master Data Services Enhancements
(1) The ability to use Excel to manage master data via the
Master Data Services Add-in for Excel.
(2) The ability to match data before loading, to confirm that
you are not adding duplicate records, using SQL Server Data Quality Services
(DQS) matching to compare two sources of data.
(3) The ability to load all members and attribute values for an
entity at one time.
(4) A new higher-performance MDSModelDeploy command-line tool is
now available to create and deploy packages with data, in addition to a Model
Deployment Wizard in the web application that is used to deploy model
structures only.
(5) A new Model Package Editor that enables you to deploy
selected parts of a model rather than the entire model.
(6) Creating a recursive hierarchy with three or more levels is
now supported in MDS.
(7) SharePoint workflow integration.
7) SSAS
SQL Server 2012 provides enhancements to almost
every component of SSAS, along with the addition of major scale-out and
in-memory capabilities. The following are some of the top new features and
enhancements in SSAS:
(1) A new architecture for SQL Server 2012 SP1 PowerPivot that
supports a PowerPivot server outside of a SharePoint 2013 farm that leverages
Excel Services for querying, loading, refreshing, and saving data.
(2) PowerPivot (in Microsoft Excel 2013) supports deeper
integration with data exploration workflows.
(3) Tabular models enhancements, including optimized storage for
measures and key performance indicators (KPIs), extended data categorizations,
extended characters, hierarchy annotation, and improved support when importing
from Data Market (external data) data feeds.
(4) Tabular Model Designer Diagram View, which displays tables,
with relationships between them, in a graphical format.
(5) xVelocity In-memory Analytics Engine for tabular models.
(6) New trace events in multidimensional databases to help
troubleshoot lock-related query or processing problems.
(7) Unified business intelligence (BI) semantic modeling schema.
(8) Increased language support, which now includes MDX, DMX,
DAX, XML/A, and ASSL.
//******************************************************************************************************************
The Data Analysis Expressions (DAX)
language is a new formula language that you can use in
PowerPivot workbooks. DAX is not a subset of MDX, but a new formula
language that is considered an extension of the formula language in Excel. The
DAX statements operate against an in-memory relational data store, comprised of
tables and relationships in the PowerPivot workbook. You use DAX to create
custom measures and calculated columns.
*****************************************************************************************************************//
//******************************************************************************************************************
Data Mining Extensions (DMX) is a
language that you can use to create and work with data mining models in
Microsoft SQL Server Analysis Services. You can use DMX to create the
structure of new data mining models, to train these models, and to browse,
manage, and predict against them. DMX is composed of data definition language (DDL)
statements, data manipulation language (DML) statements, and functions and
operators.
*****************************************************************************************************************//
//******************************************************************************************************************
XMLA is an XML-based protocol used exclusively
to handle communication between clients and an SSAS database. It
is SOAP based and is designed to be a standard access point for any
multidimensional source. XMLA is the driving scripting language behind many of
the tasks within SSAS. Some of the scripting tasks that can accomplished by
XMLA include:
- Creating and changing objects
- Processing objects
- Handling connections
- Backup and restores
- Designing aggregations
- Merging partitions
XMLA contains two basic methods, Discover
and Execute. The Discover method retrieves lists of information in XML format,
while the Execute method sends commands to the SSAS instance for execution.
In summary, XMLA is the XML-based protocol used to communicate with an Analysis Server. It can be used to script Discover and Execute methods, including Create and Alter, Backup and Restore, and processing cube objects. These scripts are run in Management Studio and can be automated via PowerShell or SQL Server Agent.
*****************************************************************************************************************//
//******************************************************************************************************************
Microsoft SQL Server Analysis
Services client applications, including SQL Server Management Studio and
Business Intelligence Development Studio, communicate with Analysis Services by
using SOAP messages. Analysis Services Scripting Language (ASSL), which is the
XML dialect used for these messages, is made up of two parts:
- A Data Definition Language (DDL), or object
definition language, which defines and
describes an instance of Analysis Services, as well as the
databases and database objects that the instance contains. This object
definition language is the subject of this reference section.
- A command language that sends action commands, such as Create, Alter,
or Process, to an instance of Analysis Services. This command language is
discussed in the XML
for Analysis Reference (XMLA).
*****************************************************************************************************************//
8) SSIS
(1) A new deployment model for deploying projects to the
Integration Services server.
//******************************************************************************************************************
Project
Deployment Model
The deployment package contains everything
(packages and parameters) needed for deployment in a single file with an .ispac extension,
which streamlines the deployment process. The model also includes
project/package parameters, environments, environment variables, and environment
references.
Project
Parameters and Package Parameters
If
you are using the project deployment model, you can create project parameters
or package parameters. These parameters allow you to set the properties of
package components at package execution time and change the execution behavior.
The
basic difference between project parameters and package parameters is the
scope. A project parameter can be used in any package of the project whereas
the package level parameter is specific to the package where it has been
defined. The best part of these parameters is that you can mark any of them as
sensitive and it will be stored in an encrypted form in the catalog.
Environments
and Environment variables
An
environment (development, test or production) is a container for environment
variables which are used to apply different groups of values to the properties
of package components by means of environment reference during runtime.
An
environment reference is the mapping between an environment variable to pass a
value to a property of a package component. A project can have multiple
environment references, but a single instance of package execution can only use
a single environment reference. This means that when you are executing your
project/package you need to specify a single environment to use for that
execution instance.
When
defining a variable you can mark it sensitive and hence it will be stored in an
encrypted form and NULL will be returned if you query it using T-SQL.
Integration
Service Catalog
You
can create one Integration Services catalog per SQL Server instance. It stores
application data (deployed projects including packages, parameters and
environments) in a SQL Server database and uses SQL Server encryption to
encrypt sensitive data. When you create a catalog you need to provide a
password which will be used to create a database master key for encryption and
therefore it's recommended that you back up this database master key after
creating the catalog.
The
catalog uses SQLCLR (the .NET Common Language Runtime (CLR) hosted within SQL
Server), so you need to enable CLR on the SQL Server instance before creating a
catalog (the step is shown below). The catalog also stores
multiple versions of the deployed SSIS projects, and if required you can revert
to any of the deployed versions. It also stores details about the
operations performed on the catalog, such as project deployments with versions and
package executions, which you can monitor on the server. A
default job is provided for cleanup of operation data and can be controlled by
setting catalog properties.
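The step to enable CLR referenced above (a minimal sketch):
EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;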
Basic
Steps:
1. Create an Integration
Services Catalog
2. Create a SSIS project
with Project Deployment Model
3. Deploy the project to
Integration Services Catalog
4. Create Environments,
Environment variables
5. Set up environment
reference in the deployed project
6. Execute deployed
project/package using the environment for example either for TEST or PROD
7. Analyze the operations
performed on the Integration Services Catalog
8. Validate the deployed
project or package
9. Redeploy the project to
Integration Services Catalog
10. Analyze deployed project versions and restore to the
desired one
*****************************************************************************************************************//
(2) The ability to use
server environments to specify runtime values for packages contained in a
project.
(3) New views, stored procedures, and stored functions to help
in troubleshooting performance and data problems.
(4) The Flat File connection manager now supports parsing of
flat files with embedded qualifiers. Improved remapping of columns by the SSIS
Designer when a new data source is connected. (Columns are remapped based on
their name and data type rather than by lineage ID.)
(5) The Integration Services Merge and Merge Join
transformations are more robust and reliable. Integration Services now includes
the DQS Cleansing transformation that enables you to more easily and accurately
improve the quality of data.
(6) The ability to use SQL Server 2012’s Change Data Capture
technology from within Integration Services.
9) SSRS
Enhancements
SQL Server 2012 includes only a small number of
significant updates to the core SSRS platform, which is not surprising
considering the SSRS overhaul that came with SQL Server 2008 R2. The
enhancements provided in SQL Server 2012 include the following:
(1) An updated rendering extension for Microsoft Excel
2007-2010, supported both by Report Builder and the SSRS web rendering controls
(2) An updated rendering extension for Microsoft Word 2007-2010
(3) The ability to view web-based reports on touch-screen Apple
iOS 6 devices and on Microsoft Surface-based tablet devices
6. SQL Server 2014
I. New Features
1)
Encryption for Backups – New in 2014 (it’s encryption, not compression!)
//******************************************************************************************************************
Backup Encryption in SQL Server 2014
Starting in SQL Server 2014, SQL Server has the ability
to encrypt the data while creating a backup. By specifying the encryption
algorithm and the encryptor (a Certificate or Asymmetric Key) when creating a
backup, you can create an encrypted backup file. All storage destinations
(on-premises and Windows Azure storage) are supported. In addition, encryption
options can be configured for SQL Server Managed Backup to Windows Azure
operations, a new feature introduced in SQL Server 2014.
To encrypt during backup, you must specify an encryption
algorithm, and an encryptor to secure the encryption key. The following are the
supported encryption options:
- Encryption
Algorithm: The supported encryption algorithms are: AES 128, AES 192, AES
256, and Triple DES
- Encryptor:
A certificate or asymmetric Key
Caution: It is very important to back up the certificate or asymmetric key, and
preferably to a different location than the backup file it was used to
encrypt. Without the certificate or asymmetric key, you cannot restore the
backup, rendering the backup file unusable.
Restoring the encrypted backup: SQL Server restore does
not require any encryption parameters to be specified during restores. It does
require that the certificate or the asymmetric key used to encrypt the backup
file be available on the instance that you are restoring to. The user account
performing the restore must have VIEW DEFINITION permissions on the certificate
or key. If you are restoring the encrypted backup to a different instance, you
must make sure that the certificate is available on that instance.
If you are restoring a backup from a TDE encrypted
database, the TDE certificate should be available on the instance you are
restoring to.
- Encrypting
the database backups helps secure the data: SQL Server provides the option
to encrypt the backup data while creating a backup.
- Encryption
can also be used for databases that are encrypted using TDE.
- Encryption
is supported for backups done by SQL Server Managed Backup to Windows
Azure, which provides additional security for off-site backups.
- This
feature supports multiple encryption algorithms up to AES 256 bit. This
gives you the option to select an algorithm that aligns with your
requirements.
- You can
integrate encryption keys with Extended Key Management (EKM) providers.
The following are prerequisites for encrypting a backup:
- Create a
Database Master Key for the master database: The database master key is a
symmetric key that is used to protect the private keys of certificates and
asymmetric keys that are present in the database.
- Create a
certificate or asymmetric Key to use for backup encryption.
The following are restrictions that apply to the encryption
options:
- If you are
using asymmetric key to encrypt the backup data, only asymmetric keys
residing in the EKM provider are supported.
- SQL Server
Express and SQL Server Web do not support encryption during backup.
However restoring from an encrypted backup to an instance of SQL Server
Express or SQL Server Web is supported.
- Previous
versions of SQL Server cannot read encrypted backups.
- Appending
to an existing backup set option is not supported for encrypted backups.
To encrypt a backup or to restore from an encrypted
backup:
VIEW DEFINITION permission on the certificate or
asymmetric key that is used to encrypt the database backup.
The sections below provide a brief introduction to the
steps to encrypting the data during backup. For a complete walkthrough of the
different steps of encrypting your backup using Transact-SQL, see Create an
Encrypted Backup.
Using SQL Server Management Studio
You can encrypt a backup when creating the backup of a
database in any of the following dialog boxes:
- Back Up
Database (Backup Options page): on the Backup Options page, you can
select Encryption, and specify the encryption algorithm and the certificate
or asymmetric key to use for the encryption.
- Using
the Maintenance Plan Wizard: when you select a backup task, on the Options
tab of the Define Backup Task page, you can select Backup Encryption,
and specify the encryption algorithm and the certificate or key to use for
the encryption.
Using Transact SQL
Following is a sample Transact-SQL statement to encrypt
the backup file:
BACKUP DATABASE [MYTestDB]
TO DISK = N'C:\Program
Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\Backup\MyTestDB.bak'
WITH
COMPRESSION,
ENCRYPTION
(
ALGORITHM = AES_256,
SERVER CERTIFICATE = BackupEncryptCert
),
STATS = 10
GO
Using PowerShell
This example creates the encryption options and uses it
as a parameter value in Backup-SqlDatabase cmdlet to create an encrypted
backup.
C:\PS>$encryptionOption = New-SqlBackupEncryptionOption
-Algorithm Aes256 -EncryptorType ServerCertificate -EncryptorName
"BackupCert"
C:\PS>Backup-SqlDatabase -ServerInstance . -Database
"MyTestDB" -BackupFile "MyTestDB.bak" -CompressionOption On
-EncryptionOption $encryptionOption
Create a backup of the encryption certificate and keys to
a location other than your local machine where the instance is installed. To
account for disaster recovery scenarios, consider storing a backup of the
certificate or key to an off-site location. You cannot restore an encrypted
backup without the certificate used to encrypt the backup.
To restore an encrypted backup, the original certificate
used when the backup was taken with the matching thumbprint should be available
on the instance you are restoring to. Therefore, the certificate should not be
renewed on expiry or changed in any way. Renewal can result in updating the
certificate triggering the change of the thumbprint, therefore making the
certificate invalid for the backup file. The account performing the restore
should have VIEW DEFINITION permissions on the certificate or the asymmetric
key used to encrypt during backup.
Availability Group database backups are typically
performed on the preferred backup replica. If restoring a backup on a replica
other than where the backup was taken from, ensure that the original
certificate used for backup is available on the replica you are restoring to.
If the database is TDE enabled, choose different
certificates or asymmetric keys for encrypting the database and the backup to
increase security.
*****************************************************************************************************************//
2)
Memory-optimized tables – New, it was disk-based table in earlier versions
//******************************************************************************************************************
Use it when you have a plentiful amount
of memory and numerous multi-core processors.
All data is stored in memory and hence
unlike disk based tables, pages don't need to be brought into the buffer pool
or cache. For data persistence of memory optimized tables, the In-memory OLTP
engine creates a set of checkpoint files on a filestream
filegroup that keeps track of changes to the data in an append only mode and
uses this during recovery and the restore process.
In order to create a memory optimized
table, you need to create a database with a filestream filegroup (along with
the use of CONTAINS MEMORY_OPTIMIZED_DATA as shown below) or you can alter an
existing database to add a filestream
filegroup.
USE master
GO
CREATE DATABASE MemoryOptimizedTableDemoDB
ON
PRIMARY(
NAME = [MemoryOptimizedTableDemoDB_data],
FILENAME = 'c:\data\MemoryOptimizedTableDemoDB_data.mdf', SIZE = 1024MB
),
FILEGROUP [MemoryOptimizedTableDemoDB_MOTdata] CONTAINS MEMORY_OPTIMIZED_DATA
(
NAME = [MemoryOptimizedTableDemoDB_folder1],
FILENAME = 'c:\data\MemoryOptimizedTableDemoDB_folder1'
),
(
NAME = [MemoryOptimizedTableDemoDB_folder2],
FILENAME = 'c:\data\MemoryOptimizedTableDemoDB_folder2'
)
LOG ON (
NAME = [MemoryOptimizedTableDemoDB_log],
FILENAME = 'C:\log\MemoryOptimizedTableDemoDB_log.ldf', SIZE = 500MB
);
GO
CREATE TABLE [Customer](
[CustomerID] INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT =1000000),
[Name] NVARCHAR(250) COLLATE Latin1_General_100_BIN2 NOT NULL INDEX [IName] HASH WITH (BUCKET_COUNT = 1000000),
[CustomerSince] DATETIME NULL
)
WITH
(MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
SELECT OBJECT_ID('Customer')
SELECT name, description FROM sys.dm_os_loaded_modules
where description = 'XTP Native DLL'
*****************************************************************************************************************//
3)
Delayed Durability (New in 2014, returning control to the client before the
transaction log record is written to disk) vs. Fully Durable (returning control
to the client after the transaction log record is written to disk)
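//******************************************************************************************************************
A minimal sketch (the database and table names are illustrative; the database must
first allow delayed durability):
ALTER DATABASE SalesDB SET DELAYED_DURABILITY = ALLOWED;
GO
BEGIN TRANSACTION;
    INSERT INTO dbo.AuditLog (EventText) VALUES (N'example row');
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);
*****************************************************************************************************************//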
II. Enhancements
1)
Database Engine Feature Enhancements
(1) SQL Server Data Files in Windows Azure - enables native
database files stored as Windows Azure Blobs.
(2) Host a SQL Server Database in a Windows Azure Virtual
Machine – Deploying a SQL Server Database to a Windows Azure Virtual Machine.
When to use?
2)
Backup and Restore Enhancements
(1) SQL Server Backup to URL – backup to and restore from the Windows
Azure Blob storage service (URL) is now supported in the SSMS GUI; T-SQL, PowerShell, and SMO support was added in 2012 SP1 CU2
(2) SQL
Server Managed Backup to Windows Azure
·
For managing and
automating SQL Server backups to the Windows Azure Blob storage service;
·
At the db level or
instance level;
·
Use TSQL for the setup
of the service
3)
AlwaysOn Enhancements
(1) Migration to Windows Azure is made simpler with the Add
Azure Replica Wizard.
(2) The maximum number of secondary replicas is increased from 4
to 8.
(3) When disconnected from the primary replica or during cluster
quorum loss, readable secondary replicas now remain available for read
workloads.
(4) Failover cluster instances (FCIs) can now use Cluster Shared
Volumes (CSVs) as cluster shared disks in Windows Server 2012 and above.
(5) A new system function, sys.fn_hadr_is_primary_replica, and a
new DMV, sys.dm_io_cluster_valid_path_names, are
available.
(6) The following DMVs were enhanced and now return FCI
information: sys.dm_hadr_cluster, sys.dm_hadr_cluster_members, and sys.dm_hadr_cluster_networks.
4)
Partition Switching and Indexing – enhanced with additional partition switching
and index rebuild options
5)
Managing the lock priority of online operations – the WAIT_AT_LOW_PRIORITY
option lets you specify how long the rebuild process should wait for the
necessary locks, and what to do if the wait times out
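//******************************************************************************************************************
A minimal sketch (the index and table names are illustrative): wait at low priority
for 5 minutes, then kill the blocking sessions:
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders
REBUILD WITH (ONLINE = ON (
    WAIT_AT_LOW_PRIORITY (MAX_DURATION = 5 MINUTES, ABORT_AFTER_WAIT = BLOCKERS)));
*****************************************************************************************************************//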
6)
Columnstore indexes
(1) Updateable clustered columnstore indexes – clustered columnstore indexes are
new in 2014 and, unlike the read-only nonclustered columnstore indexes introduced
in 2012, are updatable
(2) SHOWPLAN displays information about columnstore indexes.
(3) Archival data compression - ALTER INDEX … REBUILD has a new
COLUMNSTORE_ARCHIVE data compression option that further compresses the
specified partitions of a columnstore index.
(4) Buffer pool extension - lets SQL Server use solid-state drives (SSDs) as a
nonvolatile extension of the buffer pool, supplementing the DRAM-based memory (RAM)
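//******************************************************************************************************************
A minimal sketch of enabling the buffer pool extension (the file path and size are
illustrative and should point to an SSD volume):
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
    (FILENAME = 'F:\SSDCACHE\BufferPool.BPE', SIZE = 32 GB);
*****************************************************************************************************************//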
7) New
Design for Cardinality Estimation
(1) Query plans enhancement using the New Design for Cardinality
Estimation
(2) Performance Testing and Tuning Recommendations
(3) New XEvents
(4) Examples of the new cardinality estimates
8)
Incremental Statistics – statistics can now be created and updated per partition in 2014
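//******************************************************************************************************************
A minimal sketch (assumes dbo.Orders is a partitioned table; names are illustrative):
CREATE STATISTICS StatOrderDate
ON dbo.Orders (OrderDate)
WITH INCREMENTAL = ON;
*****************************************************************************************************************//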
9)
Resource Governor enhancements for physical IO control
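//******************************************************************************************************************
A minimal sketch of the new I/O settings (the pool name and IOPS values are illustrative):
ALTER RESOURCE POOL ReportPool
WITH (MIN_IOPS_PER_VOLUME = 20, MAX_IOPS_PER_VOLUME = 200);
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
*****************************************************************************************************************//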
10)
Transact-SQL Enhancements
(1) Inline specification of CLUSTERED and NONCLUSTERED indexes – indexes can now
be declared inline in CREATE TABLE rather than with a separate CREATE INDEX
(see the sketch after this list)
(2) SELECT … INTO – can run in parallel now
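//******************************************************************************************************************
A minimal sketch of the inline index syntax from item (1) (table and index names
are illustrative):
CREATE TABLE dbo.OrderItems
(
    OrderItemID INT NOT NULL PRIMARY KEY CLUSTERED,
    OrderID     INT NOT NULL INDEX IX_OrderItems_OrderID NONCLUSTERED
);
*****************************************************************************************************************//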
11)
System Table Enhancements
(1) sys.xml_indexes - This catalog view (started in 2008) has 3
new columns: xml_index_type, xml_index_type_description, and path_id.
(2) sys.dm_exec_query_profiles – New in SQL 2014, for monitoring
real time query progress while the query is in execution.
(3) sys.column_store_row_groups – New in SQL 2014, providing
clustered columnstore index information on a per-segment basis to help the
administrator make system management decisions.
12)
Security Enhancements – three new server level permissions: CONNECT ANY
DATABASE, IMPERSONATE ANY LOGIN, SELECT ALL USER SECURABLES
(1) CONNECT ANY DATABASE Permission
(2) IMPERSONATE ANY LOGIN Permission
(3) SELECT ALL USER SECURABLES Permission
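//******************************************************************************************************************
A minimal sketch of granting the new server-level permissions (the login name is
illustrative):
GRANT CONNECT ANY DATABASE TO MonitorLogin;
GRANT SELECT ALL USER SECURABLES TO MonitorLogin;
*****************************************************************************************************************//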
13)
Deployment Enhancements- to Win Azure VM now
14)
Online index operation event class – two new columns added: PartitionId and
PartitionNumber
15)
Replications - None
16)
SSAS - PowerView
//*****************************************************************************************************************
Features recently added: Power View for Multidimensional Models (Power View for Tabular Model started in SQL2012 SP1 CU4)
Power
View used to be limited to tabular data. However, with SQL Server 2014, Power
View can now be used with multidimensional models (OLAP cubes) and can create a
variety of data visualizations including tables, matrices, bubble charts, and
geographical maps. Power View multidimensional models also support queries
using Data Analysis Expressions (DAX).
*****************************************************************************************************************//
17)
SSIS - None
18)
SSRS - None
(Sources: Microsoft SQL Server 2000/2005/2008 R2/2012
Unleashed, BOL, and others)
*******************************************************************************************
*******************************************************************************************
*The
following is from http://windowsitpro.com/sql-server-2014/top-ten-new-features-sql-server-2014*
*******************************************************************************************
*******************************************************************************************
Top Ten: New Features in SQL Server 2014
1.
In-Memory
OLTP Engine
SQL Server 2014 enables memory
optimization of selected tables and stored procedures. The In-Memory OLTP
engine is designed for high concurrency and uses a new optimistic concurrency
control mechanism to eliminate locking delays. Microsoft states that customers
can expect performance to be up to 20 times better than with SQL Server 2012
when using this new feature. For more information, check out “Rev Up
Application Performance with the In-Memory OLTP Engine.”
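As a rough sketch, a table is made memory-optimized at creation time; the table and column names here are illustrative, and the database needs a MEMORY_OPTIMIZED_DATA filegroup before this will run:
CREATE TABLE dbo.ShoppingCart
(
    CartID     int NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    CustomerID int NOT NULL,
    CreatedUtc datetime2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);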
2.
AlwaysOn
Enhancements
Microsoft
has enhanced AlwaysOn integration by expanding the maximum number of secondary
replicas from four to eight. Readable secondary replicas are now also available
for read workloads, even when the primary replica is unavailable. In addition,
SQL Server 2014 provides the new Add Azure Replica Wizard, which helps you
create asynchronous secondary replicas in Windows Azure.
3.
Buffer
Pool Extension
SQL
Server 2014 provides a new solid state disk (SSD) integration capability that
lets you use SSDs to expand the SQL Server 2014 Buffer Pool as nonvolatile RAM
(NvRAM). With the new Buffer Pool Extensions feature, you can use SSD drives to
expand the buffer pool in systems that have maxed out their memory. Buffer Pool
Extensions can provide performance gains for read-heavy OLTP workloads.
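Enabling the feature is a single server-level command; the file path and size below are placeholders:
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
    (FILENAME = 'F:\SSDCACHE\BufferPoolExt.BPE', SIZE = 64 GB);

-- Check the current state of the extension
SELECT path, state_description, current_size_in_kb
FROM sys.dm_os_buffer_pool_extension_configuration;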
4.
Updateable
Columnstore Indexes
When
Microsoft introduced the columnstore index in SQL Server 2012, it provided
improved performance for data warehousing queries. For some queries, the
columnstore indexes provided a tenfold performance improvement. However, to
utilize the columnstore index, the underlying table had to be read-only. SQL
Server 2014 eliminates this restriction with the new updateable Columnstore
Index. The SQL Server 2014 Columnstore Index must use all the columns in the
table and can’t be combined with other indexes.
5.
Storage
I/O control
The
Resource Governor lets you limit the amount of CPU and memory that a given
workload can consume. SQL Server 2014 extends the reach of the Resource
Governor to manage storage I/O usage as well. The SQL Server 2014 Resource
Governor can limit the physical I/Os issued for user threads in a given
resource pool.
6.
Power
View for Multidimensional Models
Power
View used to be limited to tabular data. However, with SQL Server 2014, Power
View can now be used with multidimensional models (OLAP cubes) and can create a
variety of data visualizations including tables, matrices, bubble charts, and
geographical maps. Power View multidimensional models also support queries
using Data Analysis Expressions (DAX).
7.
Power
BI for Office 365 Integration
Power
BI for Office 365 is a cloud-based business intelligence (BI) solution that
provides data navigation and visualization capabilities. Power BI for Office
365 includes Power Query (formerly code-named Data Explorer), Power Map
(formerly code-named GeoFlow), Power Pivot, and Power View. You can learn more about
Power BI at Microsoft’s Power BI for Office 365 site.
8.
SQL
Server Data Tools for Business Intelligence
The
new SQL Server Data Tools for BI (SSDT-BI) is used to create SQL Server
Analysis Services (SSAS) models, SSRS reports, and SSIS packages. The new
SSDT-BI supports SSAS and SSRS for SQL Server 2014 and earlier, but SSIS
projects are limited to SQL Server 2014. In the pre-release version of SQL
Server 2014, SQL Server Setup doesn’t install SSDT-BI. Instead, you must
download SSDT-BI separately from the Microsoft Download Center.
9.
Backup
Encryption
One
welcome addition to SQL Server 2014 is the ability to encrypt database backups
for at-rest data protection. SQL Server 2014 supports several encryption
algorithms, including Advanced Encryption Standard (AES) 128, AES 192, AES 256,
and Triple DES. You must use a certificate or an asymmetric key to perform
encryption for SQL Server 2014 backups.
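A quick sketch, assuming a database master key and a certificate named BackupCert already exist in the master database (the backup path is a placeholder):
BACKUP DATABASE AdventureWorks2014
TO DISK = N'D:\Backups\AdventureWorks2014_encrypted.bak'
WITH COMPRESSION,
     ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupCert);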
10.
SQL
Server Managed Backup to Windows Azure
SQL
Server 2014’s native backup supports Windows Azure integration. Although I’m
not entirely convinced that I would want to depend on an Internet connection to
restore my backups, on-premises SQL Server 2014 and Windows Azure virtual
machine (VM) instances support backing up to Windows Azure storage. The Windows
Azure backup integration is also fully built into SQL Server Management Studio
(SSMS).