Monday, March 18, 2013

Create Indexed Views


http://msdn.microsoft.com/en-us/library/ms191432.aspx

This topic describes how to create an indexed view in SQL Server 2012 by using Transact-SQL. The first index created on a view must be a unique clustered index. After the unique clustered index has been created, you can create nonclustered indexes. Creating a unique clustered index on a view improves query performance because the view is stored in the database in the same way a table with a clustered index is stored. The query optimizer may use indexed views to speed up the query execution. The view does not have to be referenced in the query for the optimizer to consider that view for a substitution.
The following steps are required to create an indexed view and are critical to the successful implementation of the indexed view:
  1. Verify the SET options are correct for all existing tables that will be referenced in the view.
  2. Verify that the SET options for the session are set correctly before creating any new tables and the view.
  3. Verify that the view definition is deterministic.
  4. Create the view by using the WITH SCHEMABINDING option.
  5. Create the unique clustered index on the view.

Required SET Options for Indexed Views

Evaluating the same expression can produce different results in the Database Engine when different SET options are active when the query is executed. For example, after the SET option CONCAT_NULL_YIELDS_NULL is set to ON, the expression 'abc' + NULL returns the value NULL. However, after CONCAT_NULL_YIELDS_NULL is set to OFF, the same expression produces 'abc'.
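A quick illustration of that behavior (not part of the original topic):

SET CONCAT_NULL_YIELDS_NULL ON;
SELECT 'abc' + NULL AS Result;   -- returns NULL

SET CONCAT_NULL_YIELDS_NULL OFF;
SELECT 'abc' + NULL AS Result;   -- returns 'abc'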
To make sure that the views can be maintained correctly and return consistent results, indexed views require fixed values for several SET options. The SET options in the following table must be set to the values shown in the Required value column whenever the following conditions occur:
  • The view and subsequent indexes on the view are created.
  • The base tables referenced in the view are created.
  • There is any insert, update, or delete operation performed on any table that participates in the indexed view. This requirement includes operations such as bulk copy, replication, and distributed queries.
  • The indexed view is used by the query optimizer to produce the query plan.
    SET options               Required value   Default server value   Default OLE DB and ODBC value   Default DB-Library value
    ANSI_NULLS                ON               ON                     ON                              OFF
    ANSI_PADDING              ON               ON                     ON                              OFF
    ANSI_WARNINGS*            ON               ON                     ON                              OFF
    ARITHABORT                ON               ON                     OFF                             OFF
    CONCAT_NULL_YIELDS_NULL   ON               ON                     ON                              OFF
    NUMERIC_ROUNDABORT        OFF              OFF                    OFF                             OFF
    QUOTED_IDENTIFIER         ON               ON                     ON                              OFF
    *Setting ANSI_WARNINGS to ON implicitly sets ARITHABORT to ON when the database compatibility level is set to 90 or higher.
If you are using an OLE DB or ODBC server connection, the only value that must be modified is the ARITHABORT setting. All DB-Library values must be set correctly either at the server level by using sp_configure or from the application by using the SET command.
Important:
We strongly recommend that you set the ARITHABORT user option to ON server-wide as soon as the first indexed view or index on a computed column is created in any database on the server.
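One way to do that (a sketch, not from the original topic; it assumes you want to flip only the ARITHABORT bit and keep the other default user options unchanged) is to turn on the ARITHABORT bit (64) in the 'user options' server configuration:

DECLARE @options int;

-- Read the current server-wide default user options and turn on the ARITHABORT bit (64).
SELECT @options = CONVERT(int, value_in_use) | 64
FROM sys.configurations
WHERE name = 'user options';

EXEC sp_configure 'user options', @options;
RECONFIGURE;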

Deterministic Views

The definition of an indexed view must be deterministic. A view is deterministic if all expressions in the select list, as well as the WHERE and GROUP BY clauses, are deterministic. Deterministic expressions always return the same result any time they are evaluated with a specific set of input values. Only deterministic functions can participate in deterministic expressions. For example, the DATEADD function is deterministic because it always returns the same result for any given set of argument values for its three parameters. GETDATE is not deterministic because it is always invoked with the same argument, but the value it returns changes each time it is executed.
To determine whether a view column is deterministic, use the IsDeterministic property of the COLUMNPROPERTY function. To determine whether a deterministic column in a view with schema binding is precise, use the IsPrecise property of the COLUMNPROPERTY function. COLUMNPROPERTY returns 1 if TRUE, 0 if FALSE, and NULL for input that is not valid, which means that the column is not deterministic or not precise.
Even if an expression is deterministic, if it contains float expressions, the exact result may depend on the processor architecture or version of microcode. To ensure data integrity, such expressions can participate only as nonkey columns of indexed views. Deterministic expressions that do not contain float expressions are called precise. Only precise deterministic expressions can participate in key columns and in WHERE or GROUP BY clauses of indexed views.
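For example, assuming the schema-bound view Sales.vOrders created in the example at the end of this topic, the two properties can be checked like this:

-- 1 = TRUE, 0 = FALSE, NULL = the input is not valid
SELECT
    COLUMNPROPERTY(OBJECT_ID('Sales.vOrders'), 'Revenue', 'IsDeterministic') AS IsDeterministic,
    COLUMNPROPERTY(OBJECT_ID('Sales.vOrders'), 'Revenue', 'IsPrecise') AS IsPrecise;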

Additional Requirements

In addition to the SET options and deterministic function requirements, the following requirements must be met:
  • The user that executes CREATE INDEX must be the owner of the view.
  • When you create the index, the IGNORE_DUP_KEY option must be set to OFF (the default setting).
  • Tables must be referenced by two-part names, schema.tablename in the view definition.
  • User-defined functions referenced in the view must be created by using the WITH SCHEMABINDING option.
  • Any user-defined functions referenced in the view must be referenced by two-part names, schema.function.
  • The data access property of a user-defined function must be NO SQL, and external access property must be NO.
  • Common language runtime (CLR) functions can appear in the select list of the view, but cannot be part of the definition of the clustered index key. CLR functions cannot appear in the WHERE clause of the view or the ON clause of a JOIN operation in the view.
  • CLR functions and methods of CLR user-defined types used in the view definition must have the properties set as shown in the following table.
    Property                Note
    DETERMINISTIC = TRUE    Must be declared explicitly as an attribute of the Microsoft .NET Framework method.
    PRECISE = TRUE          Must be declared explicitly as an attribute of the .NET Framework method.
    DATA ACCESS = NO SQL    Determined by setting the DataAccess attribute to DataAccessKind.None and the SystemDataAccess attribute to SystemDataAccessKind.None.
    EXTERNAL ACCESS = NO    This property defaults to NO for CLR routines.
  • The view must be created by using the WITH SCHEMABINDING option.
  • The view must reference only base tables that are in the same database as the view. The view cannot reference other views.
  • The SELECT statement in the view definition must not contain the following Transact-SQL elements:
    COUNT
    ROWSET functions (OPENDATASOURCE, OPENQUERY, OPENROWSET, and OPENXML)
    OUTER joins (LEFT, RIGHT, or FULL)
    Derived table (defined by specifying a SELECT statement in the FROM clause)
    Self-joins
    Specifying columns by using SELECT * or SELECT table_name.*
    DISTINCT
    STDEV, STDEVP, VAR, VARP, or AVG
    Common table expression (CTE)
    float*, text, ntext, or image columns
    Subquery
    OVER clause, which includes ranking or aggregate window functions
    Full-text predicates (CONTAINS, FREETEXT)
    SUM function that references a nullable expression
    ORDER BY
    CLR user-defined aggregate function
    TOP
    CUBE, ROLLUP, or GROUPING SETS operators
    MIN, MAX
    UNION, EXCEPT, or INTERSECT operators
    TABLESAMPLE
    Table variables
    OUTER APPLY or CROSS APPLY
    PIVOT, UNPIVOT
    Sparse column sets
    Inline or multi-statement table-valued functions
    OFFSET
    CHECKSUM_AGG
    *The indexed view can contain float columns; however, such columns cannot be included in the clustered index key.
  • If GROUP BY is present, the VIEW definition must contain COUNT_BIG(*) and must not contain HAVING. These GROUP BY restrictions are applicable only to the indexed view definition. A query can use an indexed view in its execution plan even if it does not satisfy these GROUP BY restrictions.
  • If the view definition contains a GROUP BY clause, the key of the unique clustered index can reference only the columns specified in the GROUP BY clause.

Recommendations

When you refer to datetime and smalldatetime string literals in indexed views, we recommend that you explicitly convert the literal to the date type you want by using a deterministic date format style. For a list of the date format styles that are deterministic, see CAST and CONVERT (Transact-SQL). Expressions that involve implicit conversion of character strings to datetime or smalldatetime are considered nondeterministic. This is because the results depend on the LANGUAGE and DATEFORMAT settings of the server session. For example, the results of the expression CONVERT (datetime, '30 listopad 1996', 113) depend on the LANGUAGE setting because the string 'listopad' means different months in different languages. Similarly, in the expression DATEADD(mm,3,'2000-12-01'), SQL Server interprets the string '2000-12-01' based on the DATEFORMAT setting.
Implicit conversion of non-Unicode character data between collations is also considered nondeterministic.
Creating indexes on views that contain these implicit conversion expressions is not allowed in compatibility level 90 and higher. However, existing views that contain these expressions from an upgraded database are maintainable. If you use indexed views that contain implicit string to date conversions, be certain that the LANGUAGE and DATEFORMAT settings are consistent in your databases and applications to avoid possible indexed view corruption.
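As a quick illustration (not from the original topic), spell the conversion out with a deterministic style instead of relying on an implicit conversion:

-- Deterministic: style 112 (yyyymmdd) does not depend on LANGUAGE or DATEFORMAT,
-- so an expression like this is safe inside an indexed view definition.
SELECT CONVERT(datetime, '19961130', 112) AS FixedDate;

-- Nondeterministic: style 113 depends on the session LANGUAGE
-- ('listopad' is a different month in Polish than in Croatian).
-- SELECT CONVERT(datetime, '30 listopad 1996', 113);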

Considerations

The setting of the large_value_types_out_of_row option of columns in an indexed view is inherited from the setting of the corresponding column in the base table. This value is set by using sp_tableoption. The default setting for columns formed from expressions is 0. This means that large value types are stored in-row.
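If the base table needs a different setting, it is changed with sp_tableoption on the base table (a sketch; the table name is only an example):

-- Current setting on the base table (1 = stored out of row, 0 = stored in row).
SELECT name, large_value_types_out_of_row
FROM sys.tables
WHERE object_id = OBJECT_ID('Sales.SalesOrderDetail');

-- Change it on the base table; indexed view columns inherit the setting.
EXEC sp_tableoption 'Sales.SalesOrderDetail', 'large value types out of row', 1;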
Indexed views can be created on a partitioned table, and can themselves be partitioned.
To prevent the Database Engine from using indexed views, include the OPTION (EXPAND VIEWS) hint on the query. Also, if any of the listed options are incorrectly set, this will prevent the optimizer from using the indexes on the views. For more information about the OPTION (EXPAND VIEWS) hint, see SELECT (Transact-SQL).
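For example, with the Sales.vOrders view created in the example at the end of this topic (a sketch):

-- EXPAND VIEWS: the optimizer expands the view to its base tables and
-- does not consider the indexed view for this query.
SELECT OrderDate, ProductID, Revenue
FROM Sales.vOrders
OPTION (EXPAND VIEWS);

-- NOEXPAND: the opposite hint forces the view's clustered index to be used.
SELECT OrderDate, ProductID, Revenue
FROM Sales.vOrders WITH (NOEXPAND);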
All indexes on a view are dropped when the view is dropped. All nonclustered indexes and auto-created statistics on the view are dropped when the clustered index is dropped. User-created statistics on the view are maintained. Nonclustered indexes can be individually dropped. Dropping the clustered index on the view removes the stored result set, and the optimizer returns to processing the view like a standard view.
Indexes on tables and views can be disabled. When a clustered index on a table is disabled, indexes on views associated with the table are also disabled.
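For example, with the IDX_V1 index created in the example at the end of this topic (a sketch):

-- Disabling the clustered index discards the view's stored result set.
ALTER INDEX IDX_V1 ON Sales.vOrders DISABLE;

-- Rebuilding it re-materializes the view and re-enables the index.
ALTER INDEX IDX_V1 ON Sales.vOrders REBUILD;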

Security

Permissions

Requires CREATE VIEW permission in the database and ALTER permission on the schema in which the view is being created.

To create an indexed view

  1. In Object Explorer, connect to an instance of Database Engine.
  2. On the Standard bar, click New Query.
  3. Copy and paste the following example into the query window and click Execute. The example creates a view and an index on that view. Two queries are included that use the indexed view.
    USE AdventureWorks2012;
    GO
    --Set the options to support indexed views.
    SET NUMERIC_ROUNDABORT OFF;
    SET ANSI_PADDING, ANSI_WARNINGS, CONCAT_NULL_YIELDS_NULL, ARITHABORT,
        QUOTED_IDENTIFIER, ANSI_NULLS ON;
    GO
    --Create view with schemabinding.
    IF OBJECT_ID ('Sales.vOrders', 'view') IS NOT NULL
    DROP VIEW Sales.vOrders ;
    GO
    CREATE VIEW Sales.vOrders
    WITH SCHEMABINDING
    AS
        SELECT SUM(UnitPrice*OrderQty*(1.00-UnitPriceDiscount)) AS Revenue,
            OrderDate, ProductID, COUNT_BIG(*) AS COUNT
        FROM Sales.SalesOrderDetail AS od, Sales.SalesOrderHeader AS o
        WHERE od.SalesOrderID = o.SalesOrderID
        GROUP BY OrderDate, ProductID;
    GO
    --Create an index on the view.
    CREATE UNIQUE CLUSTERED INDEX IDX_V1 
        ON Sales.vOrders (OrderDate, ProductID);
    GO
    --This query can use the indexed view even though the view is 
    --not specified in the FROM clause.
    SELECT SUM(UnitPrice*OrderQty*(1.00-UnitPriceDiscount)) AS Rev, 
        OrderDate, ProductID
    FROM Sales.SalesOrderDetail AS od
        JOIN Sales.SalesOrderHeader AS o ON od.SalesOrderID=o.SalesOrderID
            AND ProductID BETWEEN 700 and 800
            AND OrderDate >= CONVERT(datetime,'05/01/2002',101)
    GROUP BY OrderDate, ProductID
    ORDER BY Rev DESC;
    GO
    --This query can use the above indexed view.
    SELECT  OrderDate, SUM(UnitPrice*OrderQty*(1.00-UnitPriceDiscount)) AS Rev
    FROM Sales.SalesOrderDetail AS od
        JOIN Sales.SalesOrderHeader AS o ON od.SalesOrderID=o.SalesOrderID
            AND DATEPART(mm,OrderDate)= 3
            AND DATEPART(yy,OrderDate) = 2002
    GROUP BY OrderDate
    ORDER BY OrderDate ASC;
    GO
    

Saturday, February 16, 2013

Paging Through Results




After implementing a pipelined top-N query to retrieve the first page efficiently, you will often also need another query to fetch the next pages. The resulting challenge is that it has to skip the rows from the previous pages. There are two different methods to meet this challenge: firstly the offset method, which numbers the rows from the beginning and uses a filter on this row number to discard the rows before the requested page. The second method, which I call the seek method, searches the last entry of the previous page and fetches only the following rows.
The following examples show the more widely used offset method. Its main advantage is that it is very easy to handle—especially with databases that have a dedicated keyword for it (offset). This keyword was even taken into the SQL standard as part of the fetch first extension.
SQL Server does not have an “offset” extension for its proprietary top clause but introduced the fetch first extension with SQL Server 2012 (“Denali”). The offset clause is mandatory although the standard defines it as an optional addendum.
SELECT *
  FROM sales
 ORDER BY sale_date DESC
OFFSET 10 ROWS
 FETCH NEXT 10 ROWS ONLY;
Besides the simplicity, another advantage of this method is that you just need the row offset to fetch an arbitrary page. Nevertheless, the database must count all rows from the beginning until it reaches the requested page. Figure 7.2 shows that the scanned index range becomes greater when fetching more pages.

Figure 7.2. Access Using the Offset Method


This has two disadvantages: (1) the pages drift when inserting new sales because the numbering is always done from scratch; (2) the response time increases when browsing further back.
The seek method avoids both problems because it uses the values of the previous page as a delimiter. That means it searches for the values that must come behind the last entry from the previous page. This can be expressed with a simple where clause. To put it the other way around: the seek method simply doesn’t select already shown values.
The next example shows the seek method. For the sake of demonstration, we will start with the assumption that there is only one sale per day. This makes the SALE_DATE a unique key. To select the sales that must come behind a particular date you must use a less than condition (<) because of the descending sort order. For an ascending order, you would have to use a greater than (>) condition. The fetch first clause is just used to limit the result to ten rows.
SELECT *
  FROM sales
 WHERE sale_date < ?
 ORDER BY sale_date DESC
 FETCH FIRST 10 ROWS ONLY;
Instead of a row number, you use the last value of the previous page to specify the lower bound. This has a huge benefit in terms of performance because the database can use the SALE_DATE < ? condition for index access. That means that the database can truly skip the rows from the previous pages. On top of that, you will also get stable results if new rows are inserted.
Nevertheless, this method does not work if there is more than one sale per day—as shown in Figure 7.2—because using the last date from the first page (“yesterday”) skips all results from yesterday—not just the ones already shown on the first page. The problem is that the order by clause does not establish a deterministic row sequence. That is, however, prerequisite to using a simple range condition for the page breaks.
Without a deterministic order by clause, the database by definition does not deliver a deterministic row sequence. The only reason you usually get a consistent row sequence is that the database usually executes the query in the same way. Nevertheless, the database could in fact shuffle the rows having the same SALE_DATE and still fulfill the order by clause. In recent releases it might indeed happen that you get the result in a different order every time you run the query, not because the database shuffles the result intentionally but because the database might utilize parallel query execution. That means that the same execution plan can result in a different row sequence because the executing threads finish in a non-deterministic order.

Important

Paging requires a deterministic sort order.
Even if the functional specifications only require sorting “by date, latest first”, we as the developers must make sure the order by clause yields a deterministic row sequence. For this purpose, we might need to extend the order by clause with arbitrary columns just to make sure we get a deterministic row sequence. If the index that is used for the pipelined order by has additional columns, it is a good start to add them to the order by clause so we can continue using this index for the pipelined order by. If this still does not yield a deterministic sort order, just add any unique column(s) and extend the index accordingly.
In the following example, we extend the order by clause and the index with the primary key SALE_ID to get a deterministic row sequence. Furthermore, we must apply the “comes after” logic to both columns together to get the desired result:
CREATE INDEX sl_dtid ON sales (sale_date, sale_id);

SELECT *
  FROM sales
 WHERE (sale_date, sale_id) < (?, ?)
 ORDER BY sale_date DESC, sale_id DESC
 FETCH FIRST 10 ROWS ONLY;
The where clause uses the little-known “row values” syntax (see the box entitled “SQL Row Values”). It combines multiple values into a logical unit that is applicable to the regular comparison operators. As with scalar values, the less-than condition corresponds to “comes after” when sorting in descending order. That means the query considers only the sales that come after the given SALE_DATE, SALE_ID pair.

SQL Row Values

Even though the row values syntax is part of the SQL standard, only a few databases support it. SQL Server 2012 (“Denali”) does not support row values at all. The Oracle database supports row values in principle, but cannot apply range operators on them (ORA-01796). MySQL evaluates row value expressions correctly but cannot use them as an access predicate during an index access. PostgreSQL, however, supports the row values syntax and uses it for index access if a corresponding index is available.
Nevertheless it is possible to use an approximated variant of the seek method with databases that do not properly support the row values—even though the approximation is not as elegant and efficient as row values in PostgreSQL. For this approximation, we must use “regular” comparisons to express the required logic as shown in this Oracle example:
SELECT *
  FROM ( SELECT *
           FROM sales
          WHERE sale_date <= ?
            AND NOT (sale_date = ? AND sale_id >= ?)
          ORDER BY sale_date DESC, sale_id DESC
       )
 WHERE rownum <= 10;
The where clause consists of two parts. The first part considers the SALE_DATE only and uses a less than or equal to (<=) condition—it selects more rows than needed. This part of the where clause is simple enough so that all databases can use it to access the index. The second part of the where clause removes the excess rows that were already shown on the previous page. The box entitled “Indexing Equivalent Logic” explains why the where clause is expressed this way.

Indexing Equivalent Logic

The execution plan shows that the database uses the first part of the where clause as access predicate.
---------------------------------------------------------------
|Id | Operation                      | Name    |  Rows | Cost |
---------------------------------------------------------------
| 0 | SELECT STATEMENT               |         |    10 |    4 |
|*1 |  COUNT STOPKEY                 |         |       |      |
| 2 |   VIEW                         |         |    10 |    4 |
| 3 |    TABLE ACCESS BY INDEX ROWID | SALES   | 50218 |    4 |
|*4 |     INDEX RANGE SCAN DESCENDING| SL_DTID |     2 |    3 |
---------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(ROWNUM<=10)
   4 - access("SALE_DATE"<=:SALE_DATE)
       filter("SALE_DATE"<>:SALE_DATE
           OR "SALE_ID"<TO_NUMBER(:SALE_ID))
The access predicate on SALE_DATE enables the database to skip over the days that were fully shown on previous pages. The second part of the where clause is a filter predicate only. That means that the database reads a few rows from the previous page, but drops them immediately. Figure 7.3 shows the respective access path.

Figure 7.3. Access Using the Seek Method


Figure 7.4 compares the performance characteristics of the offset and the seek methods. The accuracy of measurement is insufficient to see the difference on the left-hand side of the chart, but the difference is clearly visible from about page 20 onwards.

Figure 7.4. Scalability when Fetching the Next Page


Of course the seek method has drawbacks as well, the difficulty in handling it being the most important one. You not only have to phrase the where clause very carefully—you also cannot fetch arbitrary pages. Moreover you need to reverse all comparison and sort operations to change the browsing direction. Precisely these two functions—skipping pages and browsing backwards—are not needed when using an infinite scrolling mechanism for the user interface.
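To illustrate the last point, here is a sketch (not from the original text) of browsing backwards with the row-values variant of the seek method: both the comparison and the sort direction are reversed, and an outer query restores the display order:

SELECT *
  FROM ( SELECT *
           FROM sales
          WHERE (sale_date, sale_id) > (?, ?)
          ORDER BY sale_date ASC, sale_id ASC
          FETCH FIRST 10 ROWS ONLY
       ) previous_page
 ORDER BY sale_date DESC, sale_id DESC;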

 http://use-the-index-luke.com/sql/partial-results/fetch-next-page 

Tuesday, February 12, 2013

TSQL LIKE MULTIPLE WORD


--1: single pattern; matches rows where TEXT1 appears somewhere before TEXT2
SELECT * FROM dbo.YourTable WHERE YourField LIKE '%TEXT1%TEXT2%';
-------------
--2: split the filter string into words and match each word with its own LIKE
--   (uses the dbo.SplitString function from the post below)
DECLARE @YourFilterString nvarchar(100) = 'TEXT1 TEXT2';

SELECT part
INTO #FilterTable
FROM dbo.SplitString(@YourFilterString, N' ');

-- Note: the join returns a row once per matching word.
SELECT t.*
FROM dbo.YourTable AS t
    INNER JOIN #FilterTable AS f ON t.YourField LIKE '%' + f.part + '%';

DROP TABLE #FilterTable;
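Note that variant --1 only matches rows where TEXT1 appears before TEXT2. If the word order should not matter, a third variant (not in the original snippet) is to AND one LIKE per word:

--3: match both words regardless of order
SELECT * FROM dbo.YourTable WHERE YourField LIKE '%TEXT1%' AND YourField LIKE '%TEXT2%';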

Monday, February 11, 2013

TSQL SPLIT FUNCTION


-- Splits @sString on @cDelimiter and returns one row per part
-- (empty parts are returned as NULL rows).
CREATE FUNCTION [dbo].[SplitString]
(
    @sString nvarchar(2048),
    @cDelimiter nchar(1)
)
RETURNS @tParts TABLE ( part nvarchar(2048) )
AS
BEGIN
    IF @sString IS NULL RETURN;

    DECLARE @iStart int,
            @iPos int;

    IF SUBSTRING(@sString, 1, 1) = @cDelimiter
    BEGIN
        SET @iStart = 2;
        INSERT INTO @tParts VALUES (NULL);
    END
    ELSE
        SET @iStart = 1;

    WHILE 1 = 1
    BEGIN
        SET @iPos = CHARINDEX(@cDelimiter, @sString, @iStart);
        IF @iPos = 0
            SET @iPos = LEN(@sString) + 1;

        IF @iPos - @iStart > 0
            INSERT INTO @tParts VALUES (SUBSTRING(@sString, @iStart, @iPos - @iStart));
        ELSE
            INSERT INTO @tParts VALUES (NULL);

        SET @iStart = @iPos + 1;
        IF @iStart > LEN(@sString)
            BREAK;
    END

    RETURN;
END
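A quick usage example (not part of the original post):

-- Returns three rows: alpha, beta, gamma
SELECT part
FROM dbo.SplitString(N'alpha beta gamma', N' ');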

Thursday, January 17, 2013

TSQL List Indexes for Specific Table


USE AdventureWorksDW2008R2
GO

SELECT
            so.name AS TableName
            , si.name AS IndexName
            , si.type_desc AS IndexType
FROM
            sys.indexes si
            JOIN sys.objects so ON si.[object_id] = so.[object_id]
WHERE
            so.type = 'U'    --Only get indexes for User Created Tables
            AND si.name IS NOT NULL
            AND so.name = 'FactInternetSales'
ORDER BY
            so.name, si.type 

Monday, January 14, 2013

Using CROSS APPLY to optimize joins on BETWEEN conditions


Recently I encountered a case where I knew much more about the data than the optimizer did. Originally the performance was horrible, which is why I had to have a look at the query in the first place. When I was able to share my knowledge with the optimizer, it produced a better plan, and the query ran dramatically faster.

The slow query

The following tables store one-minute commercials, one for every minute of one year, and customer calls, one call per minute, for the same year. The scripts that populate the tables with test data are provided at the end of this post. Here are the tables:

CREATE TABLE dbo.Commercials(
  StartedAt DATETIME NOT NULL CONSTRAINT PK_Commercials PRIMARY KEY,
  EndedAt DATETIME NOT NULL,
  CommercialName VARCHAR(30) NOT NULL);
GO
CREATE TABLE dbo.Calls(
  CallID INT NOT NULL CONSTRAINT PK_Calls PRIMARY KEY,
  AirTime DATETIME NOT NULL,
  SomeInfo CHAR(300));
GO
CREATE UNIQUE INDEX Calls_AirTime
  ON dbo.Calls(AirTime) INCLUDE(SomeInfo);
GO

Every commercial in my table lasts for at most one minute, and commercials do not overlap. I can easily enforce both conditions with constraints (see "Storing intervals of time with no overlaps"), which are omitted in this post just to keep it simple.
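As a minimal sketch (not from the original post), the one-minute rule alone could look like this; the no-overlap rule needs the more involved technique referenced above and is likewise omitted:

-- Every commercial lasts more than zero and at most one minute.
ALTER TABLE dbo.Commercials ADD CONSTRAINT CK_Commercials_MaxOneMinute
  CHECK (EndedAt > StartedAt AND EndedAt <= DATEADD(minute, 1, StartedAt));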
The following query retrieves only 181 rows, and it runs very slowly:
SELECT s.StartedAt, s.EndedAt, c.AirTime
FROM dbo.Commercials s JOIN dbo.Calls c
  ON c.AirTime >= s.StartedAt AND c.AirTime < s.EndedAt
WHERE c.AirTime BETWEEN '20080701' AND '20080701 03:00'
  
 Table 'Calls'. Scan count 1, logical reads 11, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 2, logical reads 3338264, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Commercials'. Scan count 2, logical reads 7166, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times:
   CPU time = 71704 ms,  elapsed time = 36316 ms.
Why is it so slow? I haven't mastered the fine art of adding images to my posts yet, so I have to explain verbally. For every call the DB engine scans all the commercials which begin before the time of the call, which is expensive. The reason is simple: the optimizer does not know that the commercials are short, and that the commercials do not overlap, so it must scan all the potential matches, which are all the commercials which begin before the time of the call.

Using CROSS APPLY to tell the optimizer that commercials do not overlap.

 Because commercials do not overlap, we need at most one match. Translating this information into plain SQL is easy, and the query runs dramatically faster:

SELECT s.StartedAt, s.EndedAt, c.AirTime
FROM dbo.Calls c CROSS APPLY(
  SELECT TOP 1 s.StartedAt, s.EndedAt
  FROM dbo.Commercials s
  WHERE c.AirTime >= s.StartedAt AND c.AirTime < s.EndedAt
  ORDER BY s.StartedAt DESC) AS s
WHERE c.AirTime BETWEEN '20080701' AND '20080701 03:00'


Table 'Commercials'. Scan count 181, logical reads 1327, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Calls'. Scan count 1, logical reads 11, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times:
   CPU time = 31 ms,  elapsed time = 31 ms.

Note: if you needed only one column from the Commercials table, you could easily use just a subquery. Because more than one column is needed, CROSS APPLY is a better, more performant choice without redundant code.
Note: if you rely on the assumption that the commercials do not overlap, you have to enforce that business rule in the database. Also, to make sure that you don't forget that your query relies on that assumption, use a unit test to document it.

Using another range condition to tell the optimizer that commercials are short.

Because commercials are short, there is no need to scan commercials that start more than the maximum commercial length before the call. Again, translating this information into plain SQL is quite easy, and again the query runs much faster, even faster than the previous one:

SELECT s.StartedAt, s.EndedAt, c.AirTime
FROM dbo.Commercials s JOIN dbo.Calls c
  ON c.AirTime >= s.StartedAt AND c.AirTime < s.EndedAt
WHERE c.AirTime BETWEEN '20080701' AND '20080701 03:00'
  AND s.StartedAt BETWEEN '20080630 23:45' AND '20080701 03:00'
 

Table 'Worktable'. Scan count 1, logical reads 753, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Calls'. Scan count 1, logical reads 11, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Commercials'. Scan count 1, logical reads 4, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times:
   CPU time = 31 ms,  elapsed time = 24 ms.

Note: if you rely on the assumption that the commercials are short, you have to enforce that business rule in the database as well. Again, use a unit test to document that your query relies on this assumption.


Setting up tables and test data

CREATE TABLE dbo.Numbers(n INT NOT NULL PRIMARY KEY);
GO
DECLARE @i INT;
SET @i = 1;
INSERT INTO dbo.Numbers(n) SELECT 1;
WHILE @i < 1024000 BEGIN
  INSERT INTO dbo.Numbers(n)
    SELECT n + @i FROM dbo.Numbers;
  SET @i = @i * 2;
END;
GO
INSERT INTO dbo.Commercials(StartedAt, EndedAt, CommercialName)
  SELECT DATEADD(minute, n - 1, '20080101')
    ,DATEADD(minute, n, '20080101')
    ,'Show #' + CAST(n AS VARCHAR(6))
  FROM dbo.Numbers
  WHERE n <= 24*365*60;
GO
INSERT INTO dbo.Calls(CallID, AirTime, SomeInfo)
  SELECT n
    ,DATEADD(minute, n - 1, '20080101')
    ,'Call during Commercial #' + CAST(n AS VARCHAR(6))
  FROM dbo.Numbers
  WHERE n <= 24*365*60;
GO

http://sqlblog.com/blogs/alexander_kuznetsov/archive/2009/07/07/using-cross-apply-to-optimize-joins-on-between-conditions.aspx

Saturday, January 5, 2013

TSQL Get Queries with Maximum cost


--TSQL Get Queries with Maximum cost
select highest_cpu_queries.total_worker_time, q.dbid, q.objectid, q.number, q.encrypted, q.[text],highest_cpu_queries.*
from
(
select top 50 qs.*
from sys.dm_exec_query_stats qs
order by qs.total_worker_time desc
)
as highest_cpu_queries cross apply sys.dm_exec_sql_text(plan_handle) as q
order by highest_cpu_queries.total_worker_time desc

--REINDEX TABLE TO OPTIMIZE QUERIES
DBCC DBREINDEX(TBL_Product, '', 80)
--OR
DBCC DBREINDEX (TBL_Product)

--AND CONSIDER WHICH FIELDS NEED NON CLUSTERED INDEX IN YOUR DB
SELECT * FROM SYS.dm_db_missing_index_details
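--A fuller sketch (not in the original snippet): join the missing-index DMVs so the
--suggestions can be ranked by how often and how much they would have helped
SELECT mid.statement AS table_name,
       mid.equality_columns, mid.inequality_columns, mid.included_columns,
       migs.user_seeks, migs.avg_user_impact
FROM sys.dm_db_missing_index_details AS mid
JOIN sys.dm_db_missing_index_groups AS mig
    ON mig.index_handle = mid.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs
    ON migs.group_handle = mig.index_group_handle
ORDER BY migs.user_seeks * migs.avg_user_impact DESC;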