A transaction must exhibit a set of properties known as ACID.
ACID (an acronym for Atomicity, Consistency, Isolation, Durability) is a set of properties that database professionals generally look for when evaluating databases and application architectures. For a database to be reliable, all four of these attributes must be achieved.
Atomicity (each transaction is "all or nothing"):
If one part of the transaction fails, the entire transaction fails, and the database state is left unchanged.
For example, in an application that transfers funds from one account to another, the atomicity property ensures that, if a debit is made successfully from one account, the corresponding credit is made to the other account.
Consistency (each transaction leaves the database in a valid state):
Data is in a consistent state when a transaction starts and when it ends.
For example, in an application that transfers funds from one account to another, the consistency property ensures that the total value of funds in both accounts is the same at the start and end of each transaction.
Isolation (transactions do not interfere with one another):
Every transaction has a well-defined boundary. One transaction will never affect another transaction running at the same time.
Durability (committed changes survive failures):
Durability means that once a transaction has been committed, it will remain so, even in the event of power loss, crashes, or errors. In a relational database, for instance, once a group of SQL statements executes, the results need to be stored permanently (even if the database crashes immediately thereafter). To defend against power loss, transactions (or their effects) must be recorded in non-volatile storage.
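The funds-transfer example above can be sketched in T-SQL as follows (the Accounts table and its column names are hypothetical):

```sql
-- Hypothetical table: Accounts(AccountId INT, Balance MONEY)
BEGIN TRY
    BEGIN TRAN
        -- Debit one account and credit the other;
        -- atomicity guarantees both persist or neither does
        UPDATE Accounts SET Balance = Balance - 100 WHERE AccountId = 1
        UPDATE Accounts SET Balance = Balance + 100 WHERE AccountId = 2
    COMMIT TRAN  -- durability: once this returns, the change survives a crash
END TRY
BEGIN CATCH
    ROLLBACK TRAN  -- any failure leaves both balances unchanged
END CATCH
```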
Nested transactions usually occur when one stored procedure begins a transaction and then calls other procedures also using transactions before committing the initial transaction. However, a single stored procedure could also have multiple transactions open at the same time.
- The transaction is either committed or rolled back based on the action taken at the end of the outermost transaction.
- If the outer transaction is committed, the inner nested transactions are also committed.
- If the outer transaction is rolled back, then all inner transactions are also rolled back, regardless of whether or not the inner transactions were individually committed.
@@TRANCOUNT – returns the current transaction nesting level
BEGIN TRAN – increments @@TRANCOUNT by 1
COMMIT TRAN – decrements @@TRANCOUNT by 1
ROLLBACK TRAN – resets @@TRANCOUNT to 0 (except when rolling back to a savepoint, which leaves it unchanged)
BEGIN TRAN
INSERT INTO People VALUES('Tom')
BEGIN TRAN
INSERT INTO People VALUES('Dick')
ROLLBACK TRAN
COMMIT TRAN
If you execute the above sample you will find that it generates the error:
The COMMIT TRANSACTION request has no corresponding BEGIN TRANSACTION.
This error occurs because the final line of the script, which commits the outer transaction, fails: the outer transaction no longer exists. When you roll back a nested transaction, all active transactions are rolled back, not just the innermost one.
Savepoints offer a mechanism to roll back portions of transactions.
SAVE TRANSACTION Savepoint_name
ROLLBACK TRANSACTION savepoint_name – To roll back to the savepoint instead of rolling back to the start of the transaction
BEGIN TRAN
PRINT 'First Transaction: ' + CONVERT(VARCHAR, @@TRANCOUNT)
INSERT INTO People VALUES('Tom')
SAVE TRAN Savepoint1
PRINT 'Second Transaction: ' + CONVERT(VARCHAR, @@TRANCOUNT)
INSERT INTO People VALUES('Dick')
ROLLBACK TRAN Savepoint1
PRINT 'After Rollback: ' + CONVERT(VARCHAR, @@TRANCOUNT)
COMMIT TRAN
PRINT 'After Commit: ' + CONVERT(VARCHAR, @@TRANCOUNT)
/* Output:
First Transaction: 1
Second Transaction: 1
After Rollback: 1
After Commit: 0 */
A view is a virtual table, created by a stored SQL statement, that consists of columns from one or more tables. These tables are referred to as base or underlying tables. Views can be used as a security mechanism within your database, and they provide a simpler front end to a user querying the data. Once you have defined a view, you can reference it like any other table in a database.
A view serves as a security mechanism. This ensures that users are able to retrieve and modify only the data they are allowed to see. Users cannot see or access the remaining data in the underlying tables. A view also serves as a mechanism to simplify query execution. Complex queries can be stored in the form of a view, and data from the view can be extracted using simple queries.
Views ensure the security of data by restricting access to the following data:
- Specific rows of the tables.
- Specific columns of the tables.
- Specific rows and columns of the tables.
- Rows fetched by using joins.
- Statistical summary of data in a given table.
- Subsets of another view or a subset of views and tables.
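For example, a view can restrict access to specific rows and columns at once. Here is a minimal sketch, assuming a hypothetical Employees table with a sensitive Salary column:

```sql
-- Users see only the Sales rows, and never the Salary column
CREATE VIEW vwSalesStaff
AS
SELECT EmployeeId, FirstName, LastName
FROM Employees
WHERE Department = 'Sales'
```

Granting users SELECT permission on vwSalesStaff, but not on Employees, enforces both the row and the column restriction.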
Some common examples of views are:
- A subset of rows or columns of a base table.
- A union of two or more tables.
- A join of two or more tables.
- A statistical summary of base tables.
- A subset of another view, or some combination of views and base tables.
A view can be created by using the CREATE VIEW statement.
CREATE VIEW view_name
[(column_name [, column_name]...)]
AS select_statement [WITH CHECK OPTION]
View_name specifies the name of the view and must follow the rules for identifiers.
Column_name specifies the name of the column to be used in view. If the column_name option is not specified, then the view is created with the same columns as specified in the select_statement.
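The WITH CHECK OPTION clause in the syntax above prevents modifications made through the view from producing rows that the view itself cannot see. A sketch, assuming a hypothetical Customers table:

```sql
CREATE VIEW vwUKCustomers
AS
SELECT CustomerId, CompanyName, Country
FROM Customers
WHERE Country = 'UK'
WITH CHECK OPTION

-- The following UPDATE would fail, because the modified row
-- would no longer satisfy the view's WHERE clause:
-- UPDATE vwUKCustomers SET Country = 'France' WHERE CustomerId = 1
```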
CREATE VIEW vwCustomer
AS
SELECT CustomerId, CompanyName, Phone
FROM Customers
ii. Getting Information on a View:
SQL Server stores information on the view in the following system tables:
· SYSOBJECTS — stores the name of the view.
· SYSCOLUMNS — stores the names of the columns defined in the view.
· SYSDEPENDS — stores information on the view dependencies.
· SYSCOMMENTS — stores the text of the view definition.
There are also certain system stored procedures that help retrieve information on views. The sp_help system stored procedure displays view-related information, and sp_helptext displays the view definition, provided the name of the view is given as its parameter.
sp_helptext vwCustomer
Displays the definition of the vwCustomer view
If a view is created with the WITH ENCRYPTION option, its definition cannot be viewed with the sp_helptext system stored procedure.
iii. Altering Views:
You can modify a view without dropping it. This ensures that the permission on the view is also not lost. You can modify a view without affecting its dependent objects, such as triggers and stored procedures. You modify a view using the ALTER VIEW statement.
ALTER VIEW vwCustomer
AS
SELECT CustomerId, CompanyName, Phone, Fax
FROM Customers
iv. Dropping Views:
You can drop a view from a database by using the DROP VIEW statement. When a view is dropped, it has no effect on the underlying tables. Dropping a view removes its definition and all the permissions assigned to it. Furthermore, if you query any view that references a dropped view, you receive an error message. However, dropping a table that a view references does not drop the view automatically. You must drop the view explicitly.
DROP VIEW view_name
View_name is the name of the view to be dropped.
You can drop multiple views with a single DROP VIEW statement. The names of the views that need to be dropped are separated by commas in the DROP VIEW statement.
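For example (both view names are hypothetical):

```sql
DROP VIEW vwCustomerDetails, vwOrderSummary
```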
v. Renaming Views:
You can rename a view without having to drop it. This ensures that the permissions on the view are not lost.
· A view can be renamed by using the sp_rename system stored procedure.
sp_rename old_viewname, new_viewname
Old_viewname is the view that needs to be renamed.
New_viewname is the new name of the view.
sp_rename vwCustomers, vwCustomerDetails
Renames vwCustomers to vwCustomerDetails
vi. Manipulating Data Using Views:
Although a view may be derived from multiple underlying tables, you can use it to modify data in only one of its base tables at a time. For example, a view vwNew that is derived from two tables, table1 and table2, can be used to modify either table1 or table2 in a single statement. A single data modification statement that affects both underlying tables is not permitted.
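As a sketch (using the hypothetical vwNew view described above, with col1 assumed to come from table1 and col2 from table2):

```sql
-- Allowed: the statement touches columns from table1 only
UPDATE vwNew SET col1 = 'x' WHERE id = 1

-- Not allowed: a single statement cannot modify columns
-- belonging to both base tables
-- UPDATE vwNew SET col1 = 'x', col2 = 'y' WHERE id = 1
```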
vii. When to Use a View:
You need to have a goal in mind when creating a view. There are a number of scenarios where you will want to look for a view as a solution.
- To hide the complexity of the underlying database schema, or customize the data and schema for a set of users.
- To control access to rows and columns of data: a view can be a useful tool to provide both column-level and row-level security for a database.
- To aggregate data for performance: if you need to roll up or aggregate data from a set of tables, a view can present the summarized result as a single queryable object.
viii. Modifying Data through a View:
- Use INSTEAD OF triggers with logic to support INSERT, UPDATE and DELETE statements.
- Use updatable partitioned views that modify one or more member tables.
Advantages of views:
1. You can view the data without storing it in a separate object.
2. A view restricts the visible portion of a table, i.e., it can hide some of the columns in the table.
3. A view can join two or more tables and present the result as one object to the user.
4. A view can restrict access to a table so that nobody can insert rows into the table.
Disadvantages of views:
1. DML operations cannot always be used on a view.
2. When a table is dropped, the view becomes inactive; it depends on the table objects.
3. A view is an object, so it occupies space.
The SQL JOIN statement is used to combine the data contained in two relational database tables based upon a common attribute.
Types of Join:
- Inner Join
- Outer Join
- Left Outer Join
- Right Outer Join
- Full Outer Join
- Cross Join
1. Inner Join: The inner join is the default join type in SQL Server. It uses comparison operators such as =, <, and > to match the records in two tables. Inner joins include equi joins and natural joins.
Self Join: Self join joins a single sql database table to itself.
Equi Join: An equi join returns all the columns from both tables and filters the records satisfying the matching condition specified in the ON clause of the inner join query.
2. Outer Join: Outer Join has further 3 sub categories as left, right and full. Outer Join uses these category names as keywords that can be specified in the FROM clause.
o Left Outer Join: A left outer join returns all the rows from the table specified first in the join clause. If a row in the left table has no matching record in the right-side table, the columns coming from the right table are returned as null values for that row.
o Right Outer Join: Right Outer Join is exactly the reverse method of Left Outer Join. It returns all the rows from right table and returns null values for the rows having no match in the left joined table.
o Full Outer Join: Full outer join returns all the rows from both left and right joined tables. If there is any match missing from the left table then it returns null column values for left side table and if there is any match missing from right table then it returns null value columns for the right side table.
3. Cross Join: A cross join produces the Cartesian product of the rows of the left and right tables: it combines each row of the left table with every row of the right table.
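The join types above can be sketched as follows (assuming hypothetical Employees and Departments tables that share a DeptId column):

```sql
-- Inner join: only employees whose DeptId matches a department
SELECT e.Name, d.DeptName
FROM Employees e INNER JOIN Departments d ON e.DeptId = d.DeptId

-- Left outer join: all employees; DeptName is NULL where unmatched
SELECT e.Name, d.DeptName
FROM Employees e LEFT OUTER JOIN Departments d ON e.DeptId = d.DeptId

-- Cross join: every employee paired with every department
SELECT e.Name, d.DeptName
FROM Employees e CROSS JOIN Departments d
```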
The SQL CREATE TRIGGER statement provides a way for the database management system to actively control, monitor, and manage a group of tables whenever an insert, update, or delete operation is performed. The statements specified in the SQL trigger are executed each time an SQL insert, update, or delete operation is performed. An SQL trigger may call stored procedures or user-defined functions to perform additional processing when the trigger is executed.
Unlike stored procedures, an SQL trigger cannot be directly called from an application. Instead, an SQL trigger is invoked by the database management system on the execution of a triggering insert, update, or delete operation. The definition of the SQL trigger is stored in the database management system and is invoked by the database management system, when the SQL table, that the trigger is defined on, is modified.
A trigger is a database object, just like a stored procedure; or we can say it is a special kind of stored procedure which fires after (or before) a specified language event executes. More specifically, it is an object which is attached to a table, view, or database schema for tracking the operations on them. The main difference between a trigger and a stored procedure is that the former is attached to a table or view and is fired only when an INSERT, UPDATE, and/or DELETE occurs, while a stored procedure executes at any time when it is called.
CREATE TRIGGER myTrigger ON Employee FOR INSERT
AS
IF EXISTS (
    SELECT * FROM Inserted i JOIN Employee e
    ON i.ID = e.ID)
    PRINT 'Employee inserted'
Types of Triggers
There are some added types in SQL Server 2005 for triggering actions:
1. DML Triggers
   - INSTEAD OF Triggers
   - AFTER Triggers
2. DDL Triggers
3. CLR Triggers
- DML Triggers:
These triggers are fired when a Data Manipulation Language (DML) event takes place. They are attached to a table or view and are fired only when an INSERT, UPDATE and/or DELETE event occurs.
- INSTEAD OF Triggers:
INSTEAD-OF triggers are objects that will execute instead of data manipulation statements in the database engine. For example, attaching an INSTEAD-OF INSERT trigger to a table will tell the database engine to execute that trigger instead of executing the statement that would insert values into that table.
Why INSTEAD OF Trigger?
INSTEAD-OF triggers are very powerful objects in SQL Server. They allow the developer to divert the database engine to do something different than what the user is trying to do. An example of this would be to add an INSTEAD-OF trigger to any table in your database that rolls back transactions on tables that you do not want modified. You must be careful when using this method because the INSTEAD-OF trigger will need to be disabled before any specified modifications can occur to this table.
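A minimal sketch of this rollback-guard pattern (the table and trigger names are hypothetical):

```sql
-- Silently block all deletes against a table you want protected
CREATE TRIGGER trg_BlockDelete ON ProtectedTable
INSTEAD OF DELETE
AS
    PRINT 'Deletes on ProtectedTable are not permitted.'
    -- the DELETE statement itself is never executed
```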
Perhaps a more functional reason to use an INSTEAD-OF trigger would be to add the trigger to a view. Adding an INSTEAD-OF trigger to a view essentially allows you to create updateable views. Updateable views allow you to totally abstract your database schema so you can potentially design a system in such a way that your database developers do not have to worry about the OLTP database schema and instead rely upon a standard set of views for data modifications.
CREATE VIEW order_info AS
SELECT c.customer_id, c.cust_last_name, c.cust_first_name,
o.order_id, o.order_date, o.order_status
FROM customers c, orders o
WHERE c.customer_id = o.customer_id;
Normally this view would not be updatable, because the primary key of the orders table (order_id) is not unique in the result set of the join view. To make this view updatable, create an INSTEAD OF trigger on the view to process INSERT statements directed to the view. The PL/SQL trigger implementation is shown below.
CREATE OR REPLACE TRIGGER order_info_insert
INSTEAD OF INSERT ON order_info
DECLARE
  duplicate_info EXCEPTION;
  PRAGMA EXCEPTION_INIT (duplicate_info, -00001);
BEGIN
  INSERT INTO customers
    (customer_id, cust_last_name, cust_first_name)
  VALUES (
    :new.customer_id, :new.cust_last_name, :new.cust_first_name);
  INSERT INTO orders (order_id, order_date, customer_id)
  VALUES (
    :new.order_id, :new.order_date, :new.customer_id);
EXCEPTION
  WHEN duplicate_info THEN
    RAISE_APPLICATION_ERROR (
      num => -20107,
      msg => 'Duplicate customer or order ID');
END order_info_insert;
You can now insert into both base tables through the view (as long as all NOT NULL columns receive values):
INSERT INTO order_info VALUES
  (999, 'Smith', 'John', 2500, '13-MAR-2001', 0);
- AFTER Triggers:
AFTER triggers are executed after the action of the INSERT, UPDATE, or DELETE statement is performed. This was the only option available in earlier versions of Microsoft SQL Server, and AFTER triggers can be specified on tables only. Here is a sample trigger creation statement on the UserTable table.
------ Creating a DML trigger in T-SQL -------
SET NOCOUNT ON
GO
CREATE TABLE UserTable (User_ID int IDENTITY, User_Name varchar (30), Type varchar (10))
GO
CREATE TRIGGER tr_User_INSERT ON UserTable FOR INSERT
AS
PRINT GETDATE ()
GO
INSERT UserTable (User_Name, Type) VALUES ('James', 'ADMIN')
GO
------ Result ---------------
Apr 30 2007 7:04AM
- DDL Triggers:
DDL triggers are new to SQL Server 2005. This type of trigger, like regular triggers, fires in response to an event; DDL triggers fire in response to a variety of Data Definition Language (DDL) events. These events are specified by the T-SQL statements that start with the keywords CREATE, ALTER, and DROP. Certain system stored procedures that perform DDL-like operations can also fire DDL triggers. They are used for administrative tasks like auditing and regulating database operations.
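A minimal DDL trigger sketch for auditing (the trigger name and message are illustrative):

```sql
-- Fires for any DROP TABLE or ALTER TABLE in the current database
CREATE TRIGGER trg_AuditTableDDL
ON DATABASE
FOR DROP_TABLE, ALTER_TABLE
AS
    PRINT 'Table DDL detected.'
    ROLLBACK  -- optionally undo the DDL statement itself
```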
- CLR Triggers:
A CLR trigger can be any of the above: it can be a DDL or DML trigger, and it can also be an AFTER or INSTEAD OF trigger. Here we execute one or more methods, written in managed code, that are members of an assembly created in the .NET Framework. That assembly must be deployed in SQL Server 2005 using the CREATE ASSEMBLY statement.
The Microsoft.SqlServer.Server Namespace contains the required classes and enumerations for this objective.
Steps for creating CLR Triggers:
The following are required steps for creating a CLR-Trigger of DML (After trigger) type for Insert action. This database Object is executed as the result of a user action against a table i.e. for an INSERT statement.
1. Creating a .NET class implementing the triggering action
2. Building an assembly (.DLL) from that class
3. Enabling the CLR environment in the database
4. Registering the assembly in SQL Server
5. Creating the CLR trigger using that assembly
A clustered index stores the actual data rows at the leaf level of the index. Returning to the example above, that would mean that the entire row of data associated with the primary key value of 123 would be stored in that leaf node. An important characteristic of the clustered index is that the indexed values are sorted in either ascending or descending order. As a result, there can be only one clustered index on a table or view. In addition, data in a table is sorted only if a clustered index has been defined on a table.
Note: A table that has a clustered index is referred to as a clustered table. A table that has no clustered index is referred to as a heap.
Unlike a clustered index, the leaf nodes of a non-clustered index contain only the values from the indexed columns and row locators that point to the actual data rows, rather than containing the data rows themselves. This means that the query engine must take an additional step in order to locate the actual data.
A row locator’s structure depends on whether it points to a clustered table or to a heap. If referencing a clustered table, the row locator points to the clustered index, using the value from the clustered index to navigate to the correct data row. If referencing a heap, the row locator points to the actual data row.
Non-clustered indexes do not sort the table's data the way clustered indexes do; however, you can create more than one non-clustered index per table or view. SQL Server 2005 supports up to 249 non-clustered indexes, and SQL Server 2008 supports up to 999. This certainly doesn't mean you should create that many indexes. Indexes can both help and hinder performance, as I explain later in the article.
In addition to being able to create multiple non-clustered indexes on a table or view, you can also add included columns to your index. This means that you can store at the leaf level not only the values from the indexed column, but also the values from non-indexed columns. This strategy allows you to get around some of the limitations on indexes. For example, you can include non-indexed columns in order to exceed the size limit of indexed columns (900 bytes in most cases).
In addition to an index being clustered or non-clustered, it can be configured in other ways:
- Composite index: An index that contains more than one column. In both SQL Server 2005 and 2008, you can include up to 16 columns in an index, as long as the index doesn’t exceed the 900-byte limit. Both clustered and non-clustered indexes can be composite indexes.
- Unique Index: An index that ensures the uniqueness of each value in the indexed column. If the index is a composite, the uniqueness is enforced across the columns as a whole, not on the individual columns. For example, if you were to create an index on the FirstName and LastName columns in a table, the names together must be unique, but the individual names can be duplicated.
A unique index is automatically created when you define a primary key or unique constraint:
- Primary key: When you define a primary key constraint on one or more columns, SQL Server automatically creates a unique, clustered index if a clustered index does not already exist on the table or view. However, you can override the default behavior and define a unique, non-clustered index on the primary key.
- Unique: When you define a unique constraint, SQL Server automatically creates a unique, non-clustered index. You can specify that a unique clustered index be created if a clustered index does not already exist on the table.
- Covering index: A type of index that includes all the columns that are needed to process a particular query. For example, your query might retrieve the FirstName and LastName columns from a table, based on a value in the ContactID column. You can create a covering index that includes all three columns.
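These options can be sketched in T-SQL (assuming a hypothetical Contacts table):

```sql
-- Composite, unique index: the LastName/FirstName pair must be unique
CREATE UNIQUE NONCLUSTERED INDEX IX_Contacts_Name
ON Contacts (LastName, FirstName)

-- Covering index for the example query: ContactID is the key,
-- and the INCLUDE columns are stored at the leaf level, so the
-- query can be answered from the index alone
CREATE NONCLUSTERED INDEX IX_Contacts_ContactID
ON Contacts (ContactID)
INCLUDE (FirstName, LastName)
```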
Microsoft SQL Server supports two types of indexes:
- Clustered indexes define the physical sorting of a database table’s rows in the storage media. For this reason, each database table may have only one clustered index. If a PRIMARY KEY constraint is created for a database table and no clustered index currently exists for that table, SQL Server automatically creates a clustered index on the primary key.
- Non-clustered indexes are created outside of the database table and contain a sorted list of references to the table itself. SQL Server 2000 supports a maximum of 249 non-clustered indexes per table. However, it’s important to keep in mind that non-clustered indexes slow down the data modification and insertion process, so indexes should be kept to a minimum
One of the hardest tasks facing database administrators is the selection of appropriate columns for non-clustered indexes. You should consider creating non-clustered indexes on any columns that are frequently referenced in the WHERE clauses of SQL statements. Other good candidates are columns referenced by JOIN and GROUP BY operations.
You may wish to also consider creating non-clustered indexes that cover all of the columns used by certain frequently issued queries. These queries are referred to as “covered queries” and experience excellent performance gains.
Indexes allow you to speed query performance on commonly used columns and improve the overall processing speed of your database.
Although indexes can reduce access time significantly, they can also have adverse effects on performance. Before you create indexes, consider the effects of multiple indexes on disk space and processing time:
· Each index requires storage or disk space. The exact amount depends on the size of the table and the size and number of columns in the index.
· Each INSERT or DELETE operation performed on a table requires additional updating of each index on that table. This is also true for each UPDATE operation that changes the value of an index key.
· The LOAD utility rebuilds or appends to any existing indexes. The index free space MODIFIED BY parameter can be specified on the LOAD command to override the index PCTFREE value used when the index was created.
· Each index potentially adds an alternative access path for a query for the optimizer to consider, which increases the compilation time.
Choose indexes carefully to address the needs of the application program.
Difference Between Clustered and Non-Clustered Indexes:
There are clustered and non-clustered indexes. A clustered index is a special type of index that reorders the way records in the table are physically stored. Therefore, a table can have only one clustered index. The leaf nodes of a clustered index contain the data pages.
A non-clustered index is a special type of index in which the logical order of the index does not match the physical stored order of the rows on disk. The leaf node of a non-clustered index does not consist of the data pages. Instead, the leaf nodes contain index rows.
- Every index increases the time it takes to perform INSERT, UPDATE and DELETE operations, so the number of indexes should be kept small. Try to use a maximum of 4-5 indexes on one table, not more. If you have a read-only table, the number of indexes may be increased.
- Keep your indexes as narrow as possible. This reduces the size of the index and reduces the number of reads required to read the index.
- Try to create indexes on columns that have integer values rather than character values.
- If you create a composite (multi-column) index, the order of the columns in the key is very important. Try to order the columns in the key as to enhance selectivity, with the most selective columns to the leftmost of the key.
- If you want to join several tables, try to create surrogate integer keys for this purpose and create indexes on their columns.
- Create surrogate integer primary key (identity for example) if your table will not have many insert operations.
- Clustered indexes are preferable to non-clustered indexes if you need to select a range of values or need to sort the result set with GROUP BY or ORDER BY.
- If your application will be performing the same query over and over on the same table, consider creating a covering index on the table.
- You can use the SQL Server Profiler Create Trace Wizard with the "Identify Scans of Large Tables" trace to determine which tables in your database may need indexes. This trace will show which tables are being scanned by queries instead of using an index.
- You can use the sp_MSforeachtable undocumented stored procedure to rebuild all indexes in your database. Try to schedule it to execute during CPU idle time and slow production periods.
sp_MSforeachtable @command1="print '?' DBCC DBREINDEX ('?')"
USER DEFINED FUNCTIONS:
UDFs support a wide range of functionality, and just because you can't commit write changes to a database from within one, it doesn't mean you should turn a blind eye to them. Here's a list of the statements that you can include in the body of a UDF:
- Flow control statements such as IF…ELSE, CASE, GOTO labels, etc.
- UPDATE, INSERT and DELETE commands to modify the contents of tables which are local to the UDF and have been created using SQL Server 2000's new TABLE variable type.
- The EXECUTE command to execute stored procedures and extended stored procedures.
- The SET and SELECT statements to assign a value to a variable.
- SELECT statements to retrieve field values from tables into local variables.
- DECLARE CURSOR, OPEN, FETCH NEXT FROM, CLOSE and DEALLOCATE statements, as long as the cursor is both created and disposed of within the scope of the UDF. Note that actual rows can't be returned to the caller; they must be explicitly fetched into type-matching variables using the FETCH NEXT INTO statement.
- DECLARE statements to create variables local (in terms of scope) to the UDF.
- Internally defined SQL variables which are prepended with “@@”, such as @@ERROR and @@FETCH_STATUS.
Types of User Defined Functions
There are three different types of User Defined Functions; each type refers to the data being returned by the function. Scalar functions return a single value. Inline table functions return a single table variable that was created by a select statement. The final type is the multi-statement table function, which returns a table variable whose structure was created by hand, similar to a CREATE TABLE statement. It is useful when complex data manipulation inside the function is required.
Our first User Defined Function will accept a date time, and return only the date portion. Scalar functions return a value. From inside Query Analyzer, enter:
CREATE FUNCTION dbo.DateOnly (@InDateTime datetime)
RETURNS varchar (10)
AS
BEGIN
    DECLARE @MyOutput varchar (10)
    SET @MyOutput = CONVERT (varchar (10), @InDateTime, 101)
    RETURN @MyOutput
END
To call our function, execute: SELECT dbo.DateOnly (GETDATE ())
Notice the User Defined Function must be prefaced with the owner name, dbo in this case. In addition, GETDATE can be used as the input parameter, but could not be used inside the function itself. Other built-in SQL functions that cannot be used inside a User Defined Function include RAND, NEWID, @@CONNECTIONS, @@TIMETICKS, and @@PACK_SENT: in short, any built-in function that is non-deterministic.
The statement begins by supplying a function name and input parameter list. In this case, a date time value will be passed in. The next line defines the type of data the UDF will return. Between the BEGIN and END block is the statement code. Declaring the output variable was for clarity only. This function should be shortened to:
CREATE FUNCTION dbo.testDateOnly (@InDateTime datetime)
RETURNS varchar (10)
AS
BEGIN
    RETURN CONVERT (varchar (10), @InDateTime, 101)
END
Inline Table UDFs
These User Defined Functions return a table variable that was created by a single select statement: almost like a simply constructed non-updatable view, but with the benefit of accepting input parameters.
This next function looks up all the employees in the pubs database whose first name starts with the letter that is passed in as a parameter. In Query Analyzer, enter and run:
CREATE FUNCTION dbo.LookByFName (@FirstLetter char (1))
RETURNS TABLE
AS
RETURN SELECT *
FROM employee
WHERE LEFT (fname, 1) = @FirstLetter
To use the new function, enter:
SELECT * FROM dbo.LookByFName ('A')
All the rows having a first name starting with A were returned. The return is a Table Variable, not to be confused with a temporary table. Table variables are new in SQL 2000. They are a special data type whose scope is limited to the process that declared it. Table variables are stated to have performance benefits over temporary tables. None of my personal testing has found this result though.
Multi Statement UDFs
Multi Statement User Defined Functions are very similar to Stored Procedures. They both allow complex logic to take place inside the function. There are a number of restrictions unique to functions though. The Multi Statement UDF will always return a table variable–and only one table variable. There is no way to return multiple result sets. In addition, a User Defined Function cannot call a Stored Procedure from inside itself. They also cannot execute dynamic SQL. Remember also, that UDFs cannot use non-deterministic built in functions. So GETDATE and RAND cannot be used. Error handling is restricted. RAISERROR and @@ERROR are invalid from inside User Defined Functions. Like other programming languages, the purpose of a User Defined Function is to create a stand-alone code module to be reused over and over by the global application.
For a Multi Statement test, we will create a modified version of the LookByFName function. This new function will accept the same input parameter. But rather than return a table from a simple select, a specific table will be created, and data in it will be manipulated prior to the return:
CREATE FUNCTION dbo.multi_test (@FirstLetter char (1))
RETURNS @Result TABLE
    (
    fname varchar (20),
    hire_date datetime,
    on_probation char (1)
    )
AS
BEGIN
    INSERT INTO @Result (fname, hire_date)
    SELECT fname, hire_date
    FROM employee
    WHERE LEFT (fname, 1) = @FirstLetter

    UPDATE @Result
    SET on_probation = 'N'

    UPDATE @Result
    SET on_probation = 'Y'
    WHERE hire_date < '01/01/1991'

    RETURN
END
To use the new function, execute:
SELECT * FROM dbo.multi_test ('A')
With the new Multi Statement Function, we can manipulate data like a Stored Procedure, but use it in statement areas like a View.
For example, only specific columns can be returned.
SELECT fname FROM dbo.multi_test ('A')
The function can also be joined like a view:
SELECT e.lname, f.fname
FROM employee e INNER JOIN dbo.multi_test ('A') f ON e.fname = f.fname
Advantages of User Defined Functions
Before SQL 2000, User Defined Functions (UDFs) were not available. Stored Procedures were often used in their place. When advantages or disadvantages of User Defined Functions are discussed, the comparison is usually to Stored Procedures.
One of the advantages of User Defined Functions over Stored Procedures is the fact that a UDF can be used in a SELECT, WHERE, or CASE statement. They can also be used to create joins. In addition, User Defined Functions are simpler to invoke from inside another SQL statement than Stored Procedures.
Disadvantages of User Defined Functions
User Defined Functions cannot be used to modify base table information: the DML statements INSERT, UPDATE, and DELETE cannot be used on base tables. Another disadvantage is that SQL functions returning non-deterministic values cannot be called from inside User Defined Functions. GETDATE is an example of a non-deterministic function: every time it is called, a different value is returned. Therefore, GETDATE cannot be called from inside a UDF you create.
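A common workaround is to evaluate GETDATE outside the function and pass its result in as a parameter. A minimal sketch, assuming a hypothetical dbo.days_employed function:

```sql
CREATE FUNCTION dbo.days_employed (@hired datetime, @today datetime)
RETURNS int
AS
BEGIN
    -- @today is supplied by the caller, so no non-deterministic
    -- call is needed inside the function body
    RETURN DATEDIFF(day, @hired, @today)
END
```

The caller evaluates GETDATE and hands the value in:

```sql
SELECT fname, dbo.days_employed(hire_date, GETDATE()) AS days
FROM employee
```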
SQL Optimization Tips:
- Use views and stored procedures instead of heavy-duty queries. This can reduce network traffic, because the client sends the server only the stored procedure or view name (perhaps with some parameters) instead of the text of a large, heavy-duty query. This can also facilitate permission management, because you can restrict user access to table columns they should not see.
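For example, a view can expose only the columns users are allowed to see, and permissions can then be granted on the view rather than the base table (the view and role names here are hypothetical):

```sql
CREATE VIEW dbo.employee_public
AS
SELECT fname, hire_date
FROM employee
GO
-- Users granted select on the view never touch the base table directly
GRANT SELECT ON dbo.employee_public TO reporting_role
```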
- Try to use constraints instead of triggers, whenever possible. Constraints are much more efficient than triggers and can boost performance.
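For example, a simple data rule that might otherwise be enforced by an insert/update trigger can be declared as a CHECK constraint instead (the constraint name below is hypothetical):

```sql
-- Reject blank first names declaratively, with no trigger code to run
ALTER TABLE employee
ADD CONSTRAINT chk_fname_not_blank CHECK (fname <> '')
```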
- Use table variables instead of temporary tables. Table variables require less locking and logging resources than temporary tables, so table variables should be used whenever possible. Note that table variables were introduced in SQL Server 2000 and are not available in earlier versions.
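A sketch of the substitution, using the employee table from the earlier examples: the table variable holds the same intermediate rows a CREATE TABLE #recent temporary table would, with lighter locking and logging:

```sql
-- Table variable version (instead of CREATE TABLE #recent ...)
DECLARE @recent TABLE
(
    fname varchar(20),
    hire_date datetime
)

INSERT INTO @recent
SELECT fname, hire_date
FROM employee
WHERE hire_date >= '01/01/1991'

SELECT * FROM @recent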
- Try to use the UNION ALL statement instead of UNION, whenever possible. The UNION ALL statement is much faster than UNION, because UNION ALL does not look for duplicate rows, while UNION scans for duplicates whether or not they exist.
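For example, when every matching row should be returned anyway, skipping the duplicate check is both correct and faster:

```sql
-- We want one row per employee, so the duplicate-elimination
-- step that UNION performs is pure overhead here
SELECT fname, hire_date FROM employee WHERE hire_date < '01/01/1991'
UNION ALL
SELECT fname, hire_date FROM employee WHERE hire_date >= '01/01/1991'
```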
- Try to avoid using the DISTINCT clause, whenever possible. Because the DISTINCT clause results in some performance degradation, use it only when it is actually necessary.
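A common case where DISTINCT is unnecessary is a select list that already contains a unique key (emp_id here is a hypothetical primary key on employee):

```sql
-- DISTINCT adds a needless sort/hash step: the rows are already unique
SELECT DISTINCT emp_id, fname FROM employee
-- Equivalent result, cheaper plan:
SELECT emp_id, fname FROM employee
```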
- Try to avoid using SQL Server cursors, whenever possible. SQL Server cursors can degrade performance in comparison with set-based select statements. If you need to perform row-by-row operations, try to use a correlated sub-query or derived table instead.
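For example, a cursor that walks the table to find the most recent hire for each first-letter group can be replaced by a single join against a derived table:

```sql
SELECT e.fname, e.hire_date
FROM employee e
INNER JOIN
(
    -- One pass computes the latest hire date per initial
    SELECT LEFT(fname, 1) AS initial, MAX(hire_date) AS latest
    FROM employee
    GROUP BY LEFT(fname, 1)
) d ON LEFT(e.fname, 1) = d.initial AND e.hire_date = d.latest
```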
- Try to avoid the HAVING clause, whenever possible. The HAVING clause is used to restrict the result set returned by the GROUP BY clause. When you use GROUP BY with the HAVING clause, the GROUP BY clause divides the rows into sets of grouped rows and aggregates their values, and then the HAVING clause eliminates undesired aggregated groups. In many cases, you can write your select statement so that it contains only WHERE and GROUP BY clauses, without a HAVING clause. This can improve the performance of your query.
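For example, a non-aggregate condition placed in HAVING can usually be moved into WHERE, so rows are eliminated before the grouping work is done rather than after:

```sql
-- Filters groups after aggregation:
SELECT fname, COUNT(*) AS cnt
FROM employee
GROUP BY fname
HAVING fname LIKE 'A%'

-- Same result, but rows are discarded before the GROUP BY runs:
SELECT fname, COUNT(*) AS cnt
FROM employee
WHERE fname LIKE 'A%'
GROUP BY fname
```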
- If you need to return a table's total row count, you can use an alternative to the SELECT COUNT(*) statement. Because SELECT COUNT(*) performs a full table scan to produce the count, it can take a very long time on a large table. There is another way to determine the total row count in a table: the sysindexes system table. Its ROWS column contains the total row count for each table in your database, so you can use the following select statement instead of SELECT COUNT(*):
SELECT rows FROM sysindexes
WHERE id = OBJECT_ID('table_name') AND indid < 2
This can speed up such queries several times over.
- Include the SET NOCOUNT ON statement in your stored procedures to suppress the message indicating the number of rows affected by a T-SQL statement. This can reduce network traffic, because the client will not receive those messages.
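A minimal sketch (the procedure name is hypothetical):

```sql
CREATE PROCEDURE dbo.get_names
AS
SET NOCOUNT ON  -- suppress the 'n rows affected' messages
SELECT fname FROM employee
```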
- Use select statements with the TOP keyword or the SET ROWCOUNT statement if you need to return only the first n rows. This can improve the performance of your queries, because a smaller result set is returned. It can also reduce the traffic between the server and the clients.
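For example, to fetch only the ten most recent hires from the employee table used above:

```sql
-- TOP limits the result set directly:
SELECT TOP 10 fname, hire_date
FROM employee
ORDER BY hire_date DESC

-- SET ROWCOUNT achieves the same; remember to reset it afterwards:
SET ROWCOUNT 10
SELECT fname, hire_date FROM employee ORDER BY hire_date DESC
SET ROWCOUNT 0
```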