A view is a virtual table created by a stored SQL statement that can span multiple tables. Views can be used as a method of security within your database, and they provide a simpler front end to a user querying the data.
A view is a virtual table that consists of columns from one or more tables; it derives its data from these tables, which are referred to as base or underlying tables. Once you have defined a view, you can reference it like any other table in a database.
A view serves as a security mechanism. This ensures that users are able to retrieve and modify only the data visible to them. Users cannot see or access the remaining data in the underlying tables. A view also serves as a mechanism to simplify query execution. Complex queries can be stored in the form of a view, and data from the view can be extracted using simple queries.
Views ensure the security of data by restricting access to the following data:
- Specific rows of the tables.
- Specific columns of the tables.
- Specific rows and columns of the tables.
- Rows fetched by using joins.
- Statistical summaries of data in given tables.
- Subsets of another view or a subset of views and tables.
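As an illustration, the following view (assuming a hypothetical Employees table with Department and Salary columns) restricts access to specific rows and columns:

```sql
-- Hypothetical Employees table: the view hides the Salary column
-- and exposes only the rows for the Sales department.
CREATE VIEW vwSalesEmployees
AS
SELECT EmployeeId, FirstName, LastName, Phone
FROM Employees
WHERE Department = 'Sales'
```

Users granted SELECT on vwSalesEmployees can query it without having any permission on the underlying Employees table.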
Some common examples of views are:
- A subset of rows or columns of a base table.
- A union of two or more tables.
- A join of two or more tables.
- A statistical summary of base tables.
- A subset of another view, or some combination of views and base table.
A view can be created by using the CREATE VIEW statement.
CREATE VIEW view_name
[(column_name [, column_name]...)]
AS select_statement [WITH CHECK OPTION]
View_name specifies the name of the view and must follow the rules for identifiers.
Column_name specifies the name of the column to be used in view. If the column_name option is not specified, then the view is created with the same columns as specified in the select_statement.
CREATE VIEW vwCustomer
AS
SELECT CustomerId, CompanyName, Phone
FROM Customers
ii. Getting Information on a View:
SQL Server stores information on the view in the following system tables:
· SYSOBJECTS — stores the name of the view.
· SYSCOLUMNS — stores the names of the columns defined in the view.
· SYSDEPENDS — stores information on the view dependencies.
· SYSCOMMENTS — stores the text of the view definition.
There are also certain system-stored procedures that help retrieve information on views. The sp_help system-stored procedure displays view-related information. It displays the view definition, provided the name of the view is given as its parameter.
sp_helptext vwCustomer
Displays the definition of the vwCustomer view.
If a view is created with the WITH ENCRYPTION option, its definition cannot be viewed by using the sp_helptext system stored procedure.
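As a sketch (the view name here is illustrative), creating a view WITH ENCRYPTION hides its definition from sp_helptext:

```sql
CREATE VIEW vwCustomerSecure
WITH ENCRYPTION
AS
SELECT CustomerId, Phone
FROM Customers
GO
-- sp_helptext now reports that the object's text is encrypted
EXEC sp_helptext 'vwCustomerSecure'
```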
iii. Altering Views:
You can modify a view without dropping it. This ensures that the permission on the view is also not lost. You can modify a view without affecting its dependent objects, such as triggers and stored procedures. You modify a view using the ALTER VIEW statement.
ALTER VIEW vwCustomer
AS
SELECT CustomerId, CompanyName, Phone, Fax
FROM Customers
iv. Dropping Views:
You can drop a view from a database by using the DROP VIEW statement. When a view is dropped, it has no effect on the underlying tables. Dropping a view removes its definition and all the permissions assigned to it. Furthermore, if you query any views that reference a dropped view, you receive an error message. However, dropping a table that a view references does not drop the view automatically; you must drop the view explicitly.
DROP VIEW view_name
View_name is the name of the view to be dropped.
You can drop multiple views with a single DROP VIEW statement. The names of the views that need to be dropped are separated by commas in the DROP VIEW statement.
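For example, assuming both views exist, the following statement drops two views at once:

```sql
DROP VIEW vwCustomer, vwOrders
```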
v. Renaming Views:
You can rename a view without having to drop it. This ensures that the permissions on the view are not lost.
· A view can be renamed by using the sp_rename system stored procedure.
sp_rename 'old_viewname', 'new_viewname'
Old_viewname is the view that needs to be renamed.
New_viewname is the new name of the view.
sp_rename 'vwCustomers', 'vwCustomerDetails'
Renames vwCustomers to vwCustomerDetails.
vi. Manipulating Data Using Views:
You can modify data through a view in only one of its base tables, even though the view may be derived from multiple underlying tables. For example, a view vwNew that is derived from two tables, table1 and table2, can be used to modify either table1 or table2 in a single statement. A single data modification statement that affects both underlying tables is not permitted.
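For example, an update issued through a view is permitted as long as it touches columns from only one base table (vwCustomer here is the single-table view created earlier, so the update is allowed):

```sql
UPDATE vwCustomer
SET Phone = '555-0100'
WHERE CustomerId = 1
```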
vii. When to Use a View:
You need to have a goal in mind when creating a view. There are a number of scenarios where you will want to look for a view as a solution.
- To hide the complexity of the underlying database schema, or customize the data and schema for a set of users.
- To control access to rows and columns of data- a view can be a useful tool to provide both column and row level security for a database.
- To aggregate data for performance- if you need to roll up or aggregate data from a set of tables.
viii. Modifying Data through a View:
- Use INSTEAD OF triggers with logic to support INSERT, UPDATE and DELETE statements.
- Use updatable partitioned views that modify one or more member tables.
Advantages of views:
1. View the data without storing it in another object.
2. Restricts the view of a table, i.e., can hide some of the columns in the table.
3. Joins two or more tables and shows them as one object to the user.
4. Restricts access to a table so that nobody can insert rows into the table.
Disadvantages of views:
1. DML operations cannot always be used on a view.
2. When a table is dropped, the view becomes inactive; it depends on the table objects.
3. It is an object, so it occupies space.
The SQL JOIN statement is used to combine the data contained in two relational database tables based upon a common attribute.
Types of Join:
- Inner Join
- Outer Join
- Left Outer Join
- Right Outer Join
- Full Outer Join
- Cross Join
1. Inner Join: Inner Join is the default join type in SQL Server. It uses logical operators such as =, <, and > to match the records in two tables. Inner Join includes equi joins and natural joins.
Self Join: Self join joins a single sql database table to itself.
Equi Join: Equi Join returns all the columns from both tables and filters the records satisfying the matching condition specified in the ON clause of the inner join query.
2. Outer Join: Outer Join has further 3 sub categories as left, right and full. Outer Join uses these category names as keywords that can be specified in the FROM clause.
o Left Outer Join: Left Outer Join returns all the rows from the table specified first in the Left Outer Join clause. If a row in the left table has no matching record in the right-side table, null values are returned for the right-side columns of that row.
o Right Outer Join: Right Outer Join is exactly the reverse method of Left Outer Join. It returns all the rows from right table and returns null values for the rows having no match in the left joined table.
o Full Outer Join: Full outer join returns all the rows from both left and right joined tables. If there is any match missing from the left table then it returns null column values for left side table and if there is any match missing from right table then it returns null value columns for the right side table.
3. Cross Join: Cross join works as a Cartesian product of the rows of both the left and right tables. It combines each row of the left table with all the rows of the right table.
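Assuming hypothetical Customers and Orders tables related on CustomerId, the join types above can be sketched as:

```sql
-- Inner join: only customers that have at least one order
SELECT c.CompanyName, o.OrderId
FROM Customers c INNER JOIN Orders o ON c.CustomerId = o.CustomerId

-- Left outer join: all customers; OrderId is NULL where no order exists
SELECT c.CompanyName, o.OrderId
FROM Customers c LEFT OUTER JOIN Orders o ON c.CustomerId = o.CustomerId

-- Full outer join: all rows from both sides, with NULLs where no match
SELECT c.CompanyName, o.OrderId
FROM Customers c FULL OUTER JOIN Orders o ON c.CustomerId = o.CustomerId

-- Cross join: Cartesian product of the two tables
SELECT c.CompanyName, o.OrderId
FROM Customers c CROSS JOIN Orders o
```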
The SQL CREATE TRIGGER statement provides a way for the database management system to actively control, monitor, and manage a group of tables whenever an insert, update, or delete operation is performed. The statements specified in the SQL trigger are executed each time an SQL insert, update, or delete operation is performed. An SQL trigger may call stored procedures or user-defined functions to perform additional processing when the trigger is executed.
Unlike stored procedures, an SQL trigger cannot be directly called from an application. Instead, an SQL trigger is invoked by the database management system on the execution of a triggering insert, update, or delete operation. The definition of the SQL trigger is stored in the database management system and is invoked by the database management system, when the SQL table, that the trigger is defined on, is modified.
A trigger is a database object just like a stored procedure, or we can say it is a special kind of stored procedure which fires after (or before) a specified language event executes. More specifically, it is attached to a Table, View, or Database schema for tracking operations on them. The main difference between a trigger and a stored procedure is that the former is attached to a table or view and is fired only when an INSERT, UPDATE, and/or DELETE occurs, while a stored procedure executes at any time when it is called.
CREATE TRIGGER myTrigger ON Employee FOR INSERT
AS
IF EXISTS (
    SELECT 1 FROM Inserted i JOIN Employee e
    ON i.ID = e.ID)
    PRINT 'Inserted rows match existing employee IDs' -- the action body was omitted in the original; PRINT is a placeholder
Types of Triggers
There are some added types in SQL Server 2005 for triggering actions:
1. DML Triggers (AFTER and INSTEAD OF Triggers)
2. DDL Triggers
3. CLR Triggers
- DML Triggers:
These triggers are fired when a Data Manipulation Language (DML) event takes place. They are attached to a Table or View and are fired only when an INSERT, UPDATE and/or DELETE event occurs.
- INSTEAD OF Triggers:
INSTEAD-OF triggers are objects that will execute instead of data manipulation statements in the database engine. For example, attaching an INSTEAD-OF INSERT trigger to a table will tell the database engine to execute that trigger instead of executing the statement that would insert values into that table.
Why INSTEAD OF Trigger?
INSTEAD-OF triggers are very powerful objects in SQL Server. They allow the developer to divert the database engine to do something different than what the user is trying to do. An example of this would be to add an INSTEAD-OF trigger to any table in your database that rolls back transactions on tables that you do not want modified. You must be careful when using this method because the INSTEAD-OF trigger will need to be disabled before any specified modifications can occur to this table.
Perhaps a more functional reason to use an INSTEAD-OF trigger would be to add the trigger to a view. Adding an INSTEAD-OF trigger to a view essentially allows you to create updateable views. Updateable views allow you to totally abstract your database schema so you can potentially design a system in such a way that your database developers do not have to worry about the OLTP database schema and instead rely upon a standard set of views for data modifications.
CREATE VIEW order_info AS
SELECT c.customer_id, c.cust_last_name, c.cust_first_name,
o.order_id, o.order_date, o.order_status
FROM customers c, orders o
WHERE c.customer_id = o.customer_id;
Normally this view would not be updatable, because the primary key of the orders table (order_id) is not unique in the result set of the join view. To make this view updatable, create an INSTEAD OF trigger on the view to process INSERT statements directed to the view. The PL/SQL trigger implementation is shown below.
CREATE OR REPLACE TRIGGER order_info_insert
  INSTEAD OF INSERT ON order_info
DECLARE
  duplicate_info EXCEPTION;
  PRAGMA EXCEPTION_INIT (duplicate_info, -00001);
BEGIN
  INSERT INTO customers (customer_id, cust_last_name, cust_first_name)
  VALUES (:new.customer_id, :new.cust_last_name, :new.cust_first_name);
  INSERT INTO orders (order_id, order_date, customer_id)
  VALUES (:new.order_id, :new.order_date, :new.customer_id);
EXCEPTION
  WHEN duplicate_info THEN
    RAISE_APPLICATION_ERROR (num=> -20107, msg=> 'Duplicate customer or order ID');
END;
You can now insert into both base tables through the view (as long as all NOT NULL columns receive values):
INSERT INTO order_info VALUES
(999, 'Smith', 'John', 2500, '13-MAR-2001', 0);
- AFTER Triggers:
AFTER triggers are executed after the action of the INSERT, UPDATE, or DELETE statement is performed. This was the only option available in earlier versions of Microsoft SQL Server, and AFTER triggers can be specified only on tables. Here is a sample trigger creation statement on the UserTable table.
------ Creating a DML trigger in T-SQL -------
SET NOCOUNT ON
CREATE TABLE UserTable (User_ID int IDENTITY, User_Name varchar (30), Type varchar (10))
GO
CREATE TRIGGER tr_User_INSERT
ON UserTable
FOR INSERT
AS
PRINT GETDATE ()
GO
INSERT UserTable (User_Name, Type) VALUES ('James', 'ADMIN')
------ Result ---------------
Apr 30 2007 7:04AM
DDL triggers are new to SQL Server 2005. This type of trigger, like a regular trigger, fires stored procedures in response to an event. DDL triggers fire in response to a variety of Data Definition Language (DDL) events, which are specified by the T-SQL statements that begin with the keywords CREATE, ALTER, and DROP. Certain system stored procedures that perform DDL-like operations can also fire them. They are used for administrative tasks like auditing and regulating database operations.
A CLR trigger can be any of the above, e.g. it can be a DDL or DML trigger, or can also be an AFTER or INSTEAD OF trigger. Here we need to execute one or more methods written in managed code that are members of an assembly created in the .NET Framework. Again, that assembly must be deployed in SQL Server 2005 using the CREATE ASSEMBLY statement.
The Microsoft.SqlServer.Server Namespace contains the required classes and enumerations for this objective.
Steps for creating CLR Triggers:
The following are required steps for creating a CLR-Trigger of DML (After trigger) type for Insert action. This database Object is executed as the result of a user action against a table i.e. for an INSERT statement.
1. Create a .NET class implementing the triggering action.
2. Build an assembly (.DLL) from that class.
3. Enable the CLR environment in the database.
4. Register the assembly in SQL Server.
5. Create the CLR trigger using that assembly.
A clustered index stores the actual data rows at the leaf level of the index. Returning to the example above, that would mean that the entire row of data associated with the primary key value of 123 would be stored in that leaf node. An important characteristic of the clustered index is that the indexed values are sorted in either ascending or descending order. As a result, there can be only one clustered index on a table or view. In addition, data in a table is sorted only if a clustered index has been defined on a table.
Note: A table that has a clustered index is referred to as a clustered table. A table that has no clustered index is referred to as a heap.
Unlike a clustered index, the leaf nodes of a non-clustered index contain only the values from the indexed columns and row locators that point to the actual data rows, rather than the data rows themselves. This means that the query engine must take an additional step in order to locate the actual data.
A row locator’s structure depends on whether it points to a clustered table or to a heap. If referencing a clustered table, the row locator points to the clustered index, using the value from the clustered index to navigate to the correct data row. If referencing a heap, the row locator points to the actual data row.
Non-clustered indexes do not sort the underlying data rows the way clustered indexes do; however, you can create more than one non-clustered index per table or view. SQL Server 2005 supports up to 249 non-clustered indexes, and SQL Server 2008 supports up to 999. This certainly doesn’t mean you should create that many indexes. Indexes can both help and hinder performance, as I explain later in the article.
In addition to being able to create multiple non-clustered indexes on a table or view, you can also add included columns to your index. This means that you can store at the leaf level not only the values from the indexed column, but also the values from non-indexed columns. This strategy allows you to get around some of the limitations on indexes. For example, you can include non-indexed columns in order to exceed the size limit of indexed columns (900 bytes in most cases).
In addition to an index being clustered or non-clustered, it can be configured in other ways:
- Composite index: An index that contains more than one column. In both SQL Server 2005 and 2008, you can include up to 16 columns in an index, as long as the index doesn’t exceed the 900-byte limit. Both clustered and non-clustered indexes can be composite indexes.
- Unique Index: An index that ensures the uniqueness of each value in the indexed column. If the index is a composite, the uniqueness is enforced across the columns as a whole, not on the individual columns. For example, if you were to create an index on the FirstName and LastName columns in a table, the names together must be unique, but the individual names can be duplicated.
A unique index is automatically created when you define a primary key or unique constraint:
- Primary key: When you define a primary key constraint on one or more columns, SQL Server automatically creates a unique, clustered index if a clustered index does not already exist on the table or view. However, you can override the default behavior and define a unique, non-clustered index on the primary key.
- Unique: When you define a unique constraint, SQL Server automatically creates a unique, non-clustered index. You can specify that a unique clustered index be created if a clustered index does not already exist on the table.
- Covering index: A type of index that includes all the columns that are needed to process a particular query. For example, your query might retrieve the FirstName and LastName columns from a table, based on a value in the ContactID column. You can create a covering index that includes all three columns.
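Using a hypothetical Contacts table matching the example above, these index types can be sketched as:

```sql
-- Composite, unique index: the FirstName/LastName pair must be unique
CREATE UNIQUE NONCLUSTERED INDEX IX_Contacts_Name
ON Contacts (FirstName, LastName)

-- Covering index: ContactID is the key; FirstName and LastName are
-- stored at the leaf level, so the example query needs no extra lookup
CREATE NONCLUSTERED INDEX IX_Contacts_Covering
ON Contacts (ContactID)
INCLUDE (FirstName, LastName)
```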
Microsoft SQL Server supports two types of indexes:
- Clustered indexes define the physical sorting of a database table’s rows in the storage media. For this reason, each database table may have only one clustered index. If a PRIMARY KEY constraint is created for a database table and no clustered index currently exists for that table, SQL Server automatically creates a clustered index on the primary key.
- Non-clustered indexes are created outside of the database table and contain a sorted list of references to the table itself. SQL Server 2000 supports a maximum of 249 non-clustered indexes per table. However, it’s important to keep in mind that non-clustered indexes slow down the data modification and insertion process, so indexes should be kept to a minimum.
One of the hardest tasks facing database administrators is the selection of appropriate columns for non-clustered indexes. You should consider creating non-clustered indexes on any columns that are frequently referenced in the WHERE clauses of SQL statements. Other good candidates are columns referenced by JOIN and GROUP BY operations.
You may wish to also consider creating non-clustered indexes that cover all of the columns used by certain frequently issued queries. These queries are referred to as “covered queries” and experience excellent performance gains.
Indexes allow you to speed query performance on commonly used columns and improve the overall processing speed of your database.
Although indexes can reduce access time significantly, they can also have adverse effects on performance. Before you create indexes, consider the effects of multiple indexes on disk space and processing time:
· Each index requires storage or disk space. The exact amount depends on the size of the table and the size and number of columns in the index.
· Each INSERT or DELETE operation performed on a table requires additional updating of each index on that table. This is also true for each UPDATE operation that changes the value of an index key.
· The LOAD utility rebuilds or appends to any existing indexes. The index free space MODIFIED BY parameter can be specified on the LOAD command to override the index PCTFREE value used when the index was created.
· Each index potentially adds an alternative access path for a query for the optimizer to consider, which increases the compilation time.
Choose indexes carefully to address the needs of the application program.
Difference B/W Clustered and Non-Clustered Index:
There are clustered and non-clustered indexes. A clustered index is a special type of index that reorders the way records in the table are physically stored. Therefore, a table can have only one clustered index. The leaf nodes of a clustered index contain the data pages.
A non-clustered index is a special type of index in which the logical order of the index does not match the physical stored order of the rows on disk. The leaf node of a non-clustered index does not consist of the data pages. Instead, the leaf nodes contain index rows.
- Every index increases the time it takes to perform INSERTs, UPDATEs, and DELETEs, so the number of indexes should be kept small. Try to use a maximum of 4-5 indexes on one table, not more. If you have a read-only table, then the number of indexes may be increased.
- Keep your indexes as narrow as possible. This reduces the size of the index and reduces the number of reads required to read the index.
- Try to create indexes on columns that have integer values rather than character values.
- If you create a composite (multi-column) index, the order of the columns in the key is very important. Try to order the columns in the key so as to enhance selectivity, with the most selective columns leftmost in the key.
- If you want to join several tables, try to create surrogate integer keys for this purpose and create indexes on their columns.
- Create surrogate integer primary key (identity for example) if your table will not have many insert operations.
- Clustered indexes are preferable to non-clustered indexes if you need to select by a range of values or you need to sort the result set with GROUP BY or ORDER BY.
- If your application will be performing the same query over and over on the same table, consider creating a covering index on the table.
- You can use the SQL Server Profiler Create Trace Wizard with “Identify Scans of Large Tables” trace to determine which tables in your database may need indexes. This trace will show which tables are being scanned by queries instead of using an index.
- You can use the sp_MSforeachtable undocumented stored procedure to rebuild all indexes in your database. Try to schedule it to execute during CPU idle time and slow production periods.
sp_MSforeachtable @command1="print '?' DBCC DBREINDEX ('?')"
USER DEFINED FUNCTIONS:
UDFs support a wide range of functionality, and just because you can't commit write changes to the database, it doesn't mean that you should turn a blind eye to them. Here's a list of the statements that you can include in the body of a UDF:
- Flow control statements such as IF…ELSE, CASE, GOTO labels, etc.
- UPDATE, INSERT and DELETE commands to modify the contents of tables which are local to the UDF and have been created using SQL Server 2000's new TABLE variable type.
- The EXECUTE command to execute stored procedures and extended stored procedures.
- The SET and SELECT statements to assign a value to a variable.
- SELECT statements to retrieve field values from tables into local variables.
- DECLARE CURSOR, OPEN, CLOSE, DEALLOCATE and FETCH NEXT FROM statements, as long as the cursor is both created and disposed of within the scope of the UDF. Note that actual rows can’t be returned; they must be explicitly returned into type-matching variables using the FETCH NEXT INTO statement.
- DECLARE statements to create variables local (in terms of scope) to the UDF.
- Internally defined SQL variables which are prepended with “@@”, such as @@ERROR and @@FETCH_STATUS.
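As a sketch of several of these statements working together (using the UserTable from the trigger example above), a scalar UDF can declare a local TABLE variable, modify it with DML, and use IF...ELSE flow control:

```sql
CREATE FUNCTION dbo.CountAdmins ()
RETURNS int
AS
BEGIN
    DECLARE @Result int
    -- A TABLE variable is local to the UDF, so INSERT into it is allowed
    DECLARE @Work TABLE (UserName varchar (30))

    INSERT INTO @Work (UserName)
    SELECT User_Name FROM UserTable WHERE Type = 'ADMIN'

    SELECT @Result = COUNT(*) FROM @Work

    -- Flow control statements such as IF...ELSE are permitted
    IF @Result IS NULL
        SET @Result = 0

    RETURN @Result
END
```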
Types of User Defined Functions
There are three different types of User Defined Functions. Each type refers to the data being returned by the function. Scalar functions return a single value. Inline Table functions return a single table variable that was created by a select statement. The final UDF type is the Multi-statement Table Function. This function returns a table variable whose structure was created by hand, similar to a Create Table statement. It is useful when complex data manipulation inside the function is required.
Our first User Defined Function will accept a date time, and return only the date portion. Scalar functions return a value. From inside Query Analyzer, enter:
CREATE FUNCTION dbo.DateOnly (@InDateTime datetime)
RETURNS varchar (10)
AS
BEGIN
DECLARE @MyOutput varchar (10)
SET @MyOutput = CONVERT (varchar (10), @InDateTime, 101)
RETURN @MyOutput
END
To call our function, execute: SELECT dbo.DateOnly (GETDATE ())
Notice the User Defined Function must be prefaced with the owner name, dbo in this case. In addition, GETDATE can be used as the input parameter, but could not be used inside the function itself. Other built-in SQL functions that cannot be used inside a User Defined Function include: RAND, NEWID, @@CONNECTIONS, @@TIMETICKS, and @@PACK_SENT. In general, any built-in function that is non-deterministic cannot be used.
The statement begins by supplying a function name and input parameter list. In this case, a date time value will be passed in. The next line defines the type of data the UDF will return. Between the BEGIN and END block is the statement code. Declaring the output variable was for clarity only. This function should be shortened to:
CREATE FUNCTION testDateOnly (@InDateTime datetime)
RETURNS varchar (10)
AS
BEGIN
RETURN CONVERT (varchar (10), @InDateTime, 101)
END
Inline Table UDFs
These User Defined Functions return a table variable that was created by a single select statement. They are almost like a simply constructed non-updatable view, but have the benefit of accepting input parameters.
This next function looks up all the employees in the pubs database whose first name starts with the letter passed in as a parameter. In Query Analyzer, enter and run:
CREATE FUNCTION dbo.LookByFName (@FirstLetter char (1))
RETURNS TABLE
AS
RETURN SELECT *
FROM employee
WHERE LEFT (fname, 1) = @FirstLetter
To use the new function, enter:
SELECT * FROM dbo.LookByFName ('A')
All the rows having a first name starting with A were returned. The return is a Table Variable, not to be confused with a temporary table. Table variables are new in SQL 2000. They are a special data type whose scope is limited to the process that declared it. Table variables are stated to have performance benefits over temporary tables. None of my personal testing has found this result though.
Multi Statement UDFs
Multi Statement User Defined Functions are very similar to Stored Procedures. They both allow complex logic to take place inside the function. There are a number of restrictions unique to functions though. The Multi Statement UDF will always return a table variable–and only one table variable. There is no way to return multiple result sets. In addition, a User Defined Function cannot call a Stored Procedure from inside itself. They also cannot execute dynamic SQL. Remember also, that UDFs cannot use non-deterministic built in functions. So GETDATE and RAND cannot be used. Error handling is restricted. RAISERROR and @@ERROR are invalid from inside User Defined Functions. Like other programming languages, the purpose of a User Defined Function is to create a stand-alone code module to be reused over and over by the global application.
For a Multi Statement test, we will create a modified version of the LookByFName function. This new function will accept the same input parameter. But rather than return a table from a simple select, a specific table will be created, and data in it will be manipulated prior to the return:
CREATE FUNCTION dbo.multi_test (@FirstLetter char (1))
RETURNS @Result TABLE
(
fname varchar (20),
hire_date datetime,
on_probation char (1)
)
AS
BEGIN
INSERT INTO @Result (fname, hire_date)
SELECT fname, hire_date
FROM employee
WHERE LEFT (fname, 1) = @FirstLetter

UPDATE @Result SET on_probation = 'N'

UPDATE @Result SET on_probation = 'Y'
WHERE hire_date < '01/01/1991'

RETURN
END
To use the new function, execute:
SELECT * FROM dbo.multi_test ('A')
With the new Multi Statement Function, we can manipulate data like a Stored Procedure, but use it in statement areas like a View.
For example, only specific columns can be returned.
SELECT fname FROM dbo.multi_test ('A')
The function can also be joined like a view:
SELECT e.lname, f.fname
FROM employee e INNER JOIN dbo.multi_test ('A') f ON e.fname = f.fname
Advantages of User Defined Functions
Before SQL 2000, User Defined Functions (UDFs) were not available. Stored Procedures were often used in their place. When advantages or disadvantages of User Defined Functions are discussed, the comparison is usually to Stored Procedures.
One of the advantages of User Defined Functions over Stored Procedures is the fact that a UDF can be used in a Select, Where, or Case statement. They can also be used to create joins. In addition, User Defined Functions are simpler to invoke than Stored Procedures from inside another SQL statement.
Disadvantages of User Defined Functions
User Defined Functions cannot be used to modify base table information. The DML statements INSERT, UPDATE, and DELETE cannot be used on base tables. Another disadvantage is that SQL functions that return non-deterministic values are not allowed to be called from inside User Defined Functions. GETDATE is an example of a non-deterministic function. Every time the function is called, a different value is returned. Therefore, GETDATE cannot be called from inside a UDF you create.
SQL Optimization Tips:
- Use views and stored procedures instead of heavy-duty queries. This can reduce network traffic, because your client will send the server only a stored procedure or view name (perhaps with some parameters) instead of the text of a large, heavy-duty query. This can also be used to facilitate permission management, because you can restrict user access to table columns they should not see.
- Try to use constraints instead of triggers, whenever possible. Constraints are much more efficient than triggers and can boost performance.
- Use table variables instead of temporary tables. Table variables require less locking and logging resources than temporary tables, so table variables should be used whenever possible. Note that table variables were introduced in SQL Server 2000.
- Try to use the UNION ALL statement instead of UNION, whenever possible. The UNION ALL statement is much faster than UNION, because the UNION ALL statement does not look for duplicate rows, whereas the UNION statement does, whether or not they exist.
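For example (assuming hypothetical Customers and Suppliers tables that both have a City column):

```sql
-- UNION ALL simply concatenates the two result sets;
-- UNION would additionally sort and remove duplicate rows
SELECT City FROM Customers
UNION ALL
SELECT City FROM Suppliers
```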
- Try to avoid using the DISTINCT clause, whenever possible. Because using the DISTINCT clause will result in some performance degradation, you should use this clause only when it is necessary.
- Try to avoid using SQL Server cursors, whenever possible. SQL Server cursors can result in some performance degradation in comparison with select statements. Try to use correlated sub-query or derived tables, if you need to perform row-by-row operations.
- Try to avoid the HAVING clause, whenever possible. The HAVING clause is used to restrict the result set returned by the GROUP BY clause. When you use GROUP BY with the HAVING clause, the GROUP BY clause divides the rows into sets of grouped rows and aggregates their values, and then the HAVING clause eliminates undesired aggregated groups. In many cases, you can write your select statement so that it contains only WHERE and GROUP BY clauses, without a HAVING clause. This can improve the performance of your query.
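For example, using the UserTable from the trigger example earlier, the same result can usually be obtained by filtering before grouping:

```sql
-- Slower: HAVING filters after the groups have been built
SELECT Type, COUNT(*) FROM UserTable
GROUP BY Type
HAVING Type = 'ADMIN'

-- Faster: WHERE filters the rows before grouping
SELECT Type, COUNT(*) FROM UserTable
WHERE Type = 'ADMIN'
GROUP BY Type
```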
- If you need to return a table's total row count, you can use an alternative to the SELECT COUNT(*) statement. Because SELECT COUNT(*) makes a full table scan to return the total row count, it can take a long time on a large table. Instead, you can use the sysindexes system table: its ROWS column contains the total row count for each table in your database. So, you can use the following select statement instead of SELECT COUNT(*):
SELECT rows FROM sysindexes WHERE id = OBJECT_ID('table_name') AND indid < 2
This can improve the speed of such queries severalfold.
- Include a SET NOCOUNT ON statement in your stored procedures to stop the message indicating the number of rows affected by each T-SQL statement. This can reduce network traffic, because the client will no longer receive that message for every statement.
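A minimal sketch of a procedure using it, assuming the Customers table described earlier (the procedure name is illustrative):

```sql
-- SET NOCOUNT ON suppresses the "(n row(s) affected)" message
-- that would otherwise be sent to the client for each statement.
CREATE PROCEDURE usp_GetCustomerPhones
AS
BEGIN
    SET NOCOUNT ON;
    SELECT CustomerId, Phone
    FROM Customers;
END
```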
- Use SELECT statements with the TOP keyword, or the SET ROWCOUNT statement, if you only need the first n rows. This can improve the performance of your queries because a smaller result set is returned, and it also reduces the traffic between the server and the clients.
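Both forms can be sketched as follows (again assuming the Customers table):

```sql
-- Return only the first 10 customers.
SELECT TOP 10 CustomerId, CompanyName
FROM Customers
ORDER BY CustomerId;

-- Equivalent with SET ROWCOUNT; note that it limits every subsequent
-- statement in the session until it is reset to 0.
SET ROWCOUNT 10;
SELECT CustomerId, CompanyName FROM Customers ORDER BY CustomerId;
SET ROWCOUNT 0;
```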
Assembly
An assembly is the building block of a .NET application: it provides all the execution information required by the CLR.
Functions of an Assembly:
- It contains the IL code that is executed by the CLR.
- It forms a security boundary.
- It allows side-by-side execution of multiple versions of the same assembly.
Features of assemblies
· Assemblies are self-describing.
· Version dependencies are recorded inside an assembly manifest.
· Assemblies can be loaded side-by-side.
· Application isolation is ensured using application domains.
· Installation can be as easy as copying the files that belong to an assembly.
Assemblies can be private or shared.
An assembly consists of the following parts:
1. Manifest
It contains information about the assembly, such as the identity of the assembly and its scope, and it also contains the security permissions requested by the assembly.
2. Type metadata
It contains data about each object: the name of the object, its base class, the interfaces implemented by the object, and the types and members of the object.
3. IL code
The compiled version of the application's program code.
4. Resources
The resources used by that assembly.
Namespace and Assembly
An assembly is a physical grouping of logical units, whereas a namespace groups classes logically. A namespace can span multiple assemblies.
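A small sketch (with hypothetical names) of one namespace spanning two assemblies:

```csharp
// File compiled into Company.Data.Core.dll (hypothetical assembly name):
namespace Company.Data
{
    public class Customer { }
}

// File compiled into Company.Data.Orders.dll (hypothetical assembly name):
// the same namespace continues in a different physical assembly.
namespace Company.Data
{
    public class Order { }
}
```

To client code, Company.Data.Customer and Company.Data.Order appear to live in a single namespace even though they ship in separate DLLs.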
Types of an Assembly
There are two types
1. Private Assembly
This is used by a single application, and the assembly is stored in that application's installation folder.
2. Shared Assembly
This is used by multiple applications and is stored in the Global Assembly Cache (GAC). We have to give a cryptographically strong name to these shared assemblies.
A strong name is a .NET assembly name combined with its version number and other information to uniquely identify the assembly. This allows multiple versions of the same assembly to peacefully co-exist in the global assembly cache, where shared assemblies are typically stored.
There are three main goals for having a strong name:
1. Name Uniqueness:
The names of shared assemblies should be globally unique.
2. Prevent Name Spoofing:
Developers do not want someone else releasing a version of their assemblies and passing it off as their own. Using a strong name avoids this problem, because only the holder of the private key can sign the assembly.
3. Provide identity on Reference:
While resolving reference to an assembly, strong name guarantees that the loaded assembly came from the expected publisher.
Use of strong names
Strong names are required to store shared assemblies in the global assembly cache (GAC). This is because the GAC allows multiple versions of the same assembly to reside on your system simultaneously, so that each application can find and use its own version of your assembly. This helps avoid DLL Hell, where applications that may be compiled to different versions of your assembly could potentially break because they are all forced to use the same version of your assembly.
Another reason to use strong names is to make it difficult for hackers to spoof your assembly, in other words, replace or inject your assembly with a virus or malicious code.
Strong name key file
A strong name key file has a .snk extension and contains a unique public-private key pair. You use the strong name key file to digitally sign your assembly (see below). Note that this type of file is not secure, as the private key in a .snk file can be easily compromised.
For added protection, Visual Studio can encrypt a strong name key file, which produces a file with the .pfx (Personal Information exchange) extension. The .pfx file is more secure because whenever someone attempts to use the encrypted key, she will be prompted for the password.
Steps for Giving strong name to DLL
- Open .net command prompt.
- Go to the folder containing DLL.
- Type sn -k test.snk, you can use any file name instead of test. This will create test.snk file in that folder.
- Open the assemblyinfo.cs file of project.
- In the AssemblyInfo.cs file, reference the key file path in this attribute: [assembly: AssemblyKeyFile(@”E:\hemant\practice\HP\bin\Debug\HP.snk”)]
- Build the application; your DLL now has a strong name.
Signing an Assembly
The first thing that you need to do is create a public/private key pair that will be used to encrypt the assembly.
To create a key/pair you can use the strong name utility:
sn -k <file name>
This will create a file containing the key pair, using RSA encryption by default. You should keep this file safe, as you will need it for signing future versions of your assemblies if you wish to keep them authenticated.
Now, there are a few different methods for strongly naming an assembly, either from within Visual Studio or through the command line.
If you wish to give developers access to the private key you can simply add in an attribute to the project. This code should go within the AssemblyInfo.cs file.
If you do not wish to give your developers access to your private key you can delay sign the assembly.
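A sketch of the relevant AssemblyInfo.cs attributes for delay signing (the key file name is illustrative):

```csharp
// Only the public key is embedded at build time; space is reserved
// in the assembly for the signature, which is applied later with "sn -R".
[assembly: AssemblyKeyFile(@"..\..\mypublickey.snk")]
[assembly: AssemblyDelaySign(true)]
```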
The assembly must then have verification turned off, otherwise it will not load on the developer's machine:
c:\> sn -Vr myassembly.dll
Finally, when the code is ready to be released you can fully sign the assembly using the sn.exe tool.
c:\> sn -R myassembly.dll mykey.snk
Signing and Delay Signing
Delayed signing refers to a technique of partially signing assemblies while they are in development. Signing an assembly certifies that assembly as coming from its manufacturer and prevents tampering and hijacking of that assembly. The signing is accomplished by using public key/private key encoding of parts of the assembly. The public key is embedded in the assembly and is used by third parties who want to reference the assembly. There are many more benefits to signing an assembly, but the main purpose of delayed signing is to allow a company to protect and control its private key and only use it during the packaging process. A delay-signed assembly can still be used like a signed assembly; you just cannot package and ship it.
Global Assembly Cache (GAC)
Shared assemblies must have globally unique names derived from their strong names (which include public keys that distinguish an assembly from other assemblies with the same file name). Strong names are Microsoft's solution to DLL hell.
The GAC, not the registry, is the store for this information. When an application loads, the Common Language Runtime examines the GAC to determine whether a version of a shared component exists that is compatible with the one referenced by the application.
The gacutil.exe that ships with .NET can be used to add or remove a shared assembly from the GAC.
- To add a shared assembly, from the command line enter:
gacutil.exe /i myassembly.dll
- To remove a shared assembly, from the command line enter:
gacutil.exe /u myassembly.dll
When viewing the GAC in Windows Explorer shfusion.dll is used to provide the user interface. You can locate and rename this DLL to view the GAC like any other folder.
During run time, the CLR invokes the JIT compiler to convert the MSIL into native code (i.e. executable binary code). When a function is called, the IL of the function's body is converted to native code just in time, so any part of the code that is not used during a particular run is never converted to native code. Once some IL has been converted to native code, the CLR reuses the same copy the next time it is needed, without re-compiling.
Types of JIT:
(1) Pre-JIT: It compiles the complete source code into native code in a single compilation, before the application is run.
(2) Econo-JIT: It compiles only those methods that are called at run time, and the compiled code is discarded when it is no longer required.
(3) Normal JIT (the standard JIT): It compiles only those methods that are called at run time and stores them in a cache, so subsequent calls reuse the cached native code.