Friday, December 24, 2010

JDBC



Overview

What is JDBC?
JDBC stands for "Java DataBase Connectivity". It is an API (Application Programming Interface), which consists of a set of Java classes, interfaces and exceptions and a specification to which both JDBC driver vendors and JDBC developers (like you) adhere when developing applications.
JDBC is a very popular data access standard. RDBMS (Relational Database Management System) vendors and third parties develop drivers that adhere to the JDBC specification. Because the drivers adhere to the specification, JDBC application developers can replace one driver with a better one without having to rewrite their application. Had they used a proprietary API provided by a particular RDBMS vendor, they would not have been able to change the driver and/or database without rewriting the complete application.
Call-level interfaces such as JDBC are programming interfaces that allow external access to SQL database manipulation and update commands. They allow the integration of SQL calls into a general programming environment by providing library routines that interface with the database. In particular, the Java-based JDBC API has a rich collection of routines that makes such an interface simple and intuitive.
Here is an easy way of visualizing what happens in a call level interface: You are writing a normal Java program. Somewhere in the program, you need to interact with a database. Using standard library routines, you open a connection to the database. You then use JDBC to send your SQL code to the database, and process the results that are returned. When you are done, you close the connection.
Why use JDBC?
JDBC is there only to help you (a Java developer) develop data access applications without having to learn and use proprietary APIs provided by different RDBMS vendors. You just have to learn JDBC and then you can be sure that you'll be able to develop data access applications which can access different RDBMS using different JDBC drivers.

JDBC-database interaction
The JDBC architecture is divided into two parts:
  • JDBC API (java.sql & javax.sql packages)
  • JDBC Driver Types
JDBC API
The JDBC API is available in the java.sql and javax.sql packages. Following are important JDBC classes, interfaces and exceptions in the java.sql package:
  • DriverManager - Loads JDBC drivers in memory. Can also be used to open connections to a data source.
  • Connection - Represents a connection with a data source. Is also used for creating Statement, PreparedStatement and CallableStatement objects.
  • Statement - Represents a static SQL statement. Can be used to retrieve ResultSet object(s).
  • PreparedStatement - A higher-performance alternative to Statement; represents a precompiled SQL statement.
  • CallableStatement - Represents a stored procedure. Can be used to execute stored procedures in an RDBMS that supports them.
  • ResultSet - Represents a database result set generated by using a SELECT SQL statement.
  • SQLException - An exception class which encapsulates database access errors.

 JDBC Driver Types

Type 1: JDBC-ODBC Bridge drivers

Type 1 drivers use a bridge technology to connect a Java client to an ODBC database system. The JDBC-ODBC Bridge from Sun and InterSolv is the only existing example of a Type 1 driver. Type 1 drivers require some sort of non-Java software to be installed on the machine running your code, and they are implemented using native code.

Type 2: Native-API partly Java drivers

Type 2 drivers use a native code library to access a database, wrapping a thin layer of Java around the native library. For example, with Oracle databases, the native access might be through the Oracle Call Interface (OCI) libraries that were originally designed for C/C++ programmers. Type 2 drivers are implemented with native code, so they may perform better than all-Java drivers, but they also add an element of risk, as a defect in the native code can crash the Java Virtual Machine.

Type 3: Net-protocol All-Java drivers

Type 3 drivers define a generic network protocol that interfaces with a piece of custom middleware. The middleware component might use any other type of driver to provide the actual database access. BEA's WebLogic product line (formerly known as WebLogic Tengah and before that as jdbcKona/T3) is an example. These drivers are especially useful for applet deployment, since the actual JDBC classes can be written entirely in Java and downloaded by the client on the fly.

Type 4: Native-protocol All-Java drivers

Type 4 drivers are written entirely in Java. They understand database-specific networking protocols and can access the database directly without any additional software. These drivers are also well suited for applet programming, provided that the Java security manager allows TCP/IP connections to the database server.
When you are selecting a driver, you need to balance speed, reliability, and portability. Different applications have different needs. A standalone, GUI-intensive program that always runs on a Windows NT system will benefit from the additional speed of a Type 2, native-code driver. An applet might need to use a Type 3 driver to get around a firewall. A servlet that is deployed across multiple platforms might require the flexibility of a Type 4 driver. Sun encourages developers to write and use Type 4 drivers in their applications.

JDBC URLs

A JDBC driver uses a JDBC URL to identify and connect to a particular database. These URLs are generally of the form:
jdbc:driver:databasename
The actual standard is quite fluid, however, as different databases require different information to connect successfully. For example, the Oracle JDBC-Thin driver uses a URL of the form:
jdbc:oracle:thin:@site:port:database
while the JDBC-ODBC Bridge uses:
jdbc:odbc:datasource;odbcoptions
The only requirement is that a driver be able to recognize its own URLs.
The first thing to do, of course, is to install Java, JDBC and the DBMS on your working machines. Since we want to interface with an Oracle database, we would need a driver for this specific database as well.

The JDBC-ODBC Bridge

The JDBC-ODBC Bridge ships with JDK 1.1 and the Java 2 SDK for Windows and Solaris systems. The bridge provides an interface between JDBC and database drivers written using Microsoft's Open DataBase Connectivity (ODBC) API. The bridge was originally written to allow the developer community to get up and running quickly with JDBC. Since the bridge makes extensive use of native method calls, it is not recommended for long-term or high-volume deployment.
The bridge is not a required component of the Java SDK, so most web browsers or other runtime environments do not support it. Using the bridge in an applet requires a browser with a JVM that supports the JDBC-ODBC Bridge, as well as a properly configured ODBC driver and data source on the client side.
The JDBC URL subprotocol odbc has been reserved for the bridge. Like most JDBC URLs, it allows programs to encode extra information about the connection. ODBC URLs are of the form:
jdbc:odbc:datasourcename[;attribute-name=attribute-value]*
For instance, a JDBC URL pointing to an ODBC data source named companydb with the CacheSize attribute set to 10 looks like this:
jdbc:odbc:companydb;CacheSize=10

Establishing A Connection

As we said earlier, before a database can be accessed, a connection must be opened between our program (client) and the database (server). This involves two steps:
  • Load the vendor specific driver
Why would we need this step? To ensure portability and code reuse, the API was designed to be as independent of the version or the vendor of a database as possible. Since different DBMS's have different behavior, we need to tell the driver manager which DBMS we wish to use, so that it can invoke the correct driver.
An Oracle driver is loaded using the following code:
      Class.forName("oracle.jdbc.driver.OracleDriver")
  • Make the connection
The java.sql.Connection object, which encapsulates a single connection to a particular database, forms the basis of all JDBC data-handling code. An application can maintain multiple connections, up to the limits imposed by the database system itself. Once the driver is loaded and ready for a connection to be made, you create a Connection object using the DriverManager.getConnection( ) method:
Connection con = DriverManager.getConnection("url", "user", "password");
You pass three arguments to getConnection( ): a JDBC URL, a database username, and a password. For databases that don't require explicit logins, the user and password strings should be left blank. When the method is called, the DriverManager queries each registered driver, asking if it understands the URL. If a driver recognizes the URL, it returns a Connection object.
The getConnection( ) method has two other, less frequently used variants: one takes a single String argument (a JDBC URL with no username or password, or with the username and password embedded in the URL itself), and the other takes a JDBC URL plus a java.util.Properties object of connection settings. Here is a concrete example of the three-argument form:
   Connection con = DriverManager.getConnection(
      "jdbc:oracle:thin:@dbaprod1:1521:SHR1_PRD", username, passwd);
The first string is the URL for the database, including the protocol (jdbc), the vendor (oracle), the driver (thin), the server (dbaprod1), the port number (1521), and a server instance (SHR1_PRD). The username and passwd are your username and password, the same as you would enter into SQL*Plus to access your account.
The connection returned in the last step is an open connection which we will use to pass SQL statements to the database. When a Connection has outlived its usefulness, you should be sure to explicitly close it by calling its close( ) method. This frees up any memory being used by the object, and, more importantly, it releases any other database resources the connection may be holding on to. These resources (cursors, handles, and so on) can be much more valuable than a few bytes of memory, as they are often quite limited. This is particularly important in applications such as servlets that might need to create and destroy thousands of JDBC connections between restarts. Because of the way some JDBC drivers are designed, it is not safe to rely on Java's garbage collection to remove unneeded JDBC connections.
In this code snippet, con is an open connection, and we will use it below.
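Because some drivers do not release resources reliably under garbage collection, a common defensive pattern is to close the connection in a finally block so it is released even when an exception is thrown. The following is a minimal sketch only; it reuses the Oracle thin URL shown above, and "scott"/"tiger" are placeholder credentials for your own account.
    Connection con = null;
    try {
        // Placeholder URL, username and password; substitute your own values
        con = DriverManager.getConnection(
            "jdbc:oracle:thin:@dbaprod1:1521:SHR1_PRD", "scott", "tiger");
        // ... use the connection here ...
    } catch (SQLException se) {
        System.out.println("SQL Exception: " + se.getMessage());
    } finally {
        // Always release the connection, even if an exception was thrown above
        if (con != null) {
            try { con.close(); } catch (SQLException ignore) { }
        }
    }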

Statements

Once you have created a Connection, you can begin using it to execute SQL statements. This is usually done via Statement objects. There are actually three kinds of statements in JDBC:
Statement
Represents a basic SQL statement
PreparedStatement
Represents a precompiled SQL statement, which can offer improved performance
CallableStatement
Allows JDBC programs complete access to stored procedures within the database itself (a brief sketch follows this list)
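CallableStatement is not used in the examples later in this article, so here is a minimal, hedged sketch. It assumes a hypothetical stored procedure RAISE_SALARY(emp_id IN NUMBER, amount IN NUMBER) already exists in the database; the {call ...} escape syntax is standard JDBC, but the procedure name and its parameters are purely illustrative.
    CallableStatement cstmt = con.prepareCall("{call RAISE_SALARY(?, ?)}");
    cstmt.setInt(1, 7369);        // hypothetical employee id
    cstmt.setDouble(2, 500.00);   // hypothetical raise amount
    cstmt.execute();
    cstmt.close();
If the procedure declared OUT parameters, they would be registered with registerOutParameter( ) before execution and read back afterwards with the corresponding getXXX( ) method.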

Statement
A JDBC Statement object is used to send your SQL statements to the DBMS, and should not be confused with an SQL statement. A JDBC Statement object is associated with an open connection, not with any single SQL statement. You can think of a JDBC Statement object as a channel sitting on a connection, passing one or more of your SQL statements (which you ask it to execute) to the DBMS.
An active connection is needed to create a Statement object. To get a Statement object, call the createStatement( ) method of a Connection:
    Statement stmt = con.createStatement() ;
At this point, a Statement object exists, but it does not have an SQL statement to pass on to the DBMS.
Once you have created a Statement, use it to execute SQL statements. A statement can either be a query that returns results or an operation that manipulates the database in some way. If you are performing a query, use the executeQuery( ) method of the Statement object:
ResultSet rs = stmt.executeQuery("SELECT * FROM CUSTOMERS");
Here we've used executeQuery( ) to run a SELECT statement. This call returns a ResultSet object that contains the results of the query.
Statement also provides an executeUpdate( ) method, for running SQL statements that don't return results, such as the UPDATE and DELETE statements. executeUpdate( ) returns an integer that indicates the number of rows in the database that were altered.
If you don't know whether a SQL statement is going to return results (such as when the user is entering the statement in a form field), you can use the execute( ) method of Statement. This method returns true if there is a result associated with the statement. In this case, the ResultSet can be retrieved using the getResultSet( ) method and the number of updated rows can be retrieved using getUpdateCount( ):
Statement unknownSQL = con.createStatement(  );
if(unknownSQL.execute(sqlString)) {
 ResultSet rs = unknownSQL.getResultSet(  );
 // Display the results
} 
else {
 System.out.println("Rows updated: " + unknownSQL.getUpdateCount(  ));
}

It is important to remember that a Statement object represents a single SQL statement. A call to executeQuery( ), executeUpdate( ), or execute( ) implicitly closes any active ResultSet associated with the Statement. In other words, you need to be sure you are done with the results from a query before you execute another query with the same Statement object. If your application needs to execute more than one simultaneous query, you need to use multiple Statement objects. As a general rule, calling the close( ) method of any JDBC object also closes any dependent objects, such as a Statement generated by a Connection or a ResultSet generated by a Statement, but well-written JDBC code closes everything explicitly.
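For example, iterating over one result set while issuing a second query requires two Statement objects. The following is a minimal sketch; the CUSTOMERS table is the one used elsewhere in this article, while the ORDERS table and its CUST_ID column are purely illustrative.
    Statement custStmt = con.createStatement();
    Statement orderStmt = con.createStatement();
    ResultSet customers = custStmt.executeQuery("SELECT CUSTOMER_ID FROM CUSTOMERS");
    while (customers.next()) {
        // Running this query on custStmt would close the 'customers' ResultSet,
        // so a separate Statement object is used for it.
        ResultSet orders = orderStmt.executeQuery(
            "SELECT COUNT(*) FROM ORDERS WHERE CUST_ID = " + customers.getInt("CUSTOMER_ID"));
        if (orders.next()) {
            System.out.println("Customer " + customers.getInt("CUSTOMER_ID") +
                               " has " + orders.getInt(1) + " orders");
        }
        orders.close();
    }
    customers.close();
    orderStmt.close();
    custStmt.close();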

Example: A Simple JDBC Example
import java.sql.*;
 
public class JDBCSample {
 
 public static void main(java.lang.String[] args) {
   try {
     // This is where we load the driver
     Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
   } 
   catch (ClassNotFoundException e) {
     System.out.println("Unable to load Driver Class");
     return;
   }
  
   try {
     // All database access is within a try/catch block. Connect to database,
     // specifying particular database, username, and password
     Connection con = DriverManager.getConnection("jdbc:odbc:companydb",
              "", "");
  
     // Create and execute an SQL Statement
     Statement stmt = con.createStatement(  );
     ResultSet rs = stmt.executeQuery("SELECT FIRST_NAME FROM EMPLOYEES");
 
     // Display the SQL Results
     while(rs.next(  )) {
       System.out.println(rs.getString("FIRST_NAME"));
     }
 
     // Make sure our database resources are released
     rs.close(  );
     stmt.close(  );
     con.close(  );
 
     } 
     catch (SQLException se) {
       // Inform user of any SQL errors
       System.out.println("SQL Exception: " + se.getMessage(  ));
           } 
    } 
}

The example starts out by loading a JDBC driver class (in this case, Sun's JDBC-ODBC Bridge). Then it creates a database connection, represented by a Connection object, using that driver. With the database connection, we can create a Statement object to represent an SQL statement. Executing an SQL statement produces a ResultSet that contains the results of a query. The program displays the results and then cleans up the resources it has used. If an error occurs, an SQLException is thrown, so our program traps that exception and displays some of the information it encapsulates.

Creating JDBC PreparedStatement

Prepared Statements

The PreparedStatement object is a close relative of the Statement object. Both accomplish roughly the same thing: running SQL statements. PreparedStatement, however, allows you to precompile your SQL and run it repeatedly, adjusting specific parameters as necessary. Since processing SQL strings is a large part of a database's overhead, getting compilation out of the way at the start can significantly improve performance. As with Statement, you create a PreparedStatement object from a Connection object. In this case, though, the SQL is specified at creation instead of execution, using the prepareStatement( ) method of Connection:
PreparedStatement pstmt = con.prepareStatement(
 "INSERT INTO EMPLOYEES (NAME, PHONE) VALUES (?, ?)");
This SQL statement inserts a new row into the EMPLOYEES table, setting the NAME and PHONE columns to certain values. Since the whole point of a PreparedStatement is to be able to execute the statement repeatedly, we don't specify values in the call to prepareStatement( ), but instead use question marks (?) to indicate parameters for the statement. To actually run the statement, we specify values for the parameters and then execute the statement:
pstmt.clearParameters(  );
pstmt.setString(1, "Jimmy Adelphi");
pstmt.setString(2, "201 555-7823");
pstmt.executeUpdate(  );
Before setting parameters, we clear out any previously specified parameters with the clearParameters( ) method. Then we can set the value for each parameter (indexed from 1 to the number of question marks) using the setString( ) method. PreparedStatement defines numerous setXXX( ) methods for specifying different types of parameters; Finally, we use the executeUpdate( ) method to run the SQL.
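Because the SQL is compiled only once, the same PreparedStatement can be executed in a loop with different parameter values. A minimal sketch, using hypothetical arrays of names and phone numbers to insert:
    String[] names  = { "Jimmy Adelphi", "Ann Bellows" };      // illustrative data
    String[] phones = { "201 555-7823", "201 555-0101" };
    for (int i = 0; i < names.length; i++) {
        pstmt.clearParameters();
        pstmt.setString(1, names[i]);
        pstmt.setString(2, phones[i]);
        pstmt.executeUpdate();        // the precompiled INSERT runs once per iteration
    }
    pstmt.close();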

 

Sometimes it is more convenient or more efficient to use a PreparedStatement object for sending SQL statements to the DBMS. The main feature that distinguishes it from its superclass Statement is that it is given an SQL statement right when it is created. This SQL statement is then sent to the DBMS right away, where it is compiled. Thus, in effect, a PreparedStatement is associated as a channel with a connection and a compiled SQL statement.
The advantage offered is that if you need to use the same or a similar query with different parameters multiple times, the statement can be compiled and optimized by the DBMS just once. Contrast this with a normal Statement, where each use of the same SQL statement requires compilation all over again.
PreparedStatements are also created with a Connection method. The following snippet shows how to create a parameterized SQL statement with three input parameters:
                  PreparedStatement prepareUpdatePrice = con.prepareStatement( 
                     "UPDATE Sells SET price = ? WHERE bar = ? AND beer = ?");
Before we can execute a PreparedStatement, we need to supply values for the parameters. This can be done by calling one of the setXXX methods defined in the class PreparedStatement. The most often used methods are setInt, setFloat, setDouble and setString. You can set these values before each execution of the prepared statement.
Continuing the above example, we would write:
                  prepareUpdatePrice.setInt(1, 3);
                  prepareUpdatePrice.setString(2, "Bar Of Foo");
                  prepareUpdatePrice.setString(3, "BudLite");

Executing CREATE/INSERT/UPDATE Statements

Executing SQL statements in JDBC varies depending on the "intention" of the SQL statement. DDL (data definition language) statements such as table creation and table alteration statements, as well as statements to update the table contents, are all executed using the method executeUpdate. Notice that these commands change the state of the database, hence the name of the method contains "Update".
The following snippet has examples of executeUpdate statements.
                  Statement stmt = con.createStatement();
 
                  stmt.executeUpdate("CREATE TABLE Sells " +
                     "(bar VARCHAR2(40), beer VARCHAR2(40), price REAL)" );
                  stmt.executeUpdate("INSERT INTO Sells " +
                     "VALUES ('Bar Of Foo', 'BudLite', 2.00)" );
 
                  String sqlString = "CREATE TABLE Bars " +
                     "(name VARCHAR2(40), address VARCHAR2(80), license INT)" ;
                  stmt.executeUpdate(sqlString);
Since the SQL statement will not quite fit on one line on the page, we have split it into two strings concatenated by a plus sign(+) so that it will compile. Pay special attention to the space following "INSERT INTO Sells" to separate it in the resulting string from "VALUES". Note also that we are reusing the same Statement object rather than having to create a new one.
When executeUpdate is used to call DDL statements, the return value is always zero, while data modification statement executions will return a value greater than or equal to zero, which is the number of tuples affected in the relation.
While working with a PreparedStatement, we would execute such a statement by first plugging in the values of the parameters (as seen above), and then invoking the executeUpdate on it.
                     int n = prepareUpdatePrice.executeUpdate() ;

Executing SELECT Statements

As opposed to the statements of the previous section, a query is expected to return a set of tuples as its result, not to change the state of the database. Not surprisingly, there is a corresponding method called executeQuery, which returns its results as a ResultSet object:
                  String bar, beer ;
                  float price ;
 
                  ResultSet rs = stmt.executeQuery("SELECT * FROM Sells");
                  while ( rs.next() ) {
                     bar = rs.getString("bar");
                     beer = rs.getString("beer");
                     price = rs.getFloat("price");
                     System.out.println(bar + " sells " + beer + " for " + price + " Dollars.");
                  }
The bag of tuples resulting from the query is contained in the variable rs, which is an instance of ResultSet. A set is not of much use to us unless we can access each row and the attributes in each row. The ResultSet provides a cursor, which can be used to access each row in turn. The cursor is initially positioned just before the first row. Each invocation of the method next moves the cursor to the next row and returns true if such a row exists, or returns false if there is no remaining row.
We can use the getXXX method of the appropriate type to retrieve the attributes of a row. In the previous example, we used the getString and getFloat methods to access the column values. Notice that we provided the name of the column whose value is desired as a parameter to the method. Also note that the VARCHAR2 columns bar and beer have been converted to Java String, and the REAL column to Java float.
Equivalently, we could have specified the column number instead of the column name, with the same result. Thus the relevant statements would be:
                     bar = rs.getString(1);
                     price = rs.getFloat(3);
                     beer = rs.getString(2);
While working with a PreparedStatement, we would execute a query by first plugging in the values of the parameters, and then invoking the executeQuery on it.
                     ResultSet rs = prepareUpdatePrice.executeQuery() ;

Results

When an SQL query executes, the results form a pseudo-table that contains all rows that fit the query criteria. For instance, here's a textual representation of the results of the query string "SELECT NAME, CUSTOMER_ID, PHONE FROM CUSTOMERS":
NAME                             CUSTOMER_ID  PHONE
-------------------------------- ----------- -------------------
Jane Markham                      1           617 555-1212
Louis Smith                       2           617 555-1213
Woodrow Lang                      3           508 555-7171
Dr. John Smith                    4           (011) 42 323-1239
This kind of textual representation is not very useful for Java programs. Instead, JDBC uses the java.sql.ResultSet interface to encapsulate the query results as Java primitive types and objects. You can think of a ResultSet as an object that represents an underlying table of query results, where you use method calls to navigate between rows and retrieve particular column values.
A Java program might handle the previous query as follows:
Statement stmt = con.createStatement(  );
ResultSet rs = stmt.executeQuery(
 "SELECT NAME, CUSTOMER_ID, PHONE FROM CUSTOMERS");
 
while(rs.next(  )) {
 System.out.print("Customer #" + rs.getString("CUSTOMER_ID"));
 System.out.print(", " + rs.getString("NAME"));
 System.out.println(", is at " + rs.getString("PHONE");
}
rs.close(  );
stmt.close(  );
Here's the resulting output:
Customer #1, Jane Markham, is at 617 555-1212
Customer #2, Louis Smith, is at 617 555-1213
Customer #3, Woodrow Lang, is at 508 555-7171
Customer #4, Dr. John Smith, is at (011) 42 323-1239
The code loops through each row of the ResultSet using the next( ) method. When you start working with a ResultSet, you are positioned before the first row of results. That means you have to call next( ) once just to access the first row. Each time you call next( ), you move to the next row. If there are no more rows to read, next( ) returns false. Note that with the JDBC 1.0 ResultSet, you can only move forward through the results and, since there is no way to go back to the beginning, you can read them only once. The JDBC 2.0 ResultSet, which we discuss later, overcomes these limitations.
Individual column values are read using the getString( ) method. getString( ) is one of a family of getXXX( ) methods, each of which returns data of a particular type. There are two versions of each getXXX( ) method: one that takes the case-insensitive String name of the column to be read (e.g., "PHONE", "CUSTOMER_ID") and one that takes a SQL-style column index. Note that column indexes run from 1 to n, unlike Java array indexes, which run from 0 to n-1, where n is the number of columns.
The most important getXXX( ) method is getObject( ), which can return any kind of data packaged in an object wrapper. For example, calling getObject( ) on an integer field returns an Integer object, while calling it on a date field yields a java.sql.Date object. Table 2-1 lists the different getXXX( ) methods, along with the corresponding SQL data type and Java data type. Where the return type for a getXXX( ) method is different from the Java type, the return type is shown in parentheses. Note that the java.sql.Types class defines integer constants that represent the standard SQL data types.
Table 2-1: SQL Data Types, Java Types, and Default getXXX( ) Methods
SQL Data Type     Java Type            getXXX( ) Method
---------------   ------------------   ----------------
VARCHAR, CHAR     String               getString( )
BIT               Boolean (boolean)    getBoolean( )
TINYINT           Integer (byte)       getByte( )
SMALLINT          Integer (short)      getShort( )
INTEGER           Integer (int)        getInt( )
BIGINT            Long (long)          getLong( )
REAL              Float (float)        getFloat( )
FLOAT             Double (double)      getDouble( )
DOUBLE            Double (double)      getDouble( )

Note that this table merely lists the default mappings according to the JDBC specification, and some drivers don't follow these mappings exactly. Also, a certain amount of casting is permitted. For instance, the getString( ) method returns a String representation of just about any data type.
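As a small, hedged illustration of getObject( ), the snippet below reads the CUSTOMER_ID column of the CUSTOMERS table used earlier; the exact wrapper class returned (Integer, BigDecimal, and so on) depends on the driver's type mapping.
    ResultSet rs = stmt.executeQuery("SELECT CUSTOMER_ID, NAME FROM CUSTOMERS");
    while (rs.next()) {
        Object id = rs.getObject("CUSTOMER_ID");    // driver-dependent wrapper type
        System.out.println(id.getClass().getName() + " : " + id);
    }
    rs.close();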

Notes on Accessing ResultSet

JDBC 2.0 adds scrollable cursors, which allow free access to any row in the result set. By default, cursors scroll forward only and are read-only. When creating a Statement for a Connection, you can change the type of ResultSet to a more flexible scrolling or updatable model:
Table 2-2: JDBC 2.0 Record Scrolling Functions
Method        Function
-----------   ----------------------------
first( )      Move to the first record.
last( )       Move to the last record.
next( )       Move to the next record.
previous( )   Move to the previous record.
                     ResultSet rs = stmt.executeQuery("SELECT * FROM Sells");
The different options for types are TYPE_FORWARD_ONLY, TYPE_SCROLL_INSENSITIVE, and TYPE_SCROLL_SENSITIVE. You can choose whether the cursor is read-only or updatable using the options CONCUR_READ_ONLY, and CONCUR_UPDATABLE. With the default cursor, you can scroll forward using rs.next(). With scroll-able cursors you have more options:
                     rs.previous();           // moves back one tuple (tuple 2)
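Putting this together, here is a minimal sketch of a scrollable, read-only result set over the Sells table. The two-argument form of createStatement( ) and these constants were introduced in JDBC 2.0, so older drivers may not support them.
                     Statement scrollStmt = con.createStatement(
                         ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
                     ResultSet rs = scrollStmt.executeQuery("SELECT * FROM Sells");
                     rs.last();                    // jump to the last tuple
                     System.out.println("Last bar: " + rs.getString("bar"));
                     rs.first();                   // back to the first tuple
                     rs.next();                    // forward to the second tuple
                     rs.previous();                // and back to the first again
                     rs.close();
                     scrollStmt.close();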

Transactions

JDBC allows SQL statements to be grouped together into a single transaction. Thus, we can ensure the ACID (Atomicity, Consistency, Isolation, Durability) properties using JDBC transactional features.
Transaction control is performed by the Connection object. When a connection is created, it is in auto-commit mode by default. This means that each individual SQL statement is treated as a transaction by itself and is committed as soon as its execution finishes. We can turn off auto-commit mode for an active connection with:
                     con.setAutoCommit(false) ; 
and turn it on again with :
                     con.setAutoCommit(true) ; 
Once auto-commit is off, no SQL statements will be committed (that is, the database will not be permanently updated) until you have explicitly told it to commit by invoking the commit() method:
                     con.commit() ; 
At any point before the commit, we may invoke rollback() to roll back the transaction and restore values to the last commit point (before the attempted updates).
Here is an example which ties these ideas together:
                     con.setAutoCommit(false);
                     Statement stmt = con.createStatement();
                     stmt.executeUpdate("INSERT INTO Sells VALUES ('Bar Of Foo', 'BudLite', 1.00)");
                     con.rollback();
                     stmt.executeUpdate("INSERT INTO Sells VALUES ('Bar Of Joe', 'Miller', 2.00)");
                     con.commit();
                     con.setAutoCommit(true);
Let's walk through the example to understand the effects of the various methods. We first turn auto-commit off, indicating that the following statements are to be considered as a unit. We attempt to insert the ('Bar Of Foo', 'BudLite', 1.00) tuple into the Sells table, but this change has not been committed yet. When we invoke rollback, we cancel the insert; Sells is still as it was before we attempted the insert. We then attempt another insert, and this time we commit the transaction. Only now is Sells permanently affected and holds the new tuple. Finally, we set the connection back to auto-commit.
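In real code the rollback usually happens in response to a failure. A minimal sketch of that pattern, reusing the Sells table and assuming con is an open connection and the enclosing method declares throws SQLException:
                     try {
                         con.setAutoCommit(false);
                         Statement stmt = con.createStatement();
                         stmt.executeUpdate("INSERT INTO Sells VALUES ('Bar Of Foo', 'BudLite', 1.00)");
                         stmt.executeUpdate("INSERT INTO Sells VALUES ('Bar Of Joe', 'Miller', 2.00)");
                         con.commit();            // both inserts become permanent together
                     } catch (SQLException se) {
                         con.rollback();          // undo any partial work if either insert fails
                     } finally {
                         con.setAutoCommit(true);
                     }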

DBMS Questions

1. What is database?
A database is a logically coherent collection of data with some inherent meaning, representing some aspect of the real world, and which is designed, built and populated with data for a specific purpose.

2. What is DBMS?
It is a collection of programs that enables users to create and maintain a database. In other words, it is general-purpose software that provides users with the facilities for defining, constructing and manipulating the database for various applications.

3. What is a Database system?
The database and DBMS software together are called a database system.

4. Advantages of DBMS?
 Redundancy is controlled.
 Unauthorised access is restricted.
 Providing multiple user interfaces.
 Enforcing integrity constraints.
 Providing backup and recovery.

5. Disadvantage in File Processing System?
 Data redundancy & inconsistency.
 Difficult in accessing data.
 Data isolation.
 Data integrity.
 Concurrent access is not possible.
 Security Problems.

6. Describe the three levels of data abstraction?
There are three levels of abstraction:
 Physical level: The lowest level of abstraction; describes how data are stored.
 Logical level: The next higher level of abstraction; describes what data are stored in the database and what relationships exist among those data.
 View level: The highest level of abstraction; describes only part of the entire database.
7. Define the "integrity rules"
There are two Integrity rules.
 Entity Integrity: States that “Primary key cannot have a NULL value.”
 Referential Integrity: States that “Foreign key can be either a NULL value or should be the primary key value of another relation.”

8. What is extension and intension?
Extension -
It is the set of tuples present in a table at any instant. It is time dependent.
Intension -
It is a constant value that gives the name, structure of table and the constraints laid on it.

9. What is System R? What are its two major subsystems?
System R was designed and developed at the IBM San Jose Research Center over the period 1974-79. It is a prototype whose purpose was to demonstrate that it is possible to build a relational system that can be used in a real-life environment to solve real-life problems, with performance at least comparable to that of existing systems.
Its two subsystems are
 Research Storage System (RSS)
 Relational Data System (RDS)

10. How is the data structure of System R different from the relational structure?
Unlike relational systems, in System R:
 Domains are not supported
 Enforcement of candidate key uniqueness is optional
 Enforcement of entity integrity is optional
 Referential integrity is not enforced

11. What is Data Independence?
Data independence means that “the application is independent of the storage structure and access strategy of data”. In other words, the ability to modify the schema definition at one level should not affect the schema definition at the next higher level.
Two types of Data Independence:
 Physical Data Independence: Modification at the physical level should not affect the logical level.
 Logical Data Independence: Modification at the logical level should not affect the view level.
NOTE: Logical Data Independence is more difficult to achieve

12. What is a view? How it is related to data independence?
A view may be thought of as a virtual table, that is, a table that does not really exist in its own right but is instead derived from one or more underlying base tables. In other words, there is no stored file that directly represents the view; instead, a definition of the view is stored in the data dictionary.
Growth and restructuring of base tables is not reflected in views. Thus the view can insulate users from the effects of restructuring and growth in the database. Hence it accounts for logical data independence.

13. What is Data Model?
A collection of conceptual tools for describing data, data relationships, data semantics and constraints.

14. What is E-R model?
This data model is based on the real world, which consists of basic objects called entities and of relationships among these objects. Entities are described in a database by a set of attributes.

15. What is Object Oriented model?
This model is based on a collection of objects. An object contains values stored in instance variables within the object. An object also contains bodies of code that operate on the object; these bodies of code are called methods. Objects that contain the same types of values and the same methods are grouped together into classes.

16. What is an Entity?
It is a 'thing' in the real world with an independent existence.

17. What is an Entity type?
It is a collection (set) of entities that have the same attributes.

18. What is an Entity set?
It is a collection of all entities of a particular entity type in the database.

19. What is an Extension of entity type?
The collections of entities of a particular entity type are grouped together into an entity set.

20. What is Weak Entity set?
An entity set that does not have sufficient attributes to form a primary key on its own, and whose primary key is composed of its partial key together with the primary key of its parent (owner) entity, is said to be a weak entity set.

21. What is an attribute?
It is a particular property, which describes the entity.

22. What is a Relation Schema and a Relation?
A relation schema, denoted by R(A1, A2, …, An), is made up of the relation name R and the list of attributes Ai that it contains. A relation is defined as a set of tuples. Let r be the relation that contains the set of tuples (t1, t2, t3, ..., tn). Each tuple is an ordered list of n values t = (v1, v2, ..., vn).

23. What is degree of a Relation?
It is the number of attributes in its relation schema.

24. What is Relationship?
It is an association among two or more entities.

25. What is Relationship set?
The collection (or set) of similar relationships.

26. What is Relationship type?
Relationship type defines a set of associations or a relationship set among a given set of entity types.

27. What is degree of Relationship type?
It is the number of entity type participating.

25. What is DDL (Data Definition Language)?
A database schema is specified by a set of definitions expressed in a special language called DDL.

26. What is VDL (View Definition Language)?
It specifies user views and their mappings to the conceptual schema.

27. What is SDL (Storage Definition Language)?
This language is used to specify the internal schema. It may also specify the mapping between two schemas.

28. What is Data Storage - Definition Language?
The storage structures and access methods used by a database system are specified by a set of definitions in a special type of DDL called data storage-definition language.

29. What is DML (Data Manipulation Language)?
This is a language that enables users to access or manipulate data as organised by the appropriate data model.
 Procedural (low-level) DML: requires a user to specify what data are needed and how to get those data.
 Non-procedural (high-level) DML: requires a user to specify what data are needed without specifying how to get those data.

31. What is DML Compiler?
It translates DML statements in a query language into low-level instructions that the query evaluation engine can understand.

32. What is Query evaluation engine?
It executes the low-level instructions generated by the DML compiler.

33. What is DDL Interpreter?
It interprets DDL statements and records them in tables containing metadata.

34. What is Record-at-a-time?
The low-level or procedural DML can specify and retrieve each record from a set of records. This record-by-record retrieval is said to be record-at-a-time.

35. What is Set-at-a-time or Set-oriented?
The high-level or non-procedural DML can specify and retrieve many records in a single DML statement. This retrieval of records is said to be set-at-a-time or set-oriented.

36. What is Relational Algebra?
It is a procedural query language. It consists of a set of operations that take one or two relations as input and produce a new relation.

37. What is Relational Calculus?
It is an applied predicate calculus specifically tailored for relational databases, proposed by E. F. Codd. Examples of languages based on it are DSL ALPHA and QUEL.

38. How does Tuple-oriented relational calculus differ from domain-oriented relational calculus
The tuple-oriented calculus uses tuple variables, i.e., variables whose only permitted values are tuples of a relation, e.g. QUEL.
The domain-oriented calculus has domain variables, i.e., variables that range over the underlying domains instead of over relations, e.g. ILL, DEDUCE.

39. What is normalization?
It is a process of analysing the given relation schemas, based on their Functional Dependencies (FDs) and primary keys, to achieve the properties of
 Minimizing redundancy
 Minimizing insertion, deletion and update anomalies.

40. What is Functional Dependency?
A functional dependency, denoted by X -> Y between two sets of attributes X and Y that are subsets of R, specifies a constraint on the possible tuples that can form a relation state r of R. The constraint is that for any two tuples t1 and t2 in r, if t1[X] = t2[X] then t1[Y] = t2[Y]. This means the value of the X component of a tuple uniquely determines the value of the Y component.

41. When is a functional dependency F said to be minimal?
 Every dependency in F has a single attribute for its right hand side.
 We cannot replace any dependency X -> A in F with a dependency Y -> A, where Y is a proper subset of X, and still have a set of dependencies that is equivalent to F.
 We cannot remove any dependency from F and still have a set of dependencies that is equivalent to F.

42. What is Multivalued dependency?
A multivalued dependency, denoted by X ->> Y and specified on relation schema R, where X and Y are both subsets of R, specifies the following constraint on any relation r of R: if two tuples t1 and t2 exist in r such that t1[X] = t2[X], then two tuples t3 and t4 should also exist in r with the following properties
 t3[X] = t4[X] = t1[X] = t2[X]
 t3[Y] = t1[Y] and t4[Y] = t2[Y]
 t3[Z] = t2[Z] and t4[Z] = t1[Z]
where Z = R - (X U Y).

43. What is Lossless join property?
It guarantees that the spurious tuple generation does not occur with respect to relation schemas after decomposition.

44. What is 1 NF (Normal Form)?
The domain of attribute must include only atomic (simple, indivisible) values.

45. What is Fully Functional dependency?
A functional dependency X -> Y is a full functional dependency if removal of any attribute A from X means that the dependency does not hold any more.

46. What is 2NF?
A relation schema R is in 2NF if it is in 1NF and every non-prime attribute A in R is fully functionally dependent on primary key.

47. What is 3NF?
A relation schema R is in 3NF if it is in 2NF and, for every FD X -> A, either of the following is true
 X is a Super-key of R.
 A is a prime attribute of R.
In other words, if every non prime attribute is non-transitively dependent on primary key.

48. What is BCNF (Boyce-Codd Normal Form)?
A relation schema R is in BCNF if it is in 3NF and satisfies the additional constraint that for every FD X -> A, X must be a candidate key.

49. What is 4NF?
A relation schema R is said to be in 4NF if for every multivalued dependency X ->> Y that holds over R, one of the following is true
 Y is a subset of X, or XY = R (i.e., the dependency is trivial).
 X is a superkey.

50. What is 5NF?
A relation schema R is said to be in 5NF if for every join dependency {R1, R2, ..., Rn} that holds over R, one of the following is true
 Ri = R for some i.
 The join dependency is implied by the set of FDs over R in which the left side is a key of R.
51. What is Domain-Key Normal Form?
A relation is said to be in DKNF if all constraints and dependencies that should hold on the relation can be enforced simply by enforcing the domain constraints and key constraints on the relation.

52. What are partial, alternate, artificial, compound and natural keys?
Partial Key:
It is a set of attributes that can uniquely identify weak entities that are related to the same owner entity. It is sometimes called the discriminator.
Alternate Key:
All Candidate Keys excluding the Primary Key are known as Alternate Keys.
Artificial Key:
If no obvious key, either stand-alone or compound, is available, then the last resort is to simply create a key by assigning a unique number to each record or occurrence. This is known as developing an artificial key.
Compound Key:
If no single data element uniquely identifies occurrences within a construct, then combining multiple elements to create a unique identifier for the construct is known as creating a compound key.
Natural Key:
When one of the data elements stored within a construct is utilized as the primary key, then it is called the natural key.

53. What is indexing and what are the different kinds of indexing?
Indexing is a technique used to locate specific data quickly.
Types:
 Binary search style indexing
 B-Tree indexing
 Inverted list indexing
 Memory resident table
 Table indexing

54. What is a system catalog or catalog relation? How is it better known?
An RDBMS maintains a description of all the data that it contains: information about every relation and index it holds. This information, called metadata, is stored in a collection of relations maintained by the system, known as the system catalog. It is also called the data dictionary.

55. What is meant by query optimization?
The phase that identifies an efficient execution plan for evaluating a query that has the least estimated cost is referred to as query optimization.

56. What is join dependency and inclusion dependency?
Join Dependency:
A join dependency is a generalization of multivalued dependency. A JD {R1, R2, ..., Rn} is said to hold over a relation R if R1, R2, ..., Rn is a lossless-join decomposition of R. There is no set of sound and complete inference rules for JDs.
Inclusion Dependency:
An Inclusion Dependency is a statement of the form that some columns of a relation are contained in other columns. A foreign key constraint is an example of inclusion dependency.

57. What is durability in DBMS?
Once the DBMS informs the user that a transaction has successfully completed, its effects should persist even if the system crashes before all its changes are reflected on disk. This property is called durability.

58. What do you mean by atomicity and aggregation?
Atomicity:
Either all actions are carried out or none are. Users should not have to worry about the effect of incomplete transactions. DBMS ensures this by undoing the actions of incomplete transactions.
Aggregation:
A concept which is used to model a relationship between a collection of entities and relationships. It is used when we need to express a relationship among relationships.

59. What is a Phantom Deadlock?
In distributed deadlock detection, the delay in propagating local information might cause the deadlock detection algorithms to identify deadlocks that do not really exist. Such situations are called phantom deadlocks and they lead to unnecessary aborts.

60. What is a checkpoint and When does it occur?
A Checkpoint is like a snapshot of the DBMS state. By taking checkpoints, the DBMS can reduce the amount of work to be done during restart in the event of subsequent crashes.

61. What are the different phases of transaction?
Different phases are
 Analysis phase
 Redo Phase
 Undo phase

62. What do you mean by flat file database?
It is a database in which there are no programs or user access languages. It has no cross-file capabilities but is user-friendly and provides user-interface management.

63. What is "transparent DBMS"?
It is one which keeps its physical structure hidden from the user.

64. Brief theory of Network, Hierarchical schemas and their properties
A network schema uses a graph data structure to organize records (an example of such a database management system is CTCG), while a hierarchical schema uses a tree data structure (an example of such a system is IMS).

65. What is a query?
A query, with respect to a DBMS, relates to user commands that are used to interact with a database. The query language can be classified into data definition language and data manipulation language.

66. What do you mean by Correlated subquery?
Subqueries, or nested queries, are used to bring back a set of rows to be used by the parent query. Depending on how the subquery is written, it can be executed once for the parent query or it can be executed once for each row returned by the parent query. If the subquery is executed for each row of the parent, this is called a correlated subquery.
A correlated subquery can be easily identified if it contains any references to the parent query's columns in its WHERE clause. Columns from the subquery cannot be referenced anywhere else in the parent query. The following example demonstrates a correlated subquery.
E.g. Select * From CUST Where '10/03/1990' IN (Select ODATE From ORDER Where CUST.CNUM = ORDER.CNUM)

67. What are the primitive operations common to all record management systems?
Addition, deletion and modification.

68. Name the buffer in which all the commands that are typed in are stored
‘Edit’ Buffer

69. What are the unary operations in Relational Algebra?
PROJECTION and SELECTION.

70. Are the resulting relations of PRODUCT and JOIN operation the same?
No.
PRODUCT: Concatenation of every row in one relation with every row in another.
JOIN: Concatenation of rows from one relation and related rows from another.

71. What is RDBMS KERNEL?
Two important pieces of RDBMS architecture are the kernel, which is the software, and the data dictionary, which consists of the system-level data structures used by the kernel to manage the database.
You might think of an RDBMS as an operating system (or set of subsystems), designed specifically for controlling data access; its primary functions are storing, retrieving, and securing data. An RDBMS maintains its own list of authorized users and their associated privileges; manages memory caches and paging; controls locking for concurrent resource usage; dispatches and schedules user requests; and manages space usage within its table-space structures.
72. Name the sub-systems of a RDBMS
I/O, Security, Language Processing, Process Control, Storage Management, Logging and Recovery, Distribution Control, Transaction Control, Memory Management, Lock Management

73. Which part of the RDBMS takes care of the data dictionary? How?
The data dictionary is a set of tables and database objects that is stored in a special area of the database and maintained exclusively by the kernel.

74. What is the job of the information stored in data-dictionary?
The information in the data dictionary validates the existence of the objects, provides access to them, and maps the actual physical storage location.

75. Not only does the RDBMS take care of locating data, it also
determines an optimal access path to store or retrieve the data.

76. How do you communicate with an RDBMS?
You communicate with an RDBMS using Structured Query Language (SQL)

77. Define SQL and state the differences between SQL and other conventional programming Languages
SQL is a nonprocedural language that is designed specifically for data access operations on normalized relational database structures. The primary difference between SQL and other conventional programming languages is that SQL statements specify what data operations should be performed rather than how to perform them.

78. Name the three major set of files on disk that compose a database in Oracle
There are three major sets of files on disk that compose a database. All the files are binary. These are
 Database files
 Control files
 Redo logs
The most important of these are the database files where the actual data resides. The control files and the redo logs support the functioning of the architecture itself.
All three sets of files must be present, open, and available to Oracle for any data on the database to be useable. Without these files, you cannot access the database, and the database administrator might have to recover some or all of the database using a backup, if there is one.

79. What is an Oracle Instance?
The Oracle system processes, also known as Oracle background processes, provide functions for the user processes: functions that would otherwise be done by the user processes themselves.
Oracle database-wide system memory is known as the SGA, the system global area or shared global area. The data and control structures in the SGA are shareable, and all the Oracle background processes and user processes can use them.
The combination of the SGA and the Oracle background processes is known as an Oracle instance.

80. What are the four Oracle system processes that must always be up and running for the database to be useable
The four Oracle system processes that must always be up and running for the database to be useable include DBWR (Database Writer), LGWR (Log Writer), SMON (System Monitor), and PMON (Process Monitor).

81. What are database files, control files and log files. How many of these files should a database have at least? Why?
Database Files
The database files hold the actual data and are typically the largest in size. Depending on their sizes, the tables (and other objects) for all the user accounts can go in one database file—but that's not an ideal situation because it does not make the database structure very flexible for controlling access to storage for different users, putting the database on different disk drives, or backing up and restoring just part of the database.
You must have at least one database file, but usually more than one file is used. In terms of accessing and using the data in the tables and other objects, the number (or location) of the files is immaterial.
The database files are fixed in size and never grow bigger than the size at which they were created.
Control Files
The control files and redo logs support the rest of the architecture. Any database must have at least one control file, although you typically have more than one to guard against loss. The control file records the name of the database, the date and time it was created, the location of the database and redo logs, and the synchronization information to ensure that all three sets of files are always in step. Every time you add a new database or redo log file to the database, the information is recorded in the control files.
Redo Logs
Any database must have at least two redo logs. These are the journals for the database; the redo logs record all changes to the user objects or system objects. If any type of failure occurs, the changes recorded in the redo logs can be used to bring the database to a consistent state without losing any committed transactions. In the case of non-data loss failure, Oracle can apply the information in the redo logs automatically without intervention from the DBA.
The redo log files are fixed in size and never grow dynamically from the size at which they were created.

82. What is ROWID?
The ROWID is a unique database-wide physical address for every row on every table. Once assigned (when the row is first inserted into the database), it never changes until the row is deleted or the table is dropped.
The ROWID consists of the following three components, the combination of which uniquely identifies the physical storage location of the row.
 Oracle database file number, which contains the block with the rows
 Oracle block address, which contains the row
 The row within the block (because each block can hold many rows)
The ROWID is used internally in indexes as a quick means of retrieving rows with a particular key value. Application developers also use it in SQL statements as a quick way to access a row once they know the ROWID

83. What is Oracle Block? Can two Oracle Blocks have the same address?
Oracle "formats" the database files into a number of Oracle blocks when they are first created—making it easier for the RDBMS software to manage the files and easier to read data into the memory areas.
The block size should be a multiple of the operating system block size. Regardless of the block size, the entire block is not available for holding data; Oracle takes up some space to manage the contents of the block. This block header has a minimum size, but it can grow.
These Oracle blocks are the smallest unit of storage. Increasing the Oracle block size can improve performance, but it should be done only when the database is first created.
Each Oracle block is numbered sequentially for each database file starting at 1. Two blocks can have the same block address if they are in different database files.

84. What is database Trigger?
A database trigger is a PL/SQL block that can be defined to execute automatically for insert, update, and delete statements against a table. The trigger can be defined to execute once for the entire statement or once for every row that is inserted, updated, or deleted. For any one table, there are twelve events for which you can define database triggers. A database trigger can call database procedures that are also written in PL/SQL.

85. Name two utilities that Oracle provides which are used for backup and recovery.
Along with the RDBMS software, Oracle provides two utilities that you can use to back up and restore the database. These utilities are Export and Import.
The Export utility dumps the definitions and data for the specified part of the database to an operating system binary file. The Import utility reads the file produced by an export, recreates the definitions of objects, and inserts the data.
If Export and Import are used as a means of backing up and recovering the database, all the changes made to the database cannot be recovered since the export was performed. The best you can do is recover the database to the time when the export was last performed.

86. What are stored-procedures? And what are the advantages of using them.
Stored procedures are database objects that perform a user defined operation. A stored procedure can have a set of compound SQL statements. A stored procedure executes the SQL commands and returns the result to the client. Stored procedures are used to reduce network traffic.

87. How are exceptions handled in PL/SQL? Give some of the internal exceptions' name
PL/SQL exception handling is a mechanism for dealing with run-time errors encountered during procedure execution. Use of this mechanism enables execution to continue if the error is not severe enough to cause procedure termination.
The exception handler must be defined within a subprogram specification. Errors cause the program to raise an exception with a transfer of control to the exception-handler block. After the exception handler executes, control returns to the block in which the handler was defined. If there are no more executable statements in the block, control returns to the caller.
User-Defined Exceptions
PL/SQL enables the user to define exception handlers in the declarations area of subprogram specifications. This is accomplished by naming an exception, as in the following example:
ot_failure EXCEPTION;
In this case, the exception name is ot_failure. Code associated with this handler is written in the EXCEPTION specification area as follows:
EXCEPTION
when OT_FAILURE then
out_status_code := g_out_status_code;
out_msg := g_out_msg;
The following is an example of a subprogram exception:
EXCEPTION
when NO_DATA_FOUND then
g_out_status_code := 'FAIL';
RAISE ot_failure;
Within this exception is the RAISE statement that transfers control back to the ot_failure exception handler. This technique of raising the exception is used to invoke all user-defined exceptions.
System-Defined Exceptions
Exceptions internal to PL/SQL are raised automatically upon error. NO_DATA_FOUND is a system-defined exception. The table below lists the internal exceptions.

PL/SQL internal exceptions.

Exception Name          Oracle Error
CURSOR_ALREADY_OPEN ORA-06511
DUP_VAL_ON_INDEX ORA-00001
INVALID_CURSOR ORA-01001
INVALID_NUMBER ORA-01722
LOGIN_DENIED ORA-01017
NO_DATA_FOUND ORA-01403
NOT_LOGGED_ON ORA-01012
PROGRAM_ERROR ORA-06501
STORAGE_ERROR ORA-06500
TIMEOUT_ON_RESOURCE ORA-00051
TOO_MANY_ROWS ORA-01422
TRANSACTION_BACKED_OUT ORA-00061
VALUE_ERROR ORA-06502
ZERO_DIVIDE ORA-01476

In addition to this list of exceptions, there is a catch-all exception named OTHERS that traps all errors for which specific error handling has not been established.

88. Does PL/SQL support "overloading"? Explain
The concept of overloading in PL/SQL relates to the idea that you can define procedures and functions with the same name. PL/SQL does not look only at the referenced name, however, to resolve a procedure or function call. The count and data types of formal parameters are also considered.
PL/SQL also attempts to resolve any procedure or function calls in locally defined packages before looking at globally defined packages or internal functions. To further ensure calling the proper procedure, you can use the dot notation. Prefacing a procedure or function name with the package name fully qualifies any procedure or function reference.
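PL/SQL overloading parallels method overloading in Java; purely as an analogy, here is a small Java sketch (the format methods are invented for illustration) showing how a call is resolved by the count and data types of the parameters.

public class OverloadDemo {
    // Same name, different parameter counts and types:
    // the compiler picks the version that matches the arguments supplied.
    static String format(int value) {
        return "int: " + value;
    }

    static String format(double value) {
        return "double: " + value;
    }

    static String format(int value, String label) {
        return label + ": " + value;
    }

    public static void main(String[] args) {
        System.out.println(format(7));           // resolves to format(int)
        System.out.println(format(7.5));         // resolves to format(double)
        System.out.println(format(7, "count"));  // resolves to format(int, String)
    }
}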

89. Tables derived from the ERD
a) Are totally unnormalised
b) Are always in 1NF
c) Can be further denormalised
d) May have multi-valued attributes

(b) Are always in 1NF

90. Spurious tuples may occur due to
i. Bad normalization
ii. Theta joins
iii. Updating tables from join
a) i & ii b) ii & iii
c) i & iii d) ii & iii

(c) i & iii. Spurious tuples arise from bad normalization (a lossy decomposition) and from updating tables from a join; a theta join by itself simply returns the rows that satisfy its condition and does not introduce spurious tuples.

91. A, B, C is a set of attributes. The functional dependencies are as follows:
AB -> B
AC -> C
C -> B
a) is in 1NF
b) is in 2NF
c) is in 3NF
d) is in BCNF

(a) is in 1NF. Since (AC)+ = {A, B, C}, AC is the primary key. C -> B is a given FD where C is not a key and B is not a prime attribute, so the relation is not in 3NF. Further, B is not fully functionally dependent on the key AC (it depends on C alone), so the relation is not in 2NF. Thus, given these FDs, the relation is only in 1NF.

92. In mapping of ERD to DFD
a) entities in ERD should correspond to an existing entity/store in DFD
b) entity in DFD is converted to attributes of an entity in ERD
c) relations in ERD have 1 to 1 correspondence to processes in DFD
d) relationships in ERD have 1 to 1 correspondence to flows in DFD

(a) entities in ERD should correspond to an existing entity/store in DFD

93. A dominant entity is the entity
a) on the N side in a 1 : N relationship
b) on the 1 side in a 1 : N relationship
c) on either side in a 1 : 1 relationship
d) nothing to do with 1 : 1 or 1 : N relationship

(b) on the 1 side in a 1 : N relationship

94. Select 'NORTH', CUSTOMER From CUST_DTLS Where REGION = 'N' Order By
CUSTOMER Union Select 'EAST', CUSTOMER From CUST_DTLS Where REGION = 'E' Order By CUSTOMER
The above is
a) Not an error
b) Error - the strings in single quotes 'NORTH' and 'EAST'
c) Error - the string should be in double quotes
d) Error - ORDER BY clause

(d) Error - the ORDER BY clause. An ORDER BY clause cannot appear in the individual queries of a UNION; it may appear only once, after the final SELECT.

95. What is Storage Manager?
It is a program module that provides the interface between the low-level data stored in the database and the application programs and queries submitted to the system.

96. What is Buffer Manager?
It is a program module which is responsible for fetching data from disk storage into main memory and deciding what data to cache in memory.

97. What is Transaction Manager?
It is a program module which ensures that the database remains in a consistent state despite system failures, and that concurrent transaction executions proceed without conflict.

98. What is File Manager?
It is a program module which manages the allocation of space on disk storage and the data structures used to represent information stored on disk.

99. What is Authorization and Integrity manager?
It is the program module which tests for the satisfaction of integrity constraints and checks the authority of users to access data.

100. What are stand-alone procedures?
Procedures that are not part of a package are known as stand-alone because they are independently defined. A good example of a stand-alone procedure is one written in a SQL*Forms application. These types of procedures are not available for reference from other Oracle tools. Another limitation of stand-alone procedures is that they are compiled at run time, which slows execution.

101. What are cursors? Give the different types of cursors.
PL/SQL uses cursors for all database information access statements. The language supports the use of two types of cursors:
• Implicit
• Explicit

102. What is cold backup and hot backup (in case of Oracle)?
• Cold Backup:
It is copying the three sets of files (database files, redo logs, and control file) when the instance is shut down. This is a straight file copy, usually from the disk directly to tape. You must shut down the instance to guarantee a consistent copy.
If a cold backup is performed, the only option available in the event of data file loss is restoring all the files from the latest backup. All work performed on the database since the last backup is lost.
• Hot Backup:
Some sites (such as worldwide airline reservations systems) cannot shut down the database while making a backup copy of the files. The cold backup is not an available option.
So a different means of backing up the database must be used: the hot backup. Issue a SQL command to indicate to Oracle, on a tablespace-by-tablespace basis, that the files of the tablespace are to be backed up. The users can continue to make full use of the files, including making changes to the data. Once the user has indicated that he/she wants to back up the tablespace files, he/she can use the operating system to copy those files to the desired backup destination.
The database must be running in ARCHIVELOG mode for the hot backup option.
If a data loss failure does occur, the lost database files can be restored using the hot backup and the online and offline redo logs created since the backup was done. The database is restored to the most consistent state without any loss of committed transactions.
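As a rough sketch of the "indicate to Oracle" step described above, the ALTER TABLESPACE ... BEGIN BACKUP and END BACKUP commands can be issued from any SQL client, including JDBC; the USERS tablespace and the connection details below are assumptions for the example, and the database must already be running in ARCHIVELOG mode.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class HotBackupMarker {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; requires a suitably privileged account
        Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@localhost:1521:ORCL", "system", "manager");
        Statement stmt = con.createStatement();

        // Tell Oracle the files of the USERS tablespace are about to be copied
        stmt.execute("ALTER TABLESPACE users BEGIN BACKUP");

        // ... copy the tablespace's data files with operating system tools here ...

        // Tell Oracle the copy is finished
        stmt.execute("ALTER TABLESPACE users END BACKUP");

        stmt.close();
        con.close();
    }
}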

103. What are Armstrong rules? How do we say that they are complete and/or sound?
The well-known inference rules for FDs are:
• Reflexive rule:
If Y is a subset of (or equal to) X, then X -> Y.
• Augmentation rule:
If X -> Y, then XZ -> YZ.
• Transitive rule:
If X -> Y and Y -> Z, then X -> Z.
• Decomposition rule:
If X -> YZ, then X -> Y.
• Union or Additive rule:
If X -> Y and X -> Z, then X -> YZ.
• Pseudo-transitive rule:
If X -> Y and WY -> Z, then WX -> Z.
Of these, the first three are known as Armstrong's rules. They are sound because every dependency derived using them actually holds for any relation satisfying the given FDs, and they are complete because every dependency that holds can be derived using them; in particular, the remaining rules above can all be derived from these three.

104. How can you find the minimal key of relational schema?
A minimal key is one that uniquely identifies each tuple of the given relation schema. Finding a minimal key requires computing the closure of a set of attributes, that is, the set of all attributes functionally determined by it under the given set of functional dependencies.
Algo. I - Determining X+, the closure of X, given a set of FDs F
1. Set X+ = X
2. Set Old X+ = X+
3. For each FD Y -> Z in F, if Y is contained in X+ then add Z to X+
4. Repeat steps 2 and 3 until Old X+ = X+

Algo. II - Determining a minimal key K for relation schema R, given a set of FDs F
1. Set K to R, that is, make K the set of all attributes in R
2. For each attribute A in K
a. Compute (K – A)+ with respect to F
b. If (K – A)+ = R then set K = K – {A}
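Here is a small Java sketch of both algorithms; representing FDs as pairs of attribute strings and reusing the data from question 91 are choices made for this example, not part of the original text.

import java.util.Arrays;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

public class MinimalKey {

    // Algo. I: compute X+, the closure of X under the FDs in F.
    // Each FD is a two-element array {lhs, rhs}, e.g. {"C", "B"} for C -> B.
    static Set<Character> closure(String x, List<String[]> f) {
        Set<Character> xPlus = toSet(x);
        Set<Character> old;
        do {
            old = new TreeSet<>(xPlus);
            for (String[] fd : f) {
                // If Y is contained in X+, add Z to X+
                if (xPlus.containsAll(toSet(fd[0]))) {
                    xPlus.addAll(toSet(fd[1]));
                }
            }
        } while (!xPlus.equals(old));   // repeat until X+ stops growing
        return xPlus;
    }

    // Algo. II: start with all attributes of R and drop every attribute
    // whose removal still leaves a set whose closure covers all of R.
    static Set<Character> minimalKey(String r, List<String[]> f) {
        Set<Character> k = toSet(r);
        for (char a : r.toCharArray()) {
            Set<Character> candidate = new TreeSet<>(k);
            candidate.remove(a);
            if (closure(asString(candidate), f).equals(toSet(r))) {
                k = candidate;          // a is redundant, drop it from the key
            }
        }
        return k;
    }

    static Set<Character> toSet(String s) {
        Set<Character> set = new TreeSet<>();
        for (char c : s.toCharArray()) {
            set.add(c);
        }
        return set;
    }

    static String asString(Set<Character> s) {
        StringBuilder sb = new StringBuilder();
        for (char c : s) {
            sb.append(c);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // FDs from question 91: AB -> B, AC -> C, C -> B over R = ABC
        List<String[]> f = Arrays.asList(
                new String[] {"AB", "B"},
                new String[] {"AC", "C"},
                new String[] {"C", "B"});
        System.out.println("AC+ = " + closure("AC", f));      // prints [A, B, C]
        System.out.println("Key = " + minimalKey("ABC", f));  // prints [A, C]
    }
}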


105. What do you understand by dependency preservation?
Given a relation R and a set of FDs F, dependency preservation states that the closure of the union of the projections of F on each decomposed relation Ri is equal to the closure of F, i.e.,
(πR1(F) ∪ … ∪ πRn(F))+ = F+
If the decomposition is not dependency preserving, then some dependency is lost in the decomposition.

106. What is meant by Proactive, Retroactive and Simultaneous Update.
Proactive Update:
The updates that are applied to the database before they become effective in the real world.
Retroactive Update:
The updates that are applied to the database after they become effective in the real world.
Simultaneous Update:
The updates that are applied to the database at the same time as they become effective in the real world.

107. What are the different types of JOIN operations?
Equi Join: This is the most common type of join, which involves only equality comparisons. The disadvantage of this type of join is that the joining column appears redundantly in the result, with the same values repeated in both copies.
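To keep the example in the JDBC context of this document, here is a minimal sketch of issuing an equi join; the EMP and DEPT tables and the connection details are assumptions made for the example.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class EquiJoinExample {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details - adjust for your environment
        Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@localhost:1521:ORCL", "scott", "tiger");
        Statement stmt = con.createStatement();

        // Equi join: rows are matched on equality of the DEPTNO columns
        ResultSet rs = stmt.executeQuery(
                "SELECT e.ename, d.dname FROM emp e, dept d WHERE e.deptno = d.deptno");
        while (rs.next()) {
            System.out.println(rs.getString("ename") + " works in " + rs.getString("dname"));
        }

        rs.close();
        stmt.close();
        con.close();
    }
}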

Count files in directory

/*
* CSCI 2410
* Meghan E. Hembree
* Friday, September 3, 2010, 11:00:00am
*
* Description: (Number of files in a directory) Write a program that
* prompts the user to enter a directory and displays the number of the
* files in the directory.
*
*/
//package exercise20_29;

import java.io.File;
import java.util.Scanner;
public class Count {
    public static void main(String[] args) {
        // Prompt the user to enter a directory or a file
        System.out.print("Enter a directory or a file: ");
        Scanner input = new Scanner(System.in);
        String file = input.nextLine();

        // Display the number of files found
        System.out.println(getCount(new File(file)) + " files");
    }

    // Recursively counts the regular files under the given file or directory
    public static long getCount(File file) {
        long count = 0;

        if (file.isDirectory()) {
            // All files and subdirectories (listFiles can return null if unreadable)
            File[] files = file.listFiles();
            if (files != null) {
                for (File f : files) {
                    // Recursive call
                    count += getCount(f);
                }
            }
        }
        // Base case: a plain file counts as one
        else if (file.isFile()) {
            count++;
        }

        return count;
    }
}