Tuesday, May 15, 2012

Oracle SQL & PL/SQL Expert - Part 2


Oracle 10g New Features: Flashback and RMAN
Flashback Database - This feature introduces the FLASHBACK DATABASE statement in SQL. It allows you to quickly bring your database to a prior point in time by undoing all of the changes that have taken place since that time. This operation is fast, because you do not need to restore the backups. This results in much less downtime following data corruption or human error.
Flashback Standby Database - This feature improves the switchover and failover time of a standby database. You no longer need to specify a log apply delay, because you can now roll back the standby database if an error occurs on the primary and is propagated to the standby.
Flashback Reinstantiation - This feature reduces the need to reinstantiate the old primary database following a failover. This in turn lets you restore full resiliency after a failure more quickly. You can do this by using the SQL statement FLASHBACK DATABASE to roll back the primary database in time to synchronize with the standby database.
Flashback Drop - Oracle now provides a way to restore tables that were dropped accidentally.
Flashback Table - This feature introduces the FLASHBACK TABLE statement in SQL, which lets you quickly recover a table to a previous point in time without restoring a backup.
Flashback Row History - Using undo data stored in the database, you can now view the changes to one or more rows along with all the metadata of the changes.
Flashback Transaction History - This feature lets you examine changes to the database at the transaction level. With flashback transaction history, you can diagnose problems, perform analysis, and audit transactions.
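As a rough illustration of how some of these features surface in SQL, the statements below sketch Flashback Row History, Flashback Table and Flashback Drop. Table and column names (emp, empno, sal) are hypothetical, and exact syntax may vary by release:

```sql
-- Flashback Row History: view past versions of rows using undo data.
SELECT versions_starttime, versions_operation, sal
FROM   emp VERSIONS BETWEEN TIMESTAMP
       SYSTIMESTAMP - INTERVAL '1' HOUR AND SYSTIMESTAMP
WHERE  empno = 7369;

-- Flashback Table: rewind a table without restoring a backup
-- (row movement must be enabled on the table first).
ALTER TABLE emp ENABLE ROW MOVEMENT;
FLASHBACK TABLE emp TO TIMESTAMP SYSTIMESTAMP - INTERVAL '10' MINUTE;

-- Flashback Drop: recover an accidentally dropped table from the recycle bin.
FLASHBACK TABLE emp TO BEFORE DROP;
```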
Here is a list of enhanced RMAN features:
Automated Channel Failover for Backup and Restore - Recovery Manager (RMAN) now automatically retries a failed backup or restore operation, reducing the risk of stranding you without a backup of the Oracle database because of an error.
Automated File Creation During Recovery - This feature enhances RMAN recovery by automatically creating and recovering datafiles that have never been backed up.
Simplified Backups to Disk - Image backups provide fast recovery by being readily usable. The Recovery Manager (RMAN) BACKUP command has been enhanced to perform image copy backups at the database, tablespace, and datafile level.
Proxy Copy Backup of Archivelogs - You can now back up archive logs by using the Recovery Manager (RMAN) Proxy Copy.
Incrementally Updated Backups - You can now apply a Recovery Manager (RMAN) incremental backup to a datafile image backup. This results in reduced recovery time, because fewer logs need to be applied, and reduced time to back up the database, because you do not always have to back up the whole database.
Simplified Recovery Through Resetlogs - You no longer have to back up your database following an incomplete recovery or an OPEN RESETLOGS operation. This is an enabling feature for Flashback Reinstantiation.
Full Database Begin Backup Command - It is no longer necessary to issue a separate command to place each tablespace in hot backup mode. You can now use the ALTER DATABASE statement to place all tablespaces in backup mode. Also, the BEGIN BACKUP command now runs faster than before.
Changes to the ALTER DATABASE END BACKUP Command - You can issue the ALTER DATABASE END BACKUP command when the database is open.
Change-Aware Incremental Backups - By using a new type of log file to track blocks that have changed in the database, Recovery Manager (RMAN) can avoid scanning the entire datafile during an incremental backup. Instead, the amount of data scanned is proportional to the amount of data changed.
Automated Disk-Based Backup and Recovery - This release supports automated disk-based backup and recovery. The result is a simplified and unified storage location for backups, archivelogs, and any other files needed for Oracle recovery. It also provides automatic deletion of the files after they have been successfully backed up by RMAN, and the equivalent of a disk cache for tape, which reduces the time needed to restore a file from tape. It reduces the risk of an out-of-space condition on disk by deleting files that are no longer necessary for successful database recovery.
RMAN Database Dropping and Deregistration - The new DROP DATABASE and UNREGISTER DATABASE RMAN commands remove the database and its entry from the RMAN recovery catalog.
Automated TSPITR Instantiation - This feature automatically creates the auxiliary instance needed to perform tablespace point-in-time recovery (TSPITR) and incorporates the RMAN TSPITR operations.
Simplified Recovery Manager Cataloging of Backup Files - You can now catalog RMAN proprietary backup metadata into a backup repository. If a backup is overwritten in the control file, or a backup file is moved to a new location on disk, then you can easily recatalog the backup metadata in the repository.
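To make a couple of these enhancements concrete, here is a hedged sketch of an incrementally updated backup strategy (run at the RMAN prompt) and the SQL commands for block change tracking and whole-database hot backup mode. The tag name and file path are made up for illustration:

```sql
-- RMAN: roll last night's level 1 incremental into the image copy,
-- then take a new incremental for the next cycle.
RUN {
  RECOVER COPY OF DATABASE WITH TAG 'nightly';
  BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'nightly' DATABASE;
}

-- SQL*Plus: enable change tracking so incrementals avoid full datafile scans.
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/u01/app/oracle/bct.f';

-- SQL*Plus: place all tablespaces in hot backup mode with one statement.
ALTER DATABASE BEGIN BACKUP;
ALTER DATABASE END BACKUP;
```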
regexp

Every day most of us deal with string functions in SQL - be it truncating a string, searching for a substring, or locating special characters. The regexp functions introduced in Oracle 10g help us achieve these tasks in a simpler and faster way.
1.    Validate a string for alphabets only
select case when regexp_like('Google' ,'^[[:alpha:]]{6}$') then 'Match Found' else 'No Match Found' end as output from dual;
Output: Match Found
In the above example we checked whether the source string consists of exactly six alphabetic characters, hence the match for “Google”.
Now let’s try to understand the pattern '^[[:alpha:]]{6}$'
^ marks the start of the string
[[:alpha:]] specifies alphabet class
{6} specifies the number of alphabets
$ marks the end of the string
2.    Validate a string for lower case alphabets only
select case when regexp_like('Terminator' ,'^([[:lower:]]{3,12})$') then 'Match Found' else 'No Match Found' end as output from dual;
Output: No Match Found
There is no match here because the initial capital “T” in “Terminator” falls outside the [[:lower:]] class.
regexp_replace, regexp_like, regexp_substr
With 10g, regular expressions are finally available natively in SQL; previously, similar pattern matching was only available through the owa_pattern package.
The new operators and functions are regexp_like, regexp_instr, regexp_substr and regexp_replace.
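A few quick sketches of the new functions, as simple SELECTs against DUAL (the patterns and strings are purely illustrative):

```sql
-- regexp_substr: extract the part of the string before the '@'.
SELECT REGEXP_SUBSTR('user@example.com', '[^@]+') AS username FROM dual;
-- returns 'user'

-- regexp_instr: position of the first digit in the string.
SELECT REGEXP_INSTR('abc123', '[[:digit:]]') AS first_digit_pos FROM dual;
-- returns 4

-- regexp_replace: collapse runs of two or more spaces into one.
SELECT REGEXP_REPLACE('one   two    three', ' {2,}', ' ') AS squeezed FROM dual;
-- returns 'one two three'
```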



Using an Integrity Constraint to Enforce a Phone Number Format
Regular expressions are a useful way to enforce integrity constraints. For example, suppose that you want to ensure that phone numbers are entered into the database in a standard format. Example 4-1 creates a contacts table and adds a check constraint to the p_number column to enforce the following format mask:
(XXX) XXX-XXXX
Example 4-1 Enforcing a Phone Number Format with Regular Expressions
CREATE TABLE contacts
(
  l_name    VARCHAR2(30),
  p_number  VARCHAR2(30)
    CONSTRAINT p_number_format     
      CHECK ( REGEXP_LIKE ( p_number, '^\(\d{3}\) \d{3}-\d{4}$' ) )
);

Table 4-6 explains the elements of the regular expression.
Table 4-6 Explanation of the Regular Expression Elements in Example 4-1

Element    Matches . . .
---------  -------------------------------------------------------------
^          The beginning of the string.
\(         A left parenthesis. The backslash (\) is an escape character
           indicating that the parenthesis following it is a literal
           rather than a grouping expression.
\d{3}      Exactly three digits.
\)         A right parenthesis (again escaped with a backslash).
(space)    A space character.
\d{3}      Exactly three digits.
-          A hyphen.
\d{4}      Exactly four digits.
$          The end of the string.

Example 4-2 shows a SQL script that attempts to insert phone numbers into the contacts table. Only the first two INSERT statements use a format that conforms to the p_number_format constraint; the third generates a check constraint error.
Example 4-2 insert_contacts.sql
-- first two statements use valid phone number format
INSERT INTO contacts (p_number)
  VALUES(  '(650) 555-5555'   );
INSERT INTO contacts (p_number)
  VALUES(  '(215) 555-3427'   );
-- remaining statement generates a check constraint error
INSERT INTO contacts (p_number)
  VALUES(  '650 555-5555'     );



Oracle autonomous transaction
An autonomous transaction is an independent transaction that is initiated by another transaction (the parent transaction). An autonomous transaction can modify data and commit or rollback independent of the state of the parent transaction.

The autonomous transaction must commit or roll back before it ends and the parent transaction resumes.

Autonomous transactions have been available since Oracle 8i.

An autonomous transaction is defined in the declaration section of a PL/SQL block - an anonymous block, function, procedure, object method or trigger - simply by including the following statement:

PRAGMA AUTONOMOUS_TRANSACTION;

Sample code:
PROCEDURE test_autonomous
IS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT ...
  COMMIT;
END test_autonomous;
Autonomous transactions can be used for logging in the database independent of the rollback/commit of the parent transaction.
Autonomous Transactions
  • Oracle can suspend a transaction and transfer execution control to an independent child transaction. This child transaction is called an autonomous transaction.
  • An autonomous transaction is completely independent of the calling transaction.
  • An autonomous transaction does not share resources, locks, or any commit dependencies with the main transaction.
  • Autonomous transactions can include just as much functionality as any other database transaction.
  • Autonomous transactions are useful for creating software components that can be reused in numerous applications.
  • One advantage of using an autonomous transaction is that DML can be executed and committed, even if the main transaction is rolled back.
Autonomous Transaction Syntax

declare
    pragma autonomous_transaction;
begin
    ...
    number of statements
    ...
    commit; -- (or rollback;) end of transaction 1
    ...
    number of statements
    ...
    commit; -- (or rollback;) end of transaction 2
end;


The AUTONOMOUS_TRANSACTION pragma is a PL/SQL compiler directive that marks a block as an autonomous transaction.
The following types of PL/SQL blocks can be defined as autonomous transactions:
  • Stored procedures and functions.
  • Local procedures and functions defined in a PL/SQL declaration block.
  • Packaged procedures and functions.
  • Type methods.
  • Top-level anonymous blocks.
The easiest way to understand autonomous transactions is to see them in action. To do this, we create a test table and populate it with two rows. Notice that the data is not committed.
CREATE TABLE at_test (
id NUMBER NOT NULL,
description VARCHAR2(50) NOT NULL
);

INSERT INTO at_test (id, description) VALUES (1, 'Description for 1');
INSERT INTO at_test (id, description) VALUES (2, 'Description for 2');

SELECT * FROM at_test;

ID DESCRIPTION
---------- --------------------------------------------------
1 Description for 1
2 Description for 2

2 rows selected.

SQL>
Next, we insert another 8 rows using an anonymous block declared as an autonomous transaction, which contains a commit statement.
DECLARE
PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
FOR i IN 3 .. 10 LOOP
INSERT INTO at_test (id, description)
VALUES (i, 'Description for ' || i);
END LOOP;
COMMIT;
END;
/

PL/SQL procedure successfully completed.

SELECT * FROM at_test;

ID DESCRIPTION
---------- --------------------------------------------------
1 Description for 1
2 Description for 2
3 Description for 3
4 Description for 4
5 Description for 5
6 Description for 6
7 Description for 7
8 Description for 8
9 Description for 9
10 Description for 10

10 rows selected.

SQL>
As expected, we now have 10 rows in the table. If we now issue a rollback statement we get the following result.
ROLLBACK;
SELECT * FROM at_test;

ID DESCRIPTION
---------- --------------------------------------------------
3 Description for 3
4 Description for 4
5 Description for 5
6 Description for 6
7 Description for 7
8 Description for 8
9 Description for 9
10 Description for 10

8 rows selected.

SQL>
The 2 rows inserted by our current session (transaction) have been rolled back, while the rows inserted by the autonomous transaction remain. The presence of the PRAGMA AUTONOMOUS_TRANSACTION compiler directive made the anonymous block run in its own transaction, so the internal commit statement did not affect the calling session. As a result, the rollback only affected the DML issued directly by the calling transaction.
Autonomous transactions are commonly used by error logging routines, where the error messages must be preserved, regardless of the commit/rollback status of the transaction. For example, the following table holds basic error messages.
CREATE TABLE error_logs (
id NUMBER(10) NOT NULL,
log_timestamp TIMESTAMP NOT NULL,
error_message VARCHAR2(4000),
CONSTRAINT error_logs_pk PRIMARY KEY (id)
);

CREATE SEQUENCE error_logs_seq;
We define a procedure to log error messages as an autonomous transaction.
CREATE OR REPLACE PROCEDURE log_errors (p_error_message IN VARCHAR2) AS
PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
INSERT INTO error_logs (id, log_timestamp, error_message)
VALUES (error_logs_seq.NEXTVAL, SYSTIMESTAMP, p_error_message);
COMMIT;
END;
/
The following code forces an error, which is trapped and logged.
BEGIN
INSERT INTO at_test (id, description)
VALUES (998, 'Description for 998');

-- Force invalid insert.
INSERT INTO at_test (id, description)
VALUES (999, NULL);
EXCEPTION
WHEN OTHERS THEN
log_errors (p_error_message => SQLERRM);
ROLLBACK;
END;
/

PL/SQL procedure successfully completed.

SELECT * FROM at_test WHERE id >= 998;

no rows selected

SELECT * FROM error_logs;

ID LOG_TIMESTAMP
---------- ---------------------------------------------------------------------------
ERROR_MESSAGE
----------------------------------------------------------------------------------------------------
1 28-FEB-2006 11:10:10.107625
ORA-01400: cannot insert NULL into ("TIM_HALL"."AT_TEST"."DESCRIPTION")


1 row selected.

SQL>
From this we can see that the LOG_ERRORS transaction was separate from the anonymous block. If it weren't, we would expect the first insert in the anonymous block to have been preserved by the commit statement in the LOG_ERRORS procedure.
Collection Types in PL/SQL
"Collection" means a program variable containing more than one value. The word "array" has a more specific meaning depending on your programming language and computer science background. According to the PL/SQL documentation at least, other languages provide arrays, sets, bags, linked lists and hash tables. The equivalent types in PL/SQL can all be referred to as "collections", and PL/SQL provides three of them:
Associative Array: TYPE t IS TABLE OF something INDEX BY PLS_INTEGER;
Nested Table: TYPE t IS TABLE OF something;
VARRAY: TYPE t IS VARRAY (123) OF something;
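The differences between the three become clearer in a short anonymous block; the type and variable names below are made up for illustration:

```sql
DECLARE
  TYPE aa_t IS TABLE OF VARCHAR2(20) INDEX BY PLS_INTEGER;  -- associative array
  TYPE nt_t IS TABLE OF VARCHAR2(20);                       -- nested table
  TYPE va_t IS VARRAY(3) OF VARCHAR2(20);                   -- varray

  aa aa_t;                      -- no constructor; just assign
  nt nt_t := nt_t('A','B');     -- must be initialised via its constructor
  va va_t := va_t('A','B','C'); -- bounded at 3 elements
BEGIN
  aa(100) := 'sparse';          -- indexes need not be consecutive
  nt.EXTEND;                    -- grow before assigning a new element
  nt(3) := 'C';
  DBMS_OUTPUT.PUT_LINE(aa(100) || ' ' || nt(3) || ' ' || va(3));
END;
/
```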

Collection types created in SQL

Collections are part of the SQL language in Oracle, so you can use them without even going into PL/SQL:
CREATE TYPE VARCHAR2_TT AS TABLE OF VARCHAR2(100)
/
Type created.

SELECT column_value AS val
FROM TABLE(VARCHAR2_TT('Apple','Banana','Apricot'))
WHERE column_value NOT LIKE 'A%';

VAL
--------------------
Banana

SQL vs PL/SQL collection types: summary

You can declare any type you like within PL/SQL, and you get a lot of flexibility that way, such as the use of PL/SQL-only language features; but remember that SQL does not know about PL/SQL, so you cannot use your PL/SQL-declared types in SQL queries.
The other main differences are listed below.
Scope: PL/SQL only - Associative Array
  • Declared only in PL/SQL code - no "CREATE OR REPLACE TYPE". SQL doesn't know anything about them.
  • No initialisation or extending required - just assign values to any arbitrary element; the indexes don't even have to be consecutive.
  • You can choose what to "index by" - PLS_INTEGER, BINARY_INTEGER or VARCHAR2.
  • No constructor - you must assign values explicitly.
  • Can't be treated as a table in queries, e.g. you cannot SELECT * FROM TABLE(myarray).

Scope: SQL and PL/SQL - Nested Table, VARRAY
  • Declared either in PL/SQL code or with "CREATE OR REPLACE TYPE".
  • Must be initialised before use, e.g. myarray mytype := mytype();
  • Have constructors - you can assign values using mytype('x','y','z');
  • Must be extended as required, e.g. myarray.EXTEND; to add each array element.
  • Can be treated as a table in queries, e.g. SELECT * FROM TABLE(myarray) (if created in SQL with CREATE TYPE).
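The PL/SQL-only conveniences of associative arrays can be sketched as follows; the type name, keys and prices are hypothetical, and note the last comment on the SQL restriction:

```sql
DECLARE
  TYPE price_t IS TABLE OF NUMBER INDEX BY VARCHAR2(20);
  prices price_t;
  k VARCHAR2(20);
BEGIN
  prices('apple')  := 1.50;   -- no initialisation or EXTEND needed
  prices('banana') := 0.75;   -- string keys, assigned in any order
  k := prices.FIRST;          -- iterate in key order
  WHILE k IS NOT NULL LOOP
    DBMS_OUTPUT.PUT_LINE(k || ' = ' || prices(k));
    k := prices.NEXT(k);
  END LOOP;
  -- SELECT * FROM TABLE(prices) would fail here: SQL cannot see price_t.
END;
/
```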

collection extensions in 10g

This article covers a range of new set operators, functions and conditions introduced in 10g. Briefly, these extensions enable us to work more efficiently and elegantly with collections, both in SQL and PL/SQL. Collections, based on user-defined types, have been available in Oracle since version 8 (note that a collection is a SQL type, not a PL/SQL table). They offer enormous potential for extending the PL/SQL language and there are many examples of how to use them available on the web and in forums.
In this article we will be looking at examples of the following:
  • Collection comparisons;
  • Collection functions such as SET, CARDINALITY and POWERMULTISET;
  • Collection conditions such as MEMBER and SUBMULTISET; and
  • Collection operators such as MULTISET EXCEPT and MULTISET UNION.
Note that most of the examples will be in PL/SQL. This is not because the new operators cannot be used in SQL (they are all primarily SQL extensions in fact) - it is because it is easier to see the results of the operators with a few DBMS_OUTPUT statements.

setup

First we'll create a simple collection type to use in the examples.
SQL> CREATE OR REPLACE TYPE varchar2_ntt AS TABLE OF VARCHAR2(4000);
2 /

Type created.

comparing collections

Prior to 10g, if we wished to compare the contents of one collection with another, we'd have to resort to looping through the collections in PL/SQL or using SQL techniques (such as TABLE() and MINUS for example). Now, however, collections can be compared for equality like any "regular" datatype (note that to compare collections of complex types, a MAP method is required. For an example, see the section on complex collection types at the end of this article). In the following example, we'll initialise three collections, two of which are equal, and then test them for equality under a range of scenarios.
SQL> DECLARE
2
3 nt0 varchar2_ntt := varchar2_ntt('A','B','C');
4 nt1 varchar2_ntt := varchar2_ntt('C','D','E');
5 nt2 varchar2_ntt := varchar2_ntt('C','D','E');
6
7 PROCEDURE compare (
8 test_in IN VARCHAR2,
9 ntl_in IN varchar2_ntt,
10 ntr_in IN varchar2_ntt
11 ) IS
12 BEGIN
13 DBMS_OUTPUT.PUT(RPAD('['||test_in||']',30) || '=> ');
14 IF ntl_in = ntr_in THEN
15 DBMS_OUTPUT.PUT_LINE( 'EQUAL' );
16 ELSE
17 DBMS_OUTPUT.PUT_LINE( 'NOT EQUAL' );
18 END IF;
19 END compare;
20
21 BEGIN
22
23 /* Compare populated collections... */
24 compare( '1: nt0->nt1, populated', nt0, nt1 );
25 compare( '2: nt1->nt2, populated', nt1, nt2 );
26
27 /* Compare collections after deletion... */
28 nt0.DELETE;
29 nt1.DELETE;
30 compare( '3: nt0->nt1, deleted', nt0, nt1 );
31
32 /* Compare newly initialised collections... */
33 nt0 := varchar2_ntt();
34 nt1 := varchar2_ntt();
35 compare( '4: nt0->nt1, initialised', nt0, nt1 );
36
37 END;
38 /
[1: nt0->nt1, populated] => NOT EQUAL
[2: nt1->nt2, populated] => EQUAL
[3: nt0->nt1, deleted] => EQUAL
[4: nt0->nt1, initialised] => EQUAL

PL/SQL procedure successfully completed.
Equality tests for collections in 10g also include the use of IN (which after all is just a convenient method of bundling many similar equality-OR tests into a shorter syntax). We can see this below. Note that the order of the elements is irrelevant. In the following example, collections nt1 and nt2 are equivalent, which is why the IN test returns TRUE.
SQL> DECLARE
2 nt0 varchar2_ntt := varchar2_ntt('A','B','C');
3 nt1 varchar2_ntt := varchar2_ntt('C','D','E');
4 nt2 varchar2_ntt := varchar2_ntt('E','C','D');
5 BEGIN
6 IF nt2 IN ( nt0, nt1 ) THEN
7 DBMS_OUTPUT.PUT_LINE( 'Collection nt2 is equal to at least one other.' );
8 END IF;
9 END;
10 /
Collection nt2 is equal to at least one other.

PL/SQL procedure successfully completed.

cardinality

In PL/SQL, it has always been trivial to get a count of elements in an array or collection, using the COUNT pseudo-method. The new CARDINALITY function in 10g now makes it trivial to count the elements of a collection in SQL. This function is logically equivalent to the COUNT pseudo-method. In the following examples we'll show simple examples of the function's usage in both PL/SQL and SQL.
SQL> DECLARE
2 nt varchar2_ntt := varchar2_ntt( 'A','B','C','C' );
3 BEGIN
4 DBMS_OUTPUT.PUT_LINE( 'Count = ' || nt.COUNT );
5 DBMS_OUTPUT.PUT_LINE( 'Cardinality = ' || CARDINALITY(nt) );
6 END;
7 /
Count = 4
Cardinality = 4

PL/SQL procedure successfully completed.
This time we see the CARDINALITY function being used in SQL, where the COUNT method cannot be.
SQL> SELECT CARDINALITY(nt)
2 FROM (
3 SELECT varchar2_ntt('A','B','C','C') AS nt
4 FROM dual
5 );

CARDINALITY(NT)
---------------
4

1 row selected.

SQL> SELECT nt.COUNT
2 FROM (
3 SELECT varchar2_ntt('A','B','C','C') AS nt
4 FROM dual
5 );
SELECT nt.COUNT
*
ERROR at line 1:
ORA-00904: "NT"."COUNT": invalid identifier
The fact that the CARDINALITY function is available to us in SQL means that we can also use it in filter predicates.
SQL> SELECT *
2 FROM (
3 SELECT varchar2_ntt('A','B') AS nt
4 FROM dual
5 UNION ALL
6 SELECT varchar2_ntt('A','B','C','D') AS nt
7 FROM dual
8 )
9 WHERE CARDINALITY(nt) > 2;

NT
-----------------------------------
VARCHAR2_NTT('A', 'B', 'C', 'D')

1 row selected.

set

The new SET function in 10g returns the distinct elements of a collection (i.e. it removes duplicates, like DISTINCT). For simple collections such as the one we have used throughout this article, it's very straightforward, as the following example demonstrates.
SQL> SELECT nt
2 , SET(nt) AS nt_set
3 FROM (
4 SELECT varchar2_ntt('A','B','C','C') AS nt
5 FROM dual
6 );

NT NT_SET
----------------------------------- ------------------------------
VARCHAR2_NTT('A', 'B', 'C', 'C') VARCHAR2_NTT('A', 'B', 'C')

1 row selected.
SQL> SELECT nt.column_value AS distinct_element
2 FROM TABLE(SET(varchar2_ntt('A','B','C','C'))) nt;

DISTINCT_ELEMENT
----------------
A
B
C

3 rows selected.

is (not) a set

In addition to turning collections into valid sets, we can also test to see whether a collection is already a set, as the following example shows. Note that a restriction with this condition is that the collection must be initialised and not NULL.
SQL> DECLARE
2
3 nt_null varchar2_ntt;
4 nt_initialised varchar2_ntt := varchar2_ntt();
5 nt_set varchar2_ntt := varchar2_ntt('A','B','C');
6 nt_not_set varchar2_ntt := varchar2_ntt('A','B','C','C');
7
8 PROCEDURE test (
9 test_in IN VARCHAR2,
10 result_in IN BOOLEAN
11 ) IS
12 BEGIN
13 DBMS_OUTPUT.PUT( RPAD('[' || test_in || ']',30) || '=> ');
14 DBMS_OUTPUT.PUT_LINE(CASE result_in
15 WHEN TRUE
16 THEN 'TRUE'
17 WHEN FALSE
18 THEN 'FALSE'
19 ELSE 'NULL'
20 END);
21 END test;
22
23 BEGIN
24
25 /* First test the NULL collection... */
26 test( 'nt_null IS A SET', nt_null IS A SET );
27 test( 'nt_null IS NOT A SET', nt_null IS NOT A SET );
28
29 /* Now test the valid collections... */
30 test( 'nt_initialised IS A SET', nt_initialised IS A SET );
31 test( 'nt_set IS A SET', nt_set IS A SET );
32 test( 'nt_not_set IS A SET', nt_not_set IS A SET );
33 test( 'SET(nt_not_set) IS A SET', SET(nt_not_set) IS A SET );
34
35 END;
36 /
[nt_null IS A SET] => NULL
[nt_null IS NOT A SET] => NULL
[nt_initialised IS A SET] => TRUE
[nt_set IS A SET] => TRUE
[nt_not_set IS A SET] => FALSE
[SET(nt_not_set) IS A SET] => TRUE

PL/SQL procedure successfully completed.

cardinality revisited

We have already seen the new CARDINALITY function above. The term "cardinality" can sometimes be ambiguous in Oracle. It is sometimes used to describe the count of a domain of values (for example, when describing bitmap indexes) but most of the time it is used to describe a straight count (think CBO). If we wish to get the true cardinality of a collection, we can combine CARDINALITY and SET (for SQL old-timers, this is the logical equivalent of COUNT(DISTINCT)). We can demonstrate this easily as follows.
SQL> DECLARE
2 nt varchar2_ntt := varchar2_ntt( 'A','B','C','C' );
3 BEGIN
4 DBMS_OUTPUT.PUT_LINE( 'Cardinality = ' || CARDINALITY( nt ) );
5 DBMS_OUTPUT.PUT_LINE( 'True cardinality = ' || CARDINALITY( SET( nt ) ) );
6 END;
7 /
Cardinality = 4
True cardinality = 3

PL/SQL procedure successfully completed.

is (not) empty

As we saw with CARDINALITY, we can now explicitly test for collections with a specific number of elements. We can also test for empty collections using the IS (NOT) EMPTY condition. As with the IS (NOT) A SET examples earlier in this article, the restriction on NULL collections applies. This is demonstrated below (remember that these are available in SQL).
SQL> DECLARE
2
3 nt_null varchar2_ntt;
4 nt_initialised varchar2_ntt := varchar2_ntt();
5 nt_populated varchar2_ntt := varchar2_ntt('A','B','C');
6
7 PROCEDURE test (
8 test_in IN VARCHAR2,
9 result_in IN BOOLEAN
10 ) IS
11 BEGIN
12 DBMS_OUTPUT.PUT( RPAD('[' || test_in || ']',40) || '=> ');
13 DBMS_OUTPUT.PUT_LINE(CASE
14 WHEN result_in
15 THEN 'TRUE'
16 ELSE 'FALSE'
17 END);
18 END test;
19
20 BEGIN
21
22 /* First test the NULL collection... */
23 test( 'nt_null IS EMPTY', nt_null IS EMPTY );
24 test( 'nt_null IS NOT EMPTY', nt_null IS NOT EMPTY );
25
26
27 /* Test the valid collections... */
28 test( 'nt_initialised IS EMPTY', nt_initialised IS EMPTY );
29 test( 'nt_populated IS NOT EMPTY', nt_populated IS NOT EMPTY );
30
31 /* Delete the populated collection and repeat test for emptiness... */
32 nt_populated.DELETE;
33 test( 'nt_populated (deleted) IS NOT EMPTY', nt_populated IS NOT EMPTY );
34
35 END;
36 /
[nt_null IS EMPTY] => FALSE
[nt_null IS NOT EMPTY] => FALSE
[nt_initialised IS EMPTY] => TRUE
[nt_populated IS NOT EMPTY] => TRUE
[nt_populated (deleted) IS NOT EMPTY] => FALSE

PL/SQL procedure successfully completed.
We can see from the example that a NULL collection is neither empty nor not empty. This is something to consider when you pass potentially-NULL collections around as parameters for use in IF/CASE tests or SQL predicates.

submultiset conditions

Submultiset conditions belong to the IN family of SQL conditions and enable us to compare a collection or expression against a subset of another collection (a logical equivalent of "IF collection_A/expression_A IN (elements of collection_B)". The MEMBER OF condition determines whether a literal (or bind variable) exists within a collection and SUBMULTISET OF compares two collections. The following example demonstrates both methods for submultiset comparison.
SQL> DECLARE
2
3 nt1 varchar2_ntt := varchar2_ntt( 'C','D','E' );
4 nt2 varchar2_ntt := varchar2_ntt( 'C','D','E','E' );
5 nt3 varchar2_ntt := varchar2_ntt( 'A','B','C' );
6
7 PROCEDURE test (
8 test_in IN VARCHAR2,
9 result_in IN BOOLEAN
10 ) IS
11 BEGIN
12 DBMS_OUTPUT.PUT( RPAD('[' || test_in || ']',30) || '=> ');
13 DBMS_OUTPUT.PUT_LINE(CASE
14 WHEN result_in
15 THEN 'TRUE'
16 ELSE 'FALSE'
17 END);
18 END test;
19
20 BEGIN
21
22 /* SUBMULTISET tests... */
23 test( 'nt1 SUBMULTISET OF nt2', nt1 SUBMULTISET OF nt2 );
24 test( 'nt2 SUBMULTISET OF nt3', nt2 SUBMULTISET OF nt3 );
25 test( 'nt2 NOT SUBMULTISET OF nt3', nt2 NOT SUBMULTISET OF nt3 );
26
27 /* MEMBER OF tests... */
28 test( 'D MEMBER OF nt1', 'D' MEMBER OF nt1 );
29 test( 'F MEMBER OF nt3', 'F' MEMBER OF nt3 );
30
31 END;
32 /
[nt1 SUBMULTISET OF nt2] => TRUE
[nt2 SUBMULTISET OF nt3] => FALSE
[nt2 NOT SUBMULTISET OF nt3] => TRUE
[D MEMBER OF nt1] => TRUE
[F MEMBER OF nt3] => FALSE

PL/SQL procedure successfully completed.

multiset operators

Oracle 10g has introduced a number of MULTISET operators. These combine two or more collections of the same type and return a collection that satisfies the operator. In high-level terms, these can largely be considered as equivalent to the SQL SET operators (MINUS, UNION [ALL], INTERSECT), but limited to collections rather than sets of data resulting from SQL queries. The MULTISET operators are listed below with their SQL equivalents where applicable.
Multiset Operator              SQL Set Equivalent
-----------------------------  ------------------
MULTISET UNION DISTINCT        UNION
MULTISET UNION                 UNION ALL
MULTISET UNION ALL             UNION ALL
MULTISET INTERSECT             INTERSECT
MULTISET INTERSECT DISTINCT    N/A
MULTISET EXCEPT                MINUS
MULTISET EXCEPT DISTINCT       N/A
The following PL/SQL block demonstrates each of the MULTISET operators by displaying the resulting collection from each operation.
SQL> DECLARE
2
3 nt0 varchar2_ntt := varchar2_ntt('A','B','C','C');
4 nt1 varchar2_ntt := varchar2_ntt('C','C','D','E','E');
5
6 PROCEDURE display (
7 type_in IN VARCHAR2,
8 nt_in IN varchar2_ntt
9 ) IS
10 i PLS_INTEGER;
11 BEGIN
12 DBMS_OUTPUT.PUT( CHR(10) || RPAD(type_in,50,'.') );
13 i := nt_in.FIRST;
14 WHILE i IS NOT NULL LOOP
15 DBMS_OUTPUT.PUT( nt_in(i) || ' ' );
16 i := nt_in.NEXT(i);
17 END LOOP;
18 DBMS_OUTPUT.NEW_LINE;
19 END display;
20
21 BEGIN
22
23 display('MULTISET UNION',
24 nt0 MULTISET UNION nt1 );
25
26 display('MULTISET UNION ALL (SAME AS MULTISET UNION)',
27 nt0 MULTISET UNION ALL nt1 );
28
29 display('MULTISET UNION DISTINCT',
30 nt0 MULTISET UNION DISTINCT nt1 );
31
32 display('MULTISET EXCEPT (nt0 -> nt1)',
33 nt0 MULTISET EXCEPT nt1 );
34
35 display('MULTISET EXCEPT (nt1 -> nt0)',
36 nt1 MULTISET EXCEPT nt0 );
37
38 display('MULTISET EXCEPT DISTINCT (nt1 -> nt0)',
39 nt1 MULTISET EXCEPT DISTINCT nt0 );
40
41 display('MULTISET INTERSECT',
42 nt0 MULTISET INTERSECT nt1 );
43
44 display('MULTISET INSERSECT DISTINCT',
45 nt0 MULTISET INTERSECT DISTINCT nt1 );
46
47 display('SET OF AN INTERSECT (SAME AS INTERSECT DISTINCT)',
48 SET( nt0 MULTISET INTERSECT nt1 ) );
49
50 END;
51 /

MULTISET UNION....................................A B C C C C D E E

MULTISET UNION ALL (SAME AS MULTISET UNION).......A B C C C C D E E

MULTISET UNION DISTINCT...........................A B C D E

MULTISET EXCEPT (nt0 -> nt1)......................A B

MULTISET EXCEPT (nt1 -> nt0)......................D E E

MULTISET EXCEPT DISTINCT (nt1 -> nt0).............D E

MULTISET INTERSECT................................C C

MULTISET INSERSECT DISTINCT.......................C

SET OF AN INTERSECT (SAME AS INTERSECT DISTINCT)..C

PL/SQL procedure successfully completed.
The following examples simply demonstrate that MULTISETs are available in SQL.
SQL> SELECT *
2 FROM TABLE( varchar2_ntt('A','B','C')
3 MULTISET UNION DISTINCT
4 varchar2_ntt('B','C','D') );

COLUMN_VALUE
------------------------------
A
B
C
D

4 rows selected.

SQL> SELECT varchar2_ntt('A','B','C')
2 MULTISET EXCEPT
3 varchar2_ntt('B','C','D') AS multiset_except
4 FROM dual;

MULTISET_EXCEPT
------------------------------
VARCHAR2_NTT('A')

1 row selected.

powermultiset functions

For the completists amongst us, the POWERMULTISET functions enable us to create a range of collections from a single input collection. Unlike all other new features in this article, the POWERMULTISET functions are available in SQL only. There are two functions in this "family", as follows.
  • POWERMULTISET
  • POWERMULTISET_BY_CARDINALITY
The POWERMULTISET function takes an input collection as an argument and returns a "collection of collections" that represent every unique multiset of data possible from the input. Clear?!? It can be quite difficult to imagine at first, but suppose we have a collection comprising elements (A,B). There are three unique multisets of data possible from this: (A), (B), (A,B). Given this input, therefore, POWERMULTISET would return a collection with three elements; where each element is itself a collection.
Throughout this article, we've used a simple collection type of VARCHAR2(4000) to represent our individual sets of data. For POWERMULTISET to return a collection of these multisets from a given input, we must create a new type "over the top of it" (a TYPE of TYPE, if you like). This is quite simple as the following demonstrates.
SQL> CREATE TYPE varchar2_ntts AS TABLE OF varchar2_ntt;
2 /

Type created.
We now have a collection type that represents multiple instances of our original collection type. We can now demonstrate POWERMULTISET, as follows. To keep the output short, we'll return a collection of all multisets available from an input collection of just three elements.
SQL> SELECT CAST(
2 POWERMULTISET(
3 varchar2_ntt('A','B','C')) AS varchar2_ntts) AS pwrmltset
4 FROM dual;

PWRMLTSET
--------------------------------------------------------------------------------
VARCHAR2_NTTS(VARCHAR2_NTT('A'), VARCHAR2_NTT('B'), VARCHAR2_NTT('A', 'B'), VARC
HAR2_NTT('C'), VARCHAR2_NTT('A', 'C'), VARCHAR2_NTT('B', 'C'), VARCHAR2_NTT('A',
'B', 'C'))
1 row selected.
As a collection in SQL, this is probably of little use to most of us. We are more likely to use POWERMULTISET in a TABLE expression, if at all. For this, the "collection type of collection type" (VARCHAR2_NTTS in the above example) is not required. The following example drops the VARCHAR2_NTTS type and demonstrates an alternative invocation of POWERMULTISET.
SQL> DROP TYPE varchar2_ntts;

Type dropped.

SQL> SELECT *
2 FROM TABLE(POWERMULTISET(varchar2_ntt('A','B','C')));

COLUMN_VALUE
------------------------------
VARCHAR2_NTT('A')
VARCHAR2_NTT('B')
VARCHAR2_NTT('A', 'B')
VARCHAR2_NTT('C')
VARCHAR2_NTT('A', 'C')
VARCHAR2_NTT('B', 'C')
VARCHAR2_NTT('A', 'B', 'C')

7 rows selected.
The POWERMULTISET_BY_CARDINALITY function takes this one stage further by enabling us to specify the cardinality of the multisets that we wish to return. In the following example, we'll request all multisets of three elements that can be derived from an input collection of five elements. Note that in this example, we have a duplicate element to demonstrate that the POWERMULTISET functions deal with multisets and not sets (i.e. not distinct values).
SQL> SELECT *
2 FROM TABLE(
3 POWERMULTISET_BY_CARDINALITY(
4 varchar2_ntt('A','B','C','D','D'), 3));

COLUMN_VALUE
------------------------------
VARCHAR2_NTT('A', 'B', 'C')
VARCHAR2_NTT('A', 'B', 'D')
VARCHAR2_NTT('A', 'C', 'D')
VARCHAR2_NTT('B', 'C', 'D')
VARCHAR2_NTT('A', 'B', 'D')
VARCHAR2_NTT('A', 'C', 'D')
VARCHAR2_NTT('B', 'C', 'D')
VARCHAR2_NTT('A', 'D', 'D')
VARCHAR2_NTT('B', 'D', 'D')
VARCHAR2_NTT('C', 'D', 'D')

10 rows selected.

working with complex collections

So far we've been using a simple collection type for all examples. Furthermore, all examples work equally in SQL or PL/SQL. The same is not true for complex collection types (i.e. based on object types with multiple attributes). For complex collections to be compared to other collections or be used with any of the SUBMULTISET conditions, MULTISET operators or POWERMULTISET and SET functions, we must enable their elements to be sorted. We do this by including a single MAP order method in the type definition. In the following example, we'll create a complex type without a MAP order method and see how Oracle resolves the sorting when we call the SET function.
SQL> CREATE TYPE complex_ot AS OBJECT
2 ( attr_one VARCHAR2(10)
3 , attr_two VARCHAR2(10)
4 , attr_three NUMBER );
5 /

Type created.

SQL> CREATE TYPE complex_ntt AS TABLE OF complex_ot;
2 /

Type created.
First we'll call the SET function in a simple SQL statement.
SQL> SELECT *
2 FROM TABLE(
3 SET( complex_ntt(complex_ot('data', 'more data', 1),
4 complex_ot('data', 'some data', 2),
5 complex_ot('data', 'dupe data', 3),
6 complex_ot('data', 'dupe data', 3)) ));

ATTR_ONE ATTR_TWO ATTR_THREE
---------- ---------- ----------
data more data 1
data some data 2
data dupe data 3

3 rows selected.
Interestingly, we have the correct answer without a MAP method. It appears as though SQL "understands" our complex type. We can now test the same types in a simple PL/SQL block.
SQL> DECLARE
2 nt complex_ntt := complex_ntt(complex_ot('data', 'more data', 1),
3 complex_ot('data', 'some data', 2),
4 complex_ot('data', 'dupe data', 3),
5 complex_ot('data', 'dupe data', 3));
6 BEGIN
7 DBMS_OUTPUT.PUT_LINE( 'Complex collection has ' ||
8 TO_CHAR(CARDINALITY(SET(nt))) ||
9 ' distinct elements.' );
10 END;
11 /
TO_CHAR(CARDINALITY(SET(nt))) ||
*
ERROR at line 8:
ORA-06550: line 8, column 46:
PLS-00306: wrong number or types of arguments in call to 'SET'
ORA-06550: line 7, column 4:
PL/SQL: Statement ignored
In PL/SQL, Oracle cannot determine the SET from the complex collection (although the error message is not particularly helpful in this instance). This time we'll create the requisite MAP method and try again.
SQL> CREATE TYPE complex_ot AS OBJECT
2 ( attr_one VARCHAR2(10)
3 , attr_two VARCHAR2(10)
4 , attr_three NUMBER(3)
5 , MAP MEMBER FUNCTION sort_key RETURN VARCHAR2 );
6 /

Type created.

SQL> CREATE TYPE BODY complex_ot AS
2 MAP MEMBER FUNCTION sort_key RETURN VARCHAR2 IS
3 BEGIN
4 RETURN LPAD(attr_one,10) ||
5 LPAD(attr_two,10) ||
6 TO_CHAR(attr_three,'fm000');
7 END;
8 END;
9 /

Type body created.

SQL> CREATE TYPE complex_ntt AS TABLE OF complex_ot;
2 /

Type created.

SQL> DECLARE
2 nt complex_ntt := complex_ntt(complex_ot('data', 'more data', 1),
3 complex_ot('data', 'some data', 2),
4 complex_ot('data', 'dupe data', 3),
5 complex_ot('data', 'dupe data', 3));
6 BEGIN
7 DBMS_OUTPUT.PUT_LINE( 'Complex collection has ' ||
8 TO_CHAR(CARDINALITY(SET(nt))) ||
9 ' distinct elements.' );
10 END;
11 /
Complex collection has 3 distinct elements.

PL/SQL procedure successfully completed.


Dynamic SQL



A Simple Dynamic Query
There are different ways of writing dynamic SQL. The easiest approach is to use EXECUTE IMMEDIATE in a PL/SQL block.
Using REF CURSORs is another method.
declare
ename varchar2(100);
begin
execute immediate 'select FIRST_NAME from employees where employee_id=100' INTO ename;
end;
/
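For the REF CURSOR approach mentioned above, a minimal sketch might look like the following (this assumes the standard HR employees table used elsewhere on this page):

```sql
declare
  TYPE emp_rc IS REF CURSOR;   -- weak ref cursor type
  rc    emp_rc;
  ename varchar2(100);
begin
  -- open the cursor for a dynamically built query with a bind variable
  OPEN rc FOR 'select FIRST_NAME from employees where DEPARTMENT_ID = :dept' USING 90;
  LOOP
    FETCH rc INTO ename;
    EXIT WHEN rc%NOTFOUND;
    DBMS_output.put_line(ename);
  END LOOP;
  CLOSE rc;
end;
/
```

Unlike EXECUTE IMMEDIATE ... INTO, a REF CURSOR lets you fetch an arbitrary number of rows one at a time.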

Catching Errors
Using the standard built-in exceptions or user-defined exceptions, it is possible to capture errors during query execution.
declare
ename varchar2(100);
begin
execute immediate 'select FIRST_NAME from employees where DEPARTMENT_ID=90' INTO ename;
exception
WHEN NO_DATA_FOUND then DBMS_output.put_line('No Data Available');
WHEN TOO_MANY_ROWS then DBMS_output.put_line('More than one row returned');
end;

Using Bind Variables
It is possible to construct a query using bind variables. When you bind in a value, the SQL string itself does not contain the value; it contains only the placeholder name. Therefore, you can bind different values to the same SQL statement without changing the statement. Since it's the same statement, your application is more likely to be able to take advantage of the pre-parsed cursors that are cached in the System Global Area (SGA) of the database. The binding can be done as follows:
declare
ename varchar2(100);
v_empid number := 100;
begin
execute immediate 'select FIRST_NAME from employees where employee_id=:empid' INTO ename USING v_empid;
exception
WHEN NO_DATA_FOUND then DBMS_output.put_line('No Data Available');
WHEN TOO_MANY_ROWS then DBMS_output.put_line('More than one row returned');
end;
/

Using Collections
If the query fetches more than one row, the output should be directed into a collection with BULK COLLECT. Note that BULK COLLECT does not raise NO_DATA_FOUND when no rows are returned; it simply leaves the collection empty, so check the collection's COUNT attribute instead.
declare
TYPE varchartab is TABLE OF varchar2(100);
enametab varchartab;
begin
execute immediate 'select FIRST_NAME from employees where DEPARTMENT_ID=90' BULK COLLECT INTO enametab;
exception
WHEN NO_DATA_FOUND then DBMS_output.put_line('No Data Available');
WHEN TOO_MANY_ROWS then DBMS_output.put_line('More than one row returned');
end;
/

Exception Handling


In this section we will discuss the following:
1) What is Exception Handling?
2) Structure of Exception Handling.
3) Types of Exceptions.

1) What is Exception Handling?

PL/SQL provides a feature to handle the exceptions that occur in a PL/SQL block, known as exception handling. Using exception handling we can test the code and prevent it from exiting abruptly. When an exception occurs, a message that explains its cause is received.
A PL/SQL exception message consists of three parts:
1) Type of exception
2) An error code
3) A message
By handling exceptions we can ensure that a PL/SQL block does not exit abruptly.
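The error code and message parts can be inspected in a handler with the built-in functions SQLCODE and SQLERRM, for example:

```sql
DECLARE
  n NUMBER;
BEGIN
  n := 1 / 0;                                      -- raises ZERO_DIVIDE
EXCEPTION
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE('Code: '    || SQLCODE);  -- e.g. -1476
    DBMS_OUTPUT.PUT_LINE('Message: ' || SQLERRM);
END;
/
```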

2) Structure of Exception Handling.

The general syntax for coding the exception section:
DECLARE
Declaration section
BEGIN
Execution section
EXCEPTION
WHEN ex_name1 THEN
-Error handling statements
WHEN ex_name2 THEN
-Error handling statements
WHEN Others THEN
-Error handling statements
END;
General PL/SQL statements can be used in the exception block.
When an exception is raised, Oracle searches for an appropriate exception handler in the exception section. In the example above, if the error raised is 'ex_name1', then the error is handled according to the statements under it. Since it is not possible to determine all the possible runtime errors during testing of the code, the 'WHEN Others' handler is used to manage the exceptions that are not explicitly handled. Only one exception can be raised in a block, and control does not return to the execution section after the error is handled.
If there are nested PL/SQL blocks like this:
DECLARE
Declaration section
BEGIN
DECLARE
Declaration section
BEGIN
Execution section
EXCEPTION
Exception section
END;
EXCEPTION
Exception section
END;
In the above case, if the exception is raised in the inner block it should be handled in the exception block of the inner PL/SQL block else the control moves to the Exception block of the next upper PL/SQL Block. If none of the blocks handle the exception the program ends abruptly with an error.
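A concrete sketch of this propagation:

```sql
BEGIN
  BEGIN
    RAISE NO_DATA_FOUND;          -- raised in the inner block
  EXCEPTION
    WHEN TOO_MANY_ROWS THEN       -- does not match, so the exception
      NULL;                       -- propagates to the outer block
  END;
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    DBMS_OUTPUT.PUT_LINE('Handled in the outer block');
END;
/
```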

3) Types of Exceptions.

There are three types of exceptions:
a) Named System Exceptions
b) Unnamed System Exceptions
c) User-defined Exceptions

a) Named System Exceptions

System exceptions are raised automatically by Oracle when a program violates an RDBMS rule. Some system exceptions are raised frequently, so they are pre-defined and given a name in Oracle; these are known as named system exceptions.
For example: NO_DATA_FOUND and ZERO_DIVIDE are named system exceptions.
Named system exceptions are:
1) Not Declared explicitly,
2) Raised implicitly when a predefined Oracle error occurs,
3) caught by referencing the standard name within an exception-handling routine.
Exception Name       Reason                                                   Error Number
CURSOR_ALREADY_OPEN  You open a cursor that is already open.                  ORA-06511
INVALID_CURSOR       You perform an invalid cursor operation, such as         ORA-01001
                     closing a cursor or fetching from a cursor that
                     is not open.
NO_DATA_FOUND        A SELECT...INTO statement returns no rows.               ORA-01403
TOO_MANY_ROWS        A SELECT...INTO or fetch returns more than one row.      ORA-01422
ZERO_DIVIDE          You attempt to divide a number by zero.                  ORA-01476
For example: suppose a NO_DATA_FOUND exception is raised in a procedure; we can write code to handle the exception as given below.
BEGIN
Execution section
EXCEPTION
WHEN NO_DATA_FOUND THEN
dbms_output.put_line ('A SELECT...INTO did not return any row.');
END;

b) Unnamed System Exceptions

System exceptions for which Oracle does not provide a name are known as unnamed system exceptions. These exceptions do not occur frequently. They have a code and an associated message.
There are two ways to handle unnamed system exceptions:
1. By using the WHEN OTHERS exception handler, or
2. By associating the exception code with a name and using it as a named exception.
We can assign a name to an unnamed system exception using the pragma EXCEPTION_INIT. EXCEPTION_INIT associates a predefined Oracle error number with a programmer-defined exception name.
Steps to be followed to use unnamed system exceptions:
• They are raised implicitly.
• If they are not handled by WHEN OTHERS, they must be handled explicitly.
• To handle such an exception explicitly, declare it using the pragma EXCEPTION_INIT as shown below, and reference the user-defined exception name in the exception section.
The general syntax to declare unnamed system exception using EXCEPTION_INIT is:
DECLARE
exception_name EXCEPTION;
PRAGMA EXCEPTION_INIT (exception_name, Err_code);
BEGIN
Execution section
EXCEPTION
WHEN exception_name THEN
handle the exception
END;
For example: let's consider the product table and order_items table from the SQL joins section.
Here product_id is a primary key in the product table and a foreign key in the order_items table.
If we try to delete a product_id from the product table while it has child records in the order_items table, an exception is thrown with Oracle error code -2292.
We can give this exception a name and handle it in the exception section as given below.
DECLARE
Child_rec_exception EXCEPTION;
PRAGMA
EXCEPTION_INIT (Child_rec_exception, -2292);
BEGIN
Delete FROM product where product_id= 104;
EXCEPTION
WHEN Child_rec_exception
THEN Dbms_output.put_line('Child records are present for this product_id.');
END;
/

c) User-defined Exceptions

Apart from system exceptions, we can explicitly define exceptions based on business rules. These are known as user-defined exceptions.
Steps to be followed to use user-defined exceptions:
• They should be explicitly declared in the declaration section.
• They should be explicitly raised in the Execution Section.
• They should be handled by referencing the user-defined exception name in the exception section.
For example: let's consider the product table and order_items table from the SQL joins section to explain user-defined exceptions.
Let's create a business rule: if the total number of units of any particular product sold is more than 20, then it is a huge quantity and a special discount should be provided.
DECLARE
huge_quantity EXCEPTION;
CURSOR product_quantity is
SELECT p.product_name as name, sum(o.total_units) as units
FROM order_items o, product p
WHERE o.product_id = p.product_id
GROUP BY p.product_name;
quantity order_items.total_units%type;
up_limit CONSTANT order_items.total_units%type := 20;
message VARCHAR2(150);
BEGIN
FOR product_rec in product_quantity LOOP
quantity := product_rec.units;
IF quantity > up_limit THEN
message := 'The number of units of product ' || product_rec.name ||
' is more than 20. Special discounts should be provided.' ||
' Rest of the records are skipped.';
RAISE huge_quantity;
ELSIF quantity < up_limit THEN
message := 'The number of units is below the discount limit.';
END IF;
dbms_output.put_line (message);
END LOOP;
EXCEPTION
WHEN huge_quantity THEN
dbms_output.put_line (message);
END;
/

RAISE_APPLICATION_ERROR ( )

RAISE_APPLICATION_ERROR is a built-in procedure in Oracle that is used to display user-defined error messages along with an error number in the range -20000 to -20999.
Whenever a message is displayed using RAISE_APPLICATION_ERROR, all previous transactions which are not committed within the PL/SQL Block are rolled back automatically (i.e. change due to INSERT, UPDATE, or DELETE statements).
RAISE_APPLICATION_ERROR raises an exception but does not handle it.
RAISE_APPLICATION_ERROR is used for the following reasons,
a) to create a unique id for a user-defined exception.
b) to make the user-defined exception look like an Oracle error.
The General Syntax to use this procedure is:
RAISE_APPLICATION_ERROR (error_number, error_message);

• The Error number must be between -20000 and -20999
• The Error_message is the message you want to display when the error occurs.
Steps to be followed to use the RAISE_APPLICATION_ERROR procedure:
1. Declare a user-defined exception in the declaration section.
2. Raise the user-defined exception based on a specific business rule in the execution section.
3. Finally, catch the exception and link the exception to a user-defined error number in RAISE_APPLICATION_ERROR.
Using the above example we can display an error message using RAISE_APPLICATION_ERROR.
DECLARE
huge_quantity EXCEPTION;
CURSOR product_quantity is
SELECT p.product_name as name, sum(o.total_units) as units
FROM order_items o, product p
WHERE o.product_id = p.product_id
GROUP BY p.product_name;
quantity order_items.total_units%type;
up_limit CONSTANT order_items.total_units%type := 20;
message VARCHAR2(50);
BEGIN
FOR product_rec in product_quantity LOOP
quantity := product_rec.units;
IF quantity > up_limit THEN
RAISE huge_quantity;
ELSIF quantity < up_limit THEN
message := 'The number of units is below the discount limit.';
END IF;
Dbms_output.put_line (message);
END LOOP;
EXCEPTION
WHEN huge_quantity THEN
raise_application_error(-20100, 'The number of units is above the discount limit.');
END;
/

Oracle's explain plan
Whenever you read or write data in Oracle, you do so by issuing an SQL statement. One of Oracle's tasks when it receives such a statement is to build a query execution plan. An execution plan defines how Oracle finds or writes the data. For example, an important decision Oracle has to make is whether it uses indexes or not, and if there are several indexes, which of them to use. All this is contained in an execution plan.
If you want to explore such an execution plan, Oracle provides the SQL statement EXPLAIN PLAN to determine it.
The general syntax of EXPLAIN PLAN is:

explain plan for your-precious-sql-statement;

If you do an EXPLAIN PLAN, Oracle will analyze the statement and fill a special table with the execution plan for that statement. You can indicate which table is to be filled with the following SQL command:

explain plan into table_name for your-precious-sql-statement;

If you omit the INTO TABLE_NAME clause, Oracle fills a table named PLAN_TABLE by default.
The Plan Table
The plan table is the table that Oracle fills when you have it explain an execution plan for an SQL statement. You must make sure such a plan table exists. Oracle ships with the script UTLXPLAN.SQL, which creates this table, named PLAN_TABLE (the default name used by EXPLAIN PLAN). If you like, however, you can choose any other name for the plan table, as long as you have been granted INSERT on it and it has all the required fields.
The fields (attributes) within the plan table
Arguably, the most important fields within the plan table are operation, option, object_name, id, and parent_id. The pair operation and object_name define what operation would be done on (or with) object_name. If an operation has an id which other operations have as parent_id, it means the other operations feed their result to the parent.
Possible values for operation are:

DELETE STATEMENT
INSERT STATEMENT
SELECT STATEMENT
UPDATE STATEMENT
AND-EQUAL
CONNECT BY
CONCATENATION
COUNT
DOMAIN INDEX
FILTER
FIRST ROW
FOR UPDATE
HASH JOIN
INDEX
INLIST ITERATOR
INTERSECTION
MERGE JOIN
MINUS
NESTED LOOPS
PARTITION
REMOTE
SEQUENCE
SORT
TABLE ACCESS
UNION
VIEW

Option tells more about how an operation is done. For example, the operation TABLE ACCESS can have the option FULL or BY ROWID, among many others. FULL means that the entire table is accessed (which takes a long time if the table is huge), whereas BY ROWID means Oracle knows where (in which block) the rows are to be retrieved, which makes the time to access the table shorter.
dbms_xplan
As of 9i, dbms_xplan can be used to format the plan table.
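A minimal sketch of that: DBMS_XPLAN.DISPLAY reads PLAN_TABLE by default and formats the most recently explained statement, so the CONNECT BY query shown further below can be replaced by:

```sql
explain plan for select * from test_for_ep where a = 5;

select * from table(dbms_xplan.display);
```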
Operations
The following table is used to demonstrate EXPLAIN PLAN:

create table test_for_ep (a number, b varchar2(100));

Now, let's explain the plan for selecting everything on that table:

delete plan_table;
explain plan for select /*+ rule */ * from test_for_ep where a = 5;

Displaying the execution plan
In order to view the explained plan, we have to query the plan table:

select
substr (lpad(' ', level-1) || operation || ' (' || options || ')',1,30 ) "Operation",
object_name "Object"
from
plan_table
start with id = 0
connect by prior id=parent_id;

This statement is a simplified version of utlxpls.sql. utlxpls.sql is a script that Oracle ships.
Here's the output of the explain plan:

SELECT STATEMENT ()
TABLE ACCESS (FULL) TEST_FOR_EP

First, take a look at the indentation: TABLE ACCESS is indented to the right. In an explain plan output, the more indented an operation is, the earlier it is executed. The result of this operation (or operations, if more than one are equally indented AND have the same parent) is then fed to the parent operation. In this case, TABLE ACCESS is performed first, and its result is fed to SELECT STATEMENT (which is not an actual operation). Note the FULL in parentheses in TABLE ACCESS: this means that the entire table is accessed.
By the way, SQL*Plus automatically explains the plan for you if autotrace is enabled.
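For example, in SQL*Plus (EXPLAIN mode only needs a plan table, not the PLUSTRACE role):

```sql
SQL> set autotrace on explain
SQL> select * from test_for_ep where a = 5;
```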
Now, let's create an index on that table:

create index test_for_ep_ix on test_for_ep (a);

And do the same select statement again:

delete plan_table;
explain plan for select /*+ rule */ * from test_for_ep where a = 5;

The plan is now:

SELECT STATEMENT ()
TABLE ACCESS (BY INDEX ROWID) TEST_FOR_EP
INDEX (RANGE SCAN) TEST_FOR_EP_IX

Obviously, the index (TEST_FOR_EP_IX) is used first (most indented) then used for a TABLE ACCESS, second most indented, then the result is returned. The table access is not done by a full table scan but rather by using the data's rowid.
INDEX
In the last example, Oracle employed an INDEX (RANGE SCAN). RANGE SCAN basically means that the index was used, but that it can return more than one row. Now we recreate the table with a unique constraint (which creates a unique index) to see how this alters the explain plan:

create table test_for_ep (a number, b varchar2(100), constraint uq_tp unique(a));

delete plan_table;
explain plan for select /*+ rule */ * from test_for_ep where a = 5;

The explained plan is:

SELECT STATEMENT ()
TABLE ACCESS (BY INDEX ROWID) TEST_FOR_EP
INDEX (UNIQUE SCAN) UQ_TP

INDEX (UNIQUE SCAN) means that this index is used, and it guarantees that the index returns at most one rowid. What happens if we query the field not for equality but for greater than (a > 5)?

explain plan for select /*+ rule */ * from test_for_ep where a > 5;

Here, we see that the index is used, but for a RANGE SCAN:

SELECT STATEMENT ()
TABLE ACCESS (BY INDEX ROWID) TEST_FOR_EP
INDEX (RANGE SCAN) UQ_TP

If we only query fields of a table that are already in an index, Oracle doesn't have to read the data blocks because it can get the relevant data from the index:

create table test_for_ep (a number, b varchar2(100), constraint uq_tp unique(a));

delete plan_table;
explain plan for select /*+ rule */ a from test_for_ep where a > 5 and a < 50;

Here's the query execution plan. No table access anymore!

SELECT STATEMENT ()
INDEX (RANGE SCAN) UQ_TP

MERGE JOIN
The first table's join key is ba, while the second table's join key is aa.

create table test_for_ep_a (aa number, ab varchar2(100));
create table test_for_ep_b (ba number, bb varchar2(100));

Note, there are no indexes on both of the tables. Now, we join the tables on aa and ba:

explain plan for
select /*+ rule */ a.aa from test_for_ep_a a, test_for_ep_b b where
a.aa=b.ba and a.aa > 5;

As there are no indexes, both tables must be TABLE ACCESSed (FULL). After these accesses, their results are sorted.

SELECT STATEMENT ()
MERGE JOIN ()
SORT (JOIN)
TABLE ACCESS (FULL) TEST_FOR_EP_B
SORT (JOIN)
TABLE ACCESS (FULL) TEST_FOR_EP_A

Note that MERGE JOINs can only be used for equi-joins, as the NESTED LOOPS example demonstrates.
NESTED LOOPS
For each relevant row in the first table (the driving table), find all matching rows in the other table (the probed table).

create table test_for_ep_a (aa number, ab varchar2(100));
create table test_for_ep_b (ba number, bb varchar2(100));

explain plan for
select /*+ rule */ a.aa from test_for_ep_a a, test_for_ep_b b where
a.aa > b.ba and a.aa > 5;

Note that there is no equi-join between test_for_ep_a and test_for_ep_b (a.aa > b.ba):

SELECT STATEMENT ()
NESTED LOOPS ()
TABLE ACCESS (FULL) TEST_FOR_EP_B
TABLE ACCESS (FULL) TEST_FOR_EP_A

Now, we put an index on TEST_FOR_EP_B and see how that influences our nested loop:

create table test_for_ep_a (aa number, ab varchar2(100));
create table test_for_ep_b (ba number, bb varchar2(100), constraint uq_ba unique(ba));

delete plan_table;
explain plan for
select /*+ rule */ a.aa from test_for_ep_a a, test_for_ep_b b where
a.aa > b.ba;

The plan is:

SELECT STATEMENT ()
NESTED LOOPS ()
TABLE ACCESS (FULL) TEST_FOR_EP_A
INDEX (RANGE SCAN) UQ_BA

Interpreted, this means: TEST_FOR_EP_A is fully accessed, and for each row, TEST_FOR_EP_B (or, more accurately, its index UQ_BA) is probed. Thinking about it, this makes sense: do the costly TABLE ACCESS once and use the index for each row. Then again, if TEST_FOR_EP_A is very small and TEST_FOR_EP_B is large, this doesn't make sense anymore. This is where the Cost Based Optimizer comes into play.
Sorts
Aggregate Sorts
Whenever a result set must be sorted, the operation is SORT. If this sort is used to return a single row (for example, MAX or MIN), the option is AGGREGATE. Consider the following example:

create table t_ep (
w date,
v number,
x varchar2(40)
);


delete plan_table;
explain plan for select /*+ rule */ max(w) from t_ep where v=4;

SELECT STATEMENT ()
SORT (AGGREGATE)
TABLE ACCESS (FULL) T_EP

Now we create a unique index (via a constraint):

alter table t_ep add constraint uq_t_ep unique(v);
delete plan_table;
explain plan for select /*+ rule */ max(w) from t_ep where v=4;

SELECT STATEMENT ()
SORT (AGGREGATE)
TABLE ACCESS (BY INDEX ROWID) T_EP
INDEX (UNIQUE SCAN) UQ_T_EP

TKPROF
If you want to know how much time an SQL statement actually used, use TKPROF.
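A typical workflow (file names below are hypothetical): enable SQL trace for the session, run the statements of interest, then format the resulting trace file with the tkprof command-line utility:

```sql
ALTER SESSION SET sql_trace = TRUE;
-- ... run the statements you want to measure ...
-- then, at the OS prompt (trace file name is hypothetical):
--   tkprof ora_12345.trc report.txt sys=no sort=prsela,exeela,fchela
```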


User-Defined Types

In short, user-defined types in Oracle are more complex data types based on the built-in (standard) data types, and they can be used in both PL/SQL and SQL.

There are several kinds of user-defined types (UDTs). An example of each kind, with its constraints, is shown below:
  1. Column types: These are valid for a single column only. The code for a single-column type is shown below:
CREATE OR REPLACE TYPE NAME_T AS OBJECT (
COL VARCHAR2 (30))
But the problem with a column type is that it is valid only for a single column; if related data is stored in more than one column, this type cannot be used.
  2. Multi-column types:
CREATE OR REPLACE TYPE ADDR_T AS OBJECT (
ADDR1 VARCHAR2 (50),
ADDR2 VARCHAR2 (50),
CITY VARCHAR2 (30),
STATE VARCHAR2 (2),
ZIP_4 VARCHAR2(9));
Here each attribute can have a different type, but the type cannot be applied to individual fields separately.
  3. Row types: These include single and multiple rows and form the foundation of object tables/views:
CREATE OR REPLACE TYPE EMP_T AS OBJECT (
EMP_ID NUMBER (10),
LNAME_TX NAME_T,
FNAME_TX NAME_T,
BIRTH_DATE DATE);
Not supported by query systems.


Let's suppose we want to store information about employees. In relational terms, we create a table (a relation) based on the attributes of an employee that we want to store. An employee table (relational) definition might therefore look something like this:

CREATE TABLE employee
(employee_id NUMBER
,employee_name VARCHAR2(30)
,salary NUMBER
,dept_id NUMBER);

But, as just mentioned, Oracle allows us to define our own types, so we might define an employee type as follows:

CREATE TYPE employee_t AS OBJECT
(employee_id NUMBER
,employee_name VARCHAR2(30)
,salary NUMBER
,dept_id NUMBER);

We can then create a table of that type in the database:

CREATE TABLE employee OF employee_t;

Alternatively we can use that type for an attribute in a relational table in Oracle:


CREATE TABLE department
(emp employee_t
,mgr VARCHAR2(40));

Or as a nested table:

CREATE TYPE employee AS TABLE OF employee_t;

CREATE TABLE department
(dept_id NUMBER
,dept_name VARCHAR2(40)
,emps employee)
NESTED TABLE emps STORE AS dept_emps_tab;

The Oracle database also allows us to define types in terms of other user defined types. For example:

CREATE TYPE address AS OBJECT
(addr_line_1 VARCHAR2(240)
,addr_line_2 VARCHAR2(240)
,postal_code VARCHAR2(20));

CREATE TYPE person AS OBJECT
(name VARCHAR2(40)
,home_address address
,home_phone NUMBER
,work_address address
,work_phone NUMBER);

CREATE TABLE contacts
(contact_id NUMBER
,contact_details person);

User-defined types work much the same way in Oracle PL/SQL, which has the same built-in data types as SQL plus a few extra ones such as BOOLEAN.

PL/SQL tables are known as collections and can be based on object types defined in SQL or on types defined in PL/SQL. For example, the following piece of code creates a kind of collection known as an associative array:

DECLARE
TYPE address_ty IS TABLE OF VARCHAR2(120) INDEX BY VARCHAR2(10);
addresses address_ty;
BEGIN
addresses('home') := '1 The Close, Chiswick, SW3 4AB';
addresses('work') :=
'Bank of England, Threadneedle Street, London, EC2R 8AH';
END;
/

Assuming addr_ty has already been declared in our Oracle database as follows:

CREATE TYPE addr_ty IS TABLE OF VARCHAR2(120);

The following piece of PL/SQL code makes use of that type declaration.

DECLARE
addresses addr_ty;
BEGIN
addresses :=addr_ty('1 The Close, Chiswick, SW3 4AB', 'Bank of England, Threadneedle St., London, EC2R 8AH');
END;


An exception is a section in a PL/SQL program that captures and processes runtime errors.
Predefined exceptions

Oracle provides some predefined exceptions that can be used, including: NO_DATA_FOUND, TOO_MANY_ROWS, DUP_VAL_ON_INDEX, VALUE_ERROR, ZERO_DIVIDE, INVALID_CURSOR, NOT_LOGGED_ON, etc.

Code example:

DECLARE
d VARCHAR2(1);
BEGIN
SELECT dummy INTO d FROM dual WHERE 1=2;
EXCEPTION
WHEN no_data_found THEN
DBMS_OUTPUT.PUT_LINE('ERROR: No data!');
END;

Custom exceptions

Here is an example of how to define a custom exception:

DECLARE
view_doesnot_exist EXCEPTION;
PRAGMA EXCEPTION_INIT(view_doesnot_exist, -942);
BEGIN
...
EXCEPTION
WHEN view_doesnot_exist THEN null;
END;

The -942 in the above example links the name view_doesnot_exist to ORA-00942: table or view does not exist.

Custom exceptions can be declared in a separate package to make them "global" and reusable. Here is an example:

CREATE OR REPLACE PACKAGE my_exceptions
AS
insert_null_into_notnull EXCEPTION;
PRAGMA EXCEPTION_INIT(insert_null_into_notnull, -1400);

update_null_to_notnull EXCEPTION;
PRAGMA EXCEPTION_INIT(update_null_to_notnull, -1407);
END my_exceptions;
/

CREATE OR REPLACE PROCEDURE use_an_exception AS
BEGIN
-- application specific code ...
NULL;
EXCEPTION
WHEN my_exceptions.insert_null_into_notnull THEN
-- application specific handling for ORA-01400: cannot insert NULL into (%s)
RAISE;
END;
/

oracle sql plsql expert part 1


ROLLUP

In addition to the regular aggregation results we expect from the GROUP BY clause, the ROLLUP extension produces group subtotals from right to left and a grand total. If "n" is the number of columns listed in the ROLLUP, there will be n+1 levels of subtotals.
SELECT fact_1_id,
       fact_2_id,
       SUM(sales_value) AS sales_value
FROM   dimension_tab
GROUP BY ROLLUP (fact_1_id, fact_2_id)
ORDER BY fact_1_id, fact_2_id;

 FACT_1_ID  FACT_2_ID SALES_VALUE
---------- ---------- -----------
         1          1     4363.55
         1          2     4794.76
         1          3     4718.25
         1          4     5387.45
         1          5     5027.34
         1               24291.35
         2          1     5652.84
         2          2     4583.02
         2          3     5555.77
         2          4     5936.67
         2          5     4508.74
         2               26237.04
                         50528.39

13 rows selected.

SQL>

CUBE

In addition to the subtotals generated by the ROLLUP extension, the CUBE extension will generate subtotals for all combinations of the dimensions specified. If "n" is the number of columns listed in the CUBE, there will be 2^n subtotal combinations.
SELECT fact_1_id,
       fact_2_id,
       SUM(sales_value) AS sales_value
FROM   dimension_tab
GROUP BY CUBE (fact_1_id, fact_2_id)
ORDER BY fact_1_id, fact_2_id;

 FACT_1_ID  FACT_2_ID SALES_VALUE
---------- ---------- -----------
         1          1     4363.55
         1          2     4794.76
         1          3     4718.25
         1          4     5387.45
         1          5     5027.34
         1               24291.35
         2          1     5652.84
         2          2     4583.02
         2          3     5555.77
         2          4     5936.67
         2          5     4508.74
         2               26237.04
                    1    10016.39
                    2     9377.78
                    3    10274.02
                    4    11324.12
                    5     9536.08
                         50528.39

18 rows selected.

GROUPING Functions

GROUPING

It can be quite easy to visually identify subtotals generated by rollups and cubes, but to do it programmatically you need something more reliable than the presence of null values in the grouping columns. This is where the GROUPING function comes in. It accepts a single column as a parameter and returns "1" if the column contains a null value generated as part of a subtotal by a ROLLUP or CUBE operation, or "0" for any other value, including stored null values.
The following query is a repeat of a previous cube, but the GROUPING function has been added for each of the dimensions in the cube.
SELECT fact_1_id,
       fact_2_id,
       SUM(sales_value) AS sales_value,
       GROUPING(fact_1_id) AS f1g, 
       GROUPING(fact_2_id) AS f2g
FROM   dimension_tab
GROUP BY CUBE (fact_1_id, fact_2_id)
ORDER BY fact_1_id, fact_2_id;

 FACT_1_ID  FACT_2_ID SALES_VALUE        F1G        F2G
---------- ---------- ----------- ---------- ----------
         1          1     4363.55          0          0
         1          2     4794.76          0          0
         1          3     4718.25          0          0
         1          4     5387.45          0          0
         1          5     5027.34          0          0
         1               24291.35          0          1
         2          1     5652.84          0          0
         2          2     4583.02          0          0
         2          3     5555.77          0          0
         2          4     5936.67          0          0
         2          5     4508.74          0          0
         2               26237.04          0          1
                    1    10016.39          1          0
                    2     9377.78          1          0
                    3    10274.02          1          0
                    4    11324.12          1          0
                    5     9536.08          1          0
                         50528.39          1          1

18 rows selected.

SQL>
From this we can see:
  • F1G=0,F2G=0 : Represents a row containing the regular aggregation we would expect from a GROUP BY operation.
  • F1G=0,F2G=1 : Represents a row containing a subtotal for a distinct value of the FACT_1_ID column, as generated by ROLLUP and CUBE operations.
  • F1G=1,F2G=0 : Represents a row containing a subtotal for a distinct value of the FACT_2_ID column, which we would only see in a CUBE operation.
  • F1G=1,F2G=1 : Represents a row containing a grand total for the query, as generated by ROLLUP and CUBE operations.
It would now be easy to write a program to accurately process the data.
The GROUPING columns can be used for ordering or filtering results.
SELECT fact_1_id,
       fact_2_id,
       SUM(sales_value) AS sales_value,
       GROUPING(fact_1_id) AS f1g, 
       GROUPING(fact_2_id) AS f2g
FROM   dimension_tab
GROUP BY CUBE (fact_1_id, fact_2_id)
HAVING GROUPING(fact_1_id) = 1 OR GROUPING(fact_2_id) = 1
ORDER BY GROUPING(fact_1_id), GROUPING(fact_2_id);

 FACT_1_ID  FACT_2_ID SALES_VALUE        F1G        F2G
---------- ---------- ----------- ---------- ----------
         1               24291.35          0          1
         2               26237.04          0          1
                    4    11324.12          1          0
                    3    10274.02          1          0
                    2     9377.78          1          0
                    1    10016.39          1          0
                    5     9536.08          1          0
                         50528.39          1          1

8 rows selected.

SQL>

GROUPING_ID

The GROUPING_ID function provides an alternate and more compact way to identify subtotal rows. Passing the dimension columns as arguments, it returns a single number in which the GROUPING() value of each argument is treated as a bit, with the first argument supplying the most significant bit.
SELECT fact_1_id,
       fact_2_id,
       SUM(sales_value) AS sales_value,
       GROUPING_ID(fact_1_id, fact_2_id) AS grouping_id
FROM   dimension_tab
GROUP BY CUBE (fact_1_id, fact_2_id)
ORDER BY fact_1_id, fact_2_id;

 FACT_1_ID  FACT_2_ID SALES_VALUE GROUPING_ID
---------- ---------- ----------- -----------
         1          1     4363.55           0
         1          2     4794.76           0
         1          3     4718.25           0
         1          4     5387.45           0
         1          5     5027.34           0
         1               24291.35           1
         2          1     5652.84           0
         2          2     4583.02           0
         2          3     5555.77           0
         2          4     5936.67           0
         2          5     4508.74           0
         2               26237.04           1
                    1    10016.39           2
                    2     9377.78           2
                    3    10274.02           2
                    4    11324.12           2
                    5     9536.08           2
                         50528.39           3

18 rows selected.

SQL>
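Because GROUPING_ID is just the GROUPING() bits packed into a number, it can be computed by hand. As a sketch against the same dimension_tab table, the following query calculates the value from the individual GROUPING() calls; the two columns should always match.

```sql
SELECT fact_1_id,
       fact_2_id,
       GROUPING(fact_1_id) * 2 + GROUPING(fact_2_id) AS computed_id,
       GROUPING_ID(fact_1_id, fact_2_id) AS grouping_id
FROM   dimension_tab
GROUP BY CUBE (fact_1_id, fact_2_id)
ORDER BY fact_1_id, fact_2_id;
```

So in the output above, GROUPING_ID=1 flags a subtotal over FACT_2_ID, 2 flags a subtotal over FACT_1_ID, and 3 flags the grand total.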

GROUP_ID

It's possible to write queries that return duplicate subtotals, which can be a little confusing. The GROUP_ID function distinguishes them: the first occurrence of a grouping is assigned "0", and each subsequent duplicate gets a higher number. The following query forces duplicates to show the GROUP_ID function in action.
SELECT fact_1_id,
       fact_2_id,
       SUM(sales_value) AS sales_value,
       GROUPING_ID(fact_1_id, fact_2_id) AS grouping_id,
       GROUP_ID() AS group_id
FROM   dimension_tab
GROUP BY GROUPING SETS(fact_1_id, CUBE (fact_1_id, fact_2_id))
ORDER BY fact_1_id, fact_2_id;

 FACT_1_ID  FACT_2_ID SALES_VALUE GROUPING_ID   GROUP_ID
---------- ---------- ----------- ----------- ----------
         1          1     4363.55           0          0
         1          2     4794.76           0          0
         1          3     4718.25           0          0
         1          4     5387.45           0          0
         1          5     5027.34           0          0
         1               24291.35           1          1
         1               24291.35           1          0
         2          1     5652.84           0          0
         2          2     4583.02           0          0
         2          3     5555.77           0          0
         2          4     5936.67           0          0
         2          5     4508.74           0          0
         2               26237.04           1          1
         2               26237.04           1          0
                    1    10016.39           2          0
                    2     9377.78           2          0
                    3    10274.02           2          0
                    4    11324.12           2          0
                    5     9536.08           2          0
                         50528.39           3          0

20 rows selected.

SQL>
If necessary, you could then filter out the duplicates using the GROUP_ID function in a HAVING clause.
SELECT fact_1_id,
       fact_2_id,
       SUM(sales_value) AS sales_value,
       GROUPING_ID(fact_1_id, fact_2_id) AS grouping_id,
       GROUP_ID() AS group_id
FROM   dimension_tab
GROUP BY GROUPING SETS(fact_1_id, CUBE (fact_1_id, fact_2_id))
HAVING GROUP_ID() = 0
ORDER BY fact_1_id, fact_2_id;

 FACT_1_ID  FACT_2_ID SALES_VALUE GROUPING_ID   GROUP_ID
---------- ---------- ----------- ----------- ----------
         1          1     4363.55           0          0
         1          2     4794.76           0          0
         1          3     4718.25           0          0
         1          4     5387.45           0          0
         1          5     5027.34           0          0
         1               24291.35           1          0
         2          1     5652.84           0          0
         2          2     4583.02           0          0
         2          3     5555.77           0          0
         2          4     5936.67           0          0
         2          5     4508.74           0          0
         2               26237.04           1          0
                    1    10016.39           2          0
                    2     9377.78           2          0
                    3    10274.02           2          0
                    4    11324.12           2          0
                    5     9536.08           2          0
                         50528.39           3          0

18 rows selected.

SQL>

GROUPING SETS

Calculating all possible subtotals in a cube, especially those with many dimensions, can be quite an intensive process. If you don't need all the subtotals, this can represent a considerable amount of wasted effort. The following cube with three dimensions gives 8 levels of subtotals (GROUPING_ID: 0-7).
SELECT fact_1_id,
       fact_2_id,
       fact_3_id,
       SUM(sales_value) AS sales_value,
       GROUPING_ID(fact_1_id, fact_2_id, fact_3_id) AS grouping_id
FROM   dimension_tab
GROUP BY CUBE(fact_1_id, fact_2_id, fact_3_id)
ORDER BY fact_1_id, fact_2_id, fact_3_id;
If we only need a few of these levels of subtotaling we can use the GROUPING SETS expression and specify exactly which ones we need, saving us having to calculate the whole cube. In the following query we are only interested in subtotals for the "FACT_1_ID, FACT_2_ID" and "FACT_1_ID, FACT_3_ID" groups.
SELECT fact_1_id,
       fact_2_id,
       fact_3_id,
       SUM(sales_value) AS sales_value,
       GROUPING_ID(fact_1_id, fact_2_id, fact_3_id) AS grouping_id
FROM   dimension_tab
GROUP BY GROUPING SETS((fact_1_id, fact_2_id), (fact_1_id, fact_3_id))
ORDER BY fact_1_id, fact_2_id, fact_3_id;

 FACT_1_ID  FACT_2_ID  FACT_3_ID SALES_VALUE GROUPING_ID
---------- ---------- ---------- ----------- -----------
         1          1                4363.55           1
         1          2                4794.76           1
         1          3                4718.25           1
         1          4                5387.45           1
         1          5                5027.34           1
         1                     1      2737.4           2
         1                     2     1854.29           2
         1                     3     2090.96           2
         1                     4     2605.17           2
         1                     5     2590.93           2
         1                     6      2506.9           2
         1                     7     1839.85           2
         1                     8     2953.04           2
         1                     9     2778.75           2
         1                    10     2334.06           2
         2          1                5652.84           1
         2          2                4583.02           1
         2          3                5555.77           1
         2          4                5936.67           1
         2          5                4508.74           1
         2                     1     3512.69           2
         2                     2     2847.94           2
         2                     3      2972.5           2
         2                     4     2534.06           2
         2                     5     3115.99           2
         2                     6     2775.85           2
         2                     7     2208.19           2
         2                     8     2358.55           2
         2                     9     1884.11           2
         2                    10     2027.16           2

30 rows selected.

SQL>
Notice how we have gone from returning 198 rows with 8 subtotal levels in the cube, to just 30 rows with 2 subtotal levels.

Composite Columns

ROLLUP and CUBE consider each column independently when deciding which subtotals must be calculated. For ROLLUP this means stepping back through the list to determine the groupings.
ROLLUP (a, b, c)
(a, b, c)
(a, b)
(a)
()
CUBE creates a grouping for every possible combination of columns.
CUBE (a, b, c)
(a, b, c)
(a, b)
(a, c)
(a)
(b, c)
(b)
(c)
()
Composite columns allow columns to be grouped together with parentheses so they are treated as a single unit when determining the necessary groupings. In the following ROLLUP, columns "a" and "b" have been turned into a composite column by the additional parentheses. As a result, the grouping of "a" alone is no longer calculated, because "a" is only present as part of the composite column in the statement.
ROLLUP ((a, b), c)
(a, b, c)
(a, b)
()

Not considered:
(a)
In a similar way, the possible combinations of the following CUBE are reduced because references to "a" or "b" individually are not considered as they are treated as a single column when the groupings are determined.
CUBE ((a, b), c)
(a, b, c)
(a, b)
(c)
()

Not considered:
(a, c)
(a)
(b, c)
(b)
The impact of this is shown clearly in the following two statements. The regular cube returns 198 rows and 8 groups (0-7), while the cube with the composite column returns only 121 rows with 4 groups (0, 1, 6, 7).
-- Regular Cube.
SELECT fact_1_id,
       fact_2_id,
       fact_3_id,
       SUM(sales_value) AS sales_value,
       GROUPING_ID(fact_1_id, fact_2_id, fact_3_id) AS grouping_id
FROM   dimension_tab
GROUP BY CUBE(fact_1_id, fact_2_id, fact_3_id)
ORDER BY fact_1_id, fact_2_id, fact_3_id;

-- Cube with composite column.
SELECT fact_1_id,
       fact_2_id,
       fact_3_id,
       SUM(sales_value) AS sales_value,
       GROUPING_ID(fact_1_id, fact_2_id, fact_3_id) AS grouping_id
FROM   dimension_tab
GROUP BY CUBE((fact_1_id, fact_2_id), fact_3_id)
ORDER BY fact_1_id, fact_2_id, fact_3_id;

Concatenated Groupings

Concatenated groupings are defined by putting together multiple GROUPING SETS, CUBEs or ROLLUPs separated by commas. The resulting groupings are the cross-product of all the groups produced by the individual grouping sets. It might be a little easier to understand what this means by looking at an example. The following GROUPING SET results in 2 groups of subtotals, one for the fact_1_id column and one for the fact_2_id column.
SELECT fact_1_id,
       fact_2_id,
       SUM(sales_value) AS sales_value,
       GROUPING_ID(fact_1_id, fact_2_id) AS grouping_id
FROM   dimension_tab
GROUP BY GROUPING SETS(fact_1_id, fact_2_id)
ORDER BY fact_1_id, fact_2_id;

 FACT_1_ID  FACT_2_ID SALES_VALUE GROUPING_ID
---------- ---------- ----------- -----------
         1               24291.35           1
         2               26237.04           1
                    1    10016.39           2
                    2     9377.78           2
                    3    10274.02           2
                    4    11324.12           2
                    5     9536.08           2

7 rows selected.

SQL>
The next GROUPING SET results in another 2 groups of subtotals, one for the fact_3_id column and one for the fact_4_id column.
SELECT fact_3_id,
       fact_4_id,
       SUM(sales_value) AS sales_value,
       GROUPING_ID(fact_3_id, fact_4_id) AS grouping_id
FROM   dimension_tab
GROUP BY GROUPING SETS(fact_3_id, fact_4_id)
ORDER BY fact_3_id, fact_4_id;

 FACT_3_ID  FACT_4_ID SALES_VALUE GROUPING_ID
---------- ---------- ----------- -----------
         1                6250.09           1
         2                4702.23           1
         3                5063.46           1
         4                5139.23           1
         5                5706.92           1
         6                5282.75           1
         7                4048.04           1
         8                5311.59           1
         9                4662.86           1
        10                4361.22           1
                    1     4718.55           2
                    2      5439.1           2
                    3      4643.4           2
                    4      4515.3           2
                    5     5110.27           2
                    6     5910.78           2
                    7     4987.22           2
                    8     4846.25           2
                    9     5458.82           2
                   10      4898.7           2

20 rows selected.

SQL>
If we combine them together into a concatenated grouping we get 4 groups of subtotals.
SELECT fact_1_id,
       fact_2_id,
       fact_3_id,
       fact_4_id,
       SUM(sales_value) AS sales_value,
       GROUPING_ID(fact_1_id, fact_2_id, fact_3_id, fact_4_id) AS grouping_id
FROM   dimension_tab
GROUP BY GROUPING SETS(fact_1_id, fact_2_id), GROUPING SETS(fact_3_id, fact_4_id)
ORDER BY fact_1_id, fact_2_id, fact_3_id, fact_4_id;
The previous three queries produce the following groupings.
GROUPING SETS(fact_1_id, fact_2_id) 
(fact_1_id)
(fact_2_id)

GROUPING SETS(fact_3_id, fact_4_id) 
(fact_3_id)
(fact_4_id)

GROUPING SETS(fact_1_id, fact_2_id), GROUPING SETS(fact_3_id, fact_4_id) 
(fact_1_id, fact_3_id)
(fact_1_id, fact_4_id)
(fact_2_id, fact_3_id)
(fact_2_id, fact_4_id)
So we can see the final cross-product of the two GROUPING SETS that make up the concatenated grouping. A generic summary would be as follows.
GROUPING SETS(a, b), GROUPING SETS(c, d) 
(a, c)
(a, d)
(b, c)
(b, d)



Oracle Database 10g: New Features For Administrators OCP Exam Articles




Automatic Database Diagnostic Monitor (ADDM) in Oracle Database 10g - This article describes several methods for producing reports from the Automatic Database Diagnostic Monitor (ADDM) in Oracle 10g.
Automatic SQL Tuning in Oracle Database 10g - This article discusses the new features which automate the tuning of SQL statements in Oracle 10g.
Automatic Storage Management (ASM) in Oracle Database 10g - Automatic Storage Management (ASM) is a new feature introduced in Oracle 10g to simplify the storage and administration of Oracle files.
Automatic Workload Repository (AWR) in Oracle Database 10g - The automatic workload repository (AWR) is the latest evolution of statspack which provides the central information store for all Oracle 10g self-tuning functionality.
Database Security Enhancements in Oracle Database 10g - Oracle 10g contains many improvements to Virtual Private Database (VPD), Fine Grained Auditing (FGA) and data encryption.
DBMS_FILE_TRANSFER Package in Oracle Database 10g - Use this new API to make binary copies of files on the local server or to transfer files between the local server and remote servers.
Flashback New Features and Enhancements in Oracle Database 10g - Use the new and enhanced features of flashback technology to view old data, track data changes and recover from accidental data loss without reverting to LogMiner or point in time recoveries.
Improved VLDB Support in Oracle Database 10g - Take advantage of the improvements in Very Large Database (VLDB) support in Oracle 10g.
Installation Enhancements in Oracle Database 10g - This article provides a brief description of the installation enhancements which are relevant to the OCP upgrade exam for Oracle 10g.
MERGE Statement Enhancements in Oracle Database 10g - Learn about the enhancements to the merge statement in Oracle Database 10g.
Oracle Data Pump in Oracle Database 10g - Test drive the Oracle 10g replacement for the EXP and IMP utilities.
Performance Tuning Enhancements in Oracle Database 10g - Oracle 10g includes many performance tuning enhancements which take some of the mystery out of being a DBA.
PL/SQL Enhancements in Oracle Database 10g - Get to grips with the Oracle 10g enhancements to PL/SQL using simple cut & paste examples.
Resource Manager Enhancements in Oracle Database 10g - In Oracle 10g resource manager is more flexible and even easier to configure than before. Check out the enhancements here.
RMAN Enhancements in Oracle Database 10g - Check out the large number of enhancements in the Oracle 10g version of RMAN. If you are not currently using this tool you're running out of excuses.
Services in Oracle Database 10g - Run applications on a subset of Real Application Clusters (RAC) nodes using services.
Server Configuration in Oracle Database 10g - Oracle 10g includes some changes relating to basic server configuration which are relevant to the OCP upgrade exam.
Scheduler in Oracle Database 10g - Oracle Database 10g includes a flexible scheduler to replace the DBMS_JOB package.
Space, Object, and Transaction Management in Oracle Database 10g - A description of the new features relating to space, object and transaction management in Oracle 10g.
SQL Access Advisor in Oracle Database 10g - Use this advisor to get suggestions for indexes and materialized views that might help system performance.
SQL*Plus Enhancements in Oracle Database 10g - Check out enhancements to SQL*Plus in Oracle Database 10g.
Tablespace Management Enhancements in Oracle Database 10g - Oracle 10g includes several small but neat enhancements related to tablespace management.
Upgrading to Oracle Database 10g - This article provides a brief overview of the areas involved in upgrading existing databases to Oracle 10g.




How To Pass a Parameter to a Cursor?
When you define a cursor, you can set a formal parameter in the cursor. The formal parameter will be replaced by an actual parameter in the OPEN cursor statement. Here is a good example of a cursor with two parameters:
CREATE OR REPLACE PROCEDURE FYI_CENTER AS
  CURSOR emp_cur(low NUMBER, high NUMBER)
    IS SELECT * FROM employees WHERE salary >= low
      AND salary <= high;
BEGIN
  FOR row IN emp_cur(12000,15000) LOOP
    DBMS_OUTPUT.PUT_LINE(row.first_name || ' '
      || row.last_name 
      || ': ' || row.salary); 
  END LOOP; 
END;
/
How To Use Attributes of the Implicit Cursor?
Right after executing a DML statement, you retrieve any attribute of the implicit cursor by using SQL%attribute_name, as shown in the following tutorial exercise:
CREATE TABLE student (id NUMBER(5) PRIMARY KEY,
  first_name VARCHAR(80) NOT NULL,
  last_name VARCHAR(80) NOT NULL);
Table created.

DECLARE
  -- Variables are prefixed with v_ so they are not captured by the
  -- STUDENT column names inside the SQL statements below.
  v_id NUMBER;
  v_first_name CHAR(10);
BEGIN
  v_id := 29;
  v_first_name := 'Bob';
  INSERT INTO student VALUES(v_id, v_first_name, 'Henry');
  v_first_name := 'Joe';
  INSERT INTO student VALUES(v_id+1, v_first_name, 'Bush');
  DBMS_OUTPUT.PUT_LINE('# of rows inserted: '
    || SQL%ROWCOUNT);

  v_first_name := 'Fyi';
  UPDATE student SET first_name = v_first_name WHERE id = 29;
  IF SQL%FOUND THEN
    DBMS_OUTPUT.PUT_LINE('# of rows updated: '
      || SQL%ROWCOUNT);
  END IF;

  -- No row has this id, so SQL%NOTFOUND is TRUE.
  UPDATE student SET first_name = v_first_name
    WHERE id = v_id + 2;
  IF SQL%NOTFOUND THEN
    DBMS_OUTPUT.PUT_LINE('No records updated.');
  END IF;

  DELETE FROM student WHERE id = v_id;
  DBMS_OUTPUT.PUT_LINE('# of rows deleted: '
    || SQL%ROWCOUNT);
END;
/
How To Define an Explicit Cursor?
An explicit cursor must be defined in the declaration part of a procedure or function with the CURSOR ... IS statement as shown in the following sample script:
DECLARE
  CURSOR c_list IS SELECT * FROM countries;
  CURSOR t_list IS SELECT * FROM employees
    WHERE employee_id = 100;
BEGIN
  NULL;
END;
/
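The declared cursor does nothing until it is processed. A minimal sketch of the usual OPEN, FETCH and CLOSE cycle, reusing the countries cursor from the example above:

```sql
DECLARE
  CURSOR c_list IS SELECT * FROM countries;
  v_row c_list%ROWTYPE;
BEGIN
  OPEN c_list;
  LOOP
    FETCH c_list INTO v_row;
    EXIT WHEN c_list%NOTFOUND;
    -- Process v_row here.
    NULL;
  END LOOP;
  CLOSE c_list;
END;
/
```

Note that a cursor FOR loop (FOR row IN c_list LOOP ... END LOOP) does the OPEN, FETCH and CLOSE for you, as shown in the parameterized cursor example earlier.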
Cursor Variable?
A cursor variable is a variable of a REF CURSOR data type, which is a pointer to a cursor work area holding a query result, similar to a cursor. The advantage of cursor variables is that they can be passed as parameters between procedures and functions.
Define a Cursor Variable?
To define cursor variable, you must decide which REF CURSOR data type to use. There are 3 ways to select a REF CURSOR data type:
  • Define your own specific REF CURSOR types using the TYPE ... RETURN statement.
  • Define your own generic REF CURSOR type using the TYPE ... statement.
  • Use the system defined REF CURSOR type: SYS_REFCURSOR.
The following tutorial exercise defines 3 cursor variables in 3 different ways:
CREATE OR REPLACE PROCEDURE FYI_CENTER AS
  TYPE emp_ref IS REF CURSOR RETURN employees%ROWTYPE;
  TYPE any_ref IS REF CURSOR;
  emp_cur emp_ref;
  any_cur any_ref;
  sys_cur SYS_REFCURSOR;
BEGIN
  NULL;
END;
/
Open a Cursor Variable?
A cursor variable must be opened with a specific query statement before you can fetch data fields from its data rows. To open a cursor variable, you can use the OPEN ... FOR statement as shown in the following tutorial exercise:
CREATE OR REPLACE PROCEDURE FYI_CENTER AS
  TYPE emp_ref IS REF CURSOR RETURN employees%ROWTYPE;
  TYPE any_ref IS REF CURSOR;
  emp_cur emp_ref;
  any_cur any_ref;
  sys_cur SYS_REFCURSOR;
BEGIN
  OPEN emp_cur FOR SELECT * FROM employees;
  OPEN any_cur FOR SELECT * FROM employees;
  OPEN sys_cur FOR SELECT * FROM employees;
  CLOSE sys_cur;
  CLOSE any_cur;
  CLOSE emp_cur;
END;
/
Cursor Variables Are Easier to Use than Cursors?
Cursor variables are easier to use than cursors because:
  • Cursor variables are easier to define. No need to give a specific query statement.
  • Cursor variables are easier to open. You can specify the query dynamically at the time of open.
  • Cursor variables can be passed into procedures or functions.
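A minimal sketch of that last point: a procedure opens a SYS_REFCURSOR and hands it back to the caller, which fetches from it (assuming the same employees table used above; the procedure name get_emps is made up for illustration):

```sql
CREATE OR REPLACE PROCEDURE get_emps (p_cur OUT SYS_REFCURSOR) AS
BEGIN
  OPEN p_cur FOR SELECT first_name FROM employees;
END;
/

DECLARE
  l_cur  SYS_REFCURSOR;
  l_name employees.first_name%TYPE;
BEGIN
  get_emps(l_cur);
  LOOP
    FETCH l_cur INTO l_name;
    EXIT WHEN l_cur%NOTFOUND;
    DBMS_OUTPUT.PUT_LINE(l_name);
  END LOOP;
  CLOSE l_cur;
END;
/
```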

Oracle Database 11g New Features

It hardly seems five minutes since Oracle 10g was released and we were discussing the new features of that, but now we have Oracle 11g Database. Below are the main new features of Oracle Database 11g.

Oracle Real Application Testing

This feature makes it easier to test systems before and after database upgrades and hardware and operating system changes so you can catch and fix problems more quickly.

Enhanced Partitioning

These capabilities include partitioning by parent/child references, partitioning by virtual columns, more composite partitioning choices, and interval partitioning, which automatically creates new partitions based on an interval such as a day or a month.
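A sketch of interval partitioning (the table and column names are made up for illustration): only the first partition is defined, and Oracle creates a new monthly partition the first time a row falls beyond the existing range.

```sql
CREATE TABLE orders (
  order_id   NUMBER,
  order_date DATE
)
PARTITION BY RANGE (order_date)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
(
  -- Only the initial partition is declared; later ones appear on demand.
  PARTITION p_initial VALUES LESS THAN (DATE '2008-01-01')
);
```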

Advanced Compression

Oracle Database 11g supports data compression for update, insert and delete operations commonly used in online transaction processing (OLTP) applications.  Oracle Database 11g table compression improves database performance with better use of memory for caching data and reduced I/O for table scans.

Flashback Data Archive

Part of the Oracle Total Recall option, this feature lets you query data in selected tables “as of” earlier times, which adds a time dimension to your data for change tracking, information lifecycle management, auditing and compliance.  Flashback data archive stores change data automatically to enable fast query access to old versions of the data.
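A sketch of the mechanics (the archive, tablespace and table names are made up for illustration; the Total Recall option is required):

```sql
-- Create an archive that keeps row versions for one year.
CREATE FLASHBACK ARCHIVE fla_1year
  TABLESPACE fla_ts
  RETENTION 1 YEAR;

-- Start tracking changes to a table.
ALTER TABLE orders FLASHBACK ARCHIVE fla_1year;

-- Query the data "as of" an earlier time.
SELECT *
FROM   orders
AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' DAY);
```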

Oracle SecureFiles

This feature lets you store large objects such as images, large text objects, or advanced datatypes including XML, medical imaging, and geospatial raster objects inside the database.  The performance is apparently comparable to file systems, and Oracle SecureFiles provides advanced functionality including intelligent compression, transparent encryption and transparent deduplication.
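A sketch of creating a SecureFiles LOB with some of the options mentioned (the table name is made up for illustration; compression and deduplication require the Advanced Compression option):

```sql
CREATE TABLE documents (
  doc_id   NUMBER,
  doc_body BLOB
)
LOB (doc_body) STORE AS SECUREFILE (
  COMPRESS HIGH
  DEDUPLICATE
);
```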

PL/SQL Function Result Cache

This feature lets you mark a function with the RESULT_CACHE option so that its return values are cached in the SGA. A call with parameter values that have been seen before returns the cached result immediately, instead of re-executing the function body and its queries.
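A minimal sketch of a result-cached lookup function (the countries table and function name are made up for illustration; RELIES_ON declares the dependency in 11gR1):

```sql
CREATE OR REPLACE FUNCTION get_country_name (p_id IN NUMBER)
  RETURN VARCHAR2
  RESULT_CACHE RELIES_ON (countries)
AS
  v_name VARCHAR2(100);
BEGIN
  -- Executed only on a cache miss; repeat calls with the same
  -- p_id are served from the result cache in the SGA.
  SELECT country_name
  INTO   v_name
  FROM   countries
  WHERE  country_id = p_id;
  RETURN v_name;
END;
/
```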

Monday, May 7, 2012

daily coding


Monday, April 16, 2012

Working with ferite Language - web development

simple login authentication 


. . . 
function login_button_clicked(object sender, string extra)
    {   
        array clist=[];
        string custNm=.UserName.textValue();
        string custpass=.Password.textValue();
        string sql = "select count(*) counts from workflow_users where username='$custNm' and password='$custpass'";

        clist=ObjRunObject.rawQuery(sql);
        clist.each() using(comp)
        {
            if (comp.counts==1)
            {
            Request.current().location("AdminPanel");
            }
            else
            {
            println("User Name or Password mismatch! Please check.");
            }
        };
    }
 

data save in db
string custNm=.CustomerName.textValue();
        string custAdd=.CustomerAddress.textValue();
        string sql = "insert into workflow_customertest (name,address) values('$custNm','$custAdd')";
        Database.query(sql);

image in ferite
code:
img border="0" style="vertical-align: top;" src="/wf/Skeleton.app/Resources/Templates/Master.template/Images/attached_file.gif"

n.b. the angle brackets around the tag were stripped by the blog software; place the code inside <> when using it.
