Walk through Oracle 10g Database Features and Enhancements

Something about 10g:

Oracle Application Server 10g provides full support for the Java 2 Platform, Enterprise Edition (J2EE), XML, and emerging Web services standards. With Oracle Application Server 10g, you can simplify access to information for your customers.

With the release of 10g, many new features have been provided to developers to help them write better-performing code.

In this article I would like to shed some light on these new features.

A Brief History of Oracle

Let us look at the history of the Oracle Database and the details of its releases.

My focus will be on the basic enhancements in 10g compared to earlier releases; these basic commands and syntax will come in handy and guide us (developers) towards better coding.

Here we will look at a few SQL*Plus features and database enhancements:

  1. SPOOL
  2. SET SERVEROUTPUT
  3. DBMS_OUTPUT
  4. DESCRIBE FEATURE
  5. SET SQLPROMPT
  6. Flash Back
  7. Regular Expressions
  8. Dual optimization (Fast Dual)
  9. Merge
  10. Exception Enhancements
  11. DML Error Logging in Oracle 10g
  12. COLLECT Function in 10g

1. Spool Enhancements: 

The SPOOL command has been enhanced to include CREATE, REPLACE, and APPEND options. These provide better control over output file creation.

The APPEND, CREATE, and REPLACE extensions have been added to both the SPOOL and SAVE commands:

    • REPLACE – (Default) Replaces an existing file, or creates it if it is not already present.
    • CREATE – Creates a new file, or produces an error if the file already exists.
    • APPEND – Appends to an existing file, or creates a new file if it is not already present.
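A quick sketch of the three options in use (the file name is illustrative):

```sql
-- CREATE: new file; errors if demo_output.txt already exists
SPOOL demo_output.txt CREATE

SELECT sysdate FROM dual;

-- APPEND: add further output to the same file
SPOOL demo_output.txt APPEND

SELECT user FROM dual;

-- REPLACE (the default): overwrite the file if it exists
SPOOL demo_output.txt REPLACE

SPOOL OFF
```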

2. SET SERVEROUTPUT Enhancements:

    • It is common for PL/SQL developers to use the DBMS_OUTPUT.PUT_LINE procedure to write to the console during testing and debugging.
    • To enable output from DBMS_OUTPUT.PUT_LINE, you must enable SERVEROUTPUT.
    • In Oracle 10g this command has been enhanced with a default buffer size of UNLIMITED, eliminating the need to specify a buffer size.
    • You may also specify WORD_WRAPPED to cause DBMS_OUTPUT.PUT_LINE output to be wrapped at clear word breaks.

                SET SERVEROUTPUT ON SIZE 1000000   -- buffer size limited

                SET SERVEROUTPUT ON SIZE UNLIMITED -- buffer size unlimited

                SET SERVEROUTPUT ON                -- defaults to SIZE UNLIMITED

3. dbms_output Enhancements:

    • DBMS_OUTPUT maximum line length

In Oracle 10.1 and below – 255 bytes

In Oracle 10.2 and above – 32767 bytes 

    • DBMS_OUTPUT maximum output buffer size

In Oracle 10.1 and below – 1000000 bytes

In Oracle 10.2 and above – unlimited

 SQL> set serveroutput on

SQL> show serveroutput

serveroutput ON SIZE UNLIMITED FORMAT WORD_WRAPPED

4. DESCRIBE Behavior

Earlier, when you tried to DESCRIBE an object that was INVALID, Oracle just told you the object was invalid. Now, in Oracle 10g, when you DESCRIBE an INVALID object, Oracle will attempt to re-compile and validate it.

5. SQLPROMPT

This feature lets you identify the schema and server you are connected to.

A few more runtime variables have been introduced to help you identify the instance you are connected to and the user you are connected as.

The new variables are:

                _DATE : the current date or a pre-defined string.

                _PRIVILEGE : the privilege level of the connection: SYSDBA, SYSOPER, or blank for a normal connection.

                _USER : the user you are connected as.

                _CONNECT_IDENTIFIER : the connection identifier (instance or service name) you are connected to.

Example:

SQL> SET SQLPROMPT "_DATE : _USER'@'_CONNECT_IDENTIFIER _PRIVILEGE > "

01-JUN-12 : USER@INSTANCE_NAME >

6. Flashback New Features:

Oracle9i introduced the DBMS_FLASHBACK package to allow queries to reference older versions of the data, but Oracle 10g has taken this technology a step further, making it simpler to use.

Flashback Features:

 1. Flashback query: Flashback Query allows the contents of a table to be queried with reference to a specific point in time

2. Flashback table: The FLASHBACK TABLE command allows point in time recovery of individual tables

3. Flashback drop:  The default action of a DROP TABLE command is to move the table to the recycle bin, rather than actually dropping it. The PURGE option can be used to permanently drop a table.

4. Flashback database:  The FLASHBACK DATABASE command is a fast alternative to performing an incomplete recovery.

Flashback Query allows the contents of a table to be queried with reference to a specific point in time, using the AS OF clause.
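A brief sketch of the AS OF clause (the table name, interval, and SCN are illustrative):

```sql
-- Contents of EMP as they were 15 minutes ago
SELECT *
FROM   emp AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '15' MINUTE);

-- A System Change Number (SCN) can be used instead of a timestamp
SELECT *
FROM   emp AS OF SCN 1234567;
```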

Flashback Table Feature: 

The FLASHBACK TABLE command allows point in time recovery of individual tables subject to the following requirements.

  1.   You must have either the FLASHBACK ANY TABLE system privilege or the FLASHBACK object privilege on the table.
  2.   You must have SELECT, INSERT, DELETE, and ALTER privileges on the table.
  3.   There must be enough information in the undo tablespace to complete the operation.
  4.   Row movement must be enabled on the table (ALTER TABLE tablename ENABLE ROW MOVEMENT;).
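Putting the requirements together, a minimal sketch (the table name and interval are illustrative):

```sql
-- Requirement 4: enable row movement before flashing back
ALTER TABLE emp ENABLE ROW MOVEMENT;

-- Rewind the table's contents to how they were 10 minutes ago
FLASHBACK TABLE emp TO TIMESTAMP (SYSTIMESTAMP - INTERVAL '10' MINUTE);
```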

Flashback Table Dropping Feature:

Flashback drop:

 Careful! Dropping tables no longer really drops them. This might be a problem for applications with lots of “temp”-type tables.

               DROP TABLE <table_name>;

               SHOW RECYCLEBIN;

               FLASHBACK TABLE <table_name> TO BEFORE DROP;

               DROP TABLE <table_name> PURGE; -- drops the table without sending it to the recycle bin

               PURGE RECYCLEBIN;

Recyclebin Artifacts:

   Dropping a table:

  • Removes indexes on the dropped table
  • Invalidates views, procedures, functions, and other objects dependent upon the table

 After using “FLASHBACK TABLE”

  • Indexes are restored with different names
  • Primary key constraints are restored using a different name
  • Foreign key constraints are not restored
  • Views, procedures, functions, and other objects dependent upon the table will work once validated (may be done manually or automatically)

7. Regular Expressions:

Oracle now has four functions that allow the use of POSIX-compliant regular expressions in SQL:

  •  REGEXP_LIKE – Similar to LIKE except it uses a regular expression as the search string.

              SELECT * FROM emp WHERE emp_name LIKE ('C%') OR emp_name LIKE ('P%');

              SELECT * FROM emp WHERE REGEXP_LIKE (emp_name, '^C|^P');

  • REGEXP_INSTR – Similar to INSTR except it uses a regular expression rather than a literal as the search string.
  • REGEXP_REPLACE – Similar to REPLACE except it uses a regular expression as the search string
  • REGEXP_SUBSTR – Searches for a string matching the regular expression and returns the matching substring (not really analogous to SUBSTR).

EXAMPLE:

SET SERVEROUTPUT ON

DECLARE
  l_text           VARCHAR2(100) := 'My credit card details are: 1234 1234 1234 1234';
  l_regular_expr   VARCHAR2(50)  := '[0-9]{4} [0-9]{4} [0-9]{4} [0-9]{4}';
  l_credit_card_1  VARCHAR2(50)  := '1234 1234 1234 1234';
  l_credit_card_2  VARCHAR2(50)  := '123c 1234 1234 1234';
BEGIN
  -- REGEXP_INSTR
  IF REGEXP_INSTR(l_text, l_regular_expr) > 0 THEN
    DBMS_OUTPUT.put_line('REGEXP_INSTR: Your input contains credit card details, this is a security risk!');
  END IF;

  -- REGEXP_LIKE
  IF REGEXP_LIKE(l_credit_card_1, l_regular_expr) THEN
    DBMS_OUTPUT.put_line('REGEXP_LIKE: Good Credit Card: ' || l_credit_card_1);
  END IF;

  IF NOT REGEXP_LIKE(l_credit_card_2, l_regular_expr) THEN
    DBMS_OUTPUT.put_line('REGEXP_LIKE: Bad Credit Card : ' || l_credit_card_2);
  END IF;

  -- REGEXP_REPLACE
  DBMS_OUTPUT.put_line('REGEXP_REPLACE: Before: ' || l_text);
  DBMS_OUTPUT.put_line('REGEXP_REPLACE: After : ' || REGEXP_REPLACE(l_text, l_regular_expr, '**** **** **** ****'));

  -- REGEXP_SUBSTR
  DBMS_OUTPUT.put_line('REGEXP_SUBSTR: Matching String : ' || REGEXP_SUBSTR(l_text, l_regular_expr));
END;
/

8. Dual optimization (Fast Dual):

  • For years, developers have used the Dual table for “quick” queries only to find during performance tuning that scans involving dual could be expensive
  • In Oracle 10g the optimizer knows about dual and implements an operation called “fast dual” greatly speeding access

The FAST DUAL optimization improves performance by significantly reducing consistent reads, and thus benefits applications that frequently query the DUAL table.

Although selecting from DUAL is fast, it can still be expensive if you do it millions of times.
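We can see the optimization in the execution plan. A quick sketch (the plan output is abbreviated here):

```sql
EXPLAIN PLAN FOR
SELECT sysdate FROM dual;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- In 10g the plan shows the FAST DUAL operation instead of a
-- full table scan of the DUAL table, with no consistent gets.
```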

9. Merge Enhanced:

MERGE now allows:

  1.  Specification of either update, or insert, or both
  2.  Deletion of rows during update

In Oracle 10g, either the INSERT or UPDATE operations are optional (you must have one, but not necessarily both, unlike in 9i).
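A sketch of both enhancements together (the table and column names are illustrative):

```sql
-- Update-only MERGE: the INSERT clause is omitted entirely,
-- which 9i did not allow
MERGE INTO target_table t
USING source_table s
ON (t.id = s.id)
WHEN MATCHED THEN
  UPDATE SET t.val = s.val
  -- 10g also allows rows to be deleted during the update phase
  DELETE WHERE (s.val IS NULL);
```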

10. Exception Enhancements:

 Oracle has introduced an enhancement to exception handling and error messages in 10g.

The DBMS_UTILITY.FORMAT_ERROR_BACKTRACE function can be used to get the true origin of an exception in a nested stack of PL/SQL calls.

This can be better explained using an example:

SQL> CREATE PROCEDURE p3 AS

  2  BEGIN

  3     --filler

  4     --filler

  5     RAISE PROGRAM_ERROR;

  6  END p3;

  7  /

 Procedure created.

 SQL> CREATE PROCEDURE p2 AS

  2  BEGIN

  3     p3;

  4  END p2;

  5  /

 Procedure created.

 SQL> CREATE PROCEDURE p1 AS

  2  BEGIN

  3     --filler

  4     --filler

  5     --filler

  6     --filler

  7     p2;

  8  END p1;

  9  /

 Procedure created.

Now we can call the top-level procedure P1 and report the propagation points of any exceptions.

SQL> BEGIN

  2     p1;

  3  EXCEPTION

  4     WHEN OTHERS THEN

  5        DBMS_OUTPUT.PUT_LINE(

  6           DBMS_UTILITY.FORMAT_ERROR_BACKTRACE

  7           );

  8  END;

  9  /

ORA-06512: at "SCOTT.P3", line 5

ORA-06512: at "SCOTT.P2", line 3

ORA-06512: at "SCOTT.P1", line 7

ORA-06512: at line 2

 PL/SQL procedure successfully completed.

11. DML Error Logging in Oracle 10g:

This section introduces DML error logging, a major new feature of Oracle 10g Release 2 for bulk SQL operations. DML error logging enables us to trap “bad data” and filter it to a log table without failing our overall DML statement. This has never been possible in SQL before, although we could use complex constraint management and application code to achieve a somewhat similar end result. DML error logging is closer in concept to the FORALL .. SAVE EXCEPTIONS construct in PL/SQL (new in Oracle 9i).

Overview of DML error logging

With this feature, we can add a clause to our bulk DML statements (INSERT, UPDATE, MERGE and DELETE) to prevent the statement failing on hitting exceptions (i.e. “bad data”). Exceptional rows are added to a specifically-created errors table for investigation and/or intervention. In addition, we can control the number of bad records we will tolerate before failing the entire statement.

There are two components to DML error logging as follows:

  • LOG ERRORS clause to DML statements; and
  • DBMS_ERRLOG package for managing error tables.

We shall examine both of these components in this article, but first we will create some sample tables.

Getting started: sample data

We will use two tables in our DML error logging examples, as follows. Note that for the examples, I created a user named EL with just CREATE SESSION, CREATE TABLE and a tablespace quota.

SQL> CREATE TABLE src (x,y,z)

  2  AS

  3     SELECT object_id

  4     ,      object_type

  5     ,      object_name

  6     FROM   all_objects

  7     WHERE  ROWNUM <= 5;

Table created.

SQL> CREATE TABLE tgt

  2  AS

  3     SELECT *

  4     FROM   src

  5     WHERE  ROWNUM <= 3;

Table created.

SQL> ALTER TABLE tgt ADD

  2     CONSTRAINT pk_tgt

  3     PRIMARY KEY (x);

Table altered.

We have a source table (SRC) and a target table (TGT). The data is setup in such a way that a standard INSERT..SELECT from SRC into TGT will fail, as follows.

SQL> INSERT INTO tgt SELECT * FROM src;

INSERT INTO tgt SELECT * FROM src

*

ERROR at line 1:

ORA-00001: unique constraint (EL.PK_TGT) violated

On this basis, we can now introduce the new DML error logging feature. To begin, we will require an exceptions table.

Creating the error log table

DML error logging works on the principle of trapping exceptions in bulk SQL statements and re-directing the “bad data” to an error table. The error table is created using an API in the new DBMS_ERRLOG package. The minimum information we need to supply is the name of the target table we wish to trap exceptions for. By default, Oracle will create an error table named "ERR$_" followed by the first 25 characters of the target table name. If we so choose, we can optionally control the name, owner, and tablespace of the error log table by supplying the relevant parameters.

Given this, we will now create an error log table for TGT and provide a friendly name of our own.

SQL> BEGIN

  2     DBMS_ERRLOG.CREATE_ERROR_LOG(

  3        dml_table_name      => 'TGT',        -- required

  4        err_log_table_name  => 'TGT_ERRORS'  -- optional

  5        );

  6  END;

  7  /

PL/SQL procedure successfully completed.

SQL> SELECT table_name FROM user_tables;

TABLE_NAME

------------------------------

TGT

SRC

TGT_ERRORS

The error log table has a number of metadata columns (describing the nature of the exceptional data) and also a VARCHAR2 representation of the base-table columns themselves. The VARCHAR2 columns enable us to see erroneous data that perhaps did not satisfy its base-table datatype. Needless to say, there is a limitation on the datatypes that can be converted to VARCHAR2. For example, ANYDATA, user-defined types and LOBs cannot be captured in the error log table.

The structure of the TGT_ERRORS table is as follows.
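The five ORA_ERR_% control columns below are mandated by the DBMS_ERRLOG package; the remaining columns are VARCHAR2 renditions of TGT's own columns:

```sql
SQL> DESC tgt_errors
 Name             Null?    Type
 ---------------- -------- --------------
 ORA_ERR_NUMBER$           NUMBER
 ORA_ERR_MESG$             VARCHAR2(2000)
 ORA_ERR_ROWID$            UROWID
 ORA_ERR_OPTYP$            VARCHAR2(2)
 ORA_ERR_TAG$              VARCHAR2(2000)
 X                         VARCHAR2(4000)
 Y                         VARCHAR2(4000)
 Z                         VARCHAR2(4000)
```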

Invoking DML error logging

Now that we have some sample data and an error log table in place, we are ready to see DML error logging in action. We’ll begin by replaying the failed INSERT..SELECT from earlier and then describe the new syntax elements.

SQL> INSERT INTO tgt

  2  SELECT * FROM src

  3  LOG ERRORS INTO tgt_errors ('INSERT..SELECT..RL=UNLIMITED')

  4  REJECT LIMIT UNLIMITED;

2 rows created.

The LOG ERRORS clause is how we invoke DML error logging. We are telling Oracle that we wish our DML statement to succeed in the event that we encounter exceptions below a specified threshold.

The INTO tgt_errors extension to the LOG ERRORS clause is only necessary when using a non-default error log table name, as we are in this article. The optional literal in brackets enables us to “tag” any bad data that is written to our error table. This helps us to differentiate exceptional data over time. Note that this tag can (and should) be a bind variable in “real” applications.

The REJECT LIMIT clause is how we specify our threshold for errors (i.e. the number of exceptions we will allow before Oracle fails the entire DML statement).

Error log data

Re-visiting our example, therefore, we can see that with DML error logging our insert succeeded but only for 2 rows. We know this is fewer than the number of records in our staging table, so we should check the error log table, as follows (using Tom Kyte’s print_table procedure for convenience). Note in particular our tags, which can help us find the bad data quickly on a busy system and also the error message assigned to each failed row (we just happen to have the same exception for each due to the setup of the sample data). In addition, we can see the actual data that we were trying to insert.
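A simple query of the error log table serves the same purpose (the output will vary with your data):

```sql
SELECT ora_err_number$  -- the Oracle error code (here 1, for ORA-00001)
,      ora_err_tag$     -- the tag we supplied to LOG ERRORS
,      x, y, z          -- the bad data itself
FROM   tgt_errors;
```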

The logged data is not part of the same transaction, which we can demonstrate with a simple rollback: the error log data remains in the log table. On repeated re-runs and failures, therefore, it will be necessary to tag each statement in such a way as to make them easily identifiable. The tags we’ve used so far in this article would obviously be useless under such a scenario.

Note that there is a ROWID column in the logging table. This is used when an UPDATE (or update part of a MERGE) fails and provides the ROWID of the target row that was being updated. As we saw with the INSERT example, the “bad data” that caused the exception is recorded in the logging table.

Reject limit

The default reject limit is 0 (i.e. if this part of the LOG ERRORS clause is omitted). In our first DML error logging example, we used an unlimited reject limit. With this option, a DML statement will succeed even if none of its target operations succeed (i.e. all data is “bad”). If we set an explicit reject limit and exceed it, the entire statement fails but n+1 errors are still logged (where n is the reject limit). We can see this as follows by setting a reject limit of 1. Note that we have changed our tag accordingly to assist with the lookup against the error log.
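A sketch of the reject-limit scenario against our sample tables (the tag is illustrative):

```sql
-- SRC holds three rows that already exist in TGT, so a reject limit
-- of 1 is exceeded: the statement fails with ORA-00001, but n+1 = 2
-- errors are still written to the error log table
INSERT INTO tgt
SELECT * FROM src
LOG ERRORS INTO tgt_errors ('INSERT..SELECT..RL=1')
REJECT LIMIT 1;
```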

DML error logging in PL/SQL

We can see that DML error logging is fully supported in PL/SQL. The SQL%ROWCOUNT attribute will report the successful rowcount only. Unfortunately, there doesn’t appear to be an attribute or exception to indicate that errors were logged, so the only option is to examine the error log table itself. In the following example, we will reset our sample data and table and embed our SQL inside a PL/SQL block. We will also use a bind variable for the logging tag.
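A minimal sketch of the PL/SQL usage, with a bind variable as the tag (the tagging scheme here is my own assumption, not prescribed by Oracle):

```sql
DECLARE
  -- a unique tag so this run's errors are easy to find later
  l_tag tgt_errors.ora_err_tag$%TYPE := 'PLSQL_' || TO_CHAR(SYSDATE, 'YYYYMMDDHH24MISS');
BEGIN
  INSERT INTO tgt
  SELECT * FROM src
  LOG ERRORS INTO tgt_errors (l_tag)
  REJECT LIMIT UNLIMITED;

  -- reports successful rows only; logged errors must be queried separately
  DBMS_OUTPUT.PUT_LINE('Rows inserted: ' || SQL%ROWCOUNT);
END;
/
```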

Dropping the error log table

To remove the error log table, we have to manually drop it. Unusually, the DBMS_ERRLOG package does not supply an API for this, but as it is simply a table without any other objects attached to it, we can simply drop it ourselves.
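Since the error log is an ordinary heap table, a plain DROP TABLE is all that is required:

```sql
DROP TABLE tgt_errors;
```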

12. COLLECT Function in 10g:

Oracle 10g has introduced an extremely useful new group function, COLLECT. This function enables us to aggregate data into a collection, retaining multiple records of data within a single row (like a nested table). One of the main benefits of this function is that it makes “string aggregation” (one of the web’s most-requested Oracle techniques) very simple. This section will introduce the COLLECT function and then demonstrate how it can be used to aggregate multiple records into a single value (a technique known as “string aggregation”).

An overview of the COLLECT function

We’ll start by demonstrating the COLLECT function. We’ll run a simple query against the EMP table to collect the names of all employees by department.
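A sketch of such a query against the standard SCOTT.EMP demo table:

```sql
SELECT deptno
,      COLLECT(ename) AS employees
FROM   emp
GROUP  BY deptno;
```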

Something looks a little unusual here, but ignoring the strange identifier for a moment, we can see that the COLLECT function has aggregated the employee names per department as requested.

System-Generated Types

Moving on to the strange identifier in the example output, we can see that Oracle has created a collection type to support the COLLECT function. The behaviour is different between 10g releases 1 and 2, so we’ll investigate each separately.

10g Release 1

For our example EMP query above (executed in a 10.1 database), Oracle has created a supporting type named “SYSTPXeCjDqbWSqWrshgYrRPR4Q==”. We can find this in the dictionary as follows.

SQL> SELECT owner
  2  ,      typecode
  3  FROM   all_types
  4  WHERE  type_name = 'SYSTPXeCjDqbWSqWrshgYrRPR4Q==';
 OWNER                          TYPECODE
------------------------------ ------------------------------
SYS                            COLLECTION
 1 row selected.
 

It appears as though Oracle generates a supporting collection type every time it hard-parses a SQL statement that uses the COLLECT function. In Oracle 10.1, the type is created in the SYS schema. If we try to use this new type ourselves, we’ll find that we cannot.

In addition to the burden this must place on the parsing process, we might also be concerned about the number of system-generated types that might start appearing in our database (for reasons unknown, some DBAs and developers are worried about this sort of thing). If we flush the shared pool or even bounce the database, the type persists, even though it no longer supports a cached SQL statement. Oracle Support’s official line on this (which used to be “bounce the database to remove the type”) is that SMON cleans up unused types “after a period”. Recent experience suggests that this period can be anything up to 24 hours after the bounce, so in an online database with a large shared pool, these types could stick around for some considerable period. Whether we choose to worry about this or not is another matter entirely!

10g Release 2

In Oracle 10g Release 2, the type is instead created in the schema that parses the SQL statement, which we can verify by repeating the original EMP query to generate a type and then checking ALL_TYPES as before.

Furthermore, removing this system-generated type is much simpler in 10.2 than in 10.1. First, because we own the type, we can simply drop it as follows (assuming that the SQL statement that generated it is no longer required).

SQL> DROP TYPE "SYSTPo3itZvoiRAyeH+f5LKv6+Q==";
 
Type dropped.

Bear in mind, however, that this type is supporting a SQL cursor, so if we do decide to drop the system-generated type as above, the underlying SQL cursor will be removed from the shared pool. A re-run of the original SQL statement will then need to be hard-parsed, and a new type will be created accordingly.

Alternatively, a bounce of the database will drop the type immediately (rather than at some point during the next 24 hours, which is the 10.1 behaviour). Despite this, we are probably best to leave the SQL cursor to age out of the shared pool naturally, leaving SMON to clean up at a later stage.

 

Using our own collection Types

It is possible to use our own collection types with COLLECT. The CAST function (available at least as far back as Oracle 8.0 and possibly further) can be used to turn the results of the COLLECT into a type of our choosing. Note that this doesn’t stop Oracle creating system-generated types to support the SQL statement, but it does make the results easier to work with.

In the following example, we’ll create a standard VARCHAR2 collection type and CAST the results of our collected employee names.
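A sketch, using a simple nested table type of our own (the type name is illustrative):

```sql
-- a general-purpose VARCHAR2 nested table type
CREATE TYPE varchar2_ntt AS TABLE OF VARCHAR2(4000);
/

-- CAST the collected employee names to our named type
SELECT deptno
,      CAST(COLLECT(ename) AS varchar2_ntt) AS employees
FROM   emp
GROUP  BY deptno;
```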

Note that if you are casting collections of numbers, Oracle can be particularly fussy about precisions and scales, as the following example demonstrates. We’ll create a general collection of number and then attempt to cast a collection of employee salaries (the EMP.SAL column is defined as NUMBER(7,2)).

This is rather a confusing problem, as the unconstrained NUMBER type should easily incorporate a NUMBER(7,2). To wrap this up, however, there are two simple solutions. We can either make the collected column fit the type or the type fit the column, as shown below. First we’ll make the column fit the type.
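A sketch of both workarounds, assuming a general NUMBER collection type (the type names are illustrative):

```sql
-- an unconstrained NUMBER nested table type
CREATE TYPE number_ntt AS TABLE OF NUMBER;
/

-- Option 1: make the column fit the type by casting SAL
-- (a NUMBER(7,2)) to plain NUMBER inside the COLLECT
SELECT deptno
,      CAST(COLLECT(CAST(sal AS NUMBER)) AS number_ntt) AS salaries
FROM   emp
GROUP  BY deptno;

-- Option 2: make the type fit the column instead
CREATE TYPE number72_ntt AS TABLE OF NUMBER(7,2);
/
```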

Furthermore, COLLECT can also be used for string aggregation.
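One hedged sketch of string aggregation: flatten the collected names with a small helper function (the function and type names are illustrative):

```sql
-- a VARCHAR2 nested table type (skip if already created)
CREATE TYPE varchar2_ntt AS TABLE OF VARCHAR2(4000);
/

-- turn a collection into a comma-separated string
CREATE FUNCTION nt_to_string (p_list IN varchar2_ntt) RETURN VARCHAR2 IS
  l_result VARCHAR2(4000);
BEGIN
  FOR i IN 1 .. p_list.COUNT LOOP
    l_result := l_result || CASE WHEN i > 1 THEN ',' END || p_list(i);
  END LOOP;
  RETURN l_result;
END nt_to_string;
/

SELECT deptno
,      nt_to_string(CAST(COLLECT(ename) AS varchar2_ntt)) AS employee_list
FROM   emp
GROUP  BY deptno;
```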

I hope this illustration of the 10g features and enhancements is clear and easy to understand, and that it helps guide developers towards better coding.
