Monday, May 12, 2008

ORA-12801 ORA-01114 ORA-27072 and Linux Error: 25

We recently ran into the issue below while running some long-running recon scripts. These scripts demand about 70 GB of temp space and failed with the following error after running for 1.5 hours.

java.sql.SQLException: ORA-12801: error signaled in parallel query server P011
ORA-01114: IO error writing block to file 1004 (block # 413969)
ORA-27072: skgfdisp: I/O error
Linux Error: 25: Inappropriate ioctl for device
Additional information: 413969


After consulting our SA and doing some research on Google, we found the root cause of this issue.

One of our DBAs had created two temp files of 20 GB each on a file system that had only 30 GB of free space. Initially I wondered how Oracle allowed the creation of 40 GB of temp files when we had just 30 GB free on the file system. But then I learned about Oracle's sparse temp file creation and understood that Oracle doesn't actually allocate/reserve the space when we create temp files.


******************************************************************
What are Sparse Files?
On many Unix file systems, files containing long strings of nulls can be stored much more efficiently than other files. To be specific, if a string of nulls spans an entire allocation block, that whole block is not stored on disk at all. Files where one or more blocks are omitted in this way are called sparse files. The missing blocks are also known as holes.

Note that sparse files are not the same as compressed files. Sparse files read exactly the same as their non-sparse equivalents: the Unix kernel simply fills in nulls for the missing blocks.

Sparse files are created by seeking beyond the end of a file and then writing data. Because of the nature of their access patterns, random-access database programs often create sparse files.


******************************************************************
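You can see the effect from the shell by comparing the apparent size (ls) with the actual allocation (du). A minimal sketch; the path and size here are just examples:

$ dd if=/dev/zero of=/tmp/sparse_demo bs=1 count=1 seek=2147483647
$ ls -l /tmp/sparse_demo     # apparent size: ~2 GB
$ du -k /tmp/sparse_demo     # actual allocation: only a few KB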

Here is a brief explanation from Tom Kyte about these sparse temp files.

One of the nuances of true temporary files is that if the operating system permits it – the temporary files will be created “sparse”. That is, they will not actually consume disk storage until they need to. You can see that easily using this example (on Red Hat Linux in this case)



SQL> !df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/hda2 74807888 41999488 29008368 60% /
/dev/hda1 102454 14931 82233 16% /boot
none 1030804 0 1030804 0% /dev/shm


SQL> create temporary tablespace temp_huge tempfile '/tmp/temp_huge' size 2048m
/
Tablespace created.



SQL> !df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/hda2 74807888 41999616 29008240 60% /
/dev/hda1 102454 14931 82233 16% /boot
none 1030804 0 1030804 0% /dev/shm



NOTE: df is a Unix command to show “disk free”. This command showed that I have 29,008,368 K free in the file system containing /tmp before I added a 2gig tempfile to the database. After I added that file, I had 29,008,240 K free in the file system. Apparently it only took 128 K of storage to hold that file. But if we “ls” it:



SQL> !ls -l /tmp/temp_huge
-rw-rw---- 1 ora10g ora10g 2147491840 Jan 2 16:34 /tmp/temp_huge


It appears to be a normal “2 gig” file. But it is only consuming some 128 K of storage. The reason I point this out is because I would be able to actually create hundreds of these two gigabyte temporary files – even though I have roughly 29 GIG free. Sounds great – free storage for all! The problem is as we started to use these temporary files and they started expanding out – you would rapidly hit errors stating “no more space”. Since the space is allocated as needed, you stand a definite chance of running out of room (especially if after you create the tempfiles someone else fills up the file system with other stuff). How to solve this differs from OS to OS, on Linux some of the options are to use ‘dd’ to fill the file or use ‘cp’ to create a non-sparse file. For example:



SQL> !cp --sparse=never /tmp/temp_huge /tmp/temp_huge2

SQL> !df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/hda2 74807888 44099336 26908520 63% /
/dev/hda1 102454 14931 82233 16% /boot
none 1030804 0 1030804 0% /dev/shm


SQL> drop tablespace temp_huge;
Tablespace dropped.


SQL> create temporary tablespace temp_huge
  2  tempfile '/tmp/temp_huge2' reuse;
Tablespace created.


SQL> !df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/hda2 74807888 44099396 26908460 63% /
/dev/hda1 102454 14931 82233 16% /boot
none 1030804 0 1030804 0% /dev/shm


After copying the sparse 2 G file to “/tmp/temp_huge2” and creating the temporary tablespace using that tempfile with the REUSE option – we are assured that tempfile has allocated all of its file system space and our database actually has 2 G of temporary space to work with.
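For completeness, the 'dd' alternative Tom mentions would look roughly like this (a sketch; the path and size are examples):

$ dd if=/dev/zero of=/tmp/temp_huge3 bs=1048576 count=2048

This writes 2,048 one-megabyte blocks of zeros, producing a fully allocated 2 GB file that can then be used with CREATE TEMPORARY TABLESPACE ... REUSE, just like the cp version above.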


I would also recommend reading the Metalink articles below and this AskTom discussion if you would like to know more:

Metalink Notes: 1020110.6, 251336.1, 121732.1
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:34014729642755

Thursday, April 17, 2008

Transportable Tablespaces

For the last 10 days, I have spent a great amount of time doing R&D on TTS, reading Metalink articles, whitepapers, and Oracle docs. I have compiled some notes extracted from all these documents so that I can use them as a reference.


Transportable Tablespaces
The transportable tablespaces feature was introduced in version 8.1.5, the first release of Oracle8i; this feature is sometimes referred to as pluggable tablespaces. You need the Enterprise Edition of Oracle to generate a transportable tablespace set, but any edition of Oracle (Enterprise, Standard, or Personal) can be used to plug a transportable tablespace set into an Oracle database. The tablespaces being transported can be either dictionary managed or locally managed.

Moving data using transportable tablespaces can be much faster than performing either an export/import or unload/load of the same data, because transporting a tablespace only requires the copying of datafiles and integrating the tablespace structural information. You can also use transportable tablespaces to move index data, thereby avoiding the index rebuilds you would have to perform when importing or loading table data.

The transportable tablespace mechanism is the combination of two things:
- An export of the metadata (instead of the data) of a tablespace
- A copy of the files belonging to the tablespace

These two parts can then be transferred to another compatible operating system in order to "plug" the tablespace and its files into another database:
- Restore the files in the new location
- Import the metadata of the tablespace specifying the new location of the files


The appropriate environment is set up by SYS when
$ORACLE_HOME/rdbms/admin/catproc.sql is executed. More specifically, the catproc script runs the following scripts to implement transportable tablespaces:
- $ORACLE_HOME/rdbms/admin/catplug.sql
- $ORACLE_HOME/rdbms/admin/dbmsplts.sql
- $ORACLE_HOME/rdbms/admin/prvtplts.plb

The SYS packages DBMS_PLUGTS and DBMS_TTS are available to users to handle transportable tablespaces.

Some views have been modified to reflect this feature. In particular, the column PLUGGED_IN has been added to the views V$DATAFILE (value 1 or 0) and DBA_TABLESPACES (value YES or NO). In addition, some views have been added to the dictionary for internal use (STRADDLING_TS_OBJECTS, TS_PLUG_INFO, UNI_PLUGGABLE_SET_CHECK, PLUGGABLE_SET_CHECK).

The EXPORT command has new keywords: TRANSPORT_TABLESPACE and TABLESPACES.
The IMPORT command also has new keywords: TRANSPORT_TABLESPACE, TABLESPACES, DATAFILES, and TTS_OWNERS.
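A minimal end-to-end sketch using these keywords (the tablespace name, password, and paths are examples, not from the original notes). On the source, the tablespace must first be made read-only:

SQL> ALTER TABLESPACE ts1 READ ONLY;
$ exp \'sys/password as sysdba\' TRANSPORT_TABLESPACE=y TABLESPACES=ts1 FILE=ts1.dmp

Copy ts1's datafiles and ts1.dmp to the target, then plug the tablespace in:

$ imp \'sys/password as sysdba\' TRANSPORT_TABLESPACE=y DATAFILES='/u01/oradata/trgt/ts1_01.dbf' FILE=ts1.dmp
SQL> ALTER TABLESPACE ts1 READ WRITE;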


RESTRICTIONS
Principal limitations for transportable tablespaces are:

In Oracle8i and Oracle9i, the source and target database had to be on the same hardware platform. For example, you could transport tablespaces between Sun Solaris Oracle databases, or between NT Oracle databases; however, you could not transport a tablespace from a Sun Solaris Oracle database to an NT Oracle database. Starting with Oracle Database 10g, you can transport tablespaces across platforms. If the source and target platforms are of different endianness, an additional step must be performed on either the source or the target platform to convert the tablespace being transported to the target format. If they are of the same endianness, no conversion is necessary and tablespaces can be transported as if they were on the same platform.
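The conversion step is done with RMAN. A sketch of the source-side variant (the tablespace name, target platform string, and output location are examples; valid platform names can be found in V$TRANSPORTABLE_PLATFORM):

RMAN> CONVERT TABLESPACE ts1
      TO PLATFORM 'Linux IA (32-bit)'
      FORMAT '/tmp/%U';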


The character set and national character set of the source and target databases must be the same.

In Oracle8i one of the restrictions was that the block size of both databases had to be the same. In Oracle9i the introduction of multiple block sizes removed this restriction.

Users whose default tablespace is being exported must exist in the target database before importing.
The target database must not have a tablespace of the same name. (In 10g and above, you can rename an existing tablespace, as shown below.)
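The 10g rename, for reference (the tablespace names are examples):

SQL> ALTER TABLESPACE scott_ts RENAME TO scott_ts_old;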

The tablespace set must be self-contained (no references pointing outside the tablespace set, such as indexes, constraints, partitions, or LOBs). This can be checked using the DBMS_TTS.TRANSPORT_SET_CHECK procedure. The TS_LIST parameter accepts a comma-separated list of tablespace names, and the INCL_CONSTRAINTS parameter indicates whether constraints should be included in the check.
Example:
EXEC DBMS_TTS.TRANSPORT_SET_CHECK(ts_list=>'POOL_DATA', incl_constraints => TRUE);

The TRANSPORT_SET_VIOLATIONS view can be used to check for any violations:
SELECT * FROM sys.transport_set_violations;

The Oracle versions of both source and target must be 8.1.x or above.

You cannot transport the SYSTEM tablespace, or SYSTEM tablespace objects or objects owned by the user SYS. Some examples of such objects are PL/SQL, Java classes, callouts, views, synonyms, users, privileges, dimensions, directories, and sequences.

Most database entities, such as data in a tablespace or structural information associated with the tablespace, behave normally after being transported to a different database. Some exceptions arise with the following entities:
- Advanced queues: transportable tablespaces with 8.0-compatible advanced queues with multiple recipients are not supported.
- Domain indexes: only supported with Oracle9i and Oracle10g transportable tablespaces.
- Materialized views/snapshot/replication: only supported with Oracle10g transportable tablespaces.
- Function-based indexes: only supported with Oracle10g transportable tablespaces (PL/SQL function-based indexes are not supported).
- Scoped REFs: only supported with Oracle10g transportable tablespaces.


Transportable Tablespaces FAQs:

Q: How do you connect AS SYSDBA when using Export or Import?
Incorrect usage of single or double quotes can result in errors. On Windows the quotes can be passed as-is; on Unix they must be escaped:

Windows: exp 'sys/change_on_install@instance as sysdba' tables=scott.emp
Unix : exp \'sys/change_on_install@instance as sysdba\' tables=scott.emp

Q: Can a tablespace be renamed when transporting to the target database?
This is possible with the Oracle10g release; with Oracle9i and lower releases it is not.
Transported tablespaces are plugged in in a read-only configuration; however, the file header also stores the tablespace name, so any attempt to rename the tablespace would require an update to the file header. Similarly, at unplug time the tablespace is required to be in read-only mode, and the same restriction therefore applies: you can't RENAME a tablespace, because you might only be taking a copy of the datafile.

Q: Can a datafile be renamed when transporting to the target database?
Yes. Rename the physical file at the operating-system level after copying it to the target directory, then use the new datafile name in the DATAFILES parameter of the import utility. (The example below happens to come from an OpenVMS system, hence the RENAME command.)
Example:

1. Assume the data file TEST_2.dbs has been copied from the source database to the target database under the original physical name.
2. Use the OpenVMS RENAME command to alter the physical file name at the OS level.
Example: $ rename TEST_2.dbs TEST_9.dbs
3. Edit the import parameter file and change TEST_2.dbs to TEST_9.dbs:
USERID='SYS/CHANGE_ON_INSTALL AS SYSDBA'
TRANSPORT_TABLESPACE=Y
TABLESPACES=SCOTT_TS
DATAFILES=(TEST_9.DBS,TEST_1.DBS)
FILE=SCOTT_TS.DMP
4. Import the metadata.
5. Review V$DATAFILE to see that TEST_9.dbs is there and TEST_2.dbs is not.
Q: What order will the datafiles be added on the target database?
The order is controlled automatically by the export and import operations. Changing the order of the physical file names within the DATAFILES parameter of the import utility has no effect on the order in which files are added to the target database. With the import operation only the absolute file numbers will change; the relative file numbers remain unaltered.
Q: What schema will own the tables in the transported tablespace?
The tables will be owned by the same schema owner as in the source database unless the FROMUSER/TOUSER import parameters are used.

Q: Will the schema owner be created automatically in the target database?
No. You must have the schema owner (database user) already set up in the target database.

Q: Are ROWIDs modified during the import on the target database?
No. ROWIDs are dependent upon relative file numbers. These file numbers remain the same when importing the tablespaces into the new database; only the absolute file numbers change. See the Oracle Administrator's Guide for more information.

Please review the MetaLink articles below for more information.
Note 77523.1 - Transportable Tablespaces -- An Example to setup and use
Note 100693.1 - Getting Started with Transportable Tablespaces
Note 103013.1 - Renaming Tablespace Using Transportable Tablespace Feature
Note 132904.1 - Compatibility Matrix for Export & Import between Different Oracle Versions
Note 144212.1 - Transporting Tablespaces with Different Block Sizes Between Databases

Example: http://www.oracle-base.com/articles/9i/TransportableTablespaces9i.php

Monday, February 18, 2008

Tuning the Import Process

We sync four of our System Testing DBs with Prod data once every two months. I have come up with the notes below based on my experience and on notes I have collected from various sites and forums.

Before starting the import (a sketch of the key prep commands follows this list):
  • Change the DB to No Archive Log mode.
  • Create large redo log files and drop the existing redo logs (2 GB each or more).
  • Increase LOG_BUFFER and DB_CACHE_SIZE considerably in the init$SID.ora file.
  • Place the dump files to be imported on a separate physical disk drive from the Oracle data files and redo log files.
  • Create a large undo tablespace.
  • Identify the big tables and drop their indexes.
  • Disable logging by setting the hidden parameter _disable_logging = true to reduce redo generation. (The DB will be unrecoverable if it crashes mid-import, as the redo log files won't have the data required for crash recovery. Take a full DB backup before setting this parameter.)
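Here is a sketch of those prep commands (treat it as an outline; the hidden-parameter step in particular is version dependent and unsupported, so test it on a throwaway instance first):

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE NOARCHIVELOG;
SQL> ALTER DATABASE OPEN;
SQL> ALTER SYSTEM SET "_disable_logging" = TRUE;  -- hidden parameter: full backup first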

Start the imp process with the options below (a combined example follows the list):

  • BUFFER (set a large buffer)
  • RECORDLENGTH=65535 (the 64 KB maximum; set this according to your I/O chunk size)
  • COMMIT=N (this option provides a significant performance increase; make sure you have a large UNDO/rollback segment)
  • INDEXES=N (create an index file so that you can create the indexes AFTER you have imported the data)
  • RESUMABLE=Y
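A combined example of these options (the file names and buffer size are hypothetical):

$ imp system/manager FULL=Y FILE=prod_full.dmp LOG=imp_prod.log BUFFER=10485760 RECORDLENGTH=65535 COMMIT=N INDEXES=N RESUMABLE=Y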
After the import:
Generate the index file and create the indexes for the big tables in parallel mode, as sketched below.
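One way to do this (the file and index names are hypothetical): run imp with INDEXFILE to extract the CREATE INDEX statements without importing any data, then edit the script to add PARALLEL and NOLOGGING before running it.

$ imp system/manager FULL=Y FILE=prod_full.dmp INDEXFILE=create_indexes.sql
-- in the edited create_indexes.sql, e.g.:
CREATE INDEX scott.big_tab_idx ON scott.big_tab (order_id) PARALLEL 8 NOLOGGING;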

Review the Metalink article below for more information on tuning the import process:
Note 93763.1 - Tuning Considerations When Import Is Slow