Cron is a time-based job scheduler in Unix-like operating systems. The name comes from the Greek word "chronos," meaning time. Cron enables users to schedule jobs (commands or shell scripts) to run automatically at a specified time or date.
> connected as the oracle user.
[oracle@oraclelinux bin]$ pwd
/bin
crontab -e - opens the user's crontab file for viewing/editing
crontab -l - simply lists the crontab file's contents for the user. Think of it as a "cat" function for the crontab.
-------------------------------
[oracle@oraclelinux bin]$ crontab -e
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=oracle
HOME=/
*/5 * * * * sh /home/oracle/rakesh.sh
-----------------------------------------
The script will execute every five minutes.
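To read the schedule line above, it helps to name the five cron fields. A minimal sketch in plain shell, just splitting the entry string (see `man 5 crontab` for the authoritative field definitions):

```shell
# A crontab entry has five schedule fields followed by the command:
#   minute(0-59) hour(0-23) day-of-month(1-31) month(1-12) day-of-week(0-7)
ENTRY='*/5 * * * * sh /home/oracle/rakesh.sh'

set -f            # disable globbing so the bare * fields are not expanded
set -- $ENTRY     # split the entry on whitespace
MINUTE=$1 HOUR=$2 DOM=$3 MONTH=$4 DOW=$5
echo "runs at minute: $MINUTE"   # prints: runs at minute: */5
```

Here `*/5` in the minute field means "every 5th minute," and `*` in the remaining fields means "every hour, every day, every month, every weekday."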
http://en.wikipedia.org/wiki/Cron
http://www.crontabrocks.org/
--------------------------------------------------
rakesh.sh shell script :
--------------------------------------------------
#!/bin/bash
#PATH=$PATH:$HOME/bin;export PATH
unset USERNAME
#DISPLAY=10.142.200.141:0.0; export DISPLAY
#Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1; export ORACLE_HOME
ORACLE_SID=jsildb; export ORACLE_SID
ORACLE_HOME_LISTENER=LISTENER; export ORACLE_HOME_LISTENER
ORATAB=/etc/oratab; export ORATAB
#ORACLE_TERM=xterm; export ORACLE_TERM
export PATH=/u01/app/oracle/product/10.2.0/db_1/bin:$PATH
export PATH=/usr/sbin/:$PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH
#LD_ASSUME_KERNEL=2.4.1; export LD_ASSUME_KERNEL
export DATE=$(date +"%m_%d_%y_%H:%M:%S_%p")
exp rakesh/rakesh@jsildb file=/home/oracle/erp_backups/rakesh_$DATE.dmp log=/home/oracle/erp_backups/rakesh_$DATE.log statistics=none
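The timestamp format used above can be checked in isolation before wiring it into the export. A small sketch, assuming only GNU `date` (the filename shown in the comment is illustrative):

```shell
#!/bin/bash
# Build the same timestamp the export script uses:
# month_day_year_hour:minute:second_AM/PM
DATE=$(date +"%m_%d_%y_%H:%M:%S_%p")

# The dump and log names then become unique per run, e.g.
# rakesh_05_19_09_09:27:23_AM.dmp (colons are legal in Linux
# file names, though awkward when copying to Windows shares).
DUMP="rakesh_${DATE}.dmp"
echo "$DUMP"
```

Because the timestamp changes every second, each scheduled run produces a distinct dump and log pair instead of overwriting the previous backup.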
Friday, November 6, 2009
Tuesday, November 3, 2009
Export bash script in Linux with timestamp
I have created an export bash script on Linux to export a schema, with the time of the export appended to the dump and log file names.
- connect with oracle user
-home/backup.sh
- - - - - - - - - - - - - - - - - - - -
#!/bin/bash
export DATE=$(date +"%m_%d_%y_%H:%M:%S_%p")
/u01/app/oracle/product/10.2.0/db_1/bin/exp rakesh/rakesh@rockdb file=/home/oracle/backups/rakesh$DATE.dmp log=/home/oracle/backups/rakesh$DATE.log statistics=none
- - - - - - - - - - - - -
To run the script:
./backup.sh
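For `./backup.sh` to run, the script file needs the execute bit set. A minimal sketch using a throwaway stand-in script under /tmp (a hypothetical path, not the real backup script):

```shell
#!/bin/bash
# Stand-in for backup.sh: a freshly created script is not executable
# until chmod +x is applied.
cat > /tmp/backup_demo.sh <<'EOF'
#!/bin/bash
echo "export would run here"
EOF

chmod +x /tmp/backup_demo.sh   # without this, ./backup_demo.sh fails: Permission denied
/tmp/backup_demo.sh            # prints: export would run here
```

The same `chmod +x backup.sh` step applies once to the real script; cron jobs invoked as `sh /path/script.sh` sidestep the execute bit, but direct `./` invocation does not.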
Wednesday, July 29, 2009
ORA-00031: session marked for kill
Today, I was trying to kill a session as explained in
http://rakeshocp.blogspot.com/2009/04/kill-session.html, but I was getting the error "ORA-00031: session marked for kill". It turns out that the session's operating-system process must be killed instead. I wanted to kill Hassan's session.
select spid, osuser, s.program from v$process p, v$session s where p.addr=s.paddr;
SPID OSUSER PROGRAM
------------ ------------------------------ ----------------
4844 rakesh.kumar
7168 rakesh.kumar plsqldev.exe
6012 Hassan plsqldev.exe
C:\Documents and Settings\Administrator>orakill icpora 6012
Kill of thread id 6012 in instance icpora successfully signalled.
C:\Documents and Settings\Administrator>
Thanks to Don Burleson:
http://www.dba-oracle.com/t_kill_process_windows.htm
Tuesday, June 16, 2009
When your query takes too long
What to do when your query is too slow? Check the link below for details:
http://forums.oracle.com/forums/thread.jspa?threadID=501834&tstart=0
Tuesday, June 2, 2009
COLD BACKUP batch file
We have scheduled a cold backup of the DR site; check the steps below.
create a coldbackup.bat
-----------------------------
cd D:\oracle\product\10.2.0\db_1\BIN
sqlplus "sys/rock@rockstd as sysdba" @D:\COLDBACKUP_WEEKLY\coldbackup.sql
-----------------------------
steps for creating batch file: http://rakeshocp.blogspot.com/2007/09/batch-file-scripts.html
-----------------------------
coldbackup.sql
-----------------------------
ALTER DATABASE recover managed standby database cancel;
SHUTDOWN IMMEDIATE;
host MD F:\ROCKSTD_Coldbackup\"%DATE:/=_%"
host MD F:\ROCKSTD_Coldbackup\"%DATE:/=_%"\DATAFILES
host MD F:\ROCKSTD_Coldbackup\"%DATE:/=_%"\CONTROLFILES
host MD F:\ROCKSTD_Coldbackup\"%DATE:/=_%"\REDOLOGS
host MD F:\ROCKSTD_Coldbackup\"%DATE:/=_%"\ARCHIVELOGS
host MD F:\ROCKSTD_Coldbackup\"%DATE:/=_%"\PARAMETERFILES
host copy D:\oracle\product\10.2.0\db_1\database\INITrockstd.ORA F:\ROCKSTD_Coldbackup\"%DATE:/=_%"\PARAMETERFILES;
host copy E:\DATAFILES\*.DBF F:\ROCKSTD_Coldbackup\"%DATE:/=_%"\DATAFILES;
host copy E:\REDOLOGS\*.LOG F:\ROCKSTD_Coldbackup\"%DATE:/=_%"\REDOLOGS;
host copy E:\CONTROLFILES\*.CTL F:\ROCKSTD_Coldbackup\"%DATE:/=_%"\CONTROLFILES;
host copy E:\ARCHIVES\*.ARC F:\ROCKSTD_Coldbackup\"%DATE:/=_%"\ARCHIVELOGS;
STARTUP NOMOUNT;
ALTER DATABASE MOUNT STANDBY DATABASE;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
exit;
exit
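The batch file's `%DATE:/=_%` substitution replaces slashes in the Windows date so it can be used as a folder name. An equivalent sketch on Linux, using a hypothetical /tmp path rather than the real F: drive layout:

```shell
#!/bin/bash
# Linux sketch of the dated folder tree that coldbackup.sql builds
# with "host MD ...%DATE:/=_%": a slash-free date stamp as a directory.
STAMP=$(date +%m_%d_%Y)                   # e.g. 06_02_2009; no slashes
BASE="/tmp/ROCKSTD_Coldbackup/$STAMP"     # hypothetical path for the demo

for sub in DATAFILES CONTROLFILES REDOLOGS ARCHIVELOGS PARAMETERFILES; do
    mkdir -p "$BASE/$sub"
done
ls "$BASE"
```

Stamping the directory name with the date keeps each weekly cold backup in its own folder, so a new run never overwrites the previous one.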
Monday, June 1, 2009
Failover Steps
Failover:
A failover is performed when the primary database becomes unavailable, typically only after a catastrophic failure of the primary, and it results in a transition of a standby database to the primary role. The database administrator can configure Data Guard to ensure no data loss.
We performed a failover in our Disaster Recovery scenario: the primary database was taken offline (unavailable), and the standby database was activated as the new primary database.
SQL> conn sys/****@rockstd as sysdba
SQL> alter database recover managed standby database finish force;
SQL> alter database commit to switchover to primary;
SQL> alter database open;
Friday, May 29, 2009
Switchover Steps
Once the standby database is setup using Data Guard and is configured properly, we should test switchover and failover scenarios to be better prepared for a real life disaster situation.
A switchover allows the primary database to switch roles with its standby database. There is no data loss during a switchover. We can switch back to the original Primary database later by performing another switchover. A switchover is performed on a database when the need arises to upgrade or change configuration settings on the Primary database. The Standby database can temporarily be used as the Primary database for zero downtime while the actual Primary database is being upgraded or changed.
Switchover Configuration:
--------------------------
Database Role        Service
-------------        --------
Primary              ROCKPRIM
Standby              ROCKSTD
Before Switchover:
--------------------------
1. Verify the primary database instance is open and the standby database instance is mounted.
2. Verify there are no active users connected to the databases; if there are active sessions, kill them.
3. Make sure the last redo data transmitted from the Primary database was applied on the standby database. Issue the following commands on Primary database and Standby database to find out:
select sequence#, applied from v$archived_log;
or
select max(sequence#) from v$archived_log where applied = 'YES';
4. Make sure that log_archive_dest_2 entry is specified in both databases.
Switchover Steps:
------------------
1. Initiate the switchover on the primary database ROCKPRIM:
SQL> conn sys/****@ROCKPRIM as sysdba
SQL> alter database commit to switchover to physical standby with session shutdown;
2. After step 1 finishes, switch the original physical standby database ROCKSTD to the primary role:
SQL> conn sys/****@ROCKSTD as sysdba
SQL> alter database commit to switchover to primary;
SQL> select database_role from v$database;
DATABASE_ROLE
----------------
PRIMARY
3. Immediately after issuing the command in step 2, shut down and restart the former primary instance ROCKPRIM:
SQL> conn sys/****@ROCKPRIM as sysdba
SQL> shutdown immediate;
SQL> startup nomount;
SQL> alter database mount standby database;
SQL> alter database recover managed standby database disconnect from session;
SQL> select database_role from v$database;
DATABASE_ROLE
------------------------------------------------
PHYSICAL STANDBY
* Connect to the new primary database ROCKSTD, and perform a SWITCH LOGFILE to start sending redo data to the standby database ROCK.
Tuesday, May 19, 2009
TDE - Transparent Data Encryption
Encrypt sensitive data transparently without writing a single line of code.
It's your organization's worst nightmare: Someone has stolen backup tapes of your database. Sure, you built a secure system, encrypted the most sensitive assets, and built a firewall around the database servers. But the thief took the easy approach: He took the backup tapes, ostensibly to restore your database on a different server, start the database on it, and then browse the data at his leisure. Protecting the database data from such theft is not just good practice; it's a requirement for compliance with most laws, regulations, and guidelines. How can you protect your database from this vulnerability?
- Encrypt the sensitive data and store the encryption key in a separate location called the wallet.
- Without the keys, stolen data is worthless.
- Define a column as encrypted.
- When users insert data, the database transparently encrypts it and stores it in the column; similarly, when users select the column, the database automatically decrypts it.
- Backups and archive logs are also in encrypted format.
- If the data on disk is stolen, it can't be retrieved without the master key, which is in the wallet and not part of the stolen data.
- Even if the wallet is stolen, the master key can't be retrieved from it without the wallet password.
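The "stolen data is worthless without the key" point can be illustrated with any symmetric cipher. A conceptual sketch using OpenSSL (this is not how TDE stores keys; the paths and the password "soni" are made up for the demo, and `-pbkdf2` assumes OpenSSL 1.1.1 or later):

```shell
#!/bin/bash
# Encrypt a "sensitive column" value with a password-derived key.
echo "123-45-6789" > /tmp/ssn.txt
openssl enc -aes-256-cbc -pbkdf2 -pass pass:soni \
    -in /tmp/ssn.txt -out /tmp/ssn.enc      # ciphertext alone is useless

# With the password (the role the wallet password plays for TDE's
# master key), the value decrypts back to clear text:
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:soni -in /tmp/ssn.enc
```

A thief holding only /tmp/ssn.enc (the analogue of stolen datafiles or backup tapes) learns nothing; the plaintext is recoverable only with the password.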
How does it work:
1. Specify the wallet location in sqlnet.ora (under D:\oracle\product\10.2.0\admin\rock) by adding the following line:
ENCRYPTION_WALLET_LOCATION =(SOURCE= (METHOD=file) (METHOD_DATA= (DIRECTORY=D:\oracle\product\10.2.0\admin\rock)))
2. Create the wallet
Now you must create the wallet and set the password to access it. To do this, issue the following as a SYS user with the ALTER SYSTEM privilege:
SQL> alter system set encryption key authenticated by "soni";
System altered.
This command:
- Creates the wallet in the location specified in Step 1
- Sets the password of the wallet as "soni"
- Opens the wallet for TDE to store and retrieve the master key
3. Restart the database.
Every time you open the database, you'll have to open the wallet using the same password as follows:
SQL> alter system set encryption wallet open authenticated by "soni";
System altered.
You can close the wallet like this:
alter system set encryption wallet close;
The wallet must be open for TDE to work. If the wallet is closed, you can access all nonencrypted columns, but not encrypted columns.
On a regular schema, suppose you have a table of account holders as follows:
SQL> conn hr/hr@rock
Connected.
SQL>
create table accounts
(
ACC_NO NUMBER(2),
ACC_NAME VARCHAR2(30),
SSN VARCHAR2(9)
)
/
Currently, the table has all data in clear text. You want to convert the column SSN, which holds the Social Security Number, to be stored as encrypted. You can issue
alter table accounts modify (ssn encrypt);
SQL> desc accounts;
Name Null? Type
-------------------------- -------- ----------------------------
ACC_NO NUMBER(2)
ACC_NAME VARCHAR2(30)
SSN VARCHAR2(9) ENCRYPT
This statement does two things:
-It creates an encryption key for the table. If you change another column in the same table to use the encrypted format, the same table key will be used.
-It converts all values in the column to encrypted format.
Using Data Pump with TDE
By default, if you use the Data Pump export utility (EXPDP) to export data from a table with encrypted columns, the data in the resulting dump file will be in clear text, even the encrypted column data. The following command exports the ACCOUNTS table—with its encrypted columns—and returns a warning:
SQL> conn sys/rock@rock as sysdba
Connected.
SQL>
SQL> create or replace directory expdp as 'D:\expdp_dir';
Directory created.
SQL>
SQL> grant read, write on directory expdp to hr;
Grant succeeded.
SQL>
H:\>expdp hr/hr@rock directory=expdp tables=accounts
Export: Release 10.2.0.1.0 - Production on Wednesday, 20 May, 2009 9:27:23
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "HR"."SYS_EXPORT_TABLE_01": hr/********@rock directory=expdp tables=accounts
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "HR"."ACCOUNTS"    5.585 KB    1 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set.
Master table "HR"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for HR.SYS_EXPORT_TABLE_01 is: D:\EXPDP_DIR\EXPDAT.DMP
Job "HR"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 09:27:48
H:\>
This is just a warning, not an error; the rows will still be exported.
To protect your encrypted column data in the data pump dump files, you can password-protect your dump file when exporting the table. This password, specified as an ENCRYPTION_PASSWORD parameter in the EXPDP command, applies to this export process only; this is not the password of the wallet.
SQL> grant read, write on directory expdp to soni;
Grant succeeded.
H:\>expdp hr/hr@rock directory=expdp encryption_password=soni tables=accounts
H:\>impdp hr/hr@rock directory=expdp encryption_password=soni tables=accounts
-----------------------------------------------------------------------------------------------
For more details, check the link below:
http://www.oracle.com/technology/oramag/oracle/05-sep/o55security.html
Tuesday, May 12, 2009
AUTOTRACE and TKPROF
We have already seen the result of "select count(*) from cat;" with the TKPROF utility (http://rakeshocp.blogspot.com/2009/05/tkprof.html); now we will compare the result with AUTOTRACE.
SQL> show user
USER is "HR"
SQL> set autotrace on
SP2-0618: Cannot find the Session Identifier. Check PLUSTRACE role is enabled
SP2-0611: Error enabling STATISTICS report
SQL> conn sys/rock@rock as sysdba
Connected.
SQL> grant plustrace to hr;
grant plustrace to hr
*
ERROR at line 1:
ORA-01919: role 'PLUSTRACE' does not exist
SQL> conn sys/rock@rock as sysdba
Connected.
SQL> @D:\oracle\product\10.2.0\db_1\sqlplus\admin\plustrce.sql;
SQL>
SQL> drop role plustrace;
drop role plustrace
*
ERROR at line 1:
ORA-01919: role 'PLUSTRACE' does not exist
SQL> create role plustrace;
Role created.
SQL> grant select on v_$sesstat to plustrace;
Grant succeeded.
SQL> grant select on v_$statname to plustrace;
Grant succeeded.
SQL> grant select on v_$mystat to plustrace;
Grant succeeded.
SQL> grant plustrace to dba with admin option;
Grant succeeded.
SQL>
SQL> set echo off
SQL>
SQL> grant plustrace to hr;
Grant succeeded.
SQL> show user
USER is "SYS"
SQL> conn hr/hr@rock
Connected.
SQL> set autotrace on
SQL>
SQL> select count(*) from cat;
COUNT(*)
----------
21
Execution Plan
----------------------------------------------------------
ERROR:
ORA-01039: insufficient privileges on underlying objects of the view
SP2-0612: Error generating AUTOTRACE EXPLAIN report
Statistics
----------------------------------------------------------
333 recursive calls
0 db block gets
1060 consistent gets
7 physical reads
0 redo size
411 bytes sent via SQL*Net to client
381 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
15 sorts (memory)
0 sorts (disk)
1 rows processed
SQL>
You don’t need any special system privileges in order to use the EXPLAIN PLAN statement. However, you do need to have INSERT privileges on the plan table, and you must have sufficient privileges to execute the statement you are trying to explain. The one difference is that in order to explain a statement that involves views, you must have privileges on all of the tables that make up the view. If you don’t, you’ll get an “ORA-01039: insufficient privileges on underlying objects of the view” error.
http://www.dbspecialists.com/files/presentations/use_explain.html
Monday, May 11, 2009
TKPROF
TKPROF stands for Transient Kernel PROFiler.
The TKPROF program converts Oracle trace files into a more readable form. If you have a problem query, you can use TKPROF to get more information. To get the most out of the utility, you must enable timed statistics by setting the TIMED_STATISTICS init.ora parameter or issuing the following command:
SQL> ALTER SYSTEM SET TIMED_STATISTICS = TRUE;
System altered.
SQL> @D:\oracle\product\10.2.0\db_1\RDBMS\ADMIN\utlxplan.sql;
Table created.
SQL> CREATE PUBLIC SYNONYM PLAN_TABLE FOR SYS.PLAN_TABLE;
Synonym created.
SQL> GRANT SELECT, INSERT, UPDATE, DELETE ON SYS.PLAN_TABLE TO PUBLIC;
Grant succeeded.
SQL> conn hr/hr@rock
Connected.
SQL> ALTER SESSION SET SQL_TRACE = TRUE;
Session altered.
SQL> select count(*) from cat;
COUNT(*)
----------
20
SQL> ALTER SESSION SET SQL_TRACE = FALSE;
Session altered.
A trace file will be created in D:\oracle\product\10.2.0\admin\rock\udump; the new trace file is rock_ora_4844.trc. It can then be interpreted using TKPROF at the command prompt as follows:
H:\>tkprof D:\oracle\product\10.2.0\admin\rock\udump\rock_ora_4844.trc D:\oracle\product\10.2.0\admin\rock\udump\hr.txt explain=hr/hr@rock table=sys.plan_table
The resulting output file contains the following type of information for all SQL statements processed, including the ALTER SESSION commands:
----------------------------------------------------------------------------------------
TKPROF: Release 10.2.0.1.0 - Production on Tue May 12 11:38:44 2009
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Trace file: D:\oracle\product\10.2.0\admin\rock\udump\rock_ora_4844.trc
Sort options: default
********************************************************************************
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
********************************************************************************
ALTER SESSION SET SQL_TRACE = TRUE
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        0      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        1      0.00       0.00          0          0          0           0
Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 169 (HR)
********************************************************************************
select text
from view$
where rowid=:1
call -----count---cpu---elapsed---disk-----query----current---- rows--
------- ------ -------- ---------- --------- ---------- ---------- ---------
Parse----1-----0.00----0.00------0-------0----------0---------0-----
Execute--1-----0.00----0.00------0-------0----------0---------0-----
Fetch----1-----0.00----0.00------ 0-------2----------0---------1-----
------- ------ -------- ---------- ---------- ---------- ---------- --------
total-----3-----0.00----0.00-------0-------2----------0---------1-----
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
------- ---------------------------------------------------
1 TABLE ACCESS BY USER ROWID VIEW$ (cr=1 pr=0 pw=0 time=22 us)
********************************************************************************
select count(*)
from cat
call -----count---cpu---elapsed---disk-----query----current---- rows--
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse-----1-----0.00---0.06------0--------0---------0---------0--------
Execute--1------0.00---0.00------0--------0---------0---------0--------
Fetch----2------0.00---0.01------0---------968------0----------1--------
------- ------ -------- ---------- ---------- ---------- ---------- -----------
total-----4------0.00----0.07-----0---------968------0----------1--------
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 169 (HR)
Rows Row Source Operation
------- ---------------------------------------------------
1 SORT AGGREGATE (cr=968 pr=0 pw=0 time=10364 us)
20 FILTER (cr=968 pr=0 pw=0 time=10117 us)
45 TABLE ACCESS FULL OBJ$ (cr=920 pr=0 pw=0 time=9744 us)
0 TABLE ACCESS CLUSTER TAB$ (cr=48 pr=0 pw=0 time=266 us)
16 INDEX UNIQUE SCAN I_OBJ# (cr=32 pr=0 pw=0 time=126 us)(object id 3)
error during execute of EXPLAIN PLAN statement
ORA-01039: insufficient privileges on underlying objects of the view
parse error offset: 84
********************************************************************************
ALTER SESSION SET SQL_TRACE = FALSE
call -----count---cpu---elapsed---disk-----query----current---- rows--
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse-----1------0.00---0.00-----0--------0---------0----------0------
Execute---1------0.00---0.00-----0--------0---------0----------0------
Fetch-----0------0.00---0.00-----0--------0---------0----------0------
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total------2-------0.00---0.00-----0-------0---------0----------0-------
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 169 (HR)
********************************************************************************
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call -----count---cpu---elapsed---disk-----query----current---- rows--
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse----2-------0.00---0.06-----0--------0----------0--------0-------
Execute--3-------0.00---0.00-----0--------0----------0--------0-------
Fetch-----2-------0.00---0.01-----0--------968-------0---------1------
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total------7--------0.00---0.07----0--------968--------0--------1-------
Misses in library cache during parse: 2
Misses in library cache during execute: 1
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call -----count---cpu---elapsed---disk-----query----current---- rows--
------- ------ -------- ---------- -------- ---------- -------- ----------
Parse----1------0.00----0.00-----0--------0---------0----------0----
Execute--1----- 0.00----0.00-----0--------0---------0----------0----
Fetch----1------0.00----0.00-----0--------2---------0----------1-----
------- ------ -------- ---------- ---------- ---------- ------- ----------
total-----3------0.00----0.00-----0--------2---------0----------1------
Misses in library cache during parse: 1
Misses in library cache during execute: 1
3 user SQL statements in session.
1 internal SQL statements in session.
4 SQL statements in session.
0 statements EXPLAINed in this session.
********************************************************************************
Trace file: D:\oracle\product\10.2.0\admin\rock\udump\rock_ora_4844.trc
Trace file compatibility: 10.01.00
Sort options: default
1 session in tracefile.
3 user SQL statements in trace file.
1 internal SQL statements in trace file.
4 SQL statements in trace file.
4 unique SQL statements in trace file.
56 lines in trace file.
7 elapsed seconds in trace file.
----------------------------------------------------------------------------------------
http://www.oracle-base.com/articles/8i/TKPROFAndOracleTrace.php
The TKPROF program converts Oracle trace files into a more readable form. If you have a problem query, you can use TKPROF to get more information about it. To get the most out of the utility, you must enable timed statistics, either by setting the TIMED_STATISTICS initialization parameter in init.ora or by running the following command:
SQL> ALTER SYSTEM SET TIMED_STATISTICS = TRUE;
System altered.
SQL> @D:\oracle\product\10.2.0\db_1\RDBMS\ADMIN\utlxplan.sql;
Table created.
SQL> CREATE PUBLIC SYNONYM PLAN_TABLE FOR SYS.PLAN_TABLE;
Synonym created.
SQL> GRANT SELECT, INSERT, UPDATE, DELETE ON SYS.PLAN_TABLE TO PUBLIC;
Grant succeeded.
SQL> conn hr/hr@rock
Connected.
SQL> ALTER SESSION SET SQL_TRACE = TRUE;
Session altered.
SQL> select count(*) from cat;
COUNT(*)
----------
20
SQL> ALTER SESSION SET SQL_TRACE = FALSE;
Session altered.
A trace file will be created in D:\oracle\product\10.2.0\admin\rock\udump; the new trace file is rock_ora_4844.trc. This can then be interpreted using TKPROF at the command prompt as follows:
H:\>tkprof D:\oracle\product\10.2.0\admin\rock\udump\rock_ora_4844.trc D:\oracle\product\10.2.0\admin\rock\udump\hr.txt explain=hr/hr@rock table=sys.plan_table
The resulting output file contains the following type of information for all SQL statements processed, including the ALTER SESSION commands:
----------------------------------------------------------------------------------------
TKPROF: Release 10.2.0.1.0 - Production on Tue May 12 11:38:44 2009
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Trace file: D:\oracle\product\10.2.0\admin\rock\udump\rock_ora_4844.trc
Sort options: default
********************************************************************************
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
********************************************************************************
ALTER SESSION SET SQL_TRACE = TRUE
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        0      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        1      0.00       0.00          0          0          0           0
Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 169 (HR)
********************************************************************************
select text
from view$
where rowid=:1
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.00       0.00          0          2          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.00       0.00          0          2          0           1
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 1)
Rows     Row Source Operation
-------  ---------------------------------------------------
      1  TABLE ACCESS BY USER ROWID VIEW$ (cr=1 pr=0 pw=0 time=22 us)
********************************************************************************
select count(*)
from cat
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.06          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.01          0        968          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.00       0.07          0        968          0           1
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 169 (HR)
Rows     Row Source Operation
-------  ---------------------------------------------------
      1  SORT AGGREGATE (cr=968 pr=0 pw=0 time=10364 us)
     20   FILTER  (cr=968 pr=0 pw=0 time=10117 us)
     45    TABLE ACCESS FULL OBJ$ (cr=920 pr=0 pw=0 time=9744 us)
      0    TABLE ACCESS CLUSTER TAB$ (cr=48 pr=0 pw=0 time=266 us)
     16     INDEX UNIQUE SCAN I_OBJ# (cr=32 pr=0 pw=0 time=126 us)(object id 3)
error during execute of EXPLAIN PLAN statement
ORA-01039: insufficient privileges on underlying objects of the view
parse error offset: 84
********************************************************************************
ALTER SESSION SET SQL_TRACE = FALSE
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.00       0.00          0          0          0           0
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 169 (HR)
********************************************************************************
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        2      0.00       0.06          0          0          0           0
Execute      3      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.01          0        968          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        7      0.00       0.07          0        968          0           1
Misses in library cache during parse: 2
Misses in library cache during execute: 1
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.00       0.00          0          2          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.00       0.00          0          2          0           1
Misses in library cache during parse: 1
Misses in library cache during execute: 1
3 user SQL statements in session.
1 internal SQL statements in session.
4 SQL statements in session.
0 statements EXPLAINed in this session.
********************************************************************************
Trace file: D:\oracle\product\10.2.0\admin\rock\udump\rock_ora_4844.trc
Trace file compatibility: 10.01.00
Sort options: default
1 session in tracefile.
3 user SQL statements in trace file.
1 internal SQL statements in trace file.
4 SQL statements in trace file.
4 unique SQL statements in trace file.
56 lines in trace file.
7 elapsed seconds in trace file.
----------------------------------------------------------------------------------------
http://www.oracle-base.com/articles/8i/TKPROFAndOracleTrace.php
tuning SQL statement...AUTOTRACE
I am using the AUTOTRACE utility to check the execution plan of a query against a table; later I will create an index and check whether performance improves.
SQL> conn sys/rock@rock as sysdba
Connected.
SQL> create table test as select * from all_objects;
Table created.
SQL> exec dbms_stats.gather_table_stats(ownname => 'SYS',tabname=>'TEST',force=>true);
PL/SQL procedure successfully completed.
SQL>
SQL> select count (*) from test;
COUNT(*)
----------
49771
SQL> desc test
Name
------------------------------------------------------------------
OWNER
OBJECT_NAME
SUBOBJECT_NAME
OBJECT_ID
DATA_OBJECT_ID
OBJECT_TYPE
CREATED
LAST_DDL_TIME
TIMESTAMP
STATUS
TEMPORARY
GENERATED
SECONDARY
SQL>
SQL> select count (distinct(data_object_id)) from test;
COUNT(DISTINCT(DATA_OBJECT_ID))
-------------------------------
3504
SQL> set autotrace on
SQL> select * from test where data_object_id=151596;
OWNER OBJECT_NAME OBJECT_ID
------------------------------ ------------------------------
RMAN2 TF_P 151596
Execution Plan
----------------------------------------------------------
Plan hash value: 1357081020
--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |    93 |   154   (2)| 00:00:02 |
|*  1 |  TABLE ACCESS FULL| TEST |     1 |    93 |   154   (2)| 00:00:02 |
--------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("DATA_OBJECT_ID"=151596)
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
689 consistent gets
0 physical reads
0 redo size
1202 bytes sent via SQL*Net to client
381 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
SQL> create index test_data_idx on test(data_object_id);
Index created.
SQL> select * from test where data_object_id=151596;
OWNER OBJECT_NAME OBJECT_ID
------------------------------ ------------------------------
RMAN2 TF_P 151596
Execution Plan
----------------------------------------------------------
Plan hash value: 2271528063
---------------------------------------------------------------------------------------------
| Id  | Operation                   | Name          | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |               |     1 |    93 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TEST          |     1 |    93 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | TEST_DATA_IDX |     1 |       |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("DATA_OBJECT_ID"=151596)
Statistics
----------------------------------------------------------
149 recursive calls
0 db block gets
28 consistent gets
1 physical reads
0 redo size
1206 bytes sent via SQL*Net to client
381 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
3 sorts (memory)
0 sorts (disk)
1 rows processed
Conclusion: after creating the index, the optimizer cost of the query dropped from 154 to 2 and consistent gets fell from 689 to 28, because the row is now fetched via an index range scan and TABLE ACCESS BY INDEX ROWID instead of a full table scan.
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Explain plan Vs AUTOTRACE
http://www.oracle-base.com/articles/8i/ExplainPlanUsage.php
Tuesday, April 28, 2009
kill session -
Killing sessions - I want to kill all sessions of the database user ABM_TEST.
alter system kill session 'session-id,session-serial' immediate;
This command kills a session. The session-id and session-serial parameters are found in the v$session view (columns sid and serial#).
SQL> show user
USER is "SYS"
SQL>
SQL> select * from v$session
where username = 'ABM_TEST';
SQL> select SID, SERIAL#, USERNAME, osuser, PROGRAM from v$session where username = 'ABM_TEST';
SID SERIAL# USERNAME OSUSER PROGRAM
-------- --------- --------------------- --------------------------- -------------
184 54 ABM_TEST rakesh.kumar sqlplus.exe
SQL> alter system kill session '184,54' immediate;
System altered.
SQL>
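Since the goal is to kill all sessions of ABM_TEST, not just one, the SID/SERIAL# lookup and the KILL command can be combined in a PL/SQL loop. A minimal sketch, assuming you run it as a privileged user such as SYS:

```sql
-- Kill every session belonging to ABM_TEST in one pass
BEGIN
  FOR s IN (SELECT sid, serial#
            FROM   v$session
            WHERE  username = 'ABM_TEST') LOOP
    EXECUTE IMMEDIATE 'ALTER SYSTEM KILL SESSION '''
                      || s.sid || ',' || s.serial# || ''' IMMEDIATE';
  END LOOP;
END;
/
```

Each iteration builds the same ALTER SYSTEM KILL SESSION 'sid,serial#' IMMEDIATE statement shown above, so no manual copy-paste of SID and SERIAL# is needed.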
Sunday, April 26, 2009
Auto start of cmd file after Machine Reboot
Yesterday the systems department restarted the machine after their maintenance activity. The databases restarted automatically, but some other batch files and the authentication server executable must also be run for ERP connectivity. So I configured the batch files to execute automatically whenever the machine is restarted; check the steps below.
run > regedit
HKEY_LOCAL_MACHINE > SOFTWARE > Microsoft > Windows > CurrentVersion > Run
Right-click > New > String Value
Assign a name to the value, then specify the complete path of the cmd file in double quotes.
DONE
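The same Run entry can also be created by importing a .reg file instead of clicking through regedit. A sketch, where the value name StartERP and the path D:\scripts\start_erp.cmd are placeholders for your own batch file:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run]
"StartERP"="\"D:\\scripts\\start_erp.cmd\""
```

Note the doubled backslashes and escaped quotes required by the .reg file format; the quoted path matters if it contains spaces.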
Thursday, April 23, 2009
RMAN - Recovery Catalog
I am configuring an RMAN recovery catalog.
Target DB on Linux: db name rock, DBID 3304705878
Recovery catalog on XP: db name rock, DBID 3255117147 (rocklinux is the TNS entry for accessing the Linux database)
Enter user-name: sys@rock as sysdba
Enter password:
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
1. SQL> create tablespace rman_ts
   datafile 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\rman_01.dbf' size 20m
   autoextend on;
2. SQL> create user rman1
   identified by rman1
   quota unlimited on rman_ts
   default tablespace rman_ts;
3. SQL> grant recovery_catalog_owner to rman1;
Grant succeeded.
SQL> grant connect, resource to rman1;
Grant succeeded.
H:\>rman catalog rman1/rman1@rock log=D:\catalog.log
RMAN> create catalog tablespace rman_ts;
RMAN> exit
H:\>rman target sys/rock@rocklinux
Recovery Manager: Release 10.2.0.1.0 - Production on Wed Apr 22 12:49:23 2009
Copyright (c) 1982, 2005, Oracle. All rights reserved.
connected to target database: ROCK (DBID=3304705878)
RMAN> connect catalog rman1/rman1@rock
connected to recovery catalog database
RMAN> register database;
database registered in recovery catalog
starting full resync of recovery catalog
full resync complete
SQL> conn rman1/rman1@rock
Connected.
SQL> select * from rc_database;
DB_KEY DBINC_KEY DBID NAME RESETLOGS_CHANGE# RESETLOGS
---------- ---------- ---------- -------- ----------------- ---------
1 2 3304705878 ROCK 516893 20-APR-09
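With the target registered, subsequent backups record their metadata in the catalog automatically. A minimal sketch using the same connection strings as above (assumes the database is in ARCHIVELOG mode so it can be backed up while open):

```
H:\>rman target sys/rock@rocklinux catalog rman1/rman1@rock
RMAN> backup database;
RMAN> list backup summary;
```

LIST BACKUP SUMMARY then reports the backup pieces from the catalog rather than only from the target's control file.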
Friday, April 10, 2009
Oracle Tuning..
Oracle top down tuning tips
http://www.dba-oracle.com/art_tuning1.htm
Oracle Database Performance Tuning FAQs
http://www.orafaq.com/wiki/Oracle_database_Performance_Tuning_FAQ
when your query takes too long:
http://forums.oracle.com/forums/thread.jspa?threadID=501834
Oracle Tuning Tips
http://it.toolbox.com/blogs/all-about-dev/oracle-tuning-tips-8151
What's New in Oracle Performance? (10g)
http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/whatsnew.htm
AWR - Automatic Workload Repository
Learn to use the new feature that collects database performance statistics and metrics for analysis and tuning, shows the exact time spent in the database, and even saves session information. When you have a database performance problem, what is the first thing you do to address it? One common approach is to see if a pattern exists: answering questions such as "Is the same problem recurrent?", "Does it occur during a specific time period?", and "Is there a link between two problems?" will almost always lead to a better diagnosis.
As a DBA you probably have invested in a third-party or homegrown tool to collect elaborate statistics during database operation and derive performance metrics from them. In a crisis, you access those metrics for comparisons to the present. Replaying these past events can shed light on current problems, so continuously capturing relevant statistics becomes important for performance analysis. For some time, Oracle's solution in this area has been its built-in tool, Statspack. While it can prove invaluable in certain cases, it often lacks the robustness required by performance troubleshooting exercises. Oracle Database 10g offers a significant improvement: the Automatic Workload Repository (AWR). The AWR installs along with the database and captures not only statistics, but the derived metrics as well.
http://www.oracle.com/technology/pub/articles/10gdba/week6_10gdba.html
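In 10g an AWR snapshot can also be taken on demand and the captured snapshots listed from the data dictionary; a minimal sketch, run as a user with DBA privileges:

```sql
-- Take an AWR snapshot right now instead of waiting for the hourly interval
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;

-- List what the repository has captured
SELECT snap_id, begin_interval_time, end_interval_time
FROM   dba_hist_snapshot
ORDER  BY snap_id;
```

A report comparing any two snapshots can then be generated with @?/rdbms/admin/awrrpt.sql.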
Database General - Oracle Performance Tuning – Part 1
http://www.databasejournal.com/features/oracle/article.php/3548291/Oracle-Performance-Tuning--Part-1.htm
Tuesday, April 7, 2009
Oracle 10.2 - (RMAN + SYS user Audit_Trail = XML, Extended) bug
After enabling auditing for the SYS user, as discussed in my post below:
http://rakeshocp.blogspot.com/2008/12/auditing-sys-user-in-10g.html
When we try to connect to the database with the RMAN utility, it fails with the error below.
H:\>rman
Recovery Manager: Release 10.2.0.1.0 - Production on Tue Apr 7 14:59:17 2009
Copyright (c) 1982, 2005, Oracle. All rights reserved.
RMAN> connect target sys/rock@rock
RMAN-00571: =======================================================
RMAN-00569: ============= ERROR MESSAGE STACK FOLLOWS ==============
RMAN-00571: =======================================================
ORA-09817: Write to audit file failed.
RMAN>
It turned out to be a bug in Oracle 10g Release 10.2.0.1.0; see my post on the Oracle forums:
http://forums.oracle.com/forums/message.jspa?messageID=3282187#3282187
...
Thursday, April 2, 2009
Undo tablespace issue....
There were some issues with the hard disk of the test machine, and the systems department wanted to format/replace the hard disk (the F drive, where Oracle is installed). So I shut down the test database, and the systems department copied all folders, including the Oracle folder where all datafiles, control files, and redo logs are stored. They restored all folders after completing their activity.
After that, the systems department wanted the database team to verify whether Oracle was working properly. When we started the Oracle services and the database, the instance kept shutting down automatically; after checking the alert.log file we found the error below:
DEBUG: Replaying xcb 0x4e7d126c, pmd 0x4deeb4b8 for failed op 8
Errors in file f:\oracle\product\10.2.0\admin\icpora\udump\icpora_ora_4192.trc:
ORA-00600: internal error code, arguments: [4194], [49], [44], [], [], [], [], []
ORA-00600: internal error code, arguments: [4194], [49], [44], [], [], [], [], []
Metalink says:
A mismatch has been detected between Redo records and rollback (Undo) records.
We are validating the undo record number relating to the change being applied against the maximum undo record number recorded in the undo block. This error is reported when the validation fails.
ARGUMENTS:
Arg [a] Maximum Undo record number in Undo block
Arg [b] Undo record number from Redo block
IMPACT: PROCESS FAILURE, POSSIBLE ROLLBACK SEGMENT CORRUPTION
SUGGESTIONS: This error may indicate a rollback segment corruption. It may require a recovery from a database backup, depending on the situation.
We dropped undotbs01.dbf and created a new undo tablespace, undotbs2, with datafile undotbs02.dbf:
1. alter database datafile '/home/u01/app/oracle/oradata/orcl/undotbs01.dbf' offline drop;
2. create undo tablespace undotbs2 datafile '/home/u01/app/oracle/oradata/orcl/undotbs02.dbf' size 1024m;
3. ALTER SYSTEM SET undo_tablespace='UNDOTBS2';
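To confirm that the instance is actually using the new undo tablespace after step 3, a quick check like the following can be run (a sketch using standard SQL*Plus commands and dictionary views; object names as above):

```sql
-- Verify the active undo tablespace and undo management mode
SHOW PARAMETER undo_tablespace
SHOW PARAMETER undo_management

-- Confirm the new undo tablespace exists and is online
SELECT tablespace_name, status, contents
  FROM dba_tablespaces
 WHERE contents = 'UNDO';
```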
The system started working properly. The full database export batch file runs at night; when we checked the export log, it showed the error below:
EXP-00008: ORACLE error 376 encountered
ORA-00376: file 2 cannot be read at this time
ORA-01110: data file 2: 'F:\ORACLE\PRODUCT\10.2.0\ORADATA\ICPORA\UNDOTBS01.DBF'
EXP-00000: Export terminated unsuccessfully
When we checked v$recover_file, data file 2 was not listed for recovery, so we issued the DROP TABLESPACE undotbs1 command and got this error:
ERROR at line 1:
ORA-01548: active rollback segment '_SYSSMU01$' found, terminate dropping tablespace
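The v$recover_file check mentioned above can be run as follows (a sketch; the view and columns are standard in 10g):

```sql
-- List datafiles that need media recovery; data file 2 (UNDOTBS01.DBF)
-- did not appear here, which is why we went on to drop the tablespace
SELECT file#, online_status, error
  FROM v$recover_file;
```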
select segment_name,status,tablespace_name from dba_rollback_segs where status='NEEDS RECOVERY';
SEGMENT_NAME STATUS TABLESPACE_NAME
------------------------------ ---------------- ------------------
_SYSSMU01$ NEEDS RECOVERY UNDOTBS1
_SYSSMU02$ NEEDS RECOVERY UNDOTBS1
_SYSSMU03$ NEEDS RECOVERY UNDOTBS1
_SYSSMU04$ NEEDS RECOVERY UNDOTBS1
_SYSSMU05$ NEEDS RECOVERY UNDOTBS1
_SYSSMU06$ NEEDS RECOVERY UNDOTBS1
_SYSSMU07$ NEEDS RECOVERY UNDOTBS1
_SYSSMU08$ NEEDS RECOVERY UNDOTBS1
We shut down the database, edited the init.ora file, and changed the parameters (commented out undo_management=AUTO and added an entry for _corrupted_rollback_segments):
#undo_management=AUTO
undo_tablespace=UNDOTBS2
_corrupted_rollback_segments=('_SYSSMU01$','_SYSSMU02$','_SYSSMU03$','_SYSSMU04$','_SYSSMU05$','_SYSSMU06$','_SYSSMU07$','_SYSSMU08$')
1. STARTUP RESTRICT MOUNT pfile=C:\Oracle\init.ora
2. DROP ROLLBACK SEGMENT "_SYSSMU01$";
3. Dropped all segments listed in the init file one by one (repeating the step 2 drop command).
4. drop TABLESPACE UNDOTBS1;
5. shutdown
6. Deleted the _corrupted_rollback_segments entry and uncommented undo_management.
7. STARTUP MOUNT pfile=C:\Oracle\init.ora, then opened the database.
8. Took a full database export; it completed successfully.
Special thanks to the writer/owner of the page below; it helped me a lot.
http://www.my-whiteboard.com/oracle-dba/how-to-drop-and-recreate-oracle-undo-tablespace-and-its-data-files.html
Wednesday, February 18, 2009
creating new database from old database cold backup.
I am writing the steps below to create a new database ICPORA that is a complete replica of the old ICPORA database.
Scenario: I have a new database ROCK and an old database ICPORA. I have a cold backup of ICPORA, and now I want to recreate ICPORA in place of the ROCK database. I have taken backups of both databases.
1. Copy all datafiles, control files, redo logs, archive logs, and init files from ICPORA to the new database ROCK (oradata). Delete all files from the ROCK database after backing them up, then paste the ICPORA files.
2. Check the init.ora file and modify/specify the correct paths for the archive logs, control files, and other files.
3. If your new database has the old DB name, go to step 5; otherwise go to step 4. In my situation the new database is named ROCK, so I need an ICPORA service/instance to operate; for that I need to create a new ICPORA service.
4. Create a new service with the old database name using the ORADIM utility.
H:\>oradim -new -sid icpora -intpwd icpora -startmode m -pfile 'D:\oracle\product\10.2.0\oradata\icpora\INITicpora.ORA'
Instance created.
5.
H:\>set oracle_sid=icpora
H:\>sqlplus
SQL*Plus: Release 10.2.0.1.0 - Production on Wed Feb 18 15:19:02 2009
Copyright (c) 1982
Enter user-name: sys as sysdba
Enter password:
Connected to an idle instance.
SQL> startup nomount pfile='D:\oracle\product\10.2.0\oradata\icpora\INITicpora.ORA';
ORACLE instance started.
Total System Global Area 1073741824 bytes
Fixed Size 1253124 bytes
Variable Size 486539516 bytes
Database Buffers 578813952 bytes
Redo Buffers 7135232 bytes
SQL> startup mount pfile='D:\oracle\product\10.2.0\oradata\icpora\INITicpora.ORA';
ORACLE instance started.
Total System Global Area 1073741824 bytes
Fixed Size 1253124 bytes
Variable Size 486539516 bytes
Database Buffers 578813952 bytes
Redo Buffers 7135232 bytes
Database mounted.
SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-01157: cannot identify/lock data file 1 - see DBWR trace file
ORA-01110: data file 1: 'E:\DATAFILES\SYSTEM01.DBF'
Note: The above error occurs because the information stored in the control file differs from the current situation (i.e., the file paths). We need to recreate the control file; the command below generates a script for recreating it in the udump folder.
SQL> alter database backup controlfile to trace;
Database altered.
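The trace file produced by ALTER DATABASE BACKUP CONTROLFILE TO TRACE lands in the directory pointed to by user_dump_dest; a quick way to locate it (a sketch) is:

```sql
-- Find the udump directory that holds the generated trace script;
-- the newest *.trc file there contains the CREATE CONTROLFILE statement
SHOW PARAMETER user_dump_dest
```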
6. Copy the relevant contents of the script generated in the udump folder, modify them with the new paths, and execute the script. Note: execute it in NOMOUNT mode.
SQL> shutdown immediate;
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
SQL> startup nomount pfile='D:\oracle\product\10.2.0\oradata\icpora\INITicpora.ORA';
ORACLE instance started.
Total System Global Area 1073741824 bytes
Fixed Size 1253124 bytes
Variable Size 486539516 bytes
Database Buffers 578813952 bytes
Redo Buffers 7135232 bytes
SQL> CREATE CONTROLFILE REUSE DATABASE "ICPORA" NORESETLOGS ARCHIVELOG
2 MAXLOGFILES 16
3 MAXLOGMEMBERS 3
4 MAXDATAFILES 100
5 MAXINSTANCES 8
6 MAXLOGHISTORY 18688
7 LOGFILE
8 GROUP 1 'D:\oracle\product\10.2.0\oradata\icpora\REDO01.LOG' SIZE 100M,
9 GROUP 2 'D:\oracle\product\10.2.0\oradata\icpora\REDO02.LOG' SIZE 100M,
10 GROUP 3 'D:\oracle\product\10.2.0\oradata\icpora\REDO03.LOG' SIZE 100M
11 -- STANDBY LOGFILE
12 DATAFILE
13 'D:\oracle\product\10.2.0\oradata\icpora\SYSTEM01.DBF',
14 'D:\oracle\product\10.2.0\oradata\icpora\SYSTEM02.DBF',
15 'D:\oracle\product\10.2.0\oradata\icpora\SYSAUX01.DBF',
16 'D:\oracle\product\10.2.0\oradata\icpora\ABAMCO01.DBF',
17 'D:\oracle\product\10.2.0\oradata\icpora\ABAMCO02.DBF',
18 'D:\oracle\product\10.2.0\oradata\icpora\EDP01.DBF',
19 'D:\oracle\product\10.2.0\oradata\icpora\UNDOTBS2.DBF',
20 'D:\oracle\product\10.2.0\oradata\icpora\USERS01.DBF',
21 'D:\oracle\product\10.2.0\oradata\icpora\AXIS_ALERT01.DBF',
22 'D:\oracle\product\10.2.0\oradata\icpora\RISKMETER01.DBF'
23 CHARACTER SET WE8MSWIN1252
24 ;
Control file created.
SQL> RECOVER DATABASE
Media recovery complete.
SQL> -- All logs need archiving and a log switch is needed.
SQL> ALTER SYSTEM ARCHIVE LOG ALL;
System altered.
SQL> alter database open;
Database altered.
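Once the database is open, a quick sanity check (a sketch using standard v$ views) confirms that the recreated ICPORA instance is fully open under its new name:

```sql
-- Confirm the database name and open mode
SELECT name, open_mode FROM v$database;

-- Confirm the instance name and status
SELECT instance_name, status FROM v$instance;
```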