Friday, May 29, 2009

Switchover Steps

Once the standby database is set up using Data Guard and configured properly, we should test switchover and failover scenarios to be better prepared for a real-life disaster situation.

A switchover allows the primary database to switch roles with its standby database. There is no data loss during a switchover, and we can switch back to the original primary database later by performing another switchover. A switchover is performed when the need arises to upgrade or change configuration settings on the primary database: the standby temporarily serves as the primary, giving zero downtime while the actual primary database is being upgraded or changed.

Switchover Configuration:
--------------------------
Database Role       Service
-------------       --------
Primary             ROCKPRIM
Standby             ROCKSTD

Before Switchover:
--------------------------

1. Verify that the primary database instance is open and the standby database instance is mounted.
2. Verify that there are no active users connected to the databases; if there are active sessions, kill them.
3. Make sure the last redo data transmitted from the primary database was applied on the standby database. Issue the following queries on both the primary and the standby databases to find out:

select sequence#, applied from v$archived_log;

or

select max(sequence#) from v$archived_log where applied = 'YES';

4. Make sure that a log_archive_dest_2 entry is specified in both databases.
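
An additional readiness check (a standard Data Guard column; the expected values below are the usual 10gR2 ones) is SWITCHOVER_STATUS in V$DATABASE:

-- On the primary: expect TO STANDBY (or SESSIONS ACTIVE)
SQL> select switchover_status from v$database;

-- On the standby: expect TO PRIMARY (or NOT ALLOWED until the primary's switchover completes)
SQL> select switchover_status from v$database;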

Switchover Steps:
------------------
1. Initiate the switchover on the primary database ROCKPRIM:

SQL> conn sys/****@ROCKPRIM as sysdba
SQL> alter database commit to switchover to physical standby with session shutdown;

2. After step 1 finishes, switch the original physical standby database ROCKSTD to the primary role:

SQL> conn sys/****@ROCKSTD as sysdba
SQL> alter database commit to switchover to primary;

SQL> select database_role from v$database;

DATABASE_ROLE
----------------
PRIMARY
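
Note: depending on its state, the new primary may still be only mounted at this point. In 10gR2 the usual follow-up is simply to open it (if the instance had been opened read-only since its last startup, a clean restart is required instead):

SQL> alter database open;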

3. Immediately after the command in step 2 completes, shut down and restart the former primary instance ROCKPRIM as a physical standby:

SQL> conn sys/****@ROCKPRIM as sysdba
SQL> shutdown immediate;
SQL> startup nomount;
SQL> alter database mount standby database;
SQL> alter database recover managed standby database disconnect from session;
SQL> select database_role from v$database;

DATABASE_ROLE
----------------
PHYSICAL STANDBY


* Connect to the new primary database ROCKSTD and perform a log switch to start sending redo data to the new standby database ROCKPRIM, as sketched below.
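
A minimal sketch of that log switch, with a quick check that the new sequence is shipped (compare the result with the same query on ROCKPRIM):

SQL> conn sys/****@ROCKSTD as sysdba
SQL> alter system switch logfile;
SQL> select max(sequence#) from v$archived_log;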

Tuesday, May 19, 2009

TDE - Transparent Data Encryption

Encrypt sensitive data transparently without writing a single line of code.

It's your organization's worst nightmare: someone has stolen backup tapes of your database. Sure, you built a secure system, encrypted the most sensitive assets, and built a firewall around the database servers. But the thief took the easy approach: he took the backup tapes, intending to restore your database on a different server, start it up, and then browse the data at his leisure. Protecting the database from such theft is not just good practice; it's a requirement for compliance with most laws, regulations, and guidelines. How can you protect your database from this vulnerability?

- Encrypt the sensitive data and store the encryption key in a separate location called a wallet.
- Without the keys, the stolen data is worthless.
- Define a column as encrypted.
- When users insert data, the database transparently encrypts it and stores it in the column; similarly, when users select the column, the database automatically decrypts it.
- Backups and archive logs are also in encrypted format.
- If the data on disk is stolen, it can't be retrieved without the master key, which is in the wallet and not part of the stolen data.
- Even if the wallet is stolen, the master key can't be retrieved from it without the wallet password.

How does it work:
1. Specify the wallet location by placing the following lines in sqlnet.ora (here the wallet directory is D:\oracle\product\10.2.0\admin\rock):
ENCRYPTION_WALLET_LOCATION =(SOURCE= (METHOD=file) (METHOD_DATA= (DIRECTORY=D:\oracle\product\10.2.0\admin\rock)))

2. Create the wallet
Now you must create the wallet and set the password to access it. To do this, issue the following as a SYS user with the ALTER SYSTEM privilege:
SQL> alter system set encryption key authenticated by "soni";
System altered.

This command
-Creates the wallet in the location specified in Step 1
-Sets the password of the wallet as "soni"
-Opens the wallet for TDE to store and retrieve the master key


3. Restart the database.

Every time you open the database, you'll have to open the wallet using the same password as follows:
SQL> alter system set encryption wallet open authenticated by "soni";
System altered.

You can close the wallet like this:
alter system set encryption wallet close;

The wallet must be open for TDE to work. If the wallet is closed, you can access all nonencrypted columns, but not encrypted columns.
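
As an illustration (the output is a sketch, assuming an encrypted column such as the SSN column created below), a query that touches an encrypted column while the wallet is closed fails:

SQL> alter system set encryption wallet close;
SQL> select ssn from accounts;
ERROR:
ORA-28365: wallet is not open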

In a regular schema, suppose you have a table of account holders as follows:

SQL> conn hr/hr@rock
Connected.


SQL>
create table accounts
(
ACC_NO NUMBER(2),
ACC_NAME VARCHAR2(30),
SSN VARCHAR2(9)
)
/

Currently, the table has all data in clear text. You want the column SSN, which holds the Social Security number, to be stored encrypted. You can issue:

alter table accounts modify (ssn encrypt);


SQL> desc accounts;

Name                                Null?    Type
----------------------------------- -------- ----------------------------
ACC_NO                                       NUMBER(2)
ACC_NAME                                     VARCHAR2(30)
SSN                                          VARCHAR2(9) ENCRYPT

This statement does two things:
-It creates an encryption key for the table. If you change another column in the same table to use the encrypted format, the same table key will be used.
-It converts all values in the column to encrypted format.
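
For completeness, a column can also be declared encrypted at CREATE TABLE time, and the encryption algorithm can be chosen explicitly; the valid 10gR2 choices are 3DES168, AES128, AES192 (the default), and AES256. A sketch (ACCOUNTS_ENC is an invented name):

SQL> create table accounts_enc
(
ACC_NO NUMBER(2),
ACC_NAME VARCHAR2(30),
SSN VARCHAR2(9) ENCRYPT USING 'AES256'
);

Encryption can also be turned off again:

SQL> alter table accounts modify (ssn decrypt);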


Using Data Pump with TDE
By default, if you use the Data Pump export utility (EXPDP) to export data from a table with encrypted columns, the data in the resulting dump file will be in clear text, even the encrypted column data. The following command exports the ACCOUNTS table—with its encrypted columns—and returns a warning:

SQL> conn sys/rock@rock as sysdba
Connected.
SQL>

SQL> create or replace directory expdp as 'D:\expdp_dir';
Directory created.
SQL>

SQL> grant read, write on directory expdp to hr;
Grant succeeded.
SQL>

H:\>expdp hr/hr@rock directory=expdp tables=accounts
Export: Release 10.2.0.1.0 - Production on Wednesday, 20 May, 2009 9:27:23
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "HR"."SYS_EXPORT_TABLE_01": hr/********@rock directory=expdp tables=accountsEstimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE. . exported "HR"."ACCOUNTS" 5.585 KB 1 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set.
Master table "HR"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for HR.SYS_EXPORT_TABLE_01 is: D:\EXPDP_DIR\EXPDAT.DMP
Job "HR"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 09:27:48
H:\>

This is just a warning, not a fatal error; the rows are still exported.
To protect the encrypted column data in Data Pump dump files, you can password-protect the dump file when exporting the table. This password, specified as the ENCRYPTION_PASSWORD parameter on the EXPDP command line, applies to this export only; it is not the password of the wallet.


SQL> grant read, write on directory expdp to soni;
Grant succeeded.


H:\>expdp hr/hr@rock directory=expdp encryption_password=soni tables=accounts

H:\>impdp hr/hr@rock directory=expdp encryption_password=soni tables=accounts


-----------------------------------------------------------------------------------------------
For more details, see the link below:

http://www.oracle.com/technology/oramag/oracle/05-sep/o55security.html

Tuesday, May 12, 2009

AUTOTRACE and TKPROF

We have already seen the result of select count(*) from cat; with the TKPROF utility (http://rakeshocp.blogspot.com/2009/05/tkprof.html); now we will compare that result with AUTOTRACE.

SQL> show user
USER is "HR"
SQL> set autotrace on
SP2-0618: Cannot find the Session Identifier. Check PLUSTRACE role is enabled
SP2-0611: Error enabling STATISTICS report

SQL> conn sys/rock@rock as sysdba
Connected.

SQL> grant plustrace to hr;
grant plustrace to hr
*
ERROR at line 1:
ORA-01919: role 'PLUSTRACE' does not exist

SQL> conn sys/rock@rock as sysdba
Connected.
SQL> @D:\oracle\product\10.2.0\db_1\sqlplus\admin\plustrce.sql;
SQL>
SQL> drop role plustrace;
drop role plustrace
*
ERROR at line 1:
ORA-01919: role 'PLUSTRACE' does not exist

SQL> create role plustrace;
Role created.
SQL> grant select on v_$sesstat to plustrace;
Grant succeeded.
SQL> grant select on v_$statname to plustrace;
Grant succeeded.
SQL> grant select on v_$mystat to plustrace;
Grant succeeded.
SQL> grant plustrace to dba with admin option;
Grant succeeded.
SQL>
SQL> set echo off
SQL>
SQL> grant plustrace to hr;
Grant succeeded.
SQL> show user
USER is "SYS"
SQL> conn hr/hr@rock
Connected.
SQL> set autotrace on
SQL>

SQL> select count(*) from cat;
COUNT(*)
----------
21

Execution Plan
----------------------------------------------------------
ERROR:
ORA-01039: insufficient privileges on underlying objects of the view

SP2-0612: Error generating AUTOTRACE EXPLAIN report
Statistics
----------------------------------------------------------
333 recursive calls
0 db block gets
1060 consistent gets
7 physical reads
0 redo size
411 bytes sent via SQL*Net to client
381 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
15 sorts (memory)
0 sorts (disk)
1 rows processed

SQL>

You don’t need any special system privileges in order to use the EXPLAIN PLAN statement. However, you do need to have INSERT privileges on the plan table, and you must have sufficient privileges to execute the statement you are trying to explain. The one difference is that in order to explain a statement that involves views, you must have privileges on all of the tables that make up the view. If you don’t, you’ll get an “ORA-01039: insufficient privileges on underlying objects of the view” error.
http://www.dbspecialists.com/files/presentations/use_explain.html
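
A hypothetical illustration of that rule (the user and object names are invented): if HR owns a view over its EMPLOYEES table and grants it to SCOTT, SCOTT can explain queries on the view only if he also has privileges on the base table:

SQL> conn hr/hr@rock
SQL> create view emp_v as select employee_id, last_name from employees;
SQL> grant select on emp_v to scott;
SQL> -- without the next grant, SCOTT's EXPLAIN PLAN on EMP_V fails with ORA-01039
SQL> grant select on employees to scott;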

Monday, May 11, 2009

TKPROF

TKPROF stands for Transient Kernel PROFiler.

The TKPROF program converts Oracle trace files into a more readable form. If you have a problem query you can use TKPROF to get more information. To get the most out of the utility you must enable timed statistics by setting the init.ora parameter or performing the following command:

SQL> ALTER SYSTEM SET TIMED_STATISTICS = TRUE;
System altered.

SQL> @D:\oracle\product\10.2.0\db_1\RDBMS\ADMIN\utlxplan.sql;
Table created.

SQL> CREATE PUBLIC SYNONYM PLAN_TABLE FOR SYS.PLAN_TABLE;
Synonym created.

SQL> GRANT SELECT, INSERT, UPDATE, DELETE ON SYS.PLAN_TABLE TO PUBLIC;
Grant succeeded.

SQL> conn hr/hr@rock
Connected.

SQL> ALTER SESSION SET SQL_TRACE = TRUE;
Session altered.
SQL> select count(*) from cat;
COUNT(*)
----------
20

SQL> ALTER SESSION SET SQL_TRACE = FALSE;
Session altered.
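
As an aside, 10g also provides the DBMS_MONITOR package for controlling tracing (assuming EXECUTE privilege on the package); a minimal sketch for tracing your own session with wait events included:

SQL> exec dbms_monitor.session_trace_enable(waits => TRUE, binds => FALSE);
SQL> select count(*) from cat;
SQL> exec dbms_monitor.session_trace_disable;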

A trace file will be created in D:\oracle\product\10.2.0\admin\rock\udump; the new trace file is rock_ora_4844.trc. It can then be interpreted using TKPROF at the command prompt as follows:

H:\>tkprof D:\oracle\product\10.2.0\admin\rock\udump\rock_ora_4844.trc D:\oracle\product\10.2.0\admin\rock\udump\hr.txt explain=hr/hr@rock table=sys.plan_table

The resulting output file contains the following type of information for all SQL statements processed, including the ALTER SESSION commands:

----------------------------------------------------------------------------------------
TKPROF: Release 10.2.0.1.0 - Production on Tue May 12 11:38:44 2009
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Trace file: D:\oracle\product\10.2.0\admin\rock\udump\rock_ora_4844.trc
Sort options: default
********************************************************************************
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
********************************************************************************
ALTER SESSION SET SQL_TRACE = TRUE
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        0      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        1      0.00       0.00          0          0          0           0

Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 169 (HR)
********************************************************************************
select text
from view$
where rowid=:1

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.00       0.00          0          2          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.00       0.00          0          2          0           1

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 1)

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  TABLE ACCESS BY USER ROWID VIEW$ (cr=1 pr=0 pw=0 time=22 us)
********************************************************************************
select count(*)
from cat

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.06          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.01          0        968          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.00       0.07          0        968          0           1
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 169 (HR)

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  SORT AGGREGATE (cr=968 pr=0 pw=0 time=10364 us)
     20   FILTER (cr=968 pr=0 pw=0 time=10117 us)
     45    TABLE ACCESS FULL OBJ$ (cr=920 pr=0 pw=0 time=9744 us)
      0    TABLE ACCESS CLUSTER TAB$ (cr=48 pr=0 pw=0 time=266 us)
     16     INDEX UNIQUE SCAN I_OBJ# (cr=32 pr=0 pw=0 time=126 us)(object id 3)

error during execute of EXPLAIN PLAN statement
ORA-01039: insufficient privileges on underlying objects of the view
parse error offset: 84
********************************************************************************
ALTER SESSION SET SQL_TRACE = FALSE
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.00       0.00          0          0          0           0
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 169 (HR)

********************************************************************************
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        2      0.00       0.06          0          0          0           0
Execute      3      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.01          0        968          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        7      0.00       0.07          0        968          0           1

Misses in library cache during parse: 2
Misses in library cache during execute: 1

OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.00       0.00          0          2          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.00       0.00          0          2          0           1

Misses in library cache during parse: 1
Misses in library cache during execute: 1
3 user SQL statements in session.
1 internal SQL statements in session.
4 SQL statements in session.
0 statements EXPLAINed in this session.
********************************************************************************
Trace file: D:\oracle\product\10.2.0\admin\rock\udump\rock_ora_4844.trc
Trace file compatibility: 10.01.00
Sort options: default

1 session in tracefile.
3 user SQL statements in trace file.
1 internal SQL statements in trace file.
4 SQL statements in trace file.
4 unique SQL statements in trace file.
56 lines in trace file.
7 elapsed seconds in trace file.

----------------------------------------------------------------------------------------
http://www.oracle-base.com/articles/8i/TKPROFAndOracleTrace.php

Tuning SQL Statements with AUTOTRACE

I am going to use the AUTOTRACE utility to check the execution plan of a query against a table, and later I will create an index and check whether performance improves.

SQL> conn sys/rock@rock as sysdba
Connected.
SQL> create table test as select * from all_objects;
Table created.

SQL> exec dbms_stats.gather_table_stats(ownname => 'SYS',tabname=>'TEST',force=>true);
PL/SQL procedure successfully completed.
SQL>
SQL> select count (*) from test;
COUNT(*)
----------
49771

SQL> desc test
Name
------------------------------------------------------------------
OWNER
OBJECT_NAME
SUBOBJECT_NAME
OBJECT_ID
DATA_OBJECT_ID
OBJECT_TYPE
CREATED
LAST_DDL_TIME
TIMESTAMP
STATUS
TEMPORARY
GENERATED
SECONDARY
SQL>
SQL> select count (distinct(data_object_id)) from test;
COUNT(DISTINCT(DATA_OBJECT_ID))
-------------------------------
3504

SQL> set autotrace on

SQL> select * from test where data_object_id=151596;
OWNER                          OBJECT_NAME                     OBJECT_ID
------------------------------ ------------------------------ ----------
RMAN2                          TF_P                               151596

Execution Plan
----------------------------------------------------------
Plan hash value: 1357081020

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |    93 |   154   (2)| 00:00:02 |
|*  1 |  TABLE ACCESS FULL| TEST |     1 |    93 |   154   (2)| 00:00:02 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("DATA_OBJECT_ID"=151596)

Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
689 consistent gets
0 physical reads
0 redo size
1202 bytes sent via SQL*Net to client
381 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

SQL> create index test_data_idx on test(data_object_id);
Index created.
SQL> select * from test where data_object_id=151596;
OWNER                          OBJECT_NAME                     OBJECT_ID
------------------------------ ------------------------------ ----------
RMAN2                          TF_P                               151596

Execution Plan
----------------------------------------------------------
Plan hash value: 2271528063

---------------------------------------------------------------------------------------------
| Id  | Operation                   | Name          | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |               |     1 |    93 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TEST          |     1 |    93 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | TEST_DATA_IDX |     1 |       |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("DATA_OBJECT_ID"=151596)

Statistics
----------------------------------------------------------
149 recursive calls
0 db block gets
28 consistent gets
1 physical reads
0 redo size
1206 bytes sent via SQL*Net to client
381 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
3 sorts (memory)
0 sorts (disk)
1 rows processed

Conclusion: creating the index reduced the optimizer cost from 154 to 2 (the figure in parentheses is the %CPU component of the cost, not a percentage improvement), and consistent gets dropped from 689 to 28, because the row is now accessed via an INDEX RANGE SCAN and TABLE ACCESS BY INDEX ROWID instead of a full table scan.
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Explain Plan vs. AUTOTRACE
http://www.oracle-base.com/articles/8i/ExplainPlanUsage.php