Tuesday, December 4, 2007

Creating Standby Database (Steps)

I have created a standby database with the following steps.
You will also find the Primary Database init file and Standby Database init file at the bottom of the page.
PRIMARY DATABASE
------------------------------
1. Change the initrock.ora file, adding:
*.log_archive_dest_1='LOCATION=D:\oracle\Archive_rock'
*.log_archive_dest_state_1=ENABLE
*.log_archive_dest_2='SERVICE=ROCKSTD LGWR ASYNC'
*.log_archive_dest_state_2=ENABLE
*.fal_server='ROCKSTD'
*.fal_client='ROCK'
*.db_unique_name='rock'
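Note: if the primary is already running from an spfile, the same settings can be applied dynamically instead of editing the pfile; a minimal sketch using the names above (db_unique_name is static and still requires a restart):

SQL> alter system set log_archive_dest_2='SERVICE=ROCKSTD LGWR ASYNC' scope=both;
SQL> alter system set log_archive_dest_state_2=ENABLE scope=both;
SQL> alter system set fal_server='ROCKSTD' scope=both;
SQL> alter system set fal_client='ROCK' scope=both;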

2. Shutdown immediate;
2.1. Startup mount;
3. alter database create standby controlfile as 'd:\rockstd_control.ctl';
4. alter database open;
5.0 Shutdown immediate;
5.1 Startup Nomount;
5.2 Create Spfile from Pfile;
5.3 The changes are now recorded in the current spfile (D:\oracle\product\10.2.0\db_1\database\SPFILEROCK.ORA). Copy the pfile from D:\oracle\product\10.2.0\db_1\database to the standby server.
6. Add the following entry to the TNSNAMES.ora file.
rockstd is the service name, which will be created at the standby database.
ABM-D0011.abc.com is the hostname where the standby database will be created.

rockstd =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = ABM-D0011.abc.com)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = rockstd)
)
)

Copy all datafiles, archived log files, redo log files, the standby control file, and the init file to the STANDBY database host.
-----------------------------------------------------------------------------------
STANDBY DATABASE
--------------------------

1. Add the following to TNSNAMES.ORA.
rock is the service name of the primary database.
HOST = 10.142.192.28 is the IP address of the primary database.
rock =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.142.192.28)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = rock)
)
)
2. Copy the datafiles, redo logs, and rockstd_control.ctl into rock (a new folder in OraData).
3. Copy initrock.ora to D:\oracle\product\10.2.0\db_1\database and rename it initrockstd.ora.
4. Check the initrockstd.ora parameters, and create the folder rock with subfolders adump, bdump, cdump, and udump in D:\oracle\product\10.2.0\admin.
5. Create the folder flash_recovery_area at D:\oracle\product\10.2.0\rock\flash_recovery_area; this path is defined by the db_recovery_file_dest parameter in the initrockstd file.
6. Set in the init file:
*.fal_server='ROCK'
*.fal_client='ROCKSTD'
*.db_unique_name='rockstd'
*.standby_file_management=AUTO
7. Place the initrockstd.ora file in the ORACLE_HOME\database folder.
8. REMOVE the LOG_ARCHIVE_DEST_2='SERVICE=rockstd LGWR ASYNC' parameter (and LOG_ARCHIVE_DEST_STATE_2=ENABLE).
9. CHANGE the LOG_ARCHIVE_DEST_1 parameter to specify the path for the archived log file destination, then copy all archives from the primary and paste them at that destination. Moreover, the datafiles, redo logs, and control file must sit at the same paths on the standby database as were specified at the primary database.
10. Create the service rockstd using the ORADIM utility.

H:\>oradim -new -sid rockstd -syspwd rock -startmode m -pfile D:\oracle\product\10.2.0\db_1\database\initrockstd.ora
The password must be the same as the primary SYS password.
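If no password file exists on the standby host yet, it can be created with the orapwd utility; a sketch (the PWD<sid>.ora file name follows the default Windows convention, and the password must match the primary SYS password):

H:\>orapwd file=D:\oracle\product\10.2.0\db_1\database\PWDrockstd.ora password=rock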

11. H:\>sqlplus
SQL*Plus: Release 10.2.0.1.0 - Production on Mon Dec 3 12:34:46 2007
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Enter user-name: sys@rockstd as sysdba
Enter password:
Connected to an idle instance.

NOTE: If you are unable to connect to the standby database and get a listener error, add a static entry for the standby SID to the listener.ora file on the standby host, for example:

(SID_DESC =
(GLOBAL_DBNAME = standby)
(ORACLE_HOME = E:\oracle\product\10.2.0\db_1)
(SID_NAME=standby)
)

SQL> startup nomount;
ORACLE instance started.

Total System Global Area 167772160 bytes
Fixed Size 1247876 bytes
Variable Size 62915964 bytes
Database Buffers 96468992 bytes
Redo Buffers 7139328 bytes

SQL> alter database mount standby database;

Database altered.

12. Add additional Standby Redo logs

SQL> alter database add standby logfile 'D:\oracle\product\10.2.0\oradata\rock\REDO04.log' size 50m;
Database altered.
SQL> alter database add standby logfile 'D:\oracle\product\10.2.0\oradata\rock\REDO05.log' size 50m;

Database altered.
SQL> alter database add standby logfile 'D:\oracle\product\10.2.0\oradata\rock\REDO06.log' size 50m;

Database altered.
13. Start Recovery
alter database recover managed standby database disconnect from session;

14. Check the name and applied status of the archived logs:
select name, applied from v$archived_log;
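To spot an unresolved gap from the standby side, v$archive_gap can also be queried; a minimal check:

SQL> select thread#, low_sequence#, high_sequence# from v$archive_gap;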
15.
SQL> select process, status from v$managed_standby;
PROCESS STATUS
--------- ------------
ARCH CONNECTED
ARCH CONNECTED
RFS IDLE
MRP0 WAIT_FOR_LOG

ARCn.... archives redo log files (a maximum of 30 ARCn processes can be enabled).
RFS....... receives archived redo from the primary database.
MRP0.... applies the received archives to the standby database; archive gaps are resolved via fal_server & fal_client.

16. Check alert_rockstd.log at D:\oracle\product\10.2.0\admin\rock\bdump for errors.
------------------------------------------------------------------------------------

Adding a datafile or creating a tablespace at the Primary Database.
If you have not set the STANDBY_FILE_MANAGEMENT initialization parameter to auto, you must re-create the control file on the standby database.

http://download-uk.oracle.com/docs/cd/A97630_01/server.920/a96653/sbydb_manage_ps.htm


If you have created another tablespace or added a datafile to an existing tablespace at the primary database and did not set the STANDBY_FILE_MANAGEMENT initialization parameter to AUTO, then the archived logs at the standby database will not be applied, and obviously there will be a gap between the primary and standby database archive logs.
I have added tablespace testing1 with datafile testing1.dbf at the primary database. (Shown below in the Primary Database section.)

When you query SELECT NAME FROM V$DATAFILE on the standby, you will find the filename UNNAMED00004 instead of testing1.dbf.
SQL> select name from v$datafile;
NAME
----------------------------------------------------------
D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\SYSTEM01.DBF
D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\UNDOTBS01.DBF
D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\SYSAUX01.DBF
D:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\UNNAMED00004
D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\SYSTEM03.DBF
D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\ABAMCO_TEST01.DBF

6 rows selected.
We learn that datafiles are not copied to the standby database automatically; one needs to copy them manually.

The following steps must be followed after adding a tablespace/datafile at the primary database.
----------------------------------------------------------------------------------------
PRIMARY Database
1. You need to manually copy the datafile and re-create the standby control file, then copy and paste both files at the standby database.

SQL> create tablespace testing1
2 datafile 'D:\oracle\product\10.2.0\oradata\rock\testing1.dbf' size 50m;
Tablespace created.

SQL> alter tablespace testing1 begin backup;
Tablespace altered.
Copy the testing1.dbf datafile to any location.
SQL> alter tablespace testing1 end backup;
Tablespace altered.
SQL> alter database create standby controlfile as 'd:\rockstd_control.ctl';
Database altered.
SQL>
----------------------------------
Standby Database

SQL> alter database recover managed standby database cancel;
Database altered.

SQL> shutdown immediate;
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
SQL>

1. Copy and paste controlfile & datafile at D:\oracle\product\10.2.0\oradata\rock
2.
SQL> startup nomount;
ORACLE instance started.

Total System Global Area 167772160 bytes
Fixed Size 1247876 bytes
Variable Size 62915964 bytes
Database Buffers 96468992 bytes
Redo Buffers 7139328 bytes
3.
SQL> alter database mount standby database;
Database altered.
4.
SQL> alter database recover managed standby database disconnect from session;
Database altered.
SQL>
------------------------------------------------------
Primary Database initrock.ora (Initfile)
-------------------------------------------
rock.__db_cache_size=83886080
rock.__java_pool_size=4194304
rock.__large_pool_size=4194304
rock.__shared_pool_size=67108864
rock.__streams_pool_size=0
*.audit_file_dest='D:\oracle\product\10.2.0/admin/rock/adump'
*.background_dump_dest='D:\oracle\product\10.2.0/admin/rock/bdump'
*.compatible='10.2.0.1.0'
*.control_files='D:\oracle\product\10.2.0\oradata\rock\control01.ctl',
'D:\oracle\product\10.2.0\oradata\rock\control02.ctl',
'D:\oracle\product\10.2.0\oradata\rock\control03.ctl'
*.core_dump_dest='D:\oracle\product\10.2.0/admin/rock/cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='rock'
*.db_unique_name='rock'
*.db_recovery_file_dest='D:\oracle\product\10.2.0/flash_recovery_area'
*.db_recovery_file_dest_size=2147483648
*.dispatchers='(PROTOCOL=TCP) (SERVICE=rockXDB)'
*.job_queue_processes=10
*.open_cursors=300
*.pga_aggregate_target=16777216
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=167772160
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='D:\oracle\product\10.2.0/admin/rock/udump'
*.log_archive_dest_1="location=D:\oracle\Archive_rock"
*.log_archive_format='arc_%s_%t_%r.arc'
*.log_archive_dest_state_1=ENABLE
*.log_archive_dest_2='SERVICE=rockstd LGWR ASYNC'
*.log_archive_dest_state_2=ENABLE
*.fal_server='ROCKSTD'
*.fal_client='ROCK'
-------------------------------------------

Standby Database initrockstd.ora file
rock.__db_cache_size=83886080
rock.__java_pool_size=4194304
rock.__large_pool_size=4194304
rock.__shared_pool_size=67108864
rock.__streams_pool_size=0
*.audit_file_dest='D:\oracle\product\10.2.0/admin/rock/adump'
*.background_dump_dest='D:\oracle\product\10.2.0/admin/rock/bdump'
*.compatible='10.2.0.1.0'
*.control_files='D:\oracle\product\10.2.0\oradata\rock\rockstd_control.ctl'
*.core_dump_dest='D:\oracle\product\10.2.0/admin/rock/cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='rock'
*.db_unique_name='rockstd'
*.db_recovery_file_dest='D:\oracle\product\10.2.0\oradata\rock\flash_recovery_area'
*.db_recovery_file_dest_size=2147483648
*.dispatchers='(PROTOCOL=TCP) (SERVICE=rockXDB)'
*.job_queue_processes=10
*.open_cursors=300
*.pga_aggregate_target=16777216
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=167772160
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='D:\oracle\product\10.2.0/admin/rock/udump'
*.log_archive_dest_1="location=D:\oracle\product\10.2.0\oradata\rock\archive"
*.log_archive_format='arc_%s_%t_%r.arc'
*.log_archive_dest_state_1=ENABLE
*.fal_server='ROCK'
*.fal_client='ROCKSTD'

*.standby_file_management=auto
--------------------------------------------------------------------------
Standby Database Maintenance
The standby database is maintained by keeping it in recovery mode. In this state the database is mounted (not open) and is allowed to receive the archived log files from the primary database and apply them for synchronization. In this mode the standby database is not open for any user connections; it only receives the archived logs from the primary database via Log Transport Services and applies the changes recorded in them through Log Apply Services.
In order to put the standby database in recovery mode, we issue the following commands:
-- Mount the standby database and make sure that it is not open for any user connections.
SQL> ALTER DATABASE MOUNT STANDBY DATABASE;

-- Put the standby database in recovery mode and ready to receive archived logs.
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

The command above puts the database in recovery mode, ready to receive the archived logs from the primary database. It also ensures that users cannot connect to the standby database and that any previously connected users are disconnected.

The standby database can also be put into read only mode for reporting purposes. This is accomplished by issuing the commands:

-- Cancel recovery mode and stop applying received logs.
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

-- Put the standby database in read only mode for reporting purposes.
SQL> ALTER DATABASE OPEN READ ONLY;
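A quick check that the standby really is open read only (a minimal sketch):

SQL> SELECT OPEN_MODE FROM V$DATABASE;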
This process can also be reversed and the database can be put back into recovery mode by reissuing the command
-- Put the standby database back into recovery mode.
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
Switchover and Failover Operations
Standby Database Switchover
A database can be in one of two mutually exclusive modes (primary or standby). These roles can be altered at runtime without loss of data or resetting of redo logs. This process is known as a SWITCHOVER and can be performed using the following statements:

-- Connect to primary database and switchover to standby
SQL> CONNECT sys/password@ICPORA AS SYSDBA
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO STANDBY;

-- Shutdown the primary database
SQL> SHUTDOWN IMMEDIATE;

-- Mount the old primary database as the new standby database
SQL> STARTUP NOMOUNT PFILE=C:\oracle\ora92\database\initICPORA.ora
SQL> ALTER DATABASE MOUNT STANDBY DATABASE;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE;

-- Convert the standby database to a primary database
SQL> CONNECT sys/password@ICPSTD AS SYSDBA
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

-- Shutdown standby database
SQL> SHUTDOWN IMMEDIATE;

-- Open old standby database as primary
SQL>STARTUP PFILE=C:\oracle\ora92\database\initICPSTD.ora
The SWITCHOVER operation is useful in scenarios where the primary database needs to be temporarily taken offline or made unavailable to user sessions for maintenance purposes. A switchover is a temporary role exchange between the primary database and the standby database.
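Before issuing the switchover it is worth verifying that both sides are ready; a minimal check (expect TO STANDBY on the primary and TO PRIMARY on the standby):

SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE;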
Standby Database Failover

A FAILOVER occurs when a failure of the primary database forces a standby database to be converted into the primary database. This transition is not temporary; after failing over to the standby database, the original primary cannot simply be brought back online into the configuration.

A FAILOVER operation is initiated by issuing the following commands:

-- Finish the recovery mode on the standby database
SQL>ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;

--Activate standby database as the new primary database
SQL>ALTER DATABASE ACTIVATE STANDBY DATABASE;

It is important to note that one should not FAILOVER to a standby database other than in an emergency, because the failover operation is an unplanned transition that may result in the loss of application data.

Once a FAILOVER operation is performed, there is no going back. This is because the original primary database is not operational anymore and the standby database that you fail over to the primary role is no longer capable of returning to being a standby database in the original configuration.
-------------
-------------
When there is a link outage or a network problem, there will obviously be a gap between the primary and standby databases' archived logs. There are two solutions for that:
1. Manually resolve the gaps
2. Automatically resolve the gaps
1. Manual (recommended for a few logs)
Step 1:
Manually copy all missing archived logs from the primary database and paste them into the standby database's archived log folder. (Copying one file at a time is recommended.) A query sketch for finding the missing sequence numbers follows these steps.
Step 2: Register the archived logs at the standby database (one by one)
1. conn sys/password@standbydb as sysdba
2. Shutdown Immediate;
3. Startup nomount;
4. Alter database mount Standby Database;
5. alter database register logfile 'd:\abc\abc.log';
6. alter database recover managed standby database disconnect from session;
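To find which sequence numbers are missing, compare the highest archived sequence on the primary with the highest applied sequence on the standby; a sketch, run on each side:

-- On the primary:
SQL> select max(sequence#) from v$archived_log;
-- On the standby:
SQL> select max(sequence#) from v$archived_log where applied = 'YES';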
2. Automatic (recommended for a large number of logs)
The production/primary database must be taken offline (restarted), so the automatic option should be used when there is NO or LITTLE activity on the primary database.
Step 1: Connect at the standby database
conn sys/password@standbydb as sysdba
alter database recover managed standby database cancel;
Step 2: Connect at the primary database
1. Shutdown immediate;
2. Startup nomount;
3. Alter database mount;
4. Alter database open;
Step 3: Connect at the standby database
1. alter database recover managed standby database parallel 8 nodelay disconnect from session;

Wednesday, November 7, 2007

Scenarios (My topics at Oracle Forums)

My database is in ARCHIVELOG mode and I created a table TEST. Later I made a whole-database backup (RMAN full DB backup). After that, I deleted the table TEST and restored from the full database backup. Surprisingly, there was no table TEST. Why?

Read the Oracle forums thread below to see how the experts helped me recover the TEST table.


http://forums.oracle.com/forums/thread.jspa?threadID=581258&start=0&tstart=0


-----------------------------------------------------------------------------------------------------------


Recovering dropped table from Backup. (User Managed Backup & Recovery)

http://forums.oracle.com/forums/thread.jspa?threadID=584733&tstart=0

Lost all Online Redo Log Files.

http://forums.oracle.com/forums/message.jspa?messageID=3134695#3134695

---------------------------------------------
Loss of one datafile and all control files
So if a datafile and all control files are lost, then we can recover the control files and the database, though we cannot recover the lost datafile. (HOT BACKUP scenario)

http://forums.oracle.com/forums/thread.jspa?threadID=872397&tstart=0

(To be Continued)

Saturday, October 27, 2007

Data Pump

Oracle Data Pump
Oracle 10g's new DataPump utility is designed as the eventual replacement for the original Oracle Import and Export utilities.

Some of the drawbacks in traditional Export and Import that needed to be addressed:
<> Operations are difficult to restart.
<> Execution is client-side and single-threaded.
<> Dump files can grow enormously.
<> Tuning mechanisms are limited.
<> Progress monitoring is difficult.
<> Database object filtering features are limited.

Enter The DataPump: Features Overview
Fortunately, Oracle 10g has addressed many of these issues with the DataPump, a new set of utilities that significantly expands the capabilities of the original Export and Import utilities.
<> Server-side parallel execution.
<> Improved control and restartability of operations.
<> Simplified monitoring of operational status.
<> Automated performance tuning.
<> Improved database object filtering.
<> Export dump file control enhancements.

For details on the above, visit http://www.databasejournal.com/features/oracle/article.php/3489491

DataPump 10g at Oracle Website by Arup Nanda
http://www.oracle.com/technology/pub/articles/10gdba/week4_10gdba.html

=============================================================================
Data Pump Export
First you need to create a directory for the Data Pump utility, and you must grant read, write privileges on it to the user.

SQL> create or replace directory expdp as 'D:\expdp';
SQL> grant read, write on directory expdp to rakesh;

SQL> GRANT exp_full_database to rakesh;


---------------------------------------------------------------------------------------------
Exporting with DataPump
H:>expdp erp/********@live-db tables=client_type directory=exp_dir dumpfile=datapump_erp.dmp

---------------------------------------------------------------------------------------------
Importing with DataPump
H:\>impdp aa/********@rock tables=client_type directory=SYS_DMP REMAP_SCHEMA=erp:aa remap_tablespace=erp:users dumpfile=datapump_erp.dmp

                  Live Database     Test Database
DB Name           Live-db           rock
DB Users          erp               aa
Directory         EXP_DIR           SYS_DMP
Tablespaces       ERP               USERS

The above chart might help young, novice DBAs get an idea of how we can export from one database and import into another with Data Pump.

Note: Before importing, you MUST manually copy the dump file from the EXP_DIR directory to the SYS_DMP directory.
---------------------------------------------------------------------------------------------
SQL> column directory_path format a70
SQL> select * from dba_directories;
OWNER           DIRECTORY_NAME       DIRECTORY_PATH
--------------- -------------------- -----------------------------------
SYS             EXPDP                D:\expdp_dir
SYS             ABC                  d:\abc_dir
SYS             WORK_DIR             C:\ADE\aime_10.2_nt_push\oracle/work
SYS             SYS_DMP              D:\exp_dir
SYS             DATA_PUMP_DIR        D:\oracle\product\10.2.0\admin\rock\dpdump\
SYS             ADMIN_DIR            C:\ADE\aime_10.2_nt_push\oracle/md/admin

6 rows selected.


SQL> GRANT read, write ON DIRECTORY expdp TO hr;
Grant succeeded.
SQL>

SQL> CREATE DIRECTORY hrdir AS 'D:\hr_dir';
Directory created.

SQL> grant read, write on directory hrdir to hr;
Grant succeeded.
SQL>

Exporting the EMPLOYEES table...

H:\>expdp hr/hr@rock directory=hrdir dumpfile=hr_emptab.dmp logfile=hr_emptab.log tables=EMPLOYEES

Export: Release 10.2.0.1.0 - Production on Friday, 12 June, 2009 12:02:40
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "HR"."SYS_EXPORT_TABLE_01": hr/********@rock directory=hrdir dumpfile=hr_emptab.dmp logfile=hr_emptab.log tables=EMPLOYEES
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/COMMENT
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/TRIGGER
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "HR"."EMPLOYEES"    15.78 KB    107 rows
Master table "HR"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for HR.SYS_EXPORT_TABLE_01 is: D:\HR_DIR\HR_EMPTAB.DMP
Job "HR"."SYS_EXPORT_TABLE_01" successfully completed at 12:03:01

The above export command has also exported the indexes, constraints, triggers, comments, and statistics. If we need to export only the table EMPLOYEES and no other objects, check the command below.

H:\>expdp hr/hr@rock directory=hrdir dumpfile=hr_only_EMP_table.dmp logfile=hr_emp_table.log tables=EMPLOYEES exclude=index, view, REF_CONSTRAINT, function, trigger, constraint

Export: Release 10.2.0.1.0 - Production on Friday, 12 June, 2009 12:17:44
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "HR"."SYS_EXPORT_TABLE_01": hr/********@rock directory=hrdir dumpfile=hr_only_EMP_table.dmp logfile=hr_emp_table.log tables=EMPLOYEES exclude=index, view, REF_CONSTRAINT, function, trigger, constraint
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/COMMENT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "HR"."EMPLOYEES"    15.78 KB    107 rows
Master table "HR"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for HR.SYS_EXPORT_TABLE_01 is: D:\HR_DIR\HR_ONLY_EMP_TABLE.DMP
Job "HR"."SYS_EXPORT_TABLE_01" successfully completed at 12:18:01

Wednesday, October 3, 2007

Oracle Interview Questions

These are the Oracle DBA interview questions that were asked of me for a fresh/assistant Oracle DBA job.
= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =

1- You have written in your CV that as a DBA you can test a backup. How?
2- You have written in your CV that you can monitor physical and logical backups. How?
3- You have written in your CV that you can rebuild an index to rectify segment fragmentation. (Coalesce)
4- You have written in your CV that you can automatically schedule a logical backup. How?
5- HWM, when will it be decreased? How can I do it? (Truncate table)
6- Alert log file? What it contains?
7- Control file? What information it contains? When it is read?
8- I have lost my Control File & don’t have any Backup; Can I start DB & operate? How? (You create a control file in no mount mode.)
9- Mount- No Mount?
10- SGA? It’s Components?
11- Checkpoint? Why Checkpoint? Where it’s recorded? When it’s recorded?
12- LGWR? When it writes?
13- Duties of DBA?
14- What’s Statspack? How can I use Statspack?
15- Methods of Backup – (Cold & Hot Backup)?
16 -Why do you want to be DBA? Why Not a Developer?
17- What is RECOVERY Catalog? Why we need it? Complete command/steps of creating Recovery Catalog? How will it know about the Primary Database? -What role/Privileges are given to user when he is connected to Recovery Catalog? -How can I connect with RMAN? It’s Steps?
18- RMAN Incremental Backups? What are Differential & Cumulative Backups?
19- Write a statement/command for exporting all the objects of owner ‘HR’?
20- Direct=y
21- Standby Databases?
22- RAC?
23- Partitioning (List, Range, Hash)
24- If my application is slow, how can we make it efficient, faster? -How will you tune if application is slow?
25- RMAN Backups?
26- ADDM.
27- AWR.
28- Flashback? What is flashback in 10g?
29- What is command for restoring from recycle bin?

2nd interview.
30- Difference between User & Schema?
31- Cluster key?
32- RAC?
33- Stand By database?
33- What are duties of DBA?
34- What is Difference among NOMOUNT & MOUNT & OPEN modes? When Database will be available for read?
35- What is SGA? And what are its components?
36- How can we allocate SGA? How we can know it proper size? How should we estimate? How can we calculate size of SGA?
37- If database is running 24 hours a day, 7 days a week when I come to office in morning what should I do Check? What files, views, or Services should I check?
38- How can you perform Fragmentation in tables?
39- Suppose I export table data and then import it. Does that remove the fragmentation?
40- The interviewer asked me: "Tell me, if you are the DBA and you left the office at night, when you arrive at the office in the morning, what will you check in the database?"
41- How can you schedule a job?
42- RAID?
43- New feature of 10g.
44- What is Database? RDBMS? Rules?

HR Interview Questions?
1. Tell me more about yourself.
2. Why did you choose this career?
3. Why do you want to work here at ABC Inc.?
4. What would you say is your strength? Your weakness?
5. What did you learn from your last job/internship/educational experience?
6. Why do you want this job?

= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
1- You have written in your CV that as a DBA you can test a backup. How?
Regularly test the backup & recovery strategy, because it demonstrates the integrity of the backups and the validity of the backup and recovery method, and it reduces problems before they occur in a production database.

Testing the backup and recovery plan ensures:
<> That the backup and recovery methods are sound
<> Integrity of backups
<> That the backup and recovery strategy meets business needs
<> That problems are minimized before they occur in a production environment
<> That personnel can react quickly and effectively in case any errors arise, avoiding a crisis situation

2- You have written in your CV that you can monitor physical and logical backups. How?
http://www.oracle.com/technology/deploy/availability/htdocs/BR_Overview.htm
A backup is a representative copy of data. This copy can include important parts of a database such as the control file, redo logs, and datafiles. A backup protects data from application error and acts as a safeguard against unexpected data loss, by providing a way to restore original data. Backups are divided into physical backups and logical backups. Physical backups are copies of physical database files. The phrase "backup and recovery" usually refers to the transfer of copied files from one location to another, along with the various operations performed on these files.
In contrast, logical backups contain data that is exported using SQL commands and stored in a binary file. Oracle records both committed and uncommitted changes in redo log buffers. Logical backups are used to supplement physical backups. Restoring a physical backup means reconstructing it and making it available to the Oracle server. To recover a restored backup, data is updated using redo records from the transaction log. The transaction log records changes made to the database after the backup was taken.

Types of Backup
There are two kinds of database backups available to us: physical backups and logical backups.
Physical Backup is an actual physical copy of the files of the database copied from one location to another.
Logical Backup is a copy of the data in the database but not a copy of the physical files.

Cold Backup
A cold backup is a physical backup.
During a cold backup the database is closed and not available to users. All files of the database are copied (image copy). The datafiles do not change during the copy so the database is in sync upon restore.
Used when: Service level allows for some down time for backup

Hot Backup
A hot backup is a physical backup. In a hot backup the database remains open and available to users. All files of the database are copied (image copy). There may be changes to the database as the copy is made and so all log files of changes being made during the backup must be saved too. Upon a restore, the changes in the log files are reapplied to bring the database in sync.
Used when: A full backup of a database is needed and the service level allows no down time for the backup

Logical Backup
A logical backup is an extract of the database. All SQL statements to create the objects and all SQL statements to populate the objects are included in the extract. Oracle provides a utility export, to create the extract. A partner utility, import, is used to bring the data back into the database.
A logical backup can be done at the table, schema (or proxy owner), or database level. That is, we can extract only a list of specified tables, a list of specified schemas, or the full database.

Used to:
Move or archive a database
Move or archive a table(s)
Move or archive a schema(s)
Verify the structures in the database.
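For illustration, the classic utilities can be driven from the command line like this (a hedged sketch; the user names, passwords, and file names are made up):

exp system/manager owner=hr file=hr.dmp log=hr_exp.log
imp system/manager fromuser=hr touser=hr file=hr.dmp log=hr_imp.log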

3- You have written in your CV that you can rebuild an index to rectify segment fragmentation. (Coalesce)
Index: An index is a tree structure that allows direct access to a row in a table.
Rebuild: An index that contains many deleted entries should be rebuilt; a typical case is an index on the order number of an ORDERS table, where completed orders are deleted and new orders with higher numbers are added.

ALTER INDEX orders_index REBUILD
TABLESPACE index02;


<> The above command moves the index to a different tablespace.
<> It improves space utilization by removing deleted entries.

COALESCE: If you encounter index fragmentation, you can rebuild or coalesce. Coalescing an index is a block-level rebuild that is performed online,
i.e. merging B-tree index leaf blocks so that they can be freed for reuse.


ALTER INDEX hr.employees_idx COALESCE;

----------------------------------------------------------
To learn when one must rebuild indexes, read the thread below at forums.oracle.com.
http://forums.oracle.com/forums/thread.jspa?threadID=580451&tstart=0

4- You have written in your CV that you can automatically schedule a logical backup. How?
http://rakeshocp.blogspot.com/2007/09/batch-file-scripts.html
Visit above blog page for Automatic Scheduling of logical backup.

5- HWM, when will it be decreased? How can I do it? (Truncate table)
High water mark: The high water mark is the boundary between used and unused space in a segment. As requests for new free blocks that cannot be satisfied by existing free lists are received, the block to which the high water mark points becomes a used block, and the high water mark is advanced to the next block. In other words, the segment space to the left of the high water mark is used, and the space to the right of it is unused.

The high water mark divides a segment into used blocks and free blocks.
Blocks below the high water mark (used blocks) have at least once contained data. This data might since have been deleted.
Since Oracle knows that blocks beyond the high water mark don't have data, it only reads blocks up to the high water mark in a full table scan.
Oracle keeps track of the high water mark for a segment in the segment header.
Moving the high water mark
In normal DB operations, the high water mark only moves upwards, not downwards. The exception is TRUNCATE.
If there is a lot of free space below the high water mark, one might consider using ALTER TABLE ... MOVE statements. See On shrinking table sizes.
http://www.adp-gmbh.ch/blog/2005/july/20.html

truncate table table_name;
truncate cluster cluster_name;

A statement like DELETE FROM tablename deletes all records in the table, but it does not free any space (see On table sizes). In order to free the space as well, use TRUNCATE. However, a truncate cannot be rolled back.
Truncate Table: Basically, a truncate statement resets the high water mark to its initial position.
A truncate statement cannot be used on a synonym.

Ways of resetting the high water mark:
- export/import
- CTAS (create table as select)
- dbms_redefinition
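In 10g, segment shrink is another option when the tablespace uses automatic segment space management (a sketch; table T is hypothetical):

SQL> alter table t enable row movement;
SQL> alter table t shrink space;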

6- Alert log file? What it contains?
Location defined by BACKGROUND_DUMP_DEST.
The alert log file should be the first place to look when diagnosing day-to-day operations or errors.

Keeps a record of:
<> When the DB was started & shut down
<> List of all non-default initialization parameters
<> Startup of background processes
<> Log sequence number LGWR is writing to
<> Information about log switches
<> Creation of tablespaces & undo segments
<> ALTER statements that have been issued
<> Information about error messages (ORA-600) & extent errors
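To find where the alert log is written on a given instance, a minimal check:

SQL> select value from v$parameter where name = 'background_dump_dest';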

7- Control file? What information it contains? When it is read?
Small binary file
Defines current state of physical database.
Required.
<> At mount state during startup
<> To operate the database
Linked to a single database.
Loss may require recovery.

Control file contains:
<> Database name & identifier
<> Timestamp of DB creation
<> Tablespace names
<> Names & locations of datafiles & online redo log files
<> Current online redo log file information
<> Checkpoint information
<> Begin & end of undo segments
<> Redo log archive information
<> Backup information
It is read in the MOUNT stage of the database.
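The current control files can be listed with a minimal query:

SQL> select name from v$controlfile;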


8- I have lost my Control File & don’t have any Backup; Can I start DB & operate? How? (You create a control file in no mount mode.)
Yes, we can operate the database, follow the steps below:
<> Start the instance if necessary.
<> Shut down the instance if the start failed.
<> Start the instance in NOMOUNT mode.
<> Run the trace file script to re-create the control file.
<> Determine if a full backup is required and perform one if necessary.
<> Ensure that the instance is started & the database is open.


http://www.orafaq.com/wiki/Control_file_recovery

http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/control.htm#i1006360
Steps for Creating New Control Files
Complete the following steps to create a new control file.
1. Make a list of all datafiles and redo log files of the database.
If you follow recommendations for control file backups as discussed in "Backing Up Control Files", you will already have a list of datafiles and redo log files that reflect the current structure of the database. However, if you have no such list, executing the following statements will produce one.

SELECT MEMBER FROM V$LOGFILE;
SELECT NAME FROM V$DATAFILE;
SELECT VALUE FROM V$PARAMETER WHERE NAME = 'control_files';

If you have no such lists and your control file has been damaged so that the database cannot be opened, try to locate all of the datafiles and redo log files that constitute the database. Any files not specified in step 5 are not recoverable once a new control file has been created. Moreover, if you omit any of the files that make up the SYSTEM tablespace, you might not be able to recover the database.

2. Shut down the database.
If the database is open, shut down the database normally if possible. Use the IMMEDIATE or ABORT clauses only as a last resort.

3. Back up all datafiles and redo log files of the database.

4. Start up a new instance, but do not mount or open the database:
STARTUP NOMOUNT

5. Create a new control file for the database using the CREATE CONTROLFILE statement.
When creating a new control file, specify the RESETLOGS clause if you have lost any redo log groups in addition to control files. In this case, you will need to recover from the loss of the redo logs (step 8). You must specify the RESETLOGS clause if you have renamed the database. Otherwise, select the NORESETLOGS clause.

6. Store a backup of the new control file on an offline storage device. See "Backing Up Control Files" for instructions for creating a backup.

7. Edit the CONTROL_FILES initialization parameter for the database to indicate all of the control files now part of your database as created in step 5 (not including the backup control file). If you are renaming the database, edit the DB_NAME parameter in your instance parameter file to specify the new name.

8. Recover the database if necessary. If you are not recovering the database, skip to step 9.
If you are creating the control file as part of recovery, recover the database. If the new control file was created using the NORESETLOGS clause (step 5), you can recover the database with complete, closed database recovery.
If the new control file was created using the RESETLOGS clause, you must specify USING BACKUP CONTROLFILE. If you have lost online or archived redo logs or datafiles, use the procedures for recovering those files.
See Also:
Oracle Database Backup and Recovery Basics and Oracle Database Backup and Recovery Advanced User's Guide for information about recovering your database and methods of recovering a lost control file

9. Open the database using one of the following methods:
o If you did not perform recovery, or you performed complete, closed database recovery in step 8, open the database normally:
ALTER DATABASE OPEN;
o If you specified RESETLOGS when creating the control file, use the ALTER DATABASE statement, indicating RESETLOGS:
ALTER DATABASE OPEN RESETLOGS;
The database is now open and available for use.
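For orientation only, a minimal CREATE CONTROLFILE skeleton (every name, path, and size here is illustrative, not taken from the steps above):

CREATE CONTROLFILE REUSE DATABASE "ROCK" NORESETLOGS ARCHIVELOG
LOGFILE
GROUP 1 'D:\oracle\oradata\rock\redo01.log' SIZE 50M,
GROUP 2 'D:\oracle\oradata\rock\redo02.log' SIZE 50M
DATAFILE
'D:\oracle\oradata\rock\system01.dbf',
'D:\oracle\oradata\rock\undotbs01.dbf';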


http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/control.htm#i1006360
====================

Trace File:
When one of the Oracle background processes (such as DBWR, LGWR, PMON, SMON, and so on) encounters an exception, it will write a trace file. These trace files are also recorded in the alert.log. Trace files are also created for diagnostic dump events. An ORA-600 error also produces a trace file.

9- Mount- No Mount?
NOMOUNT: An Oracle instance can be started in the NOMOUNT stage only during database creation or the re-creation of control files.
MOUNT: Used to perform specific maintenance operations without opening the database:

• Renaming datafiles
• Enabling & disabling archive redo log file archiving options
• Performing full database recovery

10- SGA? It’s Components?
SGA: The Shared Global Area is allocated at instance startup and is a fundamental component of an Oracle instance.

Size defined by SGA_MAX_SIZE.
Components
Shared Pool (SHARED_POOL_SIZE)
Database Buffer Cache (DB_CACHE_SIZE)
• Redo Log Buffer (LOG_BUFFER)
Additional Components
• Large Pool
• Java Pool

11- Checkpoint? Why Checkpoint? Where it’s recorded? When it’s recorded?
Database Checkpoint: Checkpoints are used to determine where recovery should start.
Checkpoint position – where recovery should start
Checkpoint queue – linked list of dirty blocks
The position in the redo log where recovery should start is referred to as the checkpoint position.

Types of Checkpoint
1- Full checkpoint
<> All dirty buffers are written
<> Shutdown normal, immediate, or transactional
<> Alter system checkpoint
2- Incremental Checkpoint
3- Partial Checkpoint
<> Alter tablespace begin backup
<> Alter tablespace offline normal

When a checkpoint occurs, Oracle must update the headers of all datafiles to record the details of the checkpoint. This is done by the CKPT process. The CKPT process does not write blocks to disk; DBWn always performs that work.
Every 3 seconds CKPT records in the control file the RBA of the oldest entry in the checkpoint queue. This RBA represents the point in the redo log at which instance recovery is to begin after an instance failure. It can do this because all of the data blocks represented in prior redo records are guaranteed to have been written to disk by DBWn.
On a log switch, CKPT also writes this information to the headers of the datafiles.
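The current checkpoint SCN can be observed with a minimal query:

SQL> select checkpoint_change# from v$database;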

12- LGWR? When it writes?
LGWR performs sequential writes from the Redo Log Buffer to the Online redo Log files under the following situations
<> At Commit
<> When the Redo Log Buffer is one-third full
<> When there is more than 1 MB of changes recorded in the Redo Log Buffer
<> Before DBWn writes modified blocks
<> Every 3 seconds
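On a test system you can provoke a write yourself by forcing a log switch; a harmless sketch:

SQL> alter system switch logfile;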


13- Duties of DBA?
· To plan & Create Databases
· To Manage Database Availability
· To Manage Physical & Logical Structure
· To Manage Storage based on design
· To manage security
· Network Administration
· Backup & Recovery
· Database Tuning

14- What’s Statspack? How can I use Statspack?
STATSPACK: STATSPACK is a performance diagnosis tool, available since Oracle8i. STATSPACK can be considered BSTAT/ESTAT's successor, incorporating many new features. STATSPACK is a diagnosis tool for instance-wide performance problems; it also supports application tuning activities by providing data which identifies high-load SQL statements. STATSPACK can be used both proactively to monitor the changing load on a system, and also reactively to investigate a performance problem.

UTLBSTAT - UTLESTAT
The BSTAT-ESTAT utilities capture information directly from the Oracle's in-memory structures and then compare the information from two snapshots in order to produce an elapsed-time report showing the activity of the database. If we look inside utlbstat.sql and utlestat.sql, we see the SQL that samples directly from the view: V$SYSSTAT;

insert into stats$begin_stats select * from v$sysstat;
insert into stats$end_stats select * from v$sysstat;

How can I use Statspack
Create PERFSTAT Tablespace
The STATSPACK utility requires an isolated tablespace to obtain all of the objects and data. For uniformity, it is suggested that the tablespace be called PERFSTAT, the same name as the schema owner for the STATSPACK tables. It is important to closely watch the STATSPACK data to ensure that the stats$sql_summary table is not taking an inordinate amount of space.

SQL> CREATE TABLESPACE perfstat
DATAFILE '/u01/oracle/db/AKI1_perfstat.dbf' SIZE 1000M REUSE
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 512K
SEGMENT SPACE MANAGEMENT AUTO
PERMANENT
ONLINE;
Run catdbsyn.sql as SYS
Run dbmspool.sql as SYS

$ cd $ORACLE_HOME/rdbms/admin
$ sqlplus "/ as sysdba"
SQL> start spcreate.sql
Adjusting the STATSPACK Collection Level
STATSPACK has two types of collection options, level and threshold. The level parameter controls the type of data collected from Oracle, while the threshold parameter acts as a filter for the collection of SQL statements into the stats$sql_summary table.

SQL> SELECT * FROM stats$level_description ORDER BY snap_level;

Level 0: This level captures general statistics, including rollback segment, row cache, SGA, system events, background events, session events, system statistics, wait statistics, lock statistics, and Latch information.
Level 5: This level includes capturing high resource usage SQL Statements, along with all data captured by lower levels.
Level 6: This level includes capturing SQL plan and SQL plan usage information for high resource usage SQL Statements, along with all data captured by lower levels.
Level 7: This level captures segment level statistics, including logical and physical reads, row lock, itl and buffer busy waits, along with all data captured by lower levels.
Level 10: This level includes capturing Child Latch statistics, along with all data captured by lower levels.
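Once installed, snapshots are taken with the STATSPACK package, and a report is produced between two snapshot IDs; a sketch (the level argument is optional):

SQL> connect perfstat/perfstat
SQL> exec statspack.snap(i_snap_level => 7);
SQL> @?/rdbms/admin/spreport.sql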

15- Methods of Backup – (Cold & Hot Backup)?
A cold backup, also called an offline backup, is a database backup when the database is offline and thus not accessible for updating. This is the safest way to back up because it avoids the risk of copying data that may be in the process of being updated. However, a cold backup involves downtime because users cannot use the database while it is being backed up.
When system downtime must be minimized, a hot backup can provide an alternative to the cold backup. A hot backup can be done even as users access the database, but some method must be used to ensure that data being updated is noted and can be copied when the update is complete.

16 -Why do you want to be DBA? Why Not a Developer?
I think being an Oracle DBA is really rewarding. It can be a highly demanding job, but I feel that this is part of the exciting challenge of being a DBA. I chose this path for myself and I am really happy with it. It is an exciting and rewarding job.

17- What is RECOVERY Catalog? Why we need it? Complete command/steps of creating Recovery Catalog? How will it know about the Primary Database? -What role/Privileges are given to user when he is connected to Recovery Catalog? -How can I connect with RMAN? It’s Steps?
What is RECOVERY Catalog
• Recovery Catalog is schema that is created in a separate tablespace.
• RMAN propagates information about the database structure, archived redo log files, and datafile copies into the recovery catalog from the control file of target database.

http://www.idevelopment.info/data/Oracle/DBA_tips/RMAN_9i/RMAN9_4.shtml

To use RMAN, a recovery catalog is not necessary. Remember that without one, RMAN will always use the control file of the target database to store information about backup and recovery operations. To use a recovery catalog, you will first need to create a recovery catalog database and create a schema for it. The catalog (database objects) will be located in the default tablespace of the schema owner. Please note that the owner of the catalog cannot be the SYS user.

The recovery catalog database should be created on a different host, on different disks, and in a different database from the target database you will be backing up. If you do not, the benefits of using a recovery catalog are lost if you lose the database and need to restore.

The first step is to create a database for the recovery catalog. For the purpose of this example, I created an Oracle 9.2.0 database named CATDB. The example assumes the following:

• You have access to the SYS password for the database.
• A temporary tablespace named TEMP already exists.
• A normal tablespace named TOOLS exists and will be used to store the recovery catalog.
• The database is configured in the same way as all normal databases, for example, catalog.sql and catproc.sql have been successfully run.

Now, let's create the recovery catalog:

1. Start SQL*Plus and then connect with SYSDBA privileges to the database containing the recovery catalog:

% sqlplus "sys/change_on_install as sysdba"

2. Create a user and schema for the recovery catalog:

SQL> CREATE USER rman IDENTIFIED BY rman
  2  DEFAULT TABLESPACE tools
  3  TEMPORARY TABLESPACE temp
  4  QUOTA UNLIMITED ON tools;

User created.

3. Grant the RECOVERY_CATALOG_OWNER role to the schema owner. This role provides the user with privileges to maintain and query the recovery catalog:

SQL> GRANT RECOVERY_CATALOG_OWNER TO rman;

Grant succeeded.

4. Grant other desired privileges to the RMAN user:

SQL> GRANT CONNECT, RESOURCE TO rman;

Grant succeeded.

5. After creating the catalog owner, create the catalog itself by using the CREATE CATALOG command within the RMAN interface. This command will create the catalog in the default tablespace of the catalog owner. You will need to connect as the catalog owner to the database that will contain the catalog, as follows:

% rman catalog rman/rman@catdb

Recovery Manager: Release 9.2.0.1.0 - Production

Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

connected to recovery catalog database
recovery catalog is not installed

6. Now, run the CREATE CATALOG command to create the catalog. Note that this process can take several minutes to complete.

RMAN> create catalog;

recovery catalog created

Registering the Target Database
Before using RMAN with a recovery catalog, you will need to register the target database(s) in the recovery catalog. RMAN will obtain all the information it needs to register the target database from the database itself.
As long as each target database has a distinct DBID, you can register more than one target database in the same recovery catalog. Each database registered in a given catalog must have a unique database identifier (DBID), but not necessarily a unique database name.
You can use either the command-line utilities provided by RMAN or the Oracle Enterprise Manager GUI to register the target database. For the purpose of this example, I will be using the command-line utilities. I will be registering a database named TARGDB to a recovery catalog within a database named CATDB. The target database must be either mounted or opened in order to register it.

% . oraenv
ORACLE_SID = [TARGDB] ? TARGDB
% rman target backup_admin/backup_admin catalog rman/rman@catdb
Recovery Manager: Release 9.2.0.1.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

connected to target database: TARGDB (DBID=2457750772)
connected to recovery catalog database

RMAN> register database;
database registered in recovery catalog
starting full resync of recovery catalog
full resync complete

18- RMAN Incremental Backups? What are Differential & Cumulative Backups?
Incremental backup is a backup that includes only those blocks that have changed since the previous backup.
Click the link below for details about differential and cumulative backups.

http://download-uk.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm
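For illustration, hedged RMAN commands for the variants (a level 0 backup is the base; level 1 is differential by default and cumulative when stated):

RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;
RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;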

19- Write a statement/command for exporting all the objects of owner ‘HR’?
exp userid=hr/hr owner=hr direct=y

20- Direct=y
Determines whether to use direct or conventional path export. Direct path exports bypass the SQL command-processing layer, thereby enhancing performance.

21- Standby Databases?
A standby database is a transactionally consistent copy of the primary database. A standby database is initially created from a backup copy of the primary database. Once created, Data Guard automatically maintains the standby database by transmitting primary database redo data to the standby system and then applying the redo logs to the standby database.
Similar to a primary database, a standby database can be either a single-instance Oracle database or an Oracle Real Application Clusters database.
A standby database can be either a physical standby database or a logical standby database:

Physical standby database
Provides a physically identical copy of the primary database, with on-disk database structures that are identical to the primary database on a block-for-block basis. The database schema, including indexes, are the same. A physical standby database is kept synchronized with the primary database by recovering the redo data received from the primary database.
Logical standby database
Contains the same logical information as the production database, although the physical organization and structure of the data can be different. It is kept synchronized with the primary database by transforming the data in the redo logs received from the primary database into SQL statements and then executing the SQL statements on the standby database. A logical standby database can be used for other business purposes in addition to disaster recovery requirements. This allows users to access a logical standby database for queries and reporting purposes at any time. Thus, a logical standby database can be used concurrently for data protection and reporting.

For more details about Stand By databases visit the link below.
http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/concepts.htm

22- RAC?
Real Application Clusters harnesses the processing power of multiple interconnected computers. Real Application Clusters software and a collection of hardware known as a cluster unite the processing power of each component to create a robust computing environment.

You can use Real Application Clusters to deliver high performance, increased throughput, and high availability. Before deploying Real Application Clusters, however, you should understand Real Application Clusters processing.

In Real Application Clusters environments, all active instances can concurrently execute transactions against a shared database. Real Application Clusters coordinates each instance's access to the shared data to provide data consistency and data integrity.

Harnessing the power of clusters offers obvious advantages. A large task divided into subtasks and distributed among multiple nodes is completed sooner and more efficiently than if you processed the entire task on one node. Cluster processing also provides increased performance for larger workloads and for accommodating rapidly growing user populations.

For more details visit the link below.
http://download-uk.oracle.com/docs/cd/B10501_01/rac.920/a96597/psintro.htm


23- Partitioning (List, Range, Hash)

More details: http://www.devarticles.com/c/a/Oracle/Partitioning-in-Oracle/
Partitioning provides facilities for implementing large, scalable applications. Enables control over tables & indexes at a lower level of granularity than is possible with basic enterprise edition.

CREATE TABLESPACE part1
DATAFILE 'c:\temp\part01.dbf' SIZE 50M
BLOCKSIZE 8192
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 256K
SEGMENT SPACE MANAGEMENT AUTO
ONLINE;

CREATE TABLESPACE part2
DATAFILE 'c:\temp\part02.dbf' SIZE 50M
BLOCKSIZE 8192
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 256K
SEGMENT SPACE MANAGEMENT AUTO
ONLINE;

CREATE TABLESPACE part3
DATAFILE 'c:\temp\part03.dbf' SIZE 50M
BLOCKSIZE 8192
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 256K
SEGMENT SPACE MANAGEMENT AUTO
ONLINE;

CREATE TABLESPACE part4

DATAFILE 'c:\temp\part04.dbf' SIZE 50M
BLOCKSIZE 8192
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 256K
SEGMENT SPACE MANAGEMENT AUTO
ONLINE;

HASH Partitioning
Hash partitioning maps data to partitions based on a hashing algorithm, evenly distributing data between the partitions. This is typically used where ranges aren't appropriate, i.e. customer number, product ID.
It enables partitioning of data that does not lend itself to range or list partitioning. To view the numbers Oracle uses for hashing:
SELECT program, sql_hash_value, prev_hash_value FROM gv$session;

CREATE TABLE hash_part (
prof_history_id NUMBER(10),
person_id NUMBER(10) NOT NULL,
organization_id NUMBER(10) NOT NULL,
record_date DATE NOT NULL,
prof_hist_comments VARCHAR2(2000))
PARTITION BY HASH (prof_history_id)
PARTITIONS 3
STORE IN (part1, part2, part3);

desc hash_part

SELECT table_name, tablespace_name, partitioned
FROM user_tables;

desc user_tab_partitions

SELECT partition_name, tablespace_name
FROM user_tab_partitions;
---------------------------------------------------------------------------

LIST Partitioning
Explicitly controls how rows map to partitions. You do this by specifying a list of discrete values for the partitioning key in the description for each partition.

CREATE TABLE list_part (
deptno NUMBER(10),
deptname VARCHAR2(20),
quarterly_sales NUMBER(10,2),
state VARCHAR2(2))
PARTITION BY LIST (state)
(PARTITION q1_northwest VALUES ('OR', 'WA')
TABLESPACE part1,
PARTITION q1_southwest VALUES ('AZ', 'CA', 'NM')
TABLESPACE part2,
PARTITION q1_northeast VALUES ('NY', 'VT', 'NJ')
TABLESPACE part1,
PARTITION q1_southeast VALUES ('FL', 'GA')
TABLESPACE part2,
PARTITION q1_northcent VALUES ('MN', 'WI')
TABLESPACE part1,
PARTITION q1_southcent VALUES ('OK', 'TX')
TABLESPACE part2);

SELECT table_name, tablespace_name, partitioned
FROM user_tables;

SELECT partition_name, tablespace_name, high_value
FROM user_tab_partitions;

INSERT INTO list_part VALUES (10, 'A', 1000, 'OR');
INSERT INTO list_part VALUES (20, 'B', 1000, 'AZ');
INSERT INTO list_part VALUES (10, 'A', 1000, 'WA');
INSERT INTO list_part VALUES (20, 'B', 1000, 'WA');
INSERT INTO list_part VALUES (10, 'A', 1000, 'AZ');
INSERT INTO list_part VALUES (20, 'B', 1000, 'CA');
COMMIT;

SELECT * FROM list_part;
SELECT * FROM list_part PARTITION(q1_northwest);
---------------------------------------------------------------------------

RANGE Partitioning
Range partitioning maps data to partitions based on ranges of column values (usually a date column).

CREATE TABLE range_part (prof_history_id NUMBER(10),
person_id NUMBER(10) NOT NULL,
organization_id NUMBER(10) NOT NULL,
record_date DATE NOT NULL,
ph_comments VARCHAR2(200))
PARTITION BY RANGE (record_date) (
PARTITION yr0
VALUES LESS THAN (TO_DATE('01-JAN-2000','DD-MON-YYYY'))
TABLESPACE part1,
PARTITION yr1 VALUES LESS THAN (TO_DATE('01-JAN-2001','DD-MON-YYYY'))
TABLESPACE part2,
PARTITION yr2 VALUES LESS THAN (TO_DATE('01-JAN-2002','DD-MON-YYYY'))
TABLESPACE part3,
PARTITION yr9 VALUES LESS THAN (MAXVALUE)
TABLESPACE part4);

SELECT table_name, tablespace_name, partitioned
FROM user_tables;

SELECT partition_name, tablespace_name, high_value
FROM user_tab_partitions;

24- If my application is slow, how can we make it efficient and faster? How will you tune the application if it is slow?

25- RMAN Backups?
RMAN 9i
http://download-uk.oracle.com/docs/cd/B10501_01/server.920/a96566/rcmintro.htm
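
A minimal RMAN session, as a sketch (assuming OS authentication on the target database and a configured default backup device):

RMAN> CONNECT TARGET /
RMAN> BACKUP DATABASE;
RMAN> BACKUP ARCHIVELOG ALL;
RMAN> LIST BACKUP;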

26- ADDM.
ADDM: Automatic Database Diagnostic Monitor. ADDM enables Oracle to diagnose its own performance problems. For example, ADDM identifies the most resource-intensive SQL statements and passes them to the SQL Tuning Advisor.

ADDM automatically analyzes the state of the database at regular intervals (by default every 60 minutes, after each AWR snapshot), producing database performance diagnostics. In effect, it can replace many of the hand-written scripts that join the many v$ views. ADDM can be run from Enterprise Manager or through a PL/SQL interface.

The SQL Tuning Advisor is one of Oracle Database 10g's quick wins for DBAs: it takes the poorly performing statements that ADDM, the new built-in diagnostic engine, identifies and produces concrete tuning recommendations.
ADDM details http://www.oracle-base.com/articles/10g/AutomaticDatabaseDiagnosticMonitor10g.php
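
As a sketch, an ADDM report can also be generated from SQL*Plus with the supplied script, which prompts for a begin and end snapshot:

SQL> @?/rdbms/admin/addmrpt.sql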


27- AWR.
AWR (Automatic Workload Repository) periodically samples system performance (by default every 60 minutes) and stores the information it finds (by default for up to 7 days). This allows retrieving information about workload changes and database usage patterns.
This information is the basis for all self-management decisions. For example, it makes it possible to identify the SQL statements with the
· largest CPU consumption
· most buffer gets
· most disk reads
· most parse calls
· most shared memory
The data is collected by the MMON background process.
The information is stored in the sysaux tablespace.
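
As a sketch, you can list the available snapshots and generate an AWR report from SQL*Plus (awrrpt.sql prompts for the snapshot range):

SQL> SELECT snap_id, begin_interval_time FROM dba_hist_snapshot ORDER BY snap_id;
SQL> @?/rdbms/admin/awrrpt.sql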

28- Flashback? What is flashback in 10g?
With Oracle's flashback, it is possible to travel back in time, so to speak. That is, flashback provides a means to see data that was altered through DML operations and the state of database objects that were changed with DDL operations. Oracle subcategorizes flashback as follows:
• Flashback query: get previous data with the AS OF clause of a SELECT statement. See Flashback query example 1.
• Flashback version query: get the changes to data within a time interval using the VERSIONS BETWEEN clause of a SELECT statement. See Flashback version query example 1.
• Flashback transaction query: find the changes made by a specific transaction through the flashback_transaction_query view. See Flashback transaction query example 1.
• Flashback table: return a table to its state at a previous point in time.
• Flashback drop: undrops a dropped table.
• Flashback database: return the whole database to a previous point in time.
To make the query-level flashback features possible, Oracle uses undo.

http://www.adp-gmbh.ch/ora/admin/flashback/version_query_ex_1.html
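
A minimal flashback query sketch (assuming an employees table, as in the HR sample schema, and sufficient undo retention):

SELECT * FROM employees AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '15' MINUTE);

SELECT versions_starttime, versions_operation, salary
FROM employees
VERSIONS BETWEEN TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' HOUR) AND SYSTIMESTAMP
WHERE employee_id = 100;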

29- What is command for restoring from recycle bin?
To reinstate the table, all you have to do is use the FLASHBACK TABLE command:

SQL> FLASHBACK TABLE RECYCLETEST TO BEFORE DROP;

To empty the recycle bin entirely, use PURGE RECYCLEBIN;. But what if you want to drop the table completely, without needing the flashback feature? In that case, you can drop it permanently using:

DROP TABLE RECYCLETEST PURGE;
This command will not rename the table to the recycle bin name; rather, it will be deleted permanently, as it would have been pre-10g.
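
To see what is currently sitting in the recycle bin (a quick sketch):

SQL> SHOW RECYCLEBIN;
SQL> SELECT object_name, original_name, droptime FROM recyclebin;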

2nd interview.
30- Difference between User & Schema?

A schema is a collection of database objects. A schema is owned by a database user and has the same name as that user. Schema objects are the logical structures that directly refer to the database's data. Schema objects include structures like tables, views, and indexes. (There is no relationship between a tablespace and a schema. Objects in the same schema can be in different tablespaces, and a tablespace can hold objects from different schemas.)

31- Cluster key?
A cluster provides an optional method of storing table data. A cluster is made up of a group of tables that share the same data blocks. The tables are grouped together because they share common columns and are often used together. For example, the emp and dept tables share the deptno column. When you cluster the emp and dept tables, Oracle physically stores all rows for each department from both the emp and dept tables in the same data blocks.

The cluster key is the column, or group of columns, that the clustered tables have in common. You specify the columns of the cluster key when creating the cluster. You subsequently specify the same columns when creating every table added to the cluster. Each cluster key value is stored only once each in the cluster and the cluster index, no matter how many rows of different tables contain the value.
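
A minimal sketch of creating such a cluster on deptno (all names are illustrative):

CREATE CLUSTER emp_dept (deptno NUMBER(2));
CREATE INDEX idx_emp_dept ON CLUSTER emp_dept;

CREATE TABLE dept (
  deptno NUMBER(2) PRIMARY KEY,
  dname  VARCHAR2(14))
CLUSTER emp_dept (deptno);

CREATE TABLE emp (
  empno  NUMBER(4) PRIMARY KEY,
  ename  VARCHAR2(10),
  deptno NUMBER(2) REFERENCES dept)
CLUSTER emp_dept (deptno);

Note that the cluster index on the cluster key must exist before rows can be inserted into the clustered tables.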


32- RAC?
Go to Question/Answer 22

33- What are duties of DBA?
• To plan and create databases
• To manage database availability
• To manage physical and logical structures
• To manage storage based on design
• To manage security
• Network administration
• Backup and recovery
• Database tuning
• Import and export
• Creating tables
• Creating schemas and objects

34- What is Difference among NOMOUNT & MOUNT & OPEN modes? When Database will be available for read?
NOMOUNT
When you issue the startup command, the first thing the database will do is enter the nomount stage. During the nomount stage, Oracle first opens and reads the initialization parameter file (init.ora) to see how the database is configured. For example, the sizes of all of the memory areas in Oracle are defined within the parameter file.
After the parameter file is accessed, the memory areas associated with the database instance are allocated. Also, during the nomount stage, the Oracle background processes are started. Together, we call these processes and the associated allocated memory the Oracle instance. Once the instance has started successfully, the database is considered to be in the nomount stage. If you issue the startup command, then Oracle will automatically move onto the next stage of the startup, the mount stage.
The database is started in NOMOUNT mode
<> When Creating a Database
<> When re-creating Control File.

MOUNT MODE
When the startup command enters the mount stage, it opens and reads the control file. The control file is a binary file that tracks important database information, such as the location of the database datafiles.
In the mount stage, Oracle determines the location of the datafiles, but does not yet open them. Once the datafile locations have been identified, the database is ready to be opened.
Some forms of recovery require that the database be in the mount stage.

OPEN MODE
The last startup step for an Oracle database is the open stage. When Oracle opens the database, it accesses all of the datafiles associated with the database. Once it has accessed the database datafiles, Oracle makes sure that all of the database datafiles are consistent.
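
The database becomes available for queries only at the OPEN stage (or when opened read-only from the mount stage). The three stages can also be stepped through explicitly from SQL*Plus:

SQL> STARTUP NOMOUNT;        -- parameter file read, SGA allocated, background processes started
SQL> ALTER DATABASE MOUNT;   -- control file opened and read
SQL> ALTER DATABASE OPEN;    -- datafiles opened; the database is now usable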

35- What is SGA? And what are its components?
System Global Area:
The System Global Area (SGA) is a shared memory region that contains data and control information for one Oracle instance. Oracle allocates the SGA when an instance starts and deallocates it when the instance shuts down. Each instance has its own SGA.
Users currently connected to an Oracle database share the data in the SGA. For optimal performance, the entire SGA should be as large as possible (while still fitting in real memory) to store as much data in memory as possible and to minimize disk I/O.
SGA Components: The information stored in the SGA is divided into several types of memory structures, including the database buffers, redo log buffer, and the shared pool.
Database Buffer Cache of the SGA: Database buffers store the most recently used blocks of data. The set of database buffers in an instance is the database buffer cache. The buffer cache contains modified as well as unmodified blocks. Because the most recently (and often, the most frequently) used data is kept in memory, less disk I/O is necessary, and performance is improved.
Redo Log Buffer of the SGA: The redo log buffer stores redo entries—a log of changes made to the database. The redo entries stored in the redo log buffers are written to an online redo log, which is used if database recovery is necessary. The size of the redo log buffer is static.
Shared Pool of the SGA: The shared pool contains shared memory constructs, such as shared SQL areas. A shared SQL area is required to process every unique SQL statement submitted to a database. A shared SQL area contains information such as the parse tree and execution plan for the corresponding statement. A single shared SQL area is used by multiple applications that issue the same statement, leaving more shared memory for other uses.
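
A quick way to see the current SGA and its components from SQL*Plus:

SQL> SHOW SGA;
SQL> SELECT * FROM v$sga;
SQL> SELECT pool, SUM(bytes) FROM v$sgastat GROUP BY pool;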

36- How can we allocate SGA? How we can know it proper size? How should we estimate? How can we calculate size of SGA?
Check SGA Size Tips at following links.
http://www.dba-oracle.com/t_sga_sizing.htm
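
In addition, the advisory views give data-driven sizing hints. For example, a sketch that estimates physical reads for different buffer cache sizes (assuming an 8K block size):

SELECT size_for_estimate, estd_physical_reads
FROM v$db_cache_advice
WHERE name = 'DEFAULT' AND block_size = 8192;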

38- How can you deal with fragmentation in tables?
(No Comments)

check this link for 10g Shrink. http://www.dbazine.com/blogs/blog-cf/chrisfoot/blogentry.2005-09-10.8045550726
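
In 10g, segment shrink (described at the link above) is the usual approach; a sketch, assuming a table named mytable stored in an ASSM tablespace:

ALTER TABLE mytable ENABLE ROW MOVEMENT;
ALTER TABLE mytable SHRINK SPACE CASCADE;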

39- Suppose you export table data and then import it again. Does that defragment the table?
Yes; the import recreates the table and loads the rows compactly, which removes the fragmentation.

40- The interviewer asked me: "Tell me, if you are the DBA, you left the office at night, and you arrive at the office in the morning, what will you check in the database?"
(No Comments)
I have recently joined a company, and we have a checklist that I go through in the morning; it contains the items below (a sketch of scripting these checks follows the list):
Primary Database:
* OracleServiceABC
* OracleListenerABC
* ArchivedLogSequence #
Backups:
* HOT BACKUPS
* Export Full Database
Standby Databases:
* OracleServiceABCYZ
* OracleListenerABCYZ
* ArchivedLogSequence #
* DataGuard Status
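
As a sketch, the Windows service checks can be scripted in a batch file (the service names are illustrative, and sc.exe ships with Windows XP/2003):

sc query OracleServiceABC | find "STATE"
sc query OracleListenerABC | find "STATE"
lsnrctl status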

41- How can you schedule a job?
(No comments)
You can check my blog post on scheduling a batch job; a DBMS_SCHEDULER sketch follows the link.
http://rakeshocp.blogspot.com/2007/09/batch-file-scripts.html
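
Inside the database, 10g's DBMS_SCHEDULER can do the same job; a minimal sketch (the job name and action are illustrative):

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'NIGHTLY_STATS',  -- illustrative name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DBMS_STATS.GATHER_DATABASE_STATS; END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY;BYHOUR=2',
    enabled         => TRUE);
END;
/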

42- RAID?
In computing, specifically computer storage, RAID—a Redundant Array of Independent (or Inexpensive) Drives or Disks—is an umbrella term for data storage schemes that divide and/or replicate data among multiple hard drives. RAID can be designed to provide increased data reliability, increased I/O performance, or both.

Follow the links below for RAID information
http://www.ecs.umass.edu/ece/koren/architecture/Raid/raidhome.html
http://en.wikipedia.org/wiki/RAID
http://www.dba-oracle.com/oracle_tips_raid_usage.htm

43- New features of 10g for DBAz?
Check the 10g new features for DBAs at http://www.oracle.com/technology/pub/articles/10gdba/index.html

44- What is Database? RDBMS? Rules?
DATABASE: A collection of information organized in such a way that a computer program can quickly select desired pieces of data.

RDBMS: a type of database management system (DBMS) that stores data in the form of related tables. Relational databases are powerful because they require few assumptions about how data is related or how it will be extracted from the database. As a result, the same database can be viewed in many different ways. An important feature of relational systems is that a single database can be spread across several tables. This differs from flat-file databases, in which each database is self-contained in a single table.
Almost all full-scale database systems are RDBMSs. Small database systems, however, use other designs that provide less flexibility in posing queries.

Rules: Codd's 12 rules are a set of rules proposed by Edgar F. Codd, a pioneer of the relational model for databases, designed to define what is required from a database management system for it to be considered relational, i.e. an RDBMS. Visit the link below for Codd's 12 rules.
http://en.wikipedia.org/wiki/Codd

I have tried my level best to help fresh DBAz get an idea of the questions that are most often asked in interviews for a fresh DBA job.
I couldn't answer a few questions; that is why I have marked them (No Comments).

Regards!!
Rakesh Kumar Soni.

Sunday, September 23, 2007

Batch File Scripts..

Batch file 1

cmd >
C:\> edit test.bat    (creating a batch file named test.bat)

test.bat file contains

sqlplus hr/hr@ROCK @D:\ctas.sql

and ctas.sql contains

create table batch_test as select * from employees;
exit;

============================================================

Batch file 2

Now I will create a batch file for a hot backup. It will run automatically (at a specified time and interval of days) and copy all the tablespaces including SYSTEM, the archived redo log files, the redo logs, and the control files, daily.

cmd>
c:> edit hot.bat

sqlplus system/manager@rock @d:\hot.sql

This batch file is calling hot.sql and hot.sql contains following commands.

host MD d:\hot\"%DATE:/=_%"
host MD d:\hot\"%DATE:/=_%"\ARCHIVELOGS
alter tablespace SYSTEM begin backup;
host copy D:\oracle\oradata\rock\SYSTEM01.DBF d:\hot\"%DATE:/=_%";
alter tablespace SYSTEM end backup;
exit;

The above script (hot.sql) creates a new folder named Sun 09_23_2007 (the current date) in the directory d:\hot\, plus an ARCHIVELOGS folder within it. Then the backup begins: it copies the SYSTEM01.DBF file into Sun 09_23_2007, and when all files for the specified tablespace have been copied, it places the tablespace back in default mode (end backup).

You can check the 09_23_2007 folder to verify whether the files were copied, and issue the following query to check:

SELECT * FROM v$backup;

Check the STATUS column: if the status is NOT ACTIVE, the file is not in backup mode.

SQL> select * from v$backup;
FILE#  STATUS      CHANGE#  TIME
-----  ----------  -------  ---------
    1  NOT ACTIVE   787967  24-SEP-07

If you issue the alter tablespace system begin backup command, the status of the file changes.

SQL> alter tablespace system begin backup;
Tablespace altered.

SQL> select * from v$backup;
FILE#  STATUS      CHANGE#  TIME
-----  ----------  -------  ---------
    1  ACTIVE       788012  24-SEP-07

You can check the STATUS column of v$backup to verify whether the script ran successfully and all the tablespaces are back in the default (end backup / not in backup) mode.

Note: this hot.sql script covers only the SYSTEM tablespace; I will extend it to all tablespaces, archived redo log files, redo logs, and control files some time later.
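
As a sketch of that extension, the same pattern repeats per tablespace, followed by a log switch and a binary control file copy (the USERS datafile path and archive destination are assumptions based on the layout used earlier):

alter tablespace USERS begin backup;
host copy D:\oracle\oradata\rock\USERS01.DBF d:\hot\"%DATE:/=_%";
alter tablespace USERS end backup;

-- force a log switch so the latest redo is archived, then copy the archives
alter system archive log current;
host copy D:\oracle\Archive_rock\*.* d:\hot\"%DATE:/=_%"\ARCHIVELOGS;

-- take a binary control file backup (a fixed path, since %DATE% is not expanded inside a quoted SQL string)
alter database backup controlfile to 'd:\hot\control_backup.ctl' reuse;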