OracleDBAdminNote

(1) Download the Oracle Developer Day VM from the Oracle web site.
(2) All passwords are oracle.
(3) Configure networking in Oracle VirtualBox.
(4) Get started (of course, the Oracle web site is the best reference).

0) S Q L P L U S tricks
• Review the last command entered:
L

• Redo the last command entered:
R

•Correcting last command entered:
c/text_to_be_corrected/corrected_text
Type R to have the command executed.

• see the last command you typed (the command buffer)
list

• execute the command buffer
run

• edit a command you just typed (edits the command buffer)
edit

• execute a command (must end in ;)
simply type the command number (1 by default)

• require screen to stop after each page of output
set pause on

• set the page size to n lines
set pagesize n

• find out what tables are available
select table_name from tabs

• get information about a table (i.e., its schema)
describe

• stop a command that’s running for too long
Ctrl-C

• run system commands
host
e.g.
host ls

• redirect screen output to a file (including everything you type)
spool 'file.txt'
when done, stop writing to file
spool off

1) Architecture: INSTANCE = MEMORY (SGA + PGA) + PROCESSES (SERVER PROCESSES + BACKGROUND PROCESSES)

help index
help show
show all
show sga
show user

show parameter memory_target;
show parameter memory_max_target;

show parameter shared_server;

select name from v$parameter;

select instance_name, status, database_status, archiver from v$instance;

select * from v$sga;

show parameter file;
show parameter spfile;
show parameter control_file;

show parameter db_writer_processes;

Installation
$ cd /stage/11.2.0/clusterware/Disk1
$ ./runInstaller

2) START UP

linux init file: /etc/inittab

oracle auto startup script file at Linux: /etc/init.d/oracle

##############################
#!/bin/sh
#
#
export JAVA_HOME=/usr/java/latest
export ORACLE_HOME=/home/oracle/app/oracle/product/11.2.0/dbhome_2
export ORACLE_HOME_LISTENER=$ORACLE_HOME
export ORACLE_TRACE=Y
export PATH=$JAVA_HOME/bin:$ORACLE_HOME/bin:$PATH
echo $PATH

# Source function library.
. /etc/rc.d/init.d/functions

# See how we were called.
case "$1" in
start)
su - oracle -c "$ORACLE_HOME/bin/lsnrctl start"
su - oracle -c $ORACLE_HOME/bin/dbstart
su - oracle -c "/home/oracle/apexlistener.sh start"
;;
stop)
su - oracle -c $ORACLE_HOME/bin/dbshut
su - oracle -c "$ORACLE_HOME/bin/lsnrctl stop"
su - oracle -c "/home/oracle/apexlistener.sh stop"
;;
restart|reload)
su - oracle -c $ORACLE_HOME/bin/dbshut
su - oracle -c "$ORACLE_HOME/bin/lsnrctl stop"
su - oracle -c "/home/oracle/apexlistener.sh stop"
su - oracle -c "$ORACLE_HOME/bin/lsnrctl start"
su - oracle -c $ORACLE_HOME/bin/dbstart
su - oracle -c "/home/oracle/apexlistener.sh start"
;;
status)
$ORACLE_HOME/bin/lsnrctl status
;;
*)
echo $"Usage: $0 {start|stop|restart|reload|status}"
exit 1
esac

exit 0

##########################

3) S T O R A G E

PHYSICAL (FILES on Files Systems or Raw Device or ASM):
CONTROL FILES + DATA FILES + ONLINE REDO LOG FILES +
PARAMETER FILE + PASSWORD FILE + BACKUP FILES + ARCHIVED LOG FILES +
ALERT LOG + TRACE FILES

Logical: DATABASE <- TABLESPACE <- SEGMENT <- EXTENT <- BLOCK

LINKS: TABLESPACE <- Many FILES

BLOCK: maps to OS disk blocks
EXTENT: a set of contiguous blocks
SEGMENT: a logical data structure (DATA, INDEX, UNDO, TEMPORARY) made up of extents

show parameter db_block_size
select TABLESPACE_NAME,BLOCK_SIZE,CONTENTS from DBA_TABLESPACES;
select TABLESPACE_NAME, FILE_NAME from DBA_DATA_FILES;
select count(*) from DBA_SEGMENTS where TABLESPACE_NAME='USERS';
select distinct SEGMENT_TYPE from DBA_SEGMENTS;
select count(*) from DBA_EXTENTS where TABLESPACE_NAME='USERS';

4) T O O L S

• Oracle Universal Installer (./runInstaller)
• Database Configuration Assistant (dbca)
• Database Upgrade Assistant (dbua)
• Oracle Net Manager (netmgr)
• Oracle Net Configuration Assistant (netca)
• Oracle Enterprise Manager (em : database console or grid control)
• Server Control Utility (srvctl)
• SQL*Plus (sqlplus)
• Recovery Manager (rman)
• Data Pump
• SQL*Loader

Command-line tools:
– To administer Enterprise Manager:
emctl start | status | stop dbconsole
– To administer the listener:
lsnrctl start | status | stop

em for database console:
https://127.0.0.1:1158/em

Useful Command
emctl start dbconsole
sqlplus user/pwd or /nolog
connect user

mybatch.sh
#
#
sqlplus hr/oracle <<EOF
select count(*) from employees;
update employees set salary = salary * 1.10;
commit;
quit
EOF

sqlplus hr/oracle @script.sql

5) Initialization Parameters

Show parameter SPFILE

By default, Oracle Database looks for your initialization parameter file in the platform-specific default location ($ORACLE_HOME/dbs), examining filenames in the following order:

spfile$ORACLE_SID.ora
spfile.ora
init$ORACLE_SID.ora

If an SPFILE exists, STARTUP uses it (binary format).

You can startup like

STARTUP PFILE = /u01/oracle/dbs/init.ora

File:

* SPFILE: $ORACLE_HOME/dbs/spfileSID.ora (binary, cannot edit)
* Text init para file: $ORACLE_HOME/dbs/initSID.ora

Typical:

CONTROL_FILES
DB_FILES
PROCESSES (max number of OS user processes that can connect)
DB_BLOCK_SIZE (8192 default)
SGA_TARGET (non-zero enables ASMM; the pool parameters below are then auto-tuned)
DB_CACHE_SIZE (buffer cache)
SHARED_POOL_SIZE (shared pool)
LARGE_POOL_SIZE (large pool)
JAVA_POOL_SIZE (Java pool)
STREAMS_POOL_SIZE (Streams pool)
PGA_AGGREGATE_TARGET (default is 20% of the SGA or 20 MB, whichever is greater)
UNDO_MANAGEMENT (AUTO or MANUAL)
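For orientation, a minimal text parameter file (initSID.ora) built from the parameters above might look like this; all values are illustrative, not recommendations:

```
# initORCL.ora -- illustrative sketch, not a tuned configuration
db_name=orcl
control_files=('/u01/oradata/orcl/control01.ctl','/u02/oradata/orcl/control02.ctl')
db_block_size=8192
processes=150
sga_target=800M               # non-zero: ASMM manages the individual pools
pga_aggregate_target=200M
undo_management=AUTO
```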

Check:

select name, value from v$parameter where name like '%pool%'
show parameter SGA_TARGET
show parameter db

Type:

Static: can be changed only in the parameter file; restart needed
Dynamic: alter session / alter system set xxx=xxx SCOPE=SPFILE / MEMORY / BOTH
session level
system level

Example:
alter session set NLS_DATE_FORMAT='mon dd yyyy';
select SYSDATE from DUAL;

6) A L E R T L O G
Alert log:
XML format:
select VALUE from V$DIAG_INFO where NAME='Diag Alert';
$ORACLE_BASE/diag/rdbms/<db_name>/<instance_name>/alert

Text format:
select VALUE from V$DIAG_INFO where NAME='Diag Trace';
$ORACLE_BASE/diag/rdbms/<db_name>/<instance_name>/trace

7) Dynamic Performance View (V$)

select TABLE_NAME from DICT where TABLE_NAME like 'V$%'; (names are singular)

select NAME from v$fixed_table where NAME like 'V$%';

8) Data Dictionary
select TABLE_NAME from DICT where TABLE_NAME like 'DBA_%'; (names are plural)
select TABLE_NAME from DICT where TABLE_NAME like 'USER_%';
select TABLE_NAME from DICT where TABLE_NAME like 'ALL_%';

Not all dictionary views start with the prefix DBA_, ALL_, or USER_. The following views or
synonyms to views are exceptions:
• AUDIT_ACTIONS
• CAT
• CHANGE_PROPAGATIONS
• CHANGE_PROPAGATION_SETS
• CHANGE_SETS
• CHANGE_SOURCES
• CHANGE_TABLES
• CLIENT_RESULT_CACHE_STATS$
• CLU
• COLS
• COLUMN_PRIVILEGES
• DATABASE_COMPATIBLE_LEVEL
• DBMS_ALERT_INFO
• DBMS_LOCK_ALLOCATED
• DICT
• DICTIONARY
• DICT_COLUMNS
• DUAL
• GLOBAL_NAME
• IND
• INDEX_HISTOGRAM
• INDEX_STATS
• LOGSTDBY_UNSUPPORTED_TABLES
• NLS_DATABASE_PARAMETERS
• NLS_INSTANCE_PARAMETERS
• NLS_SESSION_PARAMETERS
• OBJ
• RECYCLEBIN
• RESOURCE_COST
• ROLE_ROLE_PRIVS
• ROLE_SYS_PRIVS
• ROLE_TAB_PRIVS
• SEQ
• SESSION_PRIVS
• SESSION_ROLES
• SM$VERSION
• SYN
• TABLE_PRIVILEGES
• TABS

9) A S M Instance
The combination of the background processes and the SGA is called an Oracle ASM instance.

SGA (min 256 MB)

SQLPLUS

$ . oraenv
ORACLE_SID = [orcl] ? +ASM
The Oracle base for ORACLE_HOME=/u01/app/oracle/product/11.2.0/grid is
/u01/app/oracle
$ sqlplus / AS SYSASM

Connected to an idle instance.
SQL> startup

SQL> CREATE DISKGROUP dgroupA NORMAL REDUNDANCY
FAILGROUP controller1 DISK
'/devices/A1' NAME diskA1 SIZE 120G FORCE,
'/devices/A2'
FAILGROUP controller2 DISK
'/devices/B1',
'/devices/B2';

SQL> ALTER DISKGROUP dgroupA ADD DISK
'/dev/sde1' NAME A5,
'/dev/sdf1' NAME A6,
'/dev/sdg1' NAME A7,
'/dev/sdh1' NAME A8;

or
ALTER DISKGROUP dgroupA ADD DISK '/devices/A*';

SQL> ALTER DISKGROUP dgroupA DROP DISK A5;

SQL> ALTER DISKGROUP dgroupA
DROP DISK A6
ADD FAILGROUP controller3
DISK '/dev/sdi1' NAME A9;

SQL> ALTER DISKGROUP dgroupA UNDROP DISKS;

ASMCMD

$ . oraenv
ORACLE_SID = [orcl] ? +ASM
The Oracle base for ORACLE_HOME=/u01/app/oracle/product/11.2.0/grid is
/u01/app/oracle
$ asmcmd
Connected to an idle instance.
ASMCMD> startup

ASMCMD> ls -l +DATA/orcl/datafile
Type Redund Striped Time Sys Name
DATAFILE MIRROR COARSE JUL 08 21:00:00 Y SYSTEM.256.689832921
DATAFILE MIRROR COARSE JUL 08 21:00:00 Y SYSAUX.257.689832923
..

ASMCMD> shutdown --abort
ASM instance shut down

SQL> DROP DISKGROUP dgroupA INCLUDING CONTENTS;

SQL> shutdown abort

10) Oracle Net Service

Oracle Net Listener:

EM DB Control -> Oracle Net Config Files ->…->Listener on server -> DB

$ORACLE_HOME/network/admin/listener.ora (or the directory pointed to by TNS_ADMIN)

Tools:
• Enterprise Manager Net Services Administration page
• Oracle Net Manager
• Oracle Net Configuration Assistant
• Command line (lsnrctl)

lsnrctl
LSNRCTL>
LSNRCTL> start
LSNRCTL> status
LSNRCTL> stop custom_lis
Connecting to
(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=host01)(PORT=5521)))
The command completed successfully

Database Service:
For a listener to forward client connections to an instance, it must know the service name
(the same as the global database name), the ORACLE_HOME path, and the SID (the same as
the instance name).

Naming Methods:
• Easy connect naming:
CONNECT username/password@host[:port][/service_name]
The easy connect naming method requires no configuration.

tnsping host01.example.com:1521/orcl

• Local naming: The local naming method stores connect descriptors (identified by their net
service name) in a local configuration file named tnsnames.ora on the client.
$ORACLE_HOME/network/admin/tnsnames.ora or TNS_ADMIN

tnsping orcl

Communication between databases:

REMOTE_ORCL =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)
(HOST = host02.example.com)
(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = orcl.example.com)
)
)

CONNECT hr/hr@orcl;
CREATE DATABASE LINK remote
CONNECT TO HR IDENTIFIED BY HR
USING 'REMOTE_ORCL';

select owner, db_link, username from dba_db_links;

SELECT * FROM employees@remote;

11) Database Storage Structure

EM->Server->Storage section

DBA_TABLESPACES & DBA_DATA_FILES
V$TABLESPACE & V$DATAFILE
V$DBFILE
V$TEMPFILE

Tablespaces->Actions: Show Tablespace Contents, then click Extent Map

OMF:
DB_CREATE_FILE_DEST
DB_CREATE_ONLINE_LOG_DEST_n
DB_RECOVERY_FILE_DEST

SQL> ALTER SYSTEM SET DB_CREATE_FILE_DEST = '+DATA';
or
SQL> ALTER SYSTEM SET DB_CREATE_FILE_DEST = '/u01/oradata';
SQL> CREATE TABLESPACE tbs_1;

12) User Security

* user: (EM->Server->(Security) Users)
CREATE USER “LOGAN_USER”
PROFILE “DEFAULT” IDENTIFIED BY “*******”
DEFAULT TABLESPACE “USERS”
TEMPORARY TABLESPACE “TEMP” ACCOUNT UNLOCK
GRANT “CONNECT” TO “LOGAN_USER”

* system privilege:
GRANT CREATE TABLE TO logan_user -- e.g., allow the user to create tables

GRANT <system_privilege> TO <user|role> [WITH ADMIN OPTION]

* object privilege:
GRANT <object_privilege> ON <object> TO <user|role>
[WITH GRANT OPTION]

* role (a group of system / object privileges)
(EM->Server->(Security) Roles)
pre-defined roles
GRANT “RESOURCE” TO “LOGAN_USER”
ALTER USER “LOGAN_USER” DEFAULT ROLE ALL

CREATE ROLE secure_application_role
IDENTIFIED USING <plsql_package>;

SET ROLE vacationdba;

* Profiles
(EM->Server->(Security) Profiles)
Profiles impose a named set of resource limits on database usage and instance resources. Profiles
also manage the account status and place limitations on users’ passwords (length, expiration
time, and so on). Every user is assigned a profile and may belong to only one profile at any
given time. If users have already logged in when you change their profile, the change does not
take effect until their next login.
The DEFAULT profile serves as the basis for all other profiles.

CREATE PROFILE “LOGAN_PROFILE”
LIMIT
CONNECT_TIME 30
IDLE_TIME 1

ALTER USER “LOGAN_USER” PROFILE “LOGAN_PROFILE”

select * from DBA_PROFILES where PROFILE='LOGAN_PROFILE' and LIMIT <> 'DEFAULT';

13) C O N C U R R E N C Y
Locks:
ROW SHARE: allows concurrent access to the table; prohibits others from locking the table EXCLUSIVE
ROW EXCLUSIVE: same as ROW SHARE, but also prohibits SHARE-mode locks (taken automatically by DML)
SHARE: allows concurrent queries; prohibits updates to the table
SHARE ROW EXCLUSIVE: allows others to read the table; prohibits SHARE-mode locks and updates
EXCLUSIVE: allows queries only; prohibits all other activity on the table

Manual:
LOCK TABLE employees IN EXCLUSIVE MODE;
LOCK TABLE hr.employees IN SHARE MODE NOWAIT;

Blocking:
EM->Performance->Blocking Session
or
select SID, SERIAL#, USERNAME
from V$SESSION where SID in
(select BLOCKING_SESSION from V$SESSION)

alter system kill session '144,8982' immediate;

14) U N D O
EM-> Server -> Automatic Undo Management

15) A U D I T
Standard database auditing:
(1) Enable database auditing in parameter file
AUDIT_TRAIL = { NONE | OS | DB | DB,EXTENDED | XML | XML,EXTENDED }

If OS: records go to the Windows event log or, on UNIX, to files under AUDIT_FILE_DEST.
If DB: view the records in DBA_AUDIT_TRAIL.
If XML: files are written to AUDIT_FILE_DEST; use V$XML_AUDIT_TRAIL to view all XML audit files in the directory.
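As a sketch, switching standard auditing on (AUDIT_TRAIL is a static parameter, so the change goes to the SPFILE and takes effect after a restart):

```sql
ALTER SYSTEM SET AUDIT_TRAIL=DB,EXTENDED SCOPE=SPFILE;
SHUTDOWN IMMEDIATE
STARTUP
```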

(2) Specify auditing option
SQL statement auditing:
AUDIT TABLE BY hr WHENEVER NOT SUCCESSFUL;
AUDIT table;

System-privilege auditing
AUDIT select any table, create any trigger;
AUDIT select any table BY hr BY SESSION;

Object-privilege auditing:
AUDIT UPDATE,DELETE on hr.employees BY ACCESS;
AUDIT ALL on hr.employees;

EM->Server->Security: Audit Settings

FGA (fine-grained auditing)
dbms_fga.add_policy (
object_schema => 'HR',
object_name => 'EMPLOYEES',
policy_name => 'audit_emps_salary',
audit_condition=> 'department_id=10',
audit_column => 'SALARY,COMMISSION_PCT',
handler_schema => 'secure',
handler_module => 'log_emps_salary',
enable => TRUE,
statement_types => 'SELECT,UPDATE');
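The matching DBMS_FGA calls to switch a policy off or remove it (using the same policy name as the example above):

```sql
EXEC dbms_fga.disable_policy(object_schema=>'HR', object_name=>'EMPLOYEES', policy_name=>'audit_emps_salary');
EXEC dbms_fga.drop_policy(object_schema=>'HR', object_name=>'EMPLOYEES', policy_name=>'audit_emps_salary');
```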

(3) Review audit information
DBA_AUDIT_TRAIL
DBA_FGA_AUDIT_TRAIL
DBA_COMMON_AUDIT_TRAIL

(4) Maintain audit trail

For SYSDBA/SYSOPER:

AUDIT_SYS_OPERATIONS=TRUE (The default is FALSE.)

16) DATABASE MAINTENANCE

(1)Alerts: EM->Related Links=>Alert History=>Instance Status

(2)AWR (Automated Workload Repository)

By default, a snapshot is taken every 60 minutes and retained for 8 days; snapshots are captured from in-memory statistics in the SGA by the MMON process.

* AWR Baseline
* Metric
* Statistics (Optimizer or Database)
* Threshold

DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE (
start_snap_id IN NUMBER,
end_snap_id IN NUMBER,
baseline_name IN VARCHAR2);
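A hypothetical invocation (the snapshot IDs are made up; list real ones from DBA_HIST_SNAPSHOT first):

```sql
SELECT snap_id, begin_interval_time FROM dba_hist_snapshot ORDER BY snap_id;
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE(start_snap_id=>270, end_snap_id=>280, baseline_name=>'peak_baseline');
```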

EM->Server->Statistics Management->Automatic Workload repository

(3)Optimizer: create execution plan for a SQL statement. EM->Execution Plan or SQL PLUS-> AUTOTRACE
Optimizer statistics: table, column, index, and system statistics,
stored in dictionary tables; point-in-time data (not real time, but statistically correct)
EM->Server->Query Optimizer -> Manage Optimizer Statistics->…
To enable automatic maintenance tasks to collect stats, set STATISTICS_LEVEL = TYPICAL or ALL

Manually collect stats (for example, after more than 10% of a table has changed) via EM or:

SQL> EXEC dbms_stats.gather_table_stats('HR','EMPLOYEES');
SQL> SELECT num_rows FROM dba_tables
2 WHERE owner='HR' AND table_name = 'EMPLOYEES';
NUM_ROWS
———-
214

Collect immediately after DB creation:
SQL> EXEC dbms_stats.gather_system_stats('NOWORKLOAD');

Control Stats gathering preferences:
SCOPE (statement, table, schema, database, global)
PREFERENCES (CASCADE,DEGREE,ESTIMATE_PERCENT,NO_INVALIDATE,METHOD_OPT,GRANULARITY,INCREMENTAL,PUBLISH,STALE_PERCENT)

exec dbms_stats.set_table_prefs('SH','SALES','STALE_PERCENT','13');

(4)Automatic Database Diagnostic Monitor (ADDM)
Unlike the other advisors, the ADDM runs automatically after each AWR snapshot. Each time a
snapshot is taken, the ADDM performs an analysis of the period corresponding to the last two
snapshots. The ADDM proactively monitors the instance and detects most bottlenecks before
they become a significant problem.

EM->HOME->Related Links (bottom)->Advisor Central->ADDM

(5) Automated Maintenance Tasks
EM->Server->Oracle Scheduler->Automated Maintenance Tasks->configure to enable the tasks

select client_name,status,consumer_group,attributes from DBA_AUTOTASK_CLIENT

(6) Server-generated alert
Threshold setting: EM->Related Links->Metric and Policy setting (for whole database)
step 1: specify a threshold
step 2: create a test case
step 3: check for an alert
(7) Alert notification
EM->HOme-> (bottom) Preferences ->Notification Rules->edit
Stateful alerts: DBA_OUTSTANDING_ALERTS
Stateless alerts: DBA_ALERT_HISTORY

17) Performance Management

(1)
• EM->Active Session->Concurrency
• EM->Performance->Throughput / IO/Services
• EM->Performance->Additional Monitoring Links->Top Consumers (Top services / Top Sessions, etc)

(2) Memory Management
• Automatic Memory Management (AMM)
– Enables you to specify total memory allocated to instance (including both SGA and PGA)
EM->Related Links (Bottom) -> Advisor Central -> Memory Advisors -> Automatic Memory Management (Enable)

or
initialization parameter: MEMORY_TARGET
MEMORY_TARGET = nM (non-zero enables AMM)

ALTER SYSTEM SET MEMORY_MAX_TARGET = nM SCOPE = SPFILE; (static, need SCOPE to SPFILE)
ALTER SYSTEM SET MEMORY_TARGET = nM;
ALTER SYSTEM SET SGA_TARGET = 0;
ALTER SYSTEM SET PGA_AGGREGATE_TARGET = 0;

• Automatic Shared Memory Management (ASMM):
– Enables you to specify total SGA memory through one initialization parameter
– Enables the Oracle server to manage the amount of memory allocated to the shared pool, Java pool, buffer cache,
streams pool, and large pool
When AMM disabled,
EM->Related Links (Bottom) -> Advisor Central -> Memory Advisors -> SGA-> Automatic Shared Memory Management (Enable)

or
ALTER SYSTEM SET MEMORY_TARGET = 0;
ALTER SYSTEM SET SGA_TARGET=value [SCOPE={SPFILE|MEMORY|BOTH}]
like ALTER SYSTEM SET SGA_TARGET = 992M; (see the calculation below)
ALTER SYSTEM SET SHARED_POOL_SIZE = 0;
ALTER SYSTEM SET LARGE_POOL_SIZE = 0;
ALTER SYSTEM SET JAVA_POOL_SIZE = 0;
ALTER SYSTEM SET DB_CACHE_SIZE = 0;
ALTER SYSTEM SET STREAMS_POOL_SIZE = 0;

How to calculate current SGA_TARGET:
SELECT (
(SELECT SUM(value) FROM V$SGA) -
(SELECT CURRENT_SIZE FROM V$SGA_DYNAMIC_FREE_MEMORY)
) "SGA_TARGET"
FROM DUAL;

Example result:
SELECT SUM(value) FROM V$SGA returns 1200M
SELECT CURRENT_SIZE FROM V$SGA_DYNAMIC_FREE_MEMORY returns 208M
so SGA_TARGET = 1200M - 208M = 992M
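The arithmetic behind this calculation can be checked with a quick shell sketch (the two values are the hypothetical query results above, in MB):

```shell
# Hypothetical results of the two queries, in MB
sga_total=1200   # SELECT SUM(value) FROM V$SGA
sga_free=208     # SELECT CURRENT_SIZE FROM V$SGA_DYNAMIC_FREE_MEMORY
echo "SGA_TARGET = $((sga_total - sga_free))M"   # prints: SGA_TARGET = 992M
```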

• Manually setting shared memory management:
– Sizes the components through multiple individual initialization parameters
– Uses the appropriate Memory Advisor to make recommendations

Set the MEMORY_TARGET initialization parameter to 0.
Set the SGA_TARGET initialization parameter to 0.

(3) Dynamic Performance Stats
• systemwide
V$SYSSTAT (cumulative stats) & V$SYSTEM_EVENT (wait events)

• Session
V$SESSTAT (cumulative stats) & V$SESSION_EVENT (wait events)

• Service Specific
V$SERVICE_STATS (cumulative stats) & V$SERVICE_EVENT (wait event)

Statistics must be available for the effective diagnosis of performance problems. The Oracle
server generates many types of statistics for different levels of granularity.

At the systemwide, session, and service levels, both wait events and accumulated statistics
are computed: the first view in each pair listed above holds the cumulative statistics, and
the second holds the wait events.

When analyzing a performance problem in any of these scopes, you typically look at the change
in statistics (delta value) over the period of time you are interested in. All the possible wait
events are cataloged in the V$EVENT_NAME view. All statistics are cataloged in the
V$STATNAME view; approximately 480 statistics are available in Oracle Database.

(4) Troubleshooting views

• Instance/Database
V$DATABASE
V$INSTANCE
V$PARAMETER
V$SPPARAMETER
V$SYSTEM_PARAMETER
V$PROCESS
V$BGPROCESS
V$PX_PROCESS_SYSSTAT
V$SYSTEM_EVENT

• Disk
V$DATAFILE
V$FILESTAT
V$LOG
V$LOG_HISTORY
V$DBFILE
V$TEMPFILE
V$TEMPSEG_USAGE
V$SEGMENT_STATISTICS

• Memory
V$BUFFER_POOL_STATISTICS
V$LIBRARYCACHE
V$SGAINFO
V$PGASTAT

• Contention
V$LOCK
V$UNDOSTAT
V$WAITSTAT
V$LATCH

(5) Invalid and Unusable Objects
• Invalid PL/SQL objects must be recompiled before they can be used.

SELECT object_name, object_type FROM DBA_OBJECTS
WHERE status = 'INVALID';

ALTER PROCEDURE HR.add_job_history COMPILE;

ALTER PACKAGE HR.maintainemp COMPILE;
ALTER PACKAGE HR.maintainemp COMPILE BODY;

to generate compile script:

set pagesize 3000;

select 'ALTER TRIGGER '||OWNER||'.'||OBJECT_NAME||' COMPILE;' as script
from dba_objects where status='INVALID' AND OBJECT_TYPE='TRIGGER'
union
select 'ALTER FUNCTION '||OWNER||'.'||OBJECT_NAME||' COMPILE;' as script
from dba_objects where status='INVALID' AND OBJECT_TYPE='FUNCTION'
union
select 'ALTER PACKAGE '||OWNER||'.'||OBJECT_NAME||' COMPILE BODY;' as script
from dba_objects where status='INVALID' AND OBJECT_TYPE='PACKAGE BODY'
union
select 'ALTER PROCEDURE '||OWNER||'.'||OBJECT_NAME||' COMPILE;' as script
from dba_objects where status='INVALID' AND OBJECT_TYPE='PROCEDURE';

• Unusable indexes are ignored by the optimizer.

SELECT index_name, table_name FROM DBA_INDEXES
WHERE status = 'UNUSABLE';

ALTER INDEX HR.emp_empid_pk REBUILD;
ALTER INDEX HR.emp_empid_pk REBUILD ONLINE;
ALTER INDEX HR.email REBUILD TABLESPACE USERS;

18) Backup and Recovery Concept
• MTBF (mean time between failures):
– Real Application Clusters
– Streams
– Oracle Data Guard

• MTTR (mean time to recover): reduce it by practicing recovery procedures in advance
– EM > Advisor Central (in the Related Links section) > MTTR Advisor
– EM > Availability > Recovery Settings
FAST_START_MTTR_TARGET initialization parameter (non zero, in seconds)
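A sketch of setting the target directly (the 60-second value is illustrative); V$INSTANCE_RECOVERY shows how the actual estimate tracks the target:

```sql
ALTER SYSTEM SET FAST_START_MTTR_TARGET = 60;
SELECT target_mttr, estimated_mttr FROM v$instance_recovery;
```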

• Minimizes the loss of data:
– Archive log files (discussed later in this lesson)
– Flashback technology
– Standby databases and Oracle Data Guard

(1) Oracle Flashback technology
A group of features that support viewing past states of data, and winding data back and forth
in time, without restoring the database from backup.

• Flashback Query:
View committed data as it existed at some point in the past. The SELECT command with the AS OF clause references a time in the past through a time
stamp or SCN.
• Flashback Version Query: View committed historical data for a specific time interval.
Use the VERSIONS BETWEEN clause of the SELECT command (for performance reasons
with existing indexes).
• Flashback Transaction Query: View all database changes made at the transaction level
• Flashback Transaction Backout: Rolls back a specific transaction and dependent
transactions
• Flashback Table: Rewinds one or more tables to their contents at a previous time without
affecting other database objects
• Flashback Drop: Reverses the effects of dropping a table by returning the dropped table
from the recycle bin to the database along with dependent objects such as indexes and
triggers
• Flashback Database: Returns the database to a past time or system change number (SCN)

(2) Configuring for Recoverability
To configure your database for maximum recoverability, you must:
• Schedule regular backups
• Multiplex control files
• Multiplex redo log groups
• Retain archived copies of redo logs

EM->Availability->Recovery Settings

• Multiplex control files
At least two copies, each on separate disk (at least one on separate disk controller)

If using ASM as your storage technique, then as long as you have two control files, one in each
disk group (such as +DATA and +FRA), then you should not require further multiplexing. In a
database using OMF (such as a database using ASM storage), all additional control files must be
created as part of a recovery process using RMAN (or through Enterprise Manager).

In a database using regular file system storage, adding a control file is a manual operation:
1. Alter the SPFILE with the following command:
ALTER SYSTEM SET control_files =
'/u01/app/oracle/oradata/orcl/control01.ctl' ,
'/u02/app/oracle/oradata/orcl/control02.ctl' ,
'/u03/app/oracle/oradata/orcl/control03.ctl' SCOPE=SPFILE;

2. Shut down the database.
3. Copy an existing control file to the new location.
4. Open the database and verify the addition of the new control file.

• Multiplex redo log groups

– At least two members (files) per group
– Each member:
– On a separate disk or controller if using file system storage
– In a separate disk group (such as +DATA and +FRA) if using ASM

Enterprise Manager > Server > Redo Log Groups
ALTER DATABASE ADD LOGFILE MEMBER '+DATA' TO GROUP 1;
V$LOGFILE

• Retain archived copies of redo logs
1. Specify archive log file-naming convention.
2. Specify one or more archive log file locations.
3. Switch the database to ARCHIVELOG mode.

Enterprise Manager > Availability > Configure Recovery Settings.

USE_DB_RECOVERY_FILE_DEST

sqlplus / as sysdba
shutdown immediate
startup mount
alter database archivelog;
alter database open;
archive log list

(3) Configuring the Fast Recovery Area
Configuring the fast recovery area means determining location, size, and retention policy.
DB_RECOVERY_FILE_DEST
DB_RECOVERY_FILE_DEST_SIZE
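A sketch of setting both parameters from SQL*Plus (the size must be set before the destination; the path and size are illustrative):

```sql
ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE = 10G SCOPE=BOTH;
ALTER SYSTEM SET DB_RECOVERY_FILE_DEST = '/u01/app/oracle/fra' SCOPE=BOTH;
```

The retention policy itself is configured in RMAN (CONFIGURE RETENTION POLICY ...).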

19) Performing Database Backups

(1)user-managed backup (obsolete)
• Querying V$DATAFILE to determine the data files that need to be backed up and their current state
• Querying V$LOGFILE to identify the online redo log files
• Querying V$CONTROLFILE to identify the control file to back up
• Placing each tablespace in online backup mode
• Querying V$BACKUP to see what data files are part of a tablespace that has been placed in online backup mode
• Issuing operating system copy commands to copy the data files to the backup location
• Bringing each tablespace out of online backup mode

(2) Recovery Manager (RMAN) is the recommended method of backing up your Oracle database.
• Configuring Backup Settings
Enterprise Manager > Availability > Backup Settings
Device tab: (Parallelism, Disk backup location, Disk backup type)
Backup Set tab:
Policy tab:
• Scheduling Backups
Enterprise Manager > Availability > Schedule Backup
Strategy:
-Oracle suggested backup strategy
-Customized Backup

(3) Backing Up the Control File to a Trace File
Control files have an additional backup option; they may be backed up to a trace file:
Enterprise Manager > Server > Control Files
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
The trace file is written to the location determined by the DIAGNOSTIC_DEST initialization parameter.

(4) Managing Backups
Enterprise Manager > Availability > Manage Current Backup
• Catalog Additional Files: Although RMAN (working through Enterprise Manager) is the
recommended way to create backups, you might have image copies or backup sets that
were created by some other means or in some other environment with the result that
RMAN is not aware of them. This task identifies those files and adds them to the catalog.
• Crosscheck All: RMAN can automatically delete obsolete backups, but you can also
delete them by using operating system commands. If you delete a backup without using
RMAN, the catalog does not know whether the backup is missing until you perform a
cross-check between the catalog and what is really there.
• Delete All Obsolete: This deletes backups older than the retention policy.
• Delete All Expired: This deletes the catalog listing for any backups that are not found
when the cross-check is performed.

(5) Viewing Backup Reports
Enterprise Manager > Availability > Backup Reports

(6) Monitoring the Fast Recovery Area

Enterprise Manager > Availability > Recovery Settings

(7) Using the RMAN Command Line
$ rman target /
RMAN> CONFIGURE DEFAULT DEVICE TYPE TO disk;
or CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COPY;
or CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
or
RMAN> BACKUP DATABASE PLUS ARCHIVELOG DELETE INPUT;
or
RMAN> BACKUP COPY OF DATABASE;

20) Performing Database Recovery

(1) Opening a Database
shutdown->nomount->mount->open
NOMOUNT: the instance must read the initialization parameter file
MOUNT: checks whether all control files listed in the initialization parameter file are present and synchronized.
OPEN:
– Checks whether all redo log groups known to the control file have at least one member present. Any missing members are noted in the alert log.
– Verifies that all data files known to the control file are present unless they have been taken offline.
– Verifies that all data files that are not offline or read-only are synchronized with the control file
v$recover_file

SQL> startup
ORACLE instance started.
Total System Global Area 171966464 bytes
Fixed Size 775608 bytes
Variable Size 145762888 bytes
Database Buffers 25165824 bytes
Redo Buffers 262144 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: '/oracle/oradata/orcl/users01.dbf'
SQL> SELECT name, error
2 FROM v$datafile
3 JOIN v$recover_file
4 USING (file#);
NAME ERROR
———————————– ——————
/oracle/oradata/orcl/users01.dbf FILE NOT FOUND
/oracle/oradata/orcl/example01.dbf FILE NOT FOUND

(2) Keeping a Database Open
After the database is open, it fails in the case of the loss of:
• Any control file
• A data file belonging to the system or undo tablespaces
• An entire redo log group
SHUTDOWN ABORT

(3) Data Recovery Advisor
• Availability tabbed page > Perform Recovery > Advise and Recover
• Active Incidents link > on the Support Workbench “Problems” page: Checker Findings
tabbed page > Launch Recovery Advisor
• Database Instance Health > click specific link (for example, ORA 1578) in the Incidents
section > Support Workbench, Problems Detail page > Data Recovery Advisor
• Database Instance Health > Related Links section: Support Workbench > Checker
Findings tabbed page: Launch Recovery Advisor
• Related Link: Advisor Central > Advisors tabbed page: Data Recovery Advisor
• Related Link: Advisor Central > Checkers tabbed page: Details > Run Detail tabbed page: Launch Recovery Advisor

You can also use the Data Recovery Advisor by using the RMAN command line:
$ rman target /
RMAN> list failure all;

(4) Loss of a Control File

If using ASM storage,
1. Put the database in NOMOUNT mode.
2. Connect to RMAN and issue the restore controlfile command to restore the
control file from an existing control file, for example:
restore controlfile from
'+DATA/orcl/controlfile/current.260.695209463';
3. After the control file is successfully restored, open the database.

If control files are stored as regular file system files then:
– Shut down the database
– Copy existing control file to replace lost control file

(5) Loss of a Redo Log File

If a member of a redo log file group is lost and if the group still has at least one member, note the following results:
• Normal operation of the instance is not affected.
• You receive a message in the alert log notifying you that a member cannot be found.
• You can restore the missing log file by dropping the lost redo log member and adding a new member.
• If the group with the missing log file has been archived you can clear the log group to re-create the missing file.

1. Determine whether there is a missing log file by examining the alert log.

2. Restore the missing file by first dropping the lost redo log member:
SQL> ALTER DATABASE DROP LOGFILE MEMBER
'+DATA/orcl/onlinelog/group_1.261.691672257';
Then add a new member to replace the lost redo log member:
SQL> ALTER DATABASE ADD LOGFILE MEMBER '+DATA' TO GROUP 2;

3. If the media failure is due to the loss of a disk drive or controller, rename the missing file.

4. If the group has already been archived, or if you are in NOARCHIVELOG mode, you may
choose to solve the problem by clearing the log group to re-create the missing file or files.
Select the appropriate group and then select the Clear Logfile action. You can also clear the
affected group manually with the following command:
SQL> ALTER DATABASE CLEAR LOGFILE GROUP #;

If you must clear an unarchived log group, you should immediately take a full backup of the whole database. Failure to do so may result in a loss
of data if another failure occurs. To clear an unarchived log group, use the following command:
SQL> ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP #;

(6) Loss of a Data File in NOARCHIVELOG Mode

If the database is in NOARCHIVELOG mode and if any data file is lost, perform the following tasks:

1. Shut down the instance if it is not already down.
2. Restore the entire database—including all data and control files—from the backup.
3. Open the database.
4. Have users reenter all changes that were made since the last backup.

To perform this recovery with Enterprise Manager:
1. Shut down the instance if it is not already down.
2. Click Perform Recovery on the Maintenance properties page.
3. Select Whole Database as the type of recovery.
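Outside Enterprise Manager, the same whole-database restore can be sketched in RMAN. Because the database is in NOARCHIVELOG mode there are no archived logs to apply, so NOREDO tells RMAN to skip media recovery:

```
RMAN> STARTUP FORCE MOUNT;
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE NOREDO;
RMAN> ALTER DATABASE OPEN RESETLOGS;
```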

(7) Loss of a Noncritical Data File in ARCHIVELOG Mode

If a data file is lost or corrupted, and if that file does not belong to the SYSTEM or UNDO tablespace, you restore and recover the missing data file.

To restore and recover the missing data file:
1. Click Perform Recovery on the Maintenance properties page.
2. Select Datafiles as the recovery type, and then select “Restore to current time.”
3. Add all data files that need recovery.
4. Determine whether you want to restore the files to the default location or (if a disk or
controller is missing) to a new location.
5. Submit the RMAN job to restore and recover the missing files.
Because the database is in ARCHIVELOG mode, recovery is possible up to the time of the last
commit and users are not required to reenter any data.
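The same restore-and-recover can be done from the RMAN command line while the database stays open; datafile number 4 below is illustrative:

```
RMAN> SQL 'ALTER DATABASE DATAFILE 4 OFFLINE';
RMAN> RESTORE DATAFILE 4;
RMAN> RECOVER DATAFILE 4;
RMAN> SQL 'ALTER DATABASE DATAFILE 4 ONLINE';
```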

(8) Loss of a System-Critical Data File in ARCHIVELOG Mode

If a data file is lost or corrupted, and if that file belongs to the
SYSTEM or UNDO tablespace, perform the following tasks:
1. The instance may or may not shut down automatically. If it does not, use SHUTDOWN ABORT to bring the instance down.
2. Mount the database.
3. Restore and recover the missing data file.
4. Open the database.

To perform this recovery:
1. If the instance is not already shut down, shut it down.
2. Mount the database.
3. Click Perform Recovery on the Maintenance properties page.
4. Select Datafiles as the recovery type, and then select “Restore to current time.”
5. Add all data files that need recovery.
6. Determine whether you want to restore the files to the default location or (if a disk or controller is missing) to a new location.
7. Submit the RMAN job to restore and recover the missing files.
8. Open the database. Users are not required to reenter data because the recovery is up to the time of the last commit.
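A minimal RMAN sketch of these steps (datafile 1, the SYSTEM datafile, is used as an example):

```
RMAN> STARTUP MOUNT;
RMAN> RESTORE DATAFILE 1;
RMAN> RECOVER DATAFILE 1;
RMAN> ALTER DATABASE OPEN;
```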

(9) Data Recovery Advisor

In Enterprise Manager, select Availability > Perform Recovery or click the Perform Recovery button if you find your database in a “down” or “mounted” state. Click “Advise and Recover” to
have Enterprise Manager analyze and produce recovery advice.

The "View and Manage Failures" page corresponds to the RMAN LIST FAILURE command.

You can initiate the following actions:
• Click “Re-assess Failures” after you perform a manual repair. Resolved failures are implicitly closed; any remaining failures are displayed on the “View and Manage Failures”
page.
• Click “Continue with Advise” to initiate an automated repair. When the Data Recovery Advisor generates an automated repair option, it generates a script that shows how RMAN plans to repair the failure. Click Continue if you want to execute the automated repair. If
you do not want the Data Recovery Advisor to automatically repair the failure, you can use this script as a starting point for your manual repair.
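The same workflow is available from the RMAN command line:

```
RMAN> LIST FAILURE;
RMAN> ADVISE FAILURE;
RMAN> REPAIR FAILURE PREVIEW;
RMAN> REPAIR FAILURE;
```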

Data Recovery Advisor Views

• V$IR_FAILURE: Listing of all failures, including closed ones (result of the LIST FAILURE command)
• V$IR_MANUAL_CHECKLIST: Listing of manual advice (result of the ADVISE FAILURE command)
• V$IR_REPAIR: Listing of repairs (result of the ADVISE FAILURE command)
• V$IR_FAILURE_SET: Cross-reference of failure and advise identifiers

SELECT * FROM v$ir_failure
WHERE TRUNC(time_detected) = '21-JUN-2007';

21) Moving Data

(1) Data Pump
Oracle Data Pump enables very high-speed data and metadata loading and unloading of Oracle databases. The Data Pump infrastructure is callable via the DBMS_DATAPUMP PL/SQL
package. Thus, custom data movement utilities can be built by using Data Pump.

Oracle Database 11g provides the following tools:
– expdp
– impdp
– Web-based interface

(2) Directory Objects for Data Pump

In Enterprise Manager, select Schema > Database Objects > Directory Objects.

• Creating Directory Objects
1. On the Directory Objects page, click the Create button.
2. Enter the name of the directory object and the OS path to which it maps. OS directories should be created before they are used. You can test this by clicking the Test File System button. For the test, provide the host login credentials (that is, the OS user who has privileges on this OS directory).
3. Permissions for directory objects are not the same as OS permissions on the physical directory on the server file system. You can manage user privileges on individual directory objects. This increases the level of security and gives you granular control over these objects. On the Privileges page, click Add to select the user to which you give read or write privileges (or both).
4. Click Show SQL to view the underlying statements. Click Return when finished.
5. Click OK to create the object.
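The same directory object can be created in SQL; the directory name, OS path, and grantee below are examples:

```sql
-- The OS directory must already exist on the database server
CREATE OR REPLACE DIRECTORY dp_dir AS '/u01/app/oracle/dpump';

-- Grant object-level privileges to the user who will run Data Pump jobs
GRANT READ, WRITE ON DIRECTORY dp_dir TO hr;
```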

(3) Data Pump Export and Import Clients

Data Pump Export is a utility for unloading data and metadata into a set of operating system files called dump file sets.

Data Pump Import is used to load metadata and data stored in an export dump file set into a target system.

The Data Pump API accesses its files on the server rather than on the client.

At the center of every Data Pump operation is the master table (MT), which is a table created in the schema of the user running the Data Pump job.

• Data Pump Export and Import interfaces:
– Command line
– Parameter file
– Interactive command line
– Enterprise Manager

• Data Pump Export and Import modes:
– Full
– Schema
– Table
– Tablespace
– Transportable tablespace
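As a sketch, a schema-mode export from the command line (dump file, log file, and job names are examples):

```
$ expdp hr DIRECTORY=DATA_PUMP_DIR \
    DUMPFILE=hr_schema.dmp \
    SCHEMAS=hr \
    LOGFILE=DATA_PUMP_DIR:export_hr.log \
    JOB_NAME=exportHR
```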

(4) Data Pump Export Using Database Control
From the Database Instance home page, click the Data Movement tab, and locate the Data Pump
export and import options under the "Move Row Data" section of the page. Click "Export to
Export Files" to begin a Data Pump export session.

If a privileged user is connected to the
database instance, then the export types include the following:
• Database
• Schemas
• Tables
• Tablespace
If a non-administrative account is used, then the export type list is limited to the following:
• Schemas
• Tables
Click Continue to proceed with the export.

Options->Files (DATA_PUMP_DIR)->Schedule

(5) Data Pump Import
$ impdp hr DIRECTORY=DATA_PUMP_DIR \
DUMPFILE=HR_SCHEMA.DMP \
PARALLEL=1 \
CONTENT=ALL \
TABLES="EMPLOYEES" \
LOGFILE=DATA_PUMP_DIR:import_hr_employees.log \
JOB_NAME=importHR \
TRANSFORM=STORAGE:n

You can remap:
• Data files by using REMAP_DATAFILE
• Tablespaces by using REMAP_TABLESPACE
• Schemas by using REMAP_SCHEMA
• Tables by using REMAP_TABLE
• Data by using REMAP_DATA
REMAP_TABLE = 'EMPLOYEES':'EMP'
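For example, importing the HR dump into a different schema and tablespace (the target names are illustrative):

```
$ impdp system DIRECTORY=DATA_PUMP_DIR \
    DUMPFILE=HR_SCHEMA.DMP \
    REMAP_SCHEMA=hr:hr_test \
    REMAP_TABLESPACE=users:users_test
```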

(6) Using Enterprise Manager to Monitor Data Pump Jobs
To access the “Export and Import Jobs” page, click the “Monitor Export and Import Jobs” link
in the Move Row Data region on the Maintenance page.

(7) Migration with Data Pump Legacy Mode
Data Pump legacy mode eases the transition from the original exp and imp utilities to expdp and impdp by accepting the old utilities' command-line parameters.

(8) SQL*Loader

SQL*Loader loads data from external files into tables of an Oracle database. It has a powerful
data parsing engine that places few limitations on the format of the data in the data file.

Use SQL*Loader to load data from a non-Oracle database (or user files)

INFILE:
CONTROL FILE:
LOG FILE:
BAD FILE:
DISCARD FILE:
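A typical invocation names these files on the command line (the file names below are examples):

```
$ sqlldr hr CONTROL=emp.ctl DATA=sample.dat \
    LOG=emp.log BAD=emp.bad DISCARD=emp.dsc
```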

SAMPLE CONTROL FILE:
-- This is a sample control file
LOAD DATA
INFILE 'SAMPLE.DAT'
BADFILE 'sample.bad'
DISCARDFILE 'sample.dsc'
APPEND
INTO TABLE emp
WHEN (57) = '.'
TRAILING NULLCOLS
(hiredate SYSDATE,
deptno POSITION(1:2) INTEGER EXTERNAL(3)
NULLIF deptno=BLANKS,
job POSITION(7:14) CHAR TERMINATED BY WHITESPACE
NULLIF job=BLANKS "UPPER(:job)",
mgr POSITION(28:31) INTEGER EXTERNAL
TERMINATED BY WHITESPACE, NULLIF mgr=BLANKS,
ename POSITION(34:41) CHAR
TERMINATED BY WHITESPACE "UPPER(:ename)",
empno POSITION(45) INTEGER EXTERNAL
TERMINATED BY WHITESPACE,
sal POSITION(51) CHAR TERMINATED BY WHITESPACE
"TO_NUMBER(:sal,'$99,999.99')",
comm INTEGER EXTERNAL ENCLOSED BY '(' AND '%'
":comm * 100"
)

In Enterprise Manager: Data Movement > Move Row Data > Load Data from User Files.

(9) Defining an External Table with ORACLE_LOADER

CREATE TABLE extab_employees
(employee_id NUMBER(4),
first_name VARCHAR2(20),
last_name VARCHAR2(25),
hire_date DATE)
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER DEFAULT DIRECTORY extab_dat_dir
ACCESS PARAMETERS
( records delimited by newline
badfile extab_bad_dir:'empxt%a_%p.bad'
logfile extab_log_dir:'empxt%a_%p.log'
fields terminated by ','
missing field values are null
( employee_id, first_name, last_name,
hire_date char date_format date mask "dd-mon-yyyy"))
LOCATION ('empxt1.dat', 'empxt2.dat') )
PARALLEL REJECT LIMIT UNLIMITED;

If you have a lot of data to load, enable PARALLEL for the load operation:
ALTER SESSION ENABLE PARALLEL DML;

(10) External Table Population with ORACLE_DATAPUMP

CREATE TABLE ext_emp_query_results
(first_name, last_name, department_name)
ORGANIZATION EXTERNAL
(
TYPE ORACLE_DATAPUMP
DEFAULT DIRECTORY ext_dir
LOCATION ('emp1.exp','emp2.exp','emp3.exp')
)
PARALLEL
AS
SELECT e.first_name,e.last_name,d.department_name
FROM employees e, departments d
WHERE e.department_id = d.department_id AND
d.department_name in
('Marketing', 'Purchasing');

(11) Using External Tables
• Querying an external table
• Querying and joining an external table with an internal table

SELECT * FROM extab_employees;

SELECT e.employee_id, e.first_name, e.last_name,
d.department_name
FROM departments d, extab_employees e
WHERE d.department_id = e.department_id;

• Appending data to an internal table from an external table

INSERT /*+ APPEND */ INTO hr.employees SELECT * FROM extab_employees;

(12) Data Dictionary
View information about external tables in:
• [DBA|ALL|USER]_EXTERNAL_TABLES
• [DBA|ALL|USER]_EXTERNAL_LOCATIONS
• [DBA|ALL|USER]_TABLES
• [DBA|ALL|USER]_TAB_COLUMNS
• [DBA|ALL]_DIRECTORIES
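For example, to see where each directory object points:

```sql
SELECT directory_name, directory_path FROM dba_directories;
```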

22) Support

My Oracle Support

service request (SR) SR#, Create New Package, upload

Customer Support Identifier (CSI) number

Support is delivered in the following ways:
– My Oracle Support Web site
– Telephone
– Oracle Direct Connect (ODC) remote diagnostic tool

Kinds of patches
• Interim patches
– For specific issues
– No regression testing
• CPUs (Critical Patch Updates)
– Critical security issues
– Regression testing
– Does not advance version number
• Patch releases

Managing Patches
You can apply different kinds of patches at different times for different reasons.

• Interim patches (also known as one-off patches) are created to solve a specific
problem. They do not go through a full regression test. Interim patches are typically
installed with the opatch utility. The Enterprise Manager Patching Wizard can help
automate the patching process by downloading, applying, and staging the patches. This
wizard uses the opatch utility in the background.

• CPU patches (Critical Patch Update patches) include security patches and dependent
non-security patches. CPU patches are cumulative: fixes from previous Oracle security
alerts and Critical Patch Updates are included, so you are not required to apply earlier
security patches first. However, you must be on the stated patch set level, because each
CPU patch is built for a specific patch release level (such as 10.2.0.3). CPU patches are
installed with the opatch utility or through the EM Patching Wizard, and they are issued
quarterly. CPU patches and interim patches can also be removed from your system with
opatch rollback -id <patch_id>.
Oracle does extensive testing of Critical Patch Updates with its own applications, as well
as running regression tests for the Critical Patch Updates themselves. To verify that a patch
has been applied, query the inventory with opatch lsinventory and check that the
patch is listed.
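A typical interim-patch cycle with opatch might look like this; the staging path and patch ID below are hypothetical:

```
$ cd /stage/patches/1234567    # hypothetical patch staging directory
$ opatch apply                 # install the interim patch
$ opatch lsinventory           # verify the patch appears in the inventory
$ opatch rollback -id 1234567  # remove the patch again if needed
```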
