Thursday, September 12, 2013

Work around ORA-10567, ORA-10561, ORA-600 [3020] - Oracle 10gR2 RAC on Standby



Errors in the standby alert log -
 
Media Recovery Log /u03/fra/BMSPROD_PD/archivelog/2013_09_11/o1_mf_2_59982_931w1f3d_.arc
Media Recovery Log /u03/fra/BMSPROD_PD/archivelog/2013_09_11/o1_mf_1_60283_931w1o8z_.arc
Wed Sep 11 16:27:34 2013
Errors in file /u01/app/oracle/admin/bmsprod/bdump/bmsprod_p011_7930.trc:
ORA-00600: internal error code, arguments: [3020], [31], [1424770], [131448194], [], [], [], []
ORA-10567: Redo is inconsistent with data block (file# 31, block# 1424770)
ORA-10564: tablespace ARSYSTEM
ORA-01110: data file 31: '/u03/oradata/BMSPROD_PD/datafile/ARSYSTEM_31.dbf'
ORA-10561: block type 'TRANSACTION MANAGED DATA BLOCK', data object# 212538
Wed Sep 11 16:27:36 2013
Errors in file /u01/app/oracle/admin/bmsprod/bdump/bmsprod_p011_7930.trc:
ORA-00600: internal error code, arguments: [3020], [31], [1424770], [131448194], [], [], [], []
ORA-10567: Redo is inconsistent with data block (file# 31, block# 1424770)
ORA-10564: tablespace ARSYSTEM
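
Before fixing anything, it helps to confirm which segment the corrupt block belongs to. A quick sanity check on the primary (the standby is only mounted, so the dictionary views are not available there), using the data object# and file#/block# reported above:

SELECT owner, object_name, object_type
  FROM dba_objects
 WHERE data_object_id = 212538;

SELECT owner, segment_name, segment_type
  FROM dba_extents
 WHERE file_id = 31
   AND 1424770 BETWEEN block_id AND block_id + blocks - 1;

Both queries should point at the same segment in the ARSYSTEM tablespace.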

Take an RMAN backup of the affected datafile (file# 31) on the primary.

On Primary -

$ rman target /

Recovery Manager: Release 10.2.0.3.0 - Production on Thu Sep 12 00:12:02 2013

Copyright (c) 1982, 2005, Oracle.  All rights reserved.

connected to target database: BMSPROD (DBID=3750301684)

RMAN> run {
allocate channel t1 device type disk;
allocate channel t2 device type disk;
backup datafile 31  FORMAT '/u04/backup/bmsprod/incr/datafile_31_%U';
}
2> 3> 4> 5>
using target database control file instead of recovery catalog
allocated channel: t1
channel t1: sid=402 instance=bmsprod1 devtype=DISK

allocated channel: t2
channel t2: sid=351 instance=bmsprod1 devtype=DISK

Starting backup at 12-SEP-13
channel t1: starting full datafile backupset
channel t1: specifying datafile(s) in backupset
input datafile fno=00031 name=+DATA/bmsprod_od/datafile/arsystem.724.808399285
channel t1: starting piece 1 at 12-SEP-13
channel t1: finished piece 1 at 12-SEP-13
piece handle=/u04/backup/bmsprod/incr/datafile_31_4iojkdb0_1_1 tag=TAG20130912T001216 comment=NONE
channel t1: backup set complete, elapsed time: 00:06:05
Finished backup at 12-SEP-13
released channel: t1
released channel: t2

RMAN> exit

Copy the backup piece to the standby server.
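
For example, with scp (the standby hostname is a placeholder; the source path matches the backup piece created above and the target directory is the one cataloged below):

$ scp /u04/backup/bmsprod/incr/datafile_31_4iojkdb0_1_1 oracle@<standby_host>:/u03/incre/

Managed recovery on the standby normally aborts when it hits the ORA-600; if it is still running, cancel it on the standby before restoring:

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;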

On standby -

$ rman target /

Recovery Manager: Release 10.2.0.3.0 - Production on Thu Sep 12 01:18:34 2013

Copyright (c) 1982, 2005, Oracle.  All rights reserved.

connected to target database: BMSPROD (DBID=3750301684, not open)

RMAN> catalog backuppiece '/u03/incre/datafile_31_4iojkdb0_1_1';

using target database control file instead of recovery catalog
cataloged backuppiece
backup piece handle=/u03/incre/datafile_31_4iojkdb0_1_1 recid=6521 stamp=825902332

RMAN> RESTORE DATAFILE 31
2> ;

Starting restore at 12-SEP-13
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=140 devtype=DISK

channel ORA_DISK_1: starting datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00031 to /u03/oradata/BMSPROD_PD/datafile/ARSYSTEM_31.dbf
channel ORA_DISK_1: reading from backup piece /u03/incre/datafile_31_4iojkdb0_1_1
channel ORA_DISK_1: restored backup piece 1
piece handle=/u03/incre/datafile_31_4iojkdb0_1_1 tag=TAG20130912T001216
channel ORA_DISK_1: restore complete, elapsed time: 00:13:06
Finished restore at 12-SEP-13

RMAN> exit


Recovery Manager complete.

$ sqlplus "/as sysdba"

SQL*Plus: Release 10.2.0.3.0 - Production on Thu Sep 12 01:35:06 2013

Copyright (c) 1982, 2006, Oracle.  All Rights Reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

idle> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

Database altered.

Archive logs will now start applying on the standby site.
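
To confirm that the media recovery process (MRP) is back and applying redo, a quick check on the standby:

SELECT process, status, thread#, sequence#
  FROM v$managed_standby
 WHERE process LIKE 'MRP%';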
 
In the standby alert log -

Thu Sep 12 01:33:59 2013
Full restore complete of datafile 31 /u03/oradata/BMSPROD_PD/datafile/ARSYSTEM_31.dbf.  Elapsed time: 0:13:02
  checkpoint is 12761509036160
  last deallocation scn is 12761472787611
Thu Sep 12 01:35:24 2013
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION
Thu Sep 12 01:35:24 2013

Tuesday, September 10, 2013

Clusterware Processes in 11gR2 RAC Environment

i).Cluster Ready Services (CRS)

$ ps -ef | grep crs | grep -v grep

root 25863 1 1 Oct27 ? 11:37:32 /opt/oracle/grid/product/11.2.0/bin/crsd.bin reboot
crsd.bin => This process is responsible for starting, stopping, monitoring, and failing over resources. It maintains the OCR and restarts resources when failures occur.
This applies to RAC systems; for Oracle Restart and ASM, ohasd is used instead.

ii).Cluster Synchronization Service (CSS)

$ ps -ef | grep -v grep | grep css
root 19541 1 0 Oct27 ? 00:05:55 /opt/oracle/grid/product/11.2.0/bin/cssdmonitor
root 19558 1 0 Oct27 ? 00:05:45 /opt/oracle/grid/product/11.2.0/bin/cssdagent
oragrid 19576 1 6 Oct27 ? 2-19:13:56 /opt/oracle/grid/product/11.2.0/bin/ocssd.bin
cssdmonitor => Monitors node hangs (via oprocd functionality), ocssd process hangs (via oclsomon functionality), and vendor clusterware (via vmon functionality). This is a multi-threaded process that runs with elevated priority.
Startup sequence: INIT --> init.ohasd --> ohasd --> ohasd.bin --> cssdmonitor
cssdagent => Spawned by the OHASD process. Replaces the 10g oprocd daemon and is responsible for I/O fencing; killing this process causes a node reboot. It stops, starts, and checks the status of the ocssd.bin daemon.
Startup sequence: INIT --> init.ohasd --> ohasd --> ohasd.bin --> cssdagent
ocssd.bin => Manages cluster node membership and runs as the oragrid user. Failure of this process results in a node restart.
Startup sequence: INIT --> init.ohasd --> ohasd --> ohasd.bin --> cssdagent --> ocssd --> ocssd.bin

iii) Event Management (EVM)

$ ps -ef | grep evm | grep -v grep

oragrid 24623 1 0 Oct27 ? 00:30:25 /opt/oracle/grid/product/11.2.0/bin/evmd.bin
oragrid 25934 24623 0 Oct27 ? 00:00:00 /opt/oracle/grid/product/11.2.0/bin/evmlogger.bin -o /opt/oracle/grid/product/11.2.0/evm/log/evmlogger.info -l /opt/oracle/grid/product/11.2.0/evm/log/evmlogger.log
evmd.bin => Distributes and communicates some cluster events to all of the cluster members so that they are aware of the cluster changes.
evmlogger.bin => Started by evmd.bin; it reads the configuration files, determines which events to subscribe to from EVMD, and runs user-defined actions for those events.

iv).Oracle Root Agent

$ ps -ef | grep -v grep | grep orarootagent

root 19395 1 0 Oct17 ? 12:06:57 /opt/oracle/grid/product/11.2.0/bin/orarootagent.bin
root 25853 1 1 Oct17 ? 16:30:45 /opt/oracle/grid/product/11.2.0/bin/orarootagent.bin
orarootagent.bin => A specialized oraagent process that helps crsd manage resources owned by root, such as the network and the Grid virtual IP address.
The two processes above are actually threads that look like processes; this is Linux-specific behavior.

v).Cluster Time Synchronization Service (CTSS)

$ ps -ef | grep ctss | grep -v grep

root 24600 1 0 Oct27 ? 00:38:10 /opt/oracle/grid/product/11.2.0/bin/octssd.bin reboot
octssd.bin => Provides Time Management in a cluster for Oracle Clusterware

vi).Oracle Agent

$ ps -ef | grep -v grep | grep oraagent
oragrid 5337 1 0 Nov14 ? 00:35:47 /opt/oracle/grid/product/11.2.0/bin/oraagent.bin
oracle 8886 1 1 10:25 ? 00:00:05 /opt/oracle/grid/product/11.2.0/bin/oraagent.bin
oragrid 19481 1 0 Oct27 ? 01:45:19 /opt/oracle/grid/product/11.2.0/bin/oraagent.bin
oraagent.bin => Extends clusterware to support Oracle-specific requirements and complex resources. This process runs server callout scripts when FAN events occur. This process was known as RACG in Oracle Clusterware 11g Release 1 (11.1).

ORACLE HIGH AVAILABILITY SERVICES STACK

i) Cluster Logger Service
$ ps -ef | grep -v grep | grep ologgerd
root 24856 1 0 Oct27 ? 01:43:48 /opt/oracle/grid/product/11.2.0/bin/ologgerd -m mg5hfmr02a -r -d /opt/oracle/grid/product/11.2.0/crf/db/mg5hfmr01a
ologgerd => Receives information from all the nodes in the cluster and persists in a CHM repository-based database. This service runs on only two nodes in a cluster

ii).System Monitor Service (osysmond)

$ ps -ef | grep -v grep | grep osysmond

root 19528 1 0 Oct27 ? 09:42:16 /opt/oracle/grid/product/11.2.0/bin/osysmond
osysmond => The monitoring and operating system metric collection service that sends the data to the cluster logger service. This service runs on every node in a cluster

iii). Grid Plug and Play (GPNPD):

$ ps -ef | grep gpn

oragrid 19502 1 0 Oct27 ? 00:21:13 /opt/oracle/grid/product/11.2.0/bin/gpnpd.bin
gpnpd.bin => Provides access to the Grid Plug and Play profile, and coordinates updates to the profile among the nodes of the cluster to ensure that all of the nodes have the most recent profile.

iv).Grid Interprocess Communication (GIPC):

$ ps -ef | grep -v grep | grep gipc

oragrid 19516 1 0 Oct27 ? 01:51:41 /opt/oracle/grid/product/11.2.0/bin/gipcd.bin
gipcd.bin => A support daemon that enables Redundant Interconnect Usage.

v). Multicast Domain Name Service (mDNS):

$ ps -ef | grep -v grep | grep dns

oragrid 19493 1 0 Oct27 ? 00:01:18 /opt/oracle/grid/product/11.2.0/bin/mdnsd.bin
mdnsd.bin => Used by Grid Plug and Play to locate profiles in the cluster, as well as by GNS to perform name resolution. The mDNS process is a background process on Linux and UNIX and on Windows.

vi).Oracle Grid Naming Service (GNS)

$ ps -ef | grep -v grep | grep gns
 
gnsd.bin => Handles requests sent by external DNS servers, performing name resolution for names defined by the cluster.
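
A quick way to verify these daemons from the command line is crsctl (the path assumes the Grid home shown in the ps output above; no gnsd.bin appears here, which usually just means GNS is not configured in this cluster):

$ /opt/oracle/grid/product/11.2.0/bin/crsctl check crs
$ /opt/oracle/grid/product/11.2.0/bin/crsctl check css
$ /opt/oracle/grid/product/11.2.0/bin/crsctl check ctss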

11gR2 Clusterware and Grid Home – What You Need to Know [ID 1053147.1]

In this Document
  Purpose
  Scope
  Details
  11gR2 Clusterware Key Facts
  Clusterware Startup Sequence
  Important Log Locations
  Clusterware Resource Status Check
  Clusterware Resource Administration
  OCRCONFIG Options:
  OLSNODES Options
  Cluster Verification Options
  Database - RAC/Scalability Community
  References


Applies to:

Oracle Database - Enterprise Edition - Version 11.2.0.1 to 11.2.0.1 [Release 11.2]
Information in this document applies to any platform.

Purpose

The 11gR2 Clusterware has undergone numerous changes since the previous release. For information on the previous release(s), see Note: 259301.1 "CRS and 10g Real Application Clusters". This document is intended to go over the 11.2 Clusterware which has some similarities and some differences from the previous version(s). 

Scope


This document is intended for RAC Database Administrators and Oracle support engineers.

Details

11gR2 Clusterware Key Facts

  • 11gR2 Clusterware is required to be up and running prior to installing an 11gR2 Real Application Clusters database.
  • The GRID home consists of the Oracle Clusterware and ASM.  ASM should not be in a separate home.
  • The 11gR2 Clusterware can be installed in "Standalone" mode for ASM and/or "Oracle Restart" single node support. This clusterware is a subset of the full clusterware described in this document.
  • The 11gR2 Clusterware can be run by itself or on top of vendor clusterware.  See the certification matrix for certified combinations. Ref: Note: 184875.1 "How To Check The Certification Matrix for Real Application Clusters"
  • The GRID Home and the RAC/DB Home must be installed in different locations.
  • The 11gR2 Clusterware requires shared OCR and voting files.  These can be stored on ASM or a cluster filesystem.
  • The OCR is backed up automatically every 4 hours to <GRID_HOME>/cdata/<cluster name>/ and can be restored via ocrconfig (example commands follow this list). 
  • The voting file is backed up into the OCR at every configuration change and can be restored via crsctl. 
  • The 11gR2 Clusterware requires at least one private network for inter-node communication and at least one public network for external communication.  Several virtual IPs need to be registered with DNS.  This includes the node VIPs (one per node), SCAN VIPs (three).  This can be done manually via your network administrator or optionally you could configure the "GNS" (Grid Naming Service) in the Oracle clusterware to handle this for you (note that GNS requires its own VIP).  
  • A SCAN (Single Client Access Name) is provided to clients to connect to.  For more information on SCAN see Note: 887522.1
  • The root.sh script at the end of the clusterware installation starts the clusterware stack.  For information on troubleshooting root.sh issues see Note: 1053970.1
  • Only one set of clusterware daemons can be running per node. 
  • On Unix, the clusterware stack is started via the init.ohasd script referenced in /etc/inittab with "respawn".
  • A node can be evicted (rebooted) if a node is deemed to be unhealthy.  This is done so that the health of the entire cluster can be maintained.  For more information on this see: Note: 1050693.1 "Troubleshooting 11.2 Clusterware Node Evictions (Reboots)"
  • Either have vendor time synchronization software (like NTP) fully configured and running, or have it not configured at all and let CTSS handle time synchronization.  See Note: 1054006.1 for more information.
  • If installing DB homes for a lower version, you will need to pin the nodes in the clusterware or you will see ORA-29702 errors.  See Note: 946332.1 and Note: 948456.1 for more information.
  • The clusterware stack can be started by either booting the machine, running "crsctl start crs" to start the clusterware stack on the local node, or running "crsctl start cluster" to start the clusterware on all nodes.  Note that crsctl is in the <GRID_HOME>/bin directory, and that "crsctl start cluster" will only work if ohasd is running.
  • The clusterware stack can be stopped by either shutting down the machine, running "crsctl stop crs" to stop the clusterware stack on the local node, or running "crsctl stop cluster" to stop the clusterware on all nodes.  Note that crsctl is in the <GRID_HOME>/bin directory.
  • Killing clusterware daemons is not supported.
  • Instance is now part of .db resources in "crsctl stat res -t" output, there is no separate .inst resource for 11gR2 instance.
Note that it is also a good idea to follow the RAC Assurance best practices in Note: 810394.1
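
A few of the facts above map directly to commands. For example, from <GRID_HOME>/bin, "ocrconfig -showbackup" lists the automatic and manual OCR backups, "crsctl query css votedisk" shows the voting files in use, and (as root) "crsctl stop crs" / "crsctl start cluster -all" stop the local stack and start the clusterware on all nodes:

$ ocrconfig -showbackup
$ crsctl query css votedisk
# crsctl stop crs
# crsctl start cluster -all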

Clusterware Startup Sequence

The following is the Clusterware startup sequence (the full diagram is in the "Oracle Clusterware Administration and Deployment Guide").
Don't let the number of processes scare you too much.  You aren't responsible for managing all of them, that is the Clusterware's job!
Short summary of the startup sequence: INIT spawns init.ohasd (with respawn), which in turn starts the OHASD process (Oracle High Availability Services Daemon).  This daemon spawns 4 agent processes; a command for viewing these OHASD-managed resources follows the level breakdown below.
Level 1: OHASD Spawns:
  • cssdagent - Agent responsible for spawning CSSD.
  • orarootagent - Agent responsible for managing all root owned ohasd resources.
  • oraagent - Agent responsible for managing all oracle owned ohasd resources.
  • cssdmonitor - Monitors CSSD and node health (along with the cssdagent).
Level 2: OHASD rootagent spawns:
  • CRSD - Primary daemon responsible for managing cluster resources.
  • CTSSD - Cluster Time Synchronization Services Daemon
  • Diskmon
  • ACFS (ASM Cluster File System) Drivers 
Level 2: OHASD oraagent spawns:
  • MDNSD - Used for DNS lookup
  • GIPCD - Used for inter-process and inter-node communication
  • GPNPD - Grid Plug & Play Profile Daemon
  • EVMD - Event Monitor Daemon
  • ASM - Resource for monitoring ASM instances
Level 3: CRSD spawns:
  • orarootagent - Agent responsible for managing all root owned crsd resources.
  • oraagent - Agent responsible for managing all oracle owned crsd resources.
Level 4: CRSD rootagent spawns:
  • Network resource - To monitor the public network
  • SCAN VIP(s) - Single Client Access Name Virtual IPs
  • Node VIPs - One per node
  • ACFS Registry - For mounting ASM Cluster File System
  • GNS VIP (optional) - VIP for GNS
Level 4: CRSD oraagent spawns:
  • ASM Resource - ASM Instance(s) resource
  • Diskgroup - Used for managing/monitoring ASM diskgroups.  
  • DB Resource - Used for monitoring and managing the DB and instances
  • SCAN Listener - Listener for single client access name, listening on SCAN VIP
  • Listener - Node listener listening on the Node VIP
  • Services - Used for monitoring and managing services
  • ONS - Oracle Notification Service
  • eONS - Enhanced Oracle Notification Service
  • GSD - For 9i backward compatibility
  • GNS (optional) - Grid Naming Service - Performs name resolution
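
To see the OHASD-managed (lower-stack) resources described above on a running node, crsctl accepts an -init flag:

$ <GRID_HOME>/bin/crsctl stat res -t -init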

Important Log Locations

Clusterware daemon logs are all under <GRID_HOME>/log/<node name>/.  Structure under <GRID_HOME>/log/<node name>/:
alert.log - look here first for most clusterware issues
./admin:
./agent:
./agent/crsd:
./agent/crsd/oraagent_oracle:
./agent/crsd/ora_oc4j_type_oracle:
./agent/crsd/orarootagent_root:
./agent/ohasd:
./agent/ohasd/oraagent_oracle:
./agent/ohasd/oracssdagent_root:
./agent/ohasd/oracssdmonitor_root:
./agent/ohasd/orarootagent_root:
./client:
./crsd:
./cssd:
./ctssd:
./diskmon:
./evmd:
./gipcd:
./gnsd:
./gpnpd:
./mdnsd:
./ohasd:
./racg:
./racg/racgeut:
./racg/racgevtf:
./racg/racgmain:
./srvm:
The cfgtoollogs directories under <GRID_HOME> and $ORACLE_BASE contain other important logfiles, specifically for rootcrs.pl and configuration assistants like ASMCA, etc...
ASM logs live under $ORACLE_BASE/diag/asm/+asm/<ASM instance name>/trace
The diagcollection.pl script under <GRID_HOME>/bin can be used to automatically collect important files for support.  Run this as the root user. 

Clusterware Resource Status Check

The following command will display the status of all cluster resources:

$ ./crsctl status resource -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATADG.dg
               ONLINE  ONLINE       racbde1
               ONLINE  ONLINE       racbde2
ora.LISTENER.lsnr
               ONLINE  ONLINE       racbde1
               ONLINE  ONLINE       racbde2
ora.SYSTEMDG.dg
               ONLINE  ONLINE       racbde1
               ONLINE  ONLINE       racbde2
ora.asm
               ONLINE  ONLINE       racbde1                  Started
               ONLINE  ONLINE       racbde2                  Started
ora.eons
               ONLINE  ONLINE       racbde1
               ONLINE  ONLINE       racbde2
ora.gsd
               OFFLINE OFFLINE      racbde1
               OFFLINE OFFLINE      racbde2
ora.net1.network
               ONLINE  ONLINE       racbde1
               ONLINE  ONLINE       racbde2
ora.ons
               ONLINE  ONLINE       racbde1
               ONLINE  ONLINE       racbde2
ora.registry.acfs
               ONLINE  ONLINE       racbde1
               ONLINE  ONLINE       racbde2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       racbde1
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       racbde2
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       racbde2
ora.oc4j
      1        OFFLINE OFFLINE
ora.rac.db
      1        ONLINE  ONLINE       racbde1                  Open
      2        ONLINE  ONLINE       racbde2                  Open
ora.racbde1.vip
      1        ONLINE  ONLINE       racbde1
ora.racbde2.vip
      1        ONLINE  ONLINE       racbde2
ora.scan1.vip
      1        ONLINE  ONLINE       racbde1
ora.scan2.vip
      1        ONLINE  ONLINE       racbde2
ora.scan3.vip
      1        ONLINE  ONLINE       racbde2

Clusterware Resource Administration

Srvctl and crsctl are used to manage clusterware resources.  The general rule is to use srvctl for whatever resource management you can.  Crsctl should only be used for things that you cannot do with srvctl (like start the cluster).  Both have a help feature to see the available syntax.
Note that the following only shows the available srvctl syntax.  For additional explanation on what these commands do, see the Oracle Documentation
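
As a quick illustration before the full usage listing, a few common srvctl calls (the database and instance names here are placeholders, not from this cluster):

$ srvctl status database -d racdb
$ srvctl start instance -d racdb -i racdb1
$ srvctl stop instance -d racdb -i racdb1 -o immediate
$ srvctl status nodeapps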

Srvctl syntax:
$ srvctl -h
Usage: srvctl [-V]
Usage: srvctl add database -d -o [-m ] [-p ] [-r {PRIMARY | PHYSICAL_STANDBY | LOGICAL_STANDBY | SNAPSHOT_STANDBY}] [-s ] [-t ] [-n ] [-y {AUTOMATIC | MANUAL}] [-g ""] [-x ] [-a ""]
Usage: srvctl config database [-d [-a] ]
Usage: srvctl start database -d [-o ]
Usage: srvctl stop database -d [-o ] [-f]
Usage: srvctl status database -d [-f] [-v]
Usage: srvctl enable database -d [-n ]
Usage: srvctl disable database -d [-n ]
Usage: srvctl modify database -d [-n ] [-o ] [-u ] [-m ] [-p ] [-r {PRIMARY | PHYSICAL_STANDBY | LOGICAL_STANDBY | SNAPSHOT_STANDBY}] [-s ] [-t ] [-y {AUTOMATIC | MANUAL}] [-g "" [-x ]] [-a ""|-z]
Usage: srvctl remove database -d [-f] [-y]
Usage: srvctl getenv database -d [-t ""]
Usage: srvctl setenv database -d {-t =[,=,...] | -T =}
Usage: srvctl unsetenv database -d -t ""

Usage: srvctl add instance -d -i -n [-f]
Usage: srvctl start instance -d {-n [-i ] | -i } [-o ]
Usage: srvctl stop instance -d {-n | -i }  [-o ] [-f]
Usage: srvctl status instance -d {-n | -i } [-f] [-v]
Usage: srvctl enable instance -d -i ""
Usage: srvctl disable instance -d -i ""
Usage: srvctl modify instance -d -i { -n | -z }
Usage: srvctl remove instance -d [-i ] [-f] [-y]

Usage: srvctl add service -d -s {-r "" [-a ""] [-P {BASIC | NONE | PRECONNECT}] | -g [-c {UNIFORM | SINGLETON}] } [-k   ] [-l [PRIMARY][,PHYSICAL_STANDBY][,LOGICAL_STANDBY][,SNAPSHOT_STANDBY]] [-y {AUTOMATIC | MANUAL}] [-q {TRUE|FALSE}] [-x {TRUE|FALSE}] [-j {SHORT|LONG}] [-B {NONE|SERVICE_TIME|THROUGHPUT}] [-e {NONE|SESSION|SELECT}] [-m {NONE|BASIC}] [-z ] [-w ]
Usage: srvctl add service -d -s -u {-r "" | -a ""}
Usage: srvctl config service -d [-s ] [-a]
Usage: srvctl enable service -d -s "" [-i | -n ]
Usage: srvctl disable service -d -s "" [-i | -n ]
Usage: srvctl status service -d [-s ""] [-f] [-v]
Usage: srvctl modify service -d -s -i -t [-f]
Usage: srvctl modify service -d -s -i -r [-f]
Usage: srvctl modify service -d -s -n -i "" [-a ""] [-f]
Usage: srvctl modify service -d -s [-c {UNIFORM | SINGLETON}] [-P {BASIC|PRECONNECT|NONE}] [-l [PRIMARY][,PHYSICAL_STANDBY][,LOGICAL_STANDBY][,SNAPSHOT_STANDBY]] [-y {AUTOMATIC | MANUAL}][-q {true|false}] [-x {true|false}] [-j {SHORT|LONG}] [-B {NONE|SERVICE_TIME|THROUGHPUT}] [-e {NONE|SESSION|SELECT}] [-m {NONE|BASIC}] [-z ] [-w ]
Usage: srvctl relocate service -d -s {-i -t | -c -n } [-f]
       Specify instances for an administrator-managed database, or nodes for a policy managed database
Usage: srvctl remove service -d -s [-i ] [-f]
Usage: srvctl start service -d [-s "" [-n | -i ] ] [-o ]
Usage: srvctl stop service -d [-s "" [-n | -i ] ] [-f]

Usage: srvctl add nodeapps { { -n -A //[if1[|if2...]] } | { -S //[if1[|if2...]] } } [-p ] [-m ] [-e ] [-l ]  [-r ] [-t [:][,[:]...]] [-v]
Usage: srvctl config nodeapps [-a] [-g] [-s] [-e]
Usage: srvctl modify nodeapps {[-n -A /[/if1[|if2|...]]] | [-S /[/if1[|if2|...]]]} [-m ] [-p ] [-e ] [ -l ] [-r ] [-t [:][,[:]...]] [-v]
Usage: srvctl start nodeapps [-n ] [-v]
Usage: srvctl stop nodeapps [-n ] [-f] [-r] [-v]
Usage: srvctl status nodeapps
Usage: srvctl enable nodeapps [-v]
Usage: srvctl disable nodeapps [-v]
Usage: srvctl remove nodeapps [-f] [-y] [-v]
Usage: srvctl getenv nodeapps [-a] [-g] [-s] [-e] [-t ""]
Usage: srvctl setenv nodeapps {-t "=[,=,...]" | -T "="}
Usage: srvctl unsetenv nodeapps -t "" [-v]

Usage: srvctl add vip -n -k -A //[if1[|if2...]] [-v]
Usage: srvctl config vip { -n | -i }
Usage: srvctl disable vip -i [-v]
Usage: srvctl enable vip -i [-v]
Usage: srvctl remove vip -i "" [-f] [-y] [-v]
Usage: srvctl getenv vip -i [-t ""]
Usage: srvctl start vip { -n | -i } [-v]
Usage: srvctl stop vip { -n   | -i } [-f] [-r] [-v]
Usage: srvctl status vip { -n | -i }
Usage: srvctl setenv vip -i {-t "=[,=,...]" | -T "="}
Usage: srvctl unsetenv vip -i -t "" [-v]

Usage: srvctl add asm [-l ]
Usage: srvctl start asm [-n ] [-o ]
Usage: srvctl stop asm [-n ] [-o ] [-f]
Usage: srvctl config asm [-a]
Usage: srvctl status asm [-n ] [-a]
Usage: srvctl enable asm [-n ]
Usage: srvctl disable asm [-n ]
Usage: srvctl modify asm [-l ]
Usage: srvctl remove asm [-f]
Usage: srvctl getenv asm [-t [, ...]]
Usage: srvctl setenv asm -t "= [,...]" | -T "="
Usage: srvctl unsetenv asm -t "[, ...]"

Usage: srvctl start diskgroup -g [-n ""]
Usage: srvctl stop diskgroup -g [-n ""] [-f]
Usage: srvctl status diskgroup -g [-n ""] [-a]
Usage: srvctl enable diskgroup -g [-n ""]
Usage: srvctl disable diskgroup -g [-n ""]
Usage: srvctl remove diskgroup -g [-f]

Usage: srvctl add listener [-l ] [-s] [-p "[TCP:][, ...][/IPC:][/NMP:][/TCPS:] [/SDP:]"] [-o ] [-k ]
Usage: srvctl config listener [-l ] [-a]
Usage: srvctl start listener [-l ] [-n ]
Usage: srvctl stop listener [-l ] [-n ] [-f]
Usage: srvctl status listener [-l ] [-n ]
Usage: srvctl enable listener [-l ] [-n ]
Usage: srvctl disable listener [-l ] [-n ]
Usage: srvctl modify listener [-l ] [-o ] [-p "[TCP:][, ...][/IPC:][/NMP:][/TCPS:] [/SDP:]"] [-u ] [-k ]
Usage: srvctl remove listener [-l | -a] [-f]
Usage: srvctl getenv listener [-l ] [-t [, ...]]
Usage: srvctl setenv listener [-l ] -t "= [,...]" | -T "="
Usage: srvctl unsetenv listener [-l ] -t "[, ...]"

Usage: srvctl add scan -n [-k [-S /[/if1[|if2|...]]]]
Usage: srvctl config scan [-i ]
Usage: srvctl start scan [-i ] [-n ]
Usage: srvctl stop scan [-i ] [-f]
Usage: srvctl relocate scan -i [-n ]
Usage: srvctl status scan [-i ]
Usage: srvctl enable scan [-i ]
Usage: srvctl disable scan [-i ]
Usage: srvctl modify scan -n
Usage: srvctl remove scan [-f] [-y]
Usage: srvctl add scan_listener [-l ] [-s] [-p [TCP:][/IPC:][/NMP:][/TCPS:] [/SDP:]]
Usage: srvctl config scan_listener [-i ]
Usage: srvctl start scan_listener [-n ] [-i ]
Usage: srvctl stop scan_listener [-i ] [-f]
Usage: srvctl relocate scan_listener -i [-n ]
Usage: srvctl status scan_listener [-i ]
Usage: srvctl enable scan_listener [-i ]
Usage: srvctl disable scan_listener [-i ]
Usage: srvctl modify scan_listener {-u|-p [TCP:][/IPC:][/NMP:][/TCPS:] [/SDP:]}
Usage: srvctl remove scan_listener [-f] [-y]

Usage: srvctl add srvpool -g [-l ] [-u ] [-i ] [-n ""]
Usage: srvctl config srvpool [-g ]
Usage: srvctl status srvpool [-g ] [-a]
Usage: srvctl status server -n "" [-a]
Usage: srvctl relocate server -n "" -g [-f]
Usage: srvctl modify srvpool -g [-l ] [-u ] [-i ] [-n ""]
Usage: srvctl remove srvpool -g

Usage: srvctl add oc4j [-v]
Usage: srvctl config oc4j
Usage: srvctl start oc4j [-v]
Usage: srvctl stop oc4j [-f] [-v]
Usage: srvctl relocate oc4j [-n ] [-v]
Usage: srvctl status oc4j [-n ]
Usage: srvctl enable oc4j [-n ] [-v]
Usage: srvctl disable oc4j [-n ] [-v]
Usage: srvctl modify oc4j -p [-v]
Usage: srvctl remove oc4j [-f] [-v]

Usage: srvctl start home -o -s -n
Usage: srvctl stop home -o -s -n [-t ] [-f]
Usage: srvctl status home -o -s -n

Usage: srvctl add filesystem -d -v -g [-m ] [-u ]
Usage: srvctl config filesystem -d
Usage: srvctl start filesystem -d [-n ]
Usage: srvctl stop filesystem -d [-n ] [-f]
Usage: srvctl status filesystem -d
Usage: srvctl enable filesystem -d
Usage: srvctl disable filesystem -d
Usage: srvctl modify filesystem -d -u
Usage: srvctl remove filesystem -d [-f]

Usage: srvctl start gns [-v] [-l ] [-n ]
Usage: srvctl stop gns [-v] [-n ] [-f]
Usage: srvctl config gns [-v] [-a] [-d] [-k] [-m] [-n ] [-p] [-s] [-V]
Usage: srvctl status gns -n
Usage: srvctl enable gns [-v] [-n ]
Usage: srvctl disable gns [-v] [-n ]
Usage: srvctl relocate gns [-v] [-n ] [-f]
Usage: srvctl add gns [-v] -d -i [-k [-S /[/]]]
Usage: srvctl modify gns [-v] [-f] [-l ] [-d ] [-i ] [-N -A ] [-D -A ] [-c -a ] [-u ] [-r ] [-V ] [-F ] [-R ] [-X ]
Usage: srvctl remove gns [-f] [-d ]


Crsctl Syntax (for further explanation of these commands see the Oracle Documentation)
$ ./crsctl -h
Usage: crsctl add       - add a resource, type or other entity
       crsctl check     - check a service, resource or other entity
       crsctl config    - output autostart configuration
       crsctl debug     - obtain or modify debug state
       crsctl delete    - delete a resource, type or other entity
       crsctl disable   - disable autostart
       crsctl enable    - enable autostart
       crsctl get       - get an entity value
       crsctl getperm   - get entity permissions
       crsctl lsmodules - list debug modules
       crsctl modify    - modify a resource, type or other entity
       crsctl query     - query service state
       crsctl pin       - Pin the nodes in the nodelist
       crsctl relocate  - relocate a resource, server or other entity
       crsctl replace   - replaces the location of voting files
       crsctl setperm   - set entity permissions
       crsctl set       - set an entity value
       crsctl start     - start a resource, server or other entity
       crsctl status    - get status of a resource or other entity
       crsctl stop      - stop a resource, server or other entity
       crsctl unpin     - unpin the nodes in the nodelist
       crsctl unset     - unset a entity value, restoring its default

For more information on each command, run "crsctl <command> -h". 

OCRCONFIG Options:

Note that the following only shows the available ocrconfig syntax. For additional explanation on what these commands do, see the Oracle Documentation.

$ ./ocrconfig -help
Name:
        ocrconfig - Configuration tool for Oracle Cluster/Local Registry.

Synopsis:
        ocrconfig [option]
        option:
                [-local] -export
                                                    - Export OCR/OLR contents to a file
                [-local] -import          - Import OCR/OLR contents from a file
                [-local] -upgrade [ []]
                                                    - Upgrade OCR from previous version
                -downgrade [-version ]
                                                    - Downgrade OCR to the specified version
                [-local] -backuploc        - Configure OCR/OLR backup location
                [-local] -showbackup [auto|manual]  - Show OCR/OLR backup information
                [-local] -manualbackup              - Perform OCR/OLR backup
                [-local] -restore         - Restore OCR/OLR from physical backup
                -replace -replacement
                                                    - Replace a OCR device/file with
                -add                      - Add a new OCR device/file
                -delete                   - Remove a OCR device/file
                -overwrite                          - Overwrite OCR configuration on disk
                -repair -add | -delete | -replace -replacement
                                                    - Repair OCR configuration on the local node
                -help                               - Print out this help information

Note:
        * A log file will be created in
        $ORACLE_HOME/log//client/ocrconfig_.log. Please ensure
        you have file creation privileges in the above directory before
        running this tool.
        * Only -local -showbackup [manual] is supported.
        * Use option '-local' to indicate that the operation is to be performed on the Oracle Local Registry
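
Typical invocations (run as root from <GRID_HOME>/bin; the export filename is only an illustration):

# ./ocrconfig -showbackup
# ./ocrconfig -manualbackup
# ./ocrconfig -export /tmp/ocr_backup.exp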


OLSNODES Options

Note that the following only shows the available olsnodes syntax. For additional explanation on what these commands do, see the Oracle Documentation.

$ ./olsnodes -h
Usage: olsnodes [ [-n] [-i] [-s] [-t] [ | -l [-p]] | [-c] ] [-g] [-v]
        where
                -n print node number with the node name
                -p print private interconnect address for the local node
                -i print virtual IP address with the node name
                print information for the specified node
                -l print information for the local node
                -s print node status - active or inactive
                -t print node type - pinned or unpinned
                -g turn on logging
                -v Run in debug mode; use at direction of Oracle Support only.
                -c print clusterware name
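
For example, to list every node with its node number, VIP, status, and pinned state:

$ ./olsnodes -n -i -s -t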

Cluster Verification Options

Note that the following only shows the available cluvfy syntax. For additional explanation on what these commands do, see the Oracle Documentation.

Component Options:
$ ./cluvfy comp -list
USAGE:
cluvfy comp    [-verbose]

Valid components are:
        nodereach : checks reachability between nodes
        nodecon   : checks node connectivity
        cfs       : checks CFS integrity
        ssa       : checks shared storage accessibility
        space     : checks space availability
        sys       : checks minimum system requirements
        clu       : checks cluster integrity
        clumgr    : checks cluster manager integrity
        ocr       : checks OCR integrity
        olr       : checks OLR integrity
        ha        : checks HA integrity
        crs       : checks CRS integrity
        nodeapp   : checks node applications existence
        admprv    : checks administrative privileges
        peer      : compares properties with peers
        software  : checks software distribution
        asm       : checks ASM integrity
        acfs       : checks ACFS integrity
        gpnp      : checks GPnP integrity
        gns       : checks GNS integrity
        scan      : checks SCAN configuration
        ohasd     : checks OHASD integrity
        clocksync      : checks Clock Synchronization
        vdisk      : check Voting Disk Udev settings


Stage Options:
$ ./cluvfy stage -list
USAGE:
cluvfy stage {-pre|-post}   [-verbose]

Valid stage options and stage names are:
        -post hwos    :  post-check for hardware and operating system
        -pre  cfs     :  pre-check for CFS setup
        -post cfs     :  post-check for CFS setup
        -pre  crsinst :  pre-check for CRS installation
        -post crsinst :  post-check for CRS installation
        -pre  hacfg   :  pre-check for HA configuration
        -post hacfg   :  post-check for HA configuration
        -pre  dbinst  :  pre-check for database installation
        -pre  acfscfg  :  pre-check for ACFS Configuration.
        -post acfscfg  :  post-check for ACFS Configuration.
        -pre  dbcfg   :  pre-check for database configuration
        -pre  nodeadd :  pre-check for node addition.
        -post nodeadd :  post-check for node addition.
        -post nodedel :  post-check for node deletion.
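
Typical invocations (the "-n all" node list covers every cluster node):

$ ./cluvfy stage -post crsinst -n all -verbose
$ ./cluvfy comp scan -verbose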


Database - RAC/Scalability Community

To discuss this topic further with Oracle experts and industry peers, we encourage you to review, join or start a discussion in the My Oracle Support Database - RAC/Scalability Community

References

NOTE:1050693.1 - Troubleshooting 11.2 Clusterware Node Evictions (Reboots)
NOTE:1053970.1 - Troubleshooting 11.2 Grid Infrastructure root.sh Issues
NOTE:1054006.1 - CTSSD Runs in Observer Mode Even Though No Time Sync Software is Running


NOTE:184875.1 - How To Check The Certification Matrix for Real Application Clusters
NOTE:259301.1 - CRS and 10g/11.1 Real Application Clusters
NOTE:810394.1 - RAC and Oracle Clusterware Best Practices and Starter Kit (Platform Independent)
NOTE:887522.1 - 11gR2 Grid Infrastructure Single Client Access Name (SCAN) Explained
NOTE:946332.1 - Unable To Create 10.1 or 10.2 or 11.1(< 11gR2) ASM RAC Databases (ORA-29702) Using Brand New 11gR2 Grid Infrastructure Installation .