Sunday, December 20, 2020

Oracle RAC Startup Sequence and Daemon Functioning

ORACLE STARTUP SEQUENCE





My Notes:

  • Oracle High Availability Services Daemon (OHASD): The init.ohasd daemon is essential for Clusterware startup and is started from /etc/inittab. Entries in the inittab file are monitored by the init daemon (pid 1), which reacts whenever the inittab file is modified. The init daemon monitors all processes listed in inittab and acts according to the configuration of each entry. For example, if init.ohasd fails for some reason, it is immediately restarted by the init daemon.
  • Following is an example entry in the inittab file. Fields are separated by a colon: the second field indicates that init.ohasd will be started in run level 3, and the third field is the action field. The respawn action means that if the target process already exists, init simply continues scanning the inittab file; if the target process does not exist (or has died), init restarts it.
  • # cat /etc/inittab
  • h1:3:respawn:/etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null
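  • If the cluster stack is up, the presence of ohasd can be verified from the operating system and with crsctl (an illustrative check; exact process names can vary slightly by platform and release):
  • $ ps -ef | grep -i ohasd | grep -v grep
  • $ crsctl check has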

  • With 11gR2, Oracle introduced the Oracle Local Registry (OLR). The OLR is the OCR's local counterpart and a new feature of Grid Infrastructure. The OLR file is located at <grid_home>/cdata/<hostname>.olr, and its location is recorded in /etc/oracle/olr.loc. Each node has its own copy of the file in the Grid Infrastructure software home. The OLR stores important security contexts used by the Oracle High Availability Service early in the Clusterware start sequence. The information stored in the OLR is needed by the Oracle High Availability Services daemon (OHASD) to start; this includes data about GPnP wallets, Clusterware configuration, and version information. The information in the OLR and the Grid Plug and Play configuration file is needed to locate the voting disks. If they are stored in ASM, the discovery string in the GPnP profile is used by the Cluster Synchronization Services daemon to look them up.
  • To check the OLR, execute the ocrcheck -local command on the desired node. 
  • $ ocrcheck -local
  • To view the contents of the OLR, execute the ocrdump -local command with the -stdout option so the dump is written to standard output:
  • $ ocrdump -local -stdout
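  • On any given node, the OLR location recorded in olr.loc can be confirmed directly (illustrative; the path shown is the Linux default):
  • # cat /etc/oracle/olr.loc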

  • Cluster configuration information is maintained in the OCR. The OCR relies on a distributed shared-cache architecture to optimize queries and to perform clusterwide atomic updates against the cluster registry. Each node in the cluster maintains an in-memory copy of the OCR, and the CRSD process on each node accesses that local OCR cache. Only one CRSD process actually reads from and writes to the OCR file on shared storage; that process is responsible for refreshing its own local cache as well as the OCR caches on the other nodes in the cluster. For queries against the cluster registry, OCR clients communicate directly with the local CRS daemon (CRSD) process on the node from which they originate. When clients need to update the OCR, they communicate through their local CRSD process to the CRSD process that is performing input/output (I/O) for writing to the registry on disk. (Illustrative ocrcheck commands follow this list.)
  • CSS is the service that determines which nodes in the cluster are available, and it provides cluster group membership and simple locking services to other processes. CSS typically determines node availability via communication over a dedicated private network, with a voting disk used as a secondary communication mechanism. This is done by sending heartbeat messages through the network and the voting disk (see the crsctl voting-disk example after this list).
  • osysmond: The system monitor service (osysmond) is the monitoring and operating system metric collection service that sends data to the cluster logger service, ologgerd. The cluster logger service receives information from all the nodes and persists it in the Cluster Health Monitor (CHM) repository. There is one system monitor service on every node.
  • ologgerd: The cluster logger service (ologgerd) runs on only one node in the cluster, and another node is chosen by the cluster logger service to host the standby for the master cluster logger service. If the master cluster logger service fails, the node where the standby resides takes over as master and selects a new node for the standby. The master manages the operating system metric database in the CHM repository and interacts with the standby to manage a replica of the master operating system metrics database. (An illustrative oclumon example follows this list.)
  • crsd: The Cluster Ready Services (CRS) process is the primary program for managing high availability operations in a cluster. The CRS daemon (crsd) manages cluster resources based on the configuration information stored in the OCR for each resource; this includes start, stop, monitor, and failover operations. The crsd process generates events when the status of a resource changes. When Oracle RAC is installed, the crsd process monitors the Oracle database components and automatically restarts them when a failure occurs (resource status can be listed with the crsctl commands shown after this list).
  • diskmon: The diskmon process monitors and performs I/O fencing for Oracle Exadata.
  • ACFS Drivers: These drivers are loaded in support of ASM Dynamic Volume Manager
    (ADVM) and ASM Cluster File System (ACFS).
  • ctssd: The Cluster Time Synchronization Service process provides time synchronization for the cluster in the absence of ntpd. If ntpd is configured, ctssd runs in observer mode (its current mode can be checked with the crsctl command shown after this list).
  • ons: The ONS or Oracle Notification Service is a publishing and subscribing service for communicating Fast Application Notification (FAN) events.
  • gipcd: The Grid Interprocess Communication (GIPC) daemon is a support process that enables Redundant Interconnect Usage. Redundant Interconnect Usage enables load balancing and high availability across multiple (up to four) private networks, also known as interconnects (the configured interfaces can be listed with the oifcfg command shown after this list).
  • mdnsd: The Multicast Domain Name Service (mDNS) daemon is used by Grid Plug and Play to locate profiles in the cluster, as well as by GNS to perform name resolution. 
  • evmd: The Event Management (EVM) daemon is a background process that publishes the events that Oracle Clusterware creates. 
  • ASM: ASM provides disk management for Oracle Clusterware and Oracle Database.
  • gpnpd: Grid Plug and Play (GPNPD) provides access to the Grid Plug and Play profile and coordinates updates to the profile among the nodes of the cluster to ensure that all the nodes have the most recent profile (the profile can be dumped with the gpnptool command shown after this list).
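
The OCR described above can be inspected in much the same way as the OLR; the following commands are illustrative and assume a running Grid Infrastructure stack (run ocrcheck as the grid owner or root, depending on the checks required):
$ cat /etc/oracle/ocr.loc
$ ocrcheck
$ ocrdump -stdout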
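
The voting disks that CSS uses as its secondary heartbeat mechanism can be listed with crsctl (illustrative):
$ crsctl check css
$ crsctl query css votedisk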
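
To see which node currently hosts the master cluster logger service, and to pull recent CHM metrics from the repository, oclumon can be used (illustrative; option syntax can differ between releases):
$ oclumon manage -get master
$ oclumon dumpnodeview -allnodes -last "00:05:00"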
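
The resources managed by crsd, and the lower-level resources started directly by OHASD, can be listed with crsctl (illustrative):
$ crsctl stat res -t
$ crsctl stat res -t -init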
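
Whether ctssd is running in active or observer mode can be checked with crsctl, and clock synchronization across the nodes verified with cluvfy (illustrative):
$ crsctl check ctss
$ cluvfy comp clocksync -n all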
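
The interfaces registered as public and cluster interconnect networks, which gipcd uses for Redundant Interconnect Usage, can be listed with oifcfg (illustrative):
$ oifcfg getif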
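
The GPnP profile served by gpnpd, including the ASM discovery string used to locate the voting disks, can be dumped with gpnptool (illustrative):
$ gpnptool get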




