11gR2 Clusterware Key Facts

- 11gR2 Clusterware is required to be up and running prior to installing an 11gR2 Real Application Clusters database.
- The GRID home consists of the Oracle Clusterware and ASM. ASM should not be in a separate home.
- The 11gR2 Clusterware can be installed in "Standalone" mode for ASM and/or "Oracle Restart" single-node support. This clusterware is a subset of the full clusterware described in this document.
- The 11gR2 Clusterware can be run by itself or on top of vendor clusterware. See the certification matrix for certified combinations. Ref: Note 184875.1 "How To Check The Certification Matrix for Real Application Clusters".
- The GRID Home and the RAC/DB Home must be installed in different locations.
- The 11gR2 Clusterware requires shared OCR and voting files. These can be stored on ASM or a cluster filesystem.
- The OCR is backed up automatically every 4 hours to <GRID_HOME>/cdata/<cluster name>/ and can be restored via ocrconfig (see the ocrconfig sketch after this list).
- The voting file is backed up into the OCR at every configuration change and can be restored via crsctl (see the voting-file sketch after this list).
- The 11gR2 Clusterware requires at least one private network for inter-node communication and at least one public network for external communication. Several virtual IPs need to be registered with DNS: the node VIPs (one per node) and the SCAN VIPs (up to 3). This can be done manually by your network administrator, or you can optionally configure the "GNS" (Grid Naming Service) in the Oracle clusterware to handle it for you (note that GNS requires its own VIP). A quick DNS check is sketched after this list.
- A SCAN (Single Client Access Name) is provided to clients to connect to. For more info on SCAN see Note: 887522.1.
- The root.sh script run at the end of the clusterware installation starts the clusterware stack. For information on troubleshooting root.sh issues see Note: 1053970.1.
- Only one set of clusterware daemons can be running per node.
- On Unix, the clusterware stack is started via the init.ohasd script referenced in /etc/inittab with "respawn" (an example entry appears after this list).
- A node can be evicted (rebooted) if it is deemed to be unhealthy. This is done so that the health of the entire cluster can be maintained. For more information on this see Note: 1050693.1 "Troubleshooting 11.2 Clusterware Node Evictions (Reboots)".
- Either have vendor time synchronization software (like NTP) fully configured and running, or have it not configured at all and let CTSS handle time synchronization. See Note: 1054006.1 for more information. A quick way to check which mode is in effect is sketched after this list.
- If installing DB homes for a lower version, you will need to pin the nodes in the clusterware or you will see ORA-29702 errors. See Note: 946332.1 for more info; pinning is sketched after this list.
- The clusterware stack can be started by either booting the machine, running "crsctl start crs" to start the clusterware stack, or running "crsctl start cluster" to start the clusterware on all nodes. Note that crsctl is in the <GRID_HOME>/bin directory. (The commands are collected after this list.)
- The clusterware stack can be stopped by either shutting down the machine, running "crsctl stop crs" to stop the clusterware stack, or running "crsctl stop cluster" to stop the clusterware on all nodes. Note that crsctl is in the <GRID_HOME>/bin directory.
- Killing clusterware daemons is not supported.
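As a reference for the OCR point above, here is a minimal ocrconfig sketch (run as root; the Grid home path /u01/app/11.2.0/grid and the cluster name "mycluster" are hypothetical):

    # List the automatic OCR backups the clusterware has kept
    /u01/app/11.2.0/grid/bin/ocrconfig -showbackup

    # Take an on-demand backup before risky maintenance
    /u01/app/11.2.0/grid/bin/ocrconfig -manualbackup

    # Restore from a backup copy; the clusterware stack must be down first
    /u01/app/11.2.0/grid/bin/ocrconfig -restore /u01/app/11.2.0/grid/cdata/mycluster/backup00.ocr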
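For the voting-file point, a hedged crsctl sketch (the ASM disk group name +DATA is an assumption):

    # Show the voting files currently in use
    crsctl query css votedisk

    # Re-create the voting files on an ASM disk group; the contents come from
    # the copy kept in the OCR, so no separate dump file is needed
    crsctl replace votedisk +DATA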
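One simple way to confirm the DNS registrations before installing; the names myclu-scan and node1-vip are placeholders for your SCAN and node VIP host names:

    # The SCAN name should resolve to up to 3 addresses, returned round-robin
    nslookup myclu-scan

    # Each node VIP should resolve to one address on the public network
    nslookup node1-vip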
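The /etc/inittab entry on Linux looks roughly like the line below (the exact runlevel field can vary by platform); "respawn" makes init restart ohasd if it dies:

    h1:35:respawn:/etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null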
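To check which time-synchronization mode is in effect, a quick sketch (CTSS runs in observer mode when NTP is found, and in active mode when it is not):

    # Reports whether CTSS is in observer or active mode
    crsctl check ctss

    # Cluster-wide clock synchronization verification
    cluvfy comp clocksync -n all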
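Pinning a node for a pre-11.2 database home, as a sketch (run as root; the node name racnode1 is hypothetical):

    # Fix the node's node number so pre-11.2 databases can run on it
    crsctl pin css -n racnode1

    # Verify: lists each node with its node number and Pinned/Unpinned state
    olsnodes -t -n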
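Finally, the start and stop commands from the last points, run as root from <GRID_HOME>/bin:

    # Start/stop the full stack on the local node only
    crsctl start crs
    crsctl stop crs

    # Start/stop the clusterware across the cluster; "crsctl start cluster"
    # requires OHASD to already be running on each node
    crsctl start cluster -all
    crsctl stop cluster -all

    # Health checks
    crsctl check crs            # local node
    crsctl check cluster -all   # all nodes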
