How do you remove and add Oracle 19c cluster nodes?

Although I have never mastered it in much depth, I have come to dislike the Oracle database more and more, and I really did not want to keep fiddling with this heavy, complex piece of software. Today I spent several hours migrating and reconfiguring a lab environment, so I am recording the steps here in case they are useful later.

■ Check the database configuration

[oracle@node1:0 ~]$ srvctl config database -db likingdb
Database unique name: likingdb
Database name: likingdb
Database instances: likingdb1,likingdb2,likingdb3
Configured nodes: node1,node2,node3

■ Delete DB instance 3

Syntax:
dbca -silent -deleteInstance -nodeList node3 -gdbName likingdb -instanceName likingdb3 [-sysDBAUserName sysdba -sysDBAPassword password]

Actual run:
[oracle@node1:0 ~]$ dbca -silent -deleteInstance -nodeList node3 -gdbName likingdb -instanceName likingdb3
[WARNING] [DBT-19203] The Database Configuration Assistant will delete the Oracle instance and its associated OFA directory structure. All information about this instance will be deleted.
Prepare for db operation
40% complete
Deleting instance
Unable to copy the file "node3:/etc/oratab" to "/tmp/oratab.node3".
48% complete
52% complete
56% complete
60% complete
64% complete
68% complete
72% complete
76% complete
80% complete
Completing instance management.
100% complete
[FATAL] Illegal Capacity: -1
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/likingdb/likingdb.log" for further details.

■ Update the inventory

su - oracle
cd $ORACLE_HOME/oui/bin
./runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/12.2.0/db_1 "CLUSTER_NODES={node1,node2}"

■ Deconfigure the GI stack on node3

[root@node3:1 /u01/app/12.2.0/grid/crs/install]# ./rootcrs.sh -deconfig -force
Using configuration parameter file: /u01/app/12.2.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/node3/crsconfig/crsdeconfig_node3_2024-07-23_04-57-58PM.log
PRCR-1070 : Failed to check if resource ora.net1.network is registered
CRS-0184 : Cannot communicate with the CRS daemon.
PRCR-1070 : Failed to check if resource ora.helper is registered
CRS-0184 : Cannot communicate with the CRS daemon.
PRCR-1070 : Failed to check if resource ora.ons is registered
CRS-0184 : Cannot communicate with the CRS daemon.
2024/07/23 16:58:06 CLSRSC-180: An error occurred while executing the command '/u01/app/12.2.0/grid/bin/srvctl config nodeapps'
2024/07/23 16:58:17 CLSRSC-4006: Removing Oracle Trace File Analyzer (TFA) Collector.
2024/07/23 17:00:20 CLSRSC-4007: Successfully removed Oracle Trace File Analyzer (TFA) Collector.
2024/07/23 17:00:21 CLSRSC-336: Successfully deconfigured Oracle Clusterware stack on this node

■ On node1, delete node3 from the CRS configuration

[root@node1:0 /etc/oracle/scls_scr/node1/root]# crsctl delete node -n node3
CRS-4661: Node node3 successfully deleted.

■ Update the node list on the removed node

su - grid
cd $ORACLE_HOME/oui/bin
./runInstaller -updateNodeList ORACLE_HOME=/u01/app/12.2.0/grid "CLUSTER_NODES={node3}" CRS=TRUE -silent -local

■ Remove the VIP configuration from an active node

su - grid
srvctl config vip -node node3
srvctl stop vip -node node3
srvctl remove vip -vip node3-vip

■ Verify on node1

cluvfy stage -post nodedel -n node3

■ Delete DB instance 2

Syntax:
dbca -silent -deleteInstance -nodeList node2 -gdbName likingdb -instanceName likingdb2

Actual run:
[oracle@node1:0 ~]$ dbca -silent -deleteInstance -nodeList node2 -gdbName likingdb -instanceName likingdb2
[WARNING] [DBT-19203] The Database Configuration Assistant will delete the Oracle instance and its associated OFA directory structure. All information about this instance will be deleted.
Prepare for db operation
40% complete
Deleting instance
Unable to copy the file "node2:/etc/oratab" to "/tmp/oratab.node2".
48% complete
52% complete
56% complete
60% complete
64% complete
68% complete
72% complete
76% complete
80% complete
Completing instance management.
100% complete
[FATAL] Illegal Capacity: -1
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/likingdb/likingdb0.log" for further details.

■ Update the inventory

su - oracle
cd $ORACLE_HOME/oui/bin
./runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/12.2.0/db_1 "CLUSTER_NODES={node1}"

■ Deconfigure the GI stack on node2

Same as for node3 above.

■ On node1, delete node2 from the CRS configuration

crsctl delete node -n node2

■ Update the node list on the removed node

su - grid
cd $ORACLE_HOME/oui/bin
./runInstaller -updateNodeList ORACLE_HOME=/u01/app/12.2.0/grid "CLUSTER_NODES={node2}" CRS=TRUE -silent -local

■ Remove the VIP configuration from an active node

su - grid
srvctl config vip -node node2
srvctl stop vip -node node2
srvctl remove vip -vip node2-vip

■ Verify on node1

cluvfy stage -post nodedel -n node2

■ Add GI node node2

First, SSH user equivalence must be set up across the nodes:

${ORACLE_HOME}/oui/prov/resources/scripts/sshUserSetup.sh -hosts "node1 node2 node3" -user grid -advanced

su - grid
cd ${ORACLE_HOME}/bin
cluvfy comp peer -refnode node1 -n node2
cd ${ORACLE_HOME}/addnode
./addnode.sh -silent -ignoreSysPrereqs -ignorePrereqFailure "CLUSTER_NEW_NODES={node2}" "CLUSTER_NEW_PRIVATE_NODE_NAMES={node2-priv2}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node2-vip}"

This threw all kinds of errors, and I really did not feel like fighting with it any longer, so I simply forced the VIP out:

srvctl remove vip -vip node2-vip -force
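Strung together, the node-removal steps above can be sketched as one small dry-run script. This is only a sketch under the assumptions of the example environment (database likingdb, the 12.2-style home paths, nodes named nodeN); the run() wrapper, the DRYRUN flag, and the variable names are mine, and the su/ssh context switches between the oracle, grid, and root users on the different hosts are only noted in comments, not performed.

```shell
#!/bin/sh
# Hypothetical helper: replay the per-node removal sequence from this
# post for one departing node. With DRYRUN=1 (the default) it only
# prints the commands, so it never touches a real cluster.
DRYRUN=${DRYRUN:-1}
NODE=node3                               # node being evicted
DB=likingdb
INST="${DB}${NODE#node}"                 # instance name, e.g. likingdb3
DB_HOME=/u01/app/oracle/product/12.2.0/db_1
GI_HOME=/u01/app/12.2.0/grid
SURVIVORS="node1,node2"                  # nodes that remain

run() {                                  # echo in dry-run mode, else execute
  if [ "$DRYRUN" = "1" ]; then echo "+ $*"; else "$@"; fi
}

# 1. As oracle on a surviving node: drop the instance, fix the inventory
run dbca -silent -deleteInstance -nodeList "$NODE" -gdbName "$DB" -instanceName "$INST"
run "$DB_HOME/oui/bin/runInstaller" -updateNodeList ORACLE_HOME="$DB_HOME" "CLUSTER_NODES={$SURVIVORS}"

# 2. As root on the departing node: deconfigure the clusterware stack
run "$GI_HOME/crs/install/rootcrs.sh" -deconfig -force

# 3. As root/grid on a surviving node: drop the node, then its VIP
run crsctl delete node -n "$NODE"
run srvctl stop vip -node "$NODE"
run srvctl remove vip -vip "${NODE}-vip"

# 4. Verify the removal
run cluvfy stage -post nodedel -n "$NODE"
```

Flipping DRYRUN=0 would execute the commands, but given how user- and host-sensitive each step is, printing and pasting them into the right session is probably the safer use of this sketch.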
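The re-addition attempt can be sketched the same way. The hostnames, the node2-priv2 private name, and the node2-vip VIP follow the example above; the "cluvfy stage -pre nodeadd" step is not in the original transcript but is the usual pre-check before addnode.sh and might have surfaced the errors earlier.

```shell
#!/bin/sh
# Hypothetical sketch of the GI node re-addition flow; dry-run only.
DRYRUN=${DRYRUN:-1}
NEW=node2                                     # node being (re)added
GI_HOME=${ORACLE_HOME:-/u01/app/12.2.0/grid}  # grid home (assumed path)

run() { if [ "$DRYRUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

# 1. As grid: passwordless SSH among all nodes
run "$GI_HOME/oui/prov/resources/scripts/sshUserSetup.sh" \
    -hosts "node1 node2 node3" -user grid -advanced

# 2. Compare the new node against a reference node, then run the
#    dedicated pre-nodeadd check (not in the original run).
run "$GI_HOME/bin/cluvfy" comp peer -refnode node1 -n "$NEW"
run "$GI_HOME/bin/cluvfy" stage -pre nodeadd -n "$NEW"

# 3. addnode.sh with the new node's private and VIP names
run "$GI_HOME/addnode/addnode.sh" -silent \
    "CLUSTER_NEW_NODES={$NEW}" \
    "CLUSTER_NEW_PRIVATE_NODE_NAMES={${NEW}-priv2}" \
    "CLUSTER_NEW_VIRTUAL_HOSTNAMES={${NEW}-vip}"

# 4. After addnode.sh finishes, run root.sh on the new node as root,
#    then verify the addition.
run cluvfy stage -post nodeadd -n "$NEW"
```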