In RAC databases, serious cluster wait events are often observed, usually caused by slow interconnect interfaces. For this reason, faster network interfaces (such as 10 GbE) are frequently added to the servers after installation.
In addition, bonding configurations can be set up after installation to provide interface redundancy. In either case, the existing interconnect interfaces must also be changed on the cluster side.
The required operations must first be performed at the operating system level on all nodes, and every node must be reachable through the new interfaces.
In the following steps, we reconfigure the cluster to use the bonded bond1 interface instead of eth1, which is the current interconnect interface.
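As a minimal sketch, the operating system side of a bond1 configuration on a RHEL/OEL-style system using network scripts might look like the example below. The bond name, slave NIC, IP address, and bonding options are illustrative assumptions and must be adapted to your own environment.

# /etc/sysconfig/network-scripts/ifcfg-bond1 (illustrative values, not from this system)
DEVICE=bond1
BONDING_MASTER=yes
BONDING_OPTS="mode=active-backup miimon=100"
IPADDR=10.1.1.11
NETMASK=255.255.255.0
BOOTPROTO=none
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth2 (an assumed slave interface of bond1)
DEVICE=eth2
MASTER=bond1
SLAVE=yes
BOOTPROTO=none
ONBOOT=yes

After the interfaces are up on every node, a simple ping between the nodes' new interconnect addresses confirms that they can reach each other over bond1.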
Let’s first query the current situation.
-bash-4.3$ /u01/app/11.2.0/grid/bin/oifcfg getif
eth0  172.18.197.0  global  public
eth1  192.168.10.0  global  cluster_interconnect
Assume that the name of our new interface is bond1 and that its subnet is 10.1.1.0.
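If you are not sure which subnets the operating system interfaces are on, oifcfg iflist shows the interfaces and subnets visible to the clusterware. The output below is only an illustration based on the assumed values above, not captured from the real system.

-bash-4.3$ /u01/app/11.2.0/grid/bin/oifcfg iflist
eth0   172.18.197.0
eth1   192.168.10.0
bond1  10.1.1.0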
You must run the following command as root to make the change.
/u01/app/11.2.0/grid/bin/oifcfg setif -global bond1/10.1.1.0:cluster_interconnect
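The getif output shown after the change no longer lists eth1 as an interconnect, so the old definition must also be removed. Assuming the old values shown above, this is done with oifcfg delif, again as root:

/u01/app/11.2.0/grid/bin/oifcfg delif -global eth1/192.168.10.0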
It is also possible to change the public interface in the same way:
/u01/app/11.2.0/grid/bin/oifcfg setif -global bond1/10.1.1.0:public
Let’s check the situation after the change.
-bash-4.3$ /u01/app/11.2.0/grid/bin/oifcfg getif
eth0   172.18.197.0  global  public
bond1  10.1.1.0      global  cluster_interconnect
After the change, cluster services must be restarted on all cluster nodes. You can do this with the following commands, which must be run as root.
/u01/app/11.2.0/grid/bin/crsctl stop crs -f
/u01/app/11.2.0/grid/bin/crsctl start crs
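Once the stack is up again, it is worth confirming that the new interface is really in use. A simple check, assuming the instances are open, is to verify the stack with crsctl and then query gv$cluster_interconnects from SQL*Plus; the query below is a sketch of this check, not output taken from this system.

/u01/app/11.2.0/grid/bin/crsctl check crs

SQL> select inst_id, name, ip_address from gv$cluster_interconnects;

Each instance should now report bond1 with an address in the 10.1.1.0 subnet.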