Posts

Showing posts from July, 2019

Stop TFA before applying the patches...

During the application of an RU patch on Oracle 12.2, opatchauto apply was failing with the errors below. The environment was a single-instance ASM standby database.

Execution of [OPatchAutoBinaryAction] patch action failed, check log for more details.
Failures:
Patch Target : racadrdb1->/u01/app/oracle/product/12.2.0/dbhome_1 Type[sidb]
Details: [
---------------------------Patching Failed---------------------------------
Command execution failed during patching in home: /u01/app/oracle/product/12.2.0/dbhome_1, host: racadrdb1.
Command failed: /u01/app/oracle/product/12.2.0/dbhome_1/OPatch/opatchauto apply /tmp/JAN2019/28828733 -oh /u01/app/oracle/product/12.2.0/dbhome_1 -target_type oracle_database -binary -invPtrLoc /u01/app/12.2.0.1/grid12c/grid/oraInst.loc -jre /u01/app/12.2.0.1/grid12c/grid/OPatch/jre -persistresult /u01/app/oracle/product/12.2.0/dbhome_1/OPatch/auto/dbsessioninfo/sessionresult_racadrdb1_sidb.ser -analyzedresult /u01/app/oracle/product/12.2.0/dbhome_1/OPatch/a
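As the post title suggests, the workaround is to stop TFA before invoking opatchauto. A minimal sketch of that sequence follows; the TFA_HOME path is an assumption for illustration, so locate tfactl under your own Grid home before running it:

```shell
# Run as root before invoking opatchauto.
# This path is an assumption; adjust it to where tfactl lives in your Grid home.
TFA_BIN=/u01/app/12.2.0.1/grid12c/grid/tfa/bin

# Confirm TFA is running, then stop it before patching
$TFA_BIN/tfactl print status
$TFA_BIN/tfactl stop

# ... apply the RU patch with opatchauto here ...

# Restart TFA once patching completes
$TFA_BIN/tfactl start
```

Stopping TFA releases its hold on files under the Oracle home so the binary patching step can proceed.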

Oracle Cloud Control 13c Dataguard status metric error on AIX

When monitoring Oracle database Data Guard targets on AIX using Cloud Control 13c, the metric collection was failing with the error below. The same metric, "Data Guard Status", was collected successfully from the same database targets with Cloud Control 12c. A lookup on the error points towards a requirement to enable IOCP (I/O completion ports) on the target AIX servers. You can check the status of IOCP using the command below:

bash-3.2$ lsdev -Cc iocp
iocp0 Defined I/O Completion Ports
bash-3.2$

If the status is Defined as above, use smitty to change the IOCP setting to Available.
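IOCP can also be switched to Available from the command line instead of through smitty. A sketch, assuming the device name is iocp0 as in the lsdev output above:

```shell
# Check the current IOCP state (expect "Defined" before the change)
lsdev -Cc iocp

# Persist the setting in the ODM so iocp0 comes up Available at boot (-P),
# then activate the device immediately without a reboot
chdev -l iocp0 -a autoconfig=available -P
mkdev -l iocp0

# Verify: the status should now read "Available"
lsdev -Cc iocp
```

After IOCP shows as Available, re-test the "Data Guard Status" metric collection from Cloud Control.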

Adding a new network to RAC cluster

There are scenarios where it is required to add an additional public network to a RAC cluster in order to segregate network traffic from the default public network. In our case the requirement was to set up a separate network dedicated to carrying backup traffic between the RAC servers and the enterprise backup server. Below are the steps followed during the setup on an Oracle 11.2 cluster (2 nodes) running on top of Solaris 10.

Before starting, make sure the new IPs are assigned at the network level and available on all nodes of the cluster. In our case the test IPs assigned were:

node 1 - IP --> 172.20.60.46 | VIP --> 172.20.60.146
node 2 - IP --> 172.20.60.47 | VIP --> 172.20.60.147

Step 1 - Add the new IP entries to the hosts file.

##Backup Network
172.20.60.46 cbbacknode1
172.20.60.47 cbbacknode2
##Backup VIP
172.20.60.146 vip-cbbacknode1
172.20.60.147 vip-cbbacknode2

Step 2 - Check currently assigned networks
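The excerpt cuts off at Step 2. On 11.2, checking the existing networks and registering a second one typically involves oifcfg and srvctl along these lines; the network number, subnet mask, and interface name (e1000g2) are assumptions for this sketch, not values from the post:

```shell
# Step 2 - list the networks the cluster currently knows about
srvctl config network
oifcfg getif

# Register the backup subnet as a second cluster network (run as root).
# Network number 2 and interface e1000g2 are assumptions for illustration.
srvctl add network -k 2 -S 172.20.60.0/255.255.255.0/e1000g2

# Add a VIP on the new network for each node, using the hosts-file names
srvctl add vip -n node1 -k 2 -A vip-cbbacknode1/255.255.255.0/e1000g2
srvctl add vip -n node2 -k 2 -A vip-cbbacknode2/255.255.255.0/e1000g2

# Bring the new VIPs online
srvctl start vip -i vip-cbbacknode1
srvctl start vip -i vip-cbbacknode2
```

Database services or listeners that should use the backup network can then be bound to network number 2.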

NTP settings for RAC - Step vs Slew

During the Oracle RAC database installation prerequisite check, a validation is raised on the NTP configuration. Oracle needs NTP to be configured with the "slew_always" option enabled. So what is this slew setting in NTP? NTP can operate in two modes:

1. Slew mode -- NTP adjusts time drift at a maximum rate of 500 microseconds per second. This ensures that the server will not see any quick/large time differences, which Oracle considers a requirement.
2. Step mode -- In this mode, the clock can suddenly change the time in large increments to make an adjustment. This behavior can cause issues in database server operations.

Below is the configuration change required to enable slew_always on Solaris 11.3:

svccfg -s svc:/network/ntp:default setprop config/slew_always = true
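For the svccfg property change to take effect, the NTP service generally needs to be refreshed and restarted as well. A sketch of the full SMF sequence on Solaris 11, using the service name from the excerpt:

```shell
# Enable slew-only corrections in the SMF configuration for NTP
svccfg -s svc:/network/ntp:default setprop config/slew_always = true

# Push the property change into the running snapshot and restart the service
svcadm refresh svc:/network/ntp:default
svcadm restart svc:/network/ntp:default

# Verify the property was applied (should print "true")
svcprop -p config/slew_always svc:/network/ntp:default
```

Once the service is restarted with slew_always enabled, the RAC prerequisite check on NTP should pass.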