A while back, I published a post about some nuances I noticed in the Exadata Cloud Service patching process, most of which came down to the automation scripts not accounting for edge cases or for users changing the system configuration manually.
I ran into similar errors while patching the GRID infrastructure to the January 2022 PSU. Please note that if you are making any changes to the default configuration of an ExaCS system, do so only after consulting Oracle Support.
In the earlier post, the errors were caused by configuration changes I had made. This time, one of the errors appeared because I had created a database “manually” using our in-house tooling rather than the OCI-specific tooling, and the other seems to come from an as-yet-unidentified bug.
Before starting the patching process, note that Oracle has replaced almost all of the commands used to manage the database system with new commands and syntax. The old commands still work as of January 2022 but may stop working at some point. In my case, the new commands failed with errors, and I had to fall back to the old commands to patch the GRID infrastructure successfully.
Updating the tooling
Before any major maintenance activity, Oracle recommends updating the tooling to the latest available version.
The “new” way of updating the tooling is documented here, which directs you to use the following command:
dbaascli admin updateStack --version LATEST
And here is the error I got when I ran the above command:
[FATAL] [DBAAS-60041] Specified URL 'WARN : Possible Invalid entry in OCR : DB94/exa_map' is not syntactically correct.
CAUSE: Malformed URL exception occurred while processing.
SUMMARY:
- no protocol: WARN : Possible Invalid entry in OCR : DB94/exa_map

******** PLUGIN EXECUTION FAILED ********

Executing jobs which need to be run always...
Completed execution. Operation failed with exit code 255
An error occurred during module execution. Please refer to the log file for more information.
DB94 is the database I created using in-house tools, bypassing OCI tooling and console. I then tried the old way of updating the tooling, and it worked just fine.
[root@node1 ~]# dbaascli patch tools list
DBAAS CLI version 21.4.1.1.0
Executing command patch tools list
Checking Current tools on all nodes
node1:
Patchid : 21.4.1.1.0_220107.1241
No applicable tools patches are available
node2:
Patchid : 21.4.1.1.0_220107.1241
No applicable tools patches are available
All Nodes have the same tools version
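As an aside, if you want to see what Grid Infrastructure actually has registered for the database behind that OCR warning, a check along these lines helps (just a sketch; srvctl ships with the Grid/database homes, and DB94 is the database name from the warning, so adjust to your environment):

# List every database registered with Grid Infrastructure (OCR/CRS)
srvctl config database

# Show the full CRS configuration for the manually created database
srvctl config database -d DB94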
So the new method fails while the old one works just fine. It is a similar story with the actual patching process, although this time it has nothing to do with the database that was created bypassing the OCI tooling.
Patching GRID Infrastructure
Following the new method to patch ExaCS, I started by checking the prerequisites, and here is the output:
dbaascli grid patch --targetVersion 19.14.0.0.0 --executePrereqs
Grid Patching Prereqs Execution Successful.
As you can see, the prerequisite check completed successfully. Then I ran the actual patch apply command, and this was the result:
[root@node1 oraInventory]# dbaascli grid patch --targetVersion 19.14.0.0.0
DBAAS CLI version 21.4.1.1.0
Executing command grid patch --targetVersion 19.14.0.0.0
-----------------
Setting up parameters...
Patch Parameters setup successful.
WARN : Possible Invalid entry in OCR : phx94
WARN : Possible Invalid entry in OCR : phx94
-----------------
Loading PILOT...
Session ID of the current execution is: 38
Log file location: /var/opt/oracle/log/gridPatch/pilot_2022-01-30_10-39-33-AM
-----------------
Running initialization job
Completed initialization job
-----------------
Running validate_nodes job
Completed validate_nodes job
-----------------
Running validate_target_version job
Completed validate_target_version job
-----------------
Running validate_backup_locations job
Completed validate_backup_locations job
-----------------
Running validate_source_home job
Completed validate_source_home job
-----------------
Running validate_creg_file_existence job
Completed validate_creg_file_existence job
-----------------
Running validate_crs_stack_state job
Completed validate_crs_stack_state job
-----------------
Running validate_databases job
Execution of validate_databases failed
[FATAL] [DBAAS-60038] Unable to execute sql script: (select con_name from DBA_PDB_SAVED_STATES where upper(instance_name)=upper('dbdev2') and con_name!='PDB$SEED').
ACTION: Refer to the log file for more information.
SUMMARY:
- ORA-01012: not logged on
******** PLUGIN EXECUTION FAILED ********
Executing jobs which need to be run always...
-----------------
Running release_lock job
Completed release_lock job
Completed execution.
The database instance 'dbdev2' in the error message belongs to a database that was created and removed through the OCI console a while back, so nothing should be querying it now. It seems some cleanup was left behind. I could open an SR, upload all the required diagnostic files, and wait a few days for a fix, but before that I decided to try the old way of patching GRID.
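Before falling back, a quick way to confirm that something for the removed database is indeed still lingering is a check along these lines (just a sketch, using the instance name from the error; adjust the Grid home and name to your environment):

# Look for leftover CRS resources referencing the removed database
/u01/app/19.0.0.0/grid/bin/crsctl stat res -t | grep -i dbdev

# Look for leftover instance processes and oratab entries on each node
ps -ef | grep -i "[p]mon_dbdev"
grep -i dbdev /etc/oratab

Back to the old method: the first step is listing the available patches: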
[root@node1 ~]# dbaascli patch db list --oh node1:/u01/app/19.0.0.0/grid
DBAAS CLI version 21.4.1.1.0
Executing command patch db list --oh node1:/u01/app/19.0.0.0/grid
[INFO] [DBAAS-14011] - The usage of this command is deprecated.
ACTION: It is recommended to use 'dbaascli cswlib showImages' for this operation.
INFO : EXACS patching
Available Patches
patchid :33509923-GI (Database Release Update : 19.14.0.0.220118 (Jan 2022))
Install database patch using
dbaascli patch db apply --patchid 33509923-GI --dbnames <>
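For what it is worth, the deprecation notice above already points at the new-style equivalent for listing the available images; if you want to try that route instead, it is simply:

# New-style command suggested by the deprecation notice for listing available images
dbaascli cswlib showImages

I stuck with the old commands for the rest of this exercise.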
The patch apply command:
[root@node1 oraInventory]# dbaascli patch db apply --patchid 33509923-GI --dbnames grid
DBAAS CLI version 21.4.1.1.0
Executing command patch db apply --patchid 33509923-GI --dbnames grid
INFO : EXACS patching
dbcore: Error: query return ORA error message
[INFO] [DBAAS-14011] - The usage of this command is deprecated.
ACTION: It is recommended to use 'dbaascli grid patch' for this operation.
This might take some time, please take a look at file /var/opt/oracle/log/exadbcpatch/exadbcpatch.log for progress on local node
Patch installation successful
Exadbcpatch completed successfully on all nodes
Even though it complained that I was using the deprecated method, the patching was successful. I verified the result afterwards, and the GRID home was indeed on 19.14. The patching took around 2.5 hours to complete.
[grid@node1 opc]$ $ORACLE_HOME/OPatch/opatch lspatches
33575402;DBWLM RELEASE UPDATE 19.0.0.0.0 (33575402)
33534448;ACFS RELEASE UPDATE 19.14.0.0.0 (33534448)
33529556;OCW RELEASE UPDATE 19.14.0.0.0 (33529556)
33515361;Database Release Update : 19.14.0.0.220118 (33515361)
33239955;TOMCAT RELEASE UPDATE 19.0.0.0.0 (33239955)
OPatch succeeded.
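If you want a second confirmation beyond opatch, Clusterware itself can report the active version and the cluster patch level; something along these lines (a sketch, run as the grid user from the Grid home) does the trick:

# Confirm the active Grid Infrastructure version and cluster patch level
$ORACLE_HOME/bin/crsctl query crs activeversion -f
$ORACLE_HOME/bin/crsctl query crs softwarepatch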
Monitoring the GRID patching process
If you want to monitor the progress of patching, you can start with the log file mentioned in the patch command output, which is:
/var/opt/oracle/log/exadbcpatch/exadbcpatch.log
tail -100f /var/opt/oracle/log/exadbcpatch/exadbcpatch.log
..
..
2022-01-30 10:50:55.426803 - INFO: finished copying /u01/app/19.0.0.0/grid/OPatch on phexadb1-lam0f2
2022-01-30 10:50:55.427239 - Output from cmd /u01/app/19.0.0.0/grid/OPatch/opatchauto apply -inplace -analyze -ocmrf /var/opt/oracle/exapatch/ocm.rsp -oh /u01/app/19.0.0.0/grid /u02/exapatch/grid/33509923/33509923 2>&1 > /u02/exapatch/grid/33509923/33509923/conflictlog run on localhost is:
..
..
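If you only want to catch problems as they show up, filtering the same log works as well (a simple sketch; tweak the pattern to your liking):

# Follow the patch log and surface only warnings, errors and failures
tail -f /var/opt/oracle/log/exadbcpatch/exadbcpatch.log | grep -iE "warn|error|fail"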
You will notice a few errors and failures in the log. The first one I noticed is:
2022-01-30 10:56:09.088927 - WARNING: corereg: get: No value found for key tns_admin
2022-01-30 10:56:09.089145 - DEBUG: SQL Executing set hea off set pagesize 5000 set linesize 400 set newpage none set feedback off prompt =START= select database_role from v$database; quit
2022-01-30 10:56:09.171937 - select database_role from v$database
*
ERROR at line 1:
ORA-01507: database not mounted
2022-01-30 10:56:09.172173 - Completed in 0.0850358009338379 seconds
2022-01-30 10:56:09.172299 - sql execution failed: ORA-01507: database not mounted
2022-01-30 10:56:09.173355 - dbcore: Error: query return ORA error message at /var/opt/oracle/perl_lib/DBAAS/logger.pm line 554.
logger::logerr('logger=HASH(0x16091a8)', 'dbcore: Error: query return ORA error message\x{a}') called at /var/opt/oracle/perl_lib/DBAAS/dbcore.pm line 303
dbcore::q1('db=HASH(0x4aea2a8)', 'select database_role from v$database', 'HASH(0x4a0a630)') called at /var/opt/oracle/exapatch/commonApis.pm line 498
commonApis::is_it_dg_standby('commonApis=HASH(0x4a0ee60)', '/u01/app/19.0.0.0/grid', 'grid') called at /var/opt/oracle/exapatch/commonApis.pm line 1448
commonApis::check_options('commonApis=HASH(0x4a0ee60)', 'PRE') called at /var/opt/oracle/exapatch/exadbcpatch line 1189
eval {...} called at /var/opt/oracle/exapatch/exadbcpatch line 922
2022-01-30 10:56:09.173618 - INFO: checking pre options now for dbname grid
2022-01-30 10:56:09.197139 - WARNING: corereg: get: No value found for key dbtype
The second one:
2022-01-30 10:59:09.914292 - DEBUG: SQL Executing set hea off set pagesize 5000 set linesize 400 set newpage none set feedback off prompt =START= select * from dual; quit
2022-01-30 10:59:10.935068 - WARN : non-zero status returned
Command: /u01/app/19.0.0.0/grid/OPatch/opatch lsinventory
Exit: 255
Excerpt: Oracle Interim Patch Installer version 12.2.0.1.29
Copyright (c) 2022, Oracle Corporation. All rights reserved.
OPatch failed with error code 255
2022-01-30 10:59:10.935745 - INFO: setup is running
Apart from these errors (which I think are safe to ignore), I also noticed that one of the patches was skipped during the apply process. In the main log file, you will see output similar to:
2022-01-30 11:53:21.594337 - INFO: opatchauto for GI with psu 33509923 being run on the node phexadb1-lam0f1
2022-01-30 11:53:21.594463 - INFO: Number of nodes which have grid - 2
2022-01-30 11:53:21.594564 - INFO: grid version = 19.13.0.0.0
2022-01-30 11:53:21.594880 - Output from cmd /u01/app/19.0.0.0/grid/OPatch/opatchauto apply -binary -oh /u01/app/19.0.0.0/grid /u02/exapatch/grid/33509923/33509923 2>&1 > /var/opt/oracle/log/exadbcpatch/applylog.132375 run on localhost is:
..
By checking the patch output file listed above, which is:
/var/opt/oracle/log/exadbcpatch/applylog.132375
you will see the following output:
tail -100f /var/opt/oracle/log/exadbcpatch/applylog.132375
Verifying environment and performing prerequisite checks...
Prerequisite check "CheckPatchApplyDependents" failed.
The details are:
Interim patch 32139531 requires prerequisite patch(es) [ 32904851 ] which are not present in the Oracle Home.
Apply prerequisite patch(es) [ 32904851 ] before applying interim patch 32139531 .
Log file location: /u01/app/19.0.0.0/grid/cfgtoollogs/opatch/opatch2022-01-30_12-08-51PM_1.log
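A quick way to confirm that the prerequisite patch really is missing from the Grid home is to check the inventory directly (just a sketch; the patch numbers are taken from the message above):

# Check whether the prerequisite patch is already installed in the Grid home
$ORACLE_HOME/OPatch/opatch lspatches | grep 32904851

# Check whether the skipped interim patch itself made it into the home
$ORACLE_HOME/OPatch/opatch lspatches | grep 32139531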
You will also see this in the main log file:
2022-01-30 12:12:44.458254 - Output from cmd /u01/app/19.0.0.0/grid/OPatch/opatch napply -ocmrf /var/opt/oracle/exapatch/ocm.rsp -oh /u01/app/19.0.0.0/grid -local -silent /u02/exapatch/31789178/ 2>&1 >> /var/opt/oracle/log/exadbcpatch/apply_patch.132375 run on localhost is:
2022-01-30 12:16:22.024755 - UtilSession failed: Please rebuild patch 31789178. Subset patch 31789178 can't overlay superset patch 33515361.
2022-01-30 12:16:22.025040 - cmd took 217.566061019897 seconds
2022-01-30 12:16:22.025270 - WARN : non-zero status returned
Command: /u01/app/19.0.0.0/grid/OPatch/opatch napply -ocmrf /var/opt/oracle/exapatch/ocm.rsp -oh /u01/app/19.0.0.0/grid -local -silent /u02/exapatch/31789178/ 2>&1 >> /var/opt/oracle/log/exadbcpatch/apply_patch.132375
Exit: 73
Excerpt: UtilSession failed: Please rebuild patch 31789178. Subset patch 31789178 can't overlay superset patch 33515361.
2022-01-30 12:16:22.028024 - INFO: finished with applying 31789178
2022-01-30 12:16:22.028598 - INFO: Apply Grid oneoff completed
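Before deciding whether this matters, one sanity check (just a sketch) is to ask OPatch whether the bug addressed by the skipped one-off is already covered by the patches that did get installed:

# See if bug 31789178 is already listed among the bugs fixed by installed patches
$ORACLE_HOME/OPatch/opatch lsinventory -bugs_fixed | grep 31789178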
I am not sure if this is going to be a problem, but for now, I have decided to ignore it and see if any errors or bugs show up after patching.