ORA-27103 [ ] and ORA-6544 [pevm_peruws_callback-1] [27103] in Alert log
On a SPARC Solaris system (LDom) running multiple 12.2.0.2 database instances, the alert logs were flooded with ORA-27103 [ ] and ORA-6544 [pevm_peruws_callback-1] [27103] error messages.
Database operations were impacted, and new connection requests were starting to time out.
2020-06-01T12:47:06.635714+05:30
Errors in file /u01/app/oracle/diag/rdbms/memdb/memdb1/trace/memdb1_dbw0_19325.trc:
ORA-27103: internal error
Additional information: 11818
Additional information: 9
ORA-27103: internal error
SVR4 Error: 11: Resource temporarily unavailable
Additional information: 7175
Additional information: 1114112
Errors in file /u01/app/oracle/diag/rdbms/memdb/memdb1/trace/memdb1_dbw0_19325.trc (incident=173114) (PDBNAME=CDB$ROOT):
ORA-27103 [] [] [] [] [] [] [] [] [] [] [] []
Incident details in: /u01/app/oracle/diag/rdbms/memdb/memdb1/incident/incdir_173114/memdb1_dbw0_19325_i173114.trc
2020-06-01T12:47:06.651393+05:30
Errors in file /u01/app/oracle/product/12.2.0/diag/rdbms/memadb/memadb1/trace/memadb1_ora_23750.trc (incident=762431) (PDBNAME=PRODMEMA):
ORA-6544 [pevm_peruws_callback-1] [27103] [] [] [] [] [] [] [] [] [] []
2020-06-01T12:47:06.651417+05:30
Errors in file /u01/app/oracle/product/12.2.0/diag/rdbms/memadb/memadb1/trace/memadb1_j006_3204.trc (incident=762727) (PDBNAME=PRODMEMA):
ORA-6544 [pevm_peruws_callback-1] [27103] [] [] [] [] [] [] [] [] [] []
PRODONL(6):Incident details in: /u01/app/oracle/product/12.2.0/diag/rdbms/memadb/memadb1/incident/incdir_762727/memadb1_j006_3204_i762727.trc
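To get a feel for how widespread the errors were, the alert logs can be scanned directly. A minimal sketch, assuming the standard ADR layout implied by the trace file paths above (each instance writes its own alert_<SID>.log; adjust the path for your environment):

$ grep -c "ORA-27103" /u01/app/oracle/diag/rdbms/memdb/memdb1/trace/alert_memdb1.log
$ grep -c "ORA-6544" /u01/app/oracle/diag/rdbms/memadb/memadb1/trace/alert_memadb1.log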
The server has 140 GB of memory, of which 85 GB was allocated to Oracle via the Solaris project file (/etc/project). This was based on the classic 60:40 ratio (60% for the database, 40% for the OS). Further digging showed high paging on the server, and the snippet below from the OS log (/var/adm/messages) confirmed the memory exhaustion: an external diagnostic process that was running had consumed significant memory, resulting in the high paging.
Jun 1 12:47:06 dbserva1 genunix: [ID 470503 kern.warning] WARNING: Sorry, no swap space to grow stack for pid 23928 (oracle)
Jun 1 12:47:06 dbserva1 genunix: [ID 470503 kern.warning] WARNING: Sorry, no swap space to grow stack for pid 23929 (java)
Jun 1 12:47:36 dbserva1 fbt: [ID 795213 kern.warning] WARNING: couldn't allocate FBT table for module oracleacfs
Jun 1 12:47:42 dbserva1 last message repeated 13 times
Jun 1 12:47:46 dbserva1 genunix: [ID 603404 kern.notice] NOTICE: core_log: oracle[23938] core dumped: /var/cores/core_dbserva1_oracle_54321_54321_1590995856_23938
Jun 1 12:47:55 dbserva1 genunix: [ID 780570 kern.info] modload of socketmod/sockrds failed
Jun 1 12:48:01 dbserva1 last message repeated 89 times
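The "no swap space to grow stack" warnings point to virtual memory exhaustion rather than a database bug. As a quick sketch of how this can be confirmed on Solaris (output and healthy thresholds vary by system):

$ swap -s                 # summary of reserved, allocated and available swap
$ swap -l                 # per-device swap usage
$ vmstat 5 5              # sustained high "sr" (scan rate) and low "free" indicate paging pressure
$ prstat -s rss -n 10     # top processes by resident set size, to spot the memory hog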
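For reference, the 85 GB Oracle allocation mentioned above is the kind of limit typically enforced through an entry in /etc/project. A hypothetical entry capping shared memory for an oracle project (the project name, ID and attribute here are illustrative, not taken from this system) might look like:

user.oracle:100:Oracle database:::project.max-shm-memory=(privileged,91268055040,deny)

The value in effect can be checked with: prctl -n project.max-shm-memory -i project user.oracle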