Tag Archives: ORA-600 3020

Online mv-based datafile migration leaves the database unable to start

A customer, without shutting down the database, used mv at the OS level to move datafiles from one partition to another, then created symlinks (ln -s) so that the datafiles could keep their original paths. After the database was restarted, it would no longer open, failing with ORA-01172 and ORA-01151.
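
For reference, the operation was roughly the following (the target partition name is illustrative, not the customer's actual one; running this against an open instance is exactly what caused the damage):

$ mv /oadate/xff/xff_01.dbf /newdisk/xff/xff_01.dbf     # cross-filesystem mv copies the file while DBWR keeps writing to the old inode
$ ln -s /newdisk/xff/xff_01.dbf /oadate/xff/xff_01.dbf  # path recorded in the controlfile stays the same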

Fri Dec 29 09:49:19 2023
ALTER DATABASE OPEN
Beginning crash recovery of 1 threads
 parallel recovery started with 11 processes
Started redo scan
Completed redo scan
 read 11591 KB redo, 1566 data blocks need recovery
Started redo application at
 Thread 1: logseq 6320, block 479571
Recovery of Online Redo Log: Thread 1 Group 4 Seq 6320 Reading mem 0
  Mem# 0: /data/oracle/oradata/orcl/redo04.log
Fri Dec 29 09:49:19 2023
Hex dump of (file 6, block 3598593) in trace file /data/oracle/diag/rdbms/orcl/orcl/trace/orcl_p002_6696.trc
Fri Dec 29 09:49:19 2023
Fri Dec 29 09:49:19 2023
Hex dump of (file 5, block 27832) in trace file /data/oracle/diag/rdbms/orcl/orcl/trace/orcl_p008_6708.trc
Hex dump of (file 6, block 3598208) in trace file /data/oracle/diag/rdbms/orcl/orcl/trace/orcl_p010_6712.trc
Reading datafile '/oadate/xff/xff_01.dbf' for corruption at rdba: 0x01b6e901 (file 6, block 3598593)
Reading datafile '/oadate/xff/xff.dbf' for corruption at rdba: 0x01406cb8 (file 5, block 27832)
Reread (file 6, block 3598593) found same corrupt data (logically corrupt)
Reading datafile '/oadate/xff/xff_01.dbf' for corruption at rdba: 0x01b6e780 (file 6, block 3598208)
Reread (file 5, block 27832) found same corrupt data (logically corrupt)
    RECOVERY OF THREAD 1 STUCK AT BLOCK 3598593 OF FILE 6

Reread (file 6, block 3598208) found same corrupt data (logically corrupt)
RECOVERY OF THREAD 1 STUCK AT BLOCK 3598208 OF FILE 6
RECOVERY OF THREAD 1 STUCK AT BLOCK 27832 OF FILE 5

Fri Dec 29 09:49:32 2023
Slave exiting with ORA-1172 exception
Errors in file /data/oracle/diag/rdbms/orcl/orcl/trace/orcl_p010_6712.trc:
ORA-01172: recovery of thread 1 stuck at block 3598208 of file 6
ORA-01151: use media recovery to recover block, restore backup if needed
Fri Dec 29 09:49:32 2023
Fri Dec 29 09:49:32 2023
Errors in file /data/oracle/diag/rdbms/orcl/orcl/trace/orcl_p008_6708.trc:
ORA-10388: parallel query server interrupt (failure)
Errors in file /data/oracle/diag/rdbms/orcl/orcl/trace/orcl_p002_6696.trc:
ORA-10388: parallel query server interrupt (failure)
Errors in file /data/oracle/diag/rdbms/orcl/orcl/trace/orcl_p008_6708.trc:
ORA-10388: parallel query server interrupt (failure)
Errors in file /data/oracle/diag/rdbms/orcl/orcl/trace/orcl_p002_6696.trc:
ORA-10388: parallel query server interrupt (failure)
Fri Dec 29 09:49:32 2023
Aborting crash recovery due to slave death, attempting serial crash recovery
Beginning crash recovery of 1 threads
Started redo scan
Completed redo scan
 read 11591 KB redo, 1566 data blocks need recovery
Started redo application at
 Thread 1: logseq 6320, block 479571
Recovery of Online Redo Log: Thread 1 Group 4 Seq 6320 Reading mem 0
  Mem# 0: /data/oracle/oradata/orcl/redo04.log
Hex dump of (file 6, block 3598593) in trace file /data/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_6690.trc
Reading datafile '/oadate/xff/xff_01.dbf' for corruption at rdba: 0x01b6e901 (file 6, block 3598593)
Reread (file 6, block 3598593) found same corrupt data (logically corrupt)
RECOVERY OF THREAD 1 STUCK AT BLOCK 3598593 OF FILE 6
Aborting crash recovery due to error 1172
Errors in file /data/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_6690.trc:
ORA-01172: recovery of thread 1 stuck at block 3598593 of file 6
ORA-01151: use media recovery to recover block, restore backup if needed
Errors in file /data/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_6690.trc:
ORA-01172: recovery of thread 1 stuck at block 3598593 of file 6
ORA-01151: use media recovery to recover block, restore backup if needed
ORA-1172 signalled during: ALTER DATABASE OPEN...

Recovering the database in SQL*Plus raises errors

SQL> recover datafile 6;
ORA-00283: recovery session canceled due to errors
ORA-00600: internal error code, arguments: [3020], [6], [3578240], [28744064],
[], [], [], [], [], [], [], []
ORA-10567: Redo is inconsistent with data block (file# 6, block# 3578240, file
offset is 3543138304 bytes)
ORA-10564: tablespace xff
ORA-01110: data file 6: '/oadate/xff/xff_01.dbf'
ORA-10560: block type 'FIRST LEVEL BITMAP BLOCK'
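
Note that the stuck block is an ASSM first-level bitmap block, i.e. space-management metadata rather than row data. Once the database can be opened again, the owning segment can be confirmed with a standard DBA_EXTENTS lookup (a suggested check, not shown in the original post; file and block numbers come from the ORA-10567 above):

SQL> select owner, segment_name, segment_type
       from dba_extents
      where file_id = 6
        and 3578240 between block_id and block_id + blocks - 1;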

The alert log reports ORA-600 [3020]

Fri Dec 29 17:43:03 2023
ALTER DATABASE RECOVER  datafile 6  
Media Recovery Start
Serial Media Recovery started
Recovery of Online Redo Log: Thread 1 Group 4 Seq 6320 Reading mem 0
  Mem# 0: /data/oracle/oradata/orcl/redo04.log
Fri Dec 29 17:43:42 2023
Errors in file /data/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_30140.trc  (incident=462294):
ORA-00600: internal error code, arguments: [3020], [6], [3578240], [28744064], [], [], [], [], [], [], [], []
ORA-10567: Redo is inconsistent with data block (file# 6, block# 3578240, file offset is 3543138304 bytes)
ORA-10564: tablespace XFF
ORA-01110: data file 6: '/oadate/xff/xff_01.dbf'
ORA-10560: block type 'FIRST LEVEL BITMAP BLOCK'
Incident details in: /data/oracle/diag/rdbms/orcl/orcl/incident/incdir_462294/orcl_ora_30140_i462294.trc
Fri Dec 29 17:43:42 2023
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
Media Recovery failed with error 600
ORA-283 signalled during: ALTER DATABASE RECOVER  datafile 6  ...

This kind of failure is the result of copying datafiles while the database is still online: a fair amount of recently written data may be inconsistent between the datafiles and the redo stream. An ORA-600 [3020] raised in this situation is best not skipped with ALLOW n CORRUPTION, because doing so can mark a large number of datafile blocks corrupt; you would then lose not only the changes in the redo but also much of the data sitting in otherwise intact blocks. To minimize the customer's data loss, we chose the approach with the least loss: patch the datafile headers with bbed, then recover the datafiles directly and open the database.
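
The post does not include the bbed session itself; the idea is to dump the datafile header (block 1) and patch its checkpoint information so that recovery will accept the file. A sketch of such a session follows (offsets and values are deliberately omitted because they are file-specific):

$ bbed filename='/oadate/xff/xff_01.dbf' blocksize=8192 mode=edit
BBED> set block 1
BBED> p kcvfh                        -- print the file header: checkpoint SCN, counter, rba
BBED> modify /x xxxxxxxx offset nnnn -- patch the kcvfhckp fields (values are file-specific)
BBED> sum apply                      -- recompute the block checksum and write it back

With the headers made consistent, recover datafile and ALTER DATABASE OPEN go through, as the alert log below shows.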

Fri Dec 29 18:05:36 2023
ALTER DATABASE RECOVER  datafile 5  
Media Recovery Start
Serial Media Recovery started
Recovery of Online Redo Log: Thread 1 Group 4 Seq 6320 Reading mem 0
  Mem# 0: /data/oracle/oradata/orcl/redo04.log
Media Recovery Complete (orcl)
Completed: ALTER DATABASE RECOVER  datafile 5  
ALTER DATABASE RECOVER  datafile 6  
Media Recovery Start
Serial Media Recovery started
Recovery of Online Redo Log: Thread 1 Group 4 Seq 6320 Reading mem 0
  Mem# 0: /data/oracle/oradata/orcl/redo04.log
Media Recovery Complete (orcl)
Completed: ALTER DATABASE RECOVER  datafile 6  
Fri Dec 29 18:07:02 2023
ALTER DATABASE OPEN
Beginning crash recovery of 1 threads
 parallel recovery started with 11 processes
Started redo scan
Completed redo scan
 read 11591 KB redo, 0 data blocks need recovery
Started redo application at
 Thread 1: logseq 6320, block 479571
Recovery of Online Redo Log: Thread 1 Group 4 Seq 6320 Reading mem 0
  Mem# 0: /data/oracle/oradata/orcl/redo04.log
Completed redo application of 0.00MB
Completed crash recovery at
 Thread 1: logseq 6320, block 502754, scn 2657849964
 0 data blocks read, 0 data blocks written, 11591 redo k-bytes read
Thread 1 advanced to log sequence 6321 (thread open)
Thread 1 opened at log sequence 6321
  Current log# 5 seq# 6321 mem# 0: /data/oracle/oradata/orcl/redo05.log
Successful open of redo thread 1
MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
SMON: enabling cache recovery
[2656] Successfully onlined Undo Tablespace 2.
Undo initialization finished serial:0 start:933676634 end:933676704 diff:70 (0 seconds)
Verifying file header compatibility for 11g tablespace encryption..
Verifying 11g file header compatibility for tablespace encryption completed
SMON: enabling tx recovery
Database Characterset is AL32UTF8
No Resource Manager plan active
replication_dependency_tracking turned off (no async multimaster replication found)
Starting background process QMNC
Fri Dec 29 18:07:04 2023
QMNC started with pid=31, OS id=2687 
Completed: ALTER DATABASE OPEN

The data was then migrated logically into a new database, salvaging as much of the customer's data as possible.
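
A logical migration here typically means Data Pump, along these lines (directory and dump file names are placeholders):

$ expdp \"/ as sysdba\" full=y directory=DUMP_DIR dumpfile=orcl_full_%U.dmp logfile=expdp_orcl.log parallel=4
$ impdp \"/ as sysdba\" full=y directory=DUMP_DIR dumpfile=orcl_full_%U.dmp logfile=impdp_orcl.log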


Handling ORA-07445: exception encountered: core dump [kdxlin()+4088]

The database was shut down with shutdown abort; on startup it reports errors:

Tue Sep 19 21:52:56 2023
NOTE: dependency between database orcl and diskgroup resource ora.DATA.dg is established
Tue Sep 19 21:52:57 2023
Reconfiguration started (old inc 4, new inc 6)
List of instances:
 1 (myinst: 1) 
 Global Resource Directory frozen
 * dead instance detected - domain 0 invalid = TRUE 
 Communication channels reestablished
 Master broadcasted resource hash value bitmaps
 Non-local Process blocks cleaned out
Tue Sep 19 21:52:57 2023
Tue Sep 19 21:52:57 2023
 LMS 3: 0 GCS shadows cancelled, 0 closed, 0 Xw survived
Tue Sep 19 21:52:57 2023
 LMS 0: 0 GCS shadows cancelled, 0 closed, 0 Xw survived
 LMS 1: 0 GCS shadows cancelled, 0 closed, 0 Xw survived
Tue Sep 19 21:52:57 2023
 LMS 2: 0 GCS shadows cancelled, 0 closed, 0 Xw survived
 Set master node info 
 Submitted all remote-enqueue requests
 Dwn-cvts replayed, VALBLKs dubious
 All grantable enqueues granted
 Post SMON to start 1st pass IR
 Submitted all GCS remote-cache requests
 Post SMON to start 1st pass IR
 Fix write in gcs resources
Reconfiguration complete
Tue Sep 19 21:53:05 2023
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl1/trace/orcl1_ora_28917.trc  (incident=492333):
ORA-00600: internal error code, arguments: [2131], [33], [32], [], [], [], [], [], [], [], [], []
Incident details in: /u01/app/oracle/diag/rdbms/orcl/orcl1/incident/incdir_492333/orcl1_ora_28917_i492333.trc
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
ORA-600 signalled during: ALTER DATABASE MOUNT /* db agent *//* {1:34652:2} */...
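
Because the database could no longer mount with the existing controlfile, the controlfile was recreated before recovery was attempted (next section). A recreation script is normally generated with ALTER DATABASE BACKUP CONTROLFILE TO TRACE and looks roughly like this (datafile names and the character set below are placeholders; the log group names are taken from the recovery log further down):

SQL> STARTUP NOMOUNT
SQL> CREATE CONTROLFILE REUSE DATABASE "ORCL" NORESETLOGS ARCHIVELOG
       LOGFILE
         GROUP 2 '+DATA/orcl/onlinelog/group_2.262.942097651' SIZE 512M,
         GROUP 5 '+DATA/orcl/onlinelog/group_5.263.942097651' SIZE 512M,
         GROUP 6 '+DATA/orcl/onlinelog/group_6.268.942097791' SIZE 512M
       DATAFILE
         '+DATA/orcl/datafile/system.xxx.xxxxxxxxx',
         '+DATA/orcl/datafile/xifenfei07.dbf'
         -- ... every remaining datafile ...
       CHARACTER SET AL32UTF8;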

After recreating the controlfile, recover database fails with ORA-600 [3020] and ORA-07445 [kdxlin] errors

SQL> recover database;
ORA-00600: internal error code, arguments: [3020], [41], [3142531],
[175108995], [], [], [], [], [], [], [], []
ORA-10567: Redo is inconsistent with data block (file# 41, block# 3142531, file
offset is 4268777472 bytes)
ORA-10564: tablespace XIFENFEI
ORA-01110: data file 41: '+DATA/orcl/datafile/xifenfei07.dbf'
ORA-10560: block type 'FIRST LEVEL BITMAP BLOCK'
Wed Sep 20 00:15:00 2023
ALTER DATABASE RECOVER  database  
Media Recovery Start
 started logmerger process
Parallel Media Recovery started with 64 slaves
Wed Sep 20 00:15:02 2023
Recovery of Online Redo Log: Thread 2 Group 6 Seq 67008 Reading mem 0
  Mem# 0: +DATA/orcl/onlinelog/group_6.268.942097791
Recovery of Online Redo Log: Thread 1 Group 2 Seq 81767 Reading mem 0
  Mem# 0: +DATA/orcl/onlinelog/group_2.262.942097651
Recovery of Online Redo Log: Thread 1 Group 5 Seq 81768 Reading mem 0
  Mem# 0: +DATA/orcl/onlinelog/group_5.263.942097651
Wed Sep 20 00:15:08 2023
Hex dump of (file 41, block 3142531) in trace file /u01/app/oracle/diag/rdbms/orcl/orcl1/trace/orcl1_pr1m_45463.trc
Reading datafile '+DATA/orcl/datafile/ts_his3bz07.dbf' for corruption at rdba: 0x0a6ff383 (file 41, block 3142531)
Reread (file 41, block 3142531) found different corrupt data (logically corrupt)
Hex dump of (file 41, block 3142531) in trace file /u01/app/oracle/diag/rdbms/orcl/orcl1/trace/orcl1_pr1m_45463.trc
Wed Sep 20 00:15:08 2023
Exception [type: SIGSEGV, Address not mapped to object][ADDR:0xC] [PC:0x95FB582, kdxlin()+4088][flags: 0x0,count:1]
Wed Sep 20 00:15:08 2023
Exception [type: SIGSEGV, Address not mapped to object][ADDR:0xC] [PC:0x95FB582, kdxlin()+4088][flags: 0x0,count:1]
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl1/trace/orcl1_pr10_45419.trc  (incident=564584):
ORA-07445: exception encountered: core dump [kdxlin()+4088] [SIGSEGV] [ADDR:0xC] [PC:0x95FB582] [Address not mapped to object]
Incident details in: /u01/app/oracle/diag/rdbms/orcl/orcl1/incident/incdir_564640/orcl1_pr17_45433_i564640.trc

Recovering other datafiles at random also runs into the ORA-07445 [kdxlin] exception:

SQL> recover datafile 34;
ORA-00283: recovery session canceled due to errors
ORA-10562: Error occurred while applying redo to data block (file# 34, block#
1999809)
ORA-10564: tablespace XIFENFEI
ORA-01110: data file 34: '+DATA/orcl/datafile/xifeifenfei06'
ORA-10561: block type 'TRANSACTION MANAGED INDEX BLOCK', data object# 97961
ORA-00607: Internal error occurred while making a change to a data block
ORA-00602: internal programming exception
ORA-07445: exception encountered: core dump [kdxlin()+4088] [SIGSEGV]
[ADDR:0xC] [PC:0x95FB582] [Address not mapped to object] []

This happens because the redo is inconsistent with the datafile blocks, so the logs cannot be applied normally. The abnormal blocks were dealt with manually and the database opened successfully; it then ran into undo rollback segment corruption, which was worked around (see the sketch below), after which the database opened and ran stably.
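
The post does not show how the undo corruption was bypassed; the workaround commonly used in this situation (an assumption here, to be used with great care) is to restart with a pfile that declares the affected rollback segments corrupt:

*.undo_management='MANUAL'
*._corrupted_rollback_segments=('_SYSSMU1$','_SYSSMU2$')  # placeholder segment names
# after the data is exported/validated, switch back to automatic undo with a freshly created undo tablespace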


ORA-01172 / ORA-01151 recovery

Node 2 reported "Error: Controlfile sequence number in file header is different from the one in memory", which brought the instance down:

Tue May 09 23:03:24 2023
Thread 2 cannot allocate new log, sequence 16728
Checkpoint not complete
  Current log# 3 seq# 16727 mem# 0: +DATA/xff/onlinelog/group_3.265.941900045
  Current log# 3 seq# 16727 mem# 1: +FRA/xff/onlinelog/group_3.259.941900045
Thread 2 advanced to log sequence 16728 (LGWR switch)
  Current log# 4 seq# 16728 mem# 0: +DATA/xff/onlinelog/group_4.266.941900045
  Current log# 4 seq# 16728 mem# 1: +FRA/xff/onlinelog/group_4.260.941900045
Tue May 09 23:03:31 2023
LNS: Standby redo logfile selected for thread 2 sequence 16728 for destination LOG_ARCHIVE_DEST_2
Tue May 09 23:03:32 2023
Archived Log entry 431615 added for thread 2 sequence 16727 ID 0x5ffc99b5 dest 1:
Tue May 09 23:05:30 2023
Error: Controlfile sequence number in file header is different from the one in memory
       Please check that the correct mount options are used if controlfile is located on NFS
USER (ospid: 30162): terminating the instance
Tue May 09 23:05:30 2023
System state dump requested by (instance=2, osid=30162), summary=[abnormal instance termination].
System State dumped to trace file /u01/app/oracle/diag/rdbms/xff/xff2/trace/xff2_diag_6650.trc
Instance terminated by USER, pid = 30162

After the surviving node 1 went through instance reconfiguration, its instance also terminated:

Tue May 09 23:04:54 2023
Thread 1 cannot allocate new log, sequence 2060
Checkpoint not complete
  Current log# 1 seq# 2059 mem# 0: +DATA/xff/onlinelog/group_1.261.941899887
  Current log# 1 seq# 2059 mem# 1: +FRA/xff/onlinelog/group_1.257.941899887
Thread 1 advanced to log sequence 2060 (LGWR switch)
  Current log# 2 seq# 2060 mem# 0: +DATA/xff/onlinelog/group_2.262.941899889
  Current log# 2 seq# 2060 mem# 1: +FRA/xff/onlinelog/group_2.258.941899889
Tue May 09 23:04:58 2023
********************* ATTENTION: ******************** 
 The controlfile header block returned by the OS
 has a sequence number that is too old. 
 The controlfile might be corrupted.
 PLEASE DO NOT ATTEMPT TO START UP THE INSTANCE 
 without following the steps below.
 RE-STARTING THE INSTANCE CAN CAUSE SERIOUS DAMAGE 
 TO THE DATABASE, if the controlfile is truly corrupted.
 In order to re-start the instance safely, 
 please do the following:
 (1) Save all copies of the controlfile for later 
     analysis and contact your OS vendor and Oracle support.
 (2) Mount the instance and issue: 
     ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
 (3) Unmount the instance. 
 (4) Use the script in the trace file to
     RE-CREATE THE CONTROLFILE and open the database. 
*****************************************************
Tue May 09 23:05:31 2023
Reconfiguration started (old inc 20, new inc 22)
List of instances:
 1 (myinst: 1) 
 Global Resource Directory frozen
 * dead instance detected - domain 0 invalid = TRUE 
 Communication channels reestablished
 Master broadcasted resource hash value bitmaps
 Non-local Process blocks cleaned out
Tue May 09 23:05:31 2023
 LMS 1: 0 GCS shadows cancelled, 0 closed, 0 Xw survived
Tue May 09 23:05:31 2023
 LMS 0: 3 GCS shadows cancelled, 0 closed, 0 Xw survived
 Set master node info 
 Submitted all remote-enqueue requests
 Dwn-cvts replayed, VALBLKs dubious
 All grantable enqueues granted
 Post SMON to start 1st pass IR
Tue May 09 23:05:32 2023
Instance recovery: looking for dead threads
 Submitted all GCS remote-cache requests
 Post SMON to start 1st pass IR
 Fix write in gcs resources
Reconfiguration complete
Tue May 09 23:06:00 2023
ARC1 (ospid: 26512): terminating the instance
Tue May 09 23:06:00 2023
System state dump requested by (instance=1, osid=26512 (ARC1)), summary=[abnormal instance termination].
System State dumped to trace file /u01/app/oracle/diag/rdbms/xff/xff1/trace/xff1_diag_26311.trc
Tue May 09 23:06:01 2023
ORA-1092 : opitsk aborting process
Instance terminated by ARC1, pid = 26512

Restarting the instance fails:

Recovery of Online Redo Log: Thread 1 Group 1 Seq 2059 Reading mem 0
  Mem# 0: +DATA/dbm/onlinelog/group_1.261.941899887
  Mem# 1: +FRA/dbm/onlinelog/group_1.257.941899887
Recovery of Online Redo Log: Thread 2 Group 3 Seq 16727 Reading mem 0
  Mem# 0: +DATA/dbm/onlinelog/group_3.265.941900045
  Mem# 1: +FRA/dbm/onlinelog/group_3.259.941900045
Recovery of Online Redo Log: Thread 2 Group 4 Seq 16728 Reading mem 0
  Mem# 0: +DATA/dbm/onlinelog/group_4.266.941900045
  Mem# 1: +FRA/dbm/onlinelog/group_4.260.941900045
Hex dump of (file 1, block 102777) in trace file /u01/app/oracle/diag/rdbms/dbm/dbm2/trace/dbm2_ora_30749.trc
Reading datafile '+DATA/dbm/datafile/system.256.941899799' for corruption at rdba: 0x00419179 (file 1, block 102777)
Reread (file 1, block 102777) found different corrupt data (logically corrupt)
Hex dump of (file 1, block 102777) in trace file /u01/app/oracle/diag/rdbms/dbm/dbm2/trace/dbm2_ora_30749.trc
RECOVERY OF THREAD 2 STUCK AT BLOCK 102777 OF FILE 1
Abort recovery for domain 0
Aborting crash recovery due to error 1172
Errors in file /u01/app/oracle/diag/rdbms/dbm/dbm2/trace/dbm2_ora_30749.trc:
ORA-01172: recovery of thread 2 stuck at block 102777 of file 1
ORA-01151: use media recovery to recover block, restore backup if needed
Abort recovery for domain 0
Errors in file /u01/app/oracle/diag/rdbms/dbm/dbm2/trace/dbm2_ora_30749.trc:
ORA-01172: recovery of thread 2 stuck at block 102777 of file 1
ORA-01151: use media recovery to recover block, restore backup if needed
ORA-1172 signalled during: ALTER DATABASE OPEN /* db agent *//* {0:890:17} */...

A manual recover attempt then fails with ORA-600 [3020]:

SQL> recover datafile 1;
ORA-00283: recovery session canceled due to errors
ORA-00600: internal error code, arguments: [3020], [1], [102777], [4297081],
[], [], [], [], [], [], [], []
ORA-10567: Redo is inconsistent with data block (file# 1, block# 102777, file
offset is 841949184 bytes)
ORA-10564: tablespace SYSTEM
ORA-01110: data file 1: '+DATA/dbm/datafile/system.256.941899799'
ORA-10561: block type 'TRANSACTION MANAGED INDEX BLOCK', data object# 469884

--- alert log
Tue May 09 23:28:44 2023
ALTER DATABASE RECOVER  datafile 1  
Media Recovery Start
Serial Media Recovery started
Recovery of Online Redo Log: Thread 2 Group 3 Seq 16727 Reading mem 0
  Mem# 0: +DATA/xff/onlinelog/group_3.265.941900045
  Mem# 1: +FRA/xff/onlinelog/group_3.259.941900045
ORA-279 signalled during: ALTER DATABASE RECOVER  datafile 1  ...
ALTER DATABASE RECOVER    CONTINUE DEFAULT  
Media Recovery Log +FRA/xff/archivelog/2023_05_09/thread_1_seq_2055.20899.1136415701
ORA-279 signalled during: ALTER DATABASE RECOVER    CONTINUE DEFAULT  ...
ALTER DATABASE RECOVER    CONTINUE DEFAULT  
Media Recovery Log +FRA/xff/archivelog/2023_05_09/thread_1_seq_2056.20837.1136415753
ORA-279 signalled during: ALTER DATABASE RECOVER    CONTINUE DEFAULT  ...
ALTER DATABASE RECOVER    CONTINUE DEFAULT  
Media Recovery Log +FRA/xff/archivelog/2023_05_09/thread_1_seq_2057.20911.1136415803
ORA-279 signalled during: ALTER DATABASE RECOVER    CONTINUE DEFAULT  ...
ALTER DATABASE RECOVER    CONTINUE DEFAULT  
Media Recovery Log +FRA/xff/archivelog/2023_05_09/thread_1_seq_2058.21898.1136415853
Recovery of Online Redo Log: Thread 2 Group 4 Seq 16728 Reading mem 0
  Mem# 0: +DATA/xff/onlinelog/group_4.266.941900045
  Mem# 1: +FRA/xff/onlinelog/group_4.260.941900045
Recovery of Online Redo Log: Thread 1 Group 1 Seq 2059 Reading mem 0
  Mem# 0: +DATA/xff/onlinelog/group_1.261.941899887
  Mem# 1: +FRA/xff/onlinelog/group_1.257.941899887
Hex dump of (file 1, block 102777) in trace file /u01/app/oracle/diag/rdbms/xff/xff1/trace/xff1_ora_16246.trc
Reading datafile '+DATA/xff/datafile/system.256.941899799' for corruption at rdba: 0x00419179 (file 1, block 102777)
Reread (file 1, block 102777) found different corrupt data (logically corrupt)
Hex dump of (file 1, block 102777) in trace file /u01/app/oracle/diag/rdbms/xff/xff1/trace/xff1_ora_16246.trc
Tue May 09 23:28:59 2023
Errors in file /u01/app/oracle/diag/rdbms/xff/xff1/trace/xff1_ora_16246.trc  (incident=6868615):
ORA-00600: internal error code, arguments: [3020], [1], [102777], [4297081], [], [], [], [], [], [], [], []
ORA-10567: Redo is inconsistent with data block (file# 1, block# 102777, file offset is 841949184 bytes)
ORA-10564: tablespace SYSTEM
ORA-01110: data file 1: '+DATA/xff/datafile/system.256.941899799'
ORA-10561: block type 'TRANSACTION MANAGED INDEX BLOCK', data object# 469884
Incident details in: /u01/app/oracle/diag/rdbms/xff/xff1/incident/incdir_6868615/xff1_ora_16246_i6868615.trc
Tue May 09 23:29:00 2023
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
Media Recovery failed with error 600
ORA-283 signalled during: ALTER DATABASE RECOVER    CONTINUE DEFAULT  ...
ALTER DATABASE RECOVER CANCEL 
ORA-1112 signalled during: ALTER DATABASE RECOVER CANCEL ...

The errors above confirm that the failing block belongs to an index, and not to a core system object, so recovery can proceed with ALLOW 1 CORRUPTION and the database opens successfully:

SQL> recover  datafile 1 allow 1 corruption;
Media recovery complete.
SQL> alter database open;

Database altered.

SQL> select owner,object_name,object_type from dba_objects where object_id=469884;

OWNER
--------------------------------------------------------------------------------
OBJECT_NAME
--------------------------------------------------------------------------------
OBJECT_TYPE
---------------------------------------------------------
SYSTEM
PK_XFF_SERVERS
INDEX

SQL> alter index system.PK_XFF_SERVERS rebuild online;

Index altered.
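
Because one block's redo was skipped with ALLOW 1 CORRUPTION, a final verification pass after the rebuild is worthwhile (a suggested check, not part of the original post):

RMAN> validate database;
SQL> select * from v$database_block_corruption;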

The database was recovered perfectly, with zero data loss, and the business could resume normal use immediately.
