
ORA-600 3020

Official description of ORA-600 [3020]

ERROR:
  Format: ORA-600 [3020] [a] [b] [c] [d] [e]

VERSIONS:
  version 6.0 and above

DESCRIPTION:

  This is called a 'STUCK RECOVERY'.

  There is an inconsistency between the information stored in the redo
  and the information stored in a database block being recovered.

ARGUMENTS:

For Oracle 9.2 and earlier:
  Arg [a] Block DBA
  Arg [b] Redo Thread
  Arg [c] Redo RBA Seq
  Arg [d] Redo RBA Block No
  Arg [e] Redo RBA Offset.

For Oracle 10.1
  Arg [a] Absolute file number of the datafile.
  Arg [b] Block number
  Arg [c] Block DBA

FUNCTIONALITY:
  kernel cache recovery parallel

IMPACT:
  INSTANCE FAILURE during recovery.

SUGGESTIONS:

  There have been cases of receiving this error when RECOVER has
  been issued, but either some datafiles were not restored to disk,
  or the restoration has not finished.

  Therefore, ensure that the entire backup has been restored and that
  the restore has finished PRIOR to issuing a RECOVER database command.
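
  As a quick sanity check before issuing RECOVER, the restored datafile
  headers can be inspected from the mounted instance. This is only a
  minimal sketch using the standard V$DATAFILE_HEADER view; a populated
  ERROR column or an unexpectedly old CHECKPOINT_CHANGE# usually means a
  file is missing from the restore or the restore is still incomplete:

     SQL> select file#, status, error, fuzzy, checkpoint_change#
       2    from v$datafile_header
       3   order by file#;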

  If problems continue, consider restoring from a backup and doing a
  point-in-time recovery to a time PRIOR to the one implied by
  the ORA-600[3020] error.

  Example:

     SQL> recover database until time 'YYYY-MON-DD:HH:MI:SS';

  This error can also be caused by a lost update.

  During normal operations, block updates/writes are being performed to
  a number of files including database datafiles, redo log files,
  archived redo log files etc.

  This error can be reported if any of these updates are lost for some
  reason.

  Therefore, thoroughly check your operating system and disk hardware.

  In the case of a lost update, restore an old copy of the datafile and
  attempt to recover and roll forward again.
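
  A minimal sketch of that restore-and-roll-forward approach with RMAN is
  shown below (datafile 5 is only an illustrative file number; if the most
  recent backup may itself contain the lost write, point the RESTORE at an
  older backup, for example with SET UNTIL or FROM TAG, before rolling
  forward):

     RMAN> sql 'alter database datafile 5 offline';
     RMAN> restore datafile 5;
     RMAN> recover datafile 5;
     RMAN> sql 'alter database datafile 5 online';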

Related bug information

NB | Prob | Bug      | Fixed | Description
   | III  | 16384983 | 11.2.0.4.6, 11.2.0.4.BP15, 12.1.0.2, 12.2.0.0 | RMAN bad backup – causes recovery to fail with ORA-600 [3020]
   | III  | 16057129 | 11.2.0.3.BP22, 11.2.0.4, 12.1.0.1.2, 12.1.0.2 | Exadata cell optimized incremental backup can skip some blocks to backup
   | I    | 22302666 | 12.2.0.0 | ORA-753 ORA-756 or ORA-600 [3020] with KCOX_FUTURE after RMAN Restore / PITR with BCT after Open Resetlogs in 12c
   | II   | 21629064 | 12.1.0.2.160719, 12.2.0.0 | ORA-600 [3020] KCOX_FUTURE by RECOVERY for KTU UNDO BLOCK SEQ:254 sometime after RMAN Restore of UNDO datafile in Source Database
   | III  | 20509482 | 11.2.0.4.BP20, 12.1.0.2.160119, 12.1.0.2.DBBP09, 12.2.0.0 | ORA-600 [3020] / ORA-752 Wrong Results after Parallel Direct Load or RMAN ORA-600 [krcrfr_nohist] in RAC (caused by fix for bug 9962369)
   | II   | 19445860 | 12.2.0.0 | ORA-1172 or ORA-600 [3020] Stuck recovery in RAC after attempted block rebuild
   | III  | 18284763 | 11.2.0.4.BP13, 12.1.0.2, 12.2.0.0 | ORA-600 [3020] on ASSM blocks in Standby Database after CONVERT TO PHYSICAL or ORA-8103 ORA-600 [4552] in non-standby after FLASHBACK
   | III  | 18268201 | 12.1.0.2, 12.2.0.0 | ORA-600 [3020] ORA-10567 and ORA-10560: block type '0' / ORA-600 [kdBlkCheckError] ORA-600 [ktfbbset-2] after flashback which reversed a datafile extend – superseded
   | I    | 16891789 | 11.2.0.4, 12.1.0.2, 12.2.0.0 | ORA-600 [3020] or ORA-752 if db_lost_write_protect is enabled. Bystander standby recovers wrong redo log after switchover or failover.
+  | III  | 17761775 | 11.2.0.3.9, 11.2.0.3.BP22, 11.2.0.4.2, 11.2.0.4.BP03, 12.1.0.1.3, 12.1.0.2 | ORA-600 [kclchkblkdma_3] ORA-600 [3020] or ORA-600 [kcbchg1_16] Join of temp and permanent table in RAC might lead to corruption – superseded
   | I    | 16844448 | 11.2.0.3.9, 11.2.0.3.BP22, 11.2.0.4, 12.1.0.2 | ORA-600 [3020] after flashback database in a RAC
   | III  | 15999982 | 11.2.0.3.BP24, 11.2.0.4, 12.1.0.2 | ORA-600 [kcl_sanity_check_cr_1] ORA-600 [kclchkblkdma_3] in RAC or ORA-752 ORA-600 [3020] during recovery
   | II   | 21425496 | | ORA-752 or ORA-600 [3020] on recovery of Block Cleanout Operation OP:4.6
   | I    | 9847338  | | Session hang after applying the patch for Bug 9587912 which causes ORA-600 [3020]
+  | III  | 13467683 | 11.2.0.2.9, 11.2.0.2.BP15, 11.2.0.3.3, 11.2.0.3.BP04, 11.2.0.4, 12.1.0.1 | Join of temp and permanent tables in RAC might cause corruption of permanent table. Regression by bug 10352368
E  | -    | 12831782 | 11.2.0.2.BP11, 11.2.0.3.BP01, 11.2.0.4, 12.1.0.1 | ORA-600 [3020] / ORA-333 Recovery of datafile or async transport do not read mirror if there is a stale block
   | II   | 12821418 | 11.2.0.3.8, 11.2.0.3.BP18, 11.2.0.4, 12.1.0.1 | Direct NFS appears to be sending zero length windows to storage device. It may also cause Lost Writes
   | I    | 12582839 | 11.2.0.3, 12.1.0.1 | ORA-8103/ORA-600 [3020] on RMAN recovered locally managed tablespace
P  | I    | 12330911 | 12.1.0.1 | EXADATA LSI firmware for lost writes
   | III  | 11689702 | 11.2.0.2.5, 11.2.0.2.BP13, 11.2.0.2.GIPSU05, 11.2.0.3, 12.1.0.1 | ORA-600 [3020] during recovery after datafile RESIZE (to smaller size)
+  | II   | 10425010 | 11.2.0.3, 12.1.0.1 | Stale data blocks may be returned by Exadata FlashCache
   | -    | 10329146 | 11.2.0.1.BP10, 11.2.0.2.2, 11.2.0.2.BP03, 11.2.0.2.GIBUNDLE02, 11.2.0.2.GIPSU02, 11.2.0.3, 12.1.0.1 | Lost write in ASM with multiple DBWs and a disk is offlined and then onlined
   | II   | 10218814 | 11.2.0.2.2, 11.2.0.2.BP02, 11.2.0.3, 12.1.0.1 | ORA-600 [3020] during recovery / on standby
+  | II   | 10209232 | 11.1.0.7.7, 11.2.0.1.BP08, 11.2.0.2.1, 11.2.0.2.BP02, 11.2.0.2.GIBUNDLE01, 11.2.0.3, 12.1.0.1 | ORA-1578 / ORA-600 [3020] Corruption. Misplaced Blocks and Lost Write in ASM
*  | III  | 10205230 | 11.2.0.1.6, 11.2.0.1.BP09, 11.2.0.2.2, 11.2.0.2.BP04, 11.2.0.3, 12.1.0.1 | ORA-600 / corruption possible during shutdown in RAC
   | II   | 10094823 | 11.2.0.2.4, 11.2.0.2.BP09, 11.2.0.3, 12.1.0.1 | Block change tracking on physical standby can cause data loss
   | -    | 10071193 | 11.2.0.2.BP02, 11.2.0.3, 12.1.0.1 | Lost write / ORA-600 [kclchkblk_3] / ORA-600 [3020] in RAC – superceded
   | -    | 9587912  | 11.2.0.2, 12.1.0.1 | ORA-600 [3020] in datafile that went offline/online in a RAC instance
   | -    | 8774868  | 11.2.0.1.2, 11.2.0.1.BP06, 11.2.0.2, 12.1.0.1 | OERI[3020] reinstating primary
+  | III  | 8769473  | 11.2.0.2, 12.1.0.1 | ORA-600 [kcbzib_5] on multi block read in RAC. Invalid lock in RAC. ORA-600 [3020] in Recovery
P  | II   | 8635179  | 10.2.0.5, 11.2.0.2, 12.1.0.1 | Solaris: directio may be disabled for RAC file access. Corruption / Lost Write
+  | II   | 8597106  | 11.2.0.1.BP06, 11.2.0.2, 12.1.0.1 | Lost Write in ASM when normal redundancy is used
+E | II   | 17752121 | 11.2.0.3.9, 11.2.0.3.BP22, 11.2.0.4.2, 11.2.0.4.BP03 | ORA-600 [kclchkblkdma_3] ORA-600 [3020] RAC diagnostic/fix to avoid a block being modified in Shared Mode and prevent corruption – Superseded
   | III  | 8826708  | 10.2.0.5, 11.2.0.2 | ORA-600 [3020] for block type 0x3a (58) during recovery for block restored by RMAN backup
   | I    | 16579042 | 11.2.0.4 | ORA-600 [kjbmpocr:alh] ORA-600 [kclchkblkdma_3] by LMS in RAC which may lead to corruption
   | -    | 11684626 | 11.2.0.1 | ORA-600 [3020] on standby involving "BRR" redo when db_lost_write_protect is enabled
   | -    | 8230457  | 10.2.0.4.1, 10.2.0.5, 11.1.0.7.1, 11.2.0.1 | Physical standby media recovery gets OERI[krr_media_12]
+  | II   | 7680907  | 10.2.0.5, 11.1.0.7.1, 11.2.0.1 | ORA-600 [kclexpandlock_2] in LMS / instance crash. Incorrect locks in RAC. ORA-600 [3020] in recovery
   | II   | 4637668  | 10.2.0.3, 11.1.0.6 | IMU transactions can produce out-of-order redo (OERI [3020] on recovery)
   | -    | 4594917  | 9.2.0.8, 10.2.0.2, 11.1.0.6 | Write IO error can cause incorrect file header checkpoint information
   | -    | 4453449  | 10.2.0.2, 11.1.0.6 | OERI:3020 / corruption errors from multiple FLASHBACK DATABASE
   | III  | 7197445  | 10.2.0.4.1, 10.2.0.5 | Standby Recovery session cancelled due to ORA-600 [3020] "CHANGE IN FUTURE OF BLOCK"
   | II   | 5610267  | 10.2.0.5 | MRP terminated by ORA-600[krr_media_12] / OERI:3020 after flashback
   | -    | 3762714  | 9.2.0.7, 10.1.0.4, 10.2.0.1 | ALTER DATABASE RECOVER MANAGED STANDBY fails with OERI[3020]
   | I    | 3560209  | 10.2.0.1 | OERI[3020] stuck recovery under RAC
   | -    | 3397181  | 9.2.0.5, 10.1.0.3, 10.2.0.1 | ALTER SYSTEM KILL SESSION of recovery slave causes stuck recovery
*  | -    | 3381950  | 10.2.0.1 | Backups from RAC DB before Data Guard Failover cannot be used
   | -    | 3535712  | 9.2.0.6, 10.1.0.4 | OERI[3020] / ORA-10567 from RAC with standby in max performance mode
   | -    | 4594912  | 9.2.0.8, 10.1.0.2 | Incorrect checkpoint possible in datafile headers
   | -    | 3635331  | 9.2.0.6, 10.1.0.4 | Stuck recovery (OERI:3020) / ORA-1172 on startup after a crash
   | II   | 2322620  | 9.2.0.1 | OERI:3020 possible on recovery of LOB DATA
P+ | -    | 656370   | 7.3.3.4, 7.3.4.0, 8.0.3.0 | AlphaNT only: Corrupt Redo (zeroed byte) OERI:3020


Recovering a database that could not start after a forced power-off

A customer reached me on QQ (referred by a friend) asking for help with a database recovery: after the server was forcibly powered off, the database could no longer start normally.
RECOVER DATABASE failed:

Mon Mar 28 10:20:33 2016
ALTER DATABASE RECOVER  database  
Media Recovery Start
 started logmerger process
Parallel Media Recovery started with 32 slaves
Mon Mar 28 10:20:36 2016
Recovery of Online Redo Log: Thread 1 Group 2 Seq 18686 Reading mem 0
  Mem# 0: E:\ORACLE_DATA\YCCY\REDO02.LOG
Recovery of Online Redo Log: Thread 1 Group 3 Seq 18687 Reading mem 0
  Mem# 0: E:\ORACLE_DATA\YCCY\REDO03.LOG
Recovery of Online Redo Log: Thread 1 Group 1 Seq 18688 Reading mem 0
  Mem# 0: E:\ORACLE_DATA\YCCY\REDO01.LOG
Mon Mar 28 10:20:38 2016
Hex dump of (file 45, block 7431) in trace file d:\oracle\diag\rdbms\yccy\yccy\trace\yccy_pr0q_2968.trc
Corrupt block relative dba: 0x0b401d07 (file 45, block 7431)
Mon Mar 28 10:20:38 2016
Hex dump of (file 45, block 7836) in trace file d:\oracle\diag\rdbms\yccy\yccy\trace\yccy_pr01_2220.trc
Bad header found during media recovery
Corrupt block relative dba: 0x0b401e9c (file 45, block 7836)
Data in bad block:
Bad header found during media recovery
 type: 0 format: 0 rdba: 0x1d070000
 last change scn: 0x4917.f8dc0b40 seq: 0x0 flg: 0x00
 spare1: 0x6 spare2: 0xa2 spare3: 0xc7f7
 consistency value in tail: 0x06010000
 check value in block header: 0x601
 block checksum disabled
Reading datafile 'E:\ORACLE_DATA\YCCY\DT_SYS_IDX12.DBF' for corruption at rdba: 0x0b401d07 (file 45, block 7431)
Reread (file 45, block 7431) found valid data
Repaired corruption at (file 45, block 7431)
Hex dump of (file 45, block 7556) in trace file d:\oracle\diag\rdbms\yccy\yccy\trace\yccy_pr0q_2968.trc
Corrupt block relative dba: 0x0b401d84 (file 45, block 7556)
Bad header found during media recovery
Data in bad block:
 type: 106 format: 3 rdba: 0x1d840000
 last change scn: 0x461d.391a0b40 seq: 0x0 flg: 0x00
 spare1: 0x6 spare2: 0xa2 spare3: 0x2499
 consistency value in tail: 0x06013999
 check value in block header: 0x401
 block checksum disabled
Reading datafile 'E:\ORACLE_DATA\YCCY\DT_SYS_IDX12.DBF' for corruption at rdba: 0x0b401d84 (file 45, block 7556)
Reread (file 45, block 7556) found valid data
Repaired corruption at (file 45, block 7556)
Mon Mar 28 10:20:38 2016
Exception [type: ACCESS_VIOLATION, UNABLE_TO_READ] [ADDR:0xFFFFFFFFFFFFFFFF] [PC:0x1334748, kcbzfw()+3094]
Mon Mar 28 10:20:39 2016
Errors in file d:\oracle\diag\rdbms\yccy\yccy\trace\yccy_pr0k_3900.trc  (incident=131189):
ORA-00600: internal error code, arguments: [kcbr_validate_read_1], [], [], [], [], [], [], [], [], [], [], []
Incident details in: d:\oracle\diag\rdbms\yccy\yccy\incident\incdir_131189\yccy_pr0k_3900_i131189.trc
ERROR: Unable to normalize symbol name for the following short stack (at offset 199):
Errors in file d:\oracle\diag\rdbms\yccy\yccy\trace\yccy_pr0r_3060.trc  (incident=131245):
ORA-07445: exception encountered: core dump [kcbzfw()+3094] [ACCESS_VIOLATION] [ADDR:0xFFFFFFFFFFFFFFFF] [PC:0x1334748] [UNABLE_TO_READ] []
ORA-10567: Redo is inconsistent with data block (file# 5, block# 169345, file offset is 1387274240 bytes)
ORA-10564: tablespace DT_SYS_DAT
ORA-01110: data file 5: 'E:\ORACLE_DATA\YCCY\DT_SYS_DAT.ORA'
ORA-10560: block type 'FIRST LEVEL BITMAP BLOCK'
Incident details in: d:\oracle\diag\rdbms\yccy\yccy\incident\incdir_131245\yccy_pr0r_3060_i131245.trc
Exception [type: ACCESS_VIOLATION, UNABLE_TO_READ] [ADDR:0xFFFFFFFFFFFFFFFF] [PC:0x12EC62C, kcbzdh()+942]
Mon Mar 28 10:20:39 2016
Errors in file d:\oracle\diag\rdbms\yccy\yccy\trace\yccy_pr0d_2112.trc  (incident=131133):
ORA-00600: internal error code, arguments: [kcbrapply_12], [], [], [], [], [], [], [], [], [], [], []
Incident details in: d:\oracle\diag\rdbms\yccy\yccy\incident\incdir_131133\yccy_pr0d_2112_i131133.trc
Mon Mar 28 10:20:39 2016
Errors in file d:\oracle\diag\rdbms\yccy\yccy\trace\yccy_pr0e_3260.trc  (incident=131141):
ORA-00600: internal error code, arguments: [3020], [5], [163457], [21134977], [], [], [], [], [], [], [], []
ORA-10567: Redo is inconsistent with data block (file# 5, block# 163457, file offset is 1339039744 bytes)
ORA-10564: tablespace DT_SYS_DAT
ORA-01110: data file 5: 'E:\ORACLE_DATA\YCCY\DT_SYS_DAT.ORA'
ORA-10560: block type 'FIRST LEVEL BITMAP BLOCK'
Incident details in: d:\oracle\diag\rdbms\yccy\yccy\incident\incdir_131141\yccy_pr0e_3260_i131141.trc
Mon Mar 28 10:20:39 2016
Errors in file d:\oracle\diag\rdbms\yccy\yccy\trace\yccy_pr04_3980.trc  (incident=131021):
ORA-00600: internal error code, arguments: [kcbrapply_12], [], [], [], [], [], [], [], [], [], [], []
Incident details in: d:\oracle\diag\rdbms\yccy\yccy\incident\incdir_131021\yccy_pr04_3980_i131021.trc
Data in bad block:
 type: 0 format: 0 rdba: 0x1e9c0000
 last change scn: 0x4915.f8320b40 seq: 0x0 flg: 0x00
 spare1: 0x6 spare2: 0xa2 spare3: 0x8029
 consistency value in tail: 0x0602e40c
 check value in block header: 0x602
 block checksum disabled
Reading datafile 'E:\ORACLE_DATA\YCCY\DT_SYS_IDX12.DBF' for corruption at rdba: 0x0b401e9c (file 45, block 7836)
Reread (file 45, block 7836) found valid data
Repaired corruption at (file 45, block 7836)
Mon Mar 28 10:20:39 2016
Errors in file d:\oracle\diag\rdbms\yccy\yccy\trace\yccy_pr0f_816.trc  (incident=131149):
ORA-00600: internal error code, arguments: [kcbr_validate_read_1], [], [], [], [], [], [], [], [], [], [], []
Incident details in: d:\oracle\diag\rdbms\yccy\yccy\incident\incdir_131149\yccy_pr0f_816_i131149.trc
Exception [type: ACCESS_VIOLATION, UNABLE_TO_READ] [ADDR:0xFFFFFFFFFFFFFFFF] [PC:0x12EC62C, kcbzdh()+942]
Mon Mar 28 10:20:39 2016
Errors in file d:\oracle\diag\rdbms\yccy\yccy\trace\yccy_pr0i_2132.trc  (incident=131173):
ORA-00600: internal error code, arguments: [3020], [5], [154240], [21125760], [], [], [], [], [], [], [], []
ORA-10567: Redo is inconsistent with data block (file# 5, block# 154240, file offset is 1263534080 bytes)
ORA-10564: tablespace DT_SYS_DAT
ORA-01110: data file 5: 'E:\ORACLE_DATA\YCCY\DT_SYS_DAT.ORA'
ORA-10560: block type 'FIRST LEVEL BITMAP BLOCK'
Incident details in: d:\oracle\diag\rdbms\yccy\yccy\incident\incdir_131173\yccy_pr0i_2132_i131173.trc
Errors in file d:\oracle\diag\rdbms\yccy\yccy\trace\yccy_pr0k_3900.trc  (incident=131190):
ORA-07445: exception encountered: core dump [kcbzdh()+942] [ACCESS_VIOLATION] [ADDR:0xFFFFFFFFFFFFFFFF] [PC:0x12EC62C] [UNABLE_TO_READ] []
ORA-00600: internal error code, arguments: [kcbr_validate_read_1], [], [], [], [], [], [], [], [], [], [], []
Incident details in: d:\oracle\diag\rdbms\yccy\yccy\incident\incdir_131190\yccy_pr0k_3900_i131190.trc
Errors in file d:\oracle\diag\rdbms\yccy\yccy\trace\yccy_pr01_2220.trc  (incident=131037):
ORA-00600: internal error code, arguments: [kcbrapply_14], [], [], [], [], [], [], [], [], [], [], []
Incident details in: d:\oracle\diag\rdbms\yccy\yccy\incident\incdir_131037\yccy_pr01_2220_i131037.trc
Exception [type: ACCESS_VIOLATION, UNABLE_TO_READ] [ADDR:0xFFFFFFFFFFFFFFFF] [PC:0x12EC62C, kcbzdh()+942]
Errors in file d:\oracle\diag\rdbms\yccy\yccy\trace\yccy_pr0f_816.trc  (incident=131150):
ORA-07445: exception encountered: core dump [kcbzdh()+942] [ACCESS_VIOLATION] [ADDR:0xFFFFFFFFFFFFFFFF] [PC:0x12EC62C] [UNABLE_TO_READ] []
ORA-00600: internal error code, arguments: [kcbr_validate_read_1], [], [], [], [], [], [], [], [], [], [], []
Incident details in: d:\oracle\diag\rdbms\yccy\yccy\incident\incdir_131150\yccy_pr0f_816_i131150.trc
Errors in file d:\oracle\diag\rdbms\yccy\yccy\trace\yccy_pr01_2220.trc  (incident=131038):
ORA-07445: exception encountered: core dump [kcbzdh()+942] [ACCESS_VIOLATION] [ADDR:0xFFFFFFFFFFFFFFFF] [PC:0x12EC62C] [UNABLE_TO_READ] []
ORA-00600: internal error code, arguments: [kcbrapply_14], [], [], [], [], [], [], [], [], [], [], []
Incident details in: d:\oracle\diag\rdbms\yccy\yccy\incident\incdir_131038\yccy_pr01_2220_i131038.trc
Mon Mar 28 10:20:39 2016
Errors in file d:\oracle\diag\rdbms\yccy\yccy\trace\yccy_pr0h_4036.trc  (incident=131165):
ORA-00600: internal error code, arguments: [kcbr_validate_read_1], [], [], [], [], [], [], [], [], [], [], []
Incident details in: d:\oracle\diag\rdbms\yccy\yccy\incident\incdir_131165\yccy_pr0h_4036_i131165.trc
Exception [type: ACCESS_VIOLATION, UNABLE_TO_READ] [ADDR:0xFFFFFFFFFFFFFFFF] [PC:0x12EC62C, kcbzdh()+942]
Exception [type: ACCESS_VIOLATION, UNABLE_TO_READ] [ADDR:0xFFFFFFFFFFFFFFFF] [PC:0x12EC13B, kcbzpnd()+299]
Exception [type: ACCESS_VIOLATION, UNABLE_TO_READ] [ADDR:0xFFFFFFFFFFFFFFFF] [PC:0x1351BB9, kcbs_dump_adv_state()+1529]
Exception [type: ACCESS_VIOLATION, UNABLE_TO_READ] [ADDR:0xFFFFFFFFFFFFFFFF] [PC:0x12EC13B, kcbzpnd()+299]
Errors in file d:\oracle\diag\rdbms\yccy\yccy\trace\yccy_pr0h_4036.trc  (incident=131166):
ORA-07445: exception encountered: core dump [kcbzdh()+942] [ACCESS_VIOLATION] [ADDR:0xFFFFFFFFFFFFFFFF] [PC:0x12EC62C] [UNABLE_TO_READ] []
ORA-00600: internal error code, arguments: [kcbr_validate_read_1], [], [], [], [], [], [], [], [], [], [], []
Incident details in: d:\oracle\diag\rdbms\yccy\yccy\incident\incdir_131166\yccy_pr0h_4036_i131166.trc
Exception [type: ACCESS_VIOLATION, UNABLE_TO_READ] [ADDR:0xFFFFFFFFFFFFFFFF] [PC:0x12EC13B, kcbzpnd()+299]
Mon Mar 28 10:20:40 2016
Checker run found 60 new persistent data failures
Errors in file d:\oracle\diag\rdbms\yccy\yccy\trace\yccy_pr0d_2112.trc  (incident=131134):
ORA-07445: exception encountered: core dump [kcbzpnd()+299] [ACCESS_VIOLATION] [ADDR:0xFFFFFFFFFFFFFFFF] [PC:0x12EC13B] [UNABLE_TO_READ] []
ORA-00600: internal error code, arguments: [kcbrapply_12], [], [], [], [], [], [], [], [], [], [], []
Incident details in: d:\oracle\diag\rdbms\yccy\yccy\incident\incdir_131134\yccy_pr0d_2112_i131134.trc
Errors in file d:\oracle\diag\rdbms\yccy\yccy\trace\yccy_pr04_3980.trc  (incident=131022):
ORA-07445: exception encountered: core dump [kcbs_dump_adv_state()+1529] [ACCESS_VIOLATION] [ADDR:0xFFFFFFFFFFFFFFFF] [PC:0x1351BB9] [UNABLE_TO_READ] []
ORA-00600: internal error code, arguments: [kcbrapply_12], [], [], [], [], [], [], [], [], [], [], []
Incident details in: d:\oracle\diag\rdbms\yccy\yccy\incident\incdir_131022\yccy_pr04_3980_i131022.trc
Errors in file d:\oracle\diag\rdbms\yccy\yccy\trace\yccy_pr0e_3260.trc  (incident=131142):
ORA-07445: exception encountered: core dump [kcbzpnd()+299] [ACCESS_VIOLATION] [ADDR:0xFFFFFFFFFFFFFFFF] [PC:0x12EC13B] [UNABLE_TO_READ] []
ORA-00600: internal error code, arguments: [3020], [5], [163457], [21134977], [], [], [], [], [], [], [], []
ORA-10567: Redo is inconsistent with data block (file# 5, block# 163457, file offset is 1339039744 bytes)
ORA-10564: tablespace DT_SYS_DAT
ORA-01110: data file 5: 'E:\ORACLE_DATA\YCCY\DT_SYS_DAT.ORA'
ORA-10560: block type 'FIRST LEVEL BITMAP BLOCK'
Incident details in: d:\oracle\diag\rdbms\yccy\yccy\incident\incdir_131142\yccy_pr0e_3260_i131142.trc
Mon Mar 28 10:20:41 2016
Trace dumping is performing id=[cdmp_20160328102041]
Errors in file d:\oracle\diag\rdbms\yccy\yccy\trace\yccy_pr0i_2132.trc  (incident=131174):
ORA-07445: exception encountered: core dump [kcbzpnd()+299] [ACCESS_VIOLATION] [ADDR:0xFFFFFFFFFFFFFFFF] [PC:0x12EC13B] [UNABLE_TO_READ] []
ORA-00600: internal error code, arguments: [3020], [5], [154240], [21125760], [], [], [], [], [], [], [], []
ORA-10567: Redo is inconsistent with data block (file# 5, block# 154240, file offset is 1263534080 bytes)
ORA-10564: tablespace DT_SYS_DAT
ORA-01110: data file 5: 'E:\ORACLE_DATA\YCCY\DT_SYS_DAT.ORA'
ORA-10560: block type 'FIRST LEVEL BITMAP BLOCK'
Incident details in: d:\oracle\diag\rdbms\yccy\yccy\incident\incdir_131174\yccy_pr0i_2132_i131174.trc
Mon Mar 28 10:20:41 2016
Exception [type: ACCESS_VIOLATION, UNABLE_TO_READ] [ADDR:0x2E7FFFFFE] [PC:0x74CAE3F0, 0000000074CAE3F0]
Errors in file d:\oracle\diag\rdbms\yccy\yccy\trace\yccy_pr06_2684.trc  (incident=131077):
ORA-07445: exception encountered: core dump [PC:0x74CAE3F0] [ACCESS_VIOLATION] [ADDR:0x2E7FFFFFE] [PC:0x74CAE3F0] [UNABLE_TO_READ] []
Incident details in: d:\oracle\diag\rdbms\yccy\yccy\incident\incdir_131077\yccy_pr06_2684_i131077.trc
Mon Mar 28 10:20:42 2016
Exception [type: ACCESS_VIOLATION, UNABLE_TO_WRITE] [ADDR:0x0] [PC:0x4D20D2, kslgetl()+54]
Mon Mar 28 10:20:42 2016
Errors in file d:\oracle\diag\rdbms\yccy\yccy\trace\yccy_pmon_3856.trc  (incident=130853):
ORA-07445: exception encountered: core dump [kslgetl()+54] [ACCESS_VIOLATION] [ADDR:0x0] [PC:0x4D20D2] [UNABLE_TO_WRITE] []
Incident details in: d:\oracle\diag\rdbms\yccy\yccy\incident\incdir_130853\yccy_pmon_3856_i130853.trc
Trace dumping is performing id=[cdmp_20160328102042]
Errors in file d:\oracle\diag\rdbms\yccy\yccy\incident\incdir_131077\yccy_pr06_2684_i131077.trc:
ORA-00607: Internal error occurred while making a change to a data block
ORA-00602: internal programming exception
ORA-07445: exception encountered: core dump [PC:0x74CAE3F0] [ACCESS_VIOLATION] [ADDR:0x2E7FFFFFE] [PC:0x74CAE3F0] [UNABLE_TO_READ] []
Process debug not enabled via parameter _debug_enable
Trace dumping is performing id=[cdmp_20160328102043]
Mon Mar 28 10:21:01 2016
RECO (ospid: 3524): terminating the instance due to error 472
Instance terminated by RECO, pid = 3524

Reading through this log: file 45 repeatedly reports corrupt blocks, but each one is re-read and confirmed valid (for example, "Reread (file 45, block 7836) found valid data"), so it is not the real problem. The real problem is file 5, which throws a large number of ORA-600 [3020] errors.
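
Before deciding how to handle file 5, it is also worth mapping the file#/block# pairs from the ORA-600 [3020] arguments back to their owning segments. A sketch using the standard DBA_EXTENTS view (163457 is one of the block numbers reported above; substitute the others as needed, and note this can only be run once the database is open):

SQL> select owner, segment_name, segment_type
  2    from dba_extents
  3   where file_id = 5
  4     and 163457 between block_id and block_id + blocks - 1;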

Recovering the datafiles one by one

SQL> startup mount;
ORACLE 例程已经启动。

Total System Global Area 1.7103E+10 bytes
Fixed Size                  2192864 bytes
Variable Size            9059699232 bytes
Database Buffers         8019509248 bytes
Redo Buffers               21762048 bytes
数据库装载完毕。
SQL> recover datafile 1;
完成介质恢复。
SQL> recover  datafile 2;
ORA-03113: 通信通道的文件结尾
进程 ID: 1652
会话 ID: 551 序列号: 55

SQL> recover datafile 3;
完成介质恢复。
SQL> recover datafile 4;
完成介质恢复。

SQL> recover datafile 5;
ORA-03113: 通信通道的文件结尾
进程 ID: 4900
会话 ID: 551 序列号: 56131

SQL> recover datafile 6;
完成介质恢复。
…………
SQL> recover datafile 63;
完成介质恢复。
SQL> recover datafile 64;
完成介质恢复。

Apart from datafiles 2 and 5, every datafile was recovered successfully. (A query to generate the per-file RECOVER commands is sketched below.)
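
With dozens of datafiles, the per-file RECOVER commands above do not have to be typed by hand; a query like the following (a sketch, run against the mounted instance) generates them so the output can be spooled and pasted back into SQL*Plus:

SQL> set heading off feedback off pagesize 0
SQL> select 'recover datafile ' || file# || ';' from v$datafile order by file#;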

Attempting to handle datafile 2
It could not be recovered, so it was set aside for the moment; the plan was to take it offline, open the database, and force the file back online afterwards.

SQL> recover  datafile 2 ;
ORA-03113: 通信通道的文件结尾
进程 ID: 5020
会话 ID: 551 序列号: 3


Mon Mar 28 10:47:12 2016
ALTER DATABASE RECOVER  datafile 2  
Media Recovery Start
Serial Media Recovery started
Recovery of Online Redo Log: Thread 1 Group 1 Seq 18688 Reading mem 0
  Mem# 0: E:\ORACLE_DATA\YCCY\REDO01.LOG
Exception [type: ACCESS_VIOLATION, UNABLE_TO_READ] [ADDR:0x2E7FFFFFE] [PC:0x74CAE3F0, 0000000074CAE3F0]
Errors in file d:\oracle\diag\rdbms\yccy\yccy\trace\yccy_ora_3508.trc  (incident=143022):
ORA-07445: 出现异常错误: 核心转储 [PC:0x74CAE3F0] [ACCESS_VIOLATION] [ADDR:0x2E7FFFFFE] [PC:0x74CAE3F0] [UNABLE_TO_READ] []
Incident details in: d:\oracle\diag\rdbms\yccy\yccy\incident\incdir_143022\yccy_ora_3508_i143022.trc
Errors in file d:\oracle\diag\rdbms\yccy\yccy\incident\incdir_143022\yccy_ora_3508_i143022.trc:
ORA-00607: 当更改数据块时出现内部错误
ORA-00602: 内部编程异常错误
ORA-07445: 出现异常错误: 核心转储 [PC:0x74CAE3F0] [ACCESS_VIOLATION] [ADDR:0x2E7FFFFFE] [PC:0x74CAE3F0] [UNABLE_TO_READ] []

Handling datafile 5

SQL> recover datafile 5;
ORA-00283: 恢复会话因错误而取消
ORA-00600: 内部错误代码, 参数: [3020], [5], [163457], [21134977], [], [], [],
[], [], [], [], []
ORA-10567: Redo is inconsistent with data block (file# 5, block# 163457, file
offset is 1339039744 bytes)
ORA-10564: tablespace DT_SYS_DAT
ORA-01110: 数据文件 5: 'E:\ORACLE_DATA\YCCY\DT_SYS_DAT.ORA'
ORA-10560: block type 'FIRST LEVEL BITMAP BLOCK'


SQL> recover  datafile 5 allow 1 corruption;
ORA-00283: 恢复会话因错误而取消
ORA-00600: 内部错误代码, 参数: [3020], [5], [162433], [21133953], [], [], [],
[], [], [], [], []
ORA-10567: Redo is inconsistent with data block (file# 5, block# 162433, file
offset is 1330651136 bytes)
ORA-10564: tablespace DT_SYS_DAT
ORA-01110: 数据文件 5: 'E:\ORACLE_DATA\YCCY\DT_SYS_DAT.ORA'
ORA-10560: block type 'FIRST LEVEL BITMAP BLOCK'


SQL> recover  datafile 5 allow 1 corruption;
ORA-00283: 恢复会话因错误而取消
ORA-00600: 内部错误代码, 参数: [3020], [5], [166272], [21137792], [], [], [],
[], [], [], [], []
ORA-10567: Redo is inconsistent with data block (file# 5, block# 166272, file
offset is 1362100224 bytes)
ORA-10564: tablespace DT_SYS_DAT
ORA-01110: 数据文件 5: 'E:\ORACLE_DATA\YCCY\DT_SYS_DAT.ORA'
ORA-10560: block type 'FIRST LEVEL BITMAP BLOCK'



SQL> recover  datafile 5 allow 1 corruption;
ORA-00283: 恢复会话因错误而取消
ORA-00600: 内部错误代码, 参数: [3020], [5], [169346], [21140866], [], [], [],
[], [], [], [], []
ORA-10567: Redo is inconsistent with data block (file# 5, block# 169346, file
offset is 1387282432 bytes)
ORA-10564: tablespace DT_SYS_DAT
ORA-01110: 数据文件 5: 'E:\ORACLE_DATA\YCCY\DT_SYS_DAT.ORA'
ORA-10560: block type 'FIRST LEVEL BITMAP BLOCK'


SQL> recover  datafile 5 allow 1 corruption;
完成介质恢复。

Opening the database and bringing datafile 2 back online

SQL> startup pfile='d:/pfile.txt' mount;
ORACLE 例程已经启动。

Total System Global Area 1.7103E+10 bytes
Fixed Size                  2192864 bytes
Variable Size            9059699232 bytes
Database Buffers         8019509248 bytes
Redo Buffers               21762048 bytes
数据库装载完毕。
SQL> alter database datafile 2 offline;

数据库已更改。

SQL> alter database open;

数据库已更改。

SQL> shutdown immediate;
ORA-03113: 通信通道的文件结尾
SQL> conn / as sysdba
已连接到空闲例程。

SQL> startup pfile='d:/pfile.txt' mount;
ORACLE 例程已经启动。

Total System Global Area 1.7103E+10 bytes
Fixed Size                  2192864 bytes
Variable Size            9059699232 bytes
Database Buffers         8019509248 bytes
Redo Buffers               21762048 bytes
数据库装载完毕。
SQL> select group#,status from v$log;

    GROUP# STATUS
---------- ----------------
         1 INACTIVE
         3 INACTIVE
         2 CURRENT

SQL> recover database until cancel;
ORA-00279: 更改 1226478477 (在 03/28/2016 20:23:37 生成) 对于线程 1 是必需的
ORA-00289: 建议:
D:\ORACLE\FLASH_RECOVERY_AREA\YCCY\ARCHIVELOG\2016_03_28\O1_MF_1_18689_%U_.ARC
ORA-00280: 更改 1226478477 (用于线程 1) 在序列 #18689 中


指定日志: {<RET>=suggested | filename | AUTO | CANCEL}
E:\ORACLE_DATA\YCCY\REDO02.LOG
已应用的日志。
完成介质恢复。
SQL> alter database datafile 2 online;

数据库已更改。

SQL> alter database open resetlogs;

数据库已更改。

The database was now essentially open as normal; once the corrupt blocks left behind by the ORA-600 [3020] errors are dealt with, everything is basically OK.
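
Before repairing individual objects, it helps to take stock of what is actually corrupt after an open like this. A minimal sketch using standard tooling (RMAN VALIDATE populates V$DATABASE_BLOCK_CORRUPTION; the exact scope and options are up to you):

RMAN> validate check logical database;

SQL> select file#, block#, blocks, corruption_type
  2    from v$database_block_corruption;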


ORA-10562 recovery with ALLOW 1 CORRUPTION

A friend's database hung for a moment and then crashed outright after a storage change; afterwards it could not be started normally, and he asked for support.
The database reported ORA-600 [2131]
It could not be mounted; this was resolved by recreating the controlfile.

Mon Nov 30 20:35:38 2015
alter database mount
Mon Nov 30 20:35:38 2015
NOTE: Loaded library: System 
Mon Nov 30 20:35:38 2015
SUCCESS: diskgroup DATADG was mounted
Mon Nov 30 20:35:38 2015
NOTE: dependency between database xifenfei and diskgroup resource ora.DATADG.dg is established
Errors in file /u01/app/oracle/diag/rdbms/xifenfei/xifenfei1/trace/xifenfei1_ora_26450.trc  (incident=3032256):
ORA-00600: internal error code, arguments: [2131], [33], [32], [], [], [], [], [], [], [], [], []
ORA-600 signalled during: alter database mount...

Attempting to recover the database

Mon Nov 30 20:45:53 2015
ALTER DATABASE RECOVER  database  
Media Recovery Start
 started logmerger process
Parallel Media Recovery started with 80 slaves
Mon Nov 30 20:45:56 2015
Recovery of Online Redo Log: Thread 2 Group 11 Seq 617 Reading mem 0
  Mem# 0: +DATADG/xifenfei/redo011.log
Recovery of Online Redo Log: Thread 1 Group 4 Seq 5410 Reading mem 0
  Mem# 0: +DATADG/xifenfei/redo04.log
Recovery of Online Redo Log: Thread 1 Group 5 Seq 5411 Reading mem 0
  Mem# 0: +DATADG/xifenfei/redo05.log
Mon Nov 30 20:46:07 2015
Recovery of Online Redo Log: Thread 1 Group 6 Seq 5412 Reading mem 0
  Mem# 0: +DATADG/xifenfei/redo06.log
Mon Nov 30 20:46:13 2015
Exception [type: SIGSEGV, Address not mapped to object] [ADDR:0xC] [PC:0x95FB502, kdxlin()+4088] [flags: 0x0, count: 1]
Errors in file /u01/app/oracle/diag/rdbms/xifenfei/xifenfei1/trace/xifenfei1_pr13_30480.trc  (incident=3032568):
ORA-07445: 出现异常错误: 核心转储 [kdxlin()+4088] [SIGSEGV] [ADDR:0xC] [PC:0x95FB502] [Address not mapped to object] []
Mon Nov 30 20:46:17 2015
Sweep [inc][3032568]: completed
Sweep [inc2][3032568]: completed
Mon Nov 30 20:46:31 2015
Slave exiting with ORA-10562 exception
Errors in file /u01/app/oracle/diag/rdbms/xifenfei/xifenfei1/trace/xifenfei1_pr13_30480.trc:
ORA-10562: Error occurred while applying redo to data block (file# 2, block# 165054)
ORA-10564: tablespace SYSAUX
ORA-01110: 数据文件 2: '+DATADG/xifenfei/datafile/sysaux.265.861925867'
ORA-10561: block type 'TRANSACTION MANAGED INDEX BLOCK', data object# 271
ORA-00607: 当更改数据块时出现内部错误
ORA-00602: 内部编程异常错误
ORA-07445: 出现异常错误: 核心转储 [kdxlin()+4088] [SIGSEGV] [ADDR:0xC] [PC:0x95FB502] [Address not mapped to object] []
Mon Nov 30 20:46:31 2015
Recovery Slave PR13 previously exited with exception 10562
Mon Nov 30 20:46:33 2015
Checker run found 28 new persistent data failures
Mon Nov 30 20:46:35 2015
Media Recovery failed with error 448
Errors in file /u01/app/oracle/diag/rdbms/xifenfei/xifenfei1/trace/xifenfei1_pr00_30400.trc:
ORA-00283: 恢复会话因错误而取消
ORA-00448: 后台进程正常结束
ORA-10562 signalled during: ALTER DATABASE RECOVER  database  ...

This shows that during recovery the redo could not, for some reason, be applied to file 2, block 165054, which is why RECOVER DATABASE failed.

Recovering datafile by datafile

SQL> recover datafile 1;
Media recovery complete.
SQL> recover datafile 3,4,5,6,7;
Media recovery complete.
SQL> recover datafile 8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28;              
Media recovery complete.
SQL> recover datafile 2;
ORA-00283: recovery session canceled due to errors
ORA-10562: Error occurred while applying redo to data block (file# 2, block#
165054)
ORA-10564: tablespace SYSAUX
ORA-01110: data file 2: '+DATADG/xifenfei/datafile/sysaux.265.861925867'
ORA-10561: block type 'TRANSACTION MANAGED INDEX BLOCK', data object# 271
ORA-00607: Internal error occurred while making a change to a data block
ORA-00602: internal programming exception
ORA-07445: exception encountered: core dump [kdxlin()+4088] [SIGSEGV]
[ADDR:0xC] [PC:0x95FB502] [Address not mapped to object] []

The error is the same as the one from RECOVER DATABASE, so the only option is to let recovery skip this block and carry on; experience says that data object# 271 is not a core system object and will not prevent the database from opening.

Skipping the corrupt block and continuing the recovery

SQL> recover  datafile 2 allow 1 corruption;
ORA-00283: recovery session canceled due to errors
ORA-00600: internal error code, arguments: [3020], [2], [69793], [8458401], [],
[], [], [], [], [], [], []
ORA-10567: Redo is inconsistent with data block (file# 2, block# 69793, file
offset is 571744256 bytes)
ORA-10564: tablespace SYSAUX
ORA-01110: data file 2: '+DATADG/xifenfei/datafile/sysaux.265.861925867'
ORA-10561: block type 'TRANSACTION MANAGED INDEX BLOCK', data object# 272


SQL> recover  datafile 2 allow 1 corruption;
Media recovery complete.
SQL> alter database open;

Database altered.

ORA-600 [3020] appeared next, so that corrupt block was skipped as well, after which the database opened cleanly. Do not forget to add the tempfiles back.
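
Adding a tempfile back is a one-liner; a sketch (the tablespace name TEMP, the target disk group and the size are assumptions, substitute your own values):

SQL> alter tablespace temp add tempfile '+DATADG' size 8g autoextend on;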

Dealing with the affected objects

SQL> select object_name,object_type from dba_objects where data_object_id in(272,271);

OBJECT_NAME
--------------------------------------------------------------------------------
OBJECT_TYPE
-------------------
SMON_SCN_TIME_TIM_IDX
INDEX

SMON_SCN_TIME_SCN_IDX
INDEX

SQL> select index_name from dba_indexes where table_name='SMON_SCN_TIME';

INDEX_NAME
------------------------------
SMON_SCN_TIME_TIM_IDX
SMON_SCN_TIME_SCN_IDX

SQL> set pages 1000
SQL> set long 1000
SQL> Select dbms_metadata.get_ddl('TABLE','SMON_SCN_TIME','SYS') FROM DUAL ;

DBMS_METADATA.GET_DDL('TABLE','SMON_SCN_TIME','SYS')
--------------------------------------------------------------------------------

  CREATE TABLE "SYS"."SMON_SCN_TIME"
   (    "THREAD" NUMBER,
        "TIME_MP" NUMBER,
        "TIME_DP" DATE,
        "SCN_WRP" NUMBER,
        "SCN_BAS" NUMBER,
        "NUM_MAPPINGS" NUMBER,
        "TIM_SCN_MAP" RAW(1200),
        "SCN" NUMBER DEFAULT 0,
        "ORIG_THREAD" NUMBER DEFAULT 0           /* for downgrade */
   ) CLUSTER "SYS"."SMON_SCN_TO_TIME_AUX" ("THREAD")

SQL> analyze table smon_scn_time validate structure cascade online;
analyze table smon_scn_time validate structure cascade online
*
ERROR at line 1:
ORA-01578: ORACLE data block corrupted (file # 2, block # 165054)
ORA-01110: data file 2: '+DATADG/xifenfei/datafile/sysaux.265.861925867'

SQL> truncate CLUSTER "SYS"."SMON_SCN_TO_TIME_AUX";

Cluster truncated.

For the SMON_SCN_TIME side of the cleanup, see the separate note on common SMON_SCN_TIME questions. With that, the recovery was essentially complete, and luck was on our side: the database came back perfectly, with zero data loss.
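
After truncating the SYS.SMON_SCN_TO_TIME_AUX cluster, SMON gradually repopulates the SCN-to-time mapping on its own. A quick way to confirm it is filling back in (a sketch using columns from the table DDL shown above):

SQL> select count(*) mappings, max(time_dp) latest
  2    from sys.smon_scn_time;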
