Six instances went down yesterday afternoon because of ORA-00227: corrupt block detected.

This morning, I tried to delete a corrupted database under ASM, and the ASM instance went down as well:

WARNING: cache read a corrupt block: group=1(DATA) fn=287 indblk=0 disk=1 (DATA_0001) incarn=2501166118 au=20232 blk=0 count=1

Fri Mar 06 10:07:40 2020

Errors in file /u01/app/grid/diag/asm/+asm/+ASM/trace/+ASM_ora_197630.trc

ORA-15196: invalid ASM block header [kfc.c:29416] [endian_kfbh] [287] [2147483648] [0 != 1]

NOTE: a corrupted block from group DATA was dumped to /u01/app/grid/diag/asm/+asm/+ASM/trace/+ASM_ora_197630.trc

WARNING: cache read (retry) a corrupt block: group=1(DATA) fn=287 indblk=0 disk=1 (DATA_0001) incarn=2501166118 au=20232 blk=0 count=1

Can someone help me?

How is the storage being provisioned? Are you using external redundancy for the disk groups? Are there any issues reported on the hardware side?
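If you are not sure about the redundancy, one quick way to check (assuming the ASM instance can be brought back up) is to query V$ASM_DISKGROUP from sqlplus:

$ sqlplus / as sysasm
SQL> SELECT name, type, state FROM v$asm_diskgroup;

TYPE will show EXTERN, NORMAL, or HIGH. With NORMAL or HIGH redundancy, ASM keeps mirror copies of its metadata, so a single corrupt block may still be readable from a partner disk; with EXTERN redundancy there is no second copy inside ASM.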

Do you have the list of ASM disks allocated to the 'DATA' disk group? You can check the metadata blocks with 'kfed':

$ kfed read <disk1> aun=0 blkn=1
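If you don't already have the disk list, a sketch of how to pull it (your alert log shows 'DATA' as group 1; adjust the group_number if yours differs):

$ sqlplus / as sysasm
SQL> SELECT disk_number, path, header_status FROM v$asm_disk WHERE group_number = 1;

Then run the kfed read against each path. In the output, the disk header (aun=0 blkn=0) should show kfbh.type as KFBTYP_DISKHEAD, and kfbh.endian should be consistent across all disks. The ORA-15196 [endian_kfbh] [0 != 1] in your trace suggests the endian byte in that block header did not match what ASM expected, which usually points at the block having been overwritten.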

You can also make use of 'amdu':

$ amdu -diskstring '<your_path_to_ASM_disks>' -dump '<diskgroup>'
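A couple of notes on amdu: it reads the disks directly, so it works even while the ASM instance is down, and it writes its output into a new amdu_<timestamp> directory under your current working directory (the report.txt there summarizes what it could read from each disk). Since your warnings all point at ASM file number 287, you could also try extracting just that file; the exact syntax may vary by version, so check the amdu help on your release first:

$ amdu -diskstring '<your_path_to_ASM_disks>' -extract 'DATA.287'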

DBRECOVER Recovery Options

For Oracle incidents, start with the DBRECOVER for Oracle trial to verify table visibility, row previews, and export readiness on copied datafiles. For MySQL and InnoDB incidents, DBRECOVER for MySQL is free software and can inspect .ibd files, ibdata1, and database directories locally.

When the case is urgent, preserve the original files first, work from copies, and contact paid emergency support with the database version, platform, error messages, file list, and recovery objective.
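A minimal example of taking such a copy from the OS with dd, where /dev/asm-disk1 and /backup are placeholders for one of your affected members and a location with enough free space:

$ dd if=/dev/asm-disk1 of=/backup/asm-disk1.img bs=1M conv=noerror,sync

conv=noerror,sync keeps the copy running past unreadable sectors and pads them with zeros, so offsets in the image stay aligned with the original disk.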
