[PLUTO-help] RAID problems

Piviul pluto at flanet.org
Thu 22 Apr 2004 18:54:29 CEST


Hi everyone, I have a SCSI box with three 4 GB hard drives whose
partitions are laid out like this:

sda1 - boot partition
sdc1 - swap partition
sda2, sdb1, sdc2 - (software) RAID5 array
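
To double-check that layout, something like the following should do
(just a rough sketch on my part):

  # list the partition table of each SCSI disk
  fdisk -l /dev/sda
  fdisk -l /dev/sdb
  fdisk -l /dev/sdc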

Looking at the logs I noticed that the RAID is no longer working, and as 
far as I can tell the member that dropped out is sda2.
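
If I'm reading things right, something like this should confirm the
state of the array and of the kicked partition (only a sketch;
/proc/mdstat is certainly there, mdadm may or may not be installed on
this box):

  # a degraded RAID5 shows up with a missing member, e.g. [3/2] [_UU]
  cat /proc/mdstat

  # if mdadm is available, inspect the RAID superblock on the
  # partition that was kicked out
  mdadm --examine /dev/sda2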

Before throwing the drive away I would like to try a low-level format of 
sda from the SCSI BIOS, but sda1 holds the boot partition...
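
I was thinking I should first save the partition table and an image of
sda1 somewhere off that disk, something along these lines (only a
sketch, and /mnt/backup is just a made-up destination with enough free
space; the boot loader in the MBR would also need reinstalling
afterwards):

  # dump sda's partition table so it can be restored after the format
  sfdisk -d /dev/sda > /mnt/backup/sda-partitions.txt

  # raw image of the boot partition
  dd if=/dev/sda1 of=/mnt/backup/sda1.img bs=4096

  # afterwards, roughly:
  #   sfdisk /dev/sda < /mnt/backup/sda-partitions.txt
  #   dd if=/mnt/backup/sda1.img of=/dev/sda1 bs=4096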

Can anyone give me some advice on how to tackle the problem?
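
For instance, if the disk itself turned out to be still healthy, would
something like this be enough to put sda2 back into the degraded array?
(Again just a sketch of what I have in mind; raidhotadd is from
raidtools, mdadm would be the alternative.)

  # re-add the kicked partition to the degraded array (raidtools)
  raidhotadd /dev/md0 /dev/sda2

  # or, with mdadm:
  #   mdadm /dev/md0 --add /dev/sda2

  # then watch the rebuild progress
  cat /proc/mdstat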

I'm attaching the logs, which should tell you what I've just 
described... they should, anyway! I'm still a newbie, so...

Thanks a lot

Piviul
[...]
> Apr 22 17:44:33 rh-proxy kernel: SCSI device sda: 8330543 512-byte hdwr sectors (4265 MB)
> Apr 22 17:44:33 rh-proxy kernel: Partition check:
> Apr 22 17:44:33 rh-proxy kernel:  sda: sda1 sda2
> Apr 22 17:44:33 rh-proxy kernel: SCSI device sdb: 8330543 512-byte hdwr sectors (4265 MB)
> Apr 22 17:44:33 rh-proxy kernel:  sdb: sdb1
> Apr 22 17:44:33 rh-proxy kernel: SCSI device sdc: 8330543 512-byte hdwr sectors (4265 MB)
> Apr 22 17:44:33 rh-proxy kernel:  sdc: sdc1 sdc2
> Apr 22 17:44:33 rh-proxy kernel: raid5: measuring checksumming speed
> Apr 22 17:44:34 rh-proxy kernel:    8regs     :   491.600 MB/sec
> Apr 22 17:44:34 rh-proxy kernel:    32regs    :   250.400 MB/sec
> Apr 22 17:44:34 rh-proxy kernel:    pII_mmx   :   598.000 MB/sec
> Apr 22 17:44:34 rh-proxy kernel:    p5_mmx    :   628.400 MB/sec
> Apr 22 17:44:34 rh-proxy kernel: raid5: using function: p5_mmx (628.400 MB/sec)
> Apr 22 17:44:34 rh-proxy kernel: md: raid5 personality registered as nr 4
> Apr 22 17:44:34 rh-proxy kernel: Journalled Block Device driver loaded
> Apr 22 17:44:34 rh-proxy kernel: md: Autodetecting RAID arrays.
> Apr 22 17:44:34 rh-proxy kernel:  [events: 00000001]
> Apr 22 17:44:34 rh-proxy kernel:  [events: 000000d2]
> Apr 22 17:44:34 rh-proxy kernel:  [events: 000000d2]
> Apr 22 17:44:34 rh-proxy kernel: md: autorun ...
> Apr 22 17:44:35 rh-proxy kernel: md: considering sdc2 ...
> Apr 22 17:44:35 rh-proxy kernel: md:  adding sdc2 ...
> Apr 22 17:44:35 rh-proxy kernel: md:  adding sdb1 ...
> Apr 22 17:44:35 rh-proxy kernel: md:  adding sda2 ...
> Apr 22 17:44:35 rh-proxy kernel: md: created md0
> Apr 22 17:44:35 rh-proxy kernel: md: bind<sda2,1>
> Apr 22 17:44:35 rh-proxy kernel: md: bind<sdb1,2>
> Apr 22 17:44:35 rh-proxy kernel: md: bind<sdc2,3>
> Apr 22 17:44:35 rh-proxy kernel: md: running: <sdc2><sdb1><sda2>
> Apr 22 17:44:35 rh-proxy kernel: md: sdc2's event counter: 000000d2
> Apr 22 17:44:35 rh-proxy kernel: md: sdb1's event counter: 000000d2
> Apr 22 17:44:35 rh-proxy kernel: md: sda2's event counter: 00000001
> Apr 22 17:44:36 rh-proxy kernel: md: superblock update time inconsistency -- using the most recent one
> Apr 22 17:44:36 rh-proxy kernel: md: freshest: sdc2
> Apr 22 17:44:36 rh-proxy kernel: md: kicking non-fresh sda2 from array!
> Apr 22 17:44:36 rh-proxy kernel: md: unbind<sda2,2>
> Apr 22 17:44:36 rh-proxy kernel: md: export_rdev(sda2)
> Apr 22 17:44:36 rh-proxy kernel: md0: max total readahead window set to 512k
> Apr 22 17:44:36 rh-proxy kernel: md0: 2 data-disks, max readahead per data-disk: 256k
> Apr 22 17:44:36 rh-proxy kernel: raid5: device sdc2 operational as raid disk 2
> Apr 22 17:44:36 rh-proxy kernel: raid5: device sdb1 operational as raid disk 1
> Apr 22 17:44:36 rh-proxy kernel: raid5: md0, not all disks are operational -- trying to recover array
> Apr 22 17:44:36 rh-proxy kernel: raid5: allocated 3284kB for md0
> Apr 22 17:44:36 rh-proxy kernel: raid5: raid level 5 set md0 active with 2 out of 3 devices, algorithm 0
> Apr 22 17:44:36 rh-proxy kernel: RAID5 conf printout:
> Apr 22 17:44:36 rh-proxy kernel:  --- rd:3 wd:2 fd:1
> Apr 22 17:44:36 rh-proxy kernel:  disk 0, s:0, o:0, n:0 rd:0 us:1 dev:[dev 00:00]
> Apr 22 17:44:37 rh-proxy kernel:  disk 1, s:0, o:1, n:1 rd:1 us:1 dev:sdb1
> Apr 22 17:44:37 rh-proxy kernel:  disk 2, s:0, o:1, n:2 rd:2 us:1 dev:sdc2
> Apr 22 17:44:37 rh-proxy kernel: RAID5 conf printout:
> Apr 22 17:44:37 rh-proxy kernel:  --- rd:3 wd:2 fd:1
> Apr 22 17:44:37 rh-proxy kernel:  disk 0, s:0, o:0, n:0 rd:0 us:1 dev:[dev 00:00]
> Apr 22 17:44:37 rh-proxy kernel:  disk 1, s:0, o:1, n:1 rd:1 us:1 dev:sdb1
> Apr 22 17:44:37 rh-proxy kernel:  disk 2, s:0, o:1, n:2 rd:2 us:1 dev:sdc2
> Apr 22 17:44:37 rh-proxy kernel: md: updating md0 RAID superblock on device
> Apr 22 17:44:37 rh-proxy kernel: md: sdc2 [events: 000000d3]<6>(write) sdc2's sb offset: 3775168
> Apr 22 17:44:37 rh-proxy kernel: md: recovery thread got woken up ...
> Apr 22 17:44:37 rh-proxy kernel: md0: no spare disk to reconstruct array! -- continuing in degraded mode
> Apr 22 17:44:37 rh-proxy kernel: md: recovery thread finished ...
> Apr 22 17:44:37 rh-proxy kernel: md: sdb1 [events: 000000d3]<6>(write) sdb1's sb offset: 4160704
> Apr 22 17:44:37 rh-proxy kernel: md: ... autorun DONE.
[...]



