RAID6 : clean, degraded

  • I received this mail this morning:


    Code
    This is an automatically generated mail message from mdadm running on Zetta

    A DegradedArray event had been detected on md device /dev/md/zetta:0.

    Faithfully yours, etc.

    P.S. The /proc/mdstat file currently contains the following:

    Personalities : [raid6] [raid5] [raid4]
    md0 : active raid6 sdf[0] sdb[4] sde[3] sda[2] sdc[1]
          5860548608 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/5] [UUUUU_]
    unused devices: <none>


    Here are the details of the RAID:



    It seems that sdg is missing (I added it a month ago via a PCI RAID card).


    Here is the SMART output for sdg (SMART is showing some errors):


    http://pastebin.com/cdnhBb7G
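
    (For reference, this kind of SMART report can be produced with smartmontools; a rough example, assuming the disk in question is /dev/sdg:)

    Code
    # full SMART report: identity, attributes, error log, self-test log
    smartctl -a /dev/sdg
    # optionally start a long self-test and read the result later
    smartctl -t long /dev/sdg
    smartctl -l selftest /dev/sdg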


    Thanks

    Gigabyte GA-H87M-HD3, Pentium G3220, 4 GB DDR3, 6 x 2 TB RAID 6, 2.5" 100 GB for system
    Donator because OMV deserves it (20€)

  • Try to replace the cable for that drive. Then put the disk back into the array.
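
    Something like this should do it from the command line (just a rough sketch, assuming the drive shows up as /dev/sdg again; if --re-add is refused, a plain --add with a full resync is the fallback):

    Code
    # check that the kernel sees the drive again after the cable swap
    dmesg | tail
    # try to put it straight back into the array
    mdadm --manage /dev/md0 --re-add /dev/sdg
    # if mdadm refuses, add it as a member again and let it resync
    mdadm --manage /dev/md0 --add /dev/sdg
    # watch the rebuild
    cat /proc/mdstat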


    Greetings
    David

    "Well... lately this forum has become support for everything except omv" [...] "And is like someone is banning Google from their browsers"


    Only two things are infinite, the universe and human stupidity, and I'm not sure about the former.

    Upload Logfile via WebGUI/CLI
    #openmediavault on freenode IRC | German & English | GMT+1
    Absolutely no Support via PM!

  • OK, I just replaced the cable. How do I put the disk back into the array?


    In the GUI? RAID management, Recover?


    thanks


    • Official post

    What is the output of:


    cat /proc/mdstat
    blkid
    fdisk -l

    omv 7.0.4-2 sandworm | 64 bit | 6.5 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.10 | compose 7.1.2 | k8s 7.0-6 | cputemp 7.0 | mergerfs 7.0.3


    omv-extras.org plugins source code and issue tracker - github


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • Code
    root@Zetta:~# cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4] 
    md127 : inactive sdg[5](S)
          1953383512 blocks super 1.2
    
    md0 : active raid6 sdf[0] sdb[4] sde[3] sda[2] sdc[1]
          5860548608 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/5] [UUUUU_]
    
    unused devices: <none>


    Code
    root@Zetta:~# blkid
    /dev/sda: UUID="96b0e7b7-83aa-203f-1545-031b43caaa85" UUID_SUB="70983681-b5c2-9f70-1801-948b1b7c97d1" LABEL="zetta:0" TYPE="linux_raid_member" 
    /dev/sdc: UUID="96b0e7b7-83aa-203f-1545-031b43caaa85" UUID_SUB="2873612e-8481-0c22-4d4f-9183d2bf6a6d" LABEL="zetta:0" TYPE="linux_raid_member" 
    /dev/sdb: UUID="96b0e7b7-83aa-203f-1545-031b43caaa85" UUID_SUB="66843575-1a9c-30a6-8172-29f3ded468dc" LABEL="zetta:0" TYPE="linux_raid_member" 
    /dev/sdd1: UUID="b364f189-c9af-4774-92ba-a49307966cf7" TYPE="ext4" 
    /dev/sdd5: UUID="c78b6a64-1caa-4290-bc1a-eebe67731ca5" TYPE="swap" 
    /dev/sdf: UUID="96b0e7b7-83aa-203f-1545-031b43caaa85" UUID_SUB="b2460066-00b1-5070-cfe3-7ac67aae96c1" LABEL="zetta:0" TYPE="linux_raid_member" 
    /dev/sde: UUID="96b0e7b7-83aa-203f-1545-031b43caaa85" UUID_SUB="ee7eab8f-dc90-e3be-146d-a4e09d104418" LABEL="zetta:0" TYPE="linux_raid_member" 
    /dev/sdg: UUID="96b0e7b7-83aa-203f-1545-031b43caaa85" UUID_SUB="5692ca13-808b-56c0-8bf5-066f5574a5c4" LABEL="zetta:0" TYPE="linux_raid_member" 
    /dev/md0: LABEL="ZettaFiles" UUID="76c7546c-5ae6-4884-ac9f-3ecda0f473bc" TYPE="ext4"



    • Official post

    Delete /dev/md127 in the raid management tab
    Add /dev/sdg to the array using the grow button in the raid management tab.
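
    You can also double-check from the CLI that sdg only carries a stale superblock pointing at the same array (device names taken from your output above):

    Code
    # show the stale array that sdg was assembled into
    mdadm --detail /dev/md127
    # inspect the RAID superblock on the member disk itself
    mdadm --examine /dev/sdg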


  • I do not see md127 in the raid management tab.


    Should I begin with the grow button anyway?


    EDIT: I can't grow; Recover is the only option I can use.


    What happened before? Do you have any idea why I got the degraded state?


  • Do you have an idea why I got a degraded state?


    Googlefu said the errors could be caused by a bad cable. The errors could make the disk fall out of the array.
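
    If you want to check the cable theory, the CRC counter in SMART and the kernel log are the usual places to look (a rough example, assuming the disk is still /dev/sdg):

    Code
    # CRC errors usually point at cabling rather than the platters
    smartctl -A /dev/sdg | grep -i crc
    # look for ATA link resets / bus errors in the kernel log
    dmesg | grep -iE 'ata|sdg' | tail -n 50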


    Greetings
    David

    "Well... lately this forum has become support for everything except omv" [...] "And is like someone is banning Google from their browsers"


    Only two things are infinite, the universe and human stupidity, and I'm not sure about the former.

    Upload Logfile via WebGUI/CLI
    #openmediavault on freenode IRC | German & English | GMT+1
    Absolutely no Support via PM!

  • OK, thanks!


    I have not done anything yet, though. Should I use Recover? (There is no Grow option available for md0.)


    • Official post

    I was trying to do it in the web interface but I guess that won't work.


    mdadm --stop /dev/md127


    Try to grow the array. If that doesn't work, wipe /dev/sdg in the physical disks tab and then grow the array.
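
    If the GUI wipe gives you trouble, something like this from the CLI clears the old RAID signature as well (just a sketch, not necessarily what the physical disks tab does internally; wipefs only removes the metadata signatures, it does not overwrite the rest of the disk):

    Code
    # clears the leftover linux_raid_member signature from the disk
    wipefs -a /dev/sdg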


  • Thanks.


    Code
    root@Zetta:~# mdadm --stop /dev/md127
    mdadm: stopped /dev/md127
    root@Zetta:~#


    Cannot grow: the option is not available.



    Should I really wipe /dev/sdg? Will the grow option become available then? Sorry, but I'm always afraid of losing some data... :/


    • Official post

    Technically, you can lose two disks with Raid 6 without losing data. So, wiping one should not be an issue. Without wiping the drive, mdadm still thinks the drive is a member of another array. You can zero the superblock (mdadm --zero-superblock /dev/sdg) instead of wiping it. But, mdadm is going to resync the array as soon as you add it anyway. After either option, you should be able to grow the array.
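
    Put together, the CLI version would look roughly like this (a sketch, assuming /dev/sdg and /dev/md0 from your output above):

    Code
    # forget the old array membership recorded on the disk
    mdadm --zero-superblock /dev/sdg
    # add it back to the degraded array; the resync starts automatically
    mdadm --manage /dev/md0 --add /dev/sdg
    # follow the rebuild
    cat /proc/mdstat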


    And just a note about losing data... Raid is not a backup.


  • Thanks ryecoaaron :)


    I know about backups; I have some of the data backed up elsewhere, but not "all" of it.


    I just did a quick wipe of /dev/sdg; the RAID tab is still not showing Grow, just Recover. (The size of the array is still the same, so I don't understand why I would grow it.)


    I rebooted after the wipe, still the same. Here are the details:


    Code
    root@Zetta:~# cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4] 
    md0 : active raid6 sdf[0] sdb[4] sde[3] sda[2] sdc[1]
          5860548608 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/5] [UUUUU_]
    
    unused devices: <none>


    Code
    root@Zetta:~# blkid
    /dev/sda: UUID="96b0e7b7-83aa-203f-1545-031b43caaa85" UUID_SUB="70983681-b5c2-9f70-1801-948b1b7c97d1" LABEL="zetta:0" TYPE="linux_raid_member" 
    /dev/sdc: UUID="96b0e7b7-83aa-203f-1545-031b43caaa85" UUID_SUB="2873612e-8481-0c22-4d4f-9183d2bf6a6d" LABEL="zetta:0" TYPE="linux_raid_member" 
    /dev/sdb: UUID="96b0e7b7-83aa-203f-1545-031b43caaa85" UUID_SUB="66843575-1a9c-30a6-8172-29f3ded468dc" LABEL="zetta:0" TYPE="linux_raid_member" 
    /dev/sdd1: UUID="b364f189-c9af-4774-92ba-a49307966cf7" TYPE="ext4" 
    /dev/sdd5: UUID="c78b6a64-1caa-4290-bc1a-eebe67731ca5" TYPE="swap" 
    /dev/sdf: UUID="96b0e7b7-83aa-203f-1545-031b43caaa85" UUID_SUB="b2460066-00b1-5070-cfe3-7ac67aae96c1" LABEL="zetta:0" TYPE="linux_raid_member" 
    /dev/sde: UUID="96b0e7b7-83aa-203f-1545-031b43caaa85" UUID_SUB="ee7eab8f-dc90-e3be-146d-a4e09d104418" LABEL="zetta:0" TYPE="linux_raid_member" 
    /dev/md0: LABEL="ZettaFiles" UUID="76c7546c-5ae6-4884-ac9f-3ecda0f473bc" TYPE="ext4"



    EDIT, 03/01/2015 13:35 GMT: I went to the linux-raid list for some help and launched a badblocks stress test on /dev/sdg while monitoring the SMART values and dmesg. Here is the syslog I grabbed from /var/log when the disk was kicked out: http://pastebin.com/vjmVe7K7. Someone suggested that I look into adding a write-intent bitmap to the array. I then found an OMV bug tracker entry about this feature (http://bugtracker.openmediavault.org/view.php?id=669) and a post about the performance impact (http://blog.liw.fi/posts/write-intent-bitmaps/).
    He said the performance penalty depends on the bitmap chunk size; with something big like 256 MiB, he remembered test results where the effect on performance was minimal. He also wrote that it is easy to test for myself, because bitmaps can be added and removed on the fly, and that OMV should probably revisit its decision not to enable them by default.


    EDIT 2-4: I installed the iotop package to monitor the badblocks writes, because it's veryyyyy slooooooooooowwwwwwwwwwww (and it is: 4849 be/4 root 0.00 B/s 7.19 M/s 0.00 % 99.99 % badblocks -w -s /dev/sdg). After adding a -b xxx and -s xxxx, the badblocks write rate is now around 129 MB/s.
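
    (The usual knobs for speeding badblocks up are the block size and the number of blocks tested per pass, something like the line below; the exact values here are guesswork on my part:)

    Code
    # -w destructive write test, -s show progress, -v verbose,
    # -b block size in bytes, -c blocks tested per pass
    badblocks -wsv -b 4096 -c 65536 /dev/sdg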


    EDIT 3: ryecoaaron, should I add the disk via the CLI (because the GUI won't let me), using mdadm --add /dev/md0 /dev/sdg?


    EDIT 5: I added sdg to md0 via the CLI, and it is now recovering. Should I run mdadm --grow --bitmap=internal --bitmap-chunk=256M /dev/md0 afterwards?
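
    (For future readers, a sketch of the bitmap commands, assuming the rebuild is left to finish first; the bitmap can be removed again on the fly if it hurts performance:)

    Code
    # add an internal write-intent bitmap with a 256 MiB chunk
    mdadm --grow --bitmap=internal --bitmap-chunk=256M /dev/md0
    # check that it is active
    mdadm --detail /dev/md0 | grep -i bitmap
    # remove it again if needed
    mdadm --grow --bitmap=none /dev/md0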


    Edited 10 times, last by hubertes ()
