RAID Filesystem is n/a

  • I built a RAID 6 with 11 drives as a software RAID:
    7 SATA ports on the mainboard, extended with a HighPoint Rocket 640L card providing 4 more SATA ports.


    Maybe the HighPoint had a brief problem and the RAID lost its connection at that moment. But the HighPoint still looks OK now.
    The result:
    SMART only shows the temperatures of 7 HDs, but all HDs are listed.
    OMV wants to resync the array, but it hangs at 27%; only the estimated time keeps growing.
    Because a software restart doesn't work, I did a hard reset.


    After rebooting:
    All HDs are physically present.
    SMART shows the temperatures of all HDs.
    -> But the filesystem of the RAID is n/a
    -> And the RAID no longer exists.


    "fdisk -l" says for all 11 HDs:
    "doesn't contain a valid partition table"


    To solve the problem I found (click) a repair command for fixing the XFS filesystem.
    I think it is possible that OMV didn't shut down the HDs cleanly, because of the hard reset.
    But if I understand it right, this is required before I run repair commands like these:

    Code
    xfs_check /dev/md127      # deprecated; xfs_repair -n performs the same checks
    xfs_repair -n /dev/md127  # -n = dry run: reports problems but writes nothing


    -But I have no idea how to shut down the HDs, and I am also too afraid of losing my data to the repair commands.
    Because my Linux and English knowledge is minimal, I hope we can find a solution to help a stupid German girl save her OMV!


    Kind regards
    Petra

    • Official post

    You need to fix your array before you can fix the filesystem. What is the output of cat /proc/mdstat, and the complete output of fdisk -l?

    omv 7.0.5-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.1.4 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.4


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • Hello,


    I only know that I have to fix something :(


    Code
    /var$ cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4] 
    unused devices: <none>




    You can find OMV screenshots here (click)


    Kind regards
    Petra

    • Official post

    Try:

    Code
    mdadm --assemble --scan

    Then post:

    Code
    cat /proc/mdstat


    • Official post

    Try this:

    Code
    mdadm --assemble --verbose --force /dev/md127 /dev/sd[abcdefghijl]
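As a side note on the command above: an eleven-letter glob like /dev/sd[abcdefghijl] is easy to mistype. A minimal sketch of building the same member list programmatically; it assumes the missing "k" (i.e. /dev/sdk) is the OS drive, which would explain why it is absent from the glob:

```shell
# Build the mdadm member list from explicit drive letters, skipping "k"
# (/dev/sdk is assumed here to be the OS drive -- that is why it is
# absent from the glob in the command above).
members=$(for letter in a b c d e f g h i j l; do printf '/dev/sd%s ' "$letter"; done)
echo "$members"
# The list can then be passed to mdadm, e.g.:
#   mdadm --assemble --verbose --force /dev/md127 $members
```

Listing the members explicitly makes it obvious which device is excluded, instead of hiding that in a character class.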


  • Oh ryecoaaron!
    Thank you so much! It looks as if OMV's forgetfulness is over!



    Is the repair in progress? Or what?
    A very special thank you for picking through all those drive letters!
    You have given me back my faith in OMV!


    But the filesystem is still n/a and the RAID is pending.
    Do I have to do anything to start the repair?


    Or do I only have to wait?
    Because there is no progress bar...



    Kind regards
    Petra

    • Official post

    Glad it is working :) I wouldn't do anything to it until it is done syncing.


  • No progress bar...
    The system log says:


    Code
    Filter subsystem: Built-in target `write`: Dispatching value to all write plugins failed with status -1.


    And

    Code
    rrdcached plugin: rdc_update (/var/lib/rrdcached/db/localhost/df-root/df_complex-used.rrd,[1455384180:1142435840.000000] failed with status -1.


    I have no idea... :(


    Kind regards
    Petra

  • I did a restart, but nothing changed.


    If I look at the RAID details:


    I would have to click on "RAID Wiederherstellung" / "Recover", but I lack the courage...


    Kind regards
    Petra

    • Official post

    The rrd errors are just because you don't have a filesystem mounted. Don't worry about them.


    DON'T REBOOT! If you reboot, the resyncing has to start over. Wait until the web interface or cat /proc/mdstat says clean or active with no resyncing. Then you can reboot or mount the filesystem. This will also fix the rrd errors. You just have to be very patient, especially when fixing a RAID array of this size.


    You don't need to click restore either. That is for replacing one drive, which is not your case.
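For reference, the resync progress the post above says to watch for can be read straight out of /proc/mdstat. A small sketch that extracts the percentage, shown here against a sample line since the real file only exists on the NAS (device names and numbers are illustrative only):

```shell
# Read the resync percentage from /proc/mdstat-style output.
# mdstat_sample stands in for `cat /proc/mdstat` on the live system.
mdstat_sample='md127 : active raid6 sdl[10] sdj[9] sda[0]
      [=====>...............]  resync = 27.0% (123456/456789) finish=653.0min'
progress=$(printf '%s\n' "$mdstat_sample" | grep -o 'resync = [0-9.]*%' | grep -o '[0-9.]*%')
echo "resync at $progress"
```

Once the "resync = ..." line disappears and the array reports clean or active, it is safe to mount the filesystem.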

  • OK,
    "RAID Wiederherstellung" / "Recover" doesn't work,
    but under "Filesystem" I selected the n/a "xfs" entry, mounted it, and applied the change.
    "RAID" suddenly started "active resyncing"! And it estimates a lovely 653 minutes. :)


    Thanks a lot for giving me the right inspiration :))


    Kind regards
    Petra

    • Official post

    You have a very large array. 653 minutes is not bad for it.


    • Official post

    That error is very minor. I wouldn't reboot because of it.


    I agree that your RAID controller may be causing the issues. You have a big investment in drives. I would buy some better quality hardware, even if it is an IBM M1015 flashed with new firmware from eBay.


    • Official post

    The m1015 is an 8-port board. 16-port boards would probably cost quite a bit more than two m1015s (re-badged LSI boards). I have an LSI 9211-8i and an LSI 9260-8i. If you have to have a 16-port board, any LSI board would be good in my opinion. Most of these boards have SFF-8087 mini-SAS connectors (one connector can connect four SATA drives). You just need to get Mini-SAS SFF-8087 to four-SATA breakout cables.


  • Ryecoaaron means Aaron?
    Regardless, your tip is really great, because I think all these controllers are bull... ;)
    I had an ASRock X99 WS-E in mind, but it would really be too expensive, especially since new RAM and a CPU would hang on it too.


    At the moment there are 4+2 SATA ports on my ASRock B85M Pro3. I currently have a HighPoint Rocket 640L 4-port and a HighPoint Rocket 620 2-port controller.


    Tomorrow I will order the LSI 9211-8i and hope it is finally the rescue from my heavy SATA sea :)
    I could smooch you: THX!!!


    Kind regards
    Petra

  • So,
    the Avago SAS 9211-8i SGL controller is installed.
    • All drives are shown
    • SMART too
    • Filesystem is n/a
    • RAID is empty (as before)


    Well, I ran your command, but it fails:

    Code
    /media$ mdadm --assemble --verbose --force /dev/md127 /dev/sd[abcdefghijl]
    mdadm: looking for devices for /dev/md127
    mdadm: no RAID superblock on /dev/sdh
    mdadm: /dev/sdh has no superblock - assembly aborted


    Do I need a driver for the controller?


    Kind regards
    Petra
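    For what it's worth, the aborted assemble above usually means one listed device carries no md superblock -- here because the drive letters shifted on the new controller. A sketch for pulling the offending device out of mdadm's message (sample text; a live run would capture mdadm's stderr instead):

```shell
# Pull the rejected device out of mdadm's error output. err stands in
# for mdadm's captured stderr on the live system; the device name here
# simply mirrors the message quoted above.
err='mdadm: no RAID superblock on /dev/sdh
mdadm: /dev/sdh has no superblock - assembly aborted'
bad=$(printf '%s\n' "$err" | grep 'has no superblock' | grep -o '/dev/sd[a-z]*')
echo "not an array member: $bad"
```

    Knowing which device was rejected tells you which drive letter to re-check (or, as it turned out here, which letter moved).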

  • Yeah!
    The drive letters have changed!
    "H" is now "L"



    • RAID is shown as clean
    • Filesystem n/a


    So I mounted the filesystem.
    And -wonder- the RAID is still clean!


    WinSCP shows all the files, but Windows doesn't find OMV...
    I hope I am remembering the right steps :)


    Kind regards
    Petra
