Btrfs/ZFS support

  • Hi


    I have a small problem: 2 out of 5 of my brand-new WD Red drives pretty much kicked it - SMART errors, they can't even run a self-test, etc. So I pulled them out, and my raidz2 is still online as expected.


    When I replace the drives with new ones, can I add them to the pool via the GUI, or do I need the CLI? And if I need the CLI, how do I do it?


    Can I replace both at the same time, or is it better to do them one after another?


    Thanks for any help.
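
    For reference, a minimal sketch of the CLI route, assuming a hypothetical pool named tank and placeholder device paths (substitute the real names from zpool status):

    Code
    zpool status tank                           # identify the FAULTED/UNAVAIL disks to replace
    zpool replace tank <old-disk-id> /dev/sdf   # start resilvering onto the first new drive
    zpool status tank                           # watch resilver progress
    # The second disk can be replaced the same way; doing them one at a time
    # keeps each resilver simpler, although raidz2 tolerates both being out.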

  • zpool get all poolname on Linux displays the pool's features/properties


    Cool, I overlooked that one.


    So it's confirmed, out of luck for F...nas users:


    Code
    NAME      PROPERTY                       VALUE                  SOURCE
    INTERNAL  size                           10.9T                  -
    INTERNAL  capacity                       85%                    -
    INTERNAL  altroot                        /mnt                   local
    INTERNAL  health                         ONLINE                 -
    INTERNAL  guid                           5948082030589541869    default
    INTERNAL  version                        -                      default
    INTERNAL  bootfs                         -                      default
    INTERNAL  delegation                     on                     default
    INTERNAL  autoreplace                    off                    default
    INTERNAL  cachefile                      /data/zfs/zpool.cache  local
    INTERNAL  failmode                       continue               local
    INTERNAL  listsnapshots                  off                    default
    INTERNAL  autoexpand                     on                     local
    INTERNAL  dedupditto                     0                      default
    INTERNAL  dedupratio                     1.00x                  -
    INTERNAL  free                           1.60T                  -
    INTERNAL  allocated                      9.28T                  -
    INTERNAL  readonly                       off                    -
    INTERNAL  comment                        -                      default
    INTERNAL  expandsize                     0                      -
    INTERNAL  freeing                        0                      default
    INTERNAL  feature@async_destroy          enabled                local
    INTERNAL  feature@empty_bpobj            active                 local
    INTERNAL  feature@lz4_compress           active                 local
    INTERNAL  feature@multi_vdev_crash_dump  enabled                local
    INTERNAL  feature@spacemap_histogram     active                 local
    INTERNAL  feature@enabled_txg            active                 local
    INTERNAL  feature@hole_birth             active                 local
    INTERNAL  feature@extensible_dataset     enabled                local
    INTERNAL  feature@bookmarks              enabled                local
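
    Individual feature flags can also be queried directly; a quick sketch, assuming the pool name INTERNAL from the output above:

    Code
    zpool get feature@lz4_compress INTERNAL   # query a single feature flag
    zpool upgrade -v                          # list all feature flags the installed ZFS supports
    zpool upgrade                             # list pools that do not have all supported features enabled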

    Tom


    ----


    HP N54L, 6GB, 5-disc RAID5, SSD boot with OMV 5
    HP N54L, 16GB, 4-disc ZFS pool, SSD boot with other NAS system

  • Hi


    I already "resilvered" one if my two drives I had to replace in my raid-z2. the first one took 48h with a speed of 40mb/s !! WoW (each drive has 5TB)


    Now I have enabled the write-back cache on all disks because I have a UPS attached - now the speed is 370 MB/s (it will finish in less than 6h!!!) - that's like 9x the original speed!


    Is that because resilvering is faster once the first drive has already finished, or is it only due to the enabled write cache!? ...if it is only the cache, I am about to byte my a..!!
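
    In case anyone wants to check the same settings, a small sketch; the device name is a placeholder and hdparm applies to SATA drives:

    Code
    hdparm -W /dev/sdb      # show whether the drive's write cache is currently enabled
    hdparm -W1 /dev/sdb     # enable the write cache (only advisable with a UPS attached)
    zpool status poolname   # shows resilver progress, speed and estimated time remaining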

  • Hi



    I successfully replaced all my drives one by one. The zpool is online and doesn't show any errors.


    However, after the replacement I noticed constant I/O access on my RAID every 5-10s or so. This was really driving me crazy - disks clicking all the time!
    In iotop, txg_sync seemed to be the cause. However, nothing and nobody was writing to the disks, so no data was changing!?


    So I did the following:
    disabled ALL plugins, incl. SMB, NFS, etc.
    unplugged the NIC cables


    ...the drives were still clicking, with txg_sync showing up in iotop every couple of seconds.


    I am now doing a zfs scrub out of desperation; hopefully that solves the issue.


    Does anyone have an idea why this problem occurred after exchanging my disks?
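
    A sketch for narrowing this down, assuming ZFS on Linux and a placeholder pool name:

    Code
    cat /sys/module/zfs/parameters/zfs_txg_timeout   # default is 5, i.e. txg_sync flushes dirty data roughly every 5 s
    zpool iostat -v poolname 5                       # check whether anything is actually being written
    # If iostat shows real writes, something is still touching the pool (atime updates
    # or a stats collector are common suspects); "zfs set atime=off poolname" is one thing to rule out.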

  • I did some digging around regarding the collection of Disk Usage statistics on a ZFS pool. It turns out that collectd was throwing the following errors:

    Code
    Dec 28 07:42:00 server collectd[2837]: Filter subsystem: Built-in target `write': Dispatching value to all write plugins failed with status -1.
    Dec 28 07:42:00 server collectd[2837]: rrdcached plugin: rrdc_update (/var/lib/rrdcached/db/localhost/df-root/df_complex-reserved.rrd, [1419748920:2893754368.000000], 1) failed with status -1.


    I guess this error occurs because the df plugin from collectd does not properly recognize the ZFS file share. This can be fixed by changing /etc/collectd/collectd.conf:

    Code
    <Plugin df>
        #MountPoint "/Data"
        FSType "zfs"
        IgnoreSelected false
    </Plugin>


    After this change the syslog no longer threw the errors. However, the information page still did not show the stats for the ZFS pool. It turned out that the script /usr/sbin/omv-mkgraph did not properly generate the graphs for the pool, even though the rrd database df-Data was properly created in the folder /var/lib/rrdcached/db/localhost. The culprit is this part of the script, because there is no subfolder inside df-root:



    I altered this code to:


    Now the statistics work again. However, this fix is not ideal since I manually added the df-Data part.
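
    To verify that the rrd files for the pool exist before editing omv-mkgraph, something like the following should work; the paths assume the default rrdcached layout and the df_complex-free.rrd file name is an assumption based on the error above:

    Code
    ls /var/lib/rrdcached/db/localhost/          # should now list df-Data next to df-root
    rrdtool info /var/lib/rrdcached/db/localhost/df-Data/df_complex-free.rrd | head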

  • I run ZFS in two VMs; I don't have the problems that you mention.


    That is indeed strange. I use it on an HP MicroServer N54L. I did a clean install of OMV 1.7 and installed the ZFS plugin (0.6.3.5). My pool is called Data and is mounted at /Data. Maybe someone else can reproduce the problem?

    • Official post

    I had to manually do the following:


    Code
    omv-mkconf collectd
    service collectd restart


    Then click Refresh on the page. After that, it worked fine. Will have to look into this more. Shouldn't be too hard to fix.

    omv 7.0.4-2 sandworm | 64 bit | 6.5 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.10 | compose 7.1.2 | k8s 7.0-6 | cputemp 7.0 | mergerfs 7.0.3


    omv-extras.org plugins source code and issue tracker - github


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!
