[HOWTO] Install ZFS plugin & use ZFS on OMV

    • Official post

    I'm a developer and would love to help. Is there any documentation on the porting process? I tried searching for it, but to be honest it was a half-hearted attempt.

    No, there really isn't any documentation other than looking at the changes to the other plugins. That is how I figured it out. Feel free to submit a pull request on GitHub for any changes you make :)

  • As a note, there are still some issues with the ZFS plugin that can cause problems here and there. Notably, once I started taking automatic snapshots, any attempt to view file systems or even load the ZFS interface fails with a timeout.


    Being decidedly not a programmer, I monkeyed around with the plugin files on my system anyway, trying to disable whatever it is that includes snapshots in the ZFS plugin and/or the File Systems view, but was unsuccessful. I went from two pools with 11 datasets displaying everything with a bit of delay (13 ZFS filesystems in total, plus the boot drive and a pair of other drives for 16 overall), to taking automatic snapshots every 15 minutes with a 24-hour destroy timer, after which the interface never loaded again.


    Again, I'm not a programmer, but I think this may be a fundamental flaw in the plugin itself: the main screen of the plugin displays EVERYTHING - pools, datasets, snapshots, volumes, you name it. If someone with a coding bent is going to take a look at the plugin, removing whatever it is that lists snapshots everywhere would probably fix being unable to see filesystems from your NAS's control panel.


    And if not, well, maybe I'll finally get around to using it as an excuse to learn a bit of coding. :)

  • ZFS in OMV3 would only affect the GUI, right? I mean, we could all use the CLI for creation and maintenance of ZFS, right? From my experience that seems to be quite easy; the only thing I have no idea about is how to define a mount point and make the volume mount at each boot.

    • Official post

    ZFS in OMV3 would only affect the GUI, right? I mean, we could all use the CLI for creation and maintenance of ZFS, right? From my experience that seems to be quite easy; the only thing I have no idea about is how to define a mount point and make the volume mount at each boot.


    The last part you mention is essential for ZFS to be integrated with OMV: mount points appearing in the filesystems section, and being able to create shared folders. Underneath, everything will work fine. But if mount points are registered and the ZFS plugin is not installed, the web UI will pop errors.
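    For the CLI question above (defining a mount point and mounting at each boot), a minimal sketch; the pool name "tank" is a placeholder and this assumes a working ZFS install:

    ```shell
    # Assumes an existing pool named "tank"; run as root.
    # The mountpoint is a ZFS property stored in the pool itself,
    # so no /etc/fstab entry is needed.
    zfs set mountpoint=/media/tank tank

    # Datasets with a regular (non-legacy) mountpoint are mounted
    # automatically at boot by the ZFS mount service on Debian-based
    # systems. To check the property and mount anything outstanding:
    zfs get mountpoint tank
    zfs mount -a
    ```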

  • I ported omv-zfs to OMV 3.0.26.
    It was tougher than I thought.


    It is not released in omv-extras yet; it is in beta stage.
    I need your help with testing.


    https://github.com/OpenMediaVa…leases/tag/PRERELEASE-RC2


    Special thanks to
    @nicjo814, who helped with testing and development!

    OMV3 on Proxmox
    Intel E3-1245 v5 | 32GB ECC RAM | 4x3TB RAID10 HDD
    omv-zfs | omv-nginx | omv-letsencrypt | omv-openvpn
    Click link for more details


  • This is really great news. Sadly, I don't have much time at the moment to test this.


    But I want to thank you, @luxflow and @nicjo814, for investing your time.

    ----------------------------------------------------------------------------------
    openmediavault 6 | proxmox kernel | zfs | docker | kvm
    supermicro x11ssh-ctf | xeon E3-1240L-v5 | 64gb ecc | 8x10tb wd red | digital devices max s8
    ---------------------------------------------------------------------------------------------------------------------------------------


  • I would like to ask you OMV/ZFS users out there to take a few hours to install OMV 3.x in your favorite virtualization system and then get the ZFS plugin up and running according to @luxflow's instructions. It's important that we get a lot of testing done on this plugin, since it had to be modified in quite a lot of places to make it work with OMV 3.x. So PLEASE get it installed, try to mimic your production system as best you can, and let's all help locate any bugs before it's introduced back into omv-extras.


    Also please report back when you have tested the plugin so that we know if it's working as it should.


    Finally I would like to thank @luxflow for all the work put into porting the plugin, without it I don't know when (if ever) the plugin would make it to OMV 3.x.

    • Official post

    Why not put it in the zfs testing repo?

  • Why not put it in the zfs testing repo?

    Unfortunately, a small patch to the OMV core is needed for the ZFS plugin to work properly.
    I made a PR against the OMV core, but I'm not sure it will be accepted.


    • Official post

    I missed that. If Volker fixes that, I think we should put the plugin in the zfs testing repo then :)

  • I should also have some time next weekend for testing. What about the OMV core patch? Is it implemented? If not, what is the issue with it?


    Greetings Hoppel


  • Okay, got my test system up and running with 4x2TB drives. First, the ZFS plugin installed without a hiccup through the WebUI by uploading the .deb as a plugin. It showed up even before I made the change to the JSON file. Completed those steps, then went to create the pool.


    First attempt: name "tank", RAIDZ2, all four drives selected (they're the only ones in the system), Device Alias set to "By ID" (which should be the default, by the way), and mountpoint set to /media/tank. Hit OK, got the warning about the operation taking a long time, and then checked. No pool created; zpool status and zpool import both say the same thing.


    Second attempt: same as the first, but this time I enabled Force Creation. Heard the drives churn, but still nothing from zpool status or zpool import.


    This, by the way, happened all the time under the old plugin on 2.x - the pools just wouldn't create no matter what I tried. In the end I just manually created them, so this time I did that as well. Command used was "zpool create tank raidz2 sdb sdc sdd sde". Pool created just fine, no errors, and showed up automatically under the ZFS plugin interface, and in the File Systems section.
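    A variant of that manual create using stable /dev/disk/by-id paths instead of sdb through sde, which is what the "By ID" device alias option aims for; this is a sketch, and the disk IDs below are placeholders:

    ```shell
    # Create the same RAIDZ2 pool with persistent by-id device paths,
    # so the pool survives sdX names shuffling across reboots.
    # List your real IDs first:
    ls -l /dev/disk/by-id/

    # "ata-DISK1" etc. are placeholders for your actual device IDs.
    zpool create -m /media/tank tank raidz2 \
        /dev/disk/by-id/ata-DISK1 \
        /dev/disk/by-id/ata-DISK2 \
        /dev/disk/by-id/ata-DISK3 \
        /dev/disk/by-id/ata-DISK4
    zpool status tank
    ```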


    I did get an error that is new, and I've been able to duplicate it reliably. This behavior was not seen under 2.x at any time. Because I had to manually create the pool, I changed the mountpoint using "zfs set mountpoint=/media/tank tank" to move it into the /media directory. This worked just fine. When I went to access the ZFS plugin interface, I got the following:


    Code
    The configuration object 'conf.system.filesystem.mountpoint' is not unique. An object with the property 'fsname' and the value 'tank' already exists.

    This is fixable by changing the mountpoint property back to the default (/tank). However, there is no way to change the pool's root dataset's mountpoint inside the ZFS interface; you can do it for child datasets, but there are no options that affect the parent.
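    The fix described above, done from the CLI (names follow the post's example; "tank/data" is a hypothetical child dataset):

    ```shell
    # Put the pool's root dataset back on its default mountpoint (/tank);
    # this clears the duplicate-mountpoint conflict with OMV's database.
    zfs set mountpoint=/tank tank

    # Child datasets can either be set explicitly or told to inherit
    # the parent's mountpoint again:
    zfs inherit mountpoint tank/data    # "tank/data" is a placeholder
    zfs get -r mountpoint tank
    ```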


    Please let me know if there's somewhere I can look for logs of the creation process to see if there was an error generated that prevented the pool's creation.
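    Regarding logs of the creation process, two places worth checking after a failed create (a sketch; log locations vary by distro, syslog path assumes a Debian-based OMV install):

    ```shell
    # zpool history lists every pool command ZFS actually committed;
    # if the create never appears here, it failed before the pool existed.
    zpool history

    # Kernel/driver errors from the attempt usually end up in dmesg
    # and the syslog:
    dmesg | grep -i zfs
    grep -i zfs /var/log/syslog | tail -n 50
    ```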


    I'll be heading to bed shortly, but before I do I'm going to set up some cron jobs to create snapshots once a minute. I'll check in the morning to see whether the issue with file systems failing to display when there are lots of snapshots still exists; since nobody was ever able to track that down, it might have been something in OMV 2.x that is gone now, so we'll see.
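    A per-minute snapshot job of the kind described might look like this as a system crontab entry (pool name and the "auto-" prefix are placeholders; note that cron requires % to be escaped in the command field):

    ```shell
    # /etc/cron.d/zfs-snapshot-test  (sketch; assumes a pool named "tank")
    # Takes one recursive snapshot per minute, named with a sortable
    # timestamp so the whole batch is easy to find and destroy later.
    * * * * * root zfs snapshot -r tank@auto-$(date +\%Y\%m\%d-\%H\%M)
    ```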


    If you want me to perform specific tests, let me know and I'll gladly do them tomorrow at some point.

  • Followup: After letting it run overnight, the cron job generated approximately 500 snapshots. I am now getting "Bad Gateway" followed by "Communication Failure" messages on both the ZFS plugin screen and the File Systems screen.


    Note that neither of those screens actually displays the snapshots generated by the cron job.


    EDIT: After clearing the existing snapshots with the cron job still running, they're now showing up in the ZFS plugin, something they didn't do when I did the same thing on 2.x. Not sure if that was a bug or this is a new feature, but there you have it.


    Once the snapshots were cleared, both pages function again.
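    Clearing a few hundred cron-generated snapshots in one go can be sketched as follows (assumes the snapshots share an "auto-" prefix, as in the hypothetical cron job naming above; adjust the pattern to yours):

    ```shell
    # Dry run: list the snapshots of "tank" that would be destroyed.
    zfs list -H -t snapshot -o name -r tank | grep '@auto-'

    # Then destroy them one by one; this is irreversible, so
    # double-check the list printed above first.
    zfs list -H -t snapshot -o name -r tank | grep '@auto-' \
        | xargs --no-run-if-empty -n1 zfs destroy
    ```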
