Btrfs/ZFS support

  • Same here, /etc/init.d/collectd restart does not solve the error.


    More info: there are a lot of these errors in the log file:


  • Ok, I am continuing my tests. I added a volume to my TPool and formatted this volume as ext4; no problem at this point. But when I try to mount it, an error appears. The backup volume is really mounted (I tested it in the backup plugin), but I get a yellow warning
    "The configuration has been changed. You must apply the changes in order for them to take effect.", and if I click the "Apply" button this error appears:



    Code
    Failed to execute command 'export LANG=C; monit restart collectd 2>&1': /etc/monit/conf.d/openmediavault-filesystem.conf:14: Error: service name conflict, fs_mnt already defined '"/mnt"'
    Error #4000: exception 'OMVException' with message 'Failed to execute command 'export LANG=C; monit restart collectd 2>&1': /etc/monit/conf.d/openmediavault-filesystem.conf:14: Error: service name conflict, fs_mnt already defined '"/mnt"'' in /usr/share/php/openmediavault/monit.inc:113
    Stack trace:
    #0 /usr/share/php/openmediavault/monit.inc(70): OMVMonit->action('restart', 'collectd', false)
    #1 /usr/share/openmediavault/engined/module/collectd.inc(53): OMVMonit->restart('collectd')
    #2 /usr/share/openmediavault/engined/rpc/config.inc(206): OMVModuleCollectd->startService()
    #3 [internal function]: OMVRpcServiceConfig->applyChanges(Array, Array)
    #4 /usr/share/php/openmediavault/rpcservice.inc(125): call_user_func_array(Array, Array)
    #5 /usr/share/php/openmediavault/rpcservice.inc(158): OMVRpcServiceAbstract->callMethod('applyChanges', Array, Array)
    #6 /usr/share/openmediavault/engined/rpc/config.inc(224): OMVRpcServiceAbstract->callMethodBg('applyChanges', Array, Array)
    #7 [internal function]: OMVRpcServiceConfig->applyChangesBg(Array, Array)
    #8 /usr/share/php/openmediavault/rpcservice.inc(125): call_user_func_array(Array, Array)
    #9 /usr/share/php/openmediavault/rpc.inc(79): OMVRpcServiceAbstract->callMethod('applyChangesBg', Array, Array)
    #10 /usr/sbin/omv-engined(500): OMVRpc::exec('Config', 'applyChangesBg', Array, Array, 1)
    #11 {main}
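
    For anyone hitting the same message: the conflicting service name lives in the file named in the error, so a quick way to look for duplicates (a generic check, assuming the entries use monit's standard "check filesystem" syntax) is:


    Code
    grep -n "check filesystem" /etc/monit/conf.d/openmediavault-filesystem.conf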



    I used /mnt to mount all my pools as you can see here: [HOWTO] Instal ZFS-Plugin & use ZFS on OMV

  • Ok, I'll try to answer some questions in the same post :)


    A small feature request: After creating a pool the panel should automatically refresh so that the new pool is displayed in the panel.


    I'll take a look at this. I actually thought it behaved like you want it to...


    Another bug:
    1) Create a RaidZ1 pool
    2) Press details for the newly created pool


    Could you point out the issue here?


    Another bug: the ZFS plugin and the Filesystem page do not show the same info (free space and size):


    This is most likely because I haven't implemented a proper free/size method in the ZFS plugin Filesystem backend. The "default" method is probably based on "df" which doesn't show ZFS values correctly. I'll see what I can do regarding this.
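
    For anyone who wants to compare the two views directly, something like this should show the discrepancy (Tpool is only used here as an example name from this thread):


    $ df -h /Tpool
    $ zfs list Tpool


    If the two outputs disagree on size/free, that would match the explanation above.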


    This is related to the bug I reported earlier today - caching problem.


    Not sure if this is really the same issue. See above.



    I think this is another issue caused by deleting pools manually. There are probably some entries left over in your "/etc/monit/conf.d/openmediavault-filesystem.conf" file. Make a backup of this file and delete all duplicate entries, then retry mounting the filesystem.
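
    To illustrate, a hedged sketch of what a duplicate could look like, using monit's standard "check filesystem" syntax and the /mnt path from the error above (not a copy of the real file):


    Code
    check filesystem fs_mnt with path "/mnt"
      if space usage > 85% then alert
    check filesystem fs_mnt with path "/mnt"
      if space usage > 85% then alert


    Removing one of the two stanzas should let "monit restart collectd" run again.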

  • Quote

    I think this is another issue caused by deleting pools manually. There are probably some entries left over in your "/etc/monit/conf.d/openmediavault-filesystem.conf" file. Make a backup of this file and delete all duplicate entries, then retry mounting the filesystem.

    I did a fresh new install and created pools with the same names, but wiped the disks first.


    This is my /etc/monit/conf.d/openmediavault-filesystem.conf file:



    /media/0072b787-36af-4155-b0cf-36a021f099cd is my recently created zvolume, formatted as ext4.




    Good news: graphs are shown when I create the backup volume:


    Perhaps /media/0072b787-36af-4155-b0cf-36a021f099cd, or something under /media, is necessary for this to work?
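
    A simple, plugin-independent way to double-check what is actually mounted under /media:


    $ mount | grep /media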

  • Could you point out the issue here?


    Two problems (see the sketch below the list):
    1) The pool details should be the merge of zpool status {pool} and zfs get all {pool}, and not, as it is now, a merge of zpool status {pool}, zpool get all {pool} and zfs get all {pool}.
    2) The size in the backend seems to be calculated from the total size of the disks as if they were striped (RAID0), presumably because the pool size is taken from the output of zpool get all {pool}. The size should be calculated using the output of zfs get all {pool}.
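
    A rough illustration of the difference, with {pool} kept as a placeholder as above: zpool reports raw pool capacity, while zfs reports the space usable by datasets, so the two sets of numbers differ on redundant layouts:


    $ sudo zpool get size,free {pool}
    $ sudo zfs get used,available {pool}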


  • Well, I discovered the problem, so I destroyed all my pools and recreated them.


    The problem is that I have two 1 TB disks that sometimes appear as sda & sdb and in some cases (always before a reboot) appear as sdj & sdk <- the BIOS order changes.


    I noticed the real problem when I tried to copy video files to my video folder located on Rpool and saw that it was TPool that grew in size.


    Because the pools were created using IDs:


    This MUST NOT happen (it is as if the pools had been created using /dev/sdx names).


    So I recreated the pools and rebooted several times, and this time the pools are consistent and I do not notice any strange pool growth. As a good side effect, I now see the correct size in the filesystem view:




    PS: Another strange problem detected. I always mounted my previous pools on /mnt <- I wrote this in the appropriate field when creating the pool, but I noticed that no folder was created under /mnt when doing this.


    My newly recreated pools were created with the defaults (I did not use /mnt in the field) and they created the appropriate folders under "/" (Tpool & Rpool) as expected. Please, can someone test whether, if you create a new pool and use /mnt in the "mountpoint" field, a folder with the name of the pool is really created under /mnt?
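
    For whoever tests this, a quick way to see where ZFS itself thinks a pool should be mounted (pool names taken from this post as examples):


    $ sudo zfs get mountpoint Rpool Tpool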

  • This MUST NOT happen (it is as if the pools had been created using /dev/sdx names).


    I'm not sure I follow you. There is a setting for which type of "alias" to use when you create the pool. Default is to use "By path", which is the recommendation according to the ZFS on Linux FAQ. Did you change this value when you created your pool?


    PS: Another strange problem detected. I always mounted my previous pools on /mnt <- I wrote this in the appropriate field when creating the pool, but I noticed that no folder was created under /mnt when doing this.


    My newly recreated pools were created with the defaults (I did not use /mnt in the field) and they created the appropriate folders under "/" (Tpool & Rpool) as expected. Please, can someone test whether, if you create a new pool and use /mnt in the "mountpoint" field, a folder with the name of the pool is really created under /mnt?


    If you specify a "mountpoint" when creating the pool, this directory will be used instead of the pool name. In your case you specified /mnt (which already existed in your system) and thus no folder was created. If you want the pool to be mounted in "/mnt/Rpool" you have to specify the full path to this directory when creating the pool. This is how ZFS on Linux handles the "mountpoint" option. The plugin does not do much magic itself, but is merely a simple frontend to ZFS on Linux. If you find some strange behaviour I think that this would be a good place to look for information.
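
    A minimal sketch of the difference (the disk names are placeholders, not taken from any real system):


    # default: the pool is mounted at /Rpool
    $ sudo zpool create Rpool mirror /dev/disk/by-id/DISK1 /dev/disk/by-id/DISK2
    # explicit mountpoint: the pool is mounted at /mnt/Rpool itself, no extra folder is created
    $ sudo zpool create -m /mnt/Rpool Rpool mirror /dev/disk/by-id/DISK1 /dev/disk/by-id/DISK2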

  • There is a setting for which type of "alias" to use when you create the pool. Default is to use "By path", which is the recommendation according to the ZFS on Linux FAQ.


    Is there a way to change this once the array has already been created, even if it has to be done manually?


    That's what I was referring to some time ago about using identifiers in place of drive letters. Mine change recurrently, which doesn't allow me to diagnose a drive that spins up on its own.
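
    In the meantime, one way to map the current sdX letters to their stable IDs (so a misbehaving drive can still be identified after the letters move around):


    $ ls -l /dev/disk/by-id/ | grep -v part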

  • I think I read somewhere that you could accomplish this by exporting the pool and then re-importing it with the proper flags. Might be worth researching?


    Edit: This is from the ZFS on Linux FAQ:
    Changing the /dev/ names on an existing pool can be done by simply exporting the pool and re-importing it with the -d option to specify which new names should be used. For example, to use the custom names in /dev/disk/by-vdev:


    $ sudo zpool export tank
    $ sudo zpool import -d /dev/disk/by-vdev tank
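
    To verify that the new names took effect (same example pool name as in the FAQ):


    $ sudo zpool status tank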


    Maybe test in a VM first :)

  • Quote

    I'm not sure I follow you. There is a setting for which type of "alias" to use when you create the pool. Default is to use "By path", which is the recommendation according to the ZFS on Linux FAQ. Did you change this value when you created your pool?



    Yes, I use your WebGUI, and when creating a pool two possible options can be selected:


    1 - by-path
    2 - by-id


    Your WebGUI selects 1 (by-path) by default, but I prefer 2 (by-id).
    But I think that once you select one at creation time, that choice is used and can't be changed afterwards.


    You can find more info at:


    http://zfsonlinux.org/faq.html…uldIUseWhenCreatingMyPool
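
    If it really cannot be changed from the WebGUI, the export/import trick quoted earlier in this thread should in principle also work for switching an existing pool to by-id names (untested sketch, using Tpool as an example):


    $ sudo zpool export Tpool
    $ sudo zpool import -d /dev/disk/by-id Tpool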

  • Just a caution for ZFS users: I'm copying a large amount of data (1.5 TB = my whole data disk: lots of photos, documents, etc.).


    Copying from a Windows PC to the NAS is really superb, but memory usage grows little by little, so defining good zfs_arc_max and zfs_arc_min values is really useful to avoid kernel panics due to RAM exhaustion.
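
    For reference, on ZFS on Linux these limits are usually set as module parameters, e.g. in /etc/modprobe.d/zfs.conf; the values below are only an example for an 8 GB system (the zfs_arc_min figure is a guess of mine), and a reboot or module reload is needed before they apply:


    Code
    # /etc/modprobe.d/zfs.conf
    options zfs zfs_arc_max=3221225472 zfs_arc_min=1073741824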


    I use zfs_arc_max=3221225472 (= 3 GB) for an 8 GB RAM system, and this is my RAM usage over roughly one hour:



    This memory usage is due to my intensive data copying:




    I posted about RAM tuning for ZFS in: [HOWTO] Instal ZFS-Plugin & use ZFS on OMV
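
    To keep an eye on the ARC while copying, the current size and the configured limits can be read from the kstat file exposed by ZFS on Linux:


    $ awk '$1 == "size" || $1 == "c_min" || $1 == "c_max"' /proc/spl/kstat/zfs/arcstats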
