Ok, finally works.
Thanks a lot.
Finally I installed the ZFS plugin, and I understand that the problems were not related to the ZFS plugin; perhaps there was some corruption in the download of one package.
So the ZFS plugin is good to go public, though I'm not sure whether in the testing repo or directly in the ZFS repo.
My next test is to work out how to create the pool I need.
I have 8x3TB disks and 2x1TB disks.
I want to create a mirror pool with my 2x1TB disks, and a more complex RAIDZ1 pool: 4x3TB in RAIDZ1 + 4x3TB in RAIDZ1, because my plan is to grow with another 4x3TB RAIDZ1 vdev in the near future.
How can I do it from the WebGUI?
I use these options from the WebGUI to create a mirror:
But what do I need to do to create a pool with two vdevs, each vdev made from 4x3TB disks?
OK, some experience from creating my first mirror pool:
I created TPool with the default options and wanted to use the /dev/sdj & /dev/sdk names, but it was actually created using the by-path route. Not a problem, I can still delete and recreate it.
The same applies to the mount path: it is created in the root "/", not in /media or /mnt. Same answer, I only need to delete and recreate it using appropriate values.
Last question: how do I mount it?
And finally I see that the pool was created using 512-byte sectors, not 4K:
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Fri Feb 13 15:30:36 2015 from raul
root@RNAS:~# zpool status
  pool: TPool
 state: ONLINE
  scan: none requested
config:

	NAME                               STATE     READ WRITE CKSUM
	TPool                              ONLINE       0     0     0
	  mirror-0                         ONLINE       0     0     0
	    pci-0000:04:00.0-scsi-1:0:0:0  ONLINE       0     0     0
	    pci-0000:04:00.0-scsi-1:0:1:0  ONLINE       0     0     0

errors: No known data errors
root@RNAS:~# zdb -C TPool | grep ashift
            ashift: 9
root@RNAS:~#
I tested creating the pool from the shell, and it works fine:
root@RNAS:~# zpool create TPool mirror /dev/sdj /dev/sdk
root@RNAS:~# zpool status
  pool: TPool
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	TPool       ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    sdj     ONLINE       0     0     0
	    sdk     ONLINE       0     0     0

errors: No known data errors
root@RNAS:~# zdb -C TPool | grep ashift
            ashift: 9
root@RNAS:~#
But after a reboot my pool changed to:
Last login: Fri Feb 13 18:24:32 2015 from raul
root@RNAS:~# zpool status
  pool: TPool
 state: ONLINE
  scan: none requested
config:

	NAME                                    STATE     READ WRITE CKSUM
	TPool                                   ONLINE       0     0     0
	  mirror-0                              ONLINE       0     0     0
	    ata-SAMSUNG_HD103UJ_S1PVJDWQ717227  ONLINE       0     0     0
	    ata-SAMSUNG_HD103UJ_S1PVJDWQ716436  ONLINE       0     0     0

errors: No known data errors
root@RNAS:~#
You don't need to mount a pool. You just need to create shared folders on it.
Datasets and filesystems can also be used as shared folders. Just make sure you reset the permissions on them so they behave properly with Samba.
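A minimal sketch of that workflow from the shell, using the pool name from this thread; the dataset name and the "users" group are hypothetical:

```shell
# Create a dataset (filesystem) on the pool; ZFS auto-mounts it
# under the pool's mountpoint, e.g. /TPool/Data.
zfs create TPool/Data

# Reset ownership and permissions so Samba users can read/write.
# "users" is a placeholder - use whatever group your Samba users
# actually belong to.
chown root:users /TPool/Data
chmod 0775 /TPool/Data
```

The shared folder in the OMV GUI then just points at this dataset's path.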
Quote: "And finally I see that the pool was created using 512-byte sectors, not 4K."
You specify the ashift value when you create the pool (there is a setting for this in the GUI). The default is a 512-byte sector size, as you noticed, but it's possible to change this to 4K.
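For reference, ashift is the base-2 logarithm of the sector size, so a 4K-aligned version of the mirror created earlier in this thread would look like this (requires root and ZFS on Linux installed):

```shell
# ashift=9  -> 2^9  = 512-byte sectors (the default seen above)
# ashift=12 -> 2^12 = 4096-byte (4K) sectors
zpool create -o ashift=12 TPool mirror /dev/sdj /dev/sdk
zdb -C TPool | grep ashift    # should now report: ashift: 12
```

ashift is fixed per vdev at creation time, so it cannot be changed afterwards without destroying and recreating the pool.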
Quote: "But after a reboot my pool changed to:" (the zpool status output above, with the mirror's disks listed by their /dev/disk/by-id names)
I don't think it's the plugin that changes this for you, but rather the ZoL implementation itself...
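That matches how import works: when the pool is re-imported (for example at boot), the devices are recorded under whatever names were scanned. If you want to pick the naming yourself, a hedged sketch using the pool from this thread:

```shell
# Export the pool, then re-import it telling ZFS which directory
# of device nodes to scan. /dev/disk/by-id gives stable names
# that survive reboots and controller reordering.
zpool export TPool
zpool import -d /dev/disk/by-id TPool
zpool status TPool    # devices now listed by their by-id names
```

The by-id names (like the ata-SAMSUNG_... ones above) are generally preferable to sdX names, which can shuffle between boots.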
Quote: "Also datasets or filesystems can be used as shared folders. You just make sure you reset the permissions on them so they behave properly with Samba."
OK, but the zfs plugin still needs some work; other plugins still do not recognize shared folders on ZFS, like omv-backup.
Others like TFTP work fine and recognize them:
One more question: why does TPool/Data not show a size?
Another question: how do I create from the WebGUI something like what I create from the shell:
zpool create RPool raidz1 /dev/sdb /dev/sdc /dev/sdd /dev/sde raidz1 /dev/sdf /dev/sdg /dev/sdh /dev/sdi
This is 4x3TB disks in one vdev + 4x3TB disks in another vdev, all together forming RPool.
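For the planned future growth, a third RAIDZ1 vdev can be attached to the existing pool with zpool add; a sketch with hypothetical device names for the future disks:

```shell
# Add another 4-disk RAIDZ1 vdev to the existing RPool.
# /dev/sdl../sdo are placeholder names for the future 3TB disks.
# Double-check the device list first: a vdev cannot be removed
# from a RAIDZ pool once added.
zpool add RPool raidz1 /dev/sdl /dev/sdm /dev/sdn /dev/sdo
zpool status RPool    # should show raidz1-0, raidz1-1 and the new raidz1-2
```

ZFS will then stripe new writes across all three vdevs.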
And a last question: how do I share the Axis volume via SAMBA?
Quote: "You specify the ashift value when you create the pool (there is a setting for this in the GUI). Default is 512b sector size as you noticed, but it's possible to change this to 4k."
OK, no problem with this, but add a line to state the possible values, e.g. (512 or 4K), or (9 or 12).
Making a share from ZFS works for me. You have to create a filesystem first. The plugin and the OMV core are ready for this. I don't know why it doesn't work for you.
Size is a complicated matter of discussion in advanced file systems like ZFS and btrfs. You'll have to read more about it.
Axis is a snapshot. Again, back to the ZFS manual.
Sorry, Axis is not a snapshot, it is a volume. It is a software block device, ready for iSCSI export, for example.
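A volume (zvol) is created with the -V flag; a sketch using the pool name from this thread and a hypothetical 100G size:

```shell
# Create a 100G volume named Axis on TPool. It appears as a
# block device (/dev/zdN, with a /dev/zvol/TPool/Axis symlink)
# and can be exported over iSCSI or formatted directly.
zfs create -V 100G TPool/Axis
ls -l /dev/zvol/TPool/Axis
```

Unlike a dataset, a zvol has no mountpoint of its own; it behaves like a raw disk.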
OK, I never did this before in N4F or FreeNAS, but I'll try it here: I'll try to use the Axis volume in Filesystems, format it as an Ext4 filesystem and then mount it.
Not really sure whether it is useful or not to format & mount ZFS volume devices.
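A hedged sketch of that experiment from the shell (the device path assumes ZoL's standard zvol naming and the volume name from this thread; the mountpoint is hypothetical):

```shell
# Format the zvol with ext4 and mount it. The /dev/zvol path is
# preferable to /dev/zd0, which can change between boots.
mkfs.ext4 /dev/zvol/TPool/Axis
mkdir -p /mnt/axis
mount /dev/zvol/TPool/Axis /mnt/axis
```

Note that stacking ext4 on a zvol loses most ZFS filesystem features for that data (snapshots still work at the block level, but not per-file).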
Quote: "Making a share from ZFS works for me. You have to create a filesystem first. The plugin and the OMV core are ready for this. I don't know why it doesn't work for you.
Size is a complicated matter of discussion in advanced file systems like ZFS and btrfs. You'll have to read more about it.
Axis is a snapshot. Again, back to the ZFS manual."
It finally works for me, I can share several folders:
openmediavault-backup (system backup tab) doesn't use shared folders. It uses volumes and it won't see zfs volumes.
Quote: "openmediavault-backup (system backup tab) doesn't use shared folders. It uses volumes and it won't see zfs volumes."
Bad news.
Is it possible in the future to mount a ZFS volume, or to use some kind of softlink to simulate a volume using ZFS?
Perhaps something like:
I will look at what backup needs to see zfs.
Quote: "I will look at what backup needs to see zfs."
Aaron, the backup plugin doesn't need anything. Back in the development days Volker implemented all the features needed to integrate ZFS as block devices and filesystems. Even I was surprised when I started testing iSCSI with zvols.
I can show you some screenshots.
How do you see /dev/zd0? I'll try to format my ZFS volume, but it does not work.
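To check whether the zvol's device node exists at all, a quick sketch:

```shell
# zvols show up as /dev/zdN plus symlinks under /dev/zvol/<pool>/<name>.
ls -l /dev/zd0 /dev/zvol/*/* 2>/dev/null
# If nothing is listed, the volume was not created, or udev has
# not generated the links yet (try: udevadm trigger).
```

If the node exists but formatting still fails, the error message from mkfs would be the next thing to look at.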