Either the other folders lie outside the first 16 TB, or there are ACLs causing problems. I can't think of any other way to get "permission denied" on those files while still allowing root to write.
SnapRaid & AUFS Permissions problems
-
- OMV 1.0
- RustyPitchfork
-
-
I just did a "setfacl -b -R /media/69c31ed2-5b0a-4856-89bb-00801e23ac35/" which didn't seem to fix things... I'll try the resized AUFS as soon as I can.
-
Also post the results of cat /proc/mounts
-
Well, from an initial test, it looks like the problem is possibly the 16TB issue. Which is a big bummer.
-
Also post the results of cat /proc/mounts
mediaman@RAGE:/root$ cat /proc/mounts
rootfs / rootfs rw 0 0
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
udev /dev devtmpfs rw,relatime,size=10240k,nr_inodes=255967,mode=755 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,nosuid,noexec,relatime,size=206116k,mode=755 0 0
/dev/disk/by-uuid/d8b25905-a7f1-4763-b693-70a780e639f7 / ext4 rw,relatime,errors=remount-ro,user_xattr,barrier=1,data=ordered 0 0
tmpfs /run/lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k 0 0
tmpfs /run/shm tmpfs rw,nosuid,nodev,noexec,relatime,size=558440k 0 0
tmpfs /tmp tmpfs rw,relatime 0 0
/dev/sdb1 /media/c2cf1386-a3eb-41da-841d-8221c75f81e8 ext4 rw,noexec,relatime,user_xattr,barrier=1,data=ordered,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group 0 0
/dev/sdc1 /media/140c023b-5657-4d05-85e8-29b49b344c3e ext4 rw,noexec,relatime,user_xattr,barrier=1,data=ordered,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group 0 0
/dev/sdd1 /media/d6633415-d2cd-4006-84bb-75ed80019838 ext4 rw,noexec,relatime,user_xattr,barrier=1,data=ordered,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group 0 0
/dev/sde1 /media/ab9e2a7c-3482-44e3-bd20-d21fcd03d8ce ext4 rw,noexec,relatime,user_xattr,barrier=1,data=ordered,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group 0 0
/dev/sdh1 /media/09fac8e2-2485-4e47-b85c-5a2bdc05daa9 ext4 rw,noexec,relatime,user_xattr,barrier=1,data=ordered,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group 0 0
/dev/sdi1 /media/bae82e12-551c-43f0-a6d8-a295e69c0458 ext4 rw,noexec,relatime,user_xattr,barrier=1,data=ordered,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group 0 0
/dev/sdj1 /media/1b8fbcdc-5a9d-4748-9f2b-c53ec1c10ee9 ext4 rw,noexec,relatime,user_xattr,barrier=1,data=ordered,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group 0 0
/dev/sdk1 /media/2214b65f-6875-4e39-936a-b316cbeddb11 ext4 rw,noexec,relatime,user_xattr,barrier=1,data=ordered,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group 0 0
/dev/sdm1 /media/79777254-0e6d-400f-baee-38964178f868 ext4 rw,noexec,relatime,user_xattr,barrier=1,data=ordered,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group 0 0
/dev/sdn1 /media/6039acda-137c-43bd-a4d1-d397502f3c6b ext4 rw,noexec,relatime,user_xattr,barrier=1,data=ordered,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group 0 0
/dev/sdo1 /media/9b57fe5f-377b-4274-9899-1ba4d077a0dd ext4 rw,noexec,relatime,user_xattr,barrier=1,data=ordered,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group 0 0
/dev/sds1 /media/3c52939b-5cbe-4a3b-9a60-d067ced0d047 ext4 rw,noexec,relatime,user_xattr,barrier=1,data=ordered,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group 0 0
/dev/sdu1 /media/0d71efdb-ca0c-4bd9-8f7a-759ed55d0d79 ext4 rw,noexec,relatime,user_xattr,barrier=1,data=ordered,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group 0 0
/dev/sdp1 /media/9d7b9c88-7ae1-4198-867d-692de29c2a5e ext4 rw,noexec,relatime,user_xattr,barrier=1,data=ordered,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group 0 0
/dev/sdr1 /media/c95aa88e-06f4-469e-9189-5ead227ae9e8 ext4 rw,noexec,relatime,user_xattr,barrier=1,data=ordered,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group 0 0
/dev/sdt1 /media/36e50d18-9600-468d-8218-7973030d0871 ext4 rw,noexec,relatime,user_xattr,barrier=1,data=ordered,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group 0 0
/dev/sdf1 /media/b9701df9-db36-4572-a463-954657818a01 ext4 rw,noexec,relatime,user_xattr,barrier=1,data=ordered,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group 0 0
/dev/sdq1 /media/8b53d6b8-a9b8-41a0-9273-4e16b0520054 ext4 rw,noexec,relatime,user_xattr,barrier=1,data=ordered,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group 0 0
/dev/sdv1 /media/e01ae567-eb61-428c-a584-4e1c69f01b7d ext4 rw,noexec,relatime,user_xattr,barrier=1,data=ordered,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group 0 0
/dev/sdg1 /media/ae11cb02-9843-448d-a7fa-630bcbb8e4b1 ext4 rw,noexec,relatime,user_xattr,barrier=1,data=ordered,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group 0 0
/dev/sdl1 /media/d2847003-dbf9-497f-b6c3-fb43bb4deac0 ext4 rw,noexec,relatime,user_xattr,barrier=1,data=ordered,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group 0 0
rpc_pipefs /var/lib/nfs/rpc_pipefs rpc_pipefs rw,relatime 0 0
nfsd /proc/fs/nfsd nfsd rw,relatime 0 0
none /media/63e03b6a-ccaa-42f6-b0e4-e26414efddf4 aufs rw,relatime,si=72eefcd76fa19710,create=mfs,sum 0 0
none /export/Storage aufs rw,relatime,si=72eefcd76fa19710,create=mfs,sum 0 0
none /export/MediaManagement aufs rw,relatime,si=72eefcd76fa19710,create=mfs,sum 0 0
-
So I ran dumpe2fs on the drive that I know AUFS is using as the largest open drive, and it shows that the filesystem does NOT have the ext4 64bit feature enabled. It could very well be that this isn't supported without 64bit being enabled on the drives.
Code
root@RAGE:~# sudo dumpe2fs /dev/sdg1
dumpe2fs 1.42.5 (29-Jul-2012)
Filesystem volume name:   3TB5
Last mounted on:          /media/ae11cb02-9843-448d-a7fa-630bcbb8e4b1
Filesystem UUID:          ae11cb02-9843-448d-a7fa-630bcbb8e4b1
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags:         signed_directory_hash
Default mount options:    user_xattr acl
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              182845440
Block count:              731378939
Reserved block count:     0
Free blocks:              334606243
Free inodes:              182826768
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      849
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
Flex block group size:    16
Filesystem created:       Mon Dec 29 22:59:31 2014
Last mount time:          Mon Jan 12 16:29:18 2015
Last write time:          Mon Jan 12 16:29:18 2015
Mount count:              20
Maximum mount count:      -1
Last checked:             Mon Dec 29 22:59:31 2014
Check interval:           0 (<none>)
Lifetime writes:          1567 GB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      4b4daf34-6a37-4023-8e89-de98a67c1379
Journal backup:           inode blocks
Journal features:         journal_incompat_revoke
Journal size:             128M
Journal length:           32768
Journal sequence:         0x00023041
Journal start:            1
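To spot-check the same thing on every data disk, the dumpe2fs output can be scanned for the feature flag. This is a rough sketch, not an OMV tool: the device glob is an example based on the /proc/mounts output above, and it needs root to read the superblocks. Without the 64bit feature, an ext4 filesystem with 4 KiB blocks is capped at 16 TiB.

```shell
# Report whether the ext4 64bit feature is enabled on each data disk.
# The /dev/sd[b-v]1 glob is an example; adjust it to your own devices.
for dev in /dev/sd[b-v]1; do
    feats=$(dumpe2fs -h "$dev" 2>/dev/null || true)
    case "$feats" in
        *64bit*) echo "$dev: 64bit enabled" ;;
        *)       echo "$dev: 64bit NOT enabled (ext4 capped at 16 TiB without it)" ;;
    esac
done
```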
-
So here is another question... if 16TB is the problem, then why can I do everything fine with the filesystem as root?
-
Maybe a bug in aufs?? aufs isn't really supported anymore. I am really looking forward to overlayfs. overlayfs is not in the Debian kernel, so we are trying to figure out a solution. Ubuntu has patched its kernels to include overlayfs since Ubuntu 11.10, but overlayfs wasn't officially included in the mainline kernel until 3.18. I'm hoping Debian backports it to the 3.16 kernel, or that Debian Jessie's backport kernel includes it.
-
@RustyPitchfork if you have time, come to the freenode channel in my signature. Sometimes we use remote sessions to take a look at problems, etc.
-
Maybe a bug in aufs?? aufs isn't really supported anymore. I am really looking forward to overlayfs. overlayfs is not in the Debian kernel, so we are trying to figure out a solution. Ubuntu has patched its kernels to include overlayfs since Ubuntu 11.10, but overlayfs wasn't officially included in the mainline kernel until 3.18. I'm hoping Debian backports it to the 3.16 kernel, or that Debian Jessie's backport kernel includes it.
Sorry for going slightly off-topic, but may I ask what overlayfs offers compared to aufs? I couldn't find much documentation, but from what I found it seems it can only overlay a write-enabled directory tree on a read-only tree. How can you use that to pool several hard disks?
-
Union filesystems allow multiple filesystems to be combined and presented to the user as a single tree. aufs (in-kernel) and mhddfs (FUSE) are both union mount filesystems; OverlayFS is the latest one, and aufs is being phased out in its favour. The terminology and methodology differ, but the goal is the same. Overlaying a write-enabled directory on a read-only tree is just one of the uses (live CDs).
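For illustration, a minimal overlayfs mount looks like the sketch below. It assumes a 3.18+ kernel, and all paths are hypothetical placeholders (later kernels additionally allow multiple colon-separated lowerdirs):

```shell
# Hypothetical paths; overlayfs merges the trees into one view at /mnt/pool.
# upperdir receives all writes and must be on the same filesystem as workdir.
mkdir -p /mnt/pool /mnt/work
mount -t overlay overlay \
      -o lowerdir=/media/disk2,upperdir=/media/disk1,workdir=/mnt/work \
      /mnt/pool
```

Note the difference from the aufs pooling used in this thread: aufs can spread new files across branches (create=mfs), while overlayfs always writes new files to the single upperdir, so on its own it does not balance writes across disks.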
-
@RustyPitchfork if you have time, come to the freenode channel in my signature. Sometimes we use remote sessions to take a look at problems, etc.
Subzero79,
Cool! I am leaving for vacation this evening and will be back next week. I will get in touch with you then!
-
@RustyPitchfork did you get anything new on this subject? I'm in the same boat.
After reading the thread, I wanted to share a few observations.
You said repeatedly that you adjusted/reset permissions, but you didn't specify where. The permissions you see through the pool share can differ from the ones you get when accessing the same items directly on a branch.
I have a pool with three branches D1, D2, D3. I could list the permissions through the "storage" pool share, set them, and they were what I wanted. But after browsing to D3/Series/MySeries I found that the permissions there weren't right, and I couldn't modify anything.
It gets even worse if you try to move the entire parent folder: AUFS fails halfway and creates phantom zero-size folders and files that shadow the real files still sitting on the other branch. I fixed the permissions, but I still don't know how they should be set, since different services (sonarr/mediabrowser/me) work on the pool, so I would be thankful if you shared your findings.
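One way to see whether a branch really carries the permissions the pool shows is to compare ownership and modes between the two views. compare_perms below is a hypothetical helper (not an OMV or aufs tool), and the paths in the usage comment are placeholders:

```shell
# Hypothetical helper: list name, mode, owner and group for every entry
# under two trees and diff them. Identical trees produce no output and
# the function returns 0; any mismatch is printed and it returns non-zero.
compare_perms() {
    a=$1; b=$2
    ( cd "$a" && find . -exec stat -c '%n %a %U %G' {} + | sort ) > /tmp/perms_a.$$
    ( cd "$b" && find . -exec stat -c '%n %a %U %G' {} + | sort ) > /tmp/perms_b.$$
    diff /tmp/perms_a.$$ /tmp/perms_b.$$
    rc=$?
    rm -f /tmp/perms_a.$$ /tmp/perms_b.$$
    return $rc
}

# e.g. compare_perms /media/<pool-uuid>/Series /media/<branch-uuid>/Series
```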
-
I found out through another forum user last week that resetting permissions doesn't work properly on an AUFS pool.
Why? AUFS doesn't support ACLs, so if any are present they won't be displayed by ls -la /media/uuid-pool/.
So when you try to reset permissions on the aufs pool, the ACLs won't get flushed.
That user had to go through the disks one by one to reset the folders inside. On this last point I am not sure whether the POSIX permissions on the disks were different from the pool's. That would be very annoying.
-
Just create a shared folder at the root of each branch/drive and reset the permissions there.
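Sketched as a script, that per-branch reset might look like this. reset_branch is a hypothetical helper, and both the mode and the branch paths are assumptions to adapt to your own setup; the point is to run it against the branch mounts, never the pool:

```shell
# Hypothetical helper: strip every ACL entry from one aufs branch and
# reset plain POSIX modes. u=rwX,g=rwX,o=rX is just an example policy.
reset_branch() {
    if command -v setfacl >/dev/null; then
        setfacl -b -R "$1"            # remove all ACL entries
    fi
    chmod -R u=rwX,g=rwX,o=rX "$1"    # adjust to your shares
}

# Branch mount points are examples taken from the /proc/mounts output above.
for b in /media/c2cf1386-a3eb-41da-841d-8221c75f81e8 \
         /media/140c023b-5657-4d05-85e8-29b49b344c3e; do
    if [ -d "$b" ]; then
        reset_branch "$b"
    fi
done
```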