Hi there,
So I decided to upgrade to v. 1. Everything went rather flawlessly, except for a RAID5 that died, and most of the disks now look misplaced - or at least that's how it appears.
Code
root@storage00:~# blkid
/dev/md0: UUID="592adfb1-ae68-451d-a700-f4258bde0415" TYPE="ext4"
/dev/md1: UUID="9c6d6c30-aacf-4c2b-8ec1-7db3dde5b4eb" TYPE="swap"
/dev/sdd: UUID="76f03b1e-c6f6-2c7e-ee09-750bb500b2d2" LABEL="storage00:CloudShare" TYPE="linux_raid_member" UUID_SUB="10ac9062-dfa9-290b-2932-b454fd49316b"
/dev/sdc: UUID="76f03b1e-c6f6-2c7e-ee09-750bb500b2d2" LABEL="storage00:CloudShare" TYPE="linux_raid_member" UUID_SUB="1ec2b85e-c9d9-8d86-2432-f4a003f942b3"
/dev/sdg: UUID="327ce2af-1c7b-0965-9c1e-f8a7c852eb72" LABEL="storage00:bCache00" TYPE="linux_raid_member" UUID_SUB="6c2b09eb-871b-14e1-84a1-841d74b6fcc7"
/dev/sdh: UUID="327ce2af-1c7b-0965-9c1e-f8a7c852eb72" LABEL="storage00:bCache00" TYPE="linux_raid_member" UUID_SUB="435bcd6b-2228-c3f2-dc76-213940befd06"
/dev/bcache0: UUID="ec15c4f4-64d9-4fb0-a16a-4b3c5e886a27" TYPE="ext4"
/dev/sdi: UUID="c47d2ea7-1762-a811-e3c3-2a01c1c501c0" LABEL="storage00:vmShare00" TYPE="linux_raid_member" UUID_SUB="2f20bdfa-12fa-cc53-b5bb-8742568ccbc6"
/dev/sdj: UUID="c47d2ea7-1762-a811-e3c3-2a01c1c501c0" LABEL="storage00:vmShare00" TYPE="linux_raid_member" UUID_SUB="569b060e-637e-604f-a5f2-4e4665265224"
/dev/sda: UUID="76f03b1e-c6f6-2c7e-ee09-750bb500b2d2" UUID_SUB="e271d618-1567-f94e-9d81-8f6fb2f163ad" LABEL="storage00:CloudShare" TYPE="linux_raid_member"
/dev/sde1: UUID="3f7580b6-09ac-b91a-531b-5b40dd2348f2" UUID_SUB="9344d3f0-d1f3-aaff-1ca9-f8b8d664d77e" LABEL="storage00:0" TYPE="linux_raid_member"
/dev/sde5: UUID="783f3fed-4432-e1c7-3421-e9155fff7b37" UUID_SUB="96341b04-9788-6862-11e8-ea14865296b2" LABEL="storage00:1" TYPE="linux_raid_member"
/dev/sdf1: UUID="3f7580b6-09ac-b91a-531b-5b40dd2348f2" UUID_SUB="343d2e2d-f60a-9dc3-eb7b-660ad86fccc4" LABEL="storage00:0" TYPE="linux_raid_member"
/dev/sdf5: UUID="783f3fed-4432-e1c7-3421-e9155fff7b37" UUID_SUB="e3fd0e18-c64d-82fd-b767-e63c63bc6fb4" LABEL="storage00:1" TYPE="linux_raid_member"
/dev/sdb: UUID="76f03b1e-c6f6-2c7e-ee09-750bb500b2d2" UUID_SUB="934dc79b-a736-a6bd-3bac-2d313607e554" LABEL="storage00:CloudShare" TYPE="linux_raid_member"
/dev/sdl: UUID="c47d2ea7-1762-a811-e3c3-2a01c1c501c0" UUID_SUB="f3eea358-d79d-0fa5-ace8-a3dbac955fac" LABEL="storage00:vmShare00" TYPE="linux_raid_member"
/dev/sdk: UUID="c47d2ea7-1762-a811-e3c3-2a01c1c501c0" UUID_SUB="83ab4847-bcae-1a7a-4ba3-e7d07d53ff4a" LABEL="storage00:vmShare00" TYPE="linux_raid_member"
/dev/md4: LABEL="CloudShare" UUID="7fe59b53-c96a-4513-bdbd-05560e24797b" TYPE="xfs"
/dev/sdq1: LABEL="c0urierMediaTMP" UUID="52294bc4-dd5c-450d-8057-0e1dd8a4a783" TYPE="ext4"
/dev/sdr1: LABEL="Media00" UUID="76598b8a-f8dd-4ca7-9193-00c834e020a2" TYPE="xfs"
/dev/sds1: UUID="12e77943-18bc-42d5-8aca-a8c031131651" TYPE="xfs"
/dev/sdt1: UUID="c9960dbc-36ef-4c9c-b161-502173503dc4" TYPE="xfs"
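Before changing anything in the web UI, it may be worth confirming what mdadm itself sees. A read-only sketch, assuming (from the blkid output above) that the CloudShare array is /dev/md4 and that sdd is the member that dropped out:

```shell
# Read-only checks, run as root. Nothing here modifies the arrays.

# Current state of all md arrays - shows which one is degraded and
# how many members it is missing:
cat /proc/mdstat

# Compare the metadata on the "lost" disk with an active member.
# Both should report the same Array UUID (76f03b1e-...) if sdd really
# belongs to CloudShare:
mdadm --examine /dev/sdd
mdadm --examine /dev/sda

# Detailed view of the assembled array, including its member list
# and any failed/removed slots:
mdadm --detail /dev/md4
```

If `--examine` on sdd shows a different Array UUID or an event count far behind the other members, that would explain why the GUI refuses to offer it.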
Code
root@storage00:~# df -h
Filesystem Size Used Avail Use% Mounted on
rootfs 141G 8,4G 126G 7% /
udev 10M 0 10M 0% /dev
tmpfs 764M 620K 764M 1% /run
/dev/md0 141G 8,4G 126G 7% /
tmpfs 5,0M 0 5,0M 0% /run/lock
tmpfs 2,8G 0 2,8G 0% /run/shm
tmpfs 3,8G 0 3,8G 0% /tmp
/dev/bcache0 1,8T 290G 1,5T 17% /media/ec15c4f4-64d9-4fb0-a16a-4b3c5e886a27
/dev/sdm1 1,8T 381G 1,5T 21% /media/52294bc4-dd5c-450d-8057-0e1dd8a4a783
/dev/sdo1 2,8T 1,2T 1,6T 42% /media/12e77943-18bc-42d5-8aca-a8c031131651
/dev/sdp1 2,8T 2,8T 2,1G 100% /media/c9960dbc-36ef-4c9c-b161-502173503dc4
/dev/sdn1 2,8T 302G 2,5T 11% /media/76598b8a-f8dd-4ca7-9193-00c834e020a2
/dev/bcache0 1,8T 290G 1,5T 17% /export/vmShare00
/dev/bcache0 1,8T 290G 1,5T 17% /export/REAR-backup
And a screenshot of the Disks page in OMV:
http://prntscr.com/4n19zg
Another screenshot, this time of the RAID page:
http://prntscr.com/4n1ao4
Any suggestions on how to get everything aligned again? The RAID5 was recovered, but I want to add sdd back to it, and that is not possible - it's not selectable in the Recover menu.
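In case the GUI keeps refusing the disk, re-adding it from the shell is an option. This is only a sketch under two assumptions: that /dev/md4 is the degraded CloudShare array, and that nothing still needed lives on sdd (the rebuild overwrites it) - verify both with `mdadm --examine /dev/sdd` first:

```shell
# DANGER: destroys the old RAID metadata on sdd. Only do this after
# confirming sdd is the disk you mean and its contents are expendable.
# A stale superblock is a common reason a disk is not selectable in
# recover dialogs, so clearing it first can help:
mdadm --zero-superblock /dev/sdd

# Add the disk back into the degraded array; mdadm starts the
# RAID5 rebuild automatically:
mdadm --manage /dev/md4 --add /dev/sdd

# Watch the resync progress until it completes:
watch cat /proc/mdstat
```

Whether this is appropriate depends on why sdd fell out in the first place, so treat it as a starting point, not a recipe.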