## 2013/05/06 17:11 Thread started
## 2013/05/07 00:23 Screenshots added (Mounting the first four Drives, one bad Drive, w/ old Controller)
## 2013/05/08 03:31 Added Plugins, purpose, future plans etc.
## 2013/06/XX WoL is now running
Hey guys,
I thought today is the day I'd finally start my own thread here.
As a little background: I've been using Linux at home for over 3 years now for my NAS, and I've used OMV since version 0.2.
So I'm well aware of what it means to set up SMB shares by hand, etc.
Specs today:
Core i5 650 @ 3.2 GHz
4 GB RAM
500 GB system drive
RAID controller: (LSI) 3ware 9650SE-16ML (upgraded 04/2013 from a 9650SE-8LPML)
5 x WD RED 3 TB as a hardware RAID 5 - 10.91 TiB net
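As a sanity check on the 10.91 TiB figure: RAID 5 across five drives leaves four drives' worth of net capacity, and drive vendors count in decimal TB while the OS reports binary TiB. A quick one-liner (awk chosen just for illustration):

```shell
# (5 - 1) data drives x 3 TB (decimal bytes), converted to binary TiB
awk 'BEGIN { printf "%.2f TiB\n", 4 * 3e12 / 2^40 }'
# prints "10.91 TiB"
```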
The story behind it
So, some months ago I moved from an old desktop case holding a 4x2TB RAID 5 to the Xigmatek Asgard 381.
This case provides easy access to all my drives and also room for two 120mm fans in front of all the hard drives.
After moving to the new case I ran into trouble with an unusable array, because one drive failed and another one got disconnected. Thanks to Josef from LSI support I was able to recover all my data.
My other problem was that my disk usage was already over 5TB, so I had to expand my storage.
I had been watching the prices for the WD RED 3TB for a few weeks, waiting for the point where I could order 4 drives for under 600 euros. So it was shortly before Easter that I ordered the 4 drives.
Unfortunately one drive was damaged in shipping, and the dealer refused to send a replacement right away.
At this point I didn't want to wait over the long holidays for a new drive, so I went to my local dealer and picked up one "replacement" drive.
With these 4 drives I then built a new RAID 5 and filled it with my data.
A few days later the replacement drive from the online dealer arrived, and I wondered what to do with it.
After some discussion, a friend talked me into adding the drive to the array (I had wanted to stick with 4-drive arrays before) and using the OCE (Online Capacity Expansion) feature of my RAID controller.
I already had about 5TB of data on the disks and wondered how long the "migration" would take - just over 5 days later I had an array with 10.91 TiB ;).
All I had to do then was expand the XFS filesystem (I used a GParted live CD for it) to use the newly available space.
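If the partition already spans the whole device, the filesystem step can also be done from the shell on the running system instead of a live CD. A minimal sketch, assuming the array is mounted at /media/data (a made-up mount point - adjust to your setup):

```shell
# xfs_growfs works on a *mounted* XFS filesystem and takes the mount
# point; with no size argument it grows to fill all available space.
xfs_growfs /media/data

# Verify the new size afterwards.
df -h /media/data
```

This only covers the filesystem; if the partition table also needs resizing after OCE, a tool like GParted (as used above) is still the safer route.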
Penis-o-Meter
Here are some Benchmarks of the Array:
Write:
# dd if=/dev/zero of=tempfile bs=1M count=10240
10737418240 bytes (11 GB) copied, 19.9063 s, 539 MB/s
# dd if=/dev/zero of=tempfile bs=1M count=102400
107374182400 bytes (107 GB) copied, 224.111 s, 479 MB/s
Read:
# dd if=tempfile of=/dev/zero bs=1M count=10240
10737418240 bytes (11 GB) copied, 22.4036 s, 479 MB/s
# dd if=tempfile of=/dev/zero bs=1M count=102400
107374182400 bytes (107 GB) copied, 220.974 s, 486 MB/s
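One caveat on the dd numbers: without a sync flag, dd can report optimistic write speeds, because the last chunks may still sit in the page cache when dd exits (the larger 100 GiB run is less affected, since it far exceeds the 4 GB of RAM). A variant that includes the final flush in the timing - file name and reduced size here are just for illustration:

```shell
# conv=fdatasync makes dd flush the file to disk before reporting,
# so the timing covers actual writes, not just the page cache.
dd if=/dev/zero of=tempfile bs=1M count=100 conv=fdatasync

# Clean up the test file.
rm -f tempfile
```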
hdparm:
# hdparm --direct -Tt /dev/sdb
/dev/sdb:
Timing O_DIRECT cached reads: 1168 MB in 2.00 seconds = 583.57 MB/sec
Timing O_DIRECT disk reads: 1640 MB in 3.00 seconds = 546.61 MB/sec
# hdparm -Tt /dev/sdb
/dev/sdb:
Timing cached reads: 10732 MB in 2.00 seconds = 5368.82 MB/sec
Timing buffered disk reads: 1646 MB in 3.00 seconds = 548.62 MB/sec
I'll add some more screenshots later.
Greetings
David