[Tutorial][Experimental][Third-party plugin available] Reducing OMV's disk writes, e.g. to install it on USB flash

  • I played around with fs2ram and added this to /etc/fs2ram/fs2ram.conf

    Code
    tmpfs           /var/lib/openmediavault/rrd     keep_file_content       -               tmpfs
    tmpfs           /var/spool                      keep_file_content       -               tmpfs
    tmpfs           /var/lib/rrdcached              keep_file_content       -               tmpfs
    tmpfs           /var/lib/monit                  keep_file_content       -               tmpfs
    tmpfs           /var/cache                      keep_file_content       -               tmpfs
    tmpfs           /var/lib/php5                   keep_folder_structure   -               tmpfs


    then you don't have to edit the other files and all the dirs are in RAM after a reboot:

    Code
    tmpfs on /var/log type tmpfs (rw,relatime)
    tmpfs on /var/cache type tmpfs (rw,relatime)
    tmpfs on /var/tmp type tmpfs (rw,relatime)
    tmpfs on /var/lib/openmediavault/rrd type tmpfs (rw,relatime)
    tmpfs on /var/spool type tmpfs (rw,relatime)
    tmpfs on /var/lib/rrdcached type tmpfs (rw,relatime)
    tmpfs on /var/lib/monit type tmpfs (rw,relatime)


    My system is running fine and is only on my LAN, but logfiles are important:
    I added an hourly backup of /var/log that reuses fs2ram's internal unmount function, dropped into cron.hourly/fs2ram_log_backup. So you'll lose at most 59 minutes of logfiles on a power loss.

    Bash
    #!/bin/sh
    # Hourly backup of the tmpfs-backed /var/log, reusing fs2ram's own unmount script.
    script_date="$(date +%y%m%d_%H%M)"

    # Dump the current content of /var/log into a temporary backup file.
    # Use > rather than >> so leftovers from a previously failed run are overwritten.
    /lib/fs2ram/unmount-scripts/keep_file_content /var/log > /log_backup/tmp_log_backup
    if [ $? -eq 0 ]; then
            # Replace the previous backup only if the dump succeeded.
            rm -f /log_backup/log_backup_*
            mv /log_backup/tmp_log_backup "/log_backup/log_backup_$script_date"
    fi
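
    If you save this as /etc/cron.hourly/fs2ram_log_backup, remember that cron only runs executable files and that the backup folder must exist, so (assuming those paths):

    Bash
    mkdir -p /log_backup                           # backup target used by the script
    chmod 755 /etc/cron.hourly/fs2ram_log_backup   # cron.hourly skips non-executable files
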
  • Good to see that someone liked my experiments :D Thanks for trying it out and for the cronjob script. I'm adding a link to your post in the OP.


    Some comments below:


    Quote

    then you don't have to edit the other files and all the dirs are in RAM after a reboot

    I know, I tended to be more conservative at first because I did not want to break things. fs2ram is in sid (unstable repo) after all.

    Besides, I feel it is messier to tmpfs entire folders that have other files in them. It reminds me of media-center firmware mods... they end up looking like a mess of symlinks, tmpfs mounts, and scripts jumping all over the place.


    It is less likely to break when something is updated, though, so it is probably the way to go.


    Quote

    tmpfs on /var/cache type tmpfs (rw,relatime)


    If you keep /var/cache as a tmpfs, it's better to place a script somewhere that runs
    apt-get autoclean
    every now and then to keep the size in check; otherwise the cache will eventually fill the tmpfs, and fs2ram will complain (folder too big) and refuse to mount that folder as a tmpfs at boot (see the error in dmesg).
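
    A minimal sketch of such a script (the path and the weekly schedule are just one option; adjust to taste):

    Bash
    #!/bin/sh
    # /etc/cron.weekly/apt-autoclean (hypothetical path)
    # Remove cached .deb files that can no longer be downloaded,
    # so the /var/cache tmpfs doesn't slowly fill up.
    apt-get autoclean -qq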


    My Zyxel NAS has too little RAM to keep the whole apt-get package cache in it, so I had to drop that folder from the fs2ram list.


    -----------------------------------------------
    I'm experimenting some more with fs2ram to find a way to boost performance/responsiveness by placing some OMV and system folders into a tmpfs. So far it's not really helping that much.

  • Gave this a try; it seems not to have caused problems, although symlinking did give an error about /etc/mtab not existing, I believe.
    We'll see how long this lasts on a very mediocre 4 GB USB stick that was used as an Ubuntu disk for a while. In the meantime I placed an order for an 8 GB SLC USB stick from AliExpress ($11).

  • Also, good hard drives don't need to be powered down. It is bad for them. They are designed to run 24/7. I use enterprise-class Hitachis, which are never spun down. If they make it to me in one piece, I never have an issue with them.


    Real life is a bit more subtle.
    HDDs don't like being stopped/started too often, true. But there is no universally "good" HDD.
    There are enterprise-level HDDs, which are designed to run 24/7, and there are also desktop-level HDDs, which are good for their own usage (desktop usage).
    The latter are not designed to run 24/7 for long periods. That's why it's useful to be able to power down HDDs in that situation.
    All NAS builders (QNAP, Synology, etc.) have this option (powering down HDDs).
    But back to USB keys used for the OS, and back to the enterprise world: it IS very common to have a "small" OS installed on a USB key, like hypervisors (ESX, ...). This is not because it's cheap, but because it's very easy to create and switch between two revisions, and to go back if something goes bad. Server motherboards usually have two internal USB ports for exactly that usage.


    Since USB installation is offered in OMV, analysing and reducing OMV's disk writes is a real subject.

  • OMV is not a hypervisor; it is a full OS based on Debian Wheezy. Don't lose sight of this fact. USB stick installations are not recommended for OMV, and there are warnings in various places.


    When discussing disk power management in forums, most people fail to mention which disks they are using. You cannot give advice on APM without this pertinent information, yet advice is often given before knowing what type of hard drives are in use. This is bad. I do recommend spinning drives down when I know what they are, like the WDC Greens.
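
    For anyone wanting to check before asking (or answering), a quick sketch, assuming the drive is /dev/sda and that smartmontools and hdparm are installed:

    Code
    smartctl -i /dev/sda     # show the exact drive model and firmware
    hdparm -B /dev/sda       # read the current APM level (1-254; 255 = APM off)
    hdparm -S 242 /dev/sda   # example: spin down after 1 hour idle (values 241-251 count in units of 30 min)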

  • I now have the same problem. I use an Intel NUC Kit 2820 with a 2.5" WD Red for data and a USB stick (Patriot Sparc USB 3.0 16GB) for the system. Unfortunately, the stick died within less than 2 months.


    I don't know what to do: use a new stick that does wear leveling ( http://patriotmemory.com/support/faq1p.jsp?id=228 ), use your script here, or try to get everything (system and data) onto the one 2.5" WD Red?

    Many thanks, Regards
    Torsten
    --------------------------------------------------------------------------------------------------------------------------------------
    Tyan Xeon Server S5512, Ubuntu YAVDR 12.04.3 as host, OMV 1.0 as KVM guest with network and RAID passthrough

    • Official Post

    It should be static wear leveling. I'm fairly sure that the Patriot USB stick uses dynamic wear leveling. It may be better than your previous stick, but it's still probably not what you want.


    Since your NUC doesn't allow you to put an SSD inside it, I would get an mSATA-to-USB enclosure kit and a 24 GB mSATA SSD.

    omv 7.0-32 sandworm | 64 bit | 6.5 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.9 | compose 7.0.9 | cputemp 7.0 | mergerfs 7.0.3


    omv-extras.org plugins source code and issue tracker - github


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • I didn't know the distinction between static and dynamic wear leveling existed. An external SSD is too expensive for me, and I found no parts from my distributors. For that money I could use a small external USB disk. Or better, I will try to find a way to install system and data on one hard disk.


    Edit: ah, I found the Mushkin flux... and this SSD https://www.wave-distribution.…-GB/html/product/1086867?


    Thank you


    • Official Post

    Those should work. Otherwise, install Debian and create an 8 GB partition (#1), a swap partition (#2, same size as the system's memory), and the rest as data (#3). Then install OMV:


    Code
    echo "deb http://packages.openmediavault.org/public kralizec main" > /etc/apt/sources.list.d/openmediavault.list
    apt-get update
    apt-get install openmediavault-keyring postfix
    apt-get update
    apt-get install openmediavault
    omv-initsystem
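
    For the partition layout itself, a sketch using parted (purely illustrative; it assumes the target disk is /dev/sda and 4 GB of RAM, so adjust the device and sizes):

    Code
    parted -s /dev/sda mklabel msdos
    parted -s /dev/sda mkpart primary ext4 1MiB 8GiB          # 1: 8 GB OS partition
    parted -s /dev/sda mkpart primary linux-swap 8GiB 12GiB   # 2: swap, same size as RAM
    parted -s /dev/sda mkpart primary ext4 12GiB 100%         # 3: the rest as data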


  • Time to work on the folders /var/lib/openmediavault/rrd and /var/spool.
    All this stuff goes directly into fs2ram's own config file, which looks suspiciously like fstab.


    I think you can simplify this a bit.


    If you look in this file you can spot two variables: OMV_RRDCACHED_BASEDIR and OMV_COLLECTD_RRDTOOL_GRAPH_IMGDIR. If you add those two variables to /etc/default/openmediavault and regenerate the configs for collectd and rrdcached with omv-mkconf, those files should hopefully end up at the new location.
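
    A sketch of that idea (the target paths below are placeholders; verify the variable names against your OMV version before relying on this):

    Code
    # Hypothetical example: point both variables at new locations
    echo 'OMV_RRDCACHED_BASEDIR="/tmp/rrdcached"' >> /etc/default/openmediavault
    echo 'OMV_COLLECTD_RRDTOOL_GRAPH_IMGDIR="/tmp/rrd-graphs"' >> /etc/default/openmediavault
    omv-mkconf collectd     # regenerate the collectd config
    omv-mkconf rrdcached    # regenerate the rrdcached config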

  • @bobafetthotmail Thanks for this contribution. I found it very interesting.


    Curiously, I came across this post while planning my next server deployment at work. I have on my desk a "small" HP ProLiant DL380 G7 with 128 GB of RAM that I'm preparing to put into the DPC. FYI, this server comes with both an internal USB port and an SD card slot.
    Hewlett-Packard recommends and even supplies an ESXi image for use on it. On the same HP site you can find a comprehensive article on newer SD cards explaining the benefits of running from them. ESXi is optimized for few (almost no) writes to the SD card.
    The benefits are massive. Once the server is in the datacenter, I can prepare a new SD card with system modifications at home and only go to the DPC to swap the cards. Downtime is really reduced, and can be eliminated when using nodes.


    The installation on this server may be slightly different, as I'll use Proxmox for it. At Proxmox they also didn't think following ESXi into NAND installations was a road to take, but hey, HP recommends and supplies ESXi just for this.
    One of the options for the server is running from a read-only system on the SD card; it will have a PCIe SSD plus a 1 TB SAS disk.
    /home, /var... would be relocated to a partition on the PCIe SAS.
    The entire SAS RAID will be used for VMs and DBs.
    The SSD could be fully used for swap, or the system could run from there (no SD card) without swap.
    Then I'll have a periodic dump of the entire system to the rest of the PCIe SAS, which I'll make bootable, in case I can't reach the server before something ugly happens.


    So I was just adapting Proxmox (which runs on Debian) to work more like ESXi when I came here.
    But this post was doubly timely, because it also applies to my personal OMV installation, which I run from an SSD. I was also experiencing continual early-morning server wake-ups, and let me say that I don't like giving money away to the power company. (I heard feedthepig?)
    I have been keeping an eye on the last Seagate I bought, which likes to throttle up and down continuously, and that script of yours is so helpful for my obsession.


    So thanks again, and keep contributing.

  • bobbafett - Thank you for sharing your research. This adds tremendous flexibility to OpenMediaVault.


    I'm surprised that some of the mods/seasoned members have had such a negative knee-jerk response to this. This isn't merely about being "cheap". While an SSD has more write endurance, it just delays eventual write failures. Instead of lasting 1 year, an SSD might draw that out to 4 years. Maybe it'll be obsoleted by then, but I have a D-Link NAS that's been faithfully serving files for almost that long. When I put a piece of hardware into production, I want to decide when to take it offline, not have it decide for me.


    Even with an SSD's better wear leveling, it doesn't make sense to hammer it to death. Why would I want to use it in a way that prematurely kills it? Similarly, I wouldn't want to write unnecessarily to a spinning disk either because it thrashes the heads. To add insult to injury, those writes consume IOPs (performance hit) and use more power (wallet hit). Microsoft took the stance of throwing hardware at problems and look where that got them. It's an inherently wasteful approach that leads to sloppy, bloated software. It could be argued that that's polar opposite to the spirit of Linux.


    I sincerely hope that the OpenMediaVault devs consider integrating fs2ram. It doesn't have to be on by default, but having an easy way to switch it on in the UI would be handy.

    • Official Post

    I'm surprised that some of the mods/seasoned members have had such a negative knee-jerk response to this. This isn't merely about being "cheap". While an SSD has more write endurance, it just delays eventual write failures. Instead of lasting 1 year, an SSD might draw that out to 4 years. ... When I put a piece of hardware into production, I want to decide when to take it offline, not have it decide for me.


    We try to stay with what is supported. Between that and experience, that is the reason for our response. Four years, huh? Read this. I personally experienced a microSD card and a USB stick fail in less than a week. Some hard drives don't last 4 years either.


    To add insult to injury, those writes consume IOPs (performance hit) and use more power (wallet hit).


    If you are that worried about IOPs: you have a fast processor that won't be affected by writing log/rrdcache files. Have you measured that writing a log file to RAM uses less power than writing it to an SSD? Most SSDs use very little energy even when writing (especially with the small files we are talking about).


    Microsoft took the stance of throwing hardware at problems and look where that got them. It's an inherently wasteful approach that leads to sloppy, bloated software. It could be argued that that's polar opposite to the spirit of Linux.


    Bloated??? OMV is just about as stripped down as you can make a Debian install. There is no bloat. OMV follows Debian's design principles. Yes, the packages are there, but not installed by default.


    I sincerely hope that the OpenMediaVault devs consider integrating fs2ram. It doesn't have to be on by default, but having an easy way to switch it on in the UI would be handy.


    There is only one dev - Volker. If you really want this, file a feature request here.



    We try to stay with what is supported. Between that and experience, that is the reason for our response. Four years, huh? Read this. I personally experienced a microSD card and a USB stick fail in less than a week. Some hard drives don't last 4 years either.



    Bloated??? OMV is just about as stripped down as you can make a Debian install. There is no bloat. OMV follows Debian's design principles. Yes, the packages are there, but not installed by default.


    Fair enough on the supported-configuration stance. There are limited resources and much to do. When people offer to contribute... don't look a gift horse in the mouth :)


    4 years is just a round ballpark figure on the order of magnitude of SSD longevity in my experience. Some last longer, some don't. It's meant to illustrate that such a system's typical lifespan isn't really that long compared to what I expect of an archival system. The worst part is how SSDs can fail suddenly and catastrophically as they run out of wear-leveling sectors.


    On "bloated" --- you misunderstand my point. I'm not saying that OMV is bloated, only that Microsoft's strategy of throwing hardware at a problem leads down a path that's proven to be wasteful. People love Linux because it's lean & efficient and doesn't require fat heavy power-hungry hardware to run like Windows does. Let's not ridicule those who pursue efficiency.

    • Official Post

    I have donated a LOT of time to OMV and would never "look a gift horse in the mouth". I just know how little help we get, so I try to point people in more valuable directions. I was not trying to ridicule anyone. I will say I disagree that using RAM and a USB stick is pursuing efficiency over an SSD.


    Speaking of SSDs... Four years may be what you have experienced with other OSes, but I don't believe that would be the case with OMV. Of my seven active OMV systems, none of them uses more than 2.5 GB on the OS drive. If you were using a 60 GB SSD, that would be a lot of wear-leveling sectors to fail. I'm guessing the SSD would outlast most of the other system components. Heck, my year-old Windows 7 desktop with a Samsung 840 Pro (OS on SSD, everything else on hard drive) has only written 2.84 TB. With tests showing 250 TB should be attainable even by cheap SSDs, it should last a while longer :)
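
    If you want to check the lifetime writes on your own drive, smartmontools can read the counter (a sketch; the attribute name varies by vendor, and Samsung SSDs report Total_LBAs_Written in 512-byte units):

    Code
    smartctl -A /dev/sda | grep -i total_lbas_written
    # TB written = raw value * 512 / 1024^4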


    That said, I like the idea of using RAM instead of the drive. The problem I have is that it is very difficult to use when it is not part of the core. People will try these tips and fail miserably, and then get angry when I tell them they lost their config and will have to reinstall.


    Well, I am way off topic. Let's get this thread back to the original idea. If someone comes up with a bulletproof way to implement all of this, I would gladly help make a plugin.


  • So much anger....


    I don't even care about these complaints anymore. And inaccurate ones at that. Maybe some kids should stop gaming/complaining so much and get a job. SSDs are cheap and they work great.

  • ryecoaaron -- I honestly appreciate your contributions. OMV wouldn't be what it is today without you.


    I've seen 16 GB DOMs die in 6 months on a server that does constant collectd writes. There are other things going on, including small database writes, so the utilization may be a little different. Nevertheless, since DOMs are supported, their write endurance is a real concern.


    I've been running with fs2ram for about a week. It was surprisingly simple to set up, which hopefully translates to it being easy to integrate.


    So much anger....


    I don't even care about these complaints anymore. And inaccurate ones at that. Maybe some kids should stop gaming/complaining so much and get a job. SSDs are cheap and they work great.


    Come on man, what's with the insults? Didn't we talk about this already? Your obsession with being "right" isn't particularly endearing. There are many ways to look at a problem, open your mind ;)


    There's a fable about a pot and a kettle that you might want to digest.
