after update: “monit alert -- Resource limit matched"

  • Dear All,
    I have an HP N54L Microserver, running with the OS on a USB stick inside. It also has a RAID5 with 3 HDDs in total, plus 1 additional single HDD.


    Last weekend I updated OMV from 0.5 to 1.0.28. During this update I got 3 errors (Errors were encountered while processing: postfix bsd-mailx openmediavault), which I was able to fix with "apt-get -f install". On Monday I updated to OMV 1.0.29. On Wednesday a lot of updates were installed (e.g. MiniDLNA). After that I was not able to get into my shares, but I could fix this by using “omv-mkconf fstab” (see thread: File System: Failed to mount '8c4bfe06-19ba-4d95-b11c-121829f53c85': mount: no such partition found ?).


    But I am not sure if OMV is installed properly.


    Last night I received nearly 50 “monit alert -- Resource limit matched” emails, followed by 40 emails with the subject “monit alert -- Resource limit succeeded”.


    The monit alert text is as follows:
    “Resource limit matched Service localhost
    Date: Fri, 24 Oct 2014 00:34:20
    Action: alert
    Host: OpenMediaVault.local


    Description: cpu wait usage of 98.5% matches resource limit [cpu wait usage>95.0%]


    Your faithful employee,
    Monit”
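For reference, an alert like this comes from a monit resource rule on the system check. A minimal sketch of the kind of rule that triggers it (the exact file location and threshold on an OMV box are assumptions here, and OMV regenerates its monit configuration itself, so this is for reading, not editing):

```
check system localhost
    # fires "Resource limit matched" when iowait stays above 95%,
    # and "Resource limit succeeded" once it drops back below
    if cpu usage (wait) > 95% then alert
```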


    So it looks like there is some process that slows down the whole system. How could I identify where the problem is? And how could I solve it?
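High cpu wait (iowait) usually means a process is stuck waiting on a slow or failing disk rather than burning CPU. A few commands over SSH can narrow it down; this is a generic sketch (the `sysstat` and `iotop` package names are standard Debian packages and may need installing first):

```shell
# Processes in uninterruptible sleep (state "D") are blocked waiting on I/O;
# a process stuck here during high iowait points at the slow device.
ps -eo pid,stat,comm | awk '$2 ~ /^D/ {print}'

# Per-device utilization (needs the sysstat package: apt-get install sysstat);
# a disk pegged near 100 in the %util column is the bottleneck.
command -v iostat >/dev/null && iostat -x 1 5

# Per-process I/O rates in batch mode (needs iotop: apt-get install iotop).
command -v iotop >/dev/null && iotop -b -o -n 3 || echo "iotop not installed"
```

If the blocked process is always touching the same disk, that disk (or, on this setup, the USB stick holding the OS) is the likely culprit.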


    I am neither a network specialist nor a Linux pro. The server runs without a monitor; I access it via SSH using PuTTY.


    Would the last option be a clean new installation? If I do this, what happens to my RAID5? Am I able to mount it by simply plugging in all 3 HDDs after installation? Or will I lose all the data on it?


    Any help is appreciated.

    • Official Post

    Your first problem is that you installed OMV on a USB stick. It is most likely failing. Your only option is to do a clean install. Unplug your data drives, reinstall on a hard drive or SSD (they fit nicely in the optical bay), plug your data drives back in, mount them, and recreate the shares the same way you created them the first time. Your data will still be there.

    omv 7.0.4-2 sandworm | 64 bit | 6.5 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.10 | compose 7.1.2 | k8s 7.0-6 | cputemp 7.0 | mergerfs 7.0.3


    omv-extras.org plugins source code and issue tracker - github


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • Thanks for the information. What about the capacity of the SSD: is 16 GB enough?


    I will then do a completely new install. I was not aware of problems with installing on a USB stick. A friend of mine (an IT expert) recommended putting it on a USB stick.


    My configuration is not that extensive. My only concern is the data drives, i.e. the RAID. Will OMV recognize the RAID5 after a clean installation for sure? Do I only have to replug the 3 HDDs and mount the array in the web GUI, and that's it?


    OMV will not format or rebuild the RAID5 and delete all my data?

    • Official Post

    16 GB is more than enough.


    Your friend is probably used to FreeNAS or something. Debian + OMV means too many writes (logging, etc.) for a USB stick without static wear leveling. The guides and the wiki both say not to use a USB stick.


    Yes, OMV will recognize the RAID5 array. Just mount it in the Filesystems tab and recreate the shares. OMV will not format the drives; just don't wipe them or create a new filesystem.
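To double-check that the freshly installed system sees the array before mounting it in the GUI, a few read-only commands can be run over SSH. This is a sketch; the `/dev/md0` name is an assumption (check `mdstat` for the real device), and nothing here writes to the disks:

```shell
# Kernel view of assembled md arrays; the RAID5 should appear as an md device.
cat /proc/mdstat 2>/dev/null || echo "/proc/mdstat not available here"

# Read-only detail of the array (replace /dev/md0 with the name from mdstat).
command -v mdadm >/dev/null && mdadm --detail /dev/md0

# Block devices and their filesystems; the array's filesystem should be listed.
lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT 2>/dev/null || blkid || true
```

If the array shows up in `/proc/mdstat` with all three members, mounting it from the Filesystems tab is safe.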

