Originally posted by rpmws
I have had mine also not want to come alive. I think in most cases it's fsck, but I'm not sure. Mine (both boxes) would crash and not respond to anything but pings. So far so good. I can tell you I have 80 sites on an Ensim box that does 150GB a month and has been up for 231 days.
I can beat your uptime. My Ensim box ran for 422 days before I converted it to a cPanel box. Big mistake! Anyway, I'm still looking through the logs to figure out what happened last night. The problem is MRTG was showing 10% load when it crashed, so unless my NOC rebooted my server instead of somebody else's, I can't see any other reason for it to have just crashed like that. BTW, running top all day on your 19-inch takes up quite a bit of resources.
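One thing I still want to check is wtmp, to see whether the box went through a clean shutdown or just dropped dead; a hard crash usually leaves a "crash" entry or no shutdown record at all. Just a quick sketch of what I'd run (nothing Ensim-specific):

# Show reboot/shutdown records from /var/log/wtmp
# -x also lists run level changes and shutdown entries
last -x reboot shutdown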
One strange thing I found in my kernel log after my NOC rebooted my box:
May 12 20:53:36 srv05 kernel: ext3_orphan_cleanup: deleting unreferenced inode 230025
May 12 20:53:36 srv05 kernel: ext3_orphan_cleanup: deleting unreferenced inode 2982067
May 12 20:53:36 srv05 kernel: ext3_orphan_cleanup: deleting unreferenced inode 3638083
May 12 20:53:36 srv05 kernel: ext3_orphan_cleanup: deleting unreferenced inode 2982066
May 12 20:53:36 srv05 kernel: ext3_orphan_cleanup: deleting unreferenced inode 2490396
May 12 20:53:36 srv05 kernel: ext3_orphan_cleanup: deleting unreferenced inode 2212488
May 12 20:53:36 srv05 kernel: ext3_orphan_cleanup: deleting unreferenced inode 4325697
May 12 20:53:36 srv05 kernel: ext3_orphan_cleanup: deleting unreferenced inode 2343708
May 12 20:53:36 srv05 kernel: ext3_orphan_cleanup: deleting unreferenced inode 721728
May 12 20:53:36 srv05 kernel: ext3_orphan_cleanup: deleting unreferenced inode 1966626
May 12 20:53:36 srv05 kernel: ext3_orphan_cleanup: deleting unreferenced inode 1737206
May 12 20:53:36 srv05 kernel: ext3_orphan_cleanup: deleting unreferenced inode 327774
May 12 20:53:36 srv05 kernel: ext3_orphan_cleanup: deleting unreferenced inode 213331
May 12 20:53:36 srv05 kernel: ext3_orphan_cleanup: deleting unreferenced inode 213320
and
May 13 01:42:00 srv05 kernel: VFS: find_free_dqentry(): Data block full but it shouldn't.
May 13 01:42:00 srv05 kernel: VFS: Error -5 occured while creating quota.
May 13 04:45:34 srv05 kernel: VFS: Quota for id 32130 referenced but not present.
May 13 04:45:34 srv05 kernel: VFS: Can't read quota structure for id 32130.
May 13 04:55:39 srv05 kernel: VFS: Quota for id 32143 referenced but not present.
May 13 04:55:39 srv05 kernel: VFS: Can't read quota structure for id 32143.
May 13 04:55:40 srv05 kernel: VFS: Quota for id 32131 referenced but not present.
May 13 04:55:40 srv05 kernel: VFS: Can't read quota structure for id 32131.
May 13 06:02:23 srv05 kernel: VFS: Quota for id 32139 referenced but not present.
May 13 06:02:23 srv05 kernel: VFS: Can't read quota structure for id 32139.
May 13 06:02:26 srv05 kernel: VFS: Quota for id 32140 referenced but not present.
May 13 06:02:26 srv05 kernel: VFS: Can't read quota structure for id 32140.
What is all this? I see these entries in the logs as part of the server's initial boot process. It looks like some kind of quota problem, but I don't believe it was the reason for the crash. Since the system went down so abruptly, I think these entries are a result of the crash rather than its cause.
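From what I can tell, the ext3_orphan_cleanup lines are just the journal recovery at mount time deleting inodes that were still open but unlinked when the box went down, and the VFS quota errors suggest the on-disk quota files got out of sync in the crash. Here's a rough sketch of how I plan to rebuild them, assuming the affected filesystem is /home and the standard Linux quota tools are installed (the mount point is just a placeholder for whatever partition the errors are on):

# Disable quotas on the affected filesystem (placeholder mount point)
quotaoff /home

# Rescan the filesystem and rebuild the user and group quota files
# -u: user quotas, -g: group quotas, -m: don't try to remount read-only first
quotacheck -ugm /home

# Re-enable quotas once the files are rebuilt
quotaon /home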