The Community Forums

Interact with an entire community of cPanel & WHM users!

/dev/vda1 getting filled up

Discussion in 'General Discussion' started by thesmahesh, Sep 26, 2015.

  1. thesmahesh

    thesmahesh Member

    Joined:
    Sep 26, 2015
    Messages:
    10
    Likes Received:
    0
    Trophy Points:
    1
    Location:
    India
    cPanel Access Level:
    Website Owner
    Dear All,
    I have a VPS with cPanel 11.5 installed on a CentOS 7 machine.

    df -h gives me this:
    Code:
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/mapper/cos-root  2.0T   58G  1.9T   3% /
    devtmpfs               15G     0   15G   0% /dev
    tmpfs                  15G  666M   14G   5% /dev/shm
    tmpfs                  15G  849M   14G   6% /run
    tmpfs                  15G     0   15G   0% /sys/fs/cgroup
    /dev/vda1             243M  210M   21M  92% /boot

    But I frequently get an alert from cPanel stating:
    "The filesystem “/dev/vda1”, which is mounted at “/boot”, has reached “warn” status because it is 86.5% full"
    Please help me get this resolved.

    Thanks and Regards
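[Editor's note: a quick way to see what is actually consuming /boot is `du` sorted by size. The sketch below is self-contained, so a temp directory with dummy files stands in for /boot; the file names are made up for illustration. On the server you would run the commented one-liner against /boot itself.]

```shell
# Sketch: list the largest items under a directory, biggest first.
# On the real server: du -xsk /boot/* | sort -rn | head -n 10
dir=$(mktemp -d)
# Dummy files standing in for a large initramfs and a small config file:
dd if=/dev/zero of="$dir/initramfs-example.img" bs=1024 count=64 2>/dev/null
dd if=/dev/zero of="$dir/config-example" bs=1024 count=1 2>/dev/null
# Sizes in KiB, largest first; the biggest consumer tops the list.
largest=$(du -sk "$dir"/* | sort -rn | head -n 1)
echo "$largest"
rm -rf "$dir"
```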
     
  2. Jcats

    Jcats Well-Known Member

    Joined:
    May 25, 2011
    Messages:
    275
    Likes Received:
    31
    Trophy Points:
    28
    Location:
    New Jersey
    cPanel Access Level:
    DataCenter Provider
    /boot stores your kernel images; you most likely have a lot of older kernel images in there that are no longer necessary.

    What does this show:
    Code:
    # ls -lah /boot/
    Can you also do:
    Code:
    # uname -a
     
    #2 Jcats, Sep 26, 2015
    Last edited: Sep 26, 2015
  3. thesmahesh

    thesmahesh Member

    Thanks for the reply. Please help

    Code:
    root@server [~]# ls -lah /boot/
    total 200M
    dr-xr-xr-x.  5 root root 3.0K Sep 25 15:34 ./
    dr-xr-xr-x. 20 root root 4.0K Sep 24 06:10 ../
    -rw-r--r--   1 root root 121K Aug  6 03:15 config-3.10.0-229.11.1.el7.x86_64
    -rw-r--r--   1 root root 121K Sep 15 17:14 config-3.10.0-229.14.1.el7.x86_64
    -rw-r--r--   1 root root 121K Jun 24 00:15 config-3.10.0-229.7.2.el7.x86_64
    -rw-r--r--.  1 root root 121K Mar  6  2015 config-3.10.0-229.el7.x86_64
    drwxr-xr-x.  2 root root 1.0K Jul 15 10:03 grub/
    drwxr-xr-x.  6 root root 1.0K Sep 25 15:34 grub2/
    -rw-r--r--.  1 root root  39M Jul 15 09:55 initramfs-0-rescue-7d6225e3e2594f369d47e539de05d237.img
    -rw-r--r--   1 root root  18M Aug  8 12:58 initramfs-3.10.0-229.11.1.el7.x86_64.img
    -rw-r--r--   1 root root  18M Aug  8 14:32 initramfs-3.10.0-229.11.1.el7.x86_64kdump.img
    -rw-r--r--   1 root root  18M Sep 25 15:34 initramfs-3.10.0-229.14.1.el7.x86_64.img
    -rw-r--r--   1 root root  18M Jul 15 10:04 initramfs-3.10.0-229.7.2.el7.x86_64.img
    -rw-r--r--   1 root root  18M Jul 15 11:03 initramfs-3.10.0-229.7.2.el7.x86_64kdump.img
    -rw-r--r--.  1 root root  18M Jul 15 09:55 initramfs-3.10.0-229.el7.x86_64.img
    -rw-r--r--   1 root root  18M Jul 19 21:18 initramfs-3.10.0-229.el7.x86_64kdump.img
    -rw-r--r--.  1 root root 576K Jul 15 09:53 initrd-plymouth.img
    drwx------.  2 root root  12K Jul 15 09:51 lost+found/
    -rw-r--r--   1 root root 235K Aug  6 03:17 symvers-3.10.0-229.11.1.el7.x86_64.gz
    -rw-r--r--   1 root root 235K Sep 15 17:16 symvers-3.10.0-229.14.1.el7.x86_64.gz
    -rw-r--r--   1 root root 235K Jun 24 00:17 symvers-3.10.0-229.7.2.el7.x86_64.gz
    -rw-r--r--.  1 root root 235K Mar  6  2015 symvers-3.10.0-229.el7.x86_64.gz
    -rw-------   1 root root 2.8M Aug  6 03:15 System.map-3.10.0-229.11.1.el7.x86_64
    -rw-------   1 root root 2.8M Sep 15 17:14 System.map-3.10.0-229.14.1.el7.x86_64
    -rw-------   1 root root 2.8M Jun 24 00:15 System.map-3.10.0-229.7.2.el7.x86_64
    -rw-------.  1 root root 2.8M Mar  6  2015 System.map-3.10.0-229.el7.x86_64
    -rwxr-xr-x.  1 root root 4.8M Jul 15 09:55 vmlinuz-0-rescue-7d6225e3e2594f369d47e539de05d237*
    -rwxr-xr-x   1 root root 4.8M Aug  6 03:15 vmlinuz-3.10.0-229.11.1.el7.x86_64*
    -rw-r--r--   1 root root  171 Aug  6 03:15 .vmlinuz-3.10.0-229.11.1.el7.x86_64.hmac
    -rwxr-xr-x   1 root root 4.8M Sep 15 17:14 vmlinuz-3.10.0-229.14.1.el7.x86_64*
    -rw-r--r--   1 root root  171 Sep 15 17:14 .vmlinuz-3.10.0-229.14.1.el7.x86_64.hmac
    -rwxr-xr-x   1 root root 4.8M Jun 24 00:15 vmlinuz-3.10.0-229.7.2.el7.x86_64*
    -rw-r--r--   1 root root  170 Jun 24 00:15 .vmlinuz-3.10.0-229.7.2.el7.x86_64.hmac
    -rwxr-xr-x.  1 root root 4.8M Mar  6  2015 vmlinuz-3.10.0-229.el7.x86_64*
    -rw-r--r--.  1 root root  166 Mar  6  2015 .vmlinuz-3.10.0-229.el7.x86_64.hmac

    "uname -a" gives me this:
    Code:
    3.10.0-229.11.1.el7.x86_64
    
    Code:
    root@server [/]# rpm -qa |grep kernel
    kernel-3.10.0-229.7.2.el7.x86_64
    kernel-tools-libs-3.10.0-229.14.1.el7.x86_64
    kernel-3.10.0-229.14.1.el7.x86_64
    kernel-devel-3.10.0-229.7.2.el7.x86_64
    kernel-devel-3.10.0-229.11.1.el7.x86_64
    kernel-tools-3.10.0-229.14.1.el7.x86_64
    kernel-devel-3.10.0-229.14.1.el7.x86_64
    kernel-3.10.0-229.el7.x86_64
    kernel-3.10.0-229.11.1.el7.x86_64
    kernel-devel-3.10.0-229.el7.x86_64
    kernel-headers-3.10.0-229.14.1.el7.x86_64
    
     
  4. Jcats

    Jcats Well-Known Member

    You are running 3.10.0-229.11.1 but you have 3.10.0-229.14.1 installed.

    I would boot into the newest kernel, then you can remove the older kernels

    Once you have confirmed you are running the latest kernel, check
    Code:
    # rpm -qa |grep kernel
    again. If you see any older ones, use
    Code:
    rpm -e kernel-3.10.0-229.11.1.el7.x86_64
    as an example to remove the old ones.
    You can also remove any leftover images in /boot to clean it up.

    The only other option is to make that partition bigger, or to remove it so /boot uses the space on /.
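[Editor's note: the removal flow above can be sketched as a short shell snippet. The running kernel and package list are hard-coded from this thread so the logic can be shown standalone; on the server they would come from `uname -r` and `rpm -qa` as shown in the comments, and removal itself is the `rpm -e` step, run only after rebooting into the newest kernel.]

```shell
#!/bin/sh
# Running kernel (normally: running=$(uname -r)); value taken from this thread:
running="3.10.0-229.14.1.el7.x86_64"

# Installed kernel packages (normally: rpm -qa | grep '^kernel-[0-9]'):
installed="kernel-3.10.0-229.el7.x86_64
kernel-3.10.0-229.7.2.el7.x86_64
kernel-3.10.0-229.11.1.el7.x86_64
kernel-3.10.0-229.14.1.el7.x86_64"

# Every installed kernel except the running one is a removal candidate.
candidates=$(printf '%s\n' "$installed" | grep -v "kernel-$running")
printf '%s\n' "$candidates"

# Each candidate would then be removed with, for example:
#   rpm -e kernel-3.10.0-229.11.1.el7.x86_64
```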
     
    thesmahesh likes this.
  5. thesmahesh

    thesmahesh Member

    Hello Jcats,
    Thanks again for replying.
    Sorry for the newbie question:
    "I would boot into the newest kernel, then you can remove the older kernels"
    Can you please let me know how to do that?
    Thanks and Regards
     
  6. Jcats

    Jcats Well-Known Member

    Code:
    # cat /etc/grub.conf
    Make sure the latest kernel is at the top of the list, then reboot; you will boot into the latest kernel. You can also hit any key during the splash screen while it is rebooting (it shows a countdown), but you would need console/KVM access to see it.
     
  7. thesmahesh

    thesmahesh Member

    Code:
    root@server [/]# cat /etc/grub.conf
    cat: /etc/grub.conf: No such file or directory
    
    Please help. Thanks and Regards
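[Editor's note: the missing file is expected — CentOS 7 ships GRUB 2, so the legacy /etc/grub.conf no longer exists; the generated config lives at /boot/grub2/grub.cfg, and the default kernel is managed with the grub2/grubby tools shown in the comments below. The runnable part only demonstrates picking the newest version with `sort -V`, using the update versions seen in this thread.]

```shell
# On CentOS 7 (GRUB 2), the relevant commands are:
#   grubby --default-kernel                    # show which kernel boots by default
#   grub2-set-default 0                        # make menu entry 0 the default
#   grub2-mkconfig -o /boot/grub2/grub.cfg     # regenerate the config
# then reboot.

# Picking the newest kernel from a list of versions can be done with sort -V,
# e.g. with the updated kernel versions from this thread:
newest=$(printf '%s\n' \
    3.10.0-229.7.2.el7 3.10.0-229.11.1.el7 3.10.0-229.14.1.el7 \
    | sort -V | tail -n 1)
echo "$newest"
```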
     
  8. cPanelMichael

    cPanelMichael Forums Analyst
    Staff Member

    Joined:
    Apr 11, 2011
    Messages:
    30,675
    Likes Received:
    648
    Trophy Points:
    113
    cPanel Access Level:
    Root Administrator
    Hello :)

    Typically, you can simply reboot the server to ensure it's using the latest kernel. Once you do this, the following thread explains how to safely remove the older kernels:

    https://forums.cpanel.net/threads/clean-boot-partition.146889/#post624373

    However, this is a VPS, so I suggest consulting with your VPS provider first to see how they suggest removing old kernels from the /boot partition as a precaution.

    Thank you.
     
    thesmahesh likes this.
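[Editor's note: as a preventive follow-up, not from the linked thread: `installonly_limit` is a stock yum option on CentOS 7 that caps how many kernels yum keeps installed, so /boot stops accumulating old ones. A config sketch:]

```
# excerpt from /etc/yum.conf
[main]
installonly_limit=2
```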